Reasoning models are trained and tuned for tasks requiring, well, reasoning. Not helpful, I know.
They do well with tasks that require logic and with problems that are complex. In this case, complex means the problem takes multiple steps to solve, like a math word problem or a plan where one decision depends on another.
Most of the major AI chatbot tools have at least one reasoning model - ChatGPT, DeepSeek, Gemini, and so on.
Reasoning models break a problem down into logical steps and then work through those steps one at a time - chain-of-thought, if you're familiar with the term. Some models show you this process (and some even pause for your approval before continuing), while others hide it from you entirely.
Reasoning models do a really good job when they're given the right kind of problem to work on. If you aren't sure whether yours qualifies, start with a general-purpose model, and if the answers aren't good enough, escalate to a reasoning model.
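If you're reaching these models through an API rather than the chat apps, that same "start cheap, escalate if needed" advice translates into a few lines of code. Here's a minimal sketch in Python, assuming the OpenAI SDK and a couple of placeholder model names - swap in whichever general-purpose and reasoning models your provider actually offers:

```python
# A rough sketch of the "escalate if needed" idea, using the OpenAI Python SDK.
# The model names below are assumptions - use whatever your provider offers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str, good_enough) -> str:
    """Try a general-purpose model first; fall back to a reasoning model."""
    first = client.chat.completions.create(
        model="gpt-4o",  # general-purpose model (assumed name)
        messages=[{"role": "user", "content": question}],
    )
    answer = first.choices[0].message.content
    if good_enough(answer):
        return answer

    # The answer didn't pass muster, so hand the same question to a reasoning model.
    second = client.chat.completions.create(
        model="o3-mini",  # reasoning model (assumed name)
        messages=[{"role": "user", "content": question}],
    )
    return second.choices[0].message.content


# You decide what "good enough" means - here it's just a crude length check.
print(ask(
    "Plan a three-city trip on a fixed budget without repeating an airline.",
    good_enough=lambda a: a is not None and len(a) > 200,
))
```

The interesting part isn't the API calls, it's the `good_enough` check: you're the judge of whether the cheaper model's answer holds up before you spend the extra time (and money) on a reasoning model.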