Reasoning models are trained and tuned for tasks requiring, well, reasoning. Not helpful, I know.
They do well with tasks that require logic and with problems that are complex. In this case, complex means the problem takes multiple steps to solve.
Most of the major AI chatbot tools have at least one reasoning model - ChatGPT, DeepSeek, Gemini, etc.
Reasoning models break a problem down into logical steps and then work through those steps one at a time - chain-of-thought, if you’re familiar with the term. Some models show you this process (and some even pause and wait for your approval), while others hide it from you entirely.
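Here’s a minimal sketch of what that looks like in practice: the same multi-step question sent to a general-purpose model and to a reasoning model. It assumes the OpenAI Python SDK, and the model names and the `reasoning_effort` parameter are illustrative - swap in whatever your provider actually offers.

```python
# A minimal sketch: the same multi-step question asked of a general-purpose
# model and a reasoning model. Assumes the OpenAI Python SDK; model names
# and parameters are illustrative and vary by provider.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "A train leaves at 9:15 and travels 210 km at 70 km/h, "
    "then waits 25 minutes, then travels 90 km at 60 km/h. "
    "What time does it arrive?"
)

# General-purpose model: answers in a single pass.
general = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any general-purpose chat model
    messages=[{"role": "user", "content": question}],
)
print(general.choices[0].message.content)

# Reasoning model: spends hidden "thinking" tokens working through the steps
# (leg 1 = 3 h, wait = 25 min, leg 2 = 1.5 h) before giving the final answer.
reasoned = client.chat.completions.create(
    model="o3-mini",            # assumption: a reasoning-capable model
    reasoning_effort="medium",  # assumption: how hard it should "think"
    messages=[{"role": "user", "content": question}],
)
print(reasoned.choices[0].message.content)
```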
Pros of reasoning models
- They approach problems more like humans by breaking them down, working through them one step at a time, and exploring alternatives.
- They do better with logic problems than general-purpose models:
  - Math
  - Coding
  - Structured decision making
  - Etc
- They do better with complex problems.
- They sometimes give you a breakdown of how they are working through a problem.
Cons of reasoning models
- They take longer to generate responses
- They cost more
  - The extra “thinking” tokens they use get added to your output token costs (see the token-usage sketch after this list)
- They use more resources
  - GPU
  - Memory
  - Etc
- They are less effective at smaller, simpler tasks - they “overthink” and can hallucinate more:
  - Single-answer questions
  - Single-step tasks
  - Etc
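The cost point above is easy to see in an API response. Here’s a rough sketch of checking how many hidden reasoning tokens a reply consumed. It assumes the OpenAI Python SDK and its `usage.completion_tokens_details.reasoning_tokens` field; other providers report this differently, or not at all.

```python
# A rough sketch of inspecting reasoning-token usage. Assumes the OpenAI
# Python SDK and a reasoning-capable model; field names are assumptions
# and differ between providers.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",  # assumption: whatever reasoning model you use
    messages=[{"role": "user", "content": "Plan a 3-step migration from MySQL to Postgres."}],
)

usage = response.usage
details = usage.completion_tokens_details  # where hidden "thinking" tokens show up

print(f"prompt tokens:     {usage.prompt_tokens}")
print(f"completion tokens: {usage.completion_tokens}")  # includes reasoning tokens
if details is not None:
    # Reasoning tokens are billed as output tokens even though you never see them.
    print(f"reasoning tokens:  {details.reasoning_tokens}")
```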
Reasoning models do a really good job when they’re given the right kind of problem to work on. If you aren’t sure whether yours qualifies, start with a general-purpose model, and if the answers aren’t good enough, try a reasoning model.
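If you want to encode that “start cheap, escalate if needed” habit, here’s a minimal sketch. Everything in it is an assumption for illustration - the model names and the `is_good_enough` check are placeholders - but the shape is the point: try the general-purpose model first and only pay for reasoning when you have to.

```python
# A minimal sketch of "general-purpose first, reasoning model as fallback".
# Model names and the quality check are placeholders; adapt to your stack.
from openai import OpenAI

client = OpenAI()

GENERAL_MODEL = "gpt-4o-mini"  # assumption: cheap, fast general-purpose model
REASONING_MODEL = "o3-mini"    # assumption: slower, pricier reasoning model


def ask(model: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def is_good_enough(answer: str) -> bool:
    # Placeholder check: in practice this might be a validation step,
    # a test suite, or a human glancing at the answer.
    return bool(answer.strip()) and "I'm not sure" not in answer


def answer_with_fallback(prompt: str) -> str:
    first_try = ask(GENERAL_MODEL, prompt)
    if is_good_enough(first_try):
        return first_try
    # Escalate to the reasoning model only when the cheap answer falls short.
    return ask(REASONING_MODEL, prompt)


if __name__ == "__main__":
    print(answer_with_fallback("Schedule 6 interdependent tasks across 3 engineers."))
```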