
Reasoning

The process of an LLM working through a problem step by step before answering. Modern frontier models reason internally; some expose the reasoning trace explicitly (e.g., Claude's thinking blocks, the GPT-5.5 Thinking variant).

How it works

Reasoning models are trained to produce a chain of thought before the final answer. Internal thinking can take anywhere from 10 to over 1,000 tokens depending on problem complexity. The trade-off: better answers on hard problems versus higher latency and cost on easy ones. As of April 2026, frontier models adapt their thinking budget automatically.
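The idea of an adaptive thinking budget can be sketched as a simple routing policy: cheap signals of prompt difficulty decide how many reasoning tokens to spend. This is a toy illustration only; the heuristic, thresholds, and budget values below are assumptions, not any vendor's actual mechanism.

```python
def thinking_budget(prompt: str) -> int:
    """Pick a reasoning-token budget from rough difficulty signals.

    The signals (question marks, digits, length) and the budget tiers
    are illustrative assumptions, not a real model's policy.
    """
    signals = 0
    signals += prompt.count("?")                         # multi-part questions
    signals += sum(ch.isdigit() for ch in prompt) // 3   # numeric work
    signals += len(prompt.split()) // 20                 # long, detailed prompts
    if signals <= 1:
        return 0       # answer directly, no extended reasoning
    if signals <= 3:
        return 100     # brief chain of thought
    return 1000        # extended reasoning

# An easy factual question gets no thinking budget; a multi-step
# word problem with several quantities gets the largest one.
print(thinking_budget("What is the capital of France?"))
print(thinking_budget(
    "How many cans should I order to feed 47 people for 3 days "
    "at 2.3 cans per person per day, with a 10% buffer?"))
```

The point of the sketch is the shape of the trade-off, not the heuristic itself: spending zero tokens on easy prompts keeps latency and cost down, while hard prompts buy accuracy with a larger budget.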

Example

A math agent asked 'What's 17% of 234?' might compute directly. Asked 'How many cans should I order to feed 47 people for 3 days at 2.3 cans per person per day?', the model reasons step by step through the multiplication (47 × 3 × 2.3 = 324.3 cans), rounding up, and an ordering buffer.
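The can-ordering arithmetic above can be worked out explicitly. The 10% ordering buffer is an illustrative assumption; the original example only says a buffer is applied.

```python
import math

# The easy case: a single multiplication, no multi-step reasoning needed.
pct = 0.17 * 234                                   # 17% of 234 = 39.78

# The harder case: multiply, apply a buffer, round up to whole cans.
people, days, cans_per_person_per_day = 47, 3, 2.3
buffer = 0.10  # illustrative 10% ordering buffer (an assumption)

exact = people * days * cans_per_person_per_day    # 324.3 cans
order = math.ceil(exact * (1 + buffer))            # round up after buffer

print(round(pct, 2))    # 39.78
print(round(exact, 1))  # 324.3
print(order)            # 357 cans to order
```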


Need to actually use Reasoning?

We build production AI systems that put these concepts to work. In 30 minutes, we'll map your use case.