How it works
GPT-5.5 is a multi-model deployment: a router decides whether your query needs the standard model, the Thinking variant (longer chains of internal reasoning), or the Pro variant (highest capability, longest context). The corresponding API model IDs are gpt-5.5, gpt-5.5-thinking, and gpt-5.5-pro.
Example
A customer-facing agent uses gpt-5.5 by default for cost efficiency, escalates to gpt-5.5-thinking when a query requires multi-step reasoning, and reserves gpt-5.5-pro for the rare cases that need maximum capability.
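The escalation pattern above can be sketched as a simple client-side dispatcher. This is a minimal illustration, not any published API: the `pick_model` function, its threshold, and its keyword cues are all assumptions, and in the actual deployment the routing happens server-side.

```python
# Hypothetical sketch of the tiered routing described above.
# The thresholds and cue words are illustrative assumptions only;
# the real router is server-side and opaque to the caller.

def pick_model(query: str) -> str:
    """Choose a model ID from rough query complexity (illustrative heuristic)."""
    reasoning_cues = ("prove", "step by step", "plan", "derive", "debug")
    if len(query) > 4000:
        # Very long input: assume it needs the longest-context, top tier.
        return "gpt-5.5-pro"
    if any(cue in query.lower() for cue in reasoning_cues):
        # Signals of multi-step reasoning: escalate to the Thinking variant.
        return "gpt-5.5-thinking"
    # Default: cheapest tier for routine queries.
    return "gpt-5.5"

print(pick_model("What is the capital of France?"))     # gpt-5.5
print(pick_model("Derive the gradient step by step."))  # gpt-5.5-thinking
```

The point of the sketch is the ordering: the most expensive check (context length) gates the top tier first, then reasoning cues gate the middle tier, with the cheap default as the fall-through.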
