How it works
The LLM is shown a list of available tools with their schemas (input/output types) and descriptions. When the LLM decides to call a tool, it emits structured output naming the tool and its arguments. The runtime executes the tool and feeds the result back as a new message in the conversation.
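This flow can be sketched in a few lines of Python. The tool schema, the model's structured output, and the tool names below are all hypothetical stand-ins; real systems get the structured output back from the model's API rather than hard-coding it.

```python
import json

# Hypothetical tool schema, as it might be presented to the LLM.
TOOLS_SPEC = [
    {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def get_weather(city: str) -> dict:
    # Stand-in implementation; a real tool would call a weather API.
    return {"city": city, "temperature_c": 21}

REGISTRY = {"get_weather": get_weather}

# The LLM's structured output naming the tool and its arguments
# (hard-coded here; in practice it is parsed from the model's response).
llm_output = {"tool": "get_weather", "arguments": {"city": "Lisbon"}}

def execute_tool_call(call: dict) -> str:
    """Run the named tool and serialize its result for the conversation."""
    result = REGISTRY[call["tool"]](**call["arguments"])
    return json.dumps(result)

# The serialized result becomes a new message that the LLM sees next turn.
tool_message = {"role": "tool", "content": execute_tool_call(llm_output)}
```

The registry-plus-dispatch shape is the common pattern: the model only ever names a tool and supplies arguments, and the runtime owns execution.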
Example
A customer service agent has three tools: get_order_status, process_refund, and escalate_to_human. The LLM reads a customer message and picks the appropriate tool; the runtime executes it and feeds the result back; the LLM then either takes another action or responds to the customer.
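The loop this example describes can be sketched as below. The fake_llm function is a scripted stand-in for the model (it first requests a tool call, then answers from the result); the tool bodies and the order ID are likewise invented for illustration.

```python
import json

# Stand-in tool implementations; real ones would hit order and payment systems.
def get_order_status(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped"}

def process_refund(order_id: str) -> dict:
    return {"order_id": order_id, "refunded": True}

def escalate_to_human(summary: str) -> dict:
    return {"queued": True, "summary": summary}

TOOLS = {
    "get_order_status": get_order_status,
    "process_refund": process_refund,
    "escalate_to_human": escalate_to_human,
}

def fake_llm(messages: list) -> dict:
    """Scripted stand-in for the model.

    A real agent would send `messages` plus the tool schemas to an LLM
    and parse its structured response.
    """
    if not any(m["role"] == "tool" for m in messages):
        # No tool result yet: decide to look up the order.
        return {"tool": "get_order_status", "arguments": {"order_id": "A-1001"}}
    # A tool result is available: answer the customer.
    status = json.loads(messages[-1]["content"])["status"]
    return {"final": f"Your order is {status}."}

def run_agent(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    while True:
        decision = fake_llm(messages)
        if "final" in decision:  # the model chose to respond directly
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["arguments"])
        messages.append({"role": "tool", "content": json.dumps(result)})
```

Note the loop has exactly two exits per turn, act or answer, which mirrors the "takes another action or responds" choice in the example.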
