How it works
All major LLM providers (Anthropic, OpenAI, Google) had standardised on JSON-schema-defined function calling by 2025. The model is given a set of function schemas, decides whether to call one and which, and returns the call as structured JSON; the runtime executes the function and sends the result back to the model as a tool_result message.
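Concretely, the two halves of that exchange look roughly like this; field names follow Anthropic's API (see the example below), and the id and values are invented for illustration.

// The model's side: it returns a "tool_use" content block instead of plain text.
const toolUse = {
  type: "tool_use",
  id: "toolu_abc123",                    // call id assigned by the provider
  name: "search",                        // which declared function to call
  input: { query: "weather in Paris" },  // arguments parsed against the schema
};

// The runtime's side: it executes the call and replies with a "tool_result" block.
const toolResult = {
  type: "tool_result",
  tool_use_id: "toolu_abc123",           // links the result back to the call
  content: "18°C, light rain",
};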
Example
In the Anthropic SDK, tools are declared on the request: client.messages.create({ tools: [{ name: "search", input_schema: {...} }], ... }). When the LLM decides to search, the response contains a content block of type "tool_use" carrying the function name and the parsed arguments in its input field.
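Below is a minimal sketch of the full round trip with the TypeScript SDK. The model id, the search tool, the example prompt, and the runSearch stub are illustrative placeholders, not part of the SDK.

import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Hypothetical search backend; stands in for whatever actually executes the tool.
async function runSearch(query: string): Promise<string> {
  return `Stub results for "${query}"`;
}

// JSON-schema definition of the tool the model is allowed to call.
const searchTool: Anthropic.Tool = {
  name: "search",
  description: "Search the web and return matching results",
  input_schema: {
    type: "object",
    properties: {
      query: { type: "string", description: "The search query" },
    },
    required: ["query"],
  },
};

async function main() {
  const messages: Anthropic.MessageParam[] = [
    { role: "user", content: "What's the weather in Paris right now?" },
  ];

  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514", // illustrative model id
    max_tokens: 1024,
    tools: [searchTool],
    messages,
  });

  for (const block of response.content) {
    if (block.type !== "tool_use") continue;
    // block.name is the function name, block.input the parsed arguments.
    const { query } = block.input as { query: string };
    const result = await runSearch(query);

    // Echo the assistant turn, then hand the result back as a tool_result block.
    messages.push({ role: "assistant", content: response.content });
    messages.push({
      role: "user",
      content: [{ type: "tool_result", tool_use_id: block.id, content: result }],
    });

    const followUp = await client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      tools: [searchTool],
      messages,
    });
    console.log(followUp.content); // the model's final answer, grounded in the result
    break; // one round trip is enough for this sketch
  }
}

main();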
