Tool Use Pattern: Giving LLMs Hands and Feet
How LLMs interact with the outside world through function calls
LLMs can't actually call a weather API to answer "what's the weather today?"; they can only guess from patterns in their training data. Connect tools, and everything changes.
How It Works
- Pass the tool schema to the LLM: "you can use get_weather(city: string)"
- User: "What's the weather in Seoul?"
- LLM responds: tool_use: get_weather(city="Seoul")
- Client makes the actual API call → passes the result to the LLM
- LLM formats the result naturally: "Seoul is currently 18°C, clear."
The LLM doesn't call APIs directly. It requests "call this tool with these parameters," and the client (your code) executes the call and returns the result.
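The dispatch step above can be sketched in a few lines. Everything here is a stand-in: fake_llm_response mimics the shape of a provider's tool-use response (field names vary by provider), and get_weather is a hypothetical local function, not a real weather API.

```python
def get_weather(city: str) -> str:
    """Stand-in for a real weather API call."""
    return f"{city}: 18°C, clear"

# The client's registry of executable tools, keyed by the names in the schema.
TOOLS = {"get_weather": get_weather}

# Roughly what an LLM tool-use response looks like (exact shape varies by provider):
fake_llm_response = {
    "type": "tool_use",
    "name": "get_weather",
    "input": {"city": "Seoul"},
}

def dispatch(response: dict) -> str:
    """Execute the tool the LLM asked for; the result gets sent back to the LLM."""
    if response["type"] != "tool_use":
        return response.get("text", "")
    tool = TOOLS[response["name"]]
    return tool(**response["input"])

print(dispatch(fake_llm_response))  # Seoul: 18°C, clear
```

The key point the code makes concrete: the model only produces the structured request; the call itself happens in your process, under your control.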
Good Tool Design
Clear names: read_file, search_code, run_test are obvious from the name alone.
Detailed descriptions: LLMs decide when to use tools based on descriptions. "Reads a file" is worse than "Returns the text content of a file at the specified path. Binary files are not supported."
Minimal parameters: Keep required params minimal, set good defaults for optionals.
Useful results: Return success/failure status, error messages, and execution results in a structured form.
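The four guidelines might look like this as one concrete schema (JSON-Schema style, as used by several LLM APIs; field names and the structured-result shape here are illustrative, not any specific provider's format):

```python
# A tool schema applying the design guidelines above.
read_file_tool = {
    "name": "read_file",  # clear, verb-first name
    "description": (
        "Returns the text content of a file at the specified path. "
        "Binary files are not supported."  # detailed: tells the LLM when (not) to use it
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Path to the file to read"},
            "encoding": {"type": "string", "default": "utf-8"},  # optional, good default
        },
        "required": ["path"],  # minimal required parameters
    },
}

# A structured result makes success/failure unambiguous for the LLM:
ok_result = {"ok": True, "content": "hello world\n"}
err_result = {"ok": False, "error": "FileNotFoundError: no such file: ./missing.txt"}
```

A structured error like err_result lets the model recover (retry, ask the user, try another path) instead of hallucinating around a bare empty string.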
MCP (Model Context Protocol)
Anthropic's tool standardization protocol. Separates tool creators from tool consumers (LLMs/agents).
MCP servers provide tools, MCP clients (agents) connect and use them. Build a tool once, reuse across multiple agents.
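The separation can be illustrated conceptually in plain Python. This is not the real MCP protocol or SDK; ToolServer, list_tools, and call_tool are made-up names that only mirror the idea of one tool provider serving many agents.

```python
from typing import Callable


class ToolServer:
    """Toy stand-in for an MCP server: owns tools, knows nothing about agents."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable] = {}

    def tool(self, fn: Callable) -> Callable:
        """Register a function as a tool (decorator)."""
        self._tools[fn.__name__] = fn
        return fn

    def list_tools(self) -> list[str]:
        return sorted(self._tools)

    def call_tool(self, name: str, **kwargs):
        return self._tools[name](**kwargs)


server = ToolServer()

@server.tool
def get_weather(city: str) -> str:
    return f"{city}: 18°C, clear"

# Two different agents discover and reuse the same tool, built once:
for agent in ("coding-agent", "chat-agent"):
    print(agent, "sees", server.list_tools())
    print(agent, "got", server.call_tool("get_weather", city="Seoul"))
```

The real protocol adds transport (stdio/HTTP), schemas, and capability negotiation, but the ownership split is the same: the server side changes when tools change, the agent side doesn't.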
How It Works
Pass available tool schemas in tools parameter when calling LLM API
If LLM response includes tool_use, execute that function
Add execution result as tool_result to messages → call LLM again
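The three steps above form a loop, sketched here with a scripted stand-in for the LLM API. fake_llm and the message/field names mimic the common shape of provider APIs but match no specific one; in real code the fake_llm call would be an actual LLM API request with the tool schemas in its tools parameter.

```python
def fake_llm(messages: list[dict], tools: list[dict]) -> dict:
    """Scripted stand-in: asks for the tool once, then answers using its result."""
    last = messages[-1]
    if last.get("role") == "tool_result":
        return {"type": "text", "text": f"Seoul is currently {last['content']}."}
    return {"type": "tool_use", "name": "get_weather", "input": {"city": "Seoul"}}


def get_weather(city: str) -> str:
    return "18°C, clear"  # stand-in for the real API call


TOOLS = {"get_weather": get_weather}
tool_schemas = [{"name": "get_weather"}]  # passed on every LLM call

messages = [{"role": "user", "content": "What's the weather in Seoul?"}]
while True:
    response = fake_llm(messages, tool_schemas)
    if response["type"] != "tool_use":  # final natural-language answer
        break
    # Execute the requested tool, append the result, and call the LLM again.
    result = TOOLS[response["name"]](**response["input"])
    messages.append({"role": "tool_result", "content": result})

print(response["text"])  # Seoul is currently 18°C, clear.
```

Note the loop shape: it keeps going as long as the model asks for tools, which is exactly what lets an agent chain several tool calls before answering.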
With MCP, tools live in separate servers that any agent can reuse.