🔧

Tool Use Pattern: Giving LLMs Hands and Feet

How LLMs interact with the outside world through function calls

LLMs can't actually call a weather API when asked "what's the weather today?" They can only infer an answer from training data. Connect tools, and everything changes.

How It Works

  1. Pass tool schema to LLM: "you can use get_weather(city: string)"
  2. User: "What's the weather in Seoul?"
  3. LLM responds: tool_use: get_weather(city="Seoul")
  4. Client makes the actual API call → passes the result to the LLM
  5. LLM formats the result naturally: "Seoul is currently 18°C, clear."

The LLM doesn't call APIs directly. It requests "call this tool with these parameters," and the client (the developer's code) executes the call and returns the result.
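The client-side loop can be sketched in Python with a stubbed model standing in for the real API. The schema, message shapes, and the get_weather stub are illustrative assumptions, not any specific SDK:

```python
# A minimal sketch of the client-side tool loop, with a stubbed model.
# The message format and the fake model are illustrative assumptions.

def get_weather(city: str) -> str:
    """Stand-in for a real weather API call."""
    return f"{city}: 18°C, clear"

TOOLS = {"get_weather": get_weather}

def fake_llm(messages):
    """Pretend model: first asks for a tool, then formats its result."""
    last = messages[-1]
    if last["role"] == "user":
        # Step 3: the model replies with a tool-use request, not an answer.
        return {"type": "tool_use", "name": "get_weather",
                "input": {"city": "Seoul"}}
    # Step 5: once the tool result is in context, answer in prose.
    return {"type": "text", "text": f"Currently {last['content']}."}

def run(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    reply = fake_llm(messages)
    while reply["type"] == "tool_use":
        # Step 4: the client, not the model, executes the function.
        result = TOOLS[reply["name"]](**reply["input"])
        messages.append({"role": "tool_result", "content": result})
        reply = fake_llm(messages)
    return reply["text"]

print(run("What's the weather in Seoul?"))
# → Currently Seoul: 18°C, clear.
```

The while loop matters: a real model may chain several tool calls before producing a final text answer, so the client keeps executing tools until the reply is plain text.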

Good Tool Design

Clear names: read_file, search_code, run_test โ€” obvious from the name.

Detailed descriptions: LLMs decide when to use tools based on descriptions. "Reads a file" is worse than "Returns the text content of a file at the specified path. Binary files are not supported."

Minimal parameters: Keep required params minimal, set good defaults for optionals.

Useful results: Return success/failure, error messages, and execution results structurally.
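The four guidelines combine in a tool definition like the following sketch. The JSON-Schema-style layout and the read_file helper are illustrative assumptions, not a specific vendor's format:

```python
# Sketch of a tool that follows the design guidelines above:
# clear name, detailed description, minimal params, structured results.

READ_FILE_TOOL = {
    "name": "read_file",  # clear, verb-first name
    "description": ("Returns the text content of a file at the specified "
                    "path. Binary files are not supported."),
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "File path to read"},
            "encoding": {"type": "string", "default": "utf-8"},  # good default
        },
        "required": ["path"],  # keep required params minimal
    },
}

def read_file(path: str, encoding: str = "utf-8") -> dict:
    """Return a structured result: success flag plus content or error."""
    try:
        with open(path, encoding=encoding) as f:
            return {"success": True, "content": f.read()}
    except OSError as e:
        # Returning the error message lets the LLM explain or retry.
        return {"success": False, "error": str(e)}

print(read_file("no_such_file.txt"))
```

Returning the error as data instead of raising is deliberate: the LLM can read the message, tell the user what went wrong, or try a corrected path.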

MCP (Model Context Protocol)

Anthropic's tool standardization protocol. Separates tool creators from tool consumers (LLMs/agents).

MCP servers provide tools, MCP clients (agents) connect and use them. Build a tool once, reuse across multiple agents.
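The server/client split can be illustrated with a toy in-process stand-in. This is a conceptual sketch only, not the actual MCP wire protocol or SDK; all names here are made up:

```python
# Conceptual sketch of the MCP split: a server owns the tools, any
# client (agent) discovers and calls them through a uniform interface.

class ToolServer:
    """Plays the role of an MCP server: registers and executes tools."""
    def __init__(self):
        self._tools = {}

    def tool(self, name, description):
        def register(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return register

    def list_tools(self):
        # Clients discover what is available by listing, not hardcoding.
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call_tool(self, name, **kwargs):
        return self._tools[name]["fn"](**kwargs)

server = ToolServer()

@server.tool("get_weather", "Returns current weather for a city.")
def get_weather(city: str) -> str:
    return f"{city}: 18°C, clear"

# Any number of agents can reuse the same server:
for spec in server.list_tools():
    print(spec["name"], "-", spec["description"])
print(server.call_tool("get_weather", city="Seoul"))
```

The point of the indirection: the tool's author and the agent's author never need to know about each other, only about the list/call interface.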

How It Works

  1. Pass available tool schemas in the tools parameter when calling the LLM API
  2. If the LLM response includes tool_use, execute that function
  3. Add the execution result as tool_result to messages → call the LLM again
  4. Using MCP, tools are separated into servers reusable across agents
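The steps above can be sketched as the conversation payload growing across one tool round-trip. The content-block shapes below follow the Anthropic-style tool_use / tool_result convention; the id and values are made up for illustration:

```python
# How messages accumulate across one tool round-trip.
# Block shapes follow Anthropic-style tool_use / tool_result content
# blocks; the id and values are illustrative.

messages = [
    {"role": "user", "content": "What's the weather in Seoul?"},
]

# Step 2: the model's response contains a tool_use block.
messages.append({
    "role": "assistant",
    "content": [{"type": "tool_use", "id": "toolu_01",
                 "name": "get_weather", "input": {"city": "Seoul"}}],
})

# Step 3: the client executes the tool, appends the result as a
# tool_result block, then calls the LLM again with the full history.
messages.append({
    "role": "user",
    "content": [{"type": "tool_result", "tool_use_id": "toolu_01",
                 "content": "18°C, clear"}],
})

print(len(messages))  # the next API call sends all three turns
```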

Use Cases

Adding DB queries, email sending, and payment processing to chatbots

Providing file read/write and terminal execution tools to coding agents