
Agent Loop: Anatomy of the Execution Loop

A single while loop is all an agent is

Dozens of agent frameworks exist, but they all share the same structure.

The Basic Loop

messages = [system_prompt, user_message]
while True:
    response = llm.call(messages, tools=available_tools)
    messages.append(assistant_message(response))  # keep the model's own turn in history

    if response.stop_reason == 'tool_use':
        for tool_call in response.tool_calls:
            result = execute(tool_call)
            messages.append(tool_result(result))
    else:
        break  # no tool calls: the final answer is in response

That's it. LangChain, CrewAI, AutoGen: the core of each is this loop.

Loop Components

Message history: Where all conversations and tool results accumulate. The agent's "memory." Context window limits mean long histories need summarizing or truncating.
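A minimal truncation sketch. The token estimate is a crude stand-in for a real tokenizer, and the message shape (`dict` with a `content` key) is an assumption:

```python
def estimate_tokens(message: dict) -> int:
    # Crude heuristic: roughly 4 characters per token. Swap in a real tokenizer.
    return len(str(message.get("content", ""))) // 4

def truncate_history(messages: list[dict], max_tokens: int) -> list[dict]:
    # Always keep the system prompt; then keep the most recent turns that fit.
    system, rest = messages[0], messages[1:]
    kept, used = [], estimate_tokens(system)
    for msg in reversed(rest):
        cost = estimate_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))
```

Walking backwards means the newest turns survive, which is usually what the agent needs most.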

Tool definitions: Schemas telling the LLM "you can use these tools": each has a name, a description, and parameters in JSON Schema. The better the tool descriptions, the more reliably the LLM picks the right tool.
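For example, a definition in Anthropic's tool-use shape (the `get_weather` tool itself is hypothetical; other providers use slightly different envelopes, but the idea is the same):

```python
# Hypothetical tool definition in Anthropic's format: name, description,
# and parameters expressed as JSON Schema under "input_schema".
get_weather_tool = {
    "name": "get_weather",
    "description": (
        "Get the current weather for a city. "
        "Use this whenever the user asks about weather conditions."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}
```

Note that the description says *when* to use the tool, not just what it does; that is what the model actually keys on.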

Exit conditions: When does the loop end? Usually when the LLM returns text without tool calls. Safety guards like max iterations, timeouts, and cost limits are also needed.
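A sketch of the loop with an iteration cap (`MAX_ITERATIONS`, `llm_call`, and `execute` are placeholder names; timeouts and cost caps slot into the same spot):

```python
MAX_ITERATIONS = 20

def run_agent(llm_call, execute, messages):
    for _ in range(MAX_ITERATIONS):
        response = llm_call(messages)
        if response["stop_reason"] != "tool_use":
            return response  # normal exit: the model answered in plain text
        for call in response["tool_calls"]:
            messages.append({"role": "tool", "content": execute(call)})
    # Safety guard: never loop forever if the model keeps calling tools.
    raise RuntimeError(f"agent exceeded {MAX_ITERATIONS} iterations")
```

The `for` instead of `while True` makes the cap impossible to forget.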

Production Considerations

Error handling: What if tool execution fails? Pass the error message back to the LLM and it will try a different approach. Don't just crash.
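A minimal sketch: catch the failure and return the error text as the tool result, so the model can read it and adjust (the `registry` mapping and result shape are assumptions):

```python
def safe_execute(tool_call: dict, registry: dict) -> dict:
    # Instead of crashing the loop, feed the error back as a tool result
    # so the LLM can see what went wrong and try something else.
    try:
        fn = registry[tool_call["name"]]
        return {"is_error": False, "content": str(fn(**tool_call["input"]))}
    except Exception as exc:
        return {"is_error": True, "content": f"{type(exc).__name__}: {exc}"}
```

Including the exception type in the message gives the model a concrete hint (bad arguments vs. unknown tool vs. network failure).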

Context management: 100 tool calls make the message history huge. A strategy is needed to summarize or drop old messages before hitting token limits.
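One possible strategy, sketched below: blank out all but the most recent tool results, since old tool outputs are usually the bulk of the history. A real implementation might summarize them with another LLM call instead:

```python
def compact_old_tool_results(messages: list[dict], keep_recent: int = 5) -> list[dict]:
    # Replace all but the newest few tool results with a short stub
    # to reclaim context budget; the original list is left untouched.
    tool_idx = [i for i, m in enumerate(messages) if m.get("role") == "tool"]
    stale = set(tool_idx[:-keep_recent] if keep_recent else tool_idx)
    return [
        {**m, "content": "[old tool result elided]"} if i in stale else m
        for i, m in enumerate(messages)
    ]
```

Returning a new list rather than mutating in place keeps the full history available for logging.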

Parallel tool calls: Claude can call multiple tools in a single response. Running independent tool calls in parallel speeds things up.
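A sketch using a thread pool, which works well for I/O-bound tools (`execute` is a placeholder for your tool dispatcher). `pool.map` returns results in submission order, so each result can still be matched to its tool call:

```python
from concurrent.futures import ThreadPoolExecutor

def execute_parallel(tool_calls: list, execute) -> list:
    # Only safe when the tool calls are independent of each other;
    # results come back in the same order as tool_calls.
    with ThreadPoolExecutor(max_workers=min(len(tool_calls), 8)) as pool:
        return list(pool.map(execute, tool_calls))
```

For tools that share state (e.g. a working directory), fall back to sequential execution.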

Observability: Log what happens each loop iteration. Debugging without logs is hell.
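A sketch of one structured log line per iteration (the record fields are illustrative, not a standard):

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

def log_iteration(iteration: int, response: dict, results: list) -> dict:
    # One structured record per loop pass: easy to grep, easy to chart.
    record = {
        "iteration": iteration,
        "stop_reason": response.get("stop_reason"),
        "tools": [c["name"] for c in response.get("tool_calls", [])],
        "result_bytes": [len(str(r)) for r in results],
    }
    log.info(json.dumps(record))
    return record
```

Logging result *sizes* rather than full contents keeps logs readable while still showing which tool blew up the context.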

How It Works

1. Initialize message history with system prompt + user message

2. Call LLM → if response has tool_use, execute tools

3. Add tool results to message history → call LLM again

4. No tool_use → exit loop → return final response
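Put together, the four steps look like this end to end. The `FakeLLM` below stands in for a real API client; it requests one tool call, then answers:

```python
def run(llm, tools, system_prompt, user_message):
    # Step 1: initialize the message history.
    messages = [{"role": "system", "content": system_prompt},
                {"role": "user", "content": user_message}]
    while True:
        # Step 2: call the LLM.
        response = llm.call(messages)
        messages.append({"role": "assistant", "content": response})
        if response["stop_reason"] != "tool_use":
            # Step 4: no tool_use, so exit and return the final answer.
            return response["content"]
        # Step 3: execute tools, append results, loop to call the LLM again.
        for call in response["tool_calls"]:
            result = tools[call["name"]](**call["input"])
            messages.append({"role": "tool", "content": str(result)})

class FakeLLM:
    # Stand-in for a real client: one tool round, then a text answer.
    def __init__(self):
        self.turn = 0
    def call(self, messages):
        self.turn += 1
        if self.turn == 1:
            return {"stop_reason": "tool_use",
                    "tool_calls": [{"name": "add", "input": {"a": 2, "b": 3}}]}
        return {"stop_reason": "end_turn", "content": "2 + 3 = 5"}
```

Usage: `run(FakeLLM(), {"add": lambda a, b: a + b}, "You are helpful.", "What is 2+3?")` returns the final text answer after one tool round.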

Use Cases

Building a simple CLI agent from scratch

Understanding the internal workings of existing agent frameworks