
Overview

The hard part of building agents (or any LLM application) is making them reliable. An agent may work in a prototype and still fail in real-world use cases.

Why do agents fail?

When agents fail, it’s usually because the LLM call inside the agent took the wrong action or produced an unexpected result. LLMs fail for one of two reasons:
  1. The underlying LLM is not capable enough
  2. The “right” context was not passed to the LLM
More often than not, the second reason is the culprit. Context engineering means providing the right information and tools, in the right format, so the LLM can accomplish a task. It is the core job of an AI engineer: missing or poorly formatted context is the most common blocker for reliable agents, and LangChain’s agent abstractions are designed specifically to facilitate context engineering.
New to context engineering? Start with the conceptual overview to understand the different types of context and when to use them.

The agent loop

A typical agent loop consists of two main steps:
  1. Model call - calls the LLM with a prompt and available tools, returns either a response or a request to execute tools
  2. Tool execution - executes the tools that the LLM requested, returns tool results
[Diagram: the core agent loop]
This loop continues until the LLM decides to finish.
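
Conceptually, the loop looks something like the sketch below (illustrative pseudocode, not LangChain’s internal implementation; model, tools, and execute_tool are stand-ins):

# Illustrative pseudocode for the core agent loop - not LangChain internals.
def agent_loop(model, tools, messages):
    while True:
        # Step 1: model call with the conversation and available tools
        response = model.invoke(messages, tools=tools)
        messages.append(response)

        # No tool calls means the LLM decided to finish
        if not response.tool_calls:
            return response

        # Step 2: execute each requested tool and append the results
        for tool_call in response.tool_calls:
            result = execute_tool(tools, tool_call)  # hypothetical helper
            messages.append(
                {"role": "tool", "content": result, "tool_call_id": tool_call["id"]}
            )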

What you can control

To build reliable agents, you need to control what happens at each step of the agent loop, as well as what happens between steps.
| Context Type | What You Control | Transient or Persistent |
| --- | --- | --- |
| Model Context | What goes into model calls (instructions, message history, tools, response format) | Transient |
| Tool Context | What tools can access and produce (reads/writes to state, store, runtime context) | Persistent |
| Life-cycle Context | What happens between model and tool calls (summarization, guardrails, logging, etc.) | Persistent |

Transient context

What the LLM sees for a single call. You can modify messages, tools, or prompts without changing what’s saved in state.

Persistent context

What gets saved in state across turns. Life-cycle hooks and tool writes modify this permanently.

Data sources

Throughout this process, your agent accesses (reads / writes) different sources of data:
| Data Source | Also Known As | Scope | Examples |
| --- | --- | --- | --- |
| Runtime Context | Static configuration | Conversation-scoped | User ID, API keys, database connections, permissions, environment settings |
| State | Short-term memory | Conversation-scoped | Current messages, uploaded files, authentication status, tool results |
| Store | Long-term memory | Cross-conversation | User preferences, extracted insights, memories, historical data |
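
For example, runtime context is supplied when you invoke the agent rather than stored in State. A minimal sketch, assuming a context_schema parameter on create_agent and a context argument to invoke (check the runtime docs for the exact names):

from dataclasses import dataclass
from langchain.agents import create_agent

@dataclass
class Context:
    user_id: str
    db_url: str

agent = create_agent(
    model="openai:gpt-4o",
    tools=[...],
    context_schema=Context,
)

# Runtime context is passed per invocation - it never lives in State
result = agent.invoke(
    {"messages": [{"role": "user", "content": "Show my recent orders"}]},
    context=Context(user_id="user-123", db_url="postgresql://..."),
)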

How it works

LangChain middleware is the mechanism under the hood that makes context engineering practical. Middleware allows you to hook into any step in the agent lifecycle and:
  • Update context
  • Jump to a different step in the agent lifecycle
Throughout this guide, you’ll see frequent use of the middleware API as a means to the context engineering end.
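
For example, here is a minimal observational hook, assuming the decorator form of before_model described in the middleware docs:

from typing import Any
from langchain.agents import create_agent
from langchain.agents.middleware import before_model, AgentState
from langgraph.runtime import Runtime

@before_model
def log_model_input(state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
    """Log how much context the model is about to receive."""
    print(f"Calling model with {len(state['messages'])} messages")
    return None  # no state update - purely observational

agent = create_agent(
    model="openai:gpt-4o",
    tools=[...],
    middleware=[log_model_input],
)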

Model Context

Control what goes into each model call - instructions, available tools, which model to use, and output format. These decisions directly impact reliability and cost. All of these types of model context can draw from state (short-term memory), store (long-term memory), or runtime context (static configuration).

System Prompt

The system prompt sets the LLM’s behavior and capabilities. Different users, contexts, or conversation stages need different instructions. Successful agents draw on memories, preferences, and configuration to provide the right instructions for the current state of the conversation.
Access message count or conversation context from state:
from langchain.agents import create_agent
from langchain.agents.middleware import dynamic_prompt, ModelRequest

@dynamic_prompt
def state_aware_prompt(request: ModelRequest) -> str:
    # request.messages is a shortcut for request.state["messages"]
    message_count = len(request.messages)

    base = "You are a helpful assistant."

    if message_count > 10:
        base += "\nThis is a long conversation - be extra concise."

    return base

agent = create_agent(
    model="openai:gpt-4o",
    tools=[...],
    middleware=[state_aware_prompt]
)

Messages

Messages make up the prompt that is sent to the LLM. It’s critical to manage the content of messages to ensure that the LLM has the right information to respond well.
Inject context about files the user has uploaded (tracked in State) into the model call:
from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from typing import Callable

@wrap_model_call
def inject_file_context(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
    """Inject context about files user has uploaded this session."""
    # Read from State: get uploaded files metadata
    uploaded_files = request.state.get("uploaded_files", [])  

    if uploaded_files:
        # Build context about available files
        file_descriptions = []
        for file in uploaded_files:
            file_descriptions.append(
                f"- {file['name']} ({file['type']}): {file['summary']}"
            )

        file_context = f"""Files you have access to in this conversation:
{chr(10).join(file_descriptions)}

Reference these files when answering questions."""

        # Append the file context as the latest user message
        messages = [
            *request.messages,
            {"role": "user", "content": file_context},
        ]
        request = request.override(messages=messages)  

    return handler(request)

agent = create_agent(
    model="openai:gpt-4o",
    tools=[...],
    middleware=[inject_file_context]
)
Transient vs persistent message updates: the examples above use wrap_model_call to make transient updates - modifying what messages are sent to the model for a single call without changing what’s saved in state.

For persistent updates that modify state (like the summarization example in Life-cycle Context), use life-cycle hooks like before_model or after_model to permanently update the conversation history, as sketched below. See the middleware documentation for more details.
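
For instance, here is a sketch of a persistent update: a before_model hook that permanently trims State to recent messages, assuming the RemoveMessage / REMOVE_ALL_MESSAGES pattern from the middleware docs:

from typing import Any
from langchain.agents.middleware import before_model, AgentState
from langchain_core.messages import RemoveMessage
from langgraph.graph.message import REMOVE_ALL_MESSAGES
from langgraph.runtime import Runtime

@before_model
def keep_recent_history(state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
    """Persistently trim the conversation history saved in State."""
    messages = state["messages"]
    if len(messages) <= 20:
        return None  # nothing to trim
    # Replace the stored history with only the 20 most recent messages.
    # (A real implementation should pick a cut point that doesn't separate
    # an AI message from its tool results.)
    return {"messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES), *messages[-20:]]}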

Tools

Tools let the model interact with databases, APIs, and external systems. How you define and select tools directly impacts whether the model can complete tasks effectively.

Defining tools

Each tool needs a clear name, description, argument names, and argument descriptions. These aren’t just metadata—they guide the model’s reasoning about when and how to use the tool.
from langchain.tools import tool

@tool(parse_docstring=True)
def search_orders(
    user_id: str,
    status: str,
    limit: int = 10
) -> str:
    """Search for user orders by status.

    Use this when the user asks about order history or wants to check
    order status. Always filter by the provided status.

    Args:
        user_id: Unique identifier for the user
        status: Order status: 'pending', 'shipped', or 'delivered'
        limit: Maximum number of results to return
    """
    # Query the orders database here (stubbed for this example)
    return f"Found {limit} {status} orders for user {user_id}"

Selecting tools

Not every tool is appropriate for every situation. Too many tools may overwhelm the model (overload context) and increase errors; too few limit capabilities. Dynamic tool selection adapts the available toolset based on authentication state, user permissions, feature flags, or conversation stage.
Enable advanced tools only after certain conversation milestones:
from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from typing import Callable

@wrap_model_call
def state_based_tools(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
    """Filter tools based on conversation State."""
    # Read from State: check if user has authenticated
    state = request.state  
    is_authenticated = state.get("authenticated", False)  
    message_count = len(state["messages"])

    # Only enable sensitive tools after authentication
    if not is_authenticated:
        tools = [t for t in request.tools if t.name.startswith("public_")]
        request = request.override(tools=tools)  
    elif message_count < 5:
        # Limit tools early in conversation
        tools = [t for t in request.tools if t.name != "advanced_search"]
        request = request.override(tools=tools)  

    return handler(request)

agent = create_agent(
    model="openai:gpt-4o",
    tools=[public_search, private_search, advanced_search],
    middleware=[state_based_tools]
)
See Dynamically selecting tools for more examples.

Model

Different models have different strengths, costs, and context windows. Select the right model for the task at hand; the best choice can change during an agent run.
Use different models based on conversation length from State:
from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from langchain.chat_models import init_chat_model
from typing import Callable

# Initialize models once outside the middleware
large_model = init_chat_model("anthropic:claude-sonnet-4-5")
standard_model = init_chat_model("openai:gpt-4o")
efficient_model = init_chat_model("openai:gpt-4o-mini")

@wrap_model_call
def state_based_model(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
    """Select model based on State conversation length."""
    # request.messages is a shortcut for request.state["messages"]
    message_count = len(request.messages)  

    if message_count > 20:
        # Long conversation - use model with larger context window
        model = large_model
    elif message_count > 10:
        # Medium conversation
        model = standard_model
    else:
        # Short conversation - use efficient model
        model = efficient_model

    request = request.override(model=model)  

    return handler(request)

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[...],
    middleware=[state_based_model]
)
See Dynamic model for more examples.

Response Format

Structured output transforms unstructured text into validated, structured data. When extracting specific fields or returning data for downstream systems, free-form text isn’t sufficient.

How it works: when you provide a schema as the response format, the model’s final response is guaranteed to conform to that schema. The agent runs the model / tool-calling loop until the model is done calling tools, then coerces the final response into the provided format.

Defining formats

Schema definitions guide the model. Field names, types, and descriptions specify exactly what format the output should adhere to.
from pydantic import BaseModel, Field

class CustomerSupportTicket(BaseModel):
    """Structured ticket information extracted from customer message."""

    category: str = Field(
        description="Issue category: 'billing', 'technical', 'account', or 'product'"
    )
    priority: str = Field(
        description="Urgency level: 'low', 'medium', 'high', or 'critical'"
    )
    summary: str = Field(
        description="One-sentence summary of the customer's issue"
    )
    customer_sentiment: str = Field(
        description="Customer's emotional tone: 'frustrated', 'neutral', or 'satisfied'"
    )
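
To use the schema, pass it as the agent’s response format; the validated result is returned alongside the messages. A minimal sketch, assuming the response_format parameter on create_agent and the structured_response key in the result:

from langchain.agents import create_agent

agent = create_agent(
    model="openai:gpt-4o",
    tools=[...],
    response_format=CustomerSupportTicket,
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "I was double-charged this month!"}]}
)
ticket = result["structured_response"]  # validated CustomerSupportTicket instance
print(ticket.category, ticket.priority, ticket.customer_sentiment)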

Selecting formats

Dynamic response format selection adapts schemas based on user preferences, conversation stage, or role—returning simple formats early and detailed formats as complexity increases.
Configure structured output based on conversation state:
from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from pydantic import BaseModel, Field
from typing import Callable

class SimpleResponse(BaseModel):
    """Simple response for early conversation."""
    answer: str = Field(description="A brief answer")

class DetailedResponse(BaseModel):
    """Detailed response for established conversation."""
    answer: str = Field(description="A detailed answer")
    reasoning: str = Field(description="Explanation of reasoning")
    confidence: float = Field(description="Confidence score 0-1")

@wrap_model_call
def state_based_output(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
    """Select output format based on State."""
    # request.messages is a shortcut for request.state["messages"]
    message_count = len(request.messages)  

    if message_count < 3:
        # Early conversation - use simple format
        request = request.override(response_format=SimpleResponse)  
    else:
        # Established conversation - use detailed format
        request = request.override(response_format=DetailedResponse)  

    return handler(request)

agent = create_agent(
    model="openai:gpt-4o",
    tools=[...],
    middleware=[state_based_output]
)

Tool Context

Tools are special in that they both read and write context. In the most basic case, a tool receives the LLM’s request parameters, does its work, and returns a tool message with the result. Tools can also fetch important information the model needs to complete a task and write context that future steps rely on.

Reads

Most real-world tools need more than just the LLM’s parameters. They need user IDs for database queries, API keys for external services, or current session state to make decisions. Tools read from state, store, and runtime context to access this information.
Read from State to check current session information:
from langchain.tools import tool, ToolRuntime
from langchain.agents import create_agent

@tool
def check_authentication(
    runtime: ToolRuntime
) -> str:
    """Check if user is authenticated."""
    # Read from State: check current auth status
    current_state = runtime.state
    is_authenticated = current_state.get("authenticated", False)

    if is_authenticated:
        return "User is authenticated"
    else:
        return "User is not authenticated"

agent = create_agent(
    model="openai:gpt-4o",
    tools=[check_authentication]
)

Writes

Tool results can be used to help an agent complete a given task. Tools can both return results directly to the model and update the memory of the agent to make important context available to future steps.
Write to State to track session-specific information using Command:
from langchain.tools import tool, ToolRuntime
from langchain.agents import create_agent
from langgraph.types import Command

@tool
def authenticate_user(
    password: str,
    runtime: ToolRuntime
) -> Command:
    """Authenticate user and update State."""
    # Perform authentication (simplified)
    if password == "correct":
        # Write to State: mark as authenticated using Command
        return Command(
            update={"authenticated": True},
        )
    else:
        return Command(update={"authenticated": False})

agent = create_agent(
    model="openai:gpt-4o",
    tools=[authenticate_user]
)
See Tools for comprehensive examples of accessing state, store, and runtime context in tools.

Life-cycle Context

Control what happens between the core agent steps - intercepting data flow to implement cross-cutting concerns like summarization, guardrails, and logging. As you’ve seen in Model Context and Tool Context, middleware is the mechanism that makes context engineering practical. Middleware allows you to hook into any step in the agent lifecycle and either:
  1. Update context - Modify state and store to persist changes, update conversation history, or save insights
  2. Jump in the lifecycle - Move to a different step in the agent cycle based on context (e.g., skip tool execution if a condition is met, or repeat the model call with modified context), as sketched below the diagram
[Diagram: middleware hooks in the agent loop]
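
For example, a guardrail can jump straight to the end of the loop instead of calling the model. A sketch, assuming the can_jump_to / jump_to mechanism described in the middleware docs:

from typing import Any
from langchain.agents import create_agent
from langchain.agents.middleware import before_model, AgentState
from langgraph.runtime import Runtime

@before_model(can_jump_to=["end"])
def topic_guardrail(state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
    """Skip the model call entirely when the request is off-limits."""
    last_message = state["messages"][-1]
    if "forbidden topic" in str(last_message.content).lower():
        return {
            "messages": [{"role": "assistant", "content": "Sorry, I can't help with that topic."}],
            "jump_to": "end",  # bypass the model call and end the run
        }
    return None  # proceed with the normal loop

agent = create_agent(
    model="openai:gpt-4o",
    tools=[...],
    middleware=[topic_guardrail],
)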

Example: Summarization

One of the most common life-cycle patterns is automatically condensing conversation history when it gets too long. Unlike the transient message trimming shown in Model Context, summarization persistently updates state - permanently replacing old messages with a summary that’s saved for all future turns. LangChain offers built-in middleware for this:
from langchain.agents import create_agent
from langchain.agents.middleware import SummarizationMiddleware

agent = create_agent(
    model="openai:gpt-4o",
    tools=[...],
    middleware=[
        SummarizationMiddleware(
            model="openai:gpt-4o-mini",
            max_tokens_before_summary=4000,  # Trigger summarization at 4000 tokens
            messages_to_keep=20,  # Keep last 20 messages after summary
        ),
    ],
)
When the conversation exceeds the token limit, SummarizationMiddleware automatically:
  1. Summarizes older messages using a separate LLM call
  2. Replaces them with a summary message in State (permanently)
  3. Keeps recent messages intact for context
The summarized conversation history is permanently updated - future turns will see the summary instead of the original messages.
For a complete list of built-in middleware, available hooks, and how to create custom middleware, see the Middleware documentation.

Best practices

  1. Start simple - Begin with static prompts and tools, add dynamics only when needed
  2. Test incrementally - Add one context engineering feature at a time
  3. Monitor performance - Track model calls, token usage, and latency
  4. Use built-in middleware - Leverage SummarizationMiddleware, LLMToolSelectorMiddleware, etc.
  5. Document your context strategy - Make it clear what context is being passed and why
  6. Understand transient vs persistent - Model context changes are transient (per-call), while life-cycle context changes persist to state
