Usage & Enterprise Capabilities
Key Benefits
- Statefulness at its Core: The StateGraph object passes a typed state dictionary between all nodes, automatically merging and appending to execution histories.
- Cycles and Self-Correction: Unlike basic chains, you can build loops. If a code execution tool fails, the agent can look at the error, loop back to the writing phase, and try again.
- Human-in-the-Loop: Easily set breakpoints in your graph. The execution pauses, allowing a human UI to review the agent's proposed action, modify the state, and resume execution.
- Highly Controllable: You explicitly define the routing logic between nodes, eliminating the unpredictability often found in generic "auto-agent" solutions.
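The state-merging behavior described above can be sketched in plain Python: a reducer such as operator.add is attached to a state key, and partial updates returned by nodes are combined with the existing value instead of replacing it. This is a simplified illustration of the reducer idea, not LangGraph's internal implementation.

```python
import operator
from typing import Annotated, TypedDict, get_type_hints

class State(TypedDict):
    # operator.add on lists means "concatenate", so histories accumulate
    messages: Annotated[list, operator.add]

def merge_update(state: dict, update: dict) -> dict:
    """Apply a node's partial update, using each key's declared reducer.
    Illustrative sketch only; LangGraph handles this internally."""
    hints = get_type_hints(State, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        metadata = getattr(hints.get(key), "__metadata__", ())
        if metadata:                      # a reducer was declared
            merged[key] = metadata[0](merged[key], value)
        else:                             # no reducer: overwrite
            merged[key] = value
    return merged

state = {"messages": ["user: hi"]}
state = merge_update(state, {"messages": ["assistant: hello"]})
print(state["messages"])  # both messages survive the update
```

Because the reducer appends rather than overwrites, every node can return just its own contribution and the full conversation history still accumulates.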
Production Architecture Overview
- The State Definition: A standard Python TypedDict that defines what data is passed around the graph (e.g., a list of chat messages, current variables, tool outputs).
- The Graph (Nodes and Edges): Python functions mapped to nodes. Routing functions mapped to conditional edges (e.g., "if agent called a tool -> route to tool node; else -> route to complete").
- Checkpointer (Persistence): A database mechanism (e.g., Memory, SQLite, PostgreSQL) that saves the graph state at every step. Essential for resuming dropped workflows or handling human-in-the-loop pauses.
- Deployment Layer: LangGraph workflows are usually deployed as background workers (e.g., Celery) or integrated into chat-based APIs where a persistent server maintains a connection to the client via WebSockets for streaming token outputs.
Implementation Blueprint
Prerequisites
# Ensure Python 3.10+ is installed
python3 -m venv langgraph-env
source langgraph-env/bin/activate
# Install LangGraph, LangChain, and a model provider
pip install langgraph langchain-openai
export OPENAI_API_KEY="sk-your-openai-api-key"
Building a Basic Agent with Tools (Python)
import operator
from typing import TypedDict, Annotated, Sequence
from langchain_openai import ChatOpenAI
from langchain_core.messages import BaseMessage, HumanMessage
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
# 1. Define custom tools (Mocked for example)
from langchain_core.tools import tool
@tool
def get_weather(location: str) -> str:
    """Returns the current weather for a location."""
    if "san francisco" in location.lower():
        return "It is 60 degrees and foggy."
    return "It is 75 degrees and sunny."
tools = [get_weather]
# 2. Define the Agent State
# The state is passed and updated at every node.
class AgentState(TypedDict):
    # The 'add' operator ensures messages are appended, not overwritten
    messages: Annotated[Sequence[BaseMessage], operator.add]
# 3. Initialize the model and bind tools
model = ChatOpenAI(temperature=0)
model_with_tools = model.bind_tools(tools)
tool_node = ToolNode(tools) # Built-in node to execute tools
# 4. Define Graph Nodes (Functions)
def call_model(state: AgentState):
    """The node that calls the LLM."""
    messages = state['messages']
    response = model_with_tools.invoke(messages)
    # Return a dictionary that matches the State keys to update it
    return {"messages": [response]}
# 5. Define Routing Logic
def should_continue(state: AgentState) -> str:
    """Conditional edge logic to decide the next step."""
    messages = state['messages']
    last_message = messages[-1]
    # If the LLM decided to call a tool, route to the tool node
    if last_message.tool_calls:
        return "continue"
    # Otherwise, it's done reasoning. Route to END.
    return "end"
# 6. Build the Graph
workflow = StateGraph(AgentState)
# Add nodes
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
# Set the entry point
workflow.set_entry_point("agent")
# Add conditional routing from agent to either action or END
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "action",
        "end": END
    }
)
# Add a normal edge: after an action executes, always go back to the agent to interpret it
workflow.add_edge("action", "agent")
# 7. Compile the graph
app = workflow.compile()
# 8. Execute the compiled graph
inputs = {"messages": [HumanMessage(content="What is the weather in San Francisco?")]}
print("Executing LangGraph...")
# Stream the execution steps
for output in app.stream(inputs):
    for key, value in output.items():
        print(f"Node '{key}':")
        print(value)
        print("---")
Adding Persistence and Human-in-the-Loop
from langgraph.checkpoint.memory import MemorySaver
# Use an in-memory saver for testing. In prod, use PostgresSaver or similar.
memory = MemorySaver()
# Compile the graph with persistence and a breakpoint *before* the action node
app_persistent = workflow.compile(
    checkpointer=memory,
    interrupt_before=["action"]  # Wait before letting the agent execute a tool
)
# Execution requires a thread ID to track the session
config = {"configurable": {"thread_id": "session-1234"}}
inputs = {"messages": [HumanMessage(content="What is the weather in San Francisco?")]}
print("Executing until breakpoint...")
for event in app_persistent.stream(inputs, config):
    print(event)
# The graph pauses here.
# A human could look at a web UI to approve the tool call.
print("Resuming execution...")
# Passing None as input tells the graph to resume from its saved state
for event in app_persistent.stream(None, config):
    print(event)
Scaling and Web Integration
- FastAPI / WebSockets: To stream the LLM's thought process to a frontend UI in real-time, wrap the LangGraph execution in a FastAPI WebSocket endpoint and stream the parsed token events.
- Database Persistence: Replace MemorySaver with a database-backed checkpointer (e.g., SQLite- or PostgreSQL-based savers) so your agents can "remember" users over weeks and months of conversations.
- Timeouts and Max Revisions: Because LangGraph supports cycles, always configure your graph invocations with a recursion_limit to prevent agents from getting stuck in an infinite retry loop, which will rapidly burn through cloud API credits.
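As a sketch of this loop guard (assuming the compiled app and inputs from the blueprint above), the recursion limit is passed per invocation through the run config and raises an error when exceeded, rather than spinning forever:

```python
from langgraph.errors import GraphRecursionError

# Assumes `app` and `inputs` from the blueprint above.
# recursion_limit caps the number of graph steps per invocation.
config = {"recursion_limit": 10, "configurable": {"thread_id": "session-1234"}}

try:
    result = app.invoke(inputs, config=config)
except GraphRecursionError:
    # The agent looped too many times: fail fast instead of burning credits
    result = {"error": "Agent exceeded its revision budget."}
```

Pick a limit slightly above the deepest legitimate tool-call chain you expect, so normal runs never hit it.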
Recommended Hosting for LangGraph
For systems like LangGraph, we recommend high-performance VPS hosting. Hostinger offers dedicated setups for open-source tools with one-click installer scripts and 24/7 priority support.
Get Started on Hostinger
Explore Alternative AI Infrastructure
OpenClaw
OpenClaw is an open-source platform for autonomous AI workflows, data processing, and automation. It is production-ready, scalable, and suitable for enterprise and research deployments.
Ollama
Ollama is an open-source tool that allows you to run, create, and share large language models locally on your own hardware.