What Is LangGraph?
LangGraph is LangChain's library for building stateful, multi-step AI applications as graphs. While chains are linear (A → B → C), graphs allow branching, loops, and conditional logic, which are essential for complex AI agents and workflows.
Chains vs Graphs

Chains (Linear)
Input → Prompt → LLM → Output
Good for simple, predictable flows

Graphs (LangGraph)
Nodes → Conditional edges → Loops → State
Complex agents, multi-step reasoning, human-in-the-loop
Installation
npm install @langchain/langgraph @langchain/core @langchain/openai
Core Concepts
State
Shared data that flows through the graph and persists between steps

Nodes
Functions that process state; each node is one step in the workflow

Edges
Connections between nodes that define the flow order

Conditional Edges
Dynamic routing based on state, enabling branching logic
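To make the reducer idea concrete before diving into the real API, here is a small standalone sketch in plain TypeScript (no LangGraph imports) of how a state channel with a reducer merges each node's partial update into the accumulated state. `Channel` and `applyUpdates` are illustrative names for this sketch, not LangGraph APIs; they mimic what the graph runtime does with an `Annotation` reducer.

```typescript
// Simplified model of a state channel: a merge function plus a default value.
type Channel<T> = {
  reducer: (current: T, update: T) => T;
  default: () => T;
};

// A "messages" channel whose reducer appends updates instead of overwriting.
const messagesChannel: Channel<string[]> = {
  reducer: (current, update) => [...current, ...update],
  default: () => [],
};

// Apply a sequence of node updates the way a graph runtime would:
// start from the default, then fold each update in with the reducer.
function applyUpdates<T>(channel: Channel<T>, updates: T[]): T {
  return updates.reduce(channel.reducer, channel.default());
}

const finalMessages = applyUpdates(messagesChannel, [
  ["Explain Docker in simple terms"],
  ["Analysis: input analyzed."],
]);

console.log(finalMessages);
// ["Explain Docker in simple terms", "Analysis: input analyzed."]
```

Because each node returns only a partial update, the reducer decides how that update combines with existing state: append for message histories, overwrite for scalar fields like a step name.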
Building Your First Graph
import { StateGraph, Annotation, END, START } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
// 1. Define the state schema
const GraphState = Annotation.Root({
  messages: Annotation<string[]>({
    reducer: (current, update) => [...current, ...update],
    default: () => [],
  }),
  currentStep: Annotation<string>({
    reducer: (_, update) => update,
    default: () => "start",
  }),
});
// 2. Define node functions
async function analyzeInput(state: typeof GraphState.State) {
  const lastMessage = state.messages[state.messages.length - 1];
  return {
    messages: [`Analysis: The input "${lastMessage}" has been analyzed.`],
    currentStep: "analyzed",
  };
}

async function generateResponse(state: typeof GraphState.State) {
  const model = new ChatOpenAI({ modelName: "gpt-4" });
  const response = await model.invoke(state.messages.join("\n"));
  return {
    messages: [response.content as string],
    currentStep: "completed",
  };
}
// 3. Build the graph
const workflow = new StateGraph(GraphState)
  .addNode("analyze", analyzeInput)
  .addNode("generate", generateResponse)
  .addEdge(START, "analyze")
  .addEdge("analyze", "generate")
  .addEdge("generate", END);

// 4. Compile and run
const app = workflow.compile();

const result = await app.invoke({
  messages: ["Explain Docker in simple terms"],
});
console.log(result.messages);
Conditional Routing
Route to different nodes based on the current state; this is the key feature that makes graphs powerful.
const GraphState = Annotation.Root({
  input: Annotation<string>(),
  category: Annotation<string>(),
  response: Annotation<string>(),
});

// Classifier node
async function classify(state: typeof GraphState.State) {
  const model = new ChatOpenAI({ modelName: "gpt-4" });
  const result = await model.invoke(
    `Classify this as "technical", "general", or "complaint": ${state.input}`
  );
  return { category: (result.content as string).toLowerCase() };
}
// Different handler nodes
async function handleTechnical(state: typeof GraphState.State) {
  return { response: `Technical support for: ${state.input}` };
}

async function handleGeneral(state: typeof GraphState.State) {
  return { response: `General answer for: ${state.input}` };
}

async function handleComplaint(state: typeof GraphState.State) {
  return { response: `Escalating complaint: ${state.input}` };
}

// Route function
function routeByCategory(state: typeof GraphState.State) {
  if (state.category.includes("technical")) return "technical";
  if (state.category.includes("complaint")) return "complaint";
  return "general";
}

const workflow = new StateGraph(GraphState)
  .addNode("classify", classify)
  .addNode("technical", handleTechnical)
  .addNode("general", handleGeneral)
  .addNode("complaint", handleComplaint)
  .addEdge(START, "classify")
  .addConditionalEdges("classify", routeByCategory, {
    technical: "technical",
    general: "general",
    complaint: "complaint",
  })
  .addEdge("technical", END)
  .addEdge("general", END)
  .addEdge("complaint", END);

const app = workflow.compile();
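Because `routeByCategory` is a plain function of state, you can sanity-check the routing logic on its own, without compiling the graph or calling the model. A standalone sketch (the `RouteState` type and `sample` helper are illustrative, not part of LangGraph):

```typescript
// Same shape as the graph state in the example above.
type RouteState = { input: string; category: string; response: string };

// Same routing logic as the graph's route function; `includes` keeps the
// match tolerant of extra words in the classifier's raw output.
function routeByCategory(state: RouteState): "technical" | "general" | "complaint" {
  if (state.category.includes("technical")) return "technical";
  if (state.category.includes("complaint")) return "complaint";
  return "general";
}

// Helper to build a state with a given (already lowercased) category.
const sample = (category: string): RouteState => ({ input: "", category, response: "" });

console.log(routeByCategory(sample("technical question")));   // "technical"
console.log(routeByCategory(sample("this is a complaint.")));  // "complaint"
console.log(routeByCategory(sample("something else")));        // "general"
```

Note that `classify` lowercases the model output before storing it, so the route function only needs to match lowercase labels; any output matching neither label falls through to "general".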
Loops and Iteration
LangGraph supports loops: a node can route back to a previous node, enabling iterative refinement.
const GraphState = Annotation.Root({
  draft: Annotation<string>(),
  feedback: Annotation<string>(),
  iterations: Annotation<number>({
    reducer: (_, update) => update,
    default: () => 0,
  }),
  isApproved: Annotation<boolean>({
    reducer: (_, update) => update,
    default: () => false,
  }),
});

async function writeDraft(state: typeof GraphState.State) {
  const model = new ChatOpenAI({ modelName: "gpt-4" });
  const prompt = state.feedback
    ? `Improve this draft based on feedback: ${state.feedback}\n\nDraft: ${state.draft}`
    : "Write a short blog post about React hooks";
  const result = await model.invoke(prompt);
  return {
    draft: result.content as string,
    iterations: state.iterations + 1,
  };
}

async function reviewDraft(state: typeof GraphState.State) {
  const model = new ChatOpenAI({ modelName: "gpt-4" });
  const result = await model.invoke(
    `Review this draft. Say "APPROVED" if good, or give specific feedback:\n${state.draft}`
  );
  const content = result.content as string;
  return {
    feedback: content,
    isApproved: content.includes("APPROVED"),
  };
}

function shouldContinue(state: typeof GraphState.State) {
  if (state.isApproved || state.iterations >= 3) return "end";
  return "revise"; // Loop back to writing
}

const workflow = new StateGraph(GraphState)
  .addNode("write", writeDraft)
  .addNode("review", reviewDraft)
  .addEdge(START, "write")
  .addEdge("write", "review")
  .addConditionalEdges("review", shouldContinue, {
    revise: "write", // Loop back!
    end: END,
  });

const app = workflow.compile();
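To see how the termination condition behaves, here is a stubbed, model-free version of the write/review loop. `LoopState`, `runLoop`, and its fake reviewer (which approves once the iteration count reaches `approveAt`) are illustrative only; the point is how `shouldContinue` combines the approval flag with the iteration cap so the loop always terminates.

```typescript
// Just the two fields the termination check reads.
type LoopState = { iterations: number; isApproved: boolean };

// Same condition as in the graph: stop on approval OR after 3 iterations.
function shouldContinue(state: LoopState): "revise" | "end" {
  if (state.isApproved || state.iterations >= 3) return "end";
  return "revise";
}

// Simulate the write -> review cycle without any model calls.
function runLoop(approveAt: number): LoopState {
  let state: LoopState = { iterations: 0, isApproved: false };
  do {
    // "write": bump the iteration counter.
    state = { ...state, iterations: state.iterations + 1 };
    // "review" (stub): approve once we reach the approveAt-th draft.
    state = { ...state, isApproved: state.iterations >= approveAt };
  } while (shouldContinue(state) === "revise");
  return state;
}

console.log(runLoop(2));  // { iterations: 2, isApproved: true }
console.log(runLoop(99)); // { iterations: 3, isApproved: false } (hit the cap)
```

The iteration cap matters: an LLM reviewer may never say "APPROVED", and without the `iterations >= 3` guard the graph could cycle indefinitely (in practice LangGraph would eventually stop it at its recursion limit, but an explicit cap in your routing logic is cheaper and clearer).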
Key Takeaways
- LangGraph builds AI workflows as directed graphs with state, nodes, and edges
- Conditional edges enable dynamic routing based on LLM decisions
- Loops allow iterative refinement (write → review → revise → review...)
- State persists across nodes and can be customized with reducers
- Use LangGraph when chains are too simple: multi-step agents, human-in-the-loop, branching