TechLead
Lesson 3 of 18
5 min read
LangChain

Chains & Memory

Build sequential chains and add conversation memory to your applications

Understanding Chains

Chains are the core building blocks in LangChain. They allow you to combine multiple components (models, prompts, tools, other chains) into a single, reusable pipeline. LangChain uses the LangChain Expression Language (LCEL) to create chains declaratively.

⛓️ Chain Concepts

  • LCEL: Declarative way to compose chains with the .pipe() method (the JS equivalent of Python's | operator)
  • Runnables: Building blocks that can be chained together
  • Sequential: Output of one step becomes input of the next
  • Parallel: Run multiple chains simultaneously
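Before touching LangChain's API, the pipe idea itself can be sketched in a few lines of plain TypeScript (pipe, formatPrompt, and shout are illustrative names, not LangChain APIs):

```typescript
// A minimal sketch of what .pipe() does conceptually: each step's output
// becomes the next step's input. Plain functions here; LangChain's
// Runnables add invoke/stream/batch, retries, and tracing on top.
type Step<A, B> = (input: A) => B;

function pipe<A, B, C>(first: Step<A, B>, second: Step<B, C>): Step<A, C> {
  return (input) => second(first(input));
}

const formatPrompt: Step<{ topic: string }, string> = ({ topic }) =>
  `Tell me a joke about ${topic}`;
const shout: Step<string, string> = (s) => s.toUpperCase();

const miniChain = pipe(formatPrompt, shout);
console.log(miniChain({ topic: "coding" }));
// TELL ME A JOKE ABOUT CODING
```

LangChain's Runnables follow this same shape, which is why any Runnable can slot into any position of a chain.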

Basic Chain with LCEL

LCEL uses pipe syntax to connect prompts, models, and parsers into a single callable chain.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ modelName: "gpt-4" });

const prompt = ChatPromptTemplate.fromTemplate(
  "Tell me a {adjective} joke about {topic}"
);

// Create chain using LCEL pipe syntax
const chain = prompt.pipe(model).pipe(new StringOutputParser());

// Invoke the chain
const result = await chain.invoke({
  adjective: "funny",
  topic: "programming",
});

console.log(result);
// "Why do programmers prefer dark mode? Because light attracts bugs!"

Chain Components

Output Parsers

Output parsers turn raw model text into strings, JSON, or structured data.

import { StringOutputParser, JsonOutputParser } from "@langchain/core/output_parsers";

// String output
const stringChain = prompt.pipe(model).pipe(new StringOutputParser());

// JSON output
const jsonChain = prompt.pipe(model).pipe(new JsonOutputParser());

// Structured output with Zod
import { z } from "zod";
const schema = z.object({
  joke: z.string(),
  rating: z.number(),
});
const structuredChain = prompt.pipe(model.withStructuredOutput(schema));
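At its core, a JSON parser's job is small: find the JSON payload in the model's raw text and parse it. A minimal sketch (parseJson is an illustrative helper; the real JsonOutputParser also strips markdown code fences and handles partial JSON while streaming):

```typescript
// Locate the JSON object inside the model's reply and JSON.parse it.
function parseJson(raw: string): unknown {
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end === -1) throw new Error("no JSON object found");
  return JSON.parse(raw.slice(start, end + 1));
}

const reply = 'Here you go: {"joke": "...", "rating": 8}';
console.log(parseJson(reply));
// { joke: '...', rating: 8 }
```

This is also why withStructuredOutput is usually preferable for typed data: the model is constrained to the schema up front instead of being parsed after the fact.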

RunnableLambda for Custom Logic

RunnableLambda inserts custom transformations before or after model calls.

import { RunnableLambda } from "@langchain/core/runnables";

const processInput = new RunnableLambda({
  // Normalize the inputs before they reach the prompt template
  func: (input: { adjective: string; topic: string }) => ({
    ...input,
    topic: input.topic.toLowerCase().trim(),
  }),
});

const processOutput = new RunnableLambda({
  func: (output: string) => `Result: ${output}`,
});

const chain = processInput
  .pipe(prompt)
  .pipe(model)
  .pipe(new StringOutputParser())
  .pipe(processOutput);

Parallel Chains

Parallel chains run multiple branches at once and return a combined result.

import { RunnableParallel } from "@langchain/core/runnables";

const jokeChain = jokePrompt.pipe(model).pipe(new StringOutputParser());
const poemChain = poemPrompt.pipe(model).pipe(new StringOutputParser());

// Run both chains in parallel
const parallelChain = RunnableParallel.from({
  joke: jokeChain,
  poem: poemChain,
});

const result = await parallelChain.invoke({ topic: "coding" });
console.log(result);
// { joke: "...", poem: "..." }

Memory in LangChain

Memory allows your chains to remember previous interactions, which is essential for building conversational AI applications. LangChain provides several memory types.

🧠 Memory Types

  • Buffer Memory: Stores all messages in memory
  • Window Memory: Keeps only the last K exchanges
  • Summary Memory: Summarizes older messages
  • Vector Store Memory: Uses embeddings for semantic search
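The trade-off behind summary memory can be sketched without any LLM calls: once history exceeds a limit, the oldest messages collapse into a single summary entry (summarize here is a stub; in LangChain an LLM produces the actual summary):

```typescript
// Simplified stand-in for LangChain's message classes.
type Msg = { role: string; content: string };

// Stub summarizer; the real implementation calls a model.
function summarize(messages: Msg[]): string {
  return `Summary of ${messages.length} earlier messages`;
}

// Keep the most recent messages verbatim; fold the rest into a summary.
function compact(history: Msg[], keepRecent: number): Msg[] {
  if (history.length <= keepRecent) return history;
  const older = history.slice(0, history.length - keepRecent);
  const recent = history.slice(history.length - keepRecent);
  return [{ role: "system", content: summarize(older) }, ...recent];
}

const long: Msg[] = [
  { role: "human", content: "Hi" },
  { role: "ai", content: "Hello!" },
  { role: "human", content: "Tell me about LCEL" },
];
console.log(compact(long, 1));
// one summary entry plus the single most recent message
```

This keeps prompt size bounded at the cost of detail in older turns — the opposite trade from buffer memory, which keeps everything.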

Simple Message History

Store message history per session and wrap the chain so it automatically includes past context.

import { ChatMessageHistory } from "langchain/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { MessagesPlaceholder } from "@langchain/core/prompts";

// Create a prompt with history placeholder
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

// Store for message histories (keyed by session ID)
const messageHistories: Record<string, ChatMessageHistory> = {};

const getMessageHistory = (sessionId: string) => {
  if (!messageHistories[sessionId]) {
    messageHistories[sessionId] = new ChatMessageHistory();
  }
  return messageHistories[sessionId];
};

// Wrap chain with message history
const chainWithHistory = new RunnableWithMessageHistory({
  runnable: chain,
  getMessageHistory,
  inputMessagesKey: "input",
  historyMessagesKey: "history",
});

// Use with session ID
const response1 = await chainWithHistory.invoke(
  { input: "My name is Alice" },
  { configurable: { sessionId: "user-123" } }
);

const response2 = await chainWithHistory.invoke(
  { input: "What's my name?" },
  { configurable: { sessionId: "user-123" } }
);
// "Your name is Alice!"

Window Buffer Memory

Window memory keeps only the most recent messages to control context size.

import { BufferWindowMemory } from "langchain/memory";

// Keep only the last 5 exchanges (k counts human/AI pairs, not single messages)
const memory = new BufferWindowMemory({
  k: 5,
  returnMessages: true,
  memoryKey: "history",
});

// Add messages
await memory.saveContext(
  { input: "Hi, I'm learning LangChain" },
  { output: "Great! LangChain is powerful." }
);

// Load history
const history = await memory.loadMemoryVariables({});
console.log(history);

Conversation Chain Pattern

Wrap a chain in a class to manage history and expose a clean chat API for your app.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { HumanMessage, AIMessage } from "@langchain/core/messages";

class ConversationChain {
  private chain;
  private history: (HumanMessage | AIMessage)[] = [];

  constructor() {
    const model = new ChatOpenAI({ modelName: "gpt-4" });

    const prompt = ChatPromptTemplate.fromMessages([
      ["system", "You are a helpful coding assistant."],
      new MessagesPlaceholder("history"),
      ["human", "{input}"],
    ]);

    this.chain = prompt.pipe(model).pipe(new StringOutputParser());
  }

  async chat(input: string): Promise<string> {
    const response = await this.chain.invoke({
      history: this.history,
      input,
    });

    // Update history
    this.history.push(new HumanMessage(input));
    this.history.push(new AIMessage(response));

    return response;
  }

  clearHistory() {
    this.history = [];
  }
}

// Usage
const conversation = new ConversationChain();
await conversation.chat("What is React?");
await conversation.chat("How do I use hooks?");
await conversation.chat("Show me an example");

💡 Key Takeaways

  • LCEL chains components together with the .pipe() method
  • Output parsers transform model responses into desired formats
  • RunnableParallel enables concurrent chain execution
  • Memory maintains conversation context across interactions
  • Use session IDs to manage multiple conversation threads
