LangChain Models
LangChain provides a unified interface for working with different types of language models. Understanding how to configure and use these models effectively is fundamental to building powerful AI applications.
🤖 Model Types
- Chat Models: Optimized for conversational interactions (GPT-4, Claude, Gemini)
- LLMs: Text completion models (legacy, but still supported)
- Embedding Models: Convert text to vector representations
Chat Models
Chat models are the most commonly used model type. They work with messages instead of raw text.
Basic Usage
Create a chat model, send a system + human message, and read the response content.
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({
  modelName: "gpt-4",
  temperature: 0.7, // lower = more deterministic, higher = more creative
  maxTokens: 1000,
});

// Simple invocation
const response = await model.invoke([
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("What is TypeScript?"),
]);

console.log(response.content);
Message Types
Use system, human, and AI messages to structure conversations and preserve context.
import {
  SystemMessage, // Sets the AI's behavior/role
  HumanMessage,  // User input
  AIMessage,     // AI responses (for history)
} from "@langchain/core/messages";

const messages = [
  new SystemMessage("You are a senior React developer."),
  new HumanMessage("How do I use useEffect?"),
  new AIMessage("useEffect is a hook for side effects..."),
  new HumanMessage("Can you show me an example?"),
];

const response = await model.invoke(messages);
Model Configuration
Tune model behavior with temperature, token limits, retries, and streaming flags.
const model = new ChatOpenAI({
  modelName: "gpt-4-turbo-preview",
  temperature: 0.7, // Creativity (OpenAI accepts 0-2; most apps stay in 0-1)
  maxTokens: 2000,  // Max response length
  timeout: 60000,   // Timeout in ms
  maxRetries: 2,    // Retry on failure
  streaming: true,  // Enable streaming
});
Prompt Templates
Prompt templates help you create reusable, dynamic prompts with variables.
Basic Prompt Template
Prompt templates let you inject variables into a reusable prompt string.
import { PromptTemplate } from "@langchain/core/prompts";
const template = PromptTemplate.fromTemplate(
  "You are a {role}. Answer this question: {question}"
);

const prompt = await template.format({
  role: "JavaScript expert",
  question: "What are closures?",
});

console.log(prompt);
// "You are a JavaScript expert. Answer this question: What are closures?"
Chat Prompt Templates
Chat prompts define role-based messages and format them into a ready-to-send message array.
import { ChatPromptTemplate } from "@langchain/core/prompts";
const chatPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a {specialty} expert. Be concise and helpful."],
  ["human", "{question}"],
]);

const messages = await chatPrompt.formatMessages({
  specialty: "React",
  question: "How do I manage state?",
});

const response = await model.invoke(messages);
Message Placeholders
Placeholders make it easy to insert conversation history or other dynamic message lists.
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  new MessagesPlaceholder("history"), // For conversation history
  ["human", "{input}"],
]);

const messages = await prompt.formatMessages({
  history: [
    new HumanMessage("Hi, I'm learning React"),
    new AIMessage("Great! React is a powerful library."),
  ],
  input: "What should I learn first?",
});
Streaming Responses
Enable streaming to receive tokens progressively and improve perceived latency.
const model = new ChatOpenAI({
  modelName: "gpt-4",
  streaming: true,
});

// Stream tokens as they arrive
const stream = await model.stream([
  new HumanMessage("Write a short poem about coding"),
]);

for await (const chunk of stream) {
  // content is typed as MessageContent; text chunks carry strings
  process.stdout.write(chunk.content as string);
}
Structured Output
Get structured, validated JSON back from the model by defining a Zod schema; the response is parsed against the schema before it reaches your code.
import { z } from "zod";
// Define the output schema
const responseSchema = z.object({
  answer: z.string().describe("The answer to the question"),
  confidence: z.number().describe("Confidence score 0-100"),
  sources: z.array(z.string()).describe("Related topics"),
});

const structuredModel = model.withStructuredOutput(responseSchema);

const response = await structuredModel.invoke(
  "What is the capital of France?"
);
console.log(response);
// { answer: "Paris", confidence: 100, sources: ["Geography", "European capitals"] }
Binding Tools to Models
Tools let the model call functions with structured inputs, then use the results in its response.
import { tool } from "@langchain/core/tools";
import { z } from "zod";
// Define a tool
const weatherTool = tool(
  async ({ city }) => {
    // Simulate an API call
    return `The weather in ${city} is sunny, 72°F`;
  },
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    schema: z.object({
      city: z.string().describe("The city name"),
    }),
  }
);

// Bind the tool to the model
const modelWithTools = model.bindTools([weatherTool]);

const response = await modelWithTools.invoke(
  "What's the weather in San Francisco?"
);
💡 Key Takeaways
- Chat models work with message objects (System, Human, AI)
- Temperature controls creativity vs. determinism
- Prompt templates create reusable, dynamic prompts
- Use structured output for reliable JSON responses
- Streaming provides better UX for long responses
📚 Learn More
- Chat Models Documentation → Complete guide to using chat models.
- Prompt Templates Guide → Advanced prompt engineering techniques.