What is RAG?
Retrieval Augmented Generation (RAG) is a technique that enhances LLM responses by retrieving relevant information from external knowledge sources. Instead of relying solely on the model's training data, RAG allows your AI to access and use your own documents, databases, or APIs.
RAG Pipeline
- Load: Import documents from various sources
- Split: Break documents into smaller chunks
- Embed: Convert text to vector representations
- Store: Save embeddings in a vector database
- Retrieve: Find relevant chunks for a query
- Generate: Use retrieved context to generate answers
Document Loaders
LangChain provides loaders for many document types:
Loaders normalize files and web pages into a consistent document format for downstream steps.
// Text files
import { TextLoader } from "langchain/document_loaders/fs/text";
const textLoader = new TextLoader("./docs/readme.txt");
// PDF files
import { PDFLoader } from "langchain/document_loaders/fs/pdf";
const pdfLoader = new PDFLoader("./docs/manual.pdf");
// Web pages
import { CheerioWebBaseLoader } from "langchain/document_loaders/web/cheerio";
const webLoader = new CheerioWebBaseLoader("https://example.com/docs");
// JSON files
import { JSONLoader } from "langchain/document_loaders/fs/json";
const jsonLoader = new JSONLoader("./data/products.json");
// Load documents
const docs = await pdfLoader.load();
console.log(docs.length, "documents loaded");
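If your knowledge base is a folder of mixed files, LangChain also provides a DirectoryLoader that maps file extensions to the loaders above. A minimal sketch, reusing the TextLoader and PDFLoader imports from this section (the ./docs path and extension mapping are illustrative):
import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
// Each extension is mapped to a factory that builds the right loader for that file
const dirLoader = new DirectoryLoader("./docs", {
  ".txt": (path) => new TextLoader(path),
  ".pdf": (path) => new PDFLoader(path),
});
const allDocs = await dirLoader.load();
console.log(allDocs.length, "documents loaded from ./docs");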
Text Splitting
Split documents into chunks that fit within model context limits:
Smaller chunks improve retrieval accuracy and keep prompts within token limits.
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
const splitter = new RecursiveCharacterTextSplitter({
chunkSize: 1000, // Max characters per chunk
chunkOverlap: 200, // Overlap between chunks
separators: ["\n\n", "\n", " ", ""], // Split priorities
});
const docs = await loader.load(); // any loader from the previous section
const splitDocs = await splitter.splitDocuments(docs);
console.log(`Split into ${splitDocs.length} chunks`);
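The same splitter can also chunk raw strings that didn't come from a loader. A small sketch using createDocuments, which wraps each chunk in a Document (the sample text is illustrative):
// Chunk raw text directly; each chunk becomes a Document
const rawChunks = await splitter.createDocuments([
  "Paste in any long text that needs to be chunked before embedding...",
]);
console.log(rawChunks.length, "chunks created");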
Embeddings
Convert text to vectors for semantic search:
Embeddings turn text into numeric vectors so you can compare similarity and retrieve relevant context.
import { OpenAIEmbeddings } from "@langchain/openai";
const embeddings = new OpenAIEmbeddings({
modelName: "text-embedding-3-small",
});
// Embed a single text
const vector = await embeddings.embedQuery("What is LangChain?");
console.log(vector.length); // 1536 dimensions
// Embed multiple texts
const vectors = await embeddings.embedDocuments([
"LangChain is a framework",
"It helps build AI apps",
]);
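Under the hood, "semantic similarity" is typically cosine similarity between these vectors. A quick sketch of the math using the two vectors embedded above (plain arithmetic, no library calls):
// Cosine similarity: dot product divided by the product of the vector lengths
const cosineSimilarity = (a: number[], b: number[]) => {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const normA = Math.sqrt(a.reduce((sum, v) => sum + v * v, 0));
  const normB = Math.sqrt(b.reduce((sum, v) => sum + v * v, 0));
  return dot / (normA * normB);
};
console.log(cosineSimilarity(vectors[0], vectors[1])); // closer to 1 means more similar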
Vector Stores
Store and search embeddings efficiently:
In-Memory Vector Store
The in-memory store is great for prototypes and small datasets without external infrastructure.
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
// Create vector store from documents
const vectorStore = await MemoryVectorStore.fromDocuments(
splitDocs,
new OpenAIEmbeddings()
);
// Search for similar documents
const results = await vectorStore.similaritySearch(
"How do I use chains?",
4 // Return top 4 results
);
console.log(results);
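If you also want to see how strong each match is, most vector stores (including MemoryVectorStore) expose similaritySearchWithScore, which returns [document, score] pairs. A short sketch, assuming your store implements it:
// Same search, but each result comes back with its similarity score
const scoredResults = await vectorStore.similaritySearchWithScore(
  "How do I use chains?",
  4
);
for (const [doc, score] of scoredResults) {
  console.log(score.toFixed(3), doc.pageContent.slice(0, 80));
}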
Using Pinecone
Use a managed vector database like Pinecone for scalability and persistence.
import { Pinecone } from "@pinecone-database/pinecone";
import { PineconeStore } from "@langchain/pinecone";
const pinecone = new Pinecone();
const index = pinecone.index("langchain-docs");
// Create store
const vectorStore = await PineconeStore.fromDocuments(
splitDocs,
new OpenAIEmbeddings(),
{ pineconeIndex: index }
);
// Or connect to existing
const existingStore = await PineconeStore.fromExistingIndex(
new OpenAIEmbeddings(),
{ pineconeIndex: index }
);
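Once connected, you can keep growing the index over time. A brief sketch using addDocuments, which embeds and upserts new chunks (moreSplitDocs is a placeholder for whatever you split next):
// moreSplitDocs: placeholder for newly split chunks (e.g. from another loader run)
await existingStore.addDocuments(moreSplitDocs);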
Complete RAG Chain
This full example wires up embeddings, a retriever, and a prompt to answer questions using your data.
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence, RunnablePassthrough } from "@langchain/core/runnables";
import type { Document } from "@langchain/core/documents";
// Setup
const model = new ChatOpenAI({ modelName: "gpt-4" });
const embeddings = new OpenAIEmbeddings();
// Create vector store from your documents
const vectorStore = await MemoryVectorStore.fromDocuments(
splitDocs,
embeddings
);
// Create retriever
const retriever = vectorStore.asRetriever({
k: 4, // Number of documents to retrieve
});
// RAG prompt
const ragPrompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the following context:
{context}
Question: {question}
If the answer is not in the context, say "I don't have enough information to answer that."
`);
// Helper to format documents
const formatDocs = (docs: Document[]) =>
docs.map(doc => doc.pageContent).join("\n\n");
// Create RAG chain
const ragChain = RunnableSequence.from([
{
context: retriever.pipe(formatDocs),
question: new RunnablePassthrough(),
},
ragPrompt,
model,
new StringOutputParser(),
]);
// Ask questions
const answer = await ragChain.invoke(
"How do I create a chain in LangChain?"
);
console.log(answer);
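Because the chain is a runnable, you can also stream the answer instead of waiting for invoke to finish. With StringOutputParser at the end of the sequence, each streamed chunk is a plain string:
// Stream the answer token by token as it is generated
const stream = await ragChain.stream("How do I create a chain in LangChain?");
for await (const chunk of stream) {
  process.stdout.write(chunk);
}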
Advanced: Conversational RAG
Conversational RAG rewrites user questions using chat history, then retrieves and answers with context.
import { createHistoryAwareRetriever } from "langchain/chains/history_aware_retriever";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { MessagesPlaceholder } from "@langchain/core/prompts";
// Contextualize question based on chat history
const contextualizePrompt = ChatPromptTemplate.fromMessages([
["system", "Given chat history and a question, reformulate it to be standalone."],
new MessagesPlaceholder("chat_history"),
["human", "{input}"],
]);
const historyAwareRetriever = await createHistoryAwareRetriever({
llm: model,
retriever,
rephrasePrompt: contextualizePrompt,
});
// Answer with context
const answerPrompt = ChatPromptTemplate.fromMessages([
["system", "Answer based on context: {context}"],
new MessagesPlaceholder("chat_history"),
["human", "{input}"],
]);
const documentChain = await createStuffDocumentsChain({
llm: model,
prompt: answerPrompt,
});
const conversationalRagChain = await createRetrievalChain({
retriever: historyAwareRetriever,
combineDocsChain: documentChain,
});
// Use with history
const response = await conversationalRagChain.invoke({
chat_history: [],
input: "What is LangChain?",
});
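The result is an object whose answer field holds the generated response. To carry the conversation forward, append the previous turn to chat_history as message objects; a sketch using HumanMessage and AIMessage from @langchain/core/messages (the follow-up question is illustrative):
import { HumanMessage, AIMessage } from "@langchain/core/messages";
console.log(response.answer);
// Pass the prior turn back in so the retriever can resolve follow-up questions
const followUp = await conversationalRagChain.invoke({
  chat_history: [
    new HumanMessage("What is LangChain?"),
    new AIMessage(response.answer),
  ],
  input: "How does it handle retrieval?",
});
console.log(followUp.answer);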
Key Takeaways
- RAG connects LLMs to your own knowledge base
- Document loaders support PDFs, web pages, JSON, and more
- Text splitters create optimal chunk sizes for retrieval
- Vector stores enable semantic search over embeddings
- Combine retrieval with prompts for context-aware answers
Learn More
- RAG Tutorial: step-by-step guide to building RAG applications.
- Retrieval Documentation: all document loaders, splitters, and vector stores.