AI SDK with Next.js
The Vercel AI SDK is designed to work seamlessly with Next.js, supporting both the App Router and Pages Router. This guide focuses on the App Router with Route Handlers and Server Actions.
Next.js Integration Options
- Route Handlers: API routes in app/api/ directory
- Server Actions: Server functions called from components
- Server Components: Generate AI content during SSR
- Middleware: AI-powered request processing
Project Setup
# Create Next.js app
npx create-next-app@latest my-ai-app --typescript --tailwind --app
# Install AI SDK
cd my-ai-app
npm install ai @ai-sdk/openai zod
# Add environment variable
echo "OPENAI_API_KEY=your-key" > .env.local
Route Handler Approach
Create API Route
// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4-turbo'),
    system: 'You are a helpful assistant.',
    messages,
  });

  return result.toDataStreamResponse();
}
Client Component
// app/chat/page.tsx
'use client';
import { useChat } from 'ai/react';

export default function ChatPage() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div className="max-w-2xl mx-auto p-4">
      <h1 className="text-2xl font-bold mb-4">AI Chat</h1>
      <div className="space-y-4 mb-4">
        {messages.map((m) => (
          <div key={m.id} className="p-4 rounded-lg bg-gray-100">
            <strong>{m.role}:</strong> {m.content}
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          className="flex-1 p-2 border rounded"
          placeholder="Ask something..."
        />
        <button className="px-4 py-2 bg-black text-white rounded">
          Send
        </button>
      </form>
    </div>
  );
}
Server Actions Approach
// app/actions.ts
'use server';
import { openai } from '@ai-sdk/openai';
import { generateText, streamText } from 'ai';
import { createStreamableValue } from 'ai/rsc';

// Non-streaming action
export async function generateResponse(prompt: string) {
  const { text } = await generateText({
    model: openai('gpt-4-turbo'),
    prompt,
  });
  return text;
}

// Streaming action
export async function streamResponse(prompt: string) {
  const stream = createStreamableValue('');

  (async () => {
    const result = streamText({
      model: openai('gpt-4-turbo'),
      prompt,
    });

    // Accumulate deltas so each update carries the full text so far;
    // updating with a bare delta would leave only the last chunk visible.
    let text = '';
    for await (const delta of result.textStream) {
      text += delta;
      stream.update(text);
    }
    stream.done();
  })();

  return stream.value;
}
Using Server Actions
// app/generate/page.tsx
'use client';
import { useState } from 'react';
import { readStreamableValue } from 'ai/rsc';
import { streamResponse } from '../actions';

export default function GeneratePage() {
  const [streamedValue, setStreamedValue] = useState('');

  const handleGenerate = async () => {
    const stream = await streamResponse('Write a haiku about coding');
    // readStreamableValue yields each updated value from the server
    for await (const value of readStreamableValue(stream)) {
      setStreamedValue(value ?? '');
    }
  };

  return (
    <div className="p-4">
      <button onClick={handleGenerate}>Generate Haiku</button>
      <p>{streamedValue}</p>
    </div>
  );
}
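The loop in handleGenerate consumes successive snapshots of the streamed value and keeps only the newest one. That consumption pattern can be sketched framework-free; here mockStream and its yielded strings are stand-ins for readStreamableValue and the server's updates, not part of the AI SDK:

```typescript
// Stand-in for readStreamableValue: yields successive snapshots of the text.
async function* mockStream(): AsyncGenerator<string> {
  const snapshots = ['Code', 'Code flows', 'Code flows like water'];
  for (const s of snapshots) {
    yield s;
  }
}

async function consume(): Promise<string> {
  let latest = '';
  // Mirrors the for-await loop in GeneratePage: keep only the newest value.
  for await (const value of mockStream()) {
    latest = value ?? '';
  }
  return latest;
}

consume().then((text) => console.log(text)); // logs the final snapshot
```

Because each snapshot replaces the previous one in state, React re-renders with progressively longer text, producing the typewriter effect.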
Structured Data with Server Actions
// app/actions.ts
'use server';
import { openai } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

const RecipeSchema = z.object({
  name: z.string(),
  ingredients: z.array(z.string()),
  steps: z.array(z.string()),
  prepTime: z.number(),
  cookTime: z.number(),
});

export async function generateRecipe(dish: string) {
  const { object } = await generateObject({
    model: openai('gpt-4-turbo'),
    schema: RecipeSchema,
    prompt: `Generate a recipe for ${dish}`,
  });
  return object;
}
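Because generateObject validates the model's output against the schema, the returned object is already typed and needs no manual parsing on the caller's side. As a rough sketch, the result has the TypeScript shape below (the sample values are made up, not real model output):

```typescript
// Mirrors RecipeSchema as a plain TypeScript type.
interface Recipe {
  name: string;
  ingredients: string[];
  steps: string[];
  prepTime: number; // minutes
  cookTime: number; // minutes
}

// A made-up example of what generateRecipe('pancakes') might return.
const sample: Recipe = {
  name: 'Pancakes',
  ingredients: ['flour', 'milk', 'eggs'],
  steps: ['Mix ingredients', 'Cook on a griddle'],
  prepTime: 10,
  cookTime: 15,
};

// Fields can be used directly, e.g. computing total time:
const totalTime = sample.prepTime + sample.cookTime;
console.log(totalTime); // 25
```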
Edge Runtime
// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Use Edge runtime for faster cold starts
export const runtime = 'edge';
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4-turbo'),
    messages,
  });

  return result.toDataStreamResponse();
}
Middleware Integration
// middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  // Add rate limiting, auth checks, etc.
  const apiKey = request.headers.get('x-api-key');

  if (request.nextUrl.pathname.startsWith('/api/chat')) {
    if (!apiKey) {
      return NextResponse.json(
        { error: 'API key required' },
        { status: 401 }
      );
    }
  }

  return NextResponse.next();
}

export const config = {
  matcher: '/api/:path*',
};
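Beyond key checks, middleware is a natural place for rate limiting. A minimal fixed-window limiter the middleware could call might look like the sketch below; names, window size, and limit are all made up, and production deployments usually need a shared store such as Redis, since middleware instances don't share memory:

```typescript
// Minimal fixed-window rate limiter keyed by a client identifier (e.g. API key).
const WINDOW_MS = 60_000; // 1-minute window (arbitrary choice for this sketch)
const MAX_REQUESTS = 20; // allowed requests per window

const hits = new Map<string, { count: number; windowStart: number }>();

export function isAllowed(key: string, now: number = Date.now()): boolean {
  const entry = hits.get(key);
  // Start a fresh window if this key is new or its window has expired.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(key, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}
```

In the middleware above, a key that fails isAllowed would map to a 429 response instead of NextResponse.next().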
Environment Variables
# .env.local
OPENAI_API_KEY=sk-...
# For multiple providers
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_GENERATIVE_AI_API_KEY=...
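When several provider keys may be configured, a small helper can detect which one is available and fall through in a preferred order. This is a hypothetical helper, not part of the AI SDK; it only checks the same env var names the SDK providers read:

```typescript
// Maps provider names to the env vars their AI SDK packages read.
const PROVIDER_KEYS: Record<string, string> = {
  openai: 'OPENAI_API_KEY',
  anthropic: 'ANTHROPIC_API_KEY',
  google: 'GOOGLE_GENERATIVE_AI_API_KEY',
};

// Returns the first provider whose key is present, or null if none are set.
export function pickProvider(
  env: Record<string, string | undefined>
): string | null {
  for (const [provider, key] of Object.entries(PROVIDER_KEYS)) {
    if (env[key]) return provider;
  }
  return null;
}

console.log(pickProvider({ ANTHROPIC_API_KEY: 'sk-ant-test' })); // "anthropic"
```

Passing the env record as a parameter (rather than reading process.env directly) keeps the helper pure and easy to test.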
Deployment Considerations
When deploying to Vercel, set maxDuration based on your plan:
Hobby (10s), Pro (60s), Enterprise (900s). For long-running AI tasks,
consider using background jobs or queues.
Key Takeaways
- Route Handlers are best for streaming chat interfaces
- Server Actions work well for single AI operations
- Use the Edge runtime for faster cold starts globally
- Set an appropriate maxDuration for your deployment
- Keep API keys server-side only
Learn More
- Next.js App Router Guide: the official guide for Next.js App Router integration.
- AI SDK RSC: the React Server Components integration guide.