February 8, 2026 · 9 min read

Getting Started with Vercel AI SDK: Build AI Features in Next.js

A hands-on guide to the Vercel AI SDK — the fastest way to add streaming AI responses, chat interfaces, and tool-calling to your Next.js app. From useChat to multi-provider setups.

By TechLead
Tags: Vercel AI SDK · Next.js · AI · Streaming · React

The Vercel AI SDK is the simplest way to add AI-powered features to a Next.js application. It provides React hooks for streaming responses, built-in support for multiple AI providers, and first-class TypeScript support. Let's build a complete AI chat app from scratch.

1. Why Vercel AI SDK?

Compared to calling OpenAI's API directly, the Vercel AI SDK gives you:

  • Streaming out of the box: Tokens appear in real-time, no manual SSE parsing.
  • React hooks: useChat and useCompletion manage state, loading, and errors automatically.
  • Provider agnostic: Swap between OpenAI, Anthropic, Google, and open-source models with a single config change.
  • Tool calling: Let the LLM invoke functions and return structured data via tool definitions.

2. Installation

npm install ai @ai-sdk/openai
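The OpenAI provider reads your API key from the OPENAI_API_KEY environment variable by default, so add it to .env.local (the Next.js convention for local secrets):

```shell
# .env.local (never commit this file)
OPENAI_API_KEY=sk-...
```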

3. Creating the API Route

Create a route handler that streams responses from the LLM:

// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    system: "You are a helpful assistant.",
    messages,
  });

  return result.toDataStreamResponse();
}
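Everything in the messages array counts toward the model's context window, and the array grows with every turn. A minimal sketch of capping the history before forwarding it to streamText (trimHistory is a hypothetical helper, not part of the SDK):

```typescript
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

// Keep only the most recent `limit` messages so long conversations
// don't overflow the model's context window.
function trimHistory(messages: ChatMessage[], limit = 20): ChatMessage[] {
  return messages.length <= limit ? messages : messages.slice(-limit);
}
```

In the route handler you would then pass `messages: trimHistory(messages)` instead of the raw array. A smarter variant might always keep the system message and summarize dropped turns, but a hard cap is often enough to start.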

4. Building the Chat UI

Use the useChat hook for a complete chat experience with just a few lines:

"use client";
import { useChat } from "ai/react";

export default function ChatPage() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat();

  return (
    <div className="max-w-2xl mx-auto p-4">
      <div className="space-y-4 mb-4">
        {messages.map((m) => (
          <div key={m.id} className={m.role === "user" ? "text-right" : "text-left"}>
            <span className="font-bold">{m.role === "user" ? "You" : "AI"}:</span>
            <p>{m.content}</p>
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input value={input} onChange={handleInputChange} placeholder="Ask me anything..." className="flex-1 border rounded px-3 py-2" />
        <button type="submit" disabled={isLoading} className="bg-primary text-white px-4 py-2 rounded">Send</button>
      </form>
    </div>
  );
}

5. Adding Tool Calling

Let the AI call functions to retrieve live data:

import { openai } from "@ai-sdk/openai";
import { streamText, tool } from "ai";
import { z } from "zod";

const result = streamText({
  model: openai("gpt-4o"),
  messages,
  tools: {
    getWeather: tool({
      description: "Get the current weather for a city",
      parameters: z.object({
        city: z.string().describe("The city name"),
      }),
      execute: async ({ city }) => {
        // Placeholder endpoint: substitute your real weather API.
        const res = await fetch(
          `https://api.weather.com/${encodeURIComponent(city)}`
        );
        return res.json();
      },
    }),
  },
});
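Conceptually, the model returns a tool name plus JSON arguments, the SDK validates the arguments against your zod schema, and then runs the matching execute function. A stripped-down sketch of that dispatch step (the toolbox and call shapes here are simplified assumptions, not the SDK's internal types):

```typescript
type ToolCall = { toolName: string; args: Record<string, unknown> };

// Simplified stand-ins for tool `execute` functions.
const toolbox: Record<string, (args: any) => Promise<unknown>> = {
  getWeather: async ({ city }) => ({ city, tempC: 21 }), // mocked result
};

// What the SDK does conceptually: route the model's tool call
// to the matching execute function and collect the result.
async function runToolCall(call: ToolCall): Promise<unknown> {
  const tool = toolbox[call.toolName];
  if (!tool) throw new Error(`Unknown tool: ${call.toolName}`);
  return tool(call.args);
}
```

The real SDK also streams the tool result back into the conversation so the model can compose a final answer; check the tool-calling section of the docs for the multi-step options your SDK version supports.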

6. Switching Providers

One of the SDK's best features is provider flexibility. Swap to Anthropic without changing your UI:

import { anthropic } from "@ai-sdk/anthropic";

const result = streamText({
  model: anthropic("claude-sonnet-4-20250514"),
  messages,
});
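You can push this further and pick the provider per deployment via an environment variable. A sketch of the selection logic (the AI_PROVIDER variable name is an illustrative choice, not an SDK convention):

```typescript
// Map a deployment-level setting to a model id string.
// The variable name and model ids are illustrative choices.
function pickModelId(provider: string | undefined): string {
  switch (provider) {
    case "anthropic":
      return "claude-sonnet-4-20250514";
    case "openai":
    default:
      return "gpt-4o";
  }
}
```

In the route handler you would then wrap the id with the matching provider factory, e.g. `anthropic(pickModelId(process.env.AI_PROVIDER))` or `openai(...)`, while the UI code stays untouched.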

7. Next Steps

Once you have the basics working, explore structured output with generateObject and streamObject, multi-step tool calls, and the other providers in the SDK's registry. The official AI SDK documentation covers each of these in depth.
