TechLead
Lesson 7 of 8
5 min read
AI SDK

Multiple AI Providers

Work with OpenAI, Anthropic, Google, and other AI providers using a unified API

Working with Multiple AI Providers

One of the most powerful features of the Vercel AI SDK is its unified provider interface. You can switch between OpenAI, Anthropic, Google, Mistral, and many other providers with minimal code changes.

Supported Providers

Major Providers

  • OpenAI (GPT-4, GPT-4-turbo, GPT-3.5)
  • Anthropic (Claude 3.5, Claude 3 Opus, Sonnet, Haiku)
  • Google (Gemini Pro, Gemini Ultra)
  • Mistral (Mistral Large, Medium, Small)

Additional Providers

  • Cohere (Command, Command-Light)
  • Amazon Bedrock
  • Azure OpenAI
  • Groq, Perplexity, Fireworks, and more

Installing Providers

# Install the providers you need
npm install @ai-sdk/openai
npm install @ai-sdk/anthropic
npm install @ai-sdk/google
npm install @ai-sdk/mistral

# Or install multiple at once
npm install @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google

Provider Setup

OpenAI

import { openai } from '@ai-sdk/openai';

// Uses OPENAI_API_KEY environment variable
const model = openai('gpt-4-turbo');

// Or configure manually
import { createOpenAI } from '@ai-sdk/openai';

const openaiClient = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  organization: 'org-...',  // optional
  baseURL: 'https://custom-endpoint.com/v1', // optional
});

const customModel = openaiClient('gpt-4-turbo');

Anthropic (Claude)

import { anthropic } from '@ai-sdk/anthropic';

// Uses ANTHROPIC_API_KEY environment variable
const model = anthropic('claude-3-5-sonnet-20241022');

// Available models
const claude35Sonnet = anthropic('claude-3-5-sonnet-20241022');
const claude3Opus = anthropic('claude-3-opus-20240229');
const claude3Haiku = anthropic('claude-3-haiku-20240307');

Google (Gemini)

import { google } from '@ai-sdk/google';

// Uses GOOGLE_GENERATIVE_AI_API_KEY environment variable
const model = google('gemini-1.5-pro');

// Available models
const geminiPro = google('gemini-1.5-pro');
const geminiFlash = google('gemini-1.5-flash');

Mistral

import { mistral } from '@ai-sdk/mistral';

// Uses MISTRAL_API_KEY environment variable
const model = mistral('mistral-large-latest');

// Available models
const mistralLarge = mistral('mistral-large-latest');
const mistralMedium = mistral('mistral-medium-latest');
const mistralSmall = mistral('mistral-small-latest');

Unified API Usage

The same code works with any provider:

import { streamText, type CoreMessage } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

// Same interface for all providers
async function generateResponse(
  provider: 'openai' | 'anthropic' | 'google',
  messages: CoreMessage[]
) {
  const models = {
    openai: openai('gpt-4-turbo'),
    anthropic: anthropic('claude-3-5-sonnet-20241022'),
    google: google('gemini-1.5-pro'),
  };

  const result = streamText({
    model: models[provider],
    messages,
  });

  return result.toDataStreamResponse();
}

Dynamic Provider Selection

// app/api/chat/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

const providers = {
  'gpt-4-turbo': openai('gpt-4-turbo'),
  'gpt-3.5-turbo': openai('gpt-3.5-turbo'),
  'claude-3-5-sonnet': anthropic('claude-3-5-sonnet-20241022'),
  'claude-3-opus': anthropic('claude-3-opus-20240229'),
  'gemini-pro': google('gemini-1.5-pro'),
};

export async function POST(req: Request) {
  const { messages, model: modelId } = await req.json();

  const model = providers[modelId as keyof typeof providers];

  if (!model) {
    return new Response('Invalid model', { status: 400 });
  }

  const result = streamText({
    model,
    messages,
  });

  return result.toDataStreamResponse();
}
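
Since the model ID arrives from the client, a small type guard keeps the lookup type-safe before indexing into the providers map. A minimal sketch (the keys mirror the map above; the helper name is illustrative):

```typescript
// Illustrative sketch: a type guard for client-supplied model IDs.
// The keys mirror the `providers` map in the route above.
const supportedModels = [
  'gpt-4-turbo',
  'gpt-3.5-turbo',
  'claude-3-5-sonnet',
  'claude-3-opus',
  'gemini-pro',
] as const;

type ModelId = (typeof supportedModels)[number];

function isSupportedModel(id: string): id is ModelId {
  return (supportedModels as readonly string[]).includes(id);
}

console.log(isSupportedModel('gpt-4-turbo'));   // true
console.log(isSupportedModel('made-up-model')); // false
```

After narrowing with `isSupportedModel`, TypeScript knows the ID is a valid key, so the `as keyof typeof providers` cast becomes unnecessary.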

Client-Side Model Selector

'use client';

import { useChat } from 'ai/react';
import { useState } from 'react';

const models = [
  { id: 'gpt-4-turbo', name: 'GPT-4 Turbo', provider: 'OpenAI' },
  { id: 'claude-3-5-sonnet', name: 'Claude 3.5 Sonnet', provider: 'Anthropic' },
  { id: 'gemini-pro', name: 'Gemini Pro', provider: 'Google' },
];

export default function Chat() {
  const [selectedModel, setSelectedModel] = useState(models[0].id);

  const { messages, input, handleInputChange, handleSubmit } = useChat({
    body: { model: selectedModel },
  });

  return (
    <div>
      <select
        value={selectedModel}
        onChange={(e) => setSelectedModel(e.target.value)}
        className="p-2 border rounded mb-4"
      >
        {models.map((model) => (
          <option key={model.id} value={model.id}>
            {model.name} ({model.provider})
          </option>
        ))}
      </select>

      {messages.map((m) => (
        <div key={m.id}>{m.content}</div>
      ))}

      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}

Environment Variables

# .env.local

# OpenAI
OPENAI_API_KEY=sk-...

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Google
GOOGLE_GENERATIVE_AI_API_KEY=...

# Mistral
MISTRAL_API_KEY=...

# Azure OpenAI (optional)
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
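
A missing key usually surfaces as an opaque provider error at request time, so it can help to fail fast at startup. A minimal sketch (the helper name is illustrative):

```typescript
// Illustrative sketch: fail fast if a required API key is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Call once at startup for each provider you actually use, e.g.:
// requireEnv('OPENAI_API_KEY');
// requireEnv('ANTHROPIC_API_KEY');
```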

Provider-Specific Features

// OpenAI structured output: streamText has no response_format option;
// use generateObject (or streamObject) from 'ai' with a schema instead
import { generateObject } from 'ai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4-turbo'),
  schema: z.object({ summary: z.string() }),
  messages,
});

// Anthropic with system prompt
const result = streamText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  system: 'You are a helpful assistant.',
  messages,
});

// Google with safety settings
import { google } from '@ai-sdk/google';

const model = google('gemini-1.5-pro', {
  safetySettings: [
    {
      category: 'HARM_CATEGORY_HATE_SPEECH',
      threshold: 'BLOCK_LOW_AND_ABOVE',
    },
  ],
});

Fallback Strategy

import { streamText, type CoreMessage } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

async function generateWithFallback(messages: CoreMessage[]) {
  const providers = [
    { model: openai('gpt-4-turbo'), name: 'OpenAI' },
    { model: anthropic('claude-3-5-sonnet-20241022'), name: 'Anthropic' },
  ];

  for (const provider of providers) {
    try {
      const result = streamText({
        model: provider.model,
        messages,
      });

      // Note: streamText defers errors until the stream is consumed,
      // so this catch only covers synchronous setup failures;
      // mid-stream errors will not trigger the fallback.
      return result.toDataStreamResponse();
    } catch (error) {
      console.error(`${provider.name} failed:`, error);
      // Continue to next provider
    }
  }

  throw new Error('All providers failed');
}
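
The loop above can be factored into a generic helper that tries async factories in order, which is easy to unit-test with stubs. A sketch (names are illustrative; with real streams you would still need to consume the stream to surface mid-stream errors):

```typescript
// Illustrative sketch: try each provider in order, returning the first
// successful result and falling through to the next on error.
async function withFallback<T>(
  attempts: Array<{ name: string; run: () => Promise<T> }>
): Promise<T> {
  for (const attempt of attempts) {
    try {
      return await attempt.run();
    } catch (error) {
      console.error(`${attempt.name} failed:`, error);
      // Continue to the next provider
    }
  }
  throw new Error('All providers failed');
}
```

With the AI SDK, each `run` would wrap a provider call such as a `generateText` invocation.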

Cost Optimization

// Route requests based on complexity
function selectModel(message: string) {
  const isComplex = message.length > 500 ||
    message.includes('analyze') ||
    message.includes('explain in detail');

  if (isComplex) {
    // Use more capable model for complex queries
    return anthropic('claude-3-5-sonnet-20241022');
  }

  // Use faster, cheaper model for simple queries
  return openai('gpt-3.5-turbo');
}

export async function POST(req: Request) {
  const { messages } = await req.json();
  const lastMessage = messages[messages.length - 1].content;

  const result = streamText({
    model: selectModel(lastMessage),
    messages,
  });

  return result.toDataStreamResponse();
}

Provider Differences

While the API is unified, providers have different capabilities:

  • Context windows: Claude 3 supports 200K tokens, GPT-4 Turbo supports 128K
  • Tool calling: Supported by most but with varying reliability
  • Vision: Available in GPT-4V, Claude 3, and Gemini Pro Vision
  • Pricing: Varies significantly between providers and models
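
The context-window limits above can be encoded in a lookup used to reject oversized requests before sending them. A sketch (limits taken from the list above; the ~4-characters-per-token estimate is a rough heuristic, not a real tokenizer):

```typescript
// Illustrative sketch: per-model context window limits (tokens),
// matching the figures listed above.
const contextWindows: Record<string, number> = {
  'claude-3-5-sonnet-20241022': 200_000,
  'gpt-4-turbo': 128_000,
};

// Rough estimate: ~4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitsContext(modelId: string, prompt: string): boolean {
  const limit = contextWindows[modelId];
  if (limit === undefined) return false; // unknown model: reject
  return estimateTokens(prompt) <= limit;
}
```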

Key Takeaways

  • Install provider packages separately (@ai-sdk/openai, etc.)
  • Same code works across all providers
  • Use environment variables for API keys
  • Implement fallback strategies for reliability
  • Consider cost and capabilities when selecting models
