Working with multiple AI providers
One of the most powerful features of the Vercel AI SDK is its unified provider interface. You can switch between OpenAI, Anthropic, Google, Mistral, and many others with minimal code changes.
Supported providers
Core providers
- OpenAI (GPT-4, GPT-4 Turbo, GPT-3.5 Turbo)
- Anthropic (Claude 3.5, Claude 3 Opus, Sonnet, Haiku)
- Google (Gemini 1.5 Pro, Gemini 1.5 Flash)
- Mistral (Mistral Large, Medium, Small)
Additional providers
- Cohere (Command, Command-Light)
- Amazon Bedrock
- Azure OpenAI
- Groq, Perplexity, Fireworks, and more
Installing providers
# Install the providers you need
npm install @ai-sdk/openai
npm install @ai-sdk/anthropic
npm install @ai-sdk/google
npm install @ai-sdk/mistral
# Or install multiple at once
npm install @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google
Provider configuration
OpenAI
import { openai } from '@ai-sdk/openai';
// Uses OPENAI_API_KEY environment variable
const model = openai('gpt-4-turbo');
// Or configure manually
import { createOpenAI } from '@ai-sdk/openai';
const openaiClient = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  organization: 'org-...', // optional
  baseURL: 'https://custom-endpoint.com/v1', // optional
});
const model = openaiClient('gpt-4-turbo');
Anthropic (Claude)
import { anthropic } from '@ai-sdk/anthropic';
// Uses ANTHROPIC_API_KEY environment variable
const model = anthropic('claude-3-5-sonnet-20241022');
// Available models
const claude35Sonnet = anthropic('claude-3-5-sonnet-20241022');
const claude3Opus = anthropic('claude-3-opus-20240229');
const claude3Haiku = anthropic('claude-3-haiku-20240307');
Google (Gemini)
import { google } from '@ai-sdk/google';
// Uses GOOGLE_GENERATIVE_AI_API_KEY environment variable
const model = google('gemini-1.5-pro');
// Available models
const geminiPro = google('gemini-1.5-pro');
const geminiFlash = google('gemini-1.5-flash');
Mistral
import { mistral } from '@ai-sdk/mistral';
// Uses MISTRAL_API_KEY environment variable
const model = mistral('mistral-large-latest');
// Available models
const mistralLarge = mistral('mistral-large-latest');
const mistralMedium = mistral('mistral-medium-latest');
const mistralSmall = mistral('mistral-small-latest');
Unified API usage
The same code works with any provider:
import { streamText, type CoreMessage } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

// Same interface for all providers
async function generateResponse(
  provider: 'openai' | 'anthropic' | 'google',
  messages: CoreMessage[]
) {
  const models = {
    openai: openai('gpt-4-turbo'),
    anthropic: anthropic('claude-3-5-sonnet-20241022'),
    google: google('gemini-1.5-pro'),
  };

  const result = streamText({
    model: models[provider],
    messages,
  });

  return result.toDataStreamResponse();
}
Dynamic provider selection
// app/api/chat/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

const providers = {
  'gpt-4-turbo': openai('gpt-4-turbo'),
  'gpt-3.5-turbo': openai('gpt-3.5-turbo'),
  'claude-3-5-sonnet': anthropic('claude-3-5-sonnet-20241022'),
  'claude-3-opus': anthropic('claude-3-opus-20240229'),
  'gemini-pro': google('gemini-1.5-pro'),
};

export async function POST(req: Request) {
  const { messages, model: modelId } = await req.json();

  const model = providers[modelId as keyof typeof providers];
  if (!model) {
    return new Response('Invalid model', { status: 400 });
  }

  const result = streamText({
    model,
    messages,
  });

  return result.toDataStreamResponse();
}
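Rather than rejecting unknown model ids with a 400, you can also resolve them against an allow-list and fall back to a default. A minimal sketch (the allow-list contents and the default below are illustrative choices, not part of the SDK):

```typescript
// Resolve a client-supplied model id against an allow-list,
// falling back to a default instead of returning an error.
const ALLOWED_MODELS = [
  'gpt-4-turbo',
  'gpt-3.5-turbo',
  'claude-3-5-sonnet',
  'claude-3-opus',
  'gemini-pro',
] as const;

type ModelId = (typeof ALLOWED_MODELS)[number];

function resolveModelId(
  requested: unknown,
  fallback: ModelId = 'gpt-3.5-turbo'
): ModelId {
  return typeof requested === 'string' &&
    (ALLOWED_MODELS as readonly string[]).includes(requested)
    ? (requested as ModelId)
    : fallback;
}
```

The route handler would then call `providers[resolveModelId(modelId)]`, which is never `undefined` because the resolved id is always in the allow-list.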
Client-side model selector
'use client';

import { useChat } from 'ai/react';
import { useState } from 'react';

const models = [
  { id: 'gpt-4-turbo', name: 'GPT-4 Turbo', provider: 'OpenAI' },
  { id: 'claude-3-5-sonnet', name: 'Claude 3.5 Sonnet', provider: 'Anthropic' },
  { id: 'gemini-pro', name: 'Gemini Pro', provider: 'Google' },
];

export default function Chat() {
  const [selectedModel, setSelectedModel] = useState(models[0].id);
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    body: { model: selectedModel },
  });

  return (
    <div>
      <select
        value={selectedModel}
        onChange={(e) => setSelectedModel(e.target.value)}
        className="p-2 border rounded mb-4"
      >
        {models.map((model) => (
          <option key={model.id} value={model.id}>
            {model.name} ({model.provider})
          </option>
        ))}
      </select>

      {messages.map((m) => (
        <div key={m.id}>{m.content}</div>
      ))}

      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
Environment variables
# .env.local
# OpenAI
OPENAI_API_KEY=sk-...
# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
# Google
GOOGLE_GENERATIVE_AI_API_KEY=...
# Mistral
MISTRAL_API_KEY=...
# Azure OpenAI (optional)
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com
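Since each provider reads its own environment variable, it can help to fail fast at startup when a required key is missing, instead of discovering it on the first API call. A small sketch (the helper name is ours, not part of the SDK):

```typescript
// Throw at startup if a required environment variable is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. at startup, validate only the providers you actually use:
// ['OPENAI_API_KEY', 'ANTHROPIC_API_KEY'].forEach(requireEnv);
```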
Provider-specific features
// OpenAI with structured JSON output — streamText has no
// response_format option; use generateObject (or streamObject) instead
import { generateObject } from 'ai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4-turbo'),
  schema: z.object({ answer: z.string() }),
  messages,
});

// Anthropic with system prompt
const result = streamText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  system: 'You are a helpful assistant.',
  messages,
});

// Google with safety settings
import { google } from '@ai-sdk/google';

const model = google('gemini-1.5-pro', {
  safetySettings: [
    {
      category: 'HARM_CATEGORY_HATE_SPEECH',
      threshold: 'BLOCK_LOW_AND_ABOVE',
    },
  ],
});
Fallback strategy
import { streamText, type CoreMessage } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

async function generateWithFallback(messages: CoreMessage[]) {
  const providers = [
    { model: openai('gpt-4-turbo'), name: 'OpenAI' },
    { model: anthropic('claude-3-5-sonnet-20241022'), name: 'Anthropic' },
  ];

  for (const provider of providers) {
    try {
      const result = streamText({
        model: provider.model,
        messages,
      });
      // Note: streamText defers most errors until the stream is consumed,
      // so this catch only covers synchronous setup failures
      return result.toDataStreamResponse();
    } catch (error) {
      console.error(`${provider.name} failed:`, error);
      // Continue to the next provider
    }
  }

  throw new Error('All providers failed');
}
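The same loop-and-catch idea can be factored into a small, provider-agnostic helper that tries a list of async factories in order (a sketch; the names are ours):

```typescript
// Try each attempt in order; return the first that resolves,
// throw only if every attempt rejects.
async function firstSuccessful<T>(
  attempts: Array<() => Promise<T>>
): Promise<T> {
  let lastError: unknown;
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (error) {
      lastError = error;
    }
  }
  throw new Error(`All providers failed. Last error: ${String(lastError)}`);
}
```

In the route above, you would pass one closure per provider, each calling `streamText` and consuming enough of the stream to confirm it actually started before resolving.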
Cost optimization
// Route requests based on complexity
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

function selectModel(message: string) {
  const isComplex =
    message.length > 500 ||
    message.includes('analyze') ||
    message.includes('explain in detail');

  if (isComplex) {
    // Use a more capable model for complex queries
    return anthropic('claude-3-5-sonnet-20241022');
  }

  // Use a faster, cheaper model for simple queries
  return openai('gpt-3.5-turbo');
}

export async function POST(req: Request) {
  const { messages } = await req.json();
  const lastMessage = messages[messages.length - 1].content;

  const result = streamText({
    model: selectModel(lastMessage),
    messages,
  });

  return result.toDataStreamResponse();
}
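To reason about routing decisions, it can help to estimate request cost per model. A sketch with placeholder per-million-token prices (the numbers below are illustrative only; always check the providers' current pricing pages):

```typescript
// Illustrative per-million-token prices in USD — placeholder values.
const PRICE_PER_MILLION_TOKENS: Record<
  string,
  { input: number; output: number }
> = {
  'gpt-3.5-turbo': { input: 0.5, output: 1.5 },
  'gpt-4-turbo': { input: 10, output: 30 },
  'claude-3-5-sonnet': { input: 3, output: 15 },
};

function estimateCostUSD(
  model: string,
  inputTokens: number,
  outputTokens: number
): number {
  const price = PRICE_PER_MILLION_TOKENS[model];
  if (!price) throw new Error(`No pricing data for model: ${model}`);
  return (
    (inputTokens * price.input + outputTokens * price.output) / 1_000_000
  );
}
```

Logging this estimate per request makes it easy to see how much a complexity-based router actually saves.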
Differences between providers
Although the API is unified, providers differ in their capabilities:
- Context windows: Claude 3 supports 200K tokens, while GPT-4 Turbo supports 128K
- Tool calling: supported by most providers, but with varying reliability
- Vision: available in GPT-4V, Claude 3, and Gemini Pro Vision
- Pricing: varies significantly across providers and models
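Context window differences matter when the same conversation can be routed to different models. A rough sketch that drops the oldest messages to fit a token budget, using the crude ~4 characters per token heuristic (both the heuristic and the structure are assumptions; a real tokenizer gives exact counts):

```typescript
type ChatMessage = { role: string; content: string };

// Very rough heuristic: ~4 characters per token.
const CHARS_PER_TOKEN = 4;

// Keep the most recent messages that fit within the token budget,
// preserving message order.
function trimToContext(
  messages: ChatMessage[],
  maxTokens: number
): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const tokens = Math.ceil(messages[i].content.length / CHARS_PER_TOKEN);
    if (used + tokens > maxTokens) break;
    kept.unshift(messages[i]);
    used += tokens;
  }
  return kept;
}
```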
💡 Key takeaways
- Install provider packages separately (@ai-sdk/openai, etc.)
- The same code works with any provider
- Use environment variables for API keys
- Implement fallbacks for greater reliability
- Weigh cost and capabilities when choosing models
📚 Further resources
- AI SDK Providers →
Full list of supported providers and their configuration.
- Providers and Models guide →
How providers work in the AI SDK.