In this tutorial, you will build and deploy an AI chat agent using Next.js and the AI SDK that:
- Engages in natural conversations with users about the weather
- Automatically calls a weather API tool when users ask about weather conditions
- Streams responses in real-time for a smooth user experience
- Integrates with a backend weather service built with Express, FastAPI, or Nitro
Before you start, make sure you have:
- Node.js and pnpm installed locally
- A Vercel account and project with AI Gateway access
- AI Gateway authentication, either through an OIDC token configured for your Vercel project or an AI Gateway API key
- One of the backend weather APIs running (Express, FastAPI, or Nitro)
- Basic understanding of Next.js and React
Initialize a new Next.js project with the App Router:
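For example, with pnpm:

```bash
pnpm create next-app@latest nextjs-agent
```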
When prompted, select the following options:
- TypeScript: Yes
- ESLint: Yes
- Tailwind CSS: Yes
- App Router: Yes
- Use src/ directory: No
- Import alias: No
Navigate to your project directory with `cd nextjs-agent`.
Install the AI SDK and required packages:
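```bash
pnpm add ai @ai-sdk/react zod react-markdown
```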
These packages provide:
- `ai`: Core AI SDK with agent and tool-calling capabilities (version 5 required)
- `@ai-sdk/react`: React hooks for streaming chat interfaces (version 2 required)
- `zod`: Schema validation for tool inputs
- `react-markdown`: Render formatted responses in the chat UI
Option 1: Use your Vercel project's OIDC token
Link your code to a Vercel project and pull the environment variables:
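```bash
# Connect the local project to your Vercel project
vercel link

# Download the project's environment variables (including the OIDC token) into .env.local
vercel env pull
```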
Option 2: Create an AI Gateway API key
Go to your Vercel team's AI Gateway API keys dashboard and create an API key. Create a `.env.local` file in your project root with your AI Gateway key:
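For example, assuming the default `AI_GATEWAY_API_KEY` variable name that the AI SDK's gateway provider reads:

```bash
# .env.local
AI_GATEWAY_API_KEY=your_ai_gateway_api_key
```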
Create `lib/agent.ts` and add the agent configuration with a weather tool:
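A minimal sketch of the agent, assuming AI SDK 5's experimental `Agent` class and a backend route at `/api/weather?city=...` on your weather service (the `WEATHER_API_URL` variable and the response shape are assumptions; adjust them to match your backend):

```ts
// lib/agent.ts
import { Experimental_Agent as Agent, stepCountIs, tool } from 'ai';
import { z } from 'zod';

// Base URL of your weather backend; WEATHER_API_URL is a hypothetical env var,
// falling back to the local dev server used in this tutorial
const WEATHER_API_BASE = process.env.WEATHER_API_URL ?? 'http://localhost:3001';

export const weatherAgent = new Agent({
  // GPT-5 routed through Vercel's AI Gateway
  model: 'openai/gpt-5',
  system:
    'You are a friendly weather assistant. ' +
    'Use the getWeather tool whenever the user asks about weather conditions.',
  tools: {
    getWeather: tool({
      description: 'Get the current weather for a city',
      // Zod schema gives type-safe, validated tool inputs
      inputSchema: z.object({
        city: z.string().describe('The city to get the weather for'),
      }),
      execute: async ({ city }) => {
        try {
          // Assumed endpoint shape -- adjust to match your Express/FastAPI/Nitro API
          const res = await fetch(
            `${WEATHER_API_BASE}/api/weather?city=${encodeURIComponent(city)}`,
          );
          if (!res.ok) {
            return { error: `Weather API responded with status ${res.status}` };
          }
          return await res.json();
        } catch {
          return { error: 'Could not reach the weather API' };
        }
      },
    }),
  },
  // Stop after 10 steps to prevent infinite tool-calling loops
  stopWhen: stepCountIs(10),
});
```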
This agent is configured as follows:
- Uses GPT-5 as the underlying model
- Defines a `getWeather` tool that calls your backend weather API
- Uses Zod schema validation for type-safe tool inputs
- Includes error handling for API failures
- Limits the agent to 10 reasoning steps to prevent infinite loops
Create `app/api/chat/route.ts` to handle agent requests:
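A sketch of the route handler, assuming the `weatherAgent` export from the previous step and the default `@/*` import alias:

```ts
// app/api/chat/route.ts
import { weatherAgent } from '@/lib/agent';

// Allow streaming responses for up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  // respond() runs the agent loop (tool calls included) and
  // streams UI messages back to the useChat hook on the client
  return weatherAgent.respond({ messages });
}
```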
The `respond()` method handles the complete agent workflow:
- Processes conversation history
- Determines when to call tools
- Streams responses back to the client
- Manages multi-turn conversations
Update `app/page.tsx` to create an interactive chat interface:
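A minimal sketch of the chat page using the `useChat` hook and `react-markdown`; the Tailwind classes and labels are illustrative:

```tsx
'use client';

import { useState } from 'react';
import { useChat } from '@ai-sdk/react';
import ReactMarkdown from 'react-markdown';

export default function Chat() {
  const [input, setInput] = useState('');
  // Streams UI messages from the /api/chat route created above
  const { messages, sendMessage, status } = useChat();

  return (
    <main className="mx-auto flex h-screen max-w-2xl flex-col p-4">
      <div className="flex-1 space-y-4 overflow-y-auto">
        {messages.map((message) => (
          <div key={message.id}>
            <span className="font-semibold">
              {message.role === 'user' ? 'You: ' : 'Agent: '}
            </span>
            {message.parts.map((part, index) =>
              part.type === 'text' ? (
                <ReactMarkdown key={index}>{part.text}</ReactMarkdown>
              ) : null,
            )}
          </div>
        ))}
        {/* Simple loading indicator while the agent is thinking */}
        {status === 'submitted' && <p className="text-gray-500">Thinking…</p>}
      </div>
      <form
        className="mt-4 flex gap-2"
        onSubmit={(event) => {
          event.preventDefault();
          if (!input.trim()) return;
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="flex-1 rounded border p-2"
          value={input}
          onChange={(event) => setInput(event.target.value)}
          placeholder="Ask about the weather…"
        />
        <button className="rounded bg-black px-4 py-2 text-white" type="submit">
          Send
        </button>
      </form>
    </main>
  );
}
```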
This chat UI provides:
- Real-time streaming with loading indicators
- Markdown rendering for formatted responses
Before testing, you need a weather API backend running. Use one of the following guides to set up a weather API using the backend of your choice:
- How to Build a Weather API with Express and Vercel
- How to Build a Weather API with FastAPI and Vercel
- How to Build a Weather API with Nitro and Vercel
Return to your Next.js project and start the development server:
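```bash
# Starts Next.js on http://localhost:3000
pnpm dev
```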
Start your weather API backend in a new terminal using `vercel dev` and make sure it runs at http://localhost:3001.
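For example, to pin the backend to port 3001 so it doesn't collide with the Next.js dev server on port 3000:

```bash
vercel dev --listen 3001
```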
Open http://localhost:3000 in your browser. Try these example conversations:
- "What's the weather in London?"
- "Tell me about the weather in San Francisco"
- "How's the temperature in Tokyo today?"
- "Is it hot in Dubai right now?"
The agent will:
- Understand your weather request
- Extract the city name
- Call the `getWeather` tool automatically
- Format and present the weather data in a conversational way
- If you chose the AI Gateway API key for authentication, add it to your Vercel project's environment variables dashboard. Otherwise, the OIDC token is already configured.
- Push the changes to your remote repository or run the `vercel` CLI command
- Vercel will create a new preview deployment for you to test
- Merge to the `main` branch or run `vercel --prod` to deploy to Production
Visit your production deployment link to chat with your AI weather agent.
The AI SDK's agent system provides intelligent tool calling that:
- Automatically determines when to use tools based on user messages and the available tool definitions
- Validates tool inputs with type-safe Zod schemas
- Allows multi-step reasoning so that multiple tools can be called in a single turn
Review How to build AI Agents with Vercel and the AI SDK to understand the fundamentals of building agents.
Consider adding the following:
- Retry logic for failed API calls (see the sketch after this list)
- Fallback responses when tools fail
- Detailed error logging for debugging
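For example, a small retry helper (a hypothetical sketch, not part of the tutorial's code) could wrap the fetch call inside the `getWeather` tool's `execute` function:

```ts
// lib/retry.ts -- hypothetical helper, adjust to your needs
export async function fetchWithRetry(
  url: string,
  retries = 3,
  delayMs = 500,
): Promise<Response> {
  let lastError: unknown = new Error('Weather API request failed');
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      const res = await fetch(url);
      if (res.ok) return res;
      lastError = new Error(`Weather API responded with status ${res.status}`);
    } catch (error) {
      lastError = error;
    }
    // Simple linear backoff between attempts
    await new Promise((resolve) => setTimeout(resolve, delayMs * (attempt + 1)));
  }
  throw lastError;
}
```

When the helper finally throws, catch the error in the tool and return a short fallback message so the agent can tell the user the weather service is unavailable.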
Protect your API endpoint by rate limiting calls to the LLM and to your tool endpoints, for example with the Vercel Firewall's rate limiting SDK.
In this tutorial, you've built an AI chat agent that intelligently calls weather APIs based on natural language conversations.
You learned to:
- Configure the AI SDK with agent capabilities
- Define type-safe tools with Zod schemas
- Build a streaming chat UI
- Integrate with backend APIs for real-time data
- Handle tool calling and error scenarios
Extend your knowledge by:
- Adding more tools (currency conversion, news, stock prices)
- Implementing conversation history persistence
- Adding authentication and user sessions
- Building a mobile app with React Native and the same agent