A Large Language Model (LLM) tool is a way to connect an LLM to real-world actions by letting it call a function. While an LLM on its own generates text, an LLM tool allows it to interact with external functions, APIs, or systems.
This means that instead of just answering questions or generating code, LLMs can now book a calendar event, fetch live weather data, update a database, or send a message - all by calling developer-defined functions.
In the context of LLMs, a "tool" is:
- A named function you register with the model.
- Described with a schema (like JSON Schema or function signature).
- Callable by the model when it deems it helpful.
For example, you can define a tool like this:
```json
{
  "name": "getWeather",
  "description": "Fetch the current weather for a city",
  "parameters": {
    "type": "object",
    "properties": {
      "location": { "type": "string" }
    },
    "required": ["location"]
  }
}
```
When the user says: "What's the weather in Tokyo?", the model responds with:
```json
{
  "tool_call": {
    "name": "getWeather",
    "arguments": {
      "location": "Tokyo"
    }
  }
}
```
Your app then executes the function `getWeather("Tokyo")`, gets the real result, and optionally sends the updated context (tool call and tool result) back to the model to generate a final answer.
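That app-side step can be sketched in a few lines of TypeScript. This is an illustrative sketch, not any specific provider's API: the `ToolCall` shape and the `tools` registry below are assumptions for the example.

```typescript
// Illustrative shape of a tool call emitted by the model
// (field names are assumptions, not a specific provider's format).
type ToolCall = { name: string; arguments: Record<string, unknown> };

// The registry of functions your app controls.
const tools: Record<string, (args: any) => Promise<unknown>> = {
  getWeather: async ({ location }: { location: string }) => {
    // A real app would call a weather API; this returns mock data.
    return { location, temperature: '68°F', condition: 'partly cloudy' };
  },
};

// Dispatch a model-emitted tool call to the matching function.
async function executeToolCall(call: ToolCall): Promise<unknown> {
  const fn = tools[call.name];
  if (!fn) throw new Error(`Unknown tool: ${call.name}`);
  return fn(call.arguments);
}

// Example: handle the tool call from the JSON above.
executeToolCall({ name: 'getWeather', arguments: { location: 'Tokyo' } }).then(
  (result) => console.log(result),
);
```

The key design point is that the model only *requests* the call; your app stays in control of what actually executes, so you can validate arguments, enforce permissions, or reject unknown tool names before running anything.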
LLMs are powerful, but they can't fetch real-time data, query your database, or modify state on their own. Tools give them that ability. With tools, an LLM becomes an intelligent controller that decides:
- What to do
- Which function to call
- What arguments to pass
This enables use cases like querying internal data, performing calculations, searching documents, triggering workflows, updating databases, and generating dynamic UIs.
Here's how the loop works in practice:
- Register your tools: Define function names, descriptions, and argument schemas.
- Send a prompt: The user asks a question or gives an instruction.
- The model chooses a tool: If relevant, it outputs a tool call with arguments.
- Your app runs the function: Using the arguments provided by the model.
- Optional: Send results back to the model: So it can generate a final answer.
This full loop is often called tool-use, function-calling, or tool calling, and it enables LLMs to act like intelligent agents in your system.
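The loop above can be illustrated as a message history. The role names and field shapes below are generic placeholders modeled loosely on common chat APIs, not any specific provider's format:

```typescript
// Illustrative message history for one complete tool-use round trip.
// Role names and field shapes are generic placeholders, not a specific API.
const conversation = [
  // 1. The user asks a question.
  { role: 'user', content: "What's the weather in Tokyo?" },
  // 2. The model chooses a tool and arguments instead of answering directly.
  {
    role: 'assistant',
    tool_call: { name: 'getWeather', arguments: { location: 'Tokyo' } },
  },
  // 3. Your app runs getWeather("Tokyo") and appends the result.
  {
    role: 'tool',
    name: 'getWeather',
    content: { temperature: '68°F', condition: 'partly cloudy' },
  },
  // 4. The model reads the tool result and produces the final answer.
  {
    role: 'assistant',
    content: "It's currently 68°F and partly cloudy in Tokyo.",
  },
];

console.log(conversation.length); // 4 messages: one full tool-use loop
```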
Tool calling (also known as function calling) is the process where an LLM decides to use one of your registered tools to accomplish a task. When the model determines that it needs external data or functionality, it will:
- Choose the appropriate tool from those available
- Generate the correct arguments based on the user's request
- Return a structured response indicating which tool to call and with what parameters
Tool calling is what transforms a simple text-generating model into an intelligent agent that can interact with the real world.
Here's how to build a weather bot with tool calling using the AI SDK. This example creates a fully functional chatbot that can fetch weather information when asked.
To follow this example, you'll need:
- Node.js 18+ and `pnpm` installed on your local development machine.
- An AI Gateway API key.
- Start by creating a new directory and initializing the project:

```bash
mkdir weather-bot
cd weather-bot
pnpm init
```
- Install the AI SDK and other necessary dependencies:

```bash
pnpm add ai zod dotenv
pnpm add -D @types/node tsx typescript
```
- Create a `.env` file in your project's root directory and add your AI Gateway API key:

```bash
touch .env
```

Edit the `.env` file:

```bash
AI_GATEWAY_API_KEY=your_ai_gateway_api_key
```

Replace `your_ai_gateway_api_key` with your actual AI Gateway API key.

- Create an `index.ts` file in the root of your project and add the following code:
```typescript
import { generateText, tool, stepCountIs } from 'ai';
import { z } from 'zod';
import 'dotenv/config';
import * as readline from 'node:readline/promises';

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

// Define the weather tool
const getWeatherTool = tool({
  description: 'Get the current weather for a city',
  inputSchema: z.object({
    location: z.string().describe('The city to get weather for'),
  }),
  execute: async ({ location }) => {
    // In a real app, you'd call a weather API here
    // For demo purposes, we'll return mock data
    const mockWeatherData: Record<
      string,
      { temperature: string; condition: string; humidity: string }
    > = {
      'san francisco': {
        temperature: '72°F',
        condition: 'sunny',
        humidity: '45%',
      },
      'new york': { temperature: '58°F', condition: 'cloudy', humidity: '65%' },
      london: { temperature: '50°F', condition: 'rainy', humidity: '80%' },
      tokyo: {
        temperature: '68°F',
        condition: 'partly cloudy',
        humidity: '55%',
      },
    };

    const normalizedLocation = location.toLowerCase();
    const weather = mockWeatherData[normalizedLocation] || {
      temperature: '70°F',
      condition: 'partly cloudy',
      humidity: '50%',
    };

    return {
      location,
      temperature: weather.temperature,
      condition: weather.condition,
      humidity: weather.humidity,
    };
  },
});

async function askWeatherBot(userMessage: string) {
  const { text } = await generateText({
    model: 'openai/gpt-5',
    prompt: userMessage,
    stopWhen: stepCountIs(3),
    tools: {
      getWeather: getWeatherTool,
    },
  });
  return text;
}

async function main() {
  console.log(
    '🌤️ Weather Bot initialized! Ask me about the weather in any city.',
  );
  console.log('Type "exit" to quit.\n');

  while (true) {
    const userInput = await terminal.question('You: ');

    if (userInput.toLowerCase() === 'exit') {
      console.log('Goodbye!');
      break;
    }

    try {
      const response = await askWeatherBot(userInput);
      console.log(`Bot: ${response}\n`);
    } catch (error) {
      console.error('Error:', error);
    }
  }

  terminal.close();
}

main().catch(console.error);
```
- Now you can run your weather bot:

```bash
pnpm tsx index.ts
```
Try asking questions like:
- "What's the weather in San Francisco?"
- "How's the weather in London today?"
- "Is it sunny in Tokyo?"
What happens under the hood:
- The AI SDK sends your message and tool definition to the model
- The model decides whether to call the `getWeather` tool based on your question
- If weather information is needed, the model extracts the location and calls the tool
- The AI SDK automatically executes your tool function with the extracted parameters
- The model uses the returned weather data to generate a natural response
The AI SDK handles all the complex tool calling logic for you, making it easy to build powerful AI agents!
Tools are supported by a variety of LLM providers and agent frameworks. Using these frameworks, you can define tools and call them to build agents that can reason, decide, and take real-world actions.