LiteLLM
Last updated October 30, 2025
LiteLLM is an open-source library that provides a unified interface to call LLMs. This guide demonstrates how to integrate Vercel AI Gateway with LiteLLM to access various AI models and providers.
First, create a new directory for your project:
```bash
mkdir litellm-ai-gateway
cd litellm-ai-gateway
```

Install the required Python packages:

```bash
pip install litellm python-dotenv
```

Create a `.env` file with your Vercel AI Gateway API key:
```
VERCEL_AI_GATEWAY_API_KEY=your-api-key-here
```

If you're using the AI Gateway from within a Vercel deployment, you can also use the `VERCEL_OIDC_TOKEN` environment variable, which will be provided automatically.
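If you want a script that fails fast when neither credential is available, you can add a small guard before making any requests. This sketch is not part of the original guide; it only checks for the two environment variables mentioned above:

```python
import os

from dotenv import load_dotenv

load_dotenv()

# Prefer an explicit API key; inside a Vercel deployment the
# VERCEL_OIDC_TOKEN variable is provided automatically instead.
if not (os.getenv("VERCEL_AI_GATEWAY_API_KEY") or os.getenv("VERCEL_OIDC_TOKEN")):
    raise RuntimeError(
        "Set VERCEL_AI_GATEWAY_API_KEY in .env, or deploy to Vercel "
        "where VERCEL_OIDC_TOKEN is provided automatically."
    )
```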
Create a new file called `main.py` with the following code:

```python
import os

import litellm
from dotenv import load_dotenv

# Load environment variables from the .env file
load_dotenv()
os.environ["VERCEL_AI_GATEWAY_API_KEY"] = os.getenv("VERCEL_AI_GATEWAY_API_KEY")

# Define the conversation messages
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about the food scene in San Francisco."},
]

# Route the chat completion request through Vercel AI Gateway
response = litellm.completion(
    model="vercel_ai_gateway/openai/gpt-4o",
    messages=messages,
)

print(response.choices[0].message.content)
```

This code:
- Uses LiteLLM's `completion` function to make requests through Vercel AI Gateway
- Specifies the model using the `vercel_ai_gateway/` prefix (see the sketch after this list)
- Makes a chat completion request and prints the response
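Because the gateway sits in front of many providers, switching providers is just a change to the model string after the `vercel_ai_gateway/` prefix. The Anthropic model slug below is an assumption for illustration; substitute any model available in your AI Gateway dashboard:

```python
import litellm

# Same call as before; only the model string changes.
# "anthropic/claude-3-5-sonnet-20241022" is an assumed slug here;
# use any model listed in your Vercel AI Gateway dashboard.
response = litellm.completion(
    model="vercel_ai_gateway/anthropic/claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)

print(response.choices[0].message.content)
```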
Run your Python application:
```bash
python main.py
```

You should see a response from the AI model in your console.
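LiteLLM can also stream the response through the gateway by passing `stream=True`, which makes `completion` return an iterator of chunks in the OpenAI streaming format. A minimal sketch:

```python
import litellm

# stream=True makes completion() return an iterator of chunks
response = litellm.completion(
    model="vercel_ai_gateway/openai/gpt-4o",
    messages=[{"role": "user", "content": "Tell me about San Francisco."}],
    stream=True,
)

# Each chunk carries an incremental delta; content may be None on some chunks
for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```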