LiteLLM
Install
To use LiteLLM, you need to either install pydantic-ai or install pydantic-ai-slim with the openai optional group (LiteLLM exposes an OpenAI-compatible API):
pip install 'pydantic-ai-slim[openai]'
uv add 'pydantic-ai-slim[openai]'
Configuration
To use LiteLLM, set the configs as outlined in the LiteLLM documentation. LiteLLMProvider accepts api_base and api_key; their values depend on your setup. For example, to call OpenAI models directly, pass https://api.openai.com/v1 as the api_base and your OpenAI API key as the api_key. To go through a LiteLLM proxy server running on your local machine, pass http://localhost:<port> as the api_base and your LiteLLM API key (or a placeholder) as the api_key.
To use custom LLMs, add the custom/ prefix to the model name.
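For example, a custom model exposed by your LiteLLM setup might be referenced like this (custom/my-model is a placeholder name, not a real model):

from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.litellm import LiteLLMProvider

# 'custom/my-model' is a placeholder for whatever name your LiteLLM
# setup registers for the custom model.
model = OpenAIChatModel(
    'custom/my-model',
    provider=LiteLLMProvider(api_base='<api-base-url>', api_key='<api-key>'),
)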
Once you have the configs, use the LiteLLMProvider as follows:
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.litellm import LiteLLMProvider

model = OpenAIChatModel(
    'openai/gpt-5',
    provider=LiteLLMProvider(
        api_base='<api-base-url>',
        api_key='<api-key>',
    ),
)
agent = Agent(model)

result = agent.run_sync('What is the capital of France?')
print(result.output)
#> The capital of France is Paris.
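As a concrete sketch, the same setup pointed at a LiteLLM proxy running locally might look like this (the port and key are assumptions: LiteLLM's proxy listens on port 4000 by default, and the key depends on how your proxy is configured):

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.litellm import LiteLLMProvider

# Assumes a LiteLLM proxy running locally on its default port 4000,
# e.g. started with `litellm --model gpt-5`. The api_key is whatever
# your proxy is configured to accept (or a placeholder if it doesn't
# enforce auth).
model = OpenAIChatModel(
    'openai/gpt-5',
    provider=LiteLLMProvider(
        api_base='http://localhost:4000',
        api_key='sk-litellm-placeholder',
    ),
)
agent = Agent(model)
result = agent.run_sync('What is the capital of France?')
print(result.output)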