Product Information
What is LiteLLM?
LiteLLM is a Python SDK and a proxy server (LLM Gateway) that calls 100+ LLM APIs in the OpenAI format, including Bedrock, Azure, OpenAI, Vertex AI, Cohere, Anthropic, SageMaker, Hugging Face, Replicate, and Groq.
How to use LiteLLM?
LiteLLM is a Python SDK and proxy server (LLM Gateway) that simplifies calls to more than 100 large language model (LLM) APIs by exposing them all in the OpenAI format. Install the SDK with `pip install litellm` to call providers directly from Python, or run the proxy server as a shared gateway; either way it provides model access, failover, cost tracking, load balancing, and authentication management.
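As a quick illustration of the SDK path, here is a minimal sketch that sends the same OpenAI-format request to two different providers through the unified `completion()` call. It assumes `litellm` is installed and that `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are set in the environment; the model names are only examples.

```python
from litellm import completion

# The OpenAI-format messages list works unchanged across providers.
messages = [{"role": "user", "content": "Say hello in one sentence."}]

# Call OpenAI (reads OPENAI_API_KEY from the environment).
openai_response = completion(model="gpt-3.5-turbo", messages=messages)

# Call Anthropic with the same interface (reads ANTHROPIC_API_KEY).
anthropic_response = completion(
    model="anthropic/claude-3-haiku-20240307", messages=messages
)

print(openai_response.choices[0].message.content)
print(anthropic_response.choices[0].message.content)
```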
Core Functions of LiteLLM
LLM Gateway (proxy server) that exposes all providers behind a single OpenAI-compatible endpoint.
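When LiteLLM runs as a gateway (for example via `litellm --config config.yaml`), clients keep using the standard OpenAI client and only change the base URL. The sketch below assumes a proxy running locally on its default port 4000 and a virtual key issued by that proxy; both values are placeholders.

```python
from openai import OpenAI

# Point the standard OpenAI client at the LiteLLM gateway instead of api.openai.com.
client = OpenAI(
    api_key="sk-litellm-virtual-key",  # virtual key issued by the proxy (assumed)
    base_url="http://localhost:4000",  # local proxy address and default port (assumed)
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any model name routed by the proxy's config
    messages=[{"role": "user", "content": "Hello through the gateway!"}],
)
print(response.choices[0].message.content)
```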
Usage Scenarios of LiteLLM
- Platform teams give developers a unified interface to LLM providers such as Azure, Gemini, Bedrock, OpenAI, and Anthropic.
- Accurately track and charge teams for LLM usage, attributing costs to specific keys, users, teams, or organizations.
- Enable load balancing and failover across multiple LLMs to ensure service stability and availability (see the Router sketch after this list).
- Manage LLM authentication, budgets, and rate limits to control usage costs and frequency.
- Log LLM usage to storage services like S3 for easy auditing and analysis.
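A minimal sketch of the load-balancing and failover scenario using the SDK's Router. The two deployments below (a hypothetical Azure deployment and a plain OpenAI fallback) and their credentials are placeholders; the Router spreads traffic across entries that share a logical model name and retries on another deployment if one fails.

```python
from litellm import Router

# Two deployments serving the same logical model name.
router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",  # logical name clients request
            "litellm_params": {
                "model": "azure/my-azure-deployment",  # hypothetical Azure deployment
                "api_key": "AZURE_API_KEY",
                "api_base": "https://my-endpoint.openai.azure.com",
            },
        },
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {
                "model": "gpt-3.5-turbo",  # plain OpenAI fallback
                "api_key": "OPENAI_API_KEY",
            },
        },
    ]
)

# The Router picks a deployment, load-balances, and fails over on errors.
response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Which deployment handled this?"}],
)
print(response.choices[0].message.content)
```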
Common Questions about LiteLLM
What does LiteLLM do?
How do I use LiteLLM?
What are the core features of LiteLLM?
What are the application scenarios for LiteLLM?