LLM Gateway is an open-source API gateway for Large Language Models (LLMs). It acts as middleware between your applications and various LLM providers, allowing you to:

- Route requests to multiple LLM providers (OpenAI, Anthropic, Google Vertex AI, and others)
- Manage API keys for different providers in one place
- Track token usage and costs across all your LLM interactions
- Analyze performance metrics to optimize your LLM usage
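Conceptually, a gateway like this inspects the requested model name and dispatches the call to the matching provider. The following is an illustrative sketch of that idea only, not LLM Gateway's actual routing logic; the prefix-to-provider mapping is an assumption for the example:

```python
# Illustrative model-based routing sketch -- NOT LLM Gateway's real code.
# The prefix -> provider mapping below is a simplifying assumption.
PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google-vertex",
}

def route_model(model: str) -> str:
    """Return the provider responsible for a given model name."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    raise ValueError(f"No provider registered for model {model!r}")

print(route_model("gpt-4o"))         # openai
print(route_model("claude-3-opus"))  # anthropic
```

Because routing happens behind one endpoint, client code stays identical regardless of which provider ultimately serves the request.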
LLM Gateway provides detailed insights into your LLM usage:

- Usage Metrics: Track the number of requests, tokens used, and response times
- Cost Analysis: Monitor spending across different models and providers
- Performance Tracking: Identify patterns and optimize your prompts based on actual usage data
- Breakdown by Model: Compare different models' performance and cost-effectiveness
All this data is automatically collected and presented in an intuitive dashboard, helping you make informed decisions about your LLM strategy.
Using LLM Gateway is simple: replace your current LLM provider's base URL with the LLM Gateway API endpoint:
```
curl -X POST https://api.llmgateway.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLM_GATEWAY_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'
```
LLM Gateway maintains compatibility with the OpenAI API format, making migration seamless.
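The same request can be built from Python's standard library. This is a minimal sketch mirroring the curl example above; sending it requires a valid API key in `LLM_GATEWAY_API_KEY`:

```python
import json
import os
import urllib.request

# Build the same OpenAI-compatible request as the curl example above.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
}
req = urllib.request.Request(
    "https://api.llmgateway.io/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('LLM_GATEWAY_API_KEY', '')}",
    },
    method="POST",
)

# Actually sending the request needs a valid key:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the payload follows the OpenAI format, existing OpenAI client code typically only needs its base URL and API key changed.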
You can use LLM Gateway in two ways:

- Hosted Version: For immediate use without setup, visit llmgateway.io to create an account and get an API key.
- Self-Hosted: Deploy LLM Gateway on your own infrastructure for complete control over your data and configuration.
The self-hosted version offers additional customization options and, if desired, ensures your LLM traffic never leaves your infrastructure.
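A self-hosted deployment might look something like the following docker-compose sketch. The image name, port, and environment variables here are hypothetical placeholders, not the project's documented configuration; consult the LLM Gateway repository for actual installation instructions:

```yaml
# HYPOTHETICAL sketch -- image name, port, and variables are placeholders,
# not LLM Gateway's documented configuration.
services:
  llmgateway:
    image: llmgateway/llmgateway:latest  # placeholder image name
    ports:
      - "8080:8080"                      # placeholder port
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY}
```

The key point is that provider credentials live on your own infrastructure, and applications talk only to the gateway.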