
LLM Gateway

Open-source library built for fast and reliable connections to different LLM providers.

LLM Gateway is also available for Python: https://pypi.org/project/open-llm-gateway/ (source: https://github.com/ottic-ai/llm-gateway-python)

The LLM Gateway is a lightweight, open-source library built for fast and reliable connections to LLMs.

It simplifies integration with multiple providers, offering fallbacks, caching, and minimal latency through a fully client-side design.

  • Minimize Downtime: Automatic retries and fallbacks to secondary providers such as Azure or Anthropic.
  • Automatic Parameter Conversion: Input params are converted between OpenAI, Anthropic, and Azure formats automatically when falling back.
  • Faster Responses: Direct client-side requests for low latency.
  • Unified Control: A single interface to manage requests across LLMs.
  • Unified Output: Consistent output format across LLMs, returned alongside the raw provider response (see the consumption sketch after this list):
    openAI/anthropicOutput: { ... }  // raw provider response
    llmGatewayOutput: {
        type: 'text' | 'tool_calls';
        content?: string;    // content for text output
        tool_name?: string;  // name of the tool for tool_calls
        arguments?: string;  // arguments for the tool
    }[]
  • Easy Model Switching: Change between OpenAI, Anthropic, and Azure models with a simple configuration change.
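
As a hedged illustration of consuming the unified output, the loop below branches on the entry type. It assumes chatCompletion resolves to an object carrying llmGatewayOutput shaped as above; the gateway setup mirrors the usage example later in this README.

import { LLMGateway, EnumLLMProvider } from 'llm-gateway';

const gateway = new LLMGateway({
    provider: EnumLLMProvider.OPENAI,
    apiKey: process.env['OPENAI_API_KEY'],
});

const response = await gateway.chatCompletion({
    messages: [{ role: 'user', content: 'Write a one sentence story about a cat.' }],
    model: 'gpt-4o-mini',
});

// Branch on the unified entry type; 'tool_calls' entries would appear
// when the request includes tool definitions (not shown here).
for (const entry of response.llmGatewayOutput) {
    if (entry.type === 'text') {
        console.log('Text:', entry.content);
    } else if (entry.type === 'tool_calls') {
        console.log(`Tool ${entry.tool_name} called with`, entry.arguments);
    }
}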

Contribute, fork, or raise issues so we can make it better together.

Starring this repo helps other developers discover the LLM Gateway! ⭐

Installation

To install the library, use npm or yarn:

npm install llm-gateway

or

yarn add llm-gateway

Usage

Check the examples in the examples folder.

Here's a basic example of how to use the LLM Gateway library:

import { LLMGateway, EnumLLMProvider } from 'llm-gateway';

const openAIGateway = new LLMGateway({
    provider: EnumLLMProvider.OPENAI, // or ANTHROPIC, AZUREOPENAI
    apiKey: process.env['OPENAI_API_KEY'],
});
const openAIresponse = await openAIGateway.chatCompletion({
    messages: [
        { role: 'user', content: 'Write a one sentence story about a cat.' }
    ],
    model: 'gpt-4o-mini',
    max_completion_tokens: 200,
});

console.log('Response:', openAIresponse);

LLM Fallbacks Configuration

The LLM Gateway library supports configuring fallbacks to ensure that if one model fails, another can be used as a backup. This is useful for maintaining service availability and reliability.

Example Configuration

import { LLMGateway, EnumLLMProvider } from 'llm-gateway';

const llmGateway = new LLMGateway({
    provider: EnumLLMProvider.OPENAI, // or ANTHROPIC, AZUREOPENAI
    apiKey: 'INCORRECT_API_KEY_TO_EMULATE_FAILURE',
}, {
    fallbacks: {
        fallbackModel: 'claude-3-5-sonnet-latest',
        fallbackProvider: {
            apiKey: process.env['ANTHROPIC_API_KEY'],
            provider: EnumLLMProvider.ANTHROPIC
        }
    }
});


const response = await llmGateway.chatCompletion({
    messages: [
        { role: 'user', content: 'Write a one sentence story about a cat.' }
    ],
    model: 'gpt-4o-mini',
});

console.log('Response:', JSON.stringify(response.llmGatewayOutput, null, 2));

All examples work consistently across different providers (OpenAI, Anthropic, Azure) and automatically handle format conversion when falling back to a different provider.
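
For instance, switching the same request to Anthropic should, per the model-switching note above, be a configuration change only. A minimal sketch, reusing the model name from the fallback example and assuming the same chatCompletion interface:

const anthropicGateway = new LLMGateway({
    provider: EnumLLMProvider.ANTHROPIC,
    apiKey: process.env['ANTHROPIC_API_KEY'],
});

// Same call shape as the OpenAI example; only the provider and model change.
const anthropicResponse = await anthropicGateway.chatCompletion({
    messages: [
        { role: 'user', content: 'Write a one sentence story about a cat.' }
    ],
    model: 'claude-3-5-sonnet-latest',
});

console.log('Response:', JSON.stringify(anthropicResponse.llmGatewayOutput, null, 2));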

Streaming Responses

The LLM Gateway supports streaming responses from all providers, with a unified interface that works consistently across OpenAI, Anthropic, and Azure.

Basic Streaming Example

const openAIGateway = new LLMGateway({
    provider: EnumLLMProvider.OPENAI, // or ANTHROPIC, AZUREOPENAI
    apiKey: process.env['OPENAI_API_KEY'],
});

const openAIStream = await openAIGateway.chatCompletionStream({
    messages: [{ role: 'user', content: 'Write a one sentence story about a cat.' }],
    model: 'gpt-4o-mini',
    temperature: 0.7,
});

for await (const chunk of openAIStream) {
    if (chunk.choices[0].finish_reason === 'stop') {
        console.log('message_stop');
    } else {
        console.log(chunk.choices[0].delta.content);
    }
}
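
To assemble the streamed text into one string, the deltas can be concatenated as they arrive. A minimal sketch, assuming a fresh stream created as above and the OpenAI-style chunk shape shown in the previous example:

let fullText = '';
for await (const chunk of openAIStream) {
    // Guard against chunks without delta content (e.g. the final stop chunk).
    const delta = chunk.choices[0]?.delta?.content;
    if (delta) {
        fullText += delta; // append each streamed fragment
    }
}
console.log('Full response:', fullText);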

Configuration

  • apiKey: Your API key for the chosen LLM provider.
  • provider: The LLM provider you want to use (OPENAI, ANTHROPIC, AZUREOPENAI).
  • endpoint: (Optional) The endpoint for OpenAI models.
  • deployment: (Optional) The deployment name for Azure models.
  • apiVersion: (Optional) The API version for Azure models (an illustrative Azure setup follows this list).
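
Putting the Azure-specific options together, a setup might look like the sketch below. The constructor fields follow the configuration list above; the concrete endpoint, deployment, apiVersion, and environment-variable values are placeholders, not defaults of the library.

const azureGateway = new LLMGateway({
    provider: EnumLLMProvider.AZUREOPENAI,
    apiKey: process.env['AZURE_OPENAI_API_KEY'],        // assumed env var name
    endpoint: 'https://YOUR-RESOURCE.openai.azure.com', // placeholder
    deployment: 'YOUR-DEPLOYMENT-NAME',                 // placeholder
    apiVersion: '2024-02-01',                           // placeholder version string
});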

Contributing

Contributions are welcome! Please feel free to submit a pull request or open an issue.

License

This project is licensed under the MIT License.
