This project is a reverse proxy built with Express.js to securely forward requests to the OpenAI API. It enables interaction with OpenAI models, such as `gpt-4o-mini`, while securely handling your API key using environment variables.
- Secure API Key Handling: The OpenAI API key is stored in an environment variable, keeping it hidden from the client side.
- Request Forwarding: Client requests are forwarded to OpenAI's `/v1/chat/completions` endpoint.
- CORS Support: CORS is enabled to allow communication between different origins.
- Error Handling: Detailed error messages are logged for debugging, and proper HTTP status codes are returned to the client.
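The forwarding step can be sketched independently of the web framework. The helper below uses illustrative names (it is not the project's actual code) and shows the key idea: the server-side API key is attached to the upstream request, so clients never need to know it.

```javascript
// Illustrative sketch of the forwarding step. OPENAI_BASE and
// buildUpstreamRequest are assumed names, not the project's real code.
const OPENAI_BASE = 'https://api.openai.com';

function buildUpstreamRequest(path, body, apiKey) {
  return {
    url: `${OPENAI_BASE}${path}`, // e.g. /v1/chat/completions
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // The key is injected server-side from the environment,
      // so it never appears in client-side code.
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  };
}
```

The proxy then sends this request to OpenAI and relays the answer back to the client.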
- Node.js (version 14.x or later)
- npm (or yarn)
- Docker (optional, if you want to run it in a container)
```bash
git clone https://github.com/your-username/openai-reverse-proxy.git
cd openai-reverse-proxy
npm install
```
Create a `.env` file in the project root and add your OpenAI API key:

```bash
touch .env
```

Then, add the following content to the `.env` file:

```
OPENAI_API_KEY=sk-your-openai-api-key
```
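On the server side, the key is then read from `process.env` (a minimal sketch — `getApiKey` is an illustrative helper, assuming a loader such as dotenv has already populated the environment):

```javascript
// Illustrative helper: resolve the OpenAI API key from the environment.
// Failing fast when the key is missing avoids confusing 401s from OpenAI.
function getApiKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key) throw new Error('OPENAI_API_KEY is not set');
  return key;
}
```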
Start the server locally:

```bash
npm start
```

By default, the server runs on `http://localhost:3000`.
The `/api/openai/v1/chat/completions` route forwards requests to OpenAI's `chat/completions` API. You can send a request to the reverse proxy like this:
```bash
curl -X POST http://localhost:3000/api/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Say hello to the world."}],
    "max_tokens": 10
  }'
```
```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1634766363,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello, world!"
      },
      "finish_reason": "stop"
    }
  ]
}
```
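Client code usually only needs the assistant's text. Given the response shape above, it can be pulled out like this (`extractReply` is an illustrative helper, not part of the proxy):

```javascript
// Pull the assistant's text out of a chat-completion response.
function extractReply(completion) {
  // In the standard layout, the first choice holds the assistant message.
  return completion.choices[0].message.content;
}
```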
You can also test the basic root route by accessing `http://localhost:3000/`. You should receive:

```
Welcome to the OpenAI Reverse Proxy!
```
If you prefer to run the reverse proxy in a Docker container:
```bash
docker build -t openai-reverse-proxy .
docker run -d -p 3000:3000 --env-file .env openai-reverse-proxy
```

This will run the proxy on `http://localhost:3000`.
If there is an error communicating with the OpenAI API, the proxy returns a `500 Internal Server Error` with details in the response body:
```json
{
  "error": {
    "message": "You must provide a model parameter.",
    "type": "invalid_request_error",
    "code": null
  }
}
```
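One way to implement this translation, sketched with illustrative names (the real handler may differ, and the axios-style `err.response` shape is an assumption): include OpenAI's error body when it is available so clients can debug, and fall back to a generic message otherwise.

```javascript
// Illustrative sketch: map an upstream failure to the proxy's 500 response.
function toClientError(err) {
  // err.response is assumed to follow the axios convention of carrying
  // the upstream status and parsed body of a failed HTTP call.
  const body = err.response && err.response.data
    ? err.response.data // relay OpenAI's own error details
    : { error: { message: 'Internal Server Error' } };
  return { status: 500, body };
}
```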
Feel free to open an issue or submit a pull request for any improvements or bug fixes.
This project is licensed under the MIT License. See the LICENSE file for details.