
Commit

add mocked api
Emil Elgaard committed Mar 24, 2023
1 parent fddde94 commit c96c9f1
Showing 5 changed files with 1,735 additions and 0 deletions.
5 changes: 5 additions & 0 deletions README.md
@@ -44,6 +44,11 @@ git subtree pull --prefix src/awesome-chatgpt-prompts https://github.com/f/aweso
docker compose up -d
```

## Mocked API

If you don't want to wait for the real API to respond, you can use the mocked API instead. To use it, create a `.env` file at the root of the project containing `VITE_API_BASE=http://localhost:5174`. You can control the response delay by including `d` followed by a number in your prompt; the response will be delayed by that many seconds. You can control the response length by including `l` followed by a number; the response will contain that many sentences. For example, `d2 l10` gives a 2-second delay and a 10-sentence response.
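
As a quick sanity check, here is a minimal sketch of a request that exercises both hints (assuming the mocked API container is up on port 5174 and the Python `requests` package is installed — the latter is not part of this project):

```python
import time

import requests  # assumed to be installed; any HTTP client works

# "d2 l10" asks the mock for a 2-second delay and a 10-sentence reply.
payload = {"messages": [{"role": "user", "content": "d2 l10"}]}

start = time.time()
resp = requests.post("http://localhost:5174/v1/chat/completions", json=payload)
answer = resp.json()["choices"][0]["message"]["content"]

# The mock joins sentences with newlines, so counting lines counts sentences.
print(f"{time.time() - start:.1f}s elapsed, {len(answer.splitlines())} sentences")
```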

## Desktop app

You can also use ChatGPT-web as a desktop app. To do so, [install Rust first](https://www.rust-lang.org/tools/install). Then, simply run `npm run tauri dev` for the development version or `npm run tauri build` for the production version of the desktop app. The desktop app will be built in the `src-tauri/target` folder.
11 changes: 11 additions & 0 deletions docker-compose.yml
@@ -4,8 +4,19 @@ services:
  chatgpt_web:
    container_name: chatgpt_web
    restart: always
    env_file:
      - .env
    ports:
      - 5173:5173
    build:
      context: "."
      dockerfile: Dockerfile

  mocked_api:
    container_name: mocked_api
    build:
      context: "."
      dockerfile: mocked_api/Dockerfile-mockapi
    restart: always
    ports:
      - 5174:5174
9 changes: 9 additions & 0 deletions mocked_api/Dockerfile-mockapi
@@ -0,0 +1,9 @@
FROM python:3.10-slim-buster
WORKDIR /work

RUN pip install fastapi uvicorn lorem-text
COPY mocked_api/mock_api.py .
COPY mocked_api/models_response.json .

CMD ["uvicorn", "mock_api:app", "--host", "0.0.0.0", "--port", "5174"]

73 changes: 73 additions & 0 deletions mocked_api/mock_api.py
@@ -0,0 +1,73 @@
import json
import re
import time

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from lorem_text import lorem

app = FastAPI()

# Add CORS middleware to allow requests from any origin
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)


# Define a route to handle chat completion POST requests
@app.post("/v1/chat/completions")
async def post_data(data: dict):
    """Returns mock responses for testing purposes."""

    messages = data['messages']
    instructions = messages[-1]['content']

    delay = 0
    lines = None
    answer = 'Default mock answer from mocked API'

    # Look for "d<number>" in the prompt to control the delay in seconds
    delay_match = re.search(r'(?<=d)\d+', instructions)
    if delay_match:
        delay = int(delay_match.group())

    # Look for "l<number>" in the prompt to control the number of sentences
    lines_match = re.search(r'(?<=l)\d+', instructions)
    if lines_match:
        lines = int(lines_match.group())

    if delay:
        time.sleep(delay)

    if lines:
        answer = "\n".join(lorem.sentence() for _ in range(lines))

    # Mirror the shape of an OpenAI chat completion response
    response = {
        "id": 0,
        "choices": [{
            "index": 0,
            "finish_reason": "stop",
            "message": {"content": answer, "role": "assistant"}
        }]
    }
    return response


@app.get('/v1/models')
async def list_models():
    """Returns a canned list of models so the app works against the mock."""
    with open('/work/models_response.json') as f:
        result = json.load(f)

    return result


@app.post('/')
async def echo_data(data: dict):
    """Basic route for checking that the API is reachable."""
    result = {"message": "Data received", "data": data}
    return result