A bot that accepts PDF documents and lets you ask questions about them.
The LLMs are downloaded and served via Ollama.
```shell
make start       # start the app (CPU-only)
make start-gpu   # start the app with Nvidia GPU support
```
When the server is up and running, access the app at: http://localhost:8501
Switch to a different model by changing the `MODEL` env variable in `docker-compose.yaml`. Browse the available models in the Ollama model library.
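A minimal sketch of where that variable lives; the service name and model tag below are illustrative, and the repo's actual `docker-compose.yaml` may differ:

```yaml
services:
  app:
    environment:
      - MODEL=mistral   # any tag from the Ollama model library
```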
Note:
- It can take a while to start up, since the specified model is downloaded the first time.
- If your hardware has no GPU and you choose to run on CPU only, expect slow responses from the bot.
- Only Nvidia GPUs are supported, as mentioned in Ollama's documentation; others, such as AMD, aren't supported yet. Read up on how to use a GPU with the Ollama container and docker-compose; see the sketch after this list.
- Make sure the Nvidia drivers are set up on your execution environment for the best results.
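For reference, the standard Docker Compose way to hand an Nvidia GPU to the Ollama container is a device reservation. This is a sketch of that generic config, not necessarily how this repo's `make start-gpu` target wires it up, and it assumes the NVIDIA Container Toolkit is installed on the host:

```yaml
services:
  ollama:
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```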
Docker image on Docker Hub: https://hub.docker.com/r/amithkoujalgi/pdf-bot
Demo video: PDF.Bot.Demo.mp4
- Expose model params such as `temperature`, `top_k`, and `top_p` as configurable env vars (a sketch follows below)
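A hypothetical sketch of how that improvement could look, assuming the LangChain Ollama wrapper; the env var names `OLLAMA_BASE_URL`, `TEMPERATURE`, `TOP_K`, and `TOP_P` are placeholders of our choosing, not part of the current app:

```python
import os

from langchain_community.llms import Ollama

# Hypothetical env-var wiring; fallback values mirror Ollama's default
# sampling parameters (temperature 0.8, top_k 40, top_p 0.9)
llm = Ollama(
    base_url=os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
    model=os.getenv("MODEL", "mistral"),
    temperature=float(os.getenv("TEMPERATURE", "0.8")),
    top_k=int(os.getenv("TOP_K", "40")),
    top_p=float(os.getenv("TOP_P", "0.9")),
)
```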
Contributions are most welcome! Whether it's reporting a bug, proposing an enhancement, or helping with code - any sort of contribution is much appreciated.
```shell
# Serve Ollama locally, persisting downloaded models under ~/ollama
docker run -it -v ~/ollama:/root/.ollama -p 11434:11434 ollama/ollama
# Install dependencies and launch the Streamlit app
pip install -r requirements.txt
streamlit run pdf_bot/app.py
```
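When running this way, the model the app uses must be available in Ollama. One way is to pull it through the running container; the container name and model tag below are placeholders:

```shell
# Pull the model into the Ollama server started above; substitute your
# own container name/ID and the model tag you set via MODEL
docker exec -it <ollama-container> ollama pull mistral
```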
Thanks to the incredible Ollama, LangChain, and Streamlit projects.