This repo uses Ollama, which you should download and install for your OS first.
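You will also need at least one model pulled locally. For example (the model name here is only an illustration; use whichever model you prefer):

```bash
ollama pull llama3
```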
Install the requirements:

```bash
pip install -r requirements.txt
```
Run the app:

```bash
streamlit run app.py
```
With a local LLM of your choice you can chat, get help with code, and so on. You can also use the RAG page to generate responses grounded in your own documents.
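As a rough sketch of what a chat call to a local model looks like through llama-index's Ollama wrapper (the model name is an example, not necessarily what the app uses):

```python
from llama_index.llms.ollama import Ollama
from llama_index.core.llms import ChatMessage

# Talk to a locally running Ollama server; "llama3" is an example model name.
llm = Ollama(model="llama3", request_timeout=120.0)

messages = [
    ChatMessage(role="system", content="You are a helpful coding assistant."),
    ChatMessage(role="user", content="Explain list comprehensions in Python."),
]
print(llm.chat(messages))
```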
- LLMs are made available locally through Ollama
- LLM integration uses llama-index (see the sketch after this list)
- chromadb provides the vector store for document embeddings
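Putting those pieces together, a minimal RAG pipeline over this stack might look roughly like the following. The directory paths, model names, and collection name are assumptions for illustration, not the repo's actual values:

```python
import chromadb
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llama_index.vector_stores.chroma import ChromaVectorStore
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.ollama import OllamaEmbedding

# Local LLM and embedding model served by Ollama (names are examples).
llm = Ollama(model="llama3", request_timeout=120.0)
embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# Persistent chromadb collection holding the document embeddings.
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection("docs")
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Load documents, index them into chromadb, and query through the local LLM.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, embed_model=embed_model
)
query_engine = index.as_query_engine(llm=llm)
print(query_engine.query("What does the document say about X?"))
```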