# Conversational RAG with memory

This example demonstrates how to build a conversational RAG agent with "memory".

The "memory" here is stored in state, which Burr then can help you track, manage, and introspect.

The setup of this example is that you have:

  1. Some initial "documents", i.e. knowledge.
  2. A vector store bootstrapped with these documents.
  3. A pipeline that uses the vector store for a RAG query. This example uses a pre-made conversational RAG pipeline; the prompt isn't hidden under layers of abstraction.
  4. Burr to hook everything together, managing the state of the conversation and asking for user inputs.
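The flow above can be sketched in plain Python without any libraries. This is an illustration only, not Burr's actual API: the toy bag-of-words "embedding", the `retrieve` helper, and the `state` dict are all stand-ins for the real vector store, RAG pipeline, and Burr-managed state.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector (illustration only)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Steps 1-2: bootstrap a tiny "vector store" with initial documents.
documents = [
    "harrison worked at kensho",
    "stefan worked at dagworks",
]
store = [(doc, embed(doc)) for doc in documents]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Step 3: return the k documents most similar to the question."""
    q = embed(question)
    ranked = sorted(store, key=lambda d: cosine(q, d[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Step 4: a conversation with explicit state ("memory"). In the real
# example, Burr tracks and manages this state and an LLM produces the
# answer; here we just stitch the retrieved context into a canned reply.
state = {"chat_history": []}

def answer(question: str, state: dict) -> str:
    context = retrieve(question)
    response = f"Based on: {context[0]}"
    state["chat_history"].append({"q": question, "a": response})
    return response

print(answer("where did harrison work?", state))
```

The key design point mirrored here is that the conversation history lives in one explicit state object rather than being hidden inside the pipeline, which is what lets Burr track, manage, and introspect it.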

To run this example, install Burr and the necessary dependencies:

```bash
pip install "burr[start]" -r requirements.txt
```

Then run the server in the background:

```bash
burr
```

Make sure you have an `OPENAI_API_KEY` set in your environment.
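For example, on Linux or macOS you can export it in your shell before running the example (the key value below is a placeholder; substitute your own):

```shell
# Placeholder value -- replace with your actual OpenAI API key.
export OPENAI_API_KEY="sk-your-key-here"
```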

Then run:

```bash
python application.py
```

You'll then have a text terminal where you can interact. Type `exit` to stop.

## Application That's Defined

(Application graph image)

## Video Walkthrough via Notebook

Open the notebook (Open In Colab).

Watch the video walkthrough with the notebook (1.5x+ speed recommended).