Commit 3eabab4: Update README.md (ekuka86, Jul 31, 2024)
# Open-Memory

## Introduction

In response to the privacy concerns raised by Microsoft's "Windows Recall" feature, which records and analyzes on-screen activity, we have developed an open-source pilot that offers similar functionality without ever sending data off the user's device.

## Usage Flow

- The user runs `screenshot-desktop.py` whenever they want to start recording their screens.
- Each screenshot is processed by the DocTR OCR tool; the extracted text is stored in the ChromaDB vector database along with embeddings generated by the JinaAI embedding model.
- When the user has a query or needs to retrieve relevant information, they can input their request into `LLM Prompt`.
- The query is then processed by the JinaAI reranker model, which retrieves the most relevant text chunks from the ChromaDB database based on the semantic similarity of the embeddings.
- The retrieved information is presented to the user, providing them with the necessary context and insights without ever leaving their local device.
- This approach ensures that all data processing and storage remain under the user's control, eliminating the risks associated with cloud-based solutions and empowering users to maintain their privacy.
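
The retrieve-by-similarity step above can be sketched in miniature. The snippet below is a toy stand-in, not the project's code: plain word-count vectors replace the JinaAI embeddings and a cosine-similarity sort replaces the reranker, but the store / embed / retrieve shape matches the flow described.

```python
import math
from collections import Counter

# Text chunks standing in for OCR output extracted from screenshots.
chunks = [
    "invoice draft opened in email client",
    "python traceback in terminal window",
]

# Build a shared vocabulary; each text becomes a word-count vector.
vocab = sorted({w for c in chunks for w in c.lower().split()})

def embed(text: str) -> list[float]:
    """Toy embedding: vocabulary word counts (a real model goes here)."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.hypot(*a), math.hypot(*b)
    return dot / (na * nb) if na and nb else 0.0

# "Index" each chunk alongside its embedding, as ChromaDB would.
index = [(c, embed(c)) for c in chunks]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

print(retrieve("python error traceback"))  # → ['python traceback in terminal window']
```

In the real pipeline, the embedding model captures semantic similarity rather than word overlap, so a query can match a chunk even when they share no words.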

## Setup Guide


### Prerequisites


- Python 3.11
- Docker
- Docker Compose
- Make


### Installation Steps


1. **Install Docker:**


Follow the official Docker installation guide for your operating system:
- [Docker for Windows](https://docs.docker.com/docker-for-windows/install/)
- [Docker for macOS](https://docs.docker.com/docker-for-mac/install/)
- [Docker for Linux](https://docs.docker.com/engine/install/)

After installing Docker, ensure it's running correctly by executing:
```sh
docker --version
```

2. **Install Docker Compose:**

Docker Compose is included in Docker Desktop for Windows and macOS. For Linux, you can install it by following these steps:
```sh
sudo curl -L "https://github.com/docker/compose/releases/download/v2.11.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```

Verify the installation:
```sh
docker-compose --version
```

3. **Install Make:**

Depending on your operating system, install Make:

- **Windows:** Install [Make for Windows](http://gnuwin32.sourceforge.net/packages/make.htm).
- **macOS:** Use Homebrew (if not installed, follow [Homebrew installation](https://brew.sh/)):
```sh
brew install make
```
- **Linux:** Install Make using your distribution's package manager:
```sh
sudo apt-get install build-essential # For Debian/Ubuntu
sudo yum group install 'Development Tools' # For CentOS/RHEL
```
4. **Clone the Repository and Install Python Dependencies:**

```sh
git clone https://github.com/data-max-hq/open-memory.git
cd open-memory
pip install pyautogui
```
5. **Build and Run the Docker Containers:**
```sh
make build
make run
```
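
The `make build` and `make run` targets are defined in the repository's Makefile. As a rough idea of what such targets typically wrap (this is a hypothetical sketch, not the project's actual Makefile; recipe lines must be indented with tabs):

```make
# Hypothetical minimal Makefile wrapping Docker Compose.
build:
	docker-compose build

run:
	docker-compose up -d
```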
6. **Access the Application:**
After the containers are up and running, open your web browser and navigate to:
```
http://localhost:8080
```
In the "ADD LLM" section, type the LLM you want to use (we recommend `qwen2:1.5b` or `qwen2:0.5b`) and press "Add LLM".

## Functionality

- **Capture the screen:** Run `screenshot-desktop.py` to start capturing the screen.
- **Query ChromaDB:** Test the database to retrieve relevant pieces of context for a specific query.
- **LLM Prompt:** Pass a query to QWEN2:1.5b to get an explanation of what the user was doing based on the retrieved context.
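
As a sketch of the "LLM Prompt" step, the snippet below shows one plausible way chunks retrieved from the database could be folded into a prompt for the local model. The template, helper name, and chunk texts are illustrative assumptions, not the project's actual code.

```python
# Hypothetical prompt assembly: join retrieved screen-text chunks into
# context, then append the user's question for the local LLM.
def build_prompt(query: str, context_chunks: list[str]) -> str:
    context = "\n".join(f"- {chunk}" for chunk in context_chunks)
    return (
        "You are given text captured from the user's screen:\n"
        f"{context}\n\n"
        f"Based on this context, answer: {query}"
    )

prompt = build_prompt(
    "What was the user working on?",
    ["invoice draft opened in email client", "reply composed to accounting"],
)
print(prompt)
```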
