From 7e7a4439bb954f0e408b67484965d8b8deed6bd1 Mon Sep 17 00:00:00 2001
From: Ikko Eltociear Ashimine
Date: Wed, 7 Jun 2023 01:42:53 +0900
Subject: [PATCH] Update README.md

continous -> continuous
---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index ede29bc2..a0c3cb3a 100644
--- a/README.md
+++ b/README.md
@@ -32,7 +32,7 @@ This README will cover the following:
 
 - [Supported Models](#supported-models)
 
-- [Warning about running the script continuously](#continous-script-warning)
+- [Warning about running the script continuously](#continuous-script-warning)
 
 # How It Works
 
@@ -92,7 +92,7 @@ Llama integration requires llama-cpp package. You will also need the Llama model weights.
 
 Once you have them, set LLAMA_MODEL_PATH to the path of the specific model to use. For convenience, you can link `models` in BabyAGI repo to the folder where you have the Llama model weights. Then run the script with `LLM_MODEL=llama` or `-l` argument.
 
-# Warning<a name="continous-script-warning"></a>
+# Warning<a name="continuous-script-warning"></a>
 
 This script is designed to be run continuously as part of a task management system. Running this script continuously can result in high API usage, so please use it responsibly. Additionally, the script requires the OpenAI API to be set up correctly, so make sure you have set up the API before running the script.