diff --git a/README.md b/README.md
index 36de9bced..d05a6e335 100644
--- a/README.md
+++ b/README.md
@@ -173,6 +173,14 @@ OpenLLM currently supports the following models. By default, OpenLLM doesn't inc
 
 ### Quickstart
 
+
+
+> **Note:** Baichuan requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[baichuan]"
+> ```
+
+
 Run the following command to quickly spin up a Baichuan server:
 
 ```bash
@@ -244,6 +252,14 @@ openllm start baichuan-inc/baichuan-7b --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** ChatGLM requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[chatglm]"
+> ```
+
+
 Run the following command to quickly spin up a ChatGLM server:
 
 ```bash
@@ -365,6 +381,14 @@ openllm start databricks/dolly-v2-3b --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** Falcon requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[falcon]"
+> ```
+
+
 Run the following command to quickly spin up a Falcon server:
 
 ```bash
@@ -434,6 +458,14 @@ openllm start tiiuae/falcon-7b --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** FlanT5 requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[flan-t5]"
+> ```
+
+
 Run the following command to quickly spin up a FlanT5 server:
 
 ```bash
@@ -553,6 +585,14 @@ openllm start eleutherai/gpt-neox-20b --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** Llama requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[llama]"
+> ```
+
+
 Run the following command to quickly spin up a Llama server:
 
 ```bash
@@ -699,6 +739,14 @@ openllm start HuggingFaceH4/zephyr-7b-alpha --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** MPT requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[mpt]"
+> ```
+
+
 Run the following command to quickly spin up a MPT server:
 
 ```bash
@@ -771,6 +819,14 @@ openllm start mosaicml/mpt-7b --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** OPT requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[opt]"
+> ```
+
+
 Run the following command to quickly spin up a OPT server:
 
 ```bash
@@ -911,6 +967,14 @@ openllm start stabilityai/stablelm-tuned-alpha-3b --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** StarCoder requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[starcoder]"
+> ```
+
+
 Run the following command to quickly spin up a StarCoder server:
 
 ```bash
diff --git a/openllm-python/README.md b/openllm-python/README.md
index 36de9bced..d05a6e335 100644
--- a/openllm-python/README.md
+++ b/openllm-python/README.md
@@ -173,6 +173,14 @@ OpenLLM currently supports the following models. By default, OpenLLM doesn't inc
 
 ### Quickstart
 
+
+
+> **Note:** Baichuan requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[baichuan]"
+> ```
+
+
 Run the following command to quickly spin up a Baichuan server:
 
 ```bash
@@ -244,6 +252,14 @@ openllm start baichuan-inc/baichuan-7b --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** ChatGLM requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[chatglm]"
+> ```
+
+
 Run the following command to quickly spin up a ChatGLM server:
 
 ```bash
@@ -365,6 +381,14 @@ openllm start databricks/dolly-v2-3b --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** Falcon requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[falcon]"
+> ```
+
+
 Run the following command to quickly spin up a Falcon server:
 
 ```bash
@@ -434,6 +458,14 @@ openllm start tiiuae/falcon-7b --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** FlanT5 requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[flan-t5]"
+> ```
+
+
 Run the following command to quickly spin up a FlanT5 server:
 
 ```bash
@@ -553,6 +585,14 @@ openllm start eleutherai/gpt-neox-20b --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** Llama requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[llama]"
+> ```
+
+
 Run the following command to quickly spin up a Llama server:
 
 ```bash
@@ -699,6 +739,14 @@ openllm start HuggingFaceH4/zephyr-7b-alpha --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** MPT requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[mpt]"
+> ```
+
+
 Run the following command to quickly spin up a MPT server:
 
 ```bash
@@ -771,6 +819,14 @@ openllm start mosaicml/mpt-7b --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** OPT requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[opt]"
+> ```
+
+
 Run the following command to quickly spin up a OPT server:
 
 ```bash
@@ -911,6 +967,14 @@ openllm start stabilityai/stablelm-tuned-alpha-3b --backend pt
 
 ### Quickstart
 
+
+
+> **Note:** StarCoder requires installing OpenLLM with:
+> ```bash
+> pip install "openllm[starcoder]"
+> ```
+
+
 Run the following command to quickly spin up a StarCoder server:
 
 ```bash
diff --git a/tools/update-readme.py b/tools/update-readme.py
index 6a57306e1..e8ef044bc 100755
--- a/tools/update-readme.py
+++ b/tools/update-readme.py
@@ -33,12 +33,13 @@ def main() -> int:
   for it in CONFIG_MAPPING.values():
     it = it()
-    details_block = ['<details>\n']
     architecture_name = it.__class__.__name__[:-6]
+    details_block = ['<details>\n', f'<summary>{architecture_name}</summary>\n\n', '### Quickstart\n']
+    if it['start_name'] in deps:
+      instruction = f'> ```bash\n> pip install "openllm[{it["start_name"]}]"\n> ```'
+      details_block.extend(markdown_noteblock(f'{architecture_name} requires installing OpenLLM with:\n{instruction}\n'))
     details_block.extend(
       [
-        f'{architecture_name}\n\n',
-        '### Quickstart\n',
         f'Run the following command to quickly spin up a {architecture_name} server:\n',
         f"""\
 ```bash
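For reference, the new branch in `tools/update-readme.py` can be exercised on its own. The sketch below is a minimal stand-in, not the repository's code: `markdown_noteblock` is assumed here to prefix only the first line with `> **Note:** ` and pass the already-quoted instruction lines through unchanged (the real helper may differ), and the hypothetical `install_note` mirrors the f-strings added in the diff:

```python
def markdown_noteblock(text: str) -> list[str]:
    # Assumed behavior of the repo's helper: blockquote the first line with a
    # bold "Note:" prefix; later lines arrive pre-quoted and pass through.
    first, _, rest = text.partition('\n')
    block = ['\n', f'> **Note:** {first}\n']
    if rest.strip('\n'):
        block.extend(line + '\n' for line in rest.strip('\n').split('\n'))
    return block

def install_note(architecture_name: str, start_name: str) -> str:
    # Hypothetical wrapper mirroring the f-strings the diff adds to the loop.
    instruction = f'> ```bash\n> pip install "openllm[{start_name}]"\n> ```'
    return ''.join(
        markdown_noteblock(
            f'{architecture_name} requires installing OpenLLM with:\n{instruction}\n'
        )
    )

note = install_note('Baichuan', 'baichuan')
```

Joining the returned lines reproduces the blockquote that now appears above each model's Quickstart section in both READMEs.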