Add scripts to generate manifests from helm charts
The Kubernetes manifests are generated with "helm template".
Add scripts to do that, to avoid maintaining the manifests by hand.

Co-authored-by: Lianhao Lu <lianhao.lu@intel.com>
Signed-off-by: Dolpher Du <dolpher.du@intel.com>
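
The rendering step described in the commit message can be sketched as follows; the chart name and output path below are illustrative examples, the actual values live in the script added by this commit.

```shell
# Sketch of the "helm template" rendering step (chart and output
# directory names are examples, not the script's exact values).
chart=chatqna
outdir=ChatQnA/kubernetes/manifests
cmd="helm template $chart $chart --skip-tests -f $chart/values.yaml"
# The manifest would then be written with: $cmd > $outdir/xeon/$chart.yaml
echo "$cmd"
```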
yongfengdu and lianhao committed Aug 20, 2024
1 parent c06bcea commit 273cb1d
Showing 9 changed files with 82 additions and 48 deletions.
12 changes: 5 additions & 7 deletions helm-charts/chatqna/README.md
@@ -50,10 +50,8 @@ curl http://localhost:8888/v1/chatqna \
 
 ## Values
 
-| Key                             | Type   | Default                       | Description |
-| ------------------------------- | ------ | ----------------------------- | ----------- |
-| image.repository                | string | `"opea/chatqna"`              |             |
-| service.port                    | string | `"8888"`                      |             |
-| global.HUGGINGFACEHUB_API_TOKEN | string | `""`                          | Your own Hugging Face API token |
-| global.modelUseHostPath         | string | `"/mnt/opea-models"`          | Cached models directory, tgi will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory. Set this to empty/null will force it to download models every time. |
-| tgi.LLM_MODEL_ID                | string | `"Intel/neural-chat-7b-v3-3"` | Models id from https://huggingface.co/, or predownloaded model directory |
+| Key              | Type   | Default                       | Description |
+| ---------------- | ------ | ----------------------------- | ----------- |
+| image.repository | string | `"opea/chatqna"`              |             |
+| service.port     | string | `"8888"`                      |             |
+| tgi.LLM_MODEL_ID | string | `"Intel/neural-chat-7b-v3-3"` | Models id from https://huggingface.co/, or predownloaded model directory |
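
Any key in the values table above can be overridden at install time with `--set`; a minimal sketch, shown as a command string rather than executed since it needs a live cluster (the release name `chatqna` and chart path are examples):

```shell
# Hypothetical override of a documented chatqna value at install time.
cmd="helm install chatqna ./chatqna --set tgi.LLM_MODEL_ID=Intel/neural-chat-7b-v3-3"
echo "$cmd"
```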
9 changes: 4 additions & 5 deletions helm-charts/chatqna/values.yaml
@@ -44,8 +44,7 @@ global:
   https_proxy: ""
   no_proxy: ""
   HUGGINGFACEHUB_API_TOKEN: "insert-your-huggingface-token-here"
-  LANGCHAIN_TRACING_V2: false
-  LANGCHAIN_API_KEY: "insert-your-langchain-key-here"
-  # set modelUseHostPath to host directory if you want to use hostPath volume for model storage
-  # comment out modeluseHostPath if you want to download the model from huggingface
-  modelUseHostPath: /mnt/opea-models
+  # set modelUseHostPath or modelUsePVC to use model cache.
+  modelUseHostPath: ""
+  # modelUseHostPath: /mnt/opea-models
+  # modelUsePVC: model-volume
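
The model cache settings introduced in this hunk offer two alternatives; a hedged sketch of what enabling each one could look like (the PVC name is an example, not a value shipped by the chart):

```yaml
global:
  # Option 1: mount a host directory as the model cache (hostPath volume).
  modelUseHostPath: /mnt/opea-models
  # Option 2: use an existing PersistentVolumeClaim instead (name is an example).
  # modelUsePVC: model-volume
```

Leaving both empty/commented out means the model is downloaded at startup instead of read from a cache.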
12 changes: 5 additions & 7 deletions helm-charts/codegen/README.md
@@ -43,10 +43,8 @@ curl http://localhost:7778/v1/codegen \
 
 ## Values
 
-| Key                             | Type   | Default                        | Description |
-| ------------------------------- | ------ | ------------------------------ | ----------- |
-| image.repository                | string | `"opea/codegen"`               |             |
-| service.port                    | string | `"7778"`                       |             |
-| global.HUGGINGFACEHUB_API_TOKEN | string | `""`                           | Your own Hugging Face API token |
-| global.modelUseHostPath         | string | `"/mnt/opea-models"`           | Cached models directory, tgi will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory. Set this to empty/null will force it to download models every time. |
-| tgi.LLM_MODEL_ID                | string | `"meta-llama/CodeLlama-7b-hf"` | Models id from https://huggingface.co/, or predownloaded model directory |
+| Key              | Type   | Default                        | Description |
+| ---------------- | ------ | ------------------------------ | ----------- |
+| image.repository | string | `"opea/codegen"`               |             |
+| service.port     | string | `"7778"`                       |             |
+| tgi.LLM_MODEL_ID | string | `"meta-llama/CodeLlama-7b-hf"` | Models id from https://huggingface.co/, or predownloaded model directory |
9 changes: 4 additions & 5 deletions helm-charts/codegen/values.yaml
@@ -44,8 +44,7 @@ global:
   https_proxy: ""
   no_proxy: ""
   HUGGINGFACEHUB_API_TOKEN: "insert-your-huggingface-token-here"
-  LANGCHAIN_TRACING_V2: false
-  LANGCHAIN_API_KEY: "insert-your-langchain-key-here"
-  # set modelUseHostPath to host directory if you want to use hostPath volume for model storage
-  # comment out modeluseHostPath if you want to download the model from huggingface
-  modelUseHostPath: /mnt/opea-models
+  # set modelUseHostPath or modelUsePVC to use model cache.
+  modelUseHostPath: ""
+  # modelUseHostPath: /mnt/opea-models
+  # modelUsePVC: model-volume
12 changes: 5 additions & 7 deletions helm-charts/codetrans/README.md
@@ -36,10 +36,8 @@ curl http://localhost:7777/v1/codetrans \
 
 ## Values
 
-| Key                             | Type   | Default                           | Description |
-| ------------------------------- | ------ | --------------------------------- | ----------- |
-| image.repository                | string | `"opea/codetrans"`                |             |
-| service.port                    | string | `"7777"`                          |             |
-| global.HUGGINGFACEHUB_API_TOKEN | string | `""`                              | Your own Hugging Face API token |
-| global.modelUseHostPath         | string | `"/mnt/opea-models"`              | Cached models directory, tgi will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory |
-| tgi.LLM_MODEL_ID                | string | `"HuggingFaceH4/mistral-7b-grok"` | Models id from https://huggingface.co/, or predownloaded model directory |
+| Key              | Type   | Default                           | Description |
+| ---------------- | ------ | --------------------------------- | ----------- |
+| image.repository | string | `"opea/codetrans"`                |             |
+| service.port     | string | `"7777"`                          |             |
+| tgi.LLM_MODEL_ID | string | `"HuggingFaceH4/mistral-7b-grok"` | Models id from https://huggingface.co/, or predownloaded model directory |
9 changes: 4 additions & 5 deletions helm-charts/codetrans/values.yaml
@@ -45,8 +45,7 @@ global:
   https_proxy: ""
   no_proxy: ""
   HUGGINGFACEHUB_API_TOKEN: "insert-your-huggingface-token-here"
-  LANGCHAIN_TRACING_V2: false
-  LANGCHAIN_API_KEY: "insert-your-langchain-key-here"
-  # set modelUseHostPath to host directory if you want to use hostPath volume for model storage
-  # comment out modeluseHostPath if you want to download the model from huggingface
-  modelUseHostPath: /mnt/opea-models
+  # set modelUseHostPath or modelUsePVC to use model cache.
+  modelUseHostPath: ""
+  # modelUseHostPath: /mnt/opea-models
+  # modelUsePVC: model-volume
12 changes: 5 additions & 7 deletions helm-charts/docsum/README.md
@@ -36,10 +36,8 @@ curl http://localhost:8888/v1/docsum \
 
 ## Values
 
-| Key                             | Type   | Default                       | Description |
-| ------------------------------- | ------ | ----------------------------- | ----------- |
-| image.repository                | string | `"opea/docsum"`               |             |
-| service.port                    | string | `"8888"`                      |             |
-| global.HUGGINGFACEHUB_API_TOKEN | string | `""`                          | Your own Hugging Face API token |
-| global.modelUseHostPath         | string | `"/mnt/opea-models"`          | Cached models directory, tgi will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory |
-| tgi.LLM_MODEL_ID                | string | `"Intel/neural-chat-7b-v3-3"` | Models id from https://huggingface.co/, or predownloaded model directory |
+| Key              | Type   | Default                       | Description |
+| ---------------- | ------ | ----------------------------- | ----------- |
+| image.repository | string | `"opea/docsum"`               |             |
+| service.port     | string | `"8888"`                      |             |
+| tgi.LLM_MODEL_ID | string | `"Intel/neural-chat-7b-v3-3"` | Models id from https://huggingface.co/, or predownloaded model directory |
9 changes: 4 additions & 5 deletions helm-charts/docsum/values.yaml
@@ -50,8 +50,7 @@ global:
   https_proxy: ""
   no_proxy: ""
   HUGGINGFACEHUB_API_TOKEN: "insert-your-huggingface-token-here"
-  LANGCHAIN_TRACING_V2: false
-  LANGCHAIN_API_KEY: "insert-your-langchain-key-here"
-  # set modelUseHostPath to host directory if you want to use hostPath volume for model storage
-  # comment out modeluseHostPath if you want to download the model from huggingface
-  modelUseHostPath: /mnt/opea-models
+  # set modelUseHostPath or modelUsePVC to use model cache.
+  modelUseHostPath: ""
+  # modelUseHostPath: /mnt/opea-models
+  # modelUsePVC: model-volume
46 changes: 46 additions & 0 deletions helm-charts/update_genaiexamples.sh
@@ -0,0 +1,46 @@
#!/bin/bash

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

CUR_DIR=$(cd "$(dirname "$0")" && pwd)
MODELPATH="/mnt/opea-models"

GENAIEXAMPLEDIR=${CUR_DIR}/../../GenAIExamples

if [ -n "$1" ]; then
  GENAIEXAMPLEDIR=$1
fi

if [ ! -f "$GENAIEXAMPLEDIR/supported_examples.md" ]; then
  echo "Can NOT find GenAIExamples directory."
  echo "Usage: $0 [GenAIExamples dir]"
  exit 1
fi

#
# generate_yaml <chart> <outputdir>
#
function generate_yaml {
  chart=$1
  outputdir=${GENAIEXAMPLEDIR}/$2

  local extraparams="--set global.modelUseHostPath=${MODELPATH},image.tag=latest,asr.image.tag=latest,data-prep.image.tag=latest,embedding-usvc.image.tag=latest,llm-uservice.image.tag=latest,reranking-usvc.image.tag=latest,retriever-usvc.image.tag=latest,speecht5.image.tag=latest,tts.image.tag=latest,web-retriever.image.tag=latest,whisper.image.tag=latest"

  helm dependency update "$chart"
  helm template "$chart" "$chart" --skip-tests $extraparams -f "$chart/values.yaml" > "$outputdir/xeon/${chart}.yaml"
  helm template "$chart" "$chart" --skip-tests $extraparams -f "$chart/gaudi-values.yaml" > "$outputdir/gaudi/${chart}.yaml"
}

"${CUR_DIR}/update_dependency.sh"
pushd "${CUR_DIR}"
generate_yaml chatqna ChatQnA/kubernetes/manifests
generate_yaml codegen CodeGen/kubernetes/manifests
generate_yaml codetrans CodeTrans/kubernetes/manifests
generate_yaml docsum DocSum/kubernetes/manifests
popd
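
The script's argument handling (default GenAIExamples location, overridden by an optional first argument) can be sketched in isolation; the function and directory names below are illustrative, not part of the script.

```shell
# Mirrors the GENAIEXAMPLEDIR default-or-override logic above
# (function name and paths are illustrative).
pick_dir() {
  dir="../../GenAIExamples"
  [ -n "$1" ] && dir="$1"
  printf '%s\n' "$dir"
}
pick_dir              # prints the default ../../GenAIExamples
pick_dir /tmp/mydir   # prints /tmp/mydir
```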
