
Feature: xorbitsai/inference support #122

Open
@imotai

Description

Context

Support xorbitsai/inference (Xinference) as the model-serving backend, for better and more stable inference performance.

Current Solution
The current client code:

prompt = f"""{self.prefix}
{chat_history}
{self.user_name}: {user_input}
{self.ai_name}:"""
logging.info(f"{prompt}")
data = {
"n_predict": 500,
"grammar": self.grammar,
"prompt": prompt,
"temperature": temperature,
"stream": True,
"repeat_last_n": 256,
"top_p": 0.9,
"stop": [
"</s>",
"\n",
"%s:" % self.ai_name,
"%s:" % self.user_name,
],
}
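
With Xinference, the same request could go to its OpenAI-compatible completions endpoint instead of the raw llama.cpp server. Below is a minimal sketch, assuming the default local port 9997 and a model already launched under the UID `codellama` (see the launch sketch further down); note that llama.cpp-specific options like `grammar` and `repeat_last_n` have no direct equivalent in the OpenAI-compatible API and would need Xinference-side support:

```python
import json
import logging
import requests

# Assumptions: Xinference running locally on its default port, and a model
# already launched with the UID "codellama".
XINFERENCE_URL = "http://127.0.0.1:9997/v1/completions"

prefix = "You are a helpful coding assistant."
user_name, ai_name = "user", "assistant"
prompt = f"{prefix}\n{user_name}: write hello world in C\n{ai_name}:"

data = {
    "model": "codellama",   # model UID chosen at launch time (assumed)
    "prompt": prompt,
    "max_tokens": 500,      # replaces llama.cpp's n_predict
    "temperature": 0.8,
    "top_p": 0.9,
    "stream": True,
    "stop": ["</s>", "\n", f"{ai_name}:", f"{user_name}:"],
}

with requests.post(XINFERENCE_URL, json=data, stream=True) as resp:
    for line in resp.iter_lines():
        # OpenAI-compatible streaming sends "data: {...}" lines and a
        # final "data: [DONE]" sentinel.
        if not line or not line.startswith(b"data:"):
            continue
        payload = line[len(b"data:"):].strip()
        if payload == b"[DONE]":
            break
        chunk = json.loads(payload)
        logging.info(chunk["choices"][0]["text"])
```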

The current llama.cpp server startup script:

echo "start codellama with model name $3"
mkdir -p ${ROOT_DIR}/model_server
cd ${ROOT_DIR}/model_server && hap run -n codellama -- server -m ../model/$3 --alias codellama --host 127.0.0.1 --port 8080
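
On the server side, the llama.cpp process could be replaced by a local Xinference endpoint plus a programmatic model launch. A minimal sketch; the model name, format, and quantization below are placeholders that would have to match the actual model file currently served from `../model/$3`, and the exact launch parameters depend on the Xinference version:

```python
# Prerequisite (assumed): the supervisor starts the endpoint first, e.g.
#   xinference-local --host 127.0.0.1 --port 9997
from xinference.client import Client

client = Client("http://127.0.0.1:9997")

# Launch a CodeLlama model; format/size/quantization are assumptions.
model_uid = client.launch_model(
    model_name="code-llama",
    model_format="ggufv2",
    model_size_in_billions=7,
    quantization="Q4_K_M",
)
print(f"codellama launched with model UID {model_uid}")
```

The returned `model_uid` is what the client then passes as the `model` field in the completions request above.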
