Problems with embedding model #6
13331112522 started this conversation in General
Replies: 2 comments 1 reply
-
I turned off the embedding as:
-
EMBEDDINGS_MODEL_PATH is actually not the embedding model in privateGPT but the LLM; all-MiniLM-L6-v2 from Hugging Face is the default embedding model. So what that path points at is not the embedding model but the LLM.
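If that reading is right, the settings from the question would map roughly as follows (a commented sketch based on this reply's interpretation, not verified against the privateGPT source):

```shell
# EMBEDDINGS_MODEL_NAME selects the sentence-transformers model that actually
# produces the embeddings; EMBEDDINGS_MODEL_PATH, despite its name, points at
# the GGML LLM weights loaded through LlamaCpp or GPT4All.
EMBEDDINGS_MODEL_TYPE=LlamaCpp                    # loader for the local LLM
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2            # the actual embedding model (Hugging Face)
EMBEDDINGS_MODEL_PATH=models/ggml-model-q4_0.bin  # LLM weights, not an embedding model
```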
-
```shell
EMBEDDINGS_MODEL_TYPE=LlamaCpp                    # LlamaCpp or GPT4All
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2            # default: all-MiniLM-L6-v2
EMBEDDINGS_MODEL_PATH=models/ggml-model-q4_0.bin  # default: has to be set according to the Llama model used
```
I used a GGML model as my embedding model, but when I turn on document embedding, the following call throws an error:
```python
llm_doc_embedd = LlamaCpp(
    model_path=embeddings_model_path,
    n_ctx=embeddings_model_n_ctx,
    n_threads=LLAMA_THREADS_NUM,
    callbacks=callbacks,
    verbose=False,
    temperature=embeddings_temperature,
)
```

```
ValidationError: 1 validation error for LlamaCpp
callbacks
  extra fields not permitted (type=value_error.extra)
```
How can I fix this?
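The ValidationError comes from pydantic rejecting a keyword that the installed version of `LlamaCpp` does not declare as a field. A minimal stdlib sketch of that behaviour (this only mirrors pydantic's extra-field check for illustration; `StrictConfig` is a hypothetical class, not langchain code):

```python
class StrictConfig:
    """Accepts only declared fields and rejects anything else,
    the way a pydantic model with extra fields forbidden does."""

    _fields = {"model_path", "n_ctx", "n_threads", "verbose", "temperature"}

    def __init__(self, **kwargs):
        # Any keyword outside the declared set triggers the same kind of
        # "extra fields not permitted" failure seen in the traceback above.
        extra = set(kwargs) - self._fields
        if extra:
            raise ValueError(f"extra fields not permitted: {sorted(extra)}")
        for name, value in kwargs.items():
            setattr(self, name, value)


try:
    # Passing `callbacks` to a version that does not declare it fails:
    StrictConfig(model_path="models/ggml-model-q4_0.bin", callbacks=[])
except ValueError as err:
    print(err)  # extra fields not permitted: ['callbacks']
```

If that reading is right, two things worth trying (unverified here) are upgrading langchain to a release whose `LlamaCpp` declares a `callbacks` field, or dropping the `callbacks=` argument from the call.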