convert.py does NOT support Meta-Llama-3.1-8B-Instruct #709
Comments
@openvino-book According to this issue on the HF Hub, you need to upgrade transformers to load Llama 3.1: https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/discussions/15
Thank you @eaidova , it works!
The working version of transformers is 4.43.3.
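Since the thread pins down 4.43.3 as the first known-good transformers version, a quick pre-flight check before running convert.py can save a failed conversion. This is a minimal sketch: `parse_version` and `supports_llama31` are hypothetical helpers written for illustration, not part of convert.py or transformers.

```python
def parse_version(v: str) -> tuple:
    """Turn a version string like '4.43.3' into (4, 43, 3) for comparison."""
    return tuple(int(part) for part in v.split("."))

# Version reported to work for Llama 3.1 in this thread.
MIN_TRANSFORMERS = "4.43.3"

def supports_llama31(installed: str) -> bool:
    """Return True if the installed transformers version is new enough."""
    return parse_version(installed) >= parse_version(MIN_TRANSFORMERS)

print(supports_llama31("4.43.3"))  # True
print(supports_llama31("4.42.0"))  # False
```

In practice the installed version would come from `transformers.__version__`; the string literals above are placeholders for that value.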
Hi @eaidova, after updating transformers, convert.py works well. However, benchmark.py raises the error below when running the converted Llama 3.1 IR model.
Hi @eaidova, very interesting: there are no openvino_tokenizer.xml and openvino_detokenizer.xml files in the converted model at D:\openvino.genai\llm_bench\python\llama31_ov\pytorch\dldt\FP16
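When the tokenizer and detokenizer IRs are missing, they can typically be regenerated with the openvino_tokenizers package. The command below is a sketch, not the fix confirmed in this thread: the package name and `convert_tokenizer` CLI come from the openvino_tokenizers project, and the model id and output path are illustrative.

```shell
# Sketch: regenerate openvino_tokenizer.xml / openvino_detokenizer.xml.
# Package name, CLI flags, and paths are assumptions for illustration.
pip install openvino-tokenizers

convert_tokenizer meta-llama/Meta-Llama-3.1-8B-Instruct \
    --with-detokenizer \
    -o llama31_ov/FP16
```

The generated XML/BIN pairs would then sit next to the converted model so benchmark.py can find them.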
@openvino-book Can this issue still be reproduced with the latest openvino.genai? https://github.com/openvinotoolkit/openvino.genai
Fix the issue #709. Co-authored-by: Chen Peter <peter.chen@intel.com>
Running convert.py on the Meta-Llama-3.1-8B-Instruct model raises the error below: