convert.py does NOT support Meta-Llama-3.1-8B-Instruct #709

Closed
openvino-book opened this issue Jul 31, 2024 · 6 comments · Fixed by #801
Running convert.py on the Meta-Llama-3.1-8B-Instruct model fails with the error below:

ValueError: rope_scaling must be a dictionary with two fields, type and factor, got {'factor': 8.0, 'low_freq_factor': 1.0, 'high_freq_factor': 4.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}

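Note: the error appears to come from transformers failing to parse the newer rope_scaling schema (rope_type 'llama3') in the model's config.json, rather than from convert.py itself. A minimal reproduction sketch, assuming access to the gated meta-llama repository:

# Reproduction sketch: transformers releases that predate Llama 3.1 support
# reject the new rope_scaling schema while parsing config.json.
from transformers import AutoConfig

# Raises the ValueError above on older transformers; succeeds after upgrading.
config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
print(config.rope_scaling)  # {'rope_type': 'llama3', 'factor': 8.0, ...}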

@eaidova (Collaborator) commented Jul 31, 2024

@openvino-book according to this issue on hf hub, you need to upgrade transformers for loading llama3.1 https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/discussions/15

@openvino-book (Author) commented Aug 4, 2024

@openvino-book according to this issue on hf hub, you need to upgrade transformers for loading llama3.1 https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/discussions/15

Thank you @eaidova , it works!

pip install --upgrade transformers

The working version of transformers is 4.43.3.
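For reference, a quick version check before re-running convert.py (a sketch; the 4.43.0 threshold is inferred from the working 4.43.3 reported above):

# Sanity check: make sure the installed transformers can parse the
# Llama 3.1 rope_scaling config before converting.
import transformers
from packaging import version  # packaging is already a transformers dependency

assert version.parse(transformers.__version__) >= version.parse("4.43.0"), (
    "transformers is too old for Llama 3.1; run: pip install --upgrade transformers"
)
print("transformers", transformers.__version__)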


@openvino-book (Author)

Hi @eaidova, after updating transformers, convert.py works well.

However, benchmark.py raises an error (shown in the screenshot below) when running the converted Llama 3.1 IR model.

[screenshot: benchmark.py error output]

@openvino-book (Author)

Running on the CPU produces the same error.

@openvino-book (Author)

Hi @eaidova, interestingly, there are no openvino_tokenizer.xml and openvino_detokenizer.xml files in the converted model directory D:\openvino.genai\llm_bench\python\llama31_ov\pytorch\dldt\FP16
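A possible workaround sketch (not verified against llm_bench; assumes the openvino-tokenizers package is installed and uses the model directory path above) to generate the missing tokenizer and detokenizer IRs manually:

# Workaround sketch: export openvino_tokenizer.xml / openvino_detokenizer.xml
# next to the converted model so benchmark.py can find them.
import os
import openvino as ov
from transformers import AutoTokenizer
from openvino_tokenizers import convert_tokenizer

model_dir = r"D:\openvino.genai\llm_bench\python\llama31_ov\pytorch\dldt\FP16"
hf_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

# convert_tokenizer returns the tokenizer and detokenizer as OpenVINO models
ov_tokenizer, ov_detokenizer = convert_tokenizer(hf_tokenizer, with_detokenizer=True)
ov.save_model(ov_tokenizer, os.path.join(model_dir, "openvino_tokenizer.xml"))
ov.save_model(ov_detokenizer, os.path.join(model_dir, "openvino_detokenizer.xml"))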

@peterchen-intel (Collaborator) commented Aug 22, 2024

Hi @eaidova, interestingly, there are no openvino_tokenizer.xml and openvino_detokenizer.xml files in the converted model directory D:\openvino.genai\llm_bench\python\llama31_ov\pytorch\dldt\FP16

@openvino-book Can this issue still be reproduced with the latest openvino.genai? https://github.com/openvinotoolkit/openvino.genai

github-merge-queue bot pushed a commit that referenced this issue Oct 15, 2024
fix the issue
#709

---------

Co-authored-by: Chen Peter <peter.chen@intel.com>