
group_size is not passed to the quantizer when exporting a model with GPTQ quantization #2710

Closed
luguanyu1234 opened this issue Dec 19, 2024 · 1 comment · Fixed by #2720

luguanyu1234 commented Dec 19, 2024

Describe the bug
[screenshot of the error attached in the original report]

diff --git a/swift/llm/export/quant.py b/swift/llm/export/quant.py
index 4e598c03..f3e17cde 100644
--- a/swift/llm/export/quant.py
+++ b/swift/llm/export/quant.py
@@ -210,6 +210,7 @@ class QuantEngine(ProcessorMixin):
                 bits=args.quant_bits,
                 dataset=','.join(args.dataset),
                 batch_size=args.quant_batch_size,
+                group_size=args.group_size,
                 block_name_to_quantize=self.get_block_name_to_quantize(self.model, args.model_type))
             gptq_quantizer.serialization_keys.append('block_name_to_quantize')
             logger.info('Start quantizing the model...')
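
For context (not part of the original report): the quantizer constructed here appears to be optimum's GPTQQuantizer, whose constructor accepts a group_size argument with a library default (128 in current optimum releases). Without the added keyword in the diff above, a user-supplied --group_size would never reach GPTQ and the default would silently be used. A minimal sketch of the corrected call, assuming that API (the helper name build_quantizer is hypothetical and not part of ms-swift):

# Sketch only, not ms-swift source code.
from optimum.gptq import GPTQQuantizer

def build_quantizer(quant_bits, dataset, quant_batch_size, group_size, block_name_to_quantize):
    # Forward group_size explicitly so the user's value is applied
    # instead of the optimum library default.
    return GPTQQuantizer(
        bits=quant_bits,
        dataset=dataset,
        batch_size=quant_batch_size,
        group_size=group_size,
        block_name_to_quantize=block_name_to_quantize,
    )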

Your hardware and system info
Write your system info like CUDA version/system/GPU/torch version here

Additional context
Add any other context about the problem here

@Jintao-Huang (Collaborator)

Sorry, I didn't understand what you meant.
