Update config function path for neural-compressor v2.6 #801

Merged · 1 commit · Jul 4, 2024
optimum/intel/neural_compressor/trainer.py (3 changes: 2 additions & 1 deletion)
@@ -35,7 +35,6 @@
 import torch.distributed as dist
 from neural_compressor import training
 from neural_compressor.compression import DistillationCallbacks
-from neural_compressor.conf.pythonic_config import _BaseQuantizationConfig
 from packaging import version
 from torch import nn
 from torch.utils.data import Dataset, RandomSampler
@@ -107,8 +106,10 @@
 from optimum.exporters.onnx import OnnxConfig

 if is_neural_compressor_version("<", "2.6"):
+    from neural_compressor.conf.pythonic_config import _BaseQuantizationConfig
     from neural_compressor.experimental.export import torch_to_fp32_onnx, torch_to_int8_onnx
 else:
+    from neural_compressor.config import _BaseQuantizationConfig
     from neural_compressor.utils.export import torch_to_fp32_onnx, torch_to_int8_onnx
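The change follows the same version-gated import pattern already used for the ONNX export helpers: pick the import path at runtime based on the installed neural-compressor release, since `_BaseQuantizationConfig` moved from `neural_compressor.conf.pythonic_config` to `neural_compressor.config` in v2.6. Below is a minimal, self-contained sketch of that pattern; it checks `neural_compressor.__version__` directly with `packaging.version` instead of the `is_neural_compressor_version` helper used in the diff, and the `_resolve_base_quantization_config` function name is hypothetical.

# Minimal sketch of the version-gated import pattern, not the exact optimum-intel code.
# Assumes `neural_compressor` exposes `__version__`; the helper name below is hypothetical.
from packaging import version

import neural_compressor


def _resolve_base_quantization_config():
    """Return _BaseQuantizationConfig from the module path matching the installed release."""
    if version.parse(neural_compressor.__version__) < version.parse("2.6"):
        # Releases before 2.6 keep the class under the legacy `conf.pythonic_config` module.
        from neural_compressor.conf.pythonic_config import _BaseQuantizationConfig
    else:
        # From 2.6 onwards the class is importable from `neural_compressor.config`.
        from neural_compressor.config import _BaseQuantizationConfig
    return _BaseQuantizationConfig


_BaseQuantizationConfig = _resolve_base_quantization_config()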