
After running the export script on a ppyoloe model, no .pdmodel file is generated #9256

Open
3 tasks done
WindLWQ opened this issue Dec 24, 2024 · 4 comments
WindLWQ commented Dec 24, 2024

Search before asking

  • I have searched the issues and found no similar bug report.

Bug Component

Export

Describe the Bug

After training a ppyoloe model on a custom dataset, I ran the export script:
CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c /workspace/ljy/algo/smalldet/PaddleDetection-release-2.8/configs/smalldet/ppyoloe_crn_l_80e_sliced_smoke_640_025.yml -o weights=/workspace/ljy/algo/smalldet/PaddleDetection-release-2.8/output/best_model.pdparams
The script printed:

Warning: Unable to use numba in PP-Tracking, please install numba, for example(python3.7): `pip install numba==0.56.4`
Warning: Unable to use numba in PP-Tracking, please install numba, for example(python3.7): `pip install numba==0.56.4`
Warning: import ppdet from source directory without installing, run 'python setup.py install' to install ppdet firstly
[12/24 11:28:39] ppdet.utils.checkpoint INFO: Finish loading model weights: /workspace/ljy/algo/smalldet/PaddleDetection-release-2.8/output/best_model.pdparams
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
[12/24 11:28:39] ppdet.engine INFO: Export inference config file to output_inference/ppyoloe_crn_l_80e_sliced_smoke_640_025/infer_cfg.yml
[12/24 11:28:40] ppdet.engine INFO: Export model and saved in output_inference/ppyoloe_crn_l_80e_sliced_smoke_640_025
However, the output_inference/ppyoloe_crn_l_80e_sliced_smoke_640_025 folder contains only infer_cfg.yml, model.json, and model.pdiparams; there is no .pdmodel file.
Running the export script on the officially provided model also produces no .pdmodel file.
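For anyone comparing their own export output, one quick way to tell which static-graph format an export produced is to inspect the output directory. The helper below is a hypothetical convenience, not part of PaddleDetection; the file names (model.pdmodel, model.json, model.pdiparams) match those reported above.

```python
import os

def export_format(export_dir: str) -> str:
    """Classify a PaddleDetection export directory by its static-graph file.

    Paddle 3.x with PIR enabled writes model.json, while the legacy format
    writes model.pdmodel; both store the weights in model.pdiparams.
    """
    files = set(os.listdir(export_dir))
    if "model.pdmodel" in files:
        return "legacy (.pdmodel)"
    if "model.json" in files:
        return "PIR (.json)"
    return "unknown"
```

For the directory listed in this report (infer_cfg.yml, model.json, model.pdiparams), this would return "PIR (.json)".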

Environment

Linux
GCC 11.2.0
Paddle 2.8

Bug description confirmation

  • I confirm that the bug replication steps, code change instructions, and environment information have been provided, and the problem can be reproduced.

Are you willing to submit a PR?

  • I'd like to help by submitting a PR!
@TingquanGao TingquanGao self-assigned this Dec 24, 2024

TingquanGao commented Dec 24, 2024

The latest Paddle 3.0.0b2 uses a new .json-format static-graph model file and no longer produces .pdmodel files; we are working on compatibility. For now, we suggest setting the environment variable FLAGS_enable_pir_api=0 before running the export command.
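A sketch of the suggested workaround, reusing the command from the original report (the config and weights paths are the reporter's; adjust them to your own setup):

```shell
# Disable the new PIR program format so the exporter writes model.pdmodel.
# The flag must be set in the environment before the Python process starts.
export FLAGS_enable_pir_api=0
CUDA_VISIBLE_DEVICES=0 python tools/export_model.py \
    -c configs/smalldet/ppyoloe_crn_l_80e_sliced_smoke_640_025.yml \
    -o weights=output/best_model.pdparams
```

Setting the flag inline (FLAGS_enable_pir_api=0 python tools/export_model.py ...) works as well, since Paddle reads FLAGS_* variables from the environment at startup.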


WindLWQ commented Dec 25, 2024

> The latest Paddle 3.0.0b2 uses a new .json-format static-graph model file and no longer produces .pdmodel files; we are working on compatibility. For now, we suggest setting the environment variable FLAGS_enable_pir_api=0 before running the export command.

A follow-up question: after setting that environment variable I can export the .pdmodel, but when I convert it to an ONNX model with paddle2onnx, the conversion succeeds yet the resulting model is unusable. Creating the inference session while loading the ONNX model fails with:

[ONNXRuntimeError] : 1 : FAIL : Load model from F:\Code_Toilet\paddle_keypoints\PaddleDetection\ppyoloee.onnx failed:Node (Gather.8) Op (Gather) [ShapeInferenceError] data tensor must have rank >= 1
  File "F:\Code_Toilet\paddle_keypoints\PaddleDetection\model_validation.py", line 23, in <module>
    m = rt.InferenceSession(onnx_model_path, providers=providers)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from F:\Code_Toilet\paddle_keypoints\PaddleDetection\ppyoloee.onnx failed:Node (Gather.8) Op (Gather) [ShapeInferenceError] data tensor must have rank >= 1

What could be causing this?
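For context on the error message itself: the ONNX specification requires the data input of a Gather node to have rank >= 1, and defines the output rank as q + r - 1 (r = rank of the data tensor, q = rank of the indices). The failing model apparently contains a Gather whose data input was exported as a scalar (rank 0). A minimal sketch of that rank rule, in plain Python with no ONNX dependency:

```python
def gather_output_rank(data_rank: int, indices_rank: int) -> int:
    """Output rank of an ONNX Gather node: q + r - 1, with r >= 1 required.

    ONNX Runtime's shape inference enforces the same precondition, which is
    exactly the check that fails in the error above.
    """
    if data_rank < 1:
        raise ValueError("data tensor must have rank >= 1")
    return data_rank + indices_rank - 1

print(gather_output_rank(2, 1))  # a 2-D table gathered with 1-D indices -> rank 2
```

So the report suggests the Paddle-to-ONNX export emitted a Gather over a rank-0 tensor, which is invalid by construction rather than a runtime data issue.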

@TingquanGao

There is currently a compatibility issue with running ONNX inference on exported Paddle models; the Paddle team is working on a fix.


WindLWQ commented Dec 25, 2024

> There is currently a compatibility issue with running ONNX inference on exported Paddle models; the Paddle team is working on a fix.

OK, thank you!
