fix: Pad/AveragePool fusion #23190

Merged: 2 commits into microsoft:main on Jan 7, 2025

Conversation

mayeut (Contributor) commented Dec 24, 2024

Description

Fusing Pad & AveragePool requires the AveragePool to use `count_include_pad=1`. If the AveragePool already sets some padding of its own with `count_include_pad=0`, fusion cannot happen.

This PR adds a condition on those attributes before performing the fusion. When fusion occurs, `count_include_pad` is always set to `1`.
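As a rough illustration of that condition, here is a minimal Python sketch over ONNX protobufs (an assumption for readability; the actual change lives in the C++ optimizer in onnxruntime/core/optimizer/pad_fusion.cc, and the function name below is made up):

```python
from onnx import NodeProto, helper


def pad_can_fuse_into_average_pool(pool: NodeProto) -> bool:
    """Hypothetical gate: folding a zero Pad into AveragePool only preserves
    results when every window keeps dividing by the full kernel size."""
    count_include_pad = 0
    pads = []
    for attr in pool.attribute:
        if attr.name == "count_include_pad":
            count_include_pad = attr.i
        elif attr.name == "pads":
            pads = list(attr.ints)
    if count_include_pad == 1:
        # Existing padding is already counted, so absorbing more padding is safe.
        return True
    # With count_include_pad=0, fusion is only safe if the pool has no padding
    # of its own; the fused node then switches to count_include_pad=1.
    return all(p == 0 for p in pads)


# Example: an AveragePool that already pads and excludes its padding -> no fusion.
pool = helper.make_node(
    "AveragePool", ["x"], ["y"],
    kernel_shape=[2, 2], pads=[1, 1, 1, 1], count_include_pad=0,
)
print(pad_can_fuse_into_average_pool(pool))  # False
```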

Motivation and Context

Fixes #22177 (mislabelled as a performance issue, but there is an actual bug in the implementation).
The bug was introduced in #21556.
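For intuition about the numerical mismatch behind #22177, here is a small 1-D NumPy sketch (the input values are an illustrative assumption, not taken from the issue): once the Pad is folded into the AveragePool, its zeros must still count toward every average, which is exactly what `count_include_pad=1` provides.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# Unfused graph: Pad with zeros, then AveragePool(kernel=2, stride=2, no pads).
padded = np.pad(x, 1)                        # [0, 1, 2, 3, 4, 0]
unfused = padded.reshape(3, 2).mean(axis=1)  # [0.5, 2.5, 2.0]

# Fused graph: AveragePool(kernel=2, stride=2, pads=[1, 1]) applied directly to x.
# With count_include_pad=1 every window divides by the full kernel size (2).
fused_include = np.array([(0 + x[0]) / 2, (x[1] + x[2]) / 2, (x[3] + 0) / 2])

# With count_include_pad=0 the border windows divide only by the valid count (1).
fused_exclude = np.array([x[0] / 1, (x[1] + x[2]) / 2, x[3] / 1])

print(unfused)        # [0.5 2.5 2. ]
print(fused_include)  # [0.5 2.5 2. ]  matches the unfused graph
print(fused_exclude)  # [1.  2.5 4. ]  border values drift, i.e. the reported mismatch
```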

mayeut (Contributor, Author) commented Jan 6, 2025

@smk2007, since you reviewed #21556, could you please review this one?

justinchuby requested a review from Copilot on January 6, 2025 at 07:15

Copilot reviewed 4 out of 6 changed files in this pull request and generated no comments.

Files not reviewed (2)
  • onnxruntime/core/optimizer/pad_fusion.cc: Language not supported
  • onnxruntime/test/optimizer/graph_transform_test.cc: Language not supported
snnn (Member) commented Jan 7, 2025

/azp run Big Models, Linux Android Emulator QNN CI Pipeline, Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline

snnn (Member) commented Jan 7, 2025

/azp run Linux OpenVINO CI Pipeline, Linux QNN CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, Windows ARM64 QNN CI Pipeline, Windows CPU CI Pipeline, Windows GPU CUDA CI Pipeline

snnn (Member) commented Jan 7, 2025

/azp run Windows GPU DML CI Pipeline, Windows GPU Doc Gen CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows x64 QNN CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline

Azure Pipelines successfully started running 6 pipeline(s).

Azure Pipelines successfully started running 5 pipeline(s).

Azure Pipelines successfully started running 7 pipeline(s).

mayeut (Contributor, Author) commented Jan 7, 2025

The linter issue is fixed.
The other warning relates to some disabled code in the generator that was only there to check that the issue reproduced before the fix but not after it. I can remove that code, or maybe move it into an E2E test somewhere. Any advice?

snnn (Member) commented Jan 7, 2025

/azp run Big Models, Linux Android Emulator QNN CI Pipeline, Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline

snnn (Member) commented Jan 7, 2025

/azp run Linux OpenVINO CI Pipeline, Linux QNN CI Pipeline, MacOS CI Pipeline, ONNX Runtime Web CI Pipeline, Windows ARM64 QNN CI Pipeline, Windows CPU CI Pipeline, Windows GPU CUDA CI Pipeline

snnn (Member) commented Jan 7, 2025

/azp run Windows GPU DML CI Pipeline, Windows GPU Doc Gen CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows x64 QNN CI Pipeline, onnxruntime-binary-size-checks-ci-pipeline

Azure Pipelines successfully started running 6 pipeline(s).

Azure Pipelines successfully started running 5 pipeline(s).

Azure Pipelines successfully started running 7 pipeline(s).

snnn (Member) left a comment

Thanks!

snnn (Member) commented Jan 7, 2025

/azp run Windows GPU DML CI Pipeline

Azure Pipelines successfully started running 1 pipeline(s).

snnn merged commit 4b0cee3 into microsoft:main on Jan 7, 2025 (75 checks passed).
snnn pushed a commit that referenced this pull request Jan 8, 2025

Successfully merging this pull request may close these issues.

[Performance] Inconsistent Tensor Output with Optimizations Enabled/Disabled (Pad Fusion Issue)