
Add SPIR-V compiler backend to burn-wgpu #2386

Merged
merged 15 commits into tracel-ai:main from feat/wgpu-spirv-backend on Oct 21, 2024

Conversation

wingertge
Contributor

Pull Request Template

Checklist

  • Confirmed that the run-checks all script has been executed.
  • Made sure the book is up to date with changes in this PR.

Changes

Adds a feature that enables the new alternative SPIR-V compiler for the WGPU backend. When the feature is enabled, the compiler defaults to SPIR-V, but it can still be overridden with a generic parameter.
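
As a rough illustration of the pattern (not the actual burn-wgpu types — SpirvCompiler, WgslCompiler, and WgpuBackend below are stand-ins), a defaulted generic lets SPIR-V be the default compiler while remaining overridable:

```rust
use std::marker::PhantomData;

// Stand-in compiler types; the real burn-wgpu/cubecl types differ.
trait Compiler {
    const NAME: &'static str;
}

struct SpirvCompiler;
struct WgslCompiler;

impl Compiler for SpirvCompiler {
    const NAME: &'static str = "spirv";
}

impl Compiler for WgslCompiler {
    const NAME: &'static str = "wgsl";
}

// With the feature enabled, the default generic points at SPIR-V.
struct WgpuBackend<C: Compiler = SpirvCompiler>(PhantomData<C>);

fn compiler_name<C: Compiler>(_backend: &WgpuBackend<C>) -> &'static str {
    C::NAME
}

fn main() {
    // In a type position the default kicks in: this is WgpuBackend<SpirvCompiler>.
    let default_backend: WgpuBackend = WgpuBackend(PhantomData);
    // The generic can still be overridden to opt back into WGSL.
    let wgsl_backend: WgpuBackend<WgslCompiler> = WgpuBackend(PhantomData);

    println!("default  = {}", compiler_name(&default_backend)); // default  = spirv
    println!("override = {}", compiler_name(&wgsl_backend));    // override = wgsl
}
```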

Testing

A new testing option has been added to run the SPIR-V tests in addition to the WGSL ones. All tests pass, and xtask validate passes.

@wingertge
Contributor Author

The prev Rust version is still set to 1.80; it should be 1.81. That's why the prev check fails: lint reasons were only introduced in 1.81. I don't think I should update the GitHub workflows myself.

@nathanielsimard left a comment
Member


The new option is a killer feature, thank you so much 🔥

On another note, which could be addressed in a follow-up PR: I think we should revisit the names of our feature flags and backends. To keep things simple, we should have:

  • tch => libtorch
  • wgpu-spirv => vulkan
  • wgpu => webgpu
  • cuda-jit => cuda

Some feature flags are meant to customize a backend; they should be specific to that backend:

  • cuda => candle-cuda, libtorch-cuda
  • metal => candle-metal, libtorch-metal
  • openblas => ndarray-openblas
  • accelerate => ndarray-accelerate

Essentially, we're making backends built with cubecl first class, meaning we don't need to prefix things with cubecl, like cubecl-vulkan, cubecl-cuda, or cubecl-fusion. For third-party backends, I think prefixing is more flexible, since we won't end up with feature flags that clash with our own. In that sense, we may also rename burn-jit to burn-cubecl, since it's essentially the backend built on top of cubecl.
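
Concretely, a downstream crate might end up consuming the proposed names along these lines; every feature name and type in this sketch is hypothetical and only mirrors the mapping listed above:

```rust
// Placeholder backend types so the sketch stands on its own; none of
// these cfg features exist yet, and they are assumed to be mutually
// exclusive.
pub struct VulkanBackend; // proposed `vulkan` (today: `wgpu-spirv`)
pub struct WebGpuBackend; // proposed `webgpu` (today: `wgpu`)
pub struct CudaBackend;   // proposed `cuda` (today: `cuda-jit`)

#[cfg(feature = "vulkan")]
pub type Backend = VulkanBackend;

#[cfg(feature = "cuda")]
pub type Backend = CudaBackend;

// Fall back to plain WebGPU when no backend-selecting feature is set.
#[cfg(not(any(feature = "vulkan", feature = "cuda")))]
pub type Backend = WebGpuBackend;
```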

@wingertge
Contributor Author

That sounds like a good idea, and something I think we can do along with writing a proper CUDA README and adding cubecl to the custom kernels section in the book. For now, I've only updated the docs related to SPIR-V.

@nathanielsimard
Member

> That sounds like a good idea, and something I think we can do along with writing a proper CUDA README and adding cubecl to the custom kernels section in the book. For now, I've only updated the docs related to SPIR-V.

Gonna create an issue with the comment above.


codecov bot commented Oct 20, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 85.22%. Comparing base (b7887b0) to head (9d13911).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2386      +/-   ##
==========================================
- Coverage   85.22%   85.22%   -0.01%     
==========================================
  Files         786      786              
  Lines      104088   104088              
==========================================
- Hits        88706    88705       -1     
- Misses      15382    15383       +1     

☔ View full report in Codecov by Sentry.

@nathanielsimard nathanielsimard merged commit f3968cb into tracel-ai:main Oct 21, 2024
11 checks passed
@wingertge wingertge deleted the feat/wgpu-spirv-backend branch October 21, 2024 16:00