Issues: pytorch/pytorch
[Announcement] Deprecating PyTorch’s official Anaconda channel
#138506
opened Oct 21, 2024 by
seemethere
Issues list
LLaMA v3.1 on MPS backend breaks in BinaryOp mps::add_sub_lerp_template
high priority
module: mps
Related to Apple Metal Performance Shaders framework
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#135598
opened Sep 10, 2024 by
jhavukainen
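Not a confirmed reproducer of the LLaMA v3.1 failure, but a minimal sketch of binary ops (add, sub, lerp) that dispatch through the mps::add_sub_lerp_template path; the shapes and dtype below are illustrative assumptions.

```python
import torch

# Assumption: an Apple Silicon machine with the MPS backend available.
device = "mps" if torch.backends.mps.is_available() else "cpu"

# Illustrative shapes/dtype only; the original report involves LLaMA v3.1.
a = torch.randn(4, 1024, dtype=torch.float16, device=device)
b = torch.randn(4, 1024, dtype=torch.float16, device=device)
w = torch.rand(4, 1024, dtype=torch.float16, device=device)

# add, sub and lerp all route through the same MPS binary-op template.
print((a + b).sum().item())
print((a - b).sum().item())
print(torch.lerp(a, b, w).sum().item())
```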
BUG: MPS backend take_along_dim crash/assertion fail
module: crash
Problem manifests as a hard crash, as opposed to a RuntimeError
module: mps
Related to Apple Metal Performance Shaders framework
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#135240
opened Sep 5, 2024 by
tylerjereddy
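A hedged minimal sketch of a take_along_dim call on the MPS device, checked against CPU; the shapes, indices, and dim are assumptions, not the exact inputs from the report.

```python
import torch

# Assumes an MPS-capable machine; falls back to CPU otherwise.
device = "mps" if torch.backends.mps.is_available() else "cpu"

x = torch.arange(12, dtype=torch.float32).reshape(3, 4)
idx = torch.tensor([[0, 3], [1, 2], [2, 0]])  # indices along dim=1

cpu_out = torch.take_along_dim(x, idx, dim=1)
mps_out = torch.take_along_dim(x.to(device), idx.to(device), dim=1)

# The report is a hard crash / assertion failure on MPS rather than a
# RuntimeError; on a correct build the two results match.
print(torch.equal(cpu_out, mps_out.cpu()))
```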
[MPS] MPSNDArray error: product of dimension sizes > 2**32
module: mps
Related to Apple Metal Performance Shaders framework
module: 64-bit
Problems related to incorrectly using 32-bit integers when 64-bit is needed (e.g., 8G tensors)
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#134177
opened Aug 21, 2024 by
Tim-Roy
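A hedged sketch of the kind of allocation that can trip the 32-bit size limit: a tensor with more than 2**32 elements. It needs several gigabytes of unified memory, and the exact op that triggers the error in the report may differ.

```python
import torch

# Assumes an MPS device with enough unified memory (~4+ GB for this tensor).
device = "mps"

n = 2**32 + 1  # total element count above 2**32
x = torch.zeros(n, dtype=torch.uint8, device=device)

# Any kernel that materialises an MPSNDArray over the full tensor may hit
# the "product of dimension sizes > 2**32" error; the failing op in the
# report may be different from this simple reduction.
print(x.sum())
```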
Silent error in torch.Tensor.sum on mps
module: correctness (silent)
Issue that returns an incorrect result silently
module: mps
Related to Apple Metal Performance Shaders framework
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#132732
opened Aug 6, 2024 by
nisheethlahoti
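A hedged consistency check comparing torch.Tensor.sum on CPU and MPS; the input shape and reduced dimension are assumptions, not the exact case from the report.

```python
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"

# Illustrative input; the report is that sum on MPS can silently return
# a wrong value for certain inputs.
x = torch.randn(1024, 1024)

cpu_sum = x.sum(dim=0)
mps_sum = x.to(device).sum(dim=0).cpu()

# A silent-correctness bug shows up as a mismatch here, with no error raised.
print(torch.allclose(cpu_sum, mps_sum, atol=1e-4))
```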
MPS backend leaks memory when input sizes vary
enhancement
Not as big as a feature, but technically not a bug. Should be easy to fix
module: memory usage
PyTorch is using more memory than it should, or it is leaking memory
module: mps
Related to Apple Metal Performance Shaders framework
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#132596
opened Aug 4, 2024 by
llllvvuu
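A hedged sketch for watching MPS memory while feeding inputs of varying size through a small module; the linear layer and batch-size range are illustrative assumptions.

```python
import torch

# Assumes an MPS device.
assert torch.backends.mps.is_available()
device = "mps"

layer = torch.nn.Linear(256, 256).to(device)

for step in range(1, 6):
    # Varying input sizes are what the report associates with growing memory use.
    x = torch.randn(64 * step, 256, device=device)
    _ = layer(x)
    torch.mps.synchronize()
    print(step, torch.mps.current_allocated_memory())
```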
MPS gives incorrect result when torch.nn.functional.softplus follows moveaxis
module: correctness (silent)
Issue that returns an incorrect result silently
module: mps
Related to Apple Metal Performance Shaders framework
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#131736
opened Jul 25, 2024 by
JohnGoldenGardiner
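A hedged CPU-vs-MPS comparison of softplus applied to a moveaxis view; the tensor shape and axis permutation are assumptions.

```python
import torch
import torch.nn.functional as F

device = "mps" if torch.backends.mps.is_available() else "cpu"

x = torch.randn(2, 3, 4)

# moveaxis produces a non-contiguous view; the report is that softplus on
# such a view can give a wrong result on MPS while CPU is correct.
cpu_out = F.softplus(x.moveaxis(0, 2))
mps_out = F.softplus(x.to(device).moveaxis(0, 2)).cpu()

print(torch.allclose(cpu_out, mps_out, atol=1e-6))
```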
BUG: max, argmax NaN handling with MPS backend vs CPU backend
module: edge cases
Adversarial inputs unlikely to occur in practice
module: mps
Related to Apple Metal Performance Shaders framework
module: NaNs and Infs
Problems related to NaN and Inf handling in floating point
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#130295
opened Jul 8, 2024 by
tylerjereddy
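A hedged sketch that prints max/argmax for a NaN-containing input on both backends so any divergence is visible; the specific input is an assumption.

```python
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"

# Adversarial input: a NaN mixed into an otherwise ordinary tensor.
x = torch.tensor([1.0, float("nan"), 3.0])

# The report is that max/argmax on MPS can disagree with the CPU results
# for inputs like this; printing both sides shows the difference.
print(torch.max(x), torch.argmax(x))
print(torch.max(x.to(device)).cpu(), torch.argmax(x.to(device)).cpu())
```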
MPS backend thinks that subnormals are equal to zero
module: correctness (silent)
Issue that returns an incorrect result silently
module: mps
Related to Apple Metal Performance Shaders framework
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#125051
opened Apr 26, 2024 by
malfet
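A hedged sketch comparing how each backend treats a subnormal float32 value; the particular value is an assumption (anything below the smallest normal float32, about 1.18e-38, is subnormal).

```python
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"

# 1e-40 is below the smallest normal float32, so it is subnormal.
x = torch.tensor([1e-40], dtype=torch.float32)

# The report is that the MPS backend flushes subnormals to zero, so the
# comparison below can differ between the two devices.
print((x == 0).item())                   # CPU
print((x.to(device) == 0).cpu().item())  # MPS
```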
MPS slows down after sleep
module: mps
Related to Apple Metal Performance Shaders framework
module: performance
Issues related to performance, either of kernel code or framework glue
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#124056
opened Apr 15, 2024 by
hdnhan
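A hedged micro-benchmark sketch to compare MPS throughput before and after the machine sleeps; matrix sizes and iteration count are illustrative assumptions.

```python
import time
import torch

# Assumes an MPS device. Run once before and once after a system sleep
# and compare the reported times.
device = "mps"
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)

torch.mps.synchronize()
start = time.perf_counter()
for _ in range(50):
    _ = a @ b
torch.mps.synchronize()
print(f"50 matmuls: {time.perf_counter() - start:.3f}s")
```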
aten::_linalg_solve_ex.result' is not currently implemented for the MPS
feature
A request for a proper, new feature.
module: mps
Related to Apple Metal Performance Shaders framework
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
#98222
opened Apr 3, 2023 by
albertjblack
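A hedged sketch of the unsupported-op path: torch.linalg.solve dispatches to aten::_linalg_solve_ex, which the issue reports as unimplemented on MPS. The CPU-fallback environment variable mentioned in the comment is PyTorch's documented PYTORCH_ENABLE_MPS_FALLBACK switch; everything else here is illustrative.

```python
import torch

# Assumes an MPS device.
device = "mps"
A = torch.randn(3, 3, device=device)
b = torch.randn(3, device=device)

try:
    x = torch.linalg.solve(A, b)
    print(x)
except NotImplementedError as e:
    # Setting PYTORCH_ENABLE_MPS_FALLBACK=1 before the process starts makes
    # unsupported ops fall back to the CPU instead of raising.
    print(e)
```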