Insights: pytorch/FBGEMM
Overview
- 0 Active issues
- 0 Merged pull requests
- 10 Open pull requests
- 0 Closed issues
- 0 New issues
1 Release published by 1 person
- v1.1.0 (FBGEMM_GPU v1.1.0 Release Notes), published Jan 29, 2025
10 Pull requests opened by 6 people
- finish #1808 cherry-pick, adjust interface (#3627, opened Jan 28, 2025)
- Update bf16i4 gemm with new cutlass version (#3630, opened Jan 29, 2025)
- avoid using warning tensor in cpu tbe op (#3631, opened Jan 29, 2025)
- Partial revert of D66986498 (Optimized backward pass for ROCm devices, pt 1), 2nd attempt (#3637, opened Jan 29, 2025)
- Adding Missing includes and explicitly declaring Tensor in aten namespace. (#3638, opened Jan 30, 2025)
- Re-organize SLL ops, pt 6 (#3647, opened Jan 31, 2025)
- Re-organize SLL ops, pt 7 (#3650, opened Jan 31, 2025)
- Add tracing option to quantize bench (#3651, opened Jan 31, 2025)
- FP8 Grouped Gemm Optimization (#3655, opened Feb 4, 2025)
- Add sweep_utils.py script to tune heuristics (#3656, opened Feb 4, 2025)
6 Unresolved conversations
Sometimes conversations happen on old items that aren't yet closed. Here is a list of all the issues and pull requests with unresolved conversations.
- Undefined symbol: _ZNK5torch8autograd4Node4nameEv (#2715, commented on Feb 4, 2025, 0 new comments)
- Add NEON and SVE implementations for Float16 conversions (#3424, commented on Jan 30, 2025, 0 new comments)
- Implement infrenece bag packing along D (#3541, commented on Feb 4, 2025, 0 new comments)
- AdagradW (#3605, commented on Feb 4, 2025, 0 new comments)
- Port oss f16_fast_gemv into fbcode (#3610, commented on Feb 4, 2025, 0 new comments)
- Updating split_table_batched_embeddings_ops_training.py (#3613, commented on Feb 3, 2025, 0 new comments)