Starred repositories
A Heterogeneous Platform Deep Learning Compiler Framework from EdgeCortix
Automatic Schedule Exploration and Optimization Framework for Tensor Computations
Graphiler is a compiler stack built on top of DGL and TorchScript which compiles GNNs defined using user-defined functions (UDFs) into efficient execution plans.
The Tensor Algebra Compiler (taco) computes sparse tensor expressions on CPUs and GPUs
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
📙 Source code for "BenchPress: A Deep Active Benchmark Generator", PACT 2022
Snapdragon Neural Processing Engine (SNPE) SDK: The Snapdragon Neural Processing Engine (SNPE) is a Qualcomm Snapdragon software-accelerated runtime for the execution of deep neural networks. With SN…
Buda Compiler Backend for Tenstorrent devices
The Triton backend for TensorRT.
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Open Source Specialized Computing Stack for Accelerating Deep Neural Networks.
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
A polyhedral compiler for expressing fast and portable data parallel algorithms
Open deep learning compiler stack for Kendryte AI accelerators ✨
Enabling Flexible FPGA High-Level Synthesis of TensorFlow Deep Neural Networks
High-performance automatic differentiation of LLVM and MLIR.
SparseTIR: Sparse Tensor Compiler for Deep Learning