Home
driazati edited this page Sep 17, 2021 · 129 revisions
Welcome to the PyTorch developer's wiki!
Please read our best practices if you're interested in adding a page or making edits.
New to PyTorch? Don't know where to start?
- Developer FAQ
- Where should I add documentation?
- PyTorch Data Flow and Interface Diagram
- Multiprocessing Technical Notes
- Software Architecture for c10
- PyTorch JIT IR format (slightly out of date now)
- TH to ATen porting guide
- Writing Python in C++ (a manifesto)
- Introducing Quantized Tensor
- Life of a Tensor
- How to use TensorIterator
- Running and writing tests
- Writing memory format aware operators
- Guide for adding type annotations to PyTorch
- The torch.fft module in PyTorch 1.7
- Automatic Mixed Precision package
- Automatic Mixed Precision examples
- Autograd mechanics
- Broadcasting semantics
- CPU threading and TorchScript inference
- CUDA semantics
- Frequently Asked Questions
- Extending PyTorch
- Features for large-scale deployments
- Multiprocessing best practices
- Reproducibility
- Serialization semantics
- Windows FAQ
- Python Language Reference Coverage
- Complex Numbers
- ONNX
- Android
- iOS
- How-to: Writing PyTorch & Caffe2 Operators
- CUDA IPC Refcounting implementation explained
- Autograd
- Code Coverage Tool for PyTorch
- How to write tests using FileCheck
- PyTorch Release Scripts
- Docker images for Jenkins
- Serialized operator test framework
- ONNX op coverage
- Observers
- Snapdragon NPE Support
- Using TensorBoard in ifbpy
- Caffe2
- Building Caffe2
- Doxygen Notes
- Docker & Caffe2
- Caffe2 implementation of Open Neural Network Exchange (ONNX)
- nomnigraph
- Caffe2 & TensorRT integration
- Playground for Caffe2 Models
- How to run FakeLowP vs Glow tests
- Using ONNX and ATen to export models from PyTorch to Caffe2
- An ATen operator for Caffe2
- Introduction to Quantization
- Quantization Operation coverage
- Implementing native quantized ops
- Extend PyTorch Quantization to Custom Backends
- JIT Technical Overview
- Current workflow
- Static Runtime
- TorchScript serialization
- PyTorch Fuser
- Implementation reference for the CUDA PyTorch JIT Fuser
- TorchScript
- TorchScript Language Reference
- TorchScript Unsupported PyTorch Constructs
- Distributed RPC Framework
- Distributed Autograd Design
- Remote Reference Protocol
- Distributed Data Parallel
- Distributed communication package
- Contributing to PyTorch Distributed
- PyTorch with C++
- The C++ Frontend
- PyTorch C++ API
- Tensor basics
- Tensor Creation API
- Tensor Indexing API
- MaybeOwned<Tensor>
- Installing C++ Distributions of PyTorch
- Torch Library API
- libtorch
- C++ / Python API parity tracker
- TensorExpr C++ Tests
- JIT C++ Tests
- C++ Frontend Tests
- FAQ
- Best Practices to Edit and Compile PyTorch Source Code on Windows
- Distributed Data Parallel Benchmark
- Fast RNN benchmarks
- PyTorch/Caffe2 Operator Micro-benchmarks
- torch_function micro-benchmarks
- Benchmarking tool for the autograd API
- Modular Benchmarking Components
- Continuous Integration
- Bot commands
- Code review values
- Lint as you type
- Pull request review etiquette
- Docker image build on CircleCI
- Debugging with SSH on GitHub Actions
- Debugging with Remote Desktop on CircleCI
- Structure of CI on CircleCI
- Using hud.pytorch.org
- Breaking Changes from Variable and Tensor merge (from 0.4 release)
- Tensor API changes for Caffe2 developers (from 1.0 release, plus some stuff on master)
- Autograd and Fork