Here you will find various samples, tutorials, and reference implementations for using ONNX Runtime. For a list of available dockerfiles and published images to help with getting started, see this page.
Inference only
- Basic Model Inferencing (single node Sigmoid) on CPU
- Model Inferencing (Resnet50) on CPU
- Model Inferencing on CPU using ONNX-Ecosystem Docker image
- Model Inferencing on CPU using ONNX Runtime Server (SSD Single Shot MultiBox Detector)
- Model Inferencing using NUPHAR Execution Provider
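The basic CPU inferencing samples listed above all follow the same pattern with the ONNX Runtime Python API: load a model into an `InferenceSession` and call `run`. Below is a minimal sketch of that pattern; the model file name and input shape are placeholders, not taken from any specific sample.

```python
import numpy as np
import onnxruntime as ort

# Load the model on the default CPU execution provider.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's first input so the feed matches its name and shape.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Build a dummy input and run inference; passing None returns all outputs.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: x})
print(outputs[0].shape)
```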
Inference with model conversion
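To illustrate the convert-then-infer flow these samples cover, here is a hedged sketch using skl2onnx to convert a trained scikit-learn model and run it with ONNX Runtime. The model, input name, feature count, and file name are placeholders chosen for the example; other converters (e.g. tf2onnx, torch.onnx) follow the same overall pattern.

```python
import numpy as np
import onnxruntime as ort
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Train a small scikit-learn model (stand-in for any existing pipeline).
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=500).fit(X, y)

# Convert to ONNX; initial_types declares the input name and shape.
onnx_model = convert_sklearn(clf, initial_types=[("input", FloatTensorType([None, 4]))])
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# Run the converted model with ONNX Runtime on CPU.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
labels = session.run(None, {"input": X[:5].astype(np.float32)})[0]
print(labels)
```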
Inference and deploy through AzureML
- Inferencing on CPU using ONNX Model Zoo models:
- Inferencing on CPU with model conversion step for existing models:
- Inferencing on CPU with PyTorch model training:
For additional information on training in AzureML, please see the AzureML Training Notebooks.
- Inferencing on GPU with TensorRT Execution Provider (AKS)
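The PyTorch training item above implies an export-to-ONNX step before the model can be served with ONNX Runtime. The sketch below shows just that step with `torch.onnx.export`, using a toy placeholder network and file name; the AzureML registration and deployment steps are left to the notebooks themselves.

```python
import torch
import torch.nn as nn
import onnxruntime as ort

# Placeholder network standing in for whatever model was trained.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

# Export to ONNX; the dummy input fixes the exported input shape.
dummy = torch.randn(1, 1, 28, 28)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["logits"])

# Sanity-check the exported model with ONNX Runtime before deploying it.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
logits = session.run(None, {"input": dummy.numpy()})[0]
print(logits.shape)  # (1, 10)
```

For the GPU/TensorRT variant, the main change on the inference side is the providers list passed to `InferenceSession` (e.g. `["TensorrtExecutionProvider", "CUDAExecutionProvider"]`); the rest of the run call is the same.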
Inference and Deploy with Azure IoT Edge
Other
This section contains several samples that demonstrate how to use the ONNX Runtime Node.js binding.
- Basic Usage - a demonstration of basic usage of the ONNX Runtime Node.js binding.
- Create Tensor - a demonstration of basic usage of creating tensors.
- Create InferenceSession - shows how to create an `InferenceSession` in different ways.
In each sample's implementation subdirectory, run:

```sh
npm install
node ./
```