This is a repository for storing ONNX models.

- BVLC AlexNet (244 MByte)
- BVLC GoogleNet (28 MByte)
- BVLC CaffeNet (244 MByte)
- BVLC R-CNN ILSVRC13 (231 MByte)
- DenseNet-121 (33 MByte)
- Inception-v1 (28 MByte)
- Inception-v2 (45 MByte)
- ResNet-50 (103 MByte)
- ShuffleNet (5.3 MByte)
- SqueezeNet (5 MByte)
- VGG-19 (575 MByte)
- ZFNet-512 (349 MByte)
- MNIST (26 kByte)
- Emotion-FERPlus (34 MByte)
- Tiny-YOLOv2 (61 MByte)
Every ONNX backend should support running these models out of the box. After downloading and extracting the tarball of each model, there should be:

- A protobuf file `model.onnx`, which is the serialized ONNX model (a minimal loading sketch follows this list).
- Test data.
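As a quick sanity check, `model.onnx` can be loaded and validated with the `onnx` Python package before running anything. This is a minimal sketch, assuming the tarball has been extracted into the current working directory:

```python
import onnx

# Load the serialized ONNX model from the extracted tarball
model = onnx.load('model.onnx')

# Validate the model against the ONNX specification
onnx.checker.check_model(model)

# Print a human-readable summary of the graph
print(onnx.helper.printable_graph(model.graph))
```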
The test data are provided in two different formats:
- Serialized NumPy archives, stored as files named like `test_data_*.npz`; each file contains one set of test inputs and outputs. They can be used like this:
```python
import numpy as np
import onnx
import onnx_backend as backend

# Load the model and sample inputs and outputs
model = onnx.load(model_pb_path)
sample = np.load(npz_path, encoding='bytes')
inputs = list(sample['inputs'])
outputs = list(sample['outputs'])

# Run the model with an ONNX backend and verify the results
np.testing.assert_almost_equal(outputs, backend.run_model(model, inputs))
```
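  Note that `onnx_backend` here is a placeholder for whichever ONNX backend is being tested; for example, the Caffe2 backend can be imported as `caffe2.python.onnx.backend`, and any backend exposing the standard `run_model(model, inputs)` interface can be substituted.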
- Serialized protobuf `TensorProto`s, which are stored in folders named like `test_data_set_*`. They can be used as follows:
```python
import numpy as np
import onnx
import os
import glob
import onnx_backend as backend
from onnx import numpy_helper

model = onnx.load('model.onnx')
test_data_dir = 'test_data_set_0'

# Load inputs
inputs = []
inputs_num = len(glob.glob(os.path.join(test_data_dir, 'input_*.pb')))
for i in range(inputs_num):
    input_file = os.path.join(test_data_dir, 'input_{}.pb'.format(i))
    tensor = onnx.TensorProto()
    with open(input_file, 'rb') as f:
        tensor.ParseFromString(f.read())
    inputs.append(numpy_helper.to_array(tensor))

# Load reference outputs
ref_outputs = []
ref_outputs_num = len(glob.glob(os.path.join(test_data_dir, 'output_*.pb')))
for i in range(ref_outputs_num):
    output_file = os.path.join(test_data_dir, 'output_{}.pb'.format(i))
    tensor = onnx.TensorProto()
    with open(output_file, 'rb') as f:
        tensor.ParseFromString(f.read())
    ref_outputs.append(numpy_helper.to_array(tensor))

# Run the model on the backend
outputs = list(backend.run_model(model, inputs))

# Compare the results with reference outputs
for ref_o, o in zip(ref_outputs, outputs):
    np.testing.assert_almost_equal(ref_o, o)
```
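If no module implementing the `onnx_backend` interface is available, the same check can be driven through ONNX Runtime directly. The snippet below is a sketch, not part of the official test harness: it assumes the `onnxruntime` package is installed and reuses the `inputs` and `ref_outputs` lists loaded above, feeding the inputs to the session in graph-input order.

```python
import numpy as np
import onnxruntime as ort

# Create an inference session for the extracted model (CPU only)
sess = ort.InferenceSession('model.onnx', providers=['CPUExecutionProvider'])

# Map the loaded test inputs onto the graph's input names, in order
input_feed = {inp.name: tensor for inp, tensor in zip(sess.get_inputs(), inputs)}

# Run the model; passing None as the output list returns all graph outputs
outputs = sess.run(None, input_feed)

# Compare against the reference outputs with a small tolerance
for ref_o, o in zip(ref_outputs, outputs):
    np.testing.assert_allclose(ref_o, o, rtol=1e-3, atol=1e-5)
```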