🛠 A lite C++ toolkit of 100+ Awesome AI models, support ORT, MNN, NCNN, TNN and TensorRT. 🎉🎉
Deep learning API and server in C++14 with support for PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost, and t-SNE
FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation (ICRA 2021)
BEVDet implemented in C++ with TensorRT, achieving real-time performance on Orin
Deploy a Stable Diffusion model with ONNX/TensorRT + Triton Inference Server
NVIDIA-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU
ComfyUI Depth Anything (v1/v2) TensorRT custom node (up to 14x faster)
Based on TensorRT v8.0+, deploys detection, pose estimation, segmentation, and tracking for YOLO11 with C++ and Python APIs.
Based on TensorRT v8.0+, deploys detection, pose estimation, segmentation, and tracking for YOLOv8 with C++ and Python APIs.
YOLOv5 TensorRT implementations
Traffic analysis at a roundabout using computer vision
Using TensorRT for Inference Model Deployment.
Use DBNet to detect text or barcodes; knowledge distillation and Python TensorRT inference are also provided.
A YOLOv11 project implemented in C++ and optimized with NVIDIA TensorRT
Production-ready YOLOv8 segmentation deployment with TensorRT and ONNX support for CPU/GPU, including AI model integration guidance for Unitlab Annotate.
A TensorRT version of UNet, inspired by tensorrtx
ViTPose without MMCV dependencies
C++ TensorRT Implementation of NanoSAM
Based on TensorRT 8.2.4, compares inference speed across different TensorRT APIs.