TensorRT module in C/C++
Prerequisites
Linux
- yaml-cpp
  git clone https://github.com/jbeder/yaml-cpp
  cd yaml-cpp
  mkdir build && cd build
  cmake .. -DYAML_BUILD_SHARED_LIBS=ON
  make
  sudo make install
- OpenCV with CUDA support (C++/Python)
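  OpenCV must be built from source to enable CUDA. A minimal sketch, assuming the standard opencv/opencv_contrib layout (the cmake flags are real OpenCV build options; paths and versions are placeholders):
  git clone https://github.com/opencv/opencv
  git clone https://github.com/opencv/opencv_contrib
  cd opencv && mkdir build && cd build
  cmake .. -DWITH_CUDA=ON -DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules
  make -j$(nproc)
  sudo make install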
Windows
- Download dirent.h and place it in:
  Visual-Studio-Installed-Path\201x\Community\VC\Tools\MSVC\xx.xx.xxxxx\include
Build
git clone https://github.com/CuteBoiz/TensorRT_Parser_Cpp.git
cd TensorRT_Parser_Cpp
mkdir build && cd build
cmake .. -DTRT=/path/to/tensorrt  # e.g. cmake .. -DTRT=/home/pi/Libraries/TensorRT-8.4.3.1
make
Convert a model to a TensorRT engine
./tensorrt_cpp convert /path/to/config.yaml
Examples
./tensorrt_cpp convert ../config/onnx_config.yaml
./tensorrt_cpp convert ../config/onnx_config_dynamic.yaml
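For reference, a hypothetical conversion config. Every key name below is an illustrative assumption, not the tool's actual schema; the real fields are in the shipped files under config/:

# Hypothetical sketch only -- key names are assumed, see config/onnx_config.yaml for the real schema.
onnxPath: /path/to/model.onnx
enginePath: /path/to/model.trt
fp16: true
workspaceSize: 2048  # assumed to be in MB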
Run inference with a TensorRT engine
./tensorrt_cpp infer /path/to/trt_engine /path/to/data (softmax) (gpuID)
- data can be a path to an image, a video, or a folder of images.
- softmax (optional) applies softmax to the model output.
- gpuID (optional) selects which GPU runs inference on a multi-GPU system.
Examples
./tensorrt_cpp infer /home/username/classifier.trt image.jpg
./tensorrt_cpp infer classifier.trt ./test_images 1
./tensorrt_cpp infer classifier.trt video.mp4 softmax
./tensorrt_cpp infer ../classifier.trt ../images/ softmax 6
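Under the hood, infer follows the standard TensorRT C++ flow: select a GPU, deserialize the engine, and enqueue execution on a CUDA stream. A minimal sketch using the stock TensorRT 8.x API (this is not the repo's actual code; buffer allocation and pre/post-processing are elided):

#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iterator>
#include <iostream>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << "\n";
    }
};

int main() {
    cudaSetDevice(0);  // the gpuID argument maps to a call like this

    // Read the serialized engine from disk.
    std::ifstream file("classifier.trt", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    Logger logger;
    auto* runtime = nvinfer1::createInferRuntime(logger);
    auto* engine  = runtime->deserializeCudaEngine(blob.data(), blob.size());
    auto* context = engine->createExecutionContext();

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    // Allocate device buffers for each binding, copy the input in, then:
    // context->enqueueV2(bindings, stream, nullptr);
    cudaStreamSynchronize(stream);
    cudaStreamDestroy(stream);
    return 0;
}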
Support
- Multiple inputs.
- Multiple outputs.
- Non-image input.
- Channel-first and channel-last input (CHW/HWC).
- Softmax on 2D/3D/4D/5D tensors (see the sketch below).
- kINT/kBOOL/kFLOAT tensors.
Additions
- Switch primary GPU.
- Add CUDA streams (multi-GPU inference).
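To illustrate the N-D softmax support, a sketch of softmax over the innermost axis of a flattened row-major tensor (the axis choice and function name are assumptions, not the repo's implementation):

#include <algorithm>
#include <cmath>
#include <vector>

// Softmax over the last axis of a row-major tensor of any rank (2D-5D),
// flattened into one vector; lastDim is the size of the final axis.
void softmaxLastAxis(std::vector<float>& t, size_t lastDim) {
    for (size_t i = 0; i + lastDim <= t.size(); i += lastDim) {
        float maxV = *std::max_element(t.begin() + i, t.begin() + i + lastDim);
        float sum = 0.f;
        for (size_t j = 0; j < lastDim; ++j) {
            t[i + j] = std::exp(t[i + j] - maxV);  // subtract max for numerical stability
            sum += t[i + j];
        }
        for (size_t j = 0; j < lastDim; ++j) t[i + j] /= sum;
    }
}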