
Optimized deep head pose inference for limited compute and resources



ncnn-deep-head-pose

A simple, high-performance implementation of deep head pose inference using ncnn, optimized for limited resources. This project is based on the deep-head-pose project by Nataniel Ruiz; the details are in his CVPR Workshop paper. RetinaFace is used for the face detection step.
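For context, deep-head-pose (Hopenet) predicts yaw, pitch, and roll by classifying each angle into 66 bins of 3° covering [-99°, +99°], then taking the softmax expectation over the bins. A minimal sketch of decoding one angle from the network's raw bin scores (pure Python, toy values) could look like:

```python
import math

def logits_to_angle(logits):
    """Decode deep-head-pose bin scores for one angle (yaw, pitch, or roll).

    The network outputs 66 bin scores; each bin covers 3 degrees, spanning
    [-99, +99]. The continuous angle is the softmax expectation:
        angle = (sum_i p_i * i) * 3 - 99
    """
    m = max(logits)                            # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    expectation = sum(i * p for i, p in enumerate(probs))
    return expectation * 3 - 99

# Toy example: a sharp peak at bin 33 decodes to roughly the middle
# of the range, i.e. an angle near 0 degrees.
logits = [0.0] * 66
logits[33] = 10.0
angle = logits_to_angle(logits)
```

The same decode is applied independently to the three 66-way outputs of the network to get yaw, pitch, and roll in degrees.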

Workflow

Re-build ncnn for your target platform.

  • The official ncnn documentation shows in detail how to build and use ncnn on any platform.
  • If you use my Docker environment, the ncnn library is already built inside it at /home/ncnn_original/build, which contains the ncnn static library and the tools for converting and quantizing ncnn models.

Convert models to ncnn format.

  • The original deep-head-pose project uses the PyTorch framework, so we need to convert the PyTorch model to an ncnn model.
  • The ncnn wiki details this process here. After converting PyTorch to ONNX format, use the ncnn build tools to convert onnx -> ncnn. Inside my Docker environment they are ready to use in /home/ncnn_original/build/tools/onnx.
  • Note: Netron can visualize the network, making it easy to find the input and output node names that ncnn requires.

Build and run the test

Set up the environment

  • git clone https://github.com/docongminh/ncnn-deep-head-pose
  • cd ncnn-deep-head-pose
  • Enter the Docker environment: docker exec -it deep-head-pose bash
  • Change to the mounted source directory: cd /source
  • The ncnn build library is at: /home/ncnn_original

Build with CMake

  • In the project root inside Docker: mkdir -p build && cd build

  • Configure and build: cmake .. && make

  • Run test: ./main

  • Examples: demo output images cr7 and m10 are included in the repository.

Notes from project development

References