DD-Net (arXiv paper)
(A Double-feature Double-motion Network)
A lightweight network for body/hand action recognition, implemented in Keras with a TensorFlow backend. It can also serve as one of the simplest tutorial codebases for getting started with skeleton-based action recognition.
```
conda env create -f DD-Net_env.yml
```
Note: You can download the raw data and preprocess it with our code, or directly use our preprocessed data under /data.
JHMDB raw data download link: http://jhmdb.is.tue.mpg.de/challenge/JHMDB/datasets
SHREC raw data download link: http://www-rech.telecom-lille.fr/shrec2017-hand/
If you do not have enough resources to run this code locally, please use https://colab.research.google.com/drive/19gq3bUigdxIfyMCoWW93YhLEi1KQlBit. The preprocessed data is available under /data; download it, upload it to Colab (Files panel), and then run our code on Colab.
| No. parameters | SHREC-14 | SHREC-28 |
|---|---|---|
| 1.82 M | 94.6 | 91.9 |
| 0.15 M | 91.8 | 90.0 |
| No. parameters | JHMDB |
|---|---|
| 1.82 M | 77.2 |
| 0.50 M | 73.7 |
Note: if you want to benchmark inference speed, run model.predict() at least twice and discard the timing of the first run, since model initialization adds extra time to that run.
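The warm-up advice above can be sketched with a small timing helper (a minimal sketch; `fake_predict` is a placeholder we introduce here for illustration, standing in for the real `model.predict()` call):

```python
import time

def timed_runs(fn, n_runs=5):
    """Time a callable over several runs, discarding the first (warm-up) run.

    The first call often pays one-time initialization costs (e.g. model
    loading, graph compilation), so it is reported separately and excluded
    from the average.
    """
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    warmup, rest = timings[0], timings[1:]
    return warmup, sum(rest) / len(rest)

# Placeholder for model.predict(x); swap in the real call when benchmarking.
def fake_predict():
    return sum(i * i for i in range(10_000))

warmup_time, avg_time = timed_runs(fake_predict)
print(f"warm-up run: {warmup_time:.6f}s, average of later runs: {avg_time:.6f}s")
```

Reporting the average of the post-warm-up runs gives a more stable estimate of steady-state inference speed.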
If you find this code helpful, please consider citing our work:
```
@inproceedings{yang2019ddnet,
  title={Make Skeleton-based Action Recognition Model Smaller, Faster and Better},
  author={Fan Yang and Sakriani Sakti and Yang Wu and Satoshi Nakamura},
  booktitle={ACM International Conference on Multimedia in Asia},
  year={2019}
}
```