
Deep Image-to-Video Adaptation and Fusion Networks for Action Recognition (accepted by IEEE Transactions on Image Processing)

ORCID iD: orcid.org/0000-0002-9423-9252

Homepage: https://yangliu9208.github.io/home/

Network architecture


Figure 1: Configuration of the deep neural network for the image modality. "f" denotes the number of convolutional filters and their receptive field size, "st" denotes the convolutional stride, "pad" denotes the number of pixels added to each side of the input, "LRN" denotes whether Local Response Normalization (LRN) is applied, and "pool" denotes the downsampling factor.
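The notation above determines each layer's spatial output size via the standard convolution/pooling arithmetic. A minimal sketch (the layer values below are illustrative, not necessarily the exact configuration used in the paper):

```python
def conv_out_size(in_size, kernel, stride, pad):
    """Spatial output size of a convolution:
    floor((in + 2*pad - kernel) / stride) + 1."""
    return (in_size + 2 * pad - kernel) // stride + 1

def pool_out_size(in_size, factor):
    """Spatial output size after downsampling by "pool" factor."""
    return in_size // factor

# Example (hypothetical values): a 224x224 input through a 7x7 conv
# with st=2, pad=3, followed by a pool factor of 2.
after_conv = conv_out_size(224, kernel=7, stride=2, pad=3)  # 112
after_pool = pool_out_size(after_conv, factor=2)            # 56
print(after_conv, after_pool)
```

Applying these two formulas layer by layer recovers the feature-map sizes for any column of the configuration tables in Figures 1-3.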


Figure 2: Configuration of the deep neural network for the keyframe modality.


Figure 3: Configuration of the deep neural network for the video modality.

Datasets


The Stanford40->UCF101 dataset can be downloaded here: Stanford40

The ASD->UCF101 dataset can be downloaded here: ASD

The EAD->HMDB51 dataset can be downloaded here: EAD, HMDB51

The BU101->UCF101 dataset can be downloaded here: BU101, UCF101

Codes

Code will be available soon.
