(accepted by IEEE Transactions on Image Processing)
Homepage: https://yangliu9208.github.io/home/
Existing deep learning methods for action recognition in videos require a large number of labeled videos for training, which is labor-intensive and time-consuming. For the same action, the knowledge learned from different media types, e.g., videos and images, may be related and complementary. However, due to the domain shift and heterogeneous feature representations between videos and images, the performance of classifiers trained on images may degrade dramatically when they are directly deployed to videos without effective domain adaptation and feature fusion methods. In this paper, we propose a novel method, named Deep Image-to-Video Adaptation and Fusion Networks (DIVAFN), to enhance action recognition in videos by transferring knowledge from images using video keyframes as a bridge. DIVAFN is a unified deep learning model that integrates domain-invariant representation learning and cross-modal feature fusion into a single optimization framework. Specifically, we design an efficient cross-modal similarity metric to reduce the modality shift among images, keyframes and videos. Then, we adopt an autoencoder architecture whose hidden layer is constrained to be the semantic representation of the action class names. In this way, when the autoencoder is used to project the learned features from different domains into the same space, more compact, informative and discriminative representations can be obtained. Finally, the concatenation of the learned semantic feature representations from these three autoencoders is used to train the classifier for action recognition in videos. Comprehensive experiments on four real-world datasets show that our method outperforms some state-of-the-art domain adaptation and action recognition methods.
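To make the pipeline described in the abstract concrete, here is a minimal PyTorch sketch. This is not the released DIVAFN code: the one-layer encoders/decoders, the feature dimensions, the word2vec class-name embeddings, and the cosine-based alignment term standing in for the paper's cross-modal similarity metric are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticAutoencoder(nn.Module):
    """Autoencoder whose hidden layer is tied to the semantic space of action class names."""
    def __init__(self, feat_dim, sem_dim):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, sem_dim)   # project a modality feature into the semantic space
        self.decoder = nn.Linear(sem_dim, feat_dim)   # reconstruct the original feature

    def forward(self, x):
        h = self.encoder(x)        # semantic representation (hidden layer)
        return h, self.decoder(h)  # hidden code and reconstruction

def alignment_loss(h_a, h_b):
    # Stand-in for the paper's cross-modal similarity metric: pull paired samples
    # (the same action instance seen in two modalities) toward similar codes.
    return (1.0 - F.cosine_similarity(h_a, h_b)).mean()

def divafn_loss(feats, class_name_emb, autoencoders):
    """feats: dict of aligned per-modality features {'image','keyframe','video'} -> (B, feat_dim).
    class_name_emb: (B, sem_dim) semantic embeddings (e.g., word2vec) of each sample's action class name."""
    hidden, loss = {}, 0.0
    for m, ae in autoencoders.items():
        h, rec = ae(feats[m])
        hidden[m] = h
        loss = loss + F.mse_loss(rec, feats[m])       # autoencoder reconstruction term
        loss = loss + F.mse_loss(h, class_name_emb)   # hidden layer constrained to class-name semantics
    for a, b in [("image", "keyframe"), ("keyframe", "video"), ("image", "video")]:
        loss = loss + alignment_loss(hidden[a], hidden[b])  # reduce modality shift pairwise
    fused = torch.cat([hidden["image"], hidden["keyframe"], hidden["video"]], dim=1)
    return loss, fused  # `fused` is the concatenated semantic representation fed to the action classifier

# Hypothetical setup: 4096-d CNN features per modality, 300-d word2vec class embeddings, 101 actions.
autoencoders = nn.ModuleDict({m: SemanticAutoencoder(4096, 300) for m in ("image", "keyframe", "video")})
classifier = nn.Linear(3 * 300, 101)
```

The point of the sketch is that all three modalities are projected into the same class-name semantic space, so their hidden codes are directly comparable across domains and can simply be concatenated to train the final video action classifier.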
Figure 1: Configuration of the deep neural network for image modality. "f" denotes the number of convolutional filters and their receptive field size, "st" denotes the convolutional stride, "pad" denotes the number of pixels added to each side of the input, "LRN" denotes whether Local Response Normalization (LRN) is applied or not, and "pool" denotes the downsampling factor.
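As a reading aid for this notation, a convolutional block described by ("f", "st", "pad", "LRN", "pool") could be instantiated as below. The concrete values (96 filters of size 11x11, stride 4, padding 0, LRN applied, pooling factor 2) are hypothetical AlexNet-style settings chosen for illustration, not values read from the figures.

```python
import torch.nn as nn

# Hypothetical first convolutional block, using the caption's notation:
# f: 96 filters of size 11x11, st: 4, pad: 0, LRN: yes, pool: 2
conv_block = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=0),  # f, st, pad
    nn.ReLU(inplace=True),
    nn.LocalResponseNorm(size=5),                            # LRN applied
    nn.MaxPool2d(kernel_size=2),                             # pool (downsampling factor)
)
```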
Figure 2: Configuration of the deep neural network for keyframe modality.
Figure 3: Configuration of the deep neural network for video modality.
Stanford40->UCF101 dataset can be downloaded here. (Original dataset: Stanford40)
ASD->UCF101 dataset can be downloaded here. (Original dataset: ASD)
EAD->HMDB51 dataset can be downloaded here. (Original datasets: EAD, HMDB51)
BU101->UCF101 dataset can be downloaded here. (Original datasets: BU101, UCF101)
Will be available soon.