deep_impression

Deep Impression: Audiovisual Deep Residual Networks for Multimodal Apparent Personality Trait Recognition

This repository contains the implementation of the model that won third place in the ChaLearn First Impressions Challenge @ ECCV2016.

The details of the model can be found in:

Yağmur Güçlütürk, Umut Güçlü, Marcel van Gerven, Rob van Lier. Deep Impression: Audiovisual Deep Residual Networks for Multimodal Apparent Personality Trait Recognition. ChaLearn Looking at People Workshop on Apparent Personality Analysis, ECCV Workshop proceedings, LNCS, Springer, 2016, in press.

ABSTRACT

Here, we develop an audiovisual deep residual network for multimodal apparent personality trait recognition. The network is trained end-to-end to predict the Big Five personality traits of people from their videos. That is, the network does not require any feature engineering or visual analysis such as face detection, face landmark alignment or facial expression recognition. Recently, the network won third place in the ChaLearn First Impressions Challenge @ ECCV2016 with a test accuracy of 0.9109.
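To illustrate the general idea of an end-to-end audiovisual residual network, here is a minimal sketch in Chainer (the framework listed under REQUIREMENTS). It is not the authors' architecture: the layer sizes, the single residual block per stream, the fusion by concatenation, and the sigmoid output over five traits are all illustrative assumptions.

```python
# Minimal two-stream audiovisual sketch in Chainer (illustrative assumptions,
# not the architecture from the paper or this repository).
import chainer
import chainer.functions as F
import chainer.links as L


class ResBlock(chainer.Chain):
    """A basic 2D residual block with an identity shortcut."""

    def __init__(self, channels):
        super(ResBlock, self).__init__()
        with self.init_scope():
            self.conv1 = L.Convolution2D(channels, channels, ksize=3, pad=1)
            self.conv2 = L.Convolution2D(channels, channels, ksize=3, pad=1)

    def __call__(self, x):
        h = F.relu(self.conv1(x))
        h = self.conv2(h)
        return F.relu(h + x)


class AudioVisualNet(chainer.Chain):
    """Audio stream (1-channel spectrogram) and video stream (RGB frame)
    fused into five Big Five trait scores."""

    def __init__(self):
        super(AudioVisualNet, self).__init__()
        with self.init_scope():
            self.a_in = L.Convolution2D(1, 32, ksize=3, pad=1)   # audio stream
            self.a_res = ResBlock(32)
            self.v_in = L.Convolution2D(3, 32, ksize=3, pad=1)   # video stream
            self.v_res = ResBlock(32)
            self.fc = L.Linear(None, 5)                          # five traits

    def __call__(self, audio, frame):
        # Global average pooling over the spatial axes of each stream,
        # then concatenation and a linear readout.
        a = F.average(self.a_res(F.relu(self.a_in(audio))), axis=(2, 3))
        v = F.average(self.v_res(F.relu(self.v_in(frame))), axis=(2, 3))
        return F.sigmoid(self.fc(F.concat((a, v))))  # scores in [0, 1]
```

The sigmoid output reflects the challenge's trait annotations lying in [0, 1]; how the actual model pools over time and fuses the two modalities is described in the paper.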

The demo of the model can be found in:

demo.ipynb.

REQUIREMENTS

System: CUDA Toolkit, cuDNN (and a suitable NVIDIA GPU).
Python: chainer, librosa, numpy, skvideo.
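The listed I/O packages (librosa and skvideo) hint at how a clip is turned into network inputs. Below is a minimal loading sketch; the file name example_clip.mp4, the 16 kHz sample rate, the mel-spectrogram settings, and the middle-frame selection are illustrative assumptions, not the repository's actual preprocessing.

```python
# Minimal sketch of loading one clip's audio and video (assumptions noted inline).
import librosa
import numpy as np
import skvideo.io

video_path = "example_clip.mp4"  # hypothetical input file

# Read all frames as a (num_frames, height, width, 3) uint8 array.
frames = skvideo.io.vread(video_path)

# Load the audio track; assumes an audio backend (e.g. ffmpeg) that can decode
# the container. Resampling to 16 kHz mono is an assumption.
waveform, sr = librosa.load(video_path, sr=16000, mono=True)

# Example tensors for a single sample: one RGB frame and a log-mel
# spectrogram treated as a single-channel "image".
frame = frames[len(frames) // 2].astype(np.float32) / 255.0        # (H, W, 3)
mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel).astype(np.float32)              # (64, T)

print(frame.shape, log_mel.shape)
```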

Please cite the above paper if you use the model in your work.
