This repository provides a PyTorch implementation of the research paper, Deep-Emotion.
Note: This implementation is not the official one described in the paper.
- An end-to-end deep learning framework based on an attentional convolutional network.
- The attention mechanism is incorporated using a spatial transformer network (see the sketch below).
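For reference, here is a minimal, self-contained sketch of how a spatial transformer block can provide that attention in PyTorch. It is not the repository's exact `deep_emotion` code; the layer sizes assume 48×48 grayscale inputs (FER2013's format):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STNAttention(nn.Module):
    """Spatial transformer block: learns an affine warp that focuses
    the downstream classifier on the salient facial region."""
    def __init__(self):
        super().__init__()
        # Localization network: extracts features used to predict the transform.
        self.localization = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(True),
            nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(True),
        )
        # Regressor for the 2x3 affine matrix (10*8*8 matches 48x48 inputs).
        self.fc_loc = nn.Sequential(
            nn.Linear(10 * 8 * 8, 32), nn.ReLU(True), nn.Linear(32, 6),
        )
        # Start from the identity transform so training is stable early on.
        self.fc_loc[2].weight.data.zero_()
        self.fc_loc[2].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        xs = self.localization(x)
        theta = self.fc_loc(xs.view(xs.size(0), -1)).view(-1, 2, 3)
        # Warp the input with the predicted transform.
        grid = F.affine_grid(theta, x.size())
        return F.grid_sample(x, grid)
```

The localization network regresses an affine matrix and `grid_sample` warps the input accordingly, so the attention is learned end to end with the rest of the network.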
This implementation uses the following dataset:
- FER2013 (available on Kaggle)
Make sure you have the following libraries installed:
- PyTorch >= 1.1.0
- torchvision == 0.5.0
- OpenCV
- tqdm
- Pillow (PIL)
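If you use pip, the following one-liner is one way to install them (a sketch; pin the PyTorch build that matches your CUDA setup):

```
pip install torch torchvision==0.5.0 opencv-python tqdm Pillow
```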
This repository is organized as follows:
- `main`: Contains setup for the dataset and the training loop.
- `visualize`: Includes source code for evaluating the model on test data and real-time testing using a webcam.
- `deep_emotion`: Defines the model class.
- `data_loaders`: Contains the dataset class.
- `generate_data`: Sets up the dataset.
Watch this informative video created by DeepLearning_by_PhDScholar for a comprehensive guide on using this implementation.
- Download the dataset from Kaggle.
- Decompress `train.csv` and `test.csv` into the `./data` folder.
To set up the dataset
python main.py [-s [True]] [-d [data_path]]
--setup Set up the dataset for the first time
--data Data folder that contains data files
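Under the hood, the setup step turns the CSV rows into images the data loader can consume. Below is a minimal sketch of that conversion, assuming the FER2013 format (a `pixels` column holding 2304 space-separated grayscale values per 48×48 face). It uses pandas, which the prerequisites above don't list, and the helper name and output layout are hypothetical; see `generate_data` for the real logic:

```python
import os

import numpy as np
import pandas as pd
from PIL import Image

def csv_to_images(csv_path, out_dir):
    """Hypothetical helper: write each CSV row as a 48x48 grayscale PNG."""
    os.makedirs(out_dir, exist_ok=True)
    df = pd.read_csv(csv_path)
    for i, row in df.iterrows():
        # FER2013 stores each face as 2304 space-separated pixel values.
        face = np.array(row['pixels'].split(), dtype=np.uint8).reshape(48, 48)
        Image.fromarray(face).save(os.path.join(out_dir, '{}.png'.format(i)))

csv_to_images('./data/train.csv', './data/train')
```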
To train the model
python main.py [-t] [--data [data_path]] [--hparams [hyperparams]]
[--epochs] [--learning_rate] [--batch_size]
--data Data folder that contains training and validation files
--train Set to True when training the model
--hparams Set to True when changing the default hyperparameters
--epochs Number of epochs
--learning_rate Learning rate value
--batch_size Training/validation batch size
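For example, a full training invocation might look like this (the values are illustrative, not the repository's defaults; check the argument parsing in `main` for the exact flag behavior):

python main.py -t --data ./data --hparams True --epochs 100 --learning_rate 0.005 --batch_size 128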
To validate the model
python visualize.py [-t] [-c] [--data [data_path]] [--model [model_path]]
--data Data folder that contains test images and test CSV file
--model Path to pretrained model
--test_acc Calculate the test accuracy
--cam Test the model in real-time with webcam connected via USB
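For reference, here is a minimal sketch of such a webcam loop, assuming 48×48 grayscale inputs and the standard FER2013 label order. The model class name `Deep_Emotion` and the checkpoint path are assumptions; check `deep_emotion` and `visualize` for the real ones:

```python
import cv2
import numpy as np
import torch

from deep_emotion import Deep_Emotion  # assumed class name

# Standard FER2013 label order (assumption).
CLASSES = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Deep_Emotion()
model.load_state_dict(torch.load('model.pt', map_location=device))  # hypothetical path
model.to(device).eval()

cap = cv2.VideoCapture(0)  # first connected webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Convert to the 48x48 grayscale format the model was trained on.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face = cv2.resize(gray, (48, 48)).astype(np.float32) / 255.0
    tensor = torch.from_numpy(face).unsqueeze(0).unsqueeze(0).to(device)
    with torch.no_grad():
        pred = model(tensor).argmax(dim=1).item()
    cv2.putText(frame, CLASSES[pred], (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow('Deep-Emotion', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```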