I am a computer science researcher specializing in computer graphics and vision. My research interests include VR, 3D animation, image synthesis, and image/video editing.
- 🔭 I’m currently working on Facial Expression Transfer
Muhammad Mamunur Rashid, Shihao Wu, Yongwei Nie, Guiqing Li
The Visual Computer (CGI 2023), July 2023
We propose a GAN-based facial expression transfer method that transfers the facial expression of a reference subject to a source subject while preserving the source identity attributes such as shape, appearance, and illumination. Our method consists of two GAN-based modules: Parts Generation Networks (PGNs) and a Parts Fusion Network (PFN). Instead of training the model globally on the entire image, our key idea is to train separate PGNs for different local facial parts independently and then fuse the generated parts with the PFN. To encode the facial expression faithfully, we use a pre-trained parametric 3D head model (photometric FLAME) to reconstruct realistic head models from both the source and reference images. We also extract 3D facial feature points from the reference image to handle extreme poses and occlusions. Based on the extracted contextual information, the PGNs generate different parts of the head independently, and the PFN fuses all generated parts into the final image. Experiments show that the proposed model outperforms state-of-the-art approaches in faithfully transferring facial expressions, especially when the reference image has a different head pose from the source image. Ablation studies demonstrate the benefit of using PGNs.
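Below is a minimal PyTorch-style sketch of the parts-then-fusion idea described in the abstract: several independent part generators (PGNs) conditioned on an expression code, followed by a fusion network (PFN). All module names, layer sizes, crop regions, and the expression-code dimension are illustrative assumptions, not the paper's released implementation.

```python
# Sketch of the PGN/PFN pipeline described above (illustrative only).
import torch
import torch.nn as nn

class PartGenerator(nn.Module):
    """One PGN: generates a single facial part (e.g. a mouth region)
    conditioned on the source crop and a reference expression code."""
    def __init__(self, expr_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
        )
        self.expr_proj = nn.Linear(expr_dim, 64)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, part_crop, expr_code):
        feat = self.encoder(part_crop)                      # B x 64 x h x w
        expr = self.expr_proj(expr_code)[..., None, None]   # B x 64 x 1 x 1
        return self.decoder(feat + expr)                    # generated part

class PartsFusionNetwork(nn.Module):
    """PFN: fuses the independently generated parts into one face image."""
    def __init__(self, num_parts=4):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * num_parts, 64, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, 1, 1), nn.Tanh(),
        )

    def forward(self, parts):
        return self.fuse(torch.cat(parts, dim=1))

# Toy forward pass: four hypothetical 64x64 part crops of a source face,
# each paired with an expression code extracted from the reference image
# (in the paper, this role is played by the photometric FLAME reconstruction).
pgns = nn.ModuleList([PartGenerator() for _ in range(4)])
pfn = PartsFusionNetwork(num_parts=4)
crops = [torch.randn(1, 3, 64, 64) for _ in range(4)]
expr = torch.randn(1, 64)
fused = pfn([g(c, expr) for g, c in zip(pgns, crops)])
print(fused.shape)  # torch.Size([1, 3, 64, 64])
```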
Muhammad Mamunur Rashid, Guiqing Li, Chengrui Du
IET Signal Processing, June 2023
Human non-speech sounds occur during expression in real-life environments. Recognising a person's inability to produce certain expressions through non-speech sounds may assist in identifying early disorders in medical applications. A novel dataset named Nonspeech7k is introduced, containing a diverse set of human non-speech sounds: breathing, coughing, crying, laughing, screaming, sneezing, and yawning. The authors then conduct a variety of classification experiments with end-to-end deep convolutional neural networks (CNNs) to characterise the dataset. First, a set of typical deep classifiers is used to verify the reliability and validity of Nonspeech7k. The CNN models involved include the 1D-2D deep CNN EnvNet, the deep stacked CNNs M11 and M18, the residual-block CNN ResNet34, a modified M11 named M12, and the authors' baseline model. Among these, M12 achieves the highest accuracy of 79%. Second, to verify the heterogeneity of Nonspeech7k with respect to two typical datasets, FSD50K and VocalSound, the authors design a series of experiments analysing the classification performance of the deep neural network classifier M12 when trained on FSD50K, FSD50K + Nonspeech7k, VocalSound, and VocalSound + Nonspeech7k, respectively. Experimental results show that the classifier trained on an existing dataset mixed with Nonspeech7k achieves an accuracy improvement of up to 15.7% over the same classifier trained without Nonspeech7k. Nonspeech7k is 100% annotated, fully checked, and free of noise.
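The mixed-dataset comparison described above can be sketched as follows: train the same classifier once on an existing dataset alone and once with Nonspeech7k added, then evaluate both on a common test set. The small 1D CNN, the training loop, and the dataset variables below are simplified assumptions rather than the paper's M12 configuration.

```python
# Sketch of the "with vs. without Nonspeech7k" experiment (illustrative only).
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader

NUM_CLASSES = 7  # breathing, coughing, crying, laughing, screaming, sneezing, yawning

class SmallAudioCNN(nn.Module):
    """Tiny 1D CNN over raw waveforms, loosely in the spirit of the M11/M12 family."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=80, stride=4), nn.BatchNorm1d(32), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3), nn.BatchNorm1d(64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, wave):                      # wave: B x 1 x num_samples
        return self.classifier(self.features(wave).squeeze(-1))

def train_and_eval(train_set, test_set, epochs=10):
    """Train a fresh classifier on train_set and report accuracy on test_set."""
    model = SmallAudioCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(train_set, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for wave, label in loader:
            opt.zero_grad()
            loss_fn(model(wave), label).backward()
            opt.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for wave, label in DataLoader(test_set, batch_size=32):
            correct += (model(wave).argmax(1) == label).sum().item()
            total += label.numel()
    return correct / total

# Hypothetical datasets yielding (waveform, label) pairs with a shared label set:
# acc_base  = train_and_eval(vocalsound_train, vocalsound_test)
# acc_mixed = train_and_eval(ConcatDataset([vocalsound_train, nonspeech7k_train]),
#                            vocalsound_test)
```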