This fork of StyleGAN 2 edits the generated face's expression by moving its latent code along a predefined editing direction. It then incorporates elements of StylePortraitVideo's expression-optimization process so that the edited image's lip expression closely matches the original's while the rest of the edited expression is preserved.
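The editing step described above can be sketched as a shift of the latent code along a fixed direction. This is a minimal illustration, not code from this repository: the function name, the unit-normalization, and the strength parameter are all assumptions.

```python
import numpy as np

# Hedged sketch: the edit is assumed to be a simple shift of the latent
# code along a unit-normalized direction; names and the strength value
# are hypothetical, not taken from this repository's code.
def apply_edit(w, direction, strength):
    direction = direction / np.linalg.norm(direction)  # unit-length direction
    return w + strength * direction  # move the latent along the direction

w = np.zeros(512)   # stand-in for a StyleGAN 2 latent code
d = np.ones(512)    # stand-in for a learned expression direction
w_edited = apply_edit(w, d, 3.0)
```

Larger strengths push the expression further from the original; the optimization step afterwards compensates for unwanted changes around the lips.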
This fork works with the following configuration:
- PyTorch 1.3.1
- CUDA 10.1/10.2
python generate.py --sample N_FACES --pics N_PICS --ckpt PATH_CHECKPOINT
Change `--size` to match your model's training resolution (for example, `--size 256`) if you trained at a dimension other than the default.
python generate_and_copy_lips.py --sample N_FACES --pics N_PICS --ckpt PATH_CHECKPOINT
Change `--size` to match your model's training resolution (for example, `--size 256`) if you trained at a dimension other than the default.
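The lip-matching step of `generate_and_copy_lips.py` can be sketched as an optimization that pulls only the lip region of the edited image back toward the original. This toy version uses a masked squared-error loss with plain gradient descent as a stand-in for the LPIPS-based optimization; the function name, mask, and step size are hypothetical.

```python
import numpy as np

# Hedged sketch: minimize a masked L2 distance between the edited image
# and the original, so only the "lip" region moves. The real repository
# uses an LPIPS perceptual loss; this numpy version only illustrates the
# masked-optimization idea.
def match_lips(edited, original, lip_mask, steps=50, lr=0.5):
    img = edited.copy()
    for _ in range(steps):
        grad = 2.0 * lip_mask * (img - original)  # gradient of masked squared error
        img -= lr * grad
    return img

rng = np.random.default_rng(0)
original = rng.random((8, 8))
edited = rng.random((8, 8))
mask = np.zeros((8, 8))
mask[5:8, 2:6] = 1.0  # toy "lip" region
result = match_lips(edited, original, mask)
```

Inside the mask the result converges to the original; outside the mask the edited image is left untouched, which is what preserves the changed expression.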
(Figure: side-by-side samples of the generated image, the edited image, and the optimized image.)
Model details and custom CUDA kernel code are from the official repository: https://github.com/NVlabs/stylegan2
Code for Learned Perceptual Image Patch Similarity (LPIPS) came from https://github.com/richzhang/PerceptualSimilarity
To match FID scores more closely to the official TensorFlow implementation, I used the FID Inception V3 implementation from https://github.com/mseitzer/pytorch-fid