
Delving Deep into Intrinsic Image Harmonization

Here we provide the PyTorch implementation and pre-trained models of our latest version. If you need the code of our previous CVPR version ("Intrinsic Image Harmonization"), please refer to the corresponding release.

Prerequisites

  • Linux
  • Python 3
  • CPU or NVIDIA GPU + CUDA CuDNN
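
To confirm that PyTorch can actually see your GPU before training, a quick check (a minimal sketch using standard PyTorch calls):

import torch

# Report whether CUDA (and cuDNN) is usable; the scripts fall back to CPU otherwise.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("cuDNN enabled:", torch.backends.cudnn.enabled)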

Datasets
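
The IHD setting expects the iHarmony4 sub-datasets (HCOCO, HAdobe5k, HFlickr, Hday2night) under <dataset_dir>; the newIHD setting additionally uses HVIDIT. A minimal sanity-check sketch for the per-sub-dataset layout of the iHarmony4 release (composite_images/, masks/, real_images/, and the train/test split lists; this layout is an assumption based on the public dataset release, not on this repository's loader, so adjust names if your copy differs):

import os

dataset_root = "path/to/IHD"  # hypothetical path; point this at your <dataset_dir>
subsets = ["HCOCO", "HAdobe5k", "HFlickr", "Hday2night"]  # add "HVIDIT" for newIHD
for name in subsets:
    # Expected image directories for each sub-dataset.
    for sub in ("composite_images", "masks", "real_images"):
        p = os.path.join(dataset_root, name, sub)
        print(p, "OK" if os.path.isdir(p) else "MISSING")
    # Expected train/test split lists.
    for split in (name + "_train.txt", name + "_test.txt"):
        p = os.path.join(dataset_root, name, split)
        print(p, "OK" if os.path.isfile(p) else "MISSING")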

Base Model

  • Train
CUDA_VISIBLE_DEVICES=0 python train.py --model iih_base --name iih_base_allihd_test --dataset_root <dataset_dir> --dataset_name IHD --batch_size xx --init_port xxxx
  • Test
CUDA_VISIBLE_DEVICES=0 python test.py --model iih_base --name iih_base_allihd_test --dataset_root <dataset_dir> --dataset_name IHD --batch_size xx --init_port xxxx
  • Apply pre-trained model

Download the pre-trained model from Google Drive or BaiduCloud (access code: n4js), and put latest_net_G.pth in the directory checkpoints/iih_base_allihd. Run:

CUDA_VISIBLE_DEVICES=0 python test.py --model iih_base --name iih_base_allihd --dataset_root <dataset_dir> --dataset_name IHD --batch_size xx --init_port xxxx
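
Before running the test script, you can sanity-check the downloaded weights. A minimal sketch, assuming the .pth file is a plain PyTorch state dict (as in the CycleGAN-derived codebases this repository acknowledges):

import torch

# Load the generator weights on CPU just to confirm the file is intact.
path = "checkpoints/iih_base_allihd/latest_net_G.pth"
state_dict = torch.load(path, map_location="cpu")

# Print a few parameter names and shapes for inspection.
print(len(state_dict), "tensors")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))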

Base Model with Lighting

  • Train
CUDA_VISIBLE_DEVICES=0 python train.py --model iih_base_lt --name iih_base_lt_allihd_test --dataset_root <dataset_dir> --dataset_name IHD --batch_size xx --init_port xxxx
  • Test
CUDA_VISIBLE_DEVICES=0 python test.py --model iih_base_lt --name iih_base_lt_allihd_test --dataset_root <dataset_dir> --dataset_name IHD --batch_size xx --init_port xxxx
  • Apply pre-trained model

Download the pre-trained model from Google Drive or BaiduCloud (access code: hqhw), and put latest_net_G.pth in the directory checkpoints/iih_base_lt_allihd. Run:

CUDA_VISIBLE_DEVICES=0 python test.py --model iih_base_lt --name iih_base_lt_allihd --dataset_root <dataset_dir> --dataset_name IHD --batch_size xx --init_port xxxx

Base Model with Guiding

  • Train
CUDA_VISIBLE_DEVICES=0 python train.py --model iih_base_gd --name iih_base_gd_allihd_test --dataset_root <dataset_dir> --dataset_name IHD --batch_size xx --init_port xxxx
  • Test
CUDA_VISIBLE_DEVICES=0 python test.py --model iih_base_gd --name iih_base_gd_allihd_test --dataset_root <dataset_dir> --dataset_name IHD --batch_size xx --init_port xxxx
  • Apply pre-trained model

Download the pre-trained model from Google Drive or BaiduCloud (access code: nqrc), and put latest_net_G.pth in the directory checkpoints/iih_base_gd_allihd. Run:

CUDA_VISIBLE_DEVICES=0 python test.py --model iih_base_gd --name iih_base_gd_allihd --dataset_root <dataset_dir> --dataset_name IHD --batch_size xx --init_port xxxx

Base Model with Lighting and Guiding

  • Train
CUDA_VISIBLE_DEVICES=0 python train.py --model iih_base_lt_gd --name iih_base_lt_gd_allihd_test --dataset_root <dataset_dir> --dataset_name IHD --batch_size xx --init_port xxxx
  • Test
CUDA_VISIBLE_DEVICES=0 python test.py --model iih_base_lt_gd --name iih_base_lt_gd_allihd_test --dataset_root <dataset_dir> --dataset_name IHD --batch_size xx --init_port xxxx
  • Apply pre-trained model

Download the pre-trained model from Google Drive or BaiduCloud (access code: kmgp), and put latest_net_G.pth in the directory checkpoints/iih_base_lt_gd_allihd. Run:

CUDA_VISIBLE_DEVICES=0 python test.py --model iih_base_lt_gd --name iih_base_lt_gd_allihd --dataset_root <dataset_dir> --dataset_name IHD --batch_size xx --init_port xxxx

Base Model with Lighting and Guiding on iHarmony4 and HVIDIT Datasets

  • Train
CUDA_VISIBLE_DEVICES=0 python train.py --model iih_base_lt_gd --name iih_base_lt_gd_newihd_test --dataset_root <dataset_dir> --dataset_name newIHD --batch_size xx --init_port xxxx
  • Test
CUDA_VISIBLE_DEVICES=0 python test.py --model iih_base_lt_gd --name iih_base_lt_gd_newihd_test --dataset_root <dataset_dir> --dataset_name newIHD --batch_size xx --init_port xxxx
  • Apply pre-trained model

Download the pre-trained model from Google Drive or BaiduCloud (access code: jnhg), and put latest_net_G.pth in the directory checkpoints/iih_base_lt_gd_newihd. Run:

CUDA_VISIBLE_DEVICES=0 python test.py --model iih_base_lt_gd --name iih_base_lt_gd_newihd --dataset_root <dataset_dir> --dataset_name newIHD --batch_size xx --init_port xxxx

Evaluation

We provide the evaluation code in evaluation/ih_evaluation.py. Run:

# iHarmony4 dataset
CUDA_VISIBLE_DEVICES=0 python evaluation/ih_evaluation.py --dataroot <dataset_dir> --result_root results/experiment/test_latest/images/ --evaluation_type our --dataset_name ALL
# iHarmony4 and HVIDIT datasets
CUDA_VISIBLE_DEVICES=0 python evaluation/ih_evaluation.py --dataroot <dataset_dir> --result_root results/experiment/test_latest/images/ --evaluation_type our --dataset_name newALL
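
The f-prefixed metrics below (fMSE, fPSNR, fSSIM) are computed over the foreground (composited) region given by the mask rather than the whole image. A minimal sketch of MSE/PSNR and their foreground variants, for illustration only (a hypothetical helper, not the exact logic of ih_evaluation.py):

import numpy as np

def mse_psnr(pred, target, mask=None, peak=255.0):
    # MSE and PSNR over the whole image; with a binary foreground
    # mask, the same quantities over foreground pixels only (fMSE/fPSNR).
    pred = pred.astype(np.float64)
    target = target.astype(np.float64)
    sq_err = (pred - target) ** 2
    if mask is not None:
        sq_err = sq_err[mask.astype(bool)]  # keep foreground pixels only
    mse = sq_err.mean()
    psnr = 10.0 * np.log10(peak ** 2 / max(mse, 1e-10))
    return mse, psnr

For example, mse_psnr(harmonized, real) gives the whole-image scores and mse_psnr(harmonized, real, mask) gives the foreground variants.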

Quantitative Results

| Dataset | Metric | Composite | Ours (iHarmony4) | Ours (iHarmony4+HVIDIT) |
| --- | --- | --- | --- | --- |
| HCOCO | MSE | 69.37 | 21.61 | 21.51 |
| HCOCO | PSNR | 33.99 | 37.82 | 37.81 |
| HCOCO | SSIM | 0.9853 | 0.9812 | 0.9812 |
| HCOCO | fMSE | 996.59 | 361.94 | 363.76 |
| HCOCO | fPSNR | 19.86 | 24.17 | 24.17 |
| HCOCO | fSSIM | 0.8257 | 0.8736 | 0.8735 |
| HAdobe5k | MSE | 345.54 | 40.67 | 39.27 |
| HAdobe5k | PSNR | 28.52 | 36.61 | 36.60 |
| HAdobe5k | SSIM | 0.9483 | 0.9362 | 0.9364 |
| HAdobe5k | fMSE | 2051.61 | 259.05 | 259.91 |
| HAdobe5k | fPSNR | 17.52 | 26.36 | 26.32 |
| HAdobe5k | fSSIM | 0.7295 | 0.8413 | 0.8407 |
| HFlickr | MSE | 264.35 | 94.91 | 94.25 |
| HFlickr | PSNR | 28.43 | 32.10 | 32.06 |
| HFlickr | SSIM | 0.9620 | 0.9614 | 0.9615 |
| HFlickr | fMSE | 1574.37 | 638.36 | 635.73 |
| HFlickr | fPSNR | 18.09 | 21.97 | 21.92 |
| HFlickr | fSSIM | 0.8036 | 0.8444 | 0.8436 |
| Hday2night | MSE | 109.65 | 51.44 | 59.87 |
| Hday2night | PSNR | 34.36 | 37.06 | 36.42 |
| Hday2night | SSIM | 0.9607 | 0.9308 | 0.9318 |
| Hday2night | fMSE | 1409.98 | 740.59 | 856.95 |
| Hday2night | fPSNR | 19.14 | 22.40 | 21.73 |
| Hday2night | fSSIM | 0.6353 | 0.6585 | 0.6549 |
| HVIDIT | MSE | 53.12 | - | 25.51 |
| HVIDIT | PSNR | 38.72 | - | 41.43 |
| HVIDIT | SSIM | 0.9922 | - | 0.9919 |
| HVIDIT | fMSE | 1604.41 | - | 738.66 |
| HVIDIT | fPSNR | 19.01 | - | 21.86 |
| HVIDIT | fSSIM | 0.7614 | - | 0.7139 |
| ALL | MSE | 167.39 | 35.90 | 35.09 |
| ALL | PSNR | 32.07 | 36.81 | 36.99 |
| ALL | SSIM | 0.9724 | 0.9649 | 0.9662 |
| ALL | fMSE | 1386.12 | 369.64 | 388.30 |
| ALL | fPSNR | 18.97 | 24.53 | 24.39 |
| ALL | fSSIM | 0.7905 | 0.8571 | 0.8506 |

Real Composite Image Harmonization

More comparison results can be found at Google Drive or BaiduCloud (access code: lgs2).

Bibtex

If you use this code for your research, please cite our paper.

@InProceedings{Guo_2021_CVPR,
    author    = {Guo, Zonghui and Zheng, Haiyong and Jiang, Yufeng and Gu, Zhaorui and Zheng, Bing},
    title     = {Intrinsic Image Harmonization},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {16367-16376}
}

Acknowledgement

Some of the data modules and model functions in this code are adapted from the repositories of DoveNet and CycleGAN, which we gratefully acknowledge.
