Cycle-Interactive Generative Adversarial Network for Robust Unsupervised Low-Light Enhancement

Zhangkai Ni1, Wenhan Yang2, Hanli Wang1, Shiqi Wang4, Lin Ma3, and Sam Kwong4

1Department of Computer Science and Technology, Tongji University, Shanghai, China

2School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore

3Meituan, Beijing, China

4Department of Computer Science, City University of Hong Kong, Hong Kong

Abstract

Free of the fundamental limitation of fitting to paired training data, recent unsupervised low-light enhancement methods excel at adjusting the illumination and contrast of images. However, without supervision of the detailed signal, these methods struggle to suppress noise, which largely impedes their deployment in real-world applications. Herein, we propose a novel Cycle-Interactive Generative Adversarial Network (CIGAN) for unsupervised low-light image enhancement, which is capable of not only better transferring illumination distributions between low/normal-light images but also manipulating detailed signals between the two domains, e.g., suppressing/synthesizing realistic noise in the cyclic enhancement/degradation process. In particular, the proposed low-light guided transformation (LGT) feed-forwards the features of real low-light images from the generator of the enhancement GAN (eGAN) into the generator of the degradation GAN (dGAN). With this information learned from real low-light images, dGAN can synthesize more realistic and diverse illumination and contrast in low-light images. Moreover, the feature randomized perturbation (FRP) module in dGAN learns to increase feature randomness and produce diverse feature distributions, encouraging the synthesized low-light images to contain realistic noise. Extensive experiments demonstrate both the superiority of the proposed method and the effectiveness of each module in CIGAN.

Cycle-Interactive Generative Adversarial Network for Image Enhancement


Fig. 1. The proposed unsupervised CIGAN consists of a complementary dGAN and eGAN. (i) The dGAN learns to synthesize realistic low-light images under the supervision of unpaired normal/low-light images. (ii) The eGAN restores normal-light images from the synthesized low-light images under the paired supervision generated by dGAN. Our CIGAN differs from previous CycleGAN-style frameworks in two respects: 1) Red dotted line: the generator of eGAN feed-forwards low-light feature information to that of dGAN via the low-light guided transformation (LGT), helping dGAN generate more diverse and realistic illumination and contrast; 2) Blue dotted line: random noise is injected into dGAN via the feature randomized perturbation (FRP) module, which learns to produce more diverse feature distributions for synthesizing more realistic noise. The better dGAN synthesizes low-light images, the better eGAN learns to enhance them towards improved illumination and contrast and suppressed noise.

As shown in Fig. 1, our proposed CIGAN aims to improve the perceptual quality of low-light images by simultaneously adjusting illumination, enhancing contrast, and suppressing noise under the supervision of unpaired data. It consists of a complementary degradation GAN (dGAN) and an enhancement GAN (eGAN).
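To make the cyclic interaction concrete, below is a minimal, runnable PyTorch sketch of one generator-side training step. The names (G_d, G_e, D_low) and the toy one-layer networks are illustrative stand-ins of ours, not the authors' implementation; the actual CIGAN generators are deep networks equipped with the LGT and FRP modules described next.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy one-layer stand-ins so the step below actually runs; the real CIGAN
# generators are deep encoder-decoders equipped with LGT and FRP.
G_d = nn.Conv2d(6, 3, 3, padding=1)    # dGAN: (normal image, low-light reference) -> low-light
G_e = nn.Conv2d(3, 3, 3, padding=1)    # eGAN: low-light -> normal-light
D_low = nn.Conv2d(3, 1, 3, padding=1)  # discriminator on the low-light domain

real_normal = torch.rand(1, 3, 64, 64)  # unpaired normal-light image
real_low = torch.rand(1, 3, 64, 64)     # unpaired real low-light reference

# dGAN: synthesize a pseudo low-light image from the normal-light input,
# guided by the real low-light reference.
fake_low = G_d(torch.cat([real_normal, real_low], dim=1))

# eGAN: restore the normal-light image from the synthesized low-light one.
# Crucially, (fake_low, real_normal) forms a *paired* supervision signal for eGAN.
rec_normal = G_e(fake_low)

# Generator-side losses only; discriminator updates are omitted for brevity.
logits = D_low(fake_low)
loss_adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
loss_rec = F.l1_loss(rec_normal, real_normal)
(loss_adv + 10.0 * loss_rec).backward()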

  • dGAN: It aims to synthesize a realistic low-light image from the input normal-light image with the help of a reference low-light image. We design two modules to synthesize more realistic low-light images, with low-light illumination and contrast as well as intensive noise; both are sketched in code after this list. As denoted by the red dotted line in Fig. 1, LGT allows the degradation and enhancement generators to interact, which helps generate low-light images with more diverse and realistic illumination and contrast. As denoted by the blue dotted line in Fig. 1, FRP learns to inject random noise into features, making the feature distributions more diverse and the synthesized image noise more realistic.
  • eGAN: Conversely, it focuses on learning to recover the latent normal-light image from the synthesized low-light image.
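A minimal sketch of the two modules follows, assuming a spatially-varying affine modulation for LGT and channel-wise scaled Gaussian noise injection for FRP; the specific layer choices are our assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn

class LowLightGuidedTransformation(nn.Module):
    # Modulates dGAN features with parameters predicted from real low-light
    # features fed forward from the eGAN generator (red dotted line in Fig. 1).
    def __init__(self, channels):
        super().__init__()
        self.to_gamma = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_beta = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat_dgan, feat_low):
        # Spatially-varying affine modulation conditioned on the reference.
        return feat_dgan * (1.0 + self.to_gamma(feat_low)) + self.to_beta(feat_low)

class FeatureRandomizedPerturbation(nn.Module):
    # Injects channel-wise scaled Gaussian noise into intermediate dGAN
    # features (blue dotted line in Fig. 1) to diversify their distributions.
    def __init__(self, channels):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, feat):
        return feat + self.scale * torch.randn_like(feat)

# Toy usage on 64-channel feature maps from the two generators.
lgt, frp = LowLightGuidedTransformation(64), FeatureRandomizedPerturbation(64)
out = frp(lgt(torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32)))

The modulation form mirrors common conditional feature-modulation schemes; only its role here, conditioning the degradation on real low-light features, is specific to CIGAN.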
Experimental Results

Quantitative Comparison

Table 1. Quantitative comparisons of different methods on real low-light test images from the LOL-V2 dataset.


Table 1 compares the proposed CIGAN with classical and state-of-the-art methods on the LOL-V2 dataset. The proposed method outperforms all compared methods, consistently achieving the highest scores in terms of PSNR, PSNR-GC, SSIM, and SSIM-GC. This reveals that the proposed CIGAN is much more effective in illumination enhancement, structure restoration, and noise suppression. From Table 1, we can see that the proposed method is significantly superior to other state-of-the-art unsupervised methods (i.e., CycleGAN, EnlightenGAN, and ZeroDCE). This is because, on the one hand, dGAN makes the attributes of the synthesized low-light images consistent with those of real ones, and on the other hand, eGAN is able to restore high-quality normal-light images from them. Another interesting observation is that the proposed CIGAN even achieves better performance than leading supervised methods (i.e., DRD, UPE, and SICE) trained on a large number of paired images. It is also worth noting that the large PSNR gap between the scores with and without Gamma correction shows that our method can effectively remove intensive noise and restore vivid details.
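For readers reproducing the table: PSNR-GC and SSIM-GC denote the metrics computed after a Gamma correction that aligns global brightness before scoring, so that structure and noise errors rather than exposure dominate the score. A plausible sketch is below; the per-image gamma grid search is our assumption, and the paper defines the exact protocol.

import numpy as np

def psnr(a, b):
    # Standard PSNR for float images in [0, 1].
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(1.0 / max(mse, 1e-12))

def psnr_gc(pred, ref):
    # Assumed PSNR-GC protocol: search a grid of gamma values and keep the
    # one that best aligns the prediction's brightness with the reference.
    pred = np.clip(pred, 1e-6, 1.0)
    return max(psnr(pred ** g, ref) for g in np.linspace(0.2, 5.0, 97))

# Toy usage with random images.
x, y = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
print(psnr(x, y), psnr_gc(x, y))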

Qualitative Comparison

Fig. 2. Visual quality comparisons of state-of-the-art enhancement methods. Upper left: original results. Lower right: the corresponding results after Gamma correction for better comparison.


Fig. 3. Visual quality comparison of a close-up region for state-of-the-art enhancement methods.

Download

If you use the code in your research, we kindly ask that you reference this website and our paper listed below.

Zhangkai Ni, Wenhan Yang, Lin Ma, Shiqi Wang, and Sam Kwong, "Cycle-Interactive Generative Adversarial Network for Robust Unsupervised Low-Light Enhancement," In Proceedings of the 30th ACM International Conference on Multimedia (ACM Multimedia), October 2022. (accepted) Paper | Code (coming soon) | BibTeX

@inproceedings{ni2022cycle,
	title={Cycle-Interactive Generative Adversarial Network for Robust Unsupervised Low-Light Enhancement},
	author={Ni, Zhangkai and Yang, Wenhan and Wang, Hanli and Wang, Shiqi and Ma, Lin and Kwong, Sam},
	booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
	pages={},
	year={2022}
}

References

CIGAN
Zhangkai Ni, Wenhan Yang, Lin Ma, Shiqi Wang, and Sam Kwong, "Cycle-Interactive Generative Adversarial Network for Robust Unsupervised Low-Light Enhancement," In Proceedings of the 30th ACM International Conference on Multimedia (ACM Multimedia), October 2022. (accepted)
BIMEF
Zhenqiang Ying, Ge Li, and Wen Gao. 2017. A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv preprint arXiv:1711.00591 (2017).
DHECE
Keita Nakai, Yoshikatsu Hoshi, and Akira Taguchi. 2013. Color image contrast enhancement method based on differential intensity/saturation gray-levels histograms. In Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems. 445–449.
LIME
Xiaojie Guo, Yu Li, and Haibin Ling. 2016. LIME: Low-light image enhancement via illumination map estimation. IEEE Transactions on Image Processing 26, 2 (2016), 982–993.
MEF
Xueyang Fu, Delu Zeng, Yue Huang, Yinghao Liao, Xinghao Ding, and John Paisley. 2016. A fusion-based enhancing method for weakly illuminated images. Signal Processing 129 (2016), 82–96.
JED
Xutong Ren, Mading Li, Wen-Huang Cheng, and Jiaying Liu. 2018. Joint enhancement and denoising method via sequential decomposition. In Proceedings of the IEEE International Symposium on Circuits and Systems. 1–5.
RRM
Mading Li, Jiaying Liu, Wenhan Yang, Xiaoyan Sun, and Zongming Guo. 2018. Structure-revealing low-light image enhancement via robust retinex model. IEEE Transactions on Image Processing 27, 6 (2018), 2828–2841.
UPE
Ruixing Wang, Qing Zhang, Chi-Wing Fu, Xiaoyong Shen, Wei-Shi Zheng, and Jiaya Jia. 2019. Underexposed photo enhancement using deep illumination estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 6849–6857.
EnlightenGAN
Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, and Zhangyang Wang. 2021. EnlightenGAN: Deep Light Enhancement Without Paired Supervision. IEEE Transactions on Image Processing 30 (2021), 2340–2349.
ZeroDCE
Chunle Guo, Chongyi Li, Jichang Guo, Chen Change Loy, Junhui Hou, Sam Kwong, and Runmin Cong. 2020. Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1780–1789.
Contact

If you have any questions, please feel free to contact Dr. Zhangkai Ni.

Last update: July 2022
