Dual Autoencoder Network for Retinex-Based Low-Light Image Enhancement
Citations
138 citations
Cites methods from "Dual Autoencoder Network for Retine..."
...proposed a dual autoencoder network model based on Retinex theory [255]; in this model, a stacked autoencoder was combined with a convolutional autoencoder to realize low-light enhancement and noise reduction....
[...]
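For context, the Retinex theory invoked in the excerpt above models an image as the pixel-wise product of reflectance and illumination, I = R · L. A minimal single-scale sketch in numpy follows (the Gaussian-blur illumination estimate, kernel size, and image size are illustrative assumptions; the cited model learns the decomposition with autoencoders instead of a fixed filter):

```python
import numpy as np

def gaussian_kernel(size=15, sigma=3.0):
    """Simple 2-D Gaussian kernel used to smooth the illumination estimate."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def single_scale_retinex(img, sigma=3.0, eps=1e-6):
    """Retinex: I = R * L, so log R = log I - log L.
    Here L is approximated by a Gaussian-blurred copy of I
    (an assumption for illustration; the paper learns this split)."""
    k = gaussian_kernel(sigma=sigma)
    pad = k.shape[0] // 2
    padded = np.pad(img, pad, mode="edge")
    L = np.zeros_like(img)
    # naive 2-D convolution to estimate the illumination map
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            L[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return np.log(img + eps) - np.log(L + eps)

img = np.random.default_rng(0).uniform(0.05, 1.0, size=(32, 32))
R = single_scale_retinex(img)  # log-reflectance estimate
```

The log-domain subtraction is what makes the decomposition additive, which is also why learned Retinex models often operate on log images.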
87 citations
Additional excerpts
...In the field of image processing, there are plenty of auto-encoders proposed for image denoising [14], [15], enhancement [16], [17]...
[...]
81 citations
Cites background from "Dual Autoencoder Network for Retine..."
...proposed a dual autoencoder network model based on Retinex theory [30], which combined a stacked autoencoder and a convolutional autoencoder to perform low-light enhancement and noise reduction with satisfactory results....
[...]
69 citations
Cites methods from "Dual Autoencoder Network for Retine..."
...age them to improve the generalization of Retinex algorithms. For example, L. Shen et al. [30] propose MSR-Net by combining MSR [17] with a feedforward convolutional neural network. S. Park et al. [25] construct a dual autoencoder network based on Retinex theory to learn the regularities of illumination and noise, respectively. A deep Retinex-Net is proposed in [35] to learn the key constraints...
[...]
50 citations
References
111,197 citations
"Dual Autoencoder Network for Retine..." refers methods in this paper
...Adam, which is proposed by Kingma and Ba [24], is used for optimization with a learning rate of 0....
[...]
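Adam, cited above for optimization, maintains exponential moving averages of the gradient and of its elementwise square, applies bias correction, and scales the step by the corrected ratio. A minimal numpy sketch on a toy quadratic (the learning rate and step count here are illustrative; the excerpt truncates the paper's actual rate):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba [24]): update the two moment
    estimates, bias-correct them, then take the scaled step."""
    m = b1 * m + (1 - b1) * grad          # first-moment EMA
    v = b2 * v + (1 - b2) * grad**2       # second-moment EMA
    m_hat = m / (1 - b1**t)               # bias correction
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(theta) = ||theta - target||^2
target = np.array([3.0, -2.0])
theta = np.zeros(2)
m = np.zeros(2)
v = np.zeros(2)
for t in range(1, 501):
    grad = 2 * (theta - target)
    theta, m, v = adam_step(theta, grad, m, v, t)
```

Because the step is normalized by the second moment, early iterations move at roughly the learning rate per step regardless of gradient scale, which is why the bias correction matters when the EMAs start at zero.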
40,609 citations
6,816 citations
"Dual Autoencoder Network for Retine..." refers methods in this paper
...[17] performed noise removal using a stacked denoising autoencoder trained on pairs of original and noise-corrupted vectors....
[...]
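The training scheme in this excerpt, reconstructing clean vectors from their noise-corrupted counterparts, can be sketched with a single linear-layer autoencoder in numpy (a simplification: the cited work stacks several layers, and the data dimensions, noise level, and learning rate below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: clean vectors lying in a 4-D subspace of R^16,
# plus their noise-corrupted counterparts (the training pairs)
latent = rng.uniform(0, 1, size=(256, 4))
mix = rng.normal(0, 0.5, size=(4, 16))
clean = latent @ mix
noisy = clean + rng.normal(0, 0.1, size=clean.shape)

# single-hidden-layer (linear) autoencoder: noisy input, clean target
W1 = rng.normal(0, 0.1, size=(16, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, size=(8, 16)); b2 = np.zeros(16)

def forward(x):
    h = x @ W1 + b1        # encoder
    return h, h @ W2 + b2  # decoder

lr = 0.05
for _ in range(4000):
    h, out = forward(noisy)
    err = out - clean                  # reconstruct the CLEAN vectors
    # full-batch gradient descent on the mean-squared error
    gW2 = h.T @ err / len(noisy); gb2 = err.mean(0)
    dh = err @ W2.T
    gW1 = noisy.T @ dh / len(noisy); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((forward(noisy)[1] - clean) ** 2))
noise_mse = float(np.mean((noisy - clean) ** 2))
```

The key point the excerpt makes is the pairing: the loss compares the reconstruction against the clean vector, not the noisy input, so the bottleneck is forced to discard the noise component.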
6,122 citations
3,480 citations
"Dual Autoencoder Network for Retine..." refers background in this paper
...first proposed the Retinex theory to describe how the human visual system (HVS) perceives colors from the retina to the visual cortex [1], [2]....
[...]