Book Chapter (DOI)

Satellite Image Contrast Enhancement Using Fuzzy Termite Colony Optimization

01 Jan 2018, pp. 115-144



References
Proceedings Article


Sergey Ioffe, Christian Szegedy
06 Jul 2015
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.

23,723 citations
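The per-mini-batch normalization described in the abstract above is easy to sketch. The following is an illustrative numpy forward pass only (training-time statistics; the running averages used at inference and the backward pass are omitted), not the authors' implementation:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift.

    x: (batch_size, num_features) activations.
    gamma, beta: learned per-feature scale and shift parameters.
    """
    mu = x.mean(axis=0)                     # per-feature mini-batch mean
    var = x.var(axis=0)                     # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance
    return gamma * x_hat + beta             # learned scale/shift

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 4))   # shifted, scaled inputs
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```

The learned `gamma` and `beta` matter: they let the layer undo the normalization entirely if the identity transform happens to be optimal, so normalization does not restrict what the network can represent.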

Book Chapter (DOI)


06 Sep 2014
TL;DR: This work proposes a deep learning method for single image super-resolution (SR) that directly learns an end-to-end mapping between the low/high-resolution images and shows that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network.
Abstract: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.

3,331 citations
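The three-stage mapping this abstract describes (patch extraction, non-linear mapping, reconstruction) can be sketched as three stacked convolutions. The numpy sketch below uses the 9-1-5 kernel sizes reported for SRCNN but random, untrained filters and an arbitrary 8-channel width, so it only illustrates the structure, not the learned mapping:

```python
import numpy as np

def conv2d(img, kernels, bias):
    """'Same' convolution: (H, W, C_in) -> (H, W, C_out)."""
    kh, kw, c_in, c_out = kernels.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw), (0, 0)))
    H, W = img.shape[:2]
    out = np.empty((H, W, c_out))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + kh, j:j + kw, :]
            out[i, j] = np.tensordot(patch, kernels, axes=3) + bias
    return out

def srcnn_forward(low_res, params):
    h = np.maximum(conv2d(low_res, *params[0]), 0)  # patch extraction (ReLU)
    h = np.maximum(conv2d(h, *params[1]), 0)        # non-linear mapping (ReLU)
    return conv2d(h, *params[2])                    # reconstruction (linear)

rng = np.random.default_rng(1)
params = [
    (rng.normal(scale=0.1, size=(9, 9, 1, 8)), np.zeros(8)),
    (rng.normal(scale=0.1, size=(1, 1, 8, 8)), np.zeros(8)),
    (rng.normal(scale=0.1, size=(5, 5, 8, 1)), np.zeros(1)),
]
y = srcnn_forward(rng.random((16, 16, 1)), params)  # (16, 16, 1) output
```

In the paper's sparse-coding reading, layer 1 plays the role of dictionary lookup on patches, layer 2 maps low-resolution codes to high-resolution codes, and layer 3 averages overlapping reconstructions; jointly training all three is what distinguishes the method from pipelines that tune each stage separately.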

Journal Article (DOI)


TL;DR: A new unsharp-masking method for image contrast enhancement is presented; an adaptive filter controls the contribution of the sharpening path so that contrast enhancement occurs in high-detail areas and little or no sharpening occurs in smooth areas.
Abstract: This paper presents a new method for unsharp masking for contrast enhancement of images. The approach employs an adaptive filter that controls the contribution of the sharpening path in such a way that contrast enhancement occurs in high detail areas and little or no image sharpening occurs in smooth areas.

684 citations
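The key idea above, gating the sharpening path by local detail, can be illustrated with a gain driven by local variance. The gain formula below is a hypothetical stand-in chosen for simplicity, not the paper's adaptive filter:

```python
import numpy as np

def box_filter(img, size=3):
    """Local mean over a size x size window, with edge padding."""
    p = size // 2
    padded = np.pad(img, p, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for di in range(size):
        for dj in range(size):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (size * size)

def adaptive_unsharp(img, lam=1.0, t=10.0):
    """Sharpen only where local activity is high.

    The high-pass residual (img - local mean) is scaled by a gain that
    grows with local variance: ~0 in smooth regions, ~lam in detail.
    """
    mean = box_filter(img)
    resid = img - mean
    var = np.maximum(box_filter(img ** 2) - mean ** 2, 0.0)
    gain = lam * var / (var + t)      # illustrative gain law
    return img + gain * resid

flat = np.full((8, 8), 50.0)          # smooth area: passes unchanged
step = np.zeros((8, 8)); step[:, 4:] = 100.0   # edge: gets overshoot
```

Flat regions come through untouched because both the residual and the variance-driven gain vanish there, which is exactly the behaviour that keeps unsharp masking from amplifying noise in smooth areas.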

Journal Article (DOI)


TL;DR: A general framework based on histogram equalization for image contrast enhancement is presented, together with a low-complexity algorithm whose performance is demonstrated against a recently proposed method.
Abstract: A general framework based on histogram equalization for image contrast enhancement is presented. In this framework, contrast enhancement is posed as an optimization problem that minimizes a cost function. Histogram equalization is an effective technique for contrast enhancement. However, a conventional histogram equalization (HE) usually results in excessive contrast enhancement, which in turn gives the processed image an unnatural look and creates visual artifacts. By introducing specifically designed penalty terms, the level of contrast enhancement can be adjusted; noise robustness, white/black stretching and mean-brightness preservation may easily be incorporated into the optimization. Analytic solutions for some of the important criteria are presented. Finally, a low-complexity algorithm for contrast enhancement is presented, and its performance is demonstrated against a recently proposed method.

681 citations
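The adjustable trade-off this abstract describes can be illustrated by histogram specification toward a blend of the input histogram and the uniform one, with the blend weight playing the role of a penalty parameter: `lam = 0` leaves the image unchanged, large `lam` approaches full histogram equalization. This is a sketch of the general idea, not the paper's cost function:

```python
import numpy as np

def adjustable_equalize(img, lam=1.0, levels=256):
    """Histogram specification toward a blend of input and uniform histograms."""
    h_i = np.bincount(img.ravel(), minlength=levels).astype(float)
    h_i /= h_i.sum()
    u = np.full(levels, 1.0 / levels)      # uniform target histogram
    h = (h_i + lam * u) / (1.0 + lam)      # blended (penalized) target
    cdf_in, cdf_t = np.cumsum(h_i), np.cumsum(h)
    # map each grey level so the input CDF matches the target CDF
    mapping = np.searchsorted(cdf_t, cdf_in).clip(0, levels - 1)
    return mapping[img].astype(np.uint8)

rng = np.random.default_rng(2)
low_contrast = rng.integers(100, 121, size=(32, 32)).astype(np.uint8)
stretched = adjustable_equalize(low_contrast, lam=50.0)   # wider grey range
```

Intermediate `lam` values give partial equalization, which is how this family of methods avoids the unnatural look and artifacts of pushing every image all the way to a uniform histogram.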

Journal Article (DOI)


TL;DR: The authors consider decomposing the probability density function of the data set into a weighted sum of component fuzzy-set densities, achieved by optimizing a functional defined over all possible fuzzy classifications.
Abstract: In a previous paper [1] the use of the concept of fuzzy sets in clustering was proposed. The convenience of fuzzy clustering over conventional representation was then stressed. Assigning each point a degree of belongingness to each cluster provides a way of characterizing bridges, strays, and undetermined points. This is especially useful when considering scattered data. The classificatory process may be considered as the breakdown of the probability density function of the original set into the weighted sum of the component fuzzy set densities. Such decomposition should be performed so that the components really represent clusters. This is done by optimization of some functional defined over all possible fuzzy classifications of the data set. Several functionals were suggested in [1]. The bulk of this paper is concerned with numerical techniques useful in the solution of such problems. The first two formulas treated do not provide an acceptable fuzzy classification but yield good starting points for the minimization of a third functional. This last method obtains very good dichotomies and is characterized by slower convergence than the previous processes. Using that functional, a modification is suggested to obtain partitions in more than two sets. Numerous computational experiments are presented.

533 citations
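The degree-of-belongingness idea in this abstract was later standardized as the fuzzy c-means algorithm. The Bezdek-style sketch below (not one of the paper's original functionals) alternates membership and centroid updates:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Assign each point a degree of belongingness to every cluster.

    Alternates centroid updates (membership-weighted means) with
    membership updates (inverse-distance weighting); memberships lie
    in [0, 1] and sum to 1 across clusters for each point.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)       # random initial fuzzy partition
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = d ** (-p) / np.sum(d ** (-p), axis=1, keepdims=True)
    return u, centers

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0.0, 0.1, (20, 2)),    # blob near the origin
                    rng.normal(5.0, 0.1, (20, 2))])   # blob near (5, 5)
u, centers = fuzzy_c_means(X)
labels = u.argmax(axis=1)                             # hard assignment
```

Points near a cluster core get memberships close to 1 for that cluster, while bridges and strays between clusters receive intermediate memberships — the characterization of ambiguous points the abstract highlights.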