Book Chapter · DOI

Satellite Image Contrast Enhancement Using Fuzzy Termite Colony Optimization

01 Jan 2018 · pp. 115-144
TL;DR: This work proposes a Termite Colony Optimization (TCO) algorithm based on the behavior of termites and uses it, together with fuzzy entropy, for satellite image contrast enhancement.
Abstract: Image enhancement is an essential subdomain of image processing that improves the visual information within an image. Researchers employ bio-inspired methodologies that imitate the behavior of natural species in optimization-based enhancement techniques. Particle Swarm Optimization imitates the behavior of swarms to discover the best possible solution in the search space. Ant Colony Optimization adopts the peculiar habit of ants of accumulating information about their environment by depositing pheromones. Termites exhibit both of these characteristics. In this work, the authors propose a Termite Colony Optimization (TCO) algorithm based on the behavior of termites, and then use it together with fuzzy entropy for satellite image contrast enhancement. The technique achieves better contrast enhancement by combining a type-2 fuzzy system with TCO. First, a type-2 fuzzy system derives two sub-images, termed lower and upper, from the input image in the fuzzy domain; an S-shaped membership function is used for fuzzification. An objective function, fuzzy entropy, is then optimized by TCO, and the resulting adaptive parameters are applied in the proposed enhancement technique. The performance of the method is evaluated on several test images and compared with a number of optimization-based enhancement methods using several statistical metrics. The execution time of TCO is also measured to assess its applicability in real time. Experimental results that surpass conventional optimization-based enhancement techniques demonstrate the effectiveness of the proposed methodology.
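The fuzzification step described above — an S-shaped membership function spread into lower and upper sub-images by a type-2 fuzzy system — can be sketched as follows. This is a minimal illustration assuming a common power-based construction of an interval type-2 set; the chapter's exact membership parameters and the TCO optimization loop are not reproduced here.

```python
import numpy as np

def s_membership(x, a, b):
    """Standard S-shaped membership function rising from 0 at a to 1 at b."""
    m = (a + b) / 2.0
    mu = np.zeros_like(x, dtype=float)
    left = (x > a) & (x <= m)
    right = (x > m) & (x < b)
    mu[left] = 2 * ((x[left] - a) / (b - a)) ** 2
    mu[right] = 1 - 2 * ((x[right] - b) / (b - a)) ** 2
    mu[x >= b] = 1.0
    return mu

def type2_sub_images(img, a=0.0, b=255.0, alpha=0.5):
    """Split an image into lower/upper fuzzy sub-images.

    The primary membership mu is spread into the interval
    [mu**(1/alpha), mu**alpha] -- one common way to build an
    interval type-2 set from a type-1 membership function.
    """
    mu = s_membership(img.astype(float), a, b)
    upper = mu ** alpha          # looser (larger) membership
    lower = mu ** (1.0 / alpha)  # stricter (smaller) membership
    return lower, upper
```

Since `mu` lies in [0, 1], raising it to a power below 1 enlarges it and a power above 1 shrinks it, so `upper >= lower` holds everywhere; the gap between the two sub-images is the footprint of uncertainty that the optimizer then exploits.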
References
Journal Article · DOI
TL;DR: A new appearance matching framework is introduced to determine the parameters of micro-appearance models; using a fiber-specific scattering model proves crucial to good results, as it achieves considerably higher accuracy than prior work.
Abstract: Micro-appearance models explicitly model the interaction of light with microgeometry at the fiber scale to produce realistic appearance. To effectively match them to real fabrics, we introduce a new appearance matching framework to determine their parameters. Given a micro-appearance model and photographs of the fabric under many different lighting conditions, we optimize for parameters that best match the photographs using a method based on calculating derivatives during rendering. This highly applicable framework, we believe, is a useful research tool because it simplifies development and testing of new models. Using the framework, we systematically compare several types of micro-appearance models. We acquired computed microtomography (micro CT) scans of several fabrics, photographed the fabrics under many viewing/illumination conditions, and matched several appearance models to this data. We compare a new fiber-based light scattering model to the previously used microflake model. We also compare representing cloth microgeometry using volumes derived directly from the micro CT data to using explicit fibers reconstructed from the volumes. From our comparisons, we draw the following conclusions: (1) given a fiber-based scattering model, volume- and fiber-based microgeometry representations are capable of very similar quality, and (2) using a fiber-specific scattering model is crucial to good results, as it achieves considerably higher accuracy than prior work.
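The core idea of matching parameters "using a method based on calculating derivatives during rendering" can be illustrated with a toy sketch. The paper differentiates a full fiber-scattering volume renderer; the one-parameter exponential `render` model, the learning rate, and the step count below are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def render(p, path_len, light):
    """Toy stand-in for a renderer: attenuation along a path of
    length path_len, scaled by the incident light."""
    return np.exp(-p * path_len) * light

def match_parameter(photos, path_len, light, p0=1.0, lr=0.05, steps=500):
    """Gradient descent on the squared difference between rendered
    predictions and photographs, using the analytic derivative of
    the renderer with respect to its scattering parameter."""
    p = p0
    for _ in range(steps):
        pred = render(p, path_len, light)
        resid = pred - photos
        dpred_dp = -path_len * pred      # d(render)/dp, computed in closed form
        grad = 2.0 * np.mean(resid * dpred_dp)
        p -= lr * grad
    return p
```

The same loop structure applies when `render` is a path tracer that accumulates derivatives alongside radiance; only the gradient computation becomes far more involved.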

530 citations

Journal Article · DOI
TL;DR: The presented algorithms exploit the fact that the relationship between stimulus and perception is logarithmic, combining enhancement quality with computational efficiency; a quantitative contrast measure helps choose the best parameters and transform for each enhancement.
Abstract: Many applications of histograms for the purposes of image processing are well known. However, applying this process to the transform domain by way of a transform coefficient histogram has not yet been fully explored. This paper proposes three methods of image enhancement: a) logarithmic transform histogram matching, b) logarithmic transform histogram shifting, and c) logarithmic transform histogram shaping using Gaussian distributions. They are based on the properties of the logarithmic transform domain histogram and histogram equalization. The presented algorithms use the fact that the relationship between stimulus and perception is logarithmic and afford a marriage between enhancement quality and computational efficiency. A human visual system-based quantitative measurement of image contrast improvement is also defined. This helps choose the best parameters and transform for each enhancement. A number of experimental results are presented to illustrate the performance of the proposed algorithms.
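The general idea of histogram processing in a logarithmic domain can be sketched as follows — a simplified log-domain equalization, assuming the Weber-Fechner observation that perceived brightness grows roughly with log intensity. It is not the paper's matching/shifting/shaping trio, only the common substrate they share.

```python
import numpy as np

def log_domain_equalize(img):
    """Histogram-equalize an 8-bit image in the logarithmic domain:
    map intensities to log coefficients, spread them via the
    cumulative histogram, then map back through exp."""
    log_img = np.log1p(img.astype(float))          # to log domain
    hist, bins = np.histogram(log_img, bins=256)
    cdf = hist.cumsum() / hist.sum()               # normalized CDF in [0, 1]
    centers = (bins[:-1] + bins[1:]) / 2.0
    # equalize: push each log value through the CDF, rescale to the
    # full log range, and return to the intensity domain
    eq_log = np.interp(log_img, centers, cdf) * np.log1p(255.0)
    out = np.expm1(eq_log)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because the CDF is monotone, the mapping preserves intensity ordering; the log/exp pair redistributes contrast according to perceived rather than raw intensity differences.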

527 citations

Journal Article · DOI
11 Nov 2016
TL;DR: A new data-driven approach for demosaicking and denoising is introduced: a deep neural network is trained on a large corpus of images instead of using hand-tuned filters, and this network and training procedure outperform the state of the art on both noisy and noise-free data.
Abstract: Demosaicking and denoising are the key first stages of the digital imaging pipeline, but they also pose a severely ill-posed problem: inferring three color values per pixel from a single noisy measurement. Earlier methods rely on hand-crafted filters or priors and still exhibit disturbing visual artifacts in hard cases such as moiré or thin edges. We introduce a new data-driven approach for these challenges: we train a deep neural network on a large corpus of images instead of using hand-tuned filters. While deep learning has shown great success, its naive application using existing training datasets does not give satisfactory results for our problem because these datasets lack hard cases. To create a better training set, we present metrics to identify difficult patches and techniques for mining community photographs for such patches. Our experiments show that this network and training procedure outperform the state of the art on both noisy and noise-free data. Furthermore, our algorithm is an order of magnitude faster than the previous best performing techniques.

457 citations

Journal Article · DOI
Bahriye Akay
01 Jun 2013
TL;DR: Experiments based on Kapur's entropy indicate that the ABC algorithm can be used efficiently in multilevel thresholding; CPU-time results show that the algorithms are scalable, with running times that grow at roughly a linear rate as the problem size increases.
Abstract: Segmentation is a critical task in image processing. Bi-level segmentation involves dividing the whole image into partitions based on a threshold value, whereas multilevel segmentation involves multiple threshold values. A successful segmentation assigns proper threshold values to optimise a criterion such as entropy or between-class variance. The high computational cost and inefficiency of an exhaustive search for the optimal thresholds lead to the use of global search heuristics to set them. An emerging area in global heuristics is swarm intelligence, which models the collective behaviour of organisms. In this paper, two successful swarm-intelligence-based global optimisation algorithms, particle swarm optimisation (PSO) and artificial bee colony (ABC), have been employed to find the optimal multilevel thresholds. Kapur's entropy, one of the maximum entropy techniques, and between-class variance have been investigated as fitness functions. Experiments have been performed on test images using various numbers of thresholds. The results were assessed using statistical tools and suggest that Otsu's technique, PSO and ABC show equal performance when the number of thresholds is two, while the ABC algorithm performs better than PSO and Otsu's technique when the number of thresholds is greater than two. Experiments based on Kapur's entropy indicate that the ABC algorithm can be used efficiently in multilevel thresholding. Moreover, segmentation methods are required to have a minimum running time in addition to high performance. Therefore, the CPU times of ABC and PSO have been investigated to check their validity in real time. The CPU-time results show that the algorithms are scalable and that their running times seem to grow at a linear rate as the problem size increases.
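The fitness function being optimised above, Kapur's entropy, can be sketched directly from its definition: the sum of the Shannon entropies of the classes that a set of thresholds induces on the histogram. For the bi-level case an exhaustive search is still tractable, which is the loop that PSO or ABC replaces when several thresholds make it too costly. The bimodal test histogram here is illustrative, not from the paper.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Sum of the entropies of the histogram classes cut out by the
    given (sorted) threshold bin indices."""
    p = hist / hist.sum()
    edges = [0] + list(thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()               # class probability mass
        if w <= 0:
            continue                     # empty class contributes nothing
        q = p[lo:hi] / w                 # within-class distribution
        q = q[q > 0]
        total += -(q * np.log(q)).sum()
    return total

def best_bilevel_threshold(hist):
    """Exhaustive search for the single threshold maximising Kapur's
    entropy; swarm optimisers replace this loop for multilevel cases."""
    return max(range(1, len(hist)), key=lambda t: kapur_entropy(hist, [t]))
```

For k thresholds over 256 bins the exhaustive search costs O(256^k) entropy evaluations, which is why the paper turns to PSO and ABC once k exceeds two or three.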

391 citations

Posted Content
TL;DR: An overview of underlying concepts, along with algorithms commonly used for image enhancement, is provided, with particular reference to point processing methods and histogram processing.
Abstract: The principal objective of image enhancement is to process an image so that the result is more suitable than the original image for a specific application. Digital image enhancement techniques provide a multitude of choices for improving the visual quality of images. The appropriate choice of technique is greatly influenced by the imaging modality, the task at hand, and the viewing conditions. This paper provides an overview of underlying concepts, along with algorithms commonly used for image enhancement. It focuses on spatial domain techniques, with particular reference to point processing methods and histogram processing.
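The point processing methods this survey refers to are per-pixel intensity mappings. Two minimal examples — a linear percentile stretch and a gamma curve — sketch the idea; the percentile bounds and gamma value are illustrative defaults, not prescriptions from the paper.

```python
import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Linear point operation: stretch the central percentile range
    of intensities onto the full [0, 255] display range."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(float) - lo) * 255.0 / max(hi - lo, 1e-6)
    return np.clip(out, 0, 255).astype(np.uint8)

def gamma_correct(img, gamma=0.5):
    """Nonlinear point operation s = 255 * (r/255)**gamma;
    gamma < 1 brightens dark regions, gamma > 1 darkens them."""
    return (255.0 * (img / 255.0) ** gamma).astype(np.uint8)
```

Both are pure functions of each pixel's own value, which is exactly what distinguishes point processing from the neighbourhood (filtering) and histogram-based techniques the paper also surveys.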

363 citations