Author

Fan-Chieh Cheng

Bio: Fan-Chieh Cheng is an academic researcher from National Taipei University of Technology. The author has contributed to research in topics: Adaptive histogram equalization & Histogram. The author has an h-index of 16 and has co-authored 42 publications receiving 1,257 citations. Previous affiliations of Fan-Chieh Cheng include National Taiwan University of Science and Technology & National Taipei University.

Papers
Journal ArticleDOI
TL;DR: An automatic transformation technique that improves the brightness of dimmed images via gamma correction and the probability distribution of luminance pixels, and uses temporal information about the differences between frames to reduce computational complexity.
Abstract: This paper proposes an efficient method to modify histograms and enhance contrast in digital images. Enhancement plays a significant role in digital image processing, computer vision, and pattern recognition. We present an automatic transformation technique that improves the brightness of dimmed images via gamma correction and the probability distribution of luminance pixels. To enhance video, the proposed image-enhancement method uses temporal information about the differences between frames to reduce computational complexity. Experimental results demonstrate that the proposed method produces enhanced images of comparable or higher quality than those produced using previous state-of-the-art methods.

795 citations
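The core idea — a per-level gamma driven by the cumulative distribution of luminance — can be sketched as follows. This is a minimal illustration of CDF-weighted gamma correction, not the paper's exact weighting distribution, which is not reproduced on this page:

```python
import numpy as np

def cdf_gamma_enhance(luma: np.ndarray) -> np.ndarray:
    """Brighten an 8-bit luminance image with a per-level gamma derived
    from the cumulative distribution of pixel intensities.  Frequent dark
    levels accumulate probability mass quickly, so their exponent 1 - CDF
    shrinks and they are lifted more.  Simplified sketch only."""
    hist = np.bincount(luma.ravel(), minlength=256).astype(np.float64)
    pdf = hist / hist.sum()
    cdf = np.cumsum(pdf)
    levels = np.arange(256) / 255.0
    # Per-level transform: l' = 255 * (l / 255) ** (1 - cdf(l)).
    lut = np.round(255.0 * levels ** (1.0 - cdf)).astype(np.uint8)
    return lut[luma]
```

Because the exponent 1 − CDF(l) never exceeds 1, the mapping never darkens a pixel, which matches the stated goal of brightening dimmed inputs.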

Journal ArticleDOI
TL;DR: The proposed median filter restores images corrupted by salt-and-pepper impulse noise at densities from 1% to 99% by simply and intuitively detecting impulse-noise pixels, while leaving noise-free pixels intact.

96 citations
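A decision-based median filter of this kind can be sketched as below. The detector here is the simplest possible one (pixels at the extremes 0 or 255 are treated as impulses); the paper's actual recognition rule is not shown on this page, so treat this as an illustrative assumption:

```python
import numpy as np

def despeckle(img: np.ndarray, win: int = 3) -> np.ndarray:
    """Decision-based median filtering for salt-and-pepper noise:
    pixels at the intensity extremes (0 or 255) are flagged as impulses
    and replaced by the median of their non-impulse neighbours; all
    other pixels pass through unchanged.  Simplified sketch only."""
    out = img.copy()
    noisy = (img == 0) | (img == 255)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    pnoisy = np.pad(noisy, pad, mode="edge")
    for r, c in zip(*np.nonzero(noisy)):
        patch = padded[r:r + win, c:c + win]
        good = patch[~pnoisy[r:r + win, c:c + win]]
        # Fall back to the whole window if every neighbour is noisy.
        out[r, c] = np.median(good) if good.size else np.median(patch)
    return out
```

Restricting the median to non-impulse neighbours is what lets such filters survive very high noise densities, since corrupted neighbours never vote.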

Journal ArticleDOI
TL;DR: An illumination-sensitive background modeling approach analyzes illumination change and detects moving objects; experiments demonstrate a promising detection outcome at low computational cost.
Abstract: Background subtraction involves generating a background model from the video sequence to detect foreground objects for many computer vision applications, including traffic security, human-machine interaction, object recognition, and so on. In general, many background subtraction approaches cannot update the current status of the background image in scenes with sudden illumination change. This is especially true for motion detection when a light is suddenly switched on or off. This paper proposes an illumination-sensitive background modeling approach to analyze the illumination change and detect moving objects. For sudden illumination change, an illumination evaluation is used to determine two background candidates: a light background image and a dark background image. Based on the background model and illumination evaluation, the binary mask of moving objects is generated by the proposed thresholding function. Experimental results demonstrate the effectiveness of the proposed approach in providing a promising detection outcome at low computational cost.

89 citations
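The two-candidate scheme can be sketched as follows. The illumination evaluation here is a deliberately crude stand-in (comparing frame mean brightness to each candidate's mean), and the fixed threshold is an assumption; the paper's actual evaluation and thresholding function are not given on this page:

```python
import numpy as np

def detect_moving(frame: np.ndarray, bg_light: np.ndarray,
                  bg_dark: np.ndarray, thresh: int = 30) -> np.ndarray:
    """Pick whichever background candidate (light or dark) is closer to
    the frame's mean brightness, then threshold the absolute difference
    to produce a binary moving-object mask.  Simplified sketch only."""
    m = frame.mean()
    closer_to_light = abs(m - bg_light.mean()) <= abs(m - bg_dark.mean())
    bg = bg_light if closer_to_light else bg_dark
    diff = np.abs(frame.astype(np.int16) - bg.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

Selecting the background candidate per frame is what keeps the mask stable across a sudden light switch: the subtraction is always against a model captured under comparable illumination.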

Proceedings ArticleDOI
21 Nov 2011
TL;DR: Experimental results show that the proposed histogram modification method produces enhanced images of comparable or higher quality than previous state-of-the-art methods.
Abstract: This paper proposes an efficient histogram modification method for contrast enhancement, which plays a significant role in digital image processing, computer vision, and pattern recognition. We present an automatic transformation technique to improve the brightness of dimmed images based on gamma correction and the probability distribution of luminance pixels. Experimental results show that the proposed method produces enhanced images of comparable or higher quality than previous state-of-the-art methods.

63 citations

Journal ArticleDOI
TL;DR: Fast and accurate histogram modification allows the proposed method to transform intensity well for both images and video, providing a promising enhancement outcome at low computational cost.
Abstract: Contrast enhancement transforms pixel intensities from their original values and has a significant impact on many display devices, including laptops, PDAs, monitors, mobile camera phones, and so on. This paper proposes a new method to enhance the contrast of input images and video based on the Bézier curve. To enhance quality and reduce processing time, control points of the mapping curve are calculated automatically, with the Bézier curve fitted to the dark and bright regions separately. Fast and accurate histogram modification allows the proposed method to transform intensity well for both images and video. Experimental results demonstrate the effectiveness of the proposed method in providing a promising enhancement outcome at low computational cost.

54 citations
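Curve-based remapping of this kind can be sketched by sampling a cubic Bézier curve into a lookup table. This is a generic illustration with caller-supplied interior control points; the paper's automatic control-point selection from the histogram is not reproduced here:

```python
import numpy as np

def bezier_lut(p1, p2, samples: int = 1024) -> np.ndarray:
    """Build a 256-entry tone-mapping LUT from a cubic Bezier curve
    anchored at (0, 0) and (1, 1), with interior control points p1 and
    p2 (each an (x, y) pair in [0, 1], e.g. chosen from histogram
    statistics).  Generic sketch of curve-based remapping."""
    t = np.linspace(0.0, 1.0, samples)
    pts = np.array([(0.0, 0.0), p1, p2, (1.0, 1.0)])
    # Bernstein-polynomial evaluation of the cubic curve.
    b = ((1 - t) ** 3)[:, None] * pts[0] \
        + (3 * (1 - t) ** 2 * t)[:, None] * pts[1] \
        + (3 * (1 - t) * t ** 2)[:, None] * pts[2] \
        + (t ** 3)[:, None] * pts[3]
    x, y = b[:, 0], b[:, 1]
    # x(t) is monotone for interior control points with 0 <= x1 <= x2 <= 1.
    lut = np.interp(np.arange(256) / 255.0, x, y)
    return np.clip(np.round(lut * 255), 0, 255).astype(np.uint8)
```

Placing both interior control points above the diagonal, as in the test below, yields a brightening curve; points below it would darken instead. Applying the result is a single `lut[image]` indexing operation per frame, which is what makes curve-based methods cheap for video.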


Cited by
Journal ArticleDOI
Abstract: Computer and Robot Vision, Vol. 1, by R.M. Haralick and Linda G. Shapiro, Addison-Wesley, 1992, ISBN 0-201-10887-1.

1,426 citations

Journal ArticleDOI
TL;DR: This paper proposes to use the convolutional neural network (CNN) to train a SICE enhancer, and builds a large-scale multi-exposure image data set, which contains 589 elaborately selected high-resolution multi-Exposure sequences with 4,413 images.
Abstract: Due to poor lighting conditions and the limited dynamic range of digital imaging devices, recorded images are often under-/over-exposed and have low contrast. Most previous single image contrast enhancement (SICE) methods adjust the tone curve to correct the contrast of an input image. Those methods, however, often fail to reveal image details because of the limited information in a single image. On the other hand, the SICE task can be better accomplished if we can learn extra information from appropriately collected training data. In this paper, we propose to use a convolutional neural network (CNN) to train a SICE enhancer. One key issue is how to construct a training data set of low-contrast and high-contrast image pairs for end-to-end CNN learning. To this end, we build a large-scale multi-exposure image data set, which contains 589 elaborately selected high-resolution multi-exposure sequences with 4,413 images. Thirteen representative multi-exposure image fusion and stack-based high dynamic range imaging algorithms are employed to generate contrast-enhanced images for each sequence, and subjective experiments are conducted to select the best-quality one as the reference image of each scene. With the constructed data set, a CNN can be easily trained as the SICE enhancer to improve the contrast of an under-/over-exposed image. Experimental results demonstrate the advantages of our method over existing SICE methods by a significant margin.

632 citations

Journal ArticleDOI
TL;DR: A novel reduced-reference image quality metric for contrast change (RIQMC) is presented using phase congruency and statistical information from the image histogram; results justify the superiority and efficiency of RIQMC over a majority of classical and state-of-the-art IQA methods.
Abstract: Proper contrast change can improve the perceptual quality of most images, but it has largely been overlooked in current research on image quality assessment (IQA). To fill this void, in this paper we first report a new large dedicated contrast-changed image database (CCID2014), which includes 655 images and associated subjective ratings recorded from 22 inexperienced observers. We then present a novel reduced-reference image quality metric for contrast change (RIQMC) using phase congruency and statistical information from the image histogram. Validation of the proposed model is conducted on the contrast-related CCID2014, TID2008, CSIQ and TID2013 databases, and results justify the superiority and efficiency of RIQMC over a majority of classical and state-of-the-art IQA methods. Furthermore, we combine the aforesaid subjective and objective assessments to derive the RIQMC-based Optimal HIstogram Mapping (ROHIM) for automatic contrast enhancement, which is shown to outperform recently developed enhancement technologies.

335 citations

Journal ArticleDOI
TL;DR: A new no-reference (NR) IQA model is developed and a robust image enhancement framework is established based on quality optimization, which enhances natural, low-contrast, low-light, and dehazed images well.
Abstract: In this paper, we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted a wide range of attention in the computational intelligence and image processing communities, since, for many practical applications, e.g., object detection and recognition, raw images usually need to be appropriately enhanced to raise the visual quality (e.g., visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, even beyond the originally captured images, which are generally thought to be of the best quality. In this paper, we present two main contributions. The first contribution is a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness and more, and then yields a measure of visual quality using a regression module, which is learned with big-data training samples much larger than the relevant image data sets. The results of experiments on nine data sets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-reference, reduced-reference and NR IQA methods. The second contribution is a robust image enhancement framework established on quality optimization. For an input image, guided by the proposed NR-IQA measure, we conduct histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework can enhance natural images, low-contrast images, low-light images, and dehazed images well. The source code will be released at https://sites.google.com/site/guke198701/publications .

297 citations