Showing papers on "Grayscale published in 2015"


Journal ArticleDOI
TL;DR: This paper comprehensively encodes 10 chromatic models into 16 carefully selected state-of-the-art visual trackers and performs detailed analysis of several issues, including the behavior of various combinations of color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect tracking performance.
Abstract: While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.

684 citations


Journal ArticleDOI
TL;DR: In this article, a new underwater color image quality evaluation (UCIQE) metric is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images.
Abstract: Quality evaluation of underwater images is a key goal of underwater video image retrieval and intelligent processing. To date, no metric has been proposed for underwater color image quality evaluation (UCIQE). The special absorption and scattering characteristics of the water medium prevent the direct application of natural color image quality metrics, especially across different underwater environments. In this paper, subjective testing for underwater image quality has been organized. The statistical distribution of underwater image pixels in the CIELab color space, related to subjective evaluation, indicates that sharpness and colorfulness correlate well with subjective image quality perception. Based on these findings, a new UCIQE metric, a linear combination of chroma, saturation, and contrast, is proposed to quantify the non-uniform color cast, blurring, and low contrast that characterize underwater engineering and monitoring images. Experiments are conducted to illustrate the performance of the proposed UCIQE metric and its capability to measure underwater image enhancement results. They show that the proposed metric has performance comparable to the leading natural color image quality metrics and the underwater grayscale image quality metrics available in the literature, and can predict with higher accuracy the relative amount of degradation for similar image content in underwater environments. Importantly, UCIQE is a simple and fast solution for real-time underwater video processing. The effectiveness of the presented measure is also demonstrated by subjective evaluation. The results show good correlation between UCIQE and the subjective mean opinion score.

638 citations
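
To make the linear form above concrete, here is a minimal Python sketch that scores an RGB image from the spread of CIELab chroma, a percentile-based luminance contrast, and the mean saturation. The exact statistics and the coefficient values c1, c2, c3 are assumptions for illustration (the paper fits its coefficients to subjective scores), so treat them as placeholders rather than the published metric.

```python
# Hedged sketch of a UCIQE-style score: linear combination of chroma
# spread, luminance contrast, and mean saturation in CIELab space.
import numpy as np
from skimage import color

def uciqe(rgb, c1=0.4680, c2=0.2745, c3=0.2576):  # placeholder weights
    lab = color.rgb2lab(rgb)                  # L* in [0, 100], a*, b*
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.hypot(a, b)
    sigma_c = chroma.std()                    # spread of colorfulness
    lo, hi = np.percentile(L, [1, 99])
    con_l = (hi - lo) / 100.0                 # normalized luminance contrast
    sat = chroma / np.maximum(L, 1e-6)        # per-pixel saturation proxy
    mu_s = sat.mean()
    return c1 * sigma_c + c2 * con_l + c3 * mu_s

# score = uciqe(img)  # img: H x W x 3 RGB array, uint8 or float in [0, 1]
```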


Proceedings ArticleDOI
07 Dec 2015
TL;DR: Inspired by the recent success of deep learning techniques in modeling large-scale data, this paper re-formulates the colorization problem so that deep learning techniques can be directly employed, and proposes a joint bilateral filtering based post-processing step to ensure artifact-free quality.
Abstract: This paper investigates the colorization problem, which converts a grayscale image to a colorful version. This is a very difficult problem and normally requires manual adjustment to achieve artifact-free quality. For instance, it normally requires human-labelled color scribbles on the grayscale target image or a careful selection of colorful reference images (e.g., capturing the same scene as the grayscale target image). Unlike previous methods, this paper aims at a high-quality fully-automatic colorization method. Under the assumption of a perfect patch matching technique, using an extremely large-scale reference database (containing sufficient color images) is the most reliable solution to the colorization problem. However, in practice, patch matching noise increases with the size of the reference database. Inspired by the recent success of deep learning techniques, which provide powerful modeling of large-scale data, this paper re-formulates the colorization problem so that deep learning techniques can be directly employed. To ensure artifact-free quality, a joint bilateral filtering based post-processing step is proposed. Numerous experiments demonstrate that our method outperforms the state-of-the-art algorithms in terms of both quality and speed.

439 citations


Journal ArticleDOI
TL;DR: In this paper, a multi-level thresholding method for unsupervised separation between objects and background from a natural color image using the concept of the minimum cross entropy (MCE) is proposed.

134 citations


Journal ArticleDOI
TL;DR: This work proposes and demonstrates a VLC link using a mobile-phone camera with a data rate higher than the frame rate of the CMOS image sensor, and describes the procedure of synchronization and demodulation, which includes file format conversion, grayscale conversion, column matrix selection avoiding blooming, and polynomial fitting for threshold location.
Abstract: Complementary Metal-Oxide-Semiconductor (CMOS) image sensors are widely used in mobile phones and cameras. Hence, it is attractive if these image sensors can be used as visible light communication (VLC) receivers (Rxs). However, using these CMOS image sensors is challenging. In this work, we propose and demonstrate a VLC link using a mobile-phone camera with a data rate higher than the frame rate of the CMOS image sensor. We first discuss and analyze the features of using a CMOS image sensor as a VLC Rx, including the rolling shutter effect, the overlapping of the exposure times of each row of pixels, the frame-to-frame processing time gap, and the image sensor "blooming" effect. Then, we describe the procedure of synchronization and demodulation. This includes file format conversion, grayscale conversion, column matrix selection avoiding blooming, and polynomial fitting for threshold location. Finally, the bit-error rate (BER) is evaluated and shown to satisfy the forward error correction (FEC) limit.

116 citations
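
The demodulation procedure described above lends itself to a compact sketch. The following Python fragment is a hypothetical, simplified version of the pipeline (grayscale conversion, blooming-avoiding column selection, polynomial threshold fitting); synchronization, packet framing, and FEC decoding from the paper are omitted, and the saturation level and polynomial degree are illustrative parameters.

```python
# Simplified rolling-shutter demodulation sketch for one camera frame.
import numpy as np

def demodulate_frame(frame_rgb, poly_deg=4, sat_level=250):
    gray = frame_rgb.astype(float).mean(axis=2)      # grayscale conversion
    # Column selection avoiding blooming: skip columns with saturated
    # pixels, then take the one with the strongest stripe contrast.
    ok = gray.max(axis=0) < sat_level
    col = int(np.argmax(np.where(ok, gray.var(axis=0), -1.0)))
    profile = gray[:, col]                           # row-wise intensity
    rows = np.arange(profile.size)
    # A low-order polynomial fit tracks the slowly varying background and
    # serves as the per-row decision threshold for the ON/OFF stripes.
    thresh = np.polyval(np.polyfit(rows, profile, poly_deg), rows)
    return (profile > thresh).astype(int)            # raw stripe bits

# bits = demodulate_frame(frame)  # sync, framing, and FEC are omitted here
```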


Journal ArticleDOI
TL;DR: This paper proposes a new image-based process monitoring approach that is capable of handling both grayscale and color images and employs low-rank tensor decomposition techniques to extract important monitoring features, which are then monitored using multivariate control charts.
Abstract: Image and video sensors are increasingly being deployed in complex systems due to the rich process information that these sensors can capture. As a result, image data play an important role in process monitoring and control in different application domains such as manufacturing processes, food industries, medical decision-making, and structural health monitoring. Existing process monitoring techniques fail to fully utilize the information of color images due to their complex data characteristics, including their high dimensionality and correlation structure (i.e., temporal, spatial, and spectral correlation). This paper proposes a new image-based process monitoring approach that is capable of handling both grayscale and color images. The proposed approach models the high-dimensional structure of the image data with tensors and employs low-rank tensor decomposition techniques to extract important monitoring features, which are then monitored using multivariate control charts. In addition, this paper shows the analytical relationships between different low-rank tensor decomposition methods. The performance of the proposed method in quick detection of process changes is evaluated and compared with existing methods through extensive simulations and a case study in a steel tube manufacturing process.

116 citations
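
As a generic illustration of the monitoring idea, the sketch below treats each image as a tensor, summarizes its low-rank structure by the leading singular values of a mode-1 unfolding, and charts a Hotelling T^2 statistic on those features. This is a simplified stand-in under stated assumptions, not a reproduction of the paper's tensor decompositions or its analytical comparisons.

```python
# Simplified image-tensor monitoring: low-rank features + T^2 chart.
import numpy as np

def unfold(tensor, mode=0):
    # Mode-n unfolding: move axis `mode` first, flatten the rest.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def features(image, rank=5):
    # Leading singular values of the unfolded image summarize its
    # low-rank structure; deviations suggest a process change.
    s = np.linalg.svd(unfold(image), compute_uv=False)
    return s[:rank]

def hotelling_t2(f, mu, cov_inv):
    d = f - mu
    return float(d @ cov_inv @ d)

# Phase I: estimate mu and cov_inv from in-control reference images, then
# chart hotelling_t2(features(new_image), mu, cov_inv) against a control
# limit to flag changes in new images.
```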


Journal ArticleDOI
TL;DR: This paper presents a robust text detection approach based on color-enhanced contrasting extremal region (CER) and neural networks that achieves superior performance on both ICDAR-2011 and ICDAR-2013 "Reading Text in Scene Images" test sets.

110 citations


Journal ArticleDOI
TL;DR: An adaptively regularized kernel-based fuzzy C-means clustering framework for segmentation of brain magnetic resonance images that is superior in preserving image details and segmentation accuracy while maintaining a low computational complexity.
Abstract: An adaptively regularized kernel-based fuzzy C-means clustering framework is proposed for segmentation of brain magnetic resonance images. The framework yields three algorithms, in which the local average grayscale is replaced by the grayscale of the average-filtered, median-filtered, and devised weighted images, respectively. The algorithms employ the heterogeneity of grayscales in the neighborhood, exploit this measure for local contextual information, and replace the standard Euclidean distance with Gaussian radial basis kernel functions. The main advantages are adaptiveness to local context, enhanced robustness that preserves image details, independence of clustering parameters, and decreased computational costs. The algorithms have been validated against both synthetic and clinical magnetic resonance images with different types and levels of noise and compared with six recent soft clustering algorithms. Experimental results show that the proposed algorithms are superior in preserving image details and segmentation accuracy while maintaining a low computational complexity.

103 citations
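
The kernel substitution at the heart of the framework is easy to sketch. Below is a minimal Gaussian-kernel fuzzy C-means loop on grayscale intensities; the adaptive local regularization and the three filtered-image variants from the paper are deliberately omitted, and the kernel width, fuzzifier, and iteration count are illustrative choices.

```python
# Minimal kernelized fuzzy C-means: Euclidean distance is replaced by a
# Gaussian RBF kernel distance d^2 = 1 - K(x, v), up to scaling.
import numpy as np

def kernel_fcm(image, n_clusters=3, m=2.0, sigma=50.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    x = image.astype(float).ravel()
    v = rng.choice(x, n_clusters)                     # initial centers
    for _ in range(iters):
        K = np.exp(-((x[None, :] - v[:, None]) ** 2) / sigma**2)
        d = np.maximum(1.0 - K, 1e-12)                # kernel distance
        w = d ** (-1.0 / (m - 1.0))
        u = w / w.sum(axis=0)                         # fuzzy memberships
        num = (u**m * K) @ x                          # kernel-weighted mean
        den = (u**m * K).sum(axis=1)
        v = num / np.maximum(den, 1e-12)              # center update
    return u, v

# u, v = kernel_fcm(mri_slice)   # hard labels: u.argmax(axis=0)
```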


Journal ArticleDOI
TL;DR: A low-intricacy technique for contrast enhancement is proposed, and its performance is exhibited against various versions of histogram-based enhancement techniques using three advanced image quality assessment metrics: Universal Image Quality Index (UIQI), Structural Similarity Index (SSIM), and Feature Similarity Index (FSIM).
Abstract: Image contrast is an essential visual feature that determines whether an image is of good quality. In computed tomography (CT), captured images tend to be low contrast, which is a prevalent artifact that reduces the image quality and hampers the process of extracting useful information. A common tactic for processing such an artifact is to use histogram-based techniques. However, although these techniques may improve the contrast for different grayscale imaging applications, the results are mostly unacceptable for CT images due to various faults: noise amplification, excess brightness, and imperfect contrast. Therefore, an ameliorated version of contrast-limited adaptive histogram equalization (CLAHE) is introduced in this article to provide good brightness with decent contrast for CT images. The novel modification to the aforesaid technique adds an initial phase of a normalized gamma correction function that helps adjust the gamma of the processed image, avoiding the excess brightness and imperfect contrast that the basic CLAHE produces. The newly developed technique is tested with synthetic and real degraded low-contrast CT images, for which it contributed significantly to producing better-quality results. Moreover, a low-intricacy technique for contrast enhancement is proposed, and its performance is also exhibited against various versions of histogram-based enhancement techniques using three advanced image quality assessment metrics: Universal Image Quality Index (UIQI), Structural Similarity Index (SSIM), and Feature Similarity Index (FSIM). Finally, the proposed technique provided acceptable results with no visible artifacts and outperformed all the comparable techniques.

86 citations
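
A rough OpenCV sketch of the two-phase pipeline described above follows: a normalized gamma-correction step and then CLAHE. The gamma value and CLAHE parameters are illustrative assumptions, not the article's tuned settings.

```python
# Gamma correction followed by CLAHE, as a hedged illustration of the
# enhancement pipeline; parameter values are placeholders.
import cv2
import numpy as np

def gamma_clahe(ct_gray, gamma=0.8, clip=2.0, tiles=(8, 8)):
    # Normalized gamma correction: map to [0, 1], apply power law, rescale.
    norm = ct_gray.astype(np.float32) / 255.0
    corrected = np.uint8(255.0 * norm ** gamma)
    # Contrast-limited adaptive histogram equalization on the result.
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    return clahe.apply(corrected)

# enhanced = gamma_clahe(cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE))
```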


Journal ArticleDOI
TL;DR: This paper makes one of the first attempts to develop an objective quality model that automatically predicts the perceived quality of C2G converted images. Inspired by the philosophy of the structural similarity index, it evaluates the luminance, contrast, and structure similarities between the reference color image and the C2G converted image.
Abstract: Color-to-gray (C2G) image conversion is the process of transforming a color image into a grayscale one. Despite its wide usage in real-world applications, little work has been dedicated to compare the performance of C2G conversion algorithms. Subjective evaluation is reliable but is also inconvenient and time consuming. Here, we make one of the first attempts to develop an objective quality model that automatically predicts the perceived quality of C2G converted images. Inspired by the philosophy of the structural similarity index, we propose a C2G structural similarity (C2G-SSIM) index, which evaluates the luminance, contrast, and structure similarities between the reference color image and the C2G converted image. The three components are then combined depending on image type to yield an overall quality measure. Experimental results show that the proposed C2G-SSIM index has close agreement with subjective rankings and significantly outperforms existing objective quality metrics for C2G conversion. To explore the potentials of C2G-SSIM, we further demonstrate its use in two applications: 1) automatic parameter tuning for C2G conversion algorithms and 2) adaptive fusion of C2G converted images.

71 citations
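
The three similarity ingredients named above can be sketched directly. The fragment below computes SSIM-style luminance, contrast, and structure terms between the reference luminance and the converted grayscale, globally for brevity (SSIM-style indices normally operate on local windows); the image-type-dependent combination from the paper is not reproduced, and the stabilizing constant C is an assumption.

```python
# Global SSIM-style luminance/contrast/structure terms between a color
# reference and its grayscale conversion; a simplified C2G-SSIM sketch.
import numpy as np
from skimage import color

def c2g_terms(rgb, gray, C=1e-4):
    L = color.rgb2lab(rgb)[..., 0] / 100.0    # reference luminance in [0, 1]
    g = gray.astype(float)
    if g.max() > 1.0:
        g = g / 255.0                          # normalize 8-bit grayscale
    mu_l, mu_g = L.mean(), g.mean()
    s_l, s_g = L.std(), g.std()
    cov = ((L - mu_l) * (g - mu_g)).mean()
    lum = (2 * mu_l * mu_g + C) / (mu_l**2 + mu_g**2 + C)
    con = (2 * s_l * s_g + C) / (s_l**2 + s_g**2 + C)
    struct = (cov + C) / (s_l * s_g + C)
    return lum, con, struct   # combined per image type in the paper
```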


Journal ArticleDOI
TL;DR: A local adaptive thresholding technique based on gray-level co-occurrence matrix (GLCM) energy information for retinal vessel segmentation is presented; it is time efficient, with higher average sensitivity and accuracy rates in the same range of very good specificity.
Abstract: Although retinal vessel segmentation has been extensively researched, a robust and time-efficient segmentation method is highly needed. This paper presents a local adaptive thresholding technique based on gray-level co-occurrence matrix (GLCM) energy information for retinal vessel segmentation. Different thresholds were computed using GLCM energy information. An experimental evaluation on the DRIVE database using the grayscale intensity and the green channel of the retinal image demonstrates the high performance of the proposed local adaptive thresholding technique. Maximum average accuracy rates of 0.9511 and 0.9510, with maximum average sensitivity rates of 0.7650 and 0.7641, were achieved on the DRIVE and STARE databases, respectively. Compared to the techniques previously used on these databases, the proposed adaptive thresholding technique is time efficient, with higher average sensitivity and accuracy rates in the same range of very good specificity.
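
A hedged sketch of GLCM-energy-driven local thresholding is given below using scikit-image (graycomatrix/graycoprops, spelled greycomatrix in versions before 0.19). The rule that maps window energy to a threshold is an illustrative assumption, not the paper's formula; trailing partial windows are skipped for simplicity.

```python
# Local thresholding driven by per-window GLCM energy (illustrative rule).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_energy(window):
    glcm = graycomatrix(window, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "energy")[0, 0]

def local_threshold(gray, win=32):
    # gray: uint8 fundus image (e.g., the green channel).
    out = np.zeros(gray.shape, dtype=bool)
    for r in range(0, gray.shape[0] - win + 1, win):
        for c in range(0, gray.shape[1] - win + 1, win):
            w = gray[r:r + win, c:c + win]
            # Hypothetical rule: homogeneous windows (high energy) get a
            # stricter threshold than vessel-rich, textured ones.
            t = w.mean() * (1.0 - 0.5 * glcm_energy(w))
            out[r:r + win, c:c + win] = w < t      # vessels appear dark
    return out
```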

Journal ArticleDOI
TL;DR: A simple but effective image prior, the change of detail (CoD) prior, to remove haze from a single input image; it can be implemented very quickly and is stable for local image regions containing objects at different depths.

Journal ArticleDOI
TL;DR: The efficacy of the proposed scheme is verified by computing the mean-squared error (MSE) between the recovered and the original images; the scheme's sensitivity to the encryption parameters is investigated, and its robustness against occlusion and noise attacks is examined.
Abstract: We have carried out a study of optical image encryption in the Fresnel transform domain, using a random phase mask (RPM) in the input plane and a phase mask based on a devil's vortex toroidal lens (DVTL) in the frequency plane. The original images are recovered from their corresponding encrypted images by using the correct parameters of the Fresnel transform and the parameters of the DVTL. The use of a DVTL-based structured mask enhances security by increasing the key space for encryption and also aids in overcoming the problem of axis alignment associated with an optical setup. The proposed encryption scheme is a lensless optical system, and its digital implementation has been performed using MATLAB 7.6.0 (R2008a). The scheme has been validated for a grayscale and a binary image. The efficacy of the proposed scheme is verified by computing the mean-squared error (MSE) between the recovered and the original images. We have also investigated the scheme's sensitivity to the encryption parameters and examined its robustness against occlusion and noise attacks.

Journal ArticleDOI
TL;DR: An algorithm for invisible grayscale logo watermarking that operates via adaptive texturization of the logo is presented; testing demonstrates that the proposed algorithm yields better overall performance than competing methods.
Abstract: Grayscale logo watermarking is a quite well-developed area of digital image watermarking which seeks to embed into the host image another smaller logo image. The key advantage of such an approach is the ability to visually analyze the extracted logo for rapid visual authentication and other visual tasks. However, logos pose new challenges for invisible watermarking applications which need to keep the watermark imperceptible within the host image while simultaneously maintaining robustness to attacks. This paper presents an algorithm for invisible grayscale logo watermarking that operates via adaptive texturization of the logo. The central idea of our approach is to recast the watermarking task into a texture similarity task. We first separate the host image into sufficiently textured and poorly textured regions. Next, for textured regions, we transform the logo into a visually similar texture via the Arnold transform and one lossless rotation; whereas for poorly textured regions, we use only a lossless rotation. The iteration for the Arnold transform and the angle of lossless rotation are determined by a model of visual texture similarity. Finally, for each region, we embed the transformed logo into that region via a standard wavelet-based embedding scheme. We employ a multistep extraction stage, in which an affine parameter estimation is first performed to compensate for possible geometrical transformations. Testing with multiple logos on a database of host images and under a variety of attacks demonstrates that the proposed algorithm yields better overall performance than competing methods.
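
The Arnold transform step used for logo texturization is self-contained enough to show directly. This sketch permutes the pixel positions of a square N x N logo; because the map is periodic, iterating further recovers the original, which is what makes it convenient for watermark embedding and extraction. Per the abstract, the iteration count would come from the texture similarity model, not the fixed default used here.

```python
# Arnold transform (cat map): position scrambling of a square image.
import numpy as np

def arnold(img, iterations=1):
    n = img.shape[0]                   # assumes a square N x N logo
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                # (x, y) -> (x + y mod N, x + 2y mod N)
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

# The map is periodic: iterating enough additional times restores `img`,
# which is how the extracted logo is unscrambled.
```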

Journal ArticleDOI
01 May 2015-EPL
TL;DR: It is found that binary signals can be applied at the reference and object beam arms, which provides a promising alternative for exploring the potential of ghost imaging in some applications.
Abstract: In recent years, both quantum and classical ghost imaging have attracted much attention in many fields. Here, we report grayscale object authentication based on ghost imaging using binary signals. It is found that binary signals can be applied at the reference and object beam arms, which provides a promising alternative for exploring the potential of ghost imaging in some applications. Reducing the data size to only two quantization levels is beneficial to storage and transmission. One numerical example, grayscale object authentication, is given to illustrate the feasibility of the proposed method, and it is expected that further studies will explore the ghost imaging system using binary signals for other applications.

Journal ArticleDOI
TL;DR: A novel gradient correlation similarity (Gcs) measure-based decolorization model is proposed for faithfully preserving the appearance of the original color image, along with a discrete searching solver that determines the solution with the minimum function value from the candidate images induced by the linear parametric model.
Abstract: This paper presents a novel gradient correlation similarity (Gcs) measure-based decolorization model for faithfully preserving the appearance of the original color image. In contrast to the conventional data-fidelity term consisting of gradient error-norm-based measures, the newly defined Gcs measure calculates the summation of the gradient correlation between each channel of the color image and the transformed grayscale image. Two efficient algorithms are developed to solve the proposed model. On the one hand, due to the highly nonlinear nature of the Gcs measure, a solver consisting of the augmented Lagrangian and alternating direction method is adopted to deal with its approximated linear parametric model. The presented algorithm exhibits excellent iterative convergence and attains superior performance. On the other hand, a discrete searching solver is proposed that determines the solution with the minimum function value from the candidate images induced by the linear parametric model. The non-iterative solver has advantages in simplicity and speed, requiring only a few simple arithmetic operations and achieving real-time computation. In addition, it is very robust with respect to the parameter and candidates. Extensive experiments on a variety of test images and a comprehensive evaluation against existing state-of-the-art methods consistently demonstrate the potential of the proposed model and algorithms.
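
The Gcs measure itself reduces to a few lines. The sketch below sums, over the R, G, and B channels, the correlation between each channel's gradient field and that of a candidate grayscale image; a decolorization model would maximize this quantity over the parameters of the color-to-gray mapping. The fixed example weights in the usage comment are hypothetical.

```python
# Gradient correlation similarity between a color image and a candidate
# grayscale conversion; higher is better.
import numpy as np

def gcs(color_img, gray):
    gy, gx = np.gradient(gray.astype(float))
    g = np.concatenate([gx.ravel(), gy.ravel()])
    total = 0.0
    for ch in range(3):
        cy, cx = np.gradient(color_img[..., ch].astype(float))
        c = np.concatenate([cx.ravel(), cy.ravel()])
        total += np.corrcoef(c, g)[0, 1]       # per-channel correlation
    return total

# Example: score one fixed linear mapping (weights are hypothetical):
# gray = color_img @ np.array([0.3, 0.6, 0.1]); print(gcs(color_img, gray))
```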

Journal ArticleDOI
TL;DR: Two separate methods for robust and invisible image watermarking are proposed in RGB color space; Singular Value Decomposition (SVD) is employed on the blue channel of the host image to retrieve the singular values, and the watermark is embedded in these singular values.

Journal ArticleDOI
TL;DR: A cat chaotic mapping is introduced into the population initialization and iterative stages of the original GSA, forming a new algorithm called CCMGSA, which is employed to optimize BP neural networks and shows better performance in terms of convergence rate and avoidance of local minima.
Abstract: This paper proposes a novel image segmentation method based on a BP neural network, which is optimized by an enhanced Gravitational Search Algorithm (GSA). GSA is a novel heuristic optimization algorithm based on the law of gravity and mass interactions. It has been proven that the GSA has a good ability to search for the global optimum, but it suffers from premature convergence due to the rapid reduction of diversity. This work introduces a cat chaotic mapping into the population initialization and iterative stages of the original GSA, forming a new algorithm called CCMGSA. The CCMGSA is then employed to optimize BP neural networks, yielding a combined method called CCMGSA-BP, which we use for image segmentation. To verify the efficiency of this method, visual and performance experiments are conducted. The visual results using our proposed method are compared with those using other segmentation methods, including an improved k-means clustering algorithm (I-K-means), a hybrid region merging method (H-Region-merging), and manual segmentation. The comparison results show that the proposed method achieves good segmentation results on grayscale images with specific characteristics. We also compare the performance of our proposed method with those of IGSA-BP, CLPSO-BP, and RGA-BP for image segmentation. The results indicate that CCMGSA-BP shows better performance in terms of convergence rate and avoidance of local minima.

Journal ArticleDOI
TL;DR: A new approach to deriving an image feature descriptor from the dot-diffused block truncation coding (DDBTC) compressed data stream is presented; the proposed scheme can be considered an effective candidate for real-time image retrieval applications.
Abstract: This paper presents a new approach to deriving an image feature descriptor from the dot-diffused block truncation coding (DDBTC) compressed data stream. The image feature descriptor is simply constructed from the two DDBTC representative color quantizers and the corresponding bitmap image. The color histogram feature (CHF), derived from the two color quantizers, represents the color distribution and image contrast, while the bit pattern feature (BPF), constructed from the bitmap image, characterizes the image edges and textural information. The similarity between two images can be easily measured from their CHF and BPF values using a specific distance metric computation. Experimental results demonstrate the superiority of the proposed feature descriptor over former existing schemes in image retrieval tasks on natural and textural images. The DDBTC method compresses an image efficiently and, at the same time, its compressed data stream provides an effective feature descriptor for performing image retrieval and classification. Consequently, the proposed scheme can be considered an effective candidate for real-time image retrieval applications.
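
To illustrate the two feature types, the sketch below uses plain block truncation coding rather than the paper's dot-diffused variant: each block yields a low/high quantizer pair (feeding a color histogram feature) and a bitmap (feeding a bit pattern feature). The block size, bin counts, and the bitmap-hashing step are illustrative assumptions; the paper matches bitmaps against trained pattern sets instead.

```python
# BTC-based retrieval features: quantizer histogram (CHF) + bitmap
# pattern histogram (BPF), on a uint8 grayscale image for simplicity.
import numpy as np

def btc_features(gray, block=4, bins=16):
    h, w = gray.shape
    quants, bitmaps = [], []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            b = gray[r:r + block, c:c + block].astype(float)
            bmp = b >= b.mean()                        # per-block bitmap
            lo = b[~bmp].mean() if (~bmp).any() else b.mean()
            hi = b[bmp].mean() if bmp.any() else b.mean()
            quants.extend([lo, hi])
            bitmaps.append(bmp.ravel())
    chf, _ = np.histogram(quants, bins=bins, range=(0, 255), density=True)
    # Bit pattern feature: histogram of bitmap "codewords", hashed to a
    # small table here; the paper uses trained pattern sets instead.
    codes = [int("".join("1" if v else "0" for v in bm), 2) % 64
             for bm in bitmaps]
    bpf, _ = np.histogram(codes, bins=64, range=(0, 64), density=True)
    return np.concatenate([chf, bpf])
```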

Journal ArticleDOI
TL;DR: Experimental results show that the proposed hierarchical correlation histogram analysis, based on the grayscale distribution degree of pixel intensity, is superior to existing methods as measured by two quantitative image estimation methods, PSNR and average gradient values.
Abstract: Parkinson's disease is a progressive neurodegenerative disorder that has a higher probability of occurrence in middle-aged and older adults than in the young. With the use of a computer-aided diagnosis (CAD) system, abnormal cell regions can be identified, and this identification can help medical personnel to evaluate the chance of disease. This study proposes a hierarchical correlation histogram analysis based on the grayscale distribution degree of pixel intensity; by constructing a correlation histogram, it improves adaptive contrast enhancement for specific objects. The proposed method produces significant results during contrast enhancement preprocessing and facilitates subsequent CAD processes, thereby reducing recognition time and improving accuracy. The experimental results show that the proposed method is superior to existing methods as measured by two quantitative image estimation methods, PSNR and average gradient values. Furthermore, the edge information pertaining to specific cells can effectively increase the accuracy of the results.

Proceedings ArticleDOI
01 Dec 2015
TL;DR: An analog neuromorphic system is developed based on a fabricated resistive switching memory array, and a novel training scheme is proposed to optimize the performance of the analog system by utilizing the segmented synaptic behavior.
Abstract: An analog neuromorphic system is developed based on a fabricated resistive switching memory array. A novel training scheme is proposed to optimize the performance of the analog system by utilizing the segmented synaptic behavior. The scheme is demonstrated on a grayscale image recognition task. According to the experimental results, the optimized scheme improves learning accuracy from 77.83% to 91.32%, decreases energy consumption by more than two orders of magnitude, and substantially boosts learning efficiency compared to the traditional training scheme.

Journal ArticleDOI
TL;DR: A convex variational model is proposed which can effectively decompose the gradient field of images into salient edges and a relatively smoother illumination field through first- and second-order total variation regularizations.
Abstract: In this paper, we propose a reflectance and illumination decomposition model for the Retinex problem via high-order total variation and L^1 decomposition. Based on the observation that illumination varies more smoothly than reflectance, we propose a convex variational model which can effectively decompose the gradient field of images into salient edges and a relatively smoother illumination field through first- and second-order total variation regularizations. The proposed model can be efficiently solved by a primal-dual splitting method. Numerical experiments on both grayscale and color images show the strength of the proposed model, with applications to Retinex illusions, medical image bias field removal, and color image shadow correction.

Journal ArticleDOI
TL;DR: An automated psoriasis computer-aided diagnosis (pCAD) system for classifying psoriasis skin images into psoriatic lesions and healthy skin, which addresses two major challenges: fulfilling the color feature requirements and selecting the powerful dominant color features while retaining high classification accuracy.

Journal ArticleDOI
TL;DR: A flower pollination algorithm with a randomized location modification is used to find optimal threshold values maximizing Otsu's objective function on eight medical grayscale images, and proves robust and effective in numerical experiments.
Abstract: Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they exhaustively search for the optimal thresholds that optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values maximizing Otsu's objective function on eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves robust and effective, as shown by numerical experimental results including Otsu's objective values and standard deviations.
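
The objective being maximized is the classic multilevel Otsu between-class variance, sketched below from the grayscale histogram; a metaheuristic such as the paper's modified flower pollination algorithm would search over the threshold vector rather than enumerating it exhaustively.

```python
# Multilevel Otsu objective: between-class variance for a threshold set.
import numpy as np

def otsu_objective(hist, thresholds):
    p = hist / hist.sum()                      # normalized histogram
    levels = np.arange(p.size)
    mu_total = (p * levels).sum()
    bounds = [0, *sorted(thresholds), p.size]  # class boundaries
    sigma_b = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                     # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            sigma_b += w * (mu - mu_total) ** 2
    return sigma_b                             # maximize over thresholds

# hist, _ = np.histogram(image, bins=256, range=(0, 256))
# score = otsu_objective(hist, [70, 140, 200])
```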

Proceedings ArticleDOI
02 Apr 2015
TL;DR: A fast MRI brain image segmentation method based on the Artificial Bee Colony (ABC) algorithm and the Fuzzy C-Means (FCM) algorithm is proposed, which helps to identify the brain tumor.
Abstract: Tumor segmentation of MRI brain images is still a challenging problem. This paper proposes a fast MRI brain image segmentation method based on the Artificial Bee Colony (ABC) algorithm and the Fuzzy C-Means (FCM) algorithm. The optimal threshold value within the continuous grayscale interval is searched with the help of the ABC algorithm. In order to obtain an efficient fitness function for the ABC algorithm, the original image is decomposed by the discrete wavelet transform; then, by performing noise reduction on the approximation image, a filtered image reconstructed from low-frequency components is produced. The FCM algorithm is used for clustering the segmented image, which helps to identify the brain tumor.

Journal ArticleDOI
TL;DR: It is found that the proposed watermarking scheme is fast enough to carry out these operations in real time, and that the Fuzzy-BPN is a successful candidate for implementing a novel grayscale image watermarking scheme that meets real-time requirements.

Journal ArticleDOI
TL;DR: This paper introduces an automatic, non-intrusive method for precise eye center localization in low-resolution images acquired from single low-cost cameras. It uses color information to derive a novel eye map that emphasizes the iris area, together with a radial symmetry transform that operates on both the original eye images and the eye map.

Journal ArticleDOI
TL;DR: A simple, cryptic-free, least-significant-bits, spatial-domain-based steganographic technique that embeds information (a color or a grayscale image) into a color image is presented and evaluated in terms of peak signal-to-noise ratio and quality index.
Abstract: In recent years, chaotic systems have surfaced to become an important field in steganographic matters. In this paper, we present a simple, cryptic-free, least-significant-bits, spatial-domain-based steganographic technique that embeds information (a color or a grayscale image) into a color image. The proposed algorithm, called the cycling chaos-based steganographic algorithm, comprises two main parts: a cycling chaos function that generates the seeds for a pseudorandom number generator (PRNG), and the PRNG, which determines the channel and the pixel positions of the host image in which the sensitive data are stored. The proposed algorithm is compared with two powerful steganographic color image methods in terms of peak signal-to-noise ratio and quality index. The comparisons indicate that the proposed algorithm shows good hiding capacity and maintains stego-image quality. We also evaluate our algorithm against some existing steganographic attacks, including the RS attack, the Chi-square test, the byte attack, and the visual attack. The results demonstrate that the proposed algorithm can successfully withstand these attacks.
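
The embedding idea can be sketched compactly. In the fragment below, a seed (standing in for the cycling-chaos output, which is not reproduced) drives a PRNG that picks the channel and pixel positions whose least significant bits carry the secret bits; extraction re-seeds the same PRNG and reads the LSBs back in the same order. The function name and parameters here are hypothetical.

```python
# PRNG-positioned LSB embedding into an RGB cover image (uint8).
import numpy as np

def embed_lsb(cover_rgb, secret_bits, seed):
    stego = cover_rgb.copy()
    rng = np.random.default_rng(seed)       # seed would come from chaos
    h, w, ch = stego.shape
    # One flat "slot" per (row, col, channel); pick as many as needed.
    slots = rng.permutation(h * w * ch)[:len(secret_bits)]
    for slot, bit in zip(slots, secret_bits):
        r, c, k = np.unravel_index(slot, (h, w, ch))
        stego[r, c, k] = (stego[r, c, k] & 0xFE) | bit   # set the LSB
    return stego

# Extraction regenerates `slots` from the same seed and reads
# stego[r, c, k] & 1 in the same order.
```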

Journal ArticleDOI
TL;DR: A new color-to-gray conversion method that is based on a region-based saliency model that computes visual contrast among pixel regions is proposed and outperforms the state-of-the-art methods quantitatively and qualitatively.
Abstract: Image decolorization is a fundamental problem for many real-world applications, including monochrome printing and photograph rendering. In this paper, we propose a new color-to-gray conversion method that is based on a region-based saliency model. First, we construct a parametric color-to-gray mapping function based on global color information as well as local contrast. Second, we propose a region-based saliency model that computes visual contrast among pixel regions. Third, we minimize the salience difference between the original color image and the output grayscale image in order to preserve contrast discrimination. To evaluate the performance of the proposed method in preserving contrast in complex scenarios, we have constructed a new decolorization data set with 22 images, each of which contains abundant colors and patterns. Extensive experimental evaluations on the existing and the new data sets show that the proposed method outperforms the state-of-the-art methods quantitatively and qualitatively.

Journal ArticleDOI
01 Jul 2015-Talanta
TL;DR: This work proposes a simple, rapid, inexpensive, and non-destructive methodology based on digital images and pattern recognition techniques for classification of biodiesel according to oil type (cottonseed, sunflower, corn, or soybean).