
Showing papers on "Grayscale" published in 2006


Journal ArticleDOI
TL;DR: It is proved analytically and shown experimentally that the peak signal-to-noise ratio of the marked image generated by this method versus the original image is guaranteed to be above 48 dB, which is much higher than that of all reversible data hiding techniques reported in the literature.
Abstract: A novel reversible data hiding algorithm, which can recover the original image without any distortion from the marked image after the hidden data have been extracted, is presented in this paper. This algorithm utilizes the zero or the minimum points of the histogram of an image and slightly modifies the pixel grayscale values to embed data into the image. It can embed more data than many of the existing reversible data hiding algorithms. It is proved analytically and shown experimentally that the peak signal-to-noise ratio (PSNR) of the marked image generated by this method versus the original image is guaranteed to be above 48 dB. This lower bound of PSNR is much higher than that of all reversible data hiding techniques reported in the literature. The computational complexity of our proposed technique is low and the execution time is short. The algorithm has been successfully applied to a wide range of images, including commonly used images, medical images, texture images, aerial images and all of the 1096 images in the CorelDraw database. Experimental results and performance comparison with other reversible data hiding schemes are presented to demonstrate the validity of the proposed algorithm.
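The embedding step the abstract describes, shifting the part of the histogram between a peak bin and an empty (zero) bin by one gray level and then using the vacated level next to the peak to carry the payload bits, can be sketched roughly as follows. This is a minimal single-pair illustration under the assumption that an empty bin exists above the peak; it is not the authors' full algorithm, which also handles overflow bookkeeping and multiple peak/zero pairs.

```python
import numpy as np

def embed_histogram_shift(img, bits):
    """Toy histogram-shifting embedder for one peak/zero pair.
    Assumes an empty histogram bin exists somewhere above the peak bin."""
    img = img.astype(np.int32)
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())
    zero = peak + 1 + int(np.argmin(hist[peak + 1:]))   # lowest-count bin above the peak
    marked = img.copy()
    marked[(img > peak) & (img < zero)] += 1            # shift (peak, zero) right by one level
    flat, src = marked.ravel(), img.ravel()
    it = iter(bits)
    for i in np.flatnonzero(src == peak):               # peak pixels carry the payload
        try:
            flat[i] = peak + next(it)                   # bit 0 -> stay at peak, bit 1 -> peak + 1
        except StopIteration:
            break
    return flat.reshape(img.shape).astype(np.uint8), (peak, zero)
```

Extraction reads the bits back from the peak and peak+1 levels and reverses the shift, which is what makes the marking reversible and keeps every pixel change within one gray level (hence the roughly 48 dB PSNR floor).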

2,240 citations


Journal ArticleDOI
TL;DR: Based on the concepts of luminance-weighted chrominance blending and fast intrinsic distance computations, high-quality colorization results for still images and video are obtained at a fraction of the complexity and computational cost of previously reported techniques.
Abstract: Colorization, the task of coloring a grayscale image or video, involves assigning from the single dimension of intensity or luminance a quantity that varies in three dimensions, such as red, green, and blue channels. Mapping between intensity and color is, therefore, not unique, and colorization is ambiguous in nature and requires some amount of human interaction or external information. A computationally simple, yet effective, approach of colorization is presented in this paper. The method is fast and it can be conveniently used "on the fly," permitting the user to interactively get the desired results promptly after providing a reduced set of chrominance scribbles. Based on the concepts of luminance-weighted chrominance blending and fast intrinsic distance computations, high-quality colorization results for still images and video are obtained at a fraction of the complexity and computational cost of previously reported techniques. Possible extensions of the algorithm introduced here include the capability of changing the colors of an existing color image or video, as well as changing the underlying luminance, and many other special effects demonstrated here.

540 citations


Journal ArticleDOI
TL;DR: A new grayscale image quality measure that can be used as a graphical or a scalar measure to predict the distortion introduced by a wide range of noise sources based on singular value decomposition is presented.
Abstract: The important criteria used in subjective evaluation of distorted images include the amount of distortion, the type of distortion, and the distribution of error. An ideal image quality measure should, therefore, be able to mimic the human observer. We present a new grayscale image quality measure that can be used as a graphical or a scalar measure to predict the distortion introduced by a wide range of noise sources. Based on singular value decomposition, it reliably measures the distortion not only within a distortion type at different distortion levels, but also across different distortion types. The measure was applied to five test images (airplane, boat, Goldhill, Lena, and peppers) using six types of distortion (JPEG, JPEG 2000, Gaussian blur, Gaussian noise, sharpening, and DC-shifting), each with five distortion levels. Its performance is compared with PSNR and two recent measures.
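As a rough sketch of how a block-wise SVD measure of this kind can be computed: per-block distances between the singular values of the reference and distorted images give the graphical measure, and a pooled deviation gives the scalar. The block size and the pooling rule (deviation from the median) below are assumptions on our part and may differ from the paper's exact definition.

```python
import numpy as np

def svd_quality(ref, dist, k=8):
    """Block-wise SVD distortion: a graphical map of per-block singular-value
    distances plus a scalar pooled value. A sketch of the general idea only;
    the block size k=8 and the pooling used in the paper are assumptions."""
    ref = ref.astype(float)
    dist = dist.astype(float)
    h, w = ref.shape
    rows, cols = h // k, w // k
    dmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            a = ref[i*k:(i+1)*k, j*k:(j+1)*k]
            b = dist[i*k:(i+1)*k, j*k:(j+1)*k]
            sa = np.linalg.svd(a, compute_uv=False)
            sb = np.linalg.svd(b, compute_uv=False)
            dmap[i, j] = np.sqrt(np.sum((sa - sb) ** 2))   # graphical measure
    scalar = np.mean(np.abs(dmap - np.median(dmap)))       # scalar summary
    return dmap, scalar
```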

350 citations


Journal ArticleDOI
TL;DR: The basic procedure is to first group the histogram components of a low-contrast image into a proper number of bins according to a selected criterion, then redistribute these bins uniformly over the grayscale, and finally ungroup the previously grouped gray-levels.
Abstract: This is Part II of the paper, "Gray-Level Grouping (GLG): an Automatic Method for Optimized Image Contrast Enhancement". Part I of this paper introduced a new automatic contrast enhancement technique: gray-level grouping (GLG). GLG is a general and powerful technique, which can be conveniently applied to a broad variety of low-contrast images and outperforms conventional contrast enhancement techniques. However, the basic GLG method still has limitations and cannot enhance certain classes of low-contrast images well, e.g., images with a noisy background. The basic GLG also cannot fulfill certain special application purposes, e.g., enhancing only part of an image which corresponds to a certain segment of the image histogram. In order to break through these limitations, this paper introduces an extension of the basic GLG algorithm, selective gray-level grouping (SGLG), which groups the histogram components in different segments of the grayscale using different criteria and, hence, is able to enhance different parts of the histogram to various extents. This paper also introduces two new preprocessing methods to eliminate background noise in noisy low-contrast images so that such images can be properly enhanced by the (S)GLG technique. The extension of (S)GLG to color images is also discussed in this paper. SGLG and its variations extend the capability of the basic GLG to a larger variety of low-contrast images, and can fulfill special application requirements. SGLG and its variations not only produce results superior to conventional contrast enhancement techniques, but are also fully automatic under most circumstances, and are applicable to a broad variety of images.
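A very crude sketch of the basic grouping, redistributing, and ungrouping idea described above is given below, mainly to make the procedure concrete. The real (S)GLG chooses the number of groups automatically by optimizing a contrast criterion and ungroups far more carefully; the fixed n_groups and the naive equal splitting here are placeholders, not the authors' method.

```python
import numpy as np

def glg_crude(img, n_groups=20):
    """Crude gray-level grouping sketch: group adjacent occupied histogram
    components, spread the groups evenly over [0, 255], then map pixels
    through the resulting transfer function. n_groups is an arbitrary
    assumption; the real GLG determines it automatically."""
    hist = np.bincount(img.ravel(), minlength=256)
    levels = np.flatnonzero(hist)                  # occupied gray levels
    groups = np.array_split(levels, n_groups)      # naive grouping of adjacent levels
    lut = np.zeros(256)
    edges = np.linspace(0, 255, n_groups + 1)
    for g, (lo, hi) in zip(groups, zip(edges[:-1], edges[1:])):
        if len(g) == 0:
            continue
        # "ungroup": spread the levels of this group uniformly over its slot
        lut[g] = np.linspace(lo, hi, len(g))
    return lut[img].astype(np.uint8)
```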

303 citations


Proceedings ArticleDOI
02 Feb 2006
TL;DR: In this paper, an improved version of the blind steganalysis method proposed by Holotyak et al. is presented, where the features for the blind classifier are calculated in the wavelet domain as higher-order absolute moments of the noise residual.
Abstract: The contribution of this paper is two-fold. First, we describe an improved version of a blind steganalysis method previously proposed by Holotyak et al. and compare it to current state-of-the-art blind steganalyzers. The features for the blind classifier are calculated in the wavelet domain as higher-order absolute moments of the noise residual. This method clearly shows the benefit of calculating the features from the noise residual because it increases the features' sensitivity to embedding, which leads to improved detection results. Second, using this detection engine, we attempt to answer some fundamental questions, such as "how much can we improve the reliability of steganalysis given certain a priori side-information about the image source?" Moreover, we experimentally compare the security of three steganographic schemes for images stored in a raster format - (1) pseudo-random ±1 embedding using ternary matrix embedding, (2) spatially adaptive ternary ±1 embedding, and (3) perturbed quantization while converting a 16-bit per channel image to an 8-bit gray scale image.
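To make the feature construction concrete: the idea is to suppress image content with a denoiser, keep the noise residual (where embedding changes are concentrated), and summarize it with higher-order absolute moments. The sketch below uses a plain wavelet soft-threshold as the denoiser and first- through ninth-order moments; the actual denoising filter and moment orders used by Holotyak et al. and in this paper may differ.

```python
import numpy as np
import pywt

def residual_moment_features(img, orders=range(1, 10), thr=3.0):
    """Sketch of wavelet-domain noise-residual features: denoise the detail
    subbands by simple soft-thresholding, take the residual, and compute its
    higher-order absolute central moments. The threshold, wavelet, and moment
    orders here are assumptions."""
    _, details = pywt.dwt2(img.astype(float), 'db8')
    feats = []
    for band in details:                       # horizontal, vertical, diagonal subbands
        denoised = pywt.threshold(band, thr, mode='soft')
        r = band - denoised                    # estimated noise/stego residual
        r = r - r.mean()
        feats.extend(np.mean(np.abs(r) ** k) for k in orders)
    return np.array(feats)
```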

276 citations


Proceedings ArticleDOI
09 Jul 2006
TL;DR: A new reversible image authentication technique based on watermarking where if the image is authentic, the distortion due to embedding can be completely removed from the watermarked image after the hidden data has been extracted.
Abstract: In this paper, we propose a new reversible image authentication technique based on watermarking where, if the image is authentic, the distortion due to embedding can be completely removed from the watermarked image after the hidden data has been extracted. This technique utilizes histogram characteristics of the difference image and modifies pixel values slightly to embed more data than other lossless data hiding algorithms. We show that the lower bound of the PSNR (peak signal-to-noise ratio) of watermarked images is 51.14 dB. Moreover, the proposed scheme is quite simple and the execution time is rather short. Experimental results demonstrate that the proposed scheme can detect any modifications of the watermarked image.
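Bounds like the 48 dB above and the 51.14 dB here follow directly from how far each pixel is allowed to move. A hedged back-of-the-envelope version (the paper's own accounting of which pixels change may be stated differently): if embedding alters at most half of the pixels, each by at most one gray level, the mean squared error is at most 0.5, and

```latex
% MSE <= 0.5 (at most half the pixels change by one gray level):
\mathrm{PSNR} \;=\; 10\log_{10}\!\frac{255^{2}}{\mathrm{MSE}}
\;\ge\; 10\log_{10}\!\bigl(2\cdot 255^{2}\bigr) \;\approx\; 51.14\ \mathrm{dB}
% whereas MSE <= 1 (every pixel may change by one level) gives
% 10*log10(255^2) ~= 48.13 dB, the floor quoted for the first paper above.
```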

192 citations


Journal ArticleDOI
Paul L. Rosin
TL;DR: The sequential floating forward search method for feature selection was used to select good rule sets for a range of tasks, namely noise filtering, noise filtering using threshold decomposition, thinning, and convex hulls.
Abstract: Experiments were carried out to investigate the possibility of training cellular automata (CA) to perform several image processing tasks. Even if only binary images are considered, the space of all possible rule sets is still very large, and so the training process is the main bottleneck of such an approach. In this paper, the sequential floating forward search method for feature selection was used to select good rule sets for a range of tasks, namely noise filtering (also applied to grayscale images using threshold decomposition), thinning, and convex hulls. Various objective functions for driving the search were considered. Several modifications to the standard CA formulation were made (the B-rule and two-cycle CAs), which were found, in some cases, to improve performance.

192 citations


Journal ArticleDOI
TL;DR: New techniques for unsupervised segmentation of multimodal grayscale images such that each region-of-interest relates to a single dominant mode of the empirical marginal probability distribution of grey levels are proposed.
Abstract: We propose new techniques for unsupervised segmentation of multimodal grayscale images such that each region-of-interest relates to a single dominant mode of the empirical marginal probability distribution of grey levels. We follow the most conventional approaches in that initial images and desired maps of regions are described by a joint Markov-Gibbs random field (MGRF) model of independent image signals and interdependent region labels. However, our focus is on more accurate model identification. To better specify region borders, each empirical distribution of image signals is precisely approximated by a linear combination of Gaussians (LCG) with positive and negative components. We modify an expectation-maximization (EM) algorithm to deal with the LCGs and also propose a novel EM-based sequential technique to get a close initial LCG approximation with which the modified EM algorithm should start. The proposed technique identifies individual LCG models in a mixed empirical distribution, including the number of positive and negative Gaussians. Initial segmentation based on the LCG models is then iteratively refined by using the MGRF with analytically estimated potentials. The convergence of the overall segmentation algorithm at each stage is discussed. Experiments show that the developed techniques segment different types of complex multimodal medical images more accurately than other known algorithms.

179 citations


Journal ArticleDOI
TL;DR: It is demonstrated that pixel classification-based color image segmentation in color space is equivalent to performing segmentation on a grayscale image through thresholding, and a supervised learning-based two-step procedure for color cell image segmentation is developed.
Abstract: In this paper, we present two new algorithms for cell image segmentation. First, we demonstrate that pixel classification-based color image segmentation in color space is equivalent to performing segmentation on a grayscale image through thresholding. Based on this result, we develop a supervised learning-based two-step procedure for color cell image segmentation, where the color image is first mapped to grayscale via a transform learned through supervised learning, and thresholding is then performed on the grayscale image to segment objects out of the background. Experimental results show that the supervised learning-based two-step procedure achieved a boundary disagreement (mean absolute distance) of 0.85, while the disagreement produced by the pixel classification-based color image segmentation method is 3.59. Second, we develop a new marker detection algorithm for watershed-based separation of overlapping or touching cells. The merit of the new algorithm is that it employs both photometric and shape information and combines the two naturally in the framework of pattern classification to provide more reliable markers. Extensive experiments show that the new marker detection algorithm achieved 0.4% and 0.2% over-segmentation and under-segmentation, respectively, while the reconstruction-based method produced 4.4% and 1.1% over-segmentation and under-segmentation, respectively.
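The first (two-step) procedure is easy to picture in code: learn a color-to-gray transform from labeled pixels, then threshold the resulting grayscale image. In the sketch below the transform is a plain least-squares linear map and the threshold is Otsu's method; both are stand-ins chosen for brevity, not necessarily the learning procedure or threshold the authors use.

```python
import numpy as np
from skimage.filters import threshold_otsu

def learn_color_to_gray(pixels_rgb, labels):
    """Fit a linear RGB -> gray transform from labeled training pixels by
    least squares against 0/1 targets (a simple stand-in for the paper's
    supervised learning step)."""
    X = np.column_stack([pixels_rgb.astype(float), np.ones(len(pixels_rgb))])
    w, *_ = np.linalg.lstsq(X, labels.astype(float), rcond=None)
    return w

def segment(color_img, w):
    """Map the color image to grayscale with the learned transform, then
    threshold it (Otsu is used here as a generic choice)."""
    h, wd, _ = color_img.shape
    X = np.column_stack([color_img.reshape(-1, 3).astype(float), np.ones(h * wd)])
    gray = (X @ w).reshape(h, wd)
    return gray > threshold_otsu(gray)
```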

149 citations


Journal ArticleDOI
TL;DR: This paper presents a robust watermarking approach for hiding grayscale watermarks into digital images by plugging the codebook concept into the singular value decomposition (SVD) and embedding the singular values of the original image into the watermark one to attain the lossless objective.

142 citations


Journal ArticleDOI
TL;DR: Experimental results show that the second solution—modification of the camera together with an imbalance compensation algorithm—would effectively reduce the errors and produce better measurement results than the software-based compensation method.
Abstract: A color phase-shifting technique has been recently developed for high-speed 3-D shape measurement. In this technique, three sinusoidal phase-shifted images used for a measurement cycle in a traditional grayscale phase-shifting technique are encoded into one color image. Therefore, only a single color image is needed for reconstructing the 3-D surface shape of an object. The measurement speed can then be increased up to the frame rate of the camera. However, previous experimental results showed that the measurement accuracy of this technique was initially low, due largely to the coupling and imbalance of color channels. In this paper, two solutions, one software-based and one hardware-based, are proposed to compensate for these errors. Experimental results show that the second solution—modification of the camera together with an imbalance compensation algorithm—would effectively reduce the errors and produce better measurement results than the software-based compensation method. This technique has many potential applications in high-speed measurement, such as highway inspection and dynamic measurement of the human body.
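For context, the single-shot idea is that the three phase-shifted fringe patterns live in the R, G, and B channels, so the wrapped phase can be recovered from one frame with the standard three-step formula. The sketch below assumes ideal 2π/3 shifts and perfectly balanced, uncoupled channels; modeling and compensating the coupling and imbalance is precisely what this paper adds and is not shown here.

```python
import numpy as np

def wrapped_phase_from_color(img_rgb):
    """Recover the wrapped phase from a single color fringe image, assuming
    R, G, B carry three sinusoidal patterns phase-shifted by 2*pi/3 (the
    standard three-step formula). The channel-coupling and imbalance
    compensation that this paper contributes are NOT modeled here."""
    i1, i2, i3 = (img_rgb[..., k].astype(float) for k in range(3))
    phi = np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
    return phi   # wrapped to (-pi, pi]; phase unwrapping is a separate step
```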

Journal ArticleDOI
R.L. de Queiroz, Karen M. Braun
TL;DR: In this article, a reversible method to convert color graphics and pictures to gray images was proposed, which is based on mapping colors to low-visibility high-frequency textures that are applied onto the gray image.
Abstract: We have developed a reversible method to convert color graphics and pictures to gray images. The method is based on mapping colors to low-visibility high-frequency textures that are applied onto the gray image. After receiving a monochrome textured image, the decoder can identify the textures and recover the color information. More specifically, the image is textured by carrying out a subband (wavelet) transform and replacing bandpass subbands by the chrominance signals. The low-pass subband is the same as that of the luminance signal. The decoder performs a wavelet transform on the received gray image and recovers the chrominance channels. The intent is to print color images with black and white printers and to be able to recover the color information afterwards. Registration problems are discussed and examples are presented.
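A compact way to see the encoder/decoder pair: replace two bandpass subbands of the luminance with scaled, subsampled chrominance, invert the transform to get the textured gray image, and re-transform on the decoder side to read the chrominance back. The wavelet, the choice of subbands, and the scaling factor in this sketch are assumptions; the paper's design, and its handling of print/scan registration, is more involved.

```python
import numpy as np
import pywt

def color_to_textured_gray(y, cb, cr):
    """Embed chrominance into a gray image by replacing bandpass wavelet
    subbands of the luminance. Which subbands carry Cb/Cr, the wavelet,
    and the scaling are assumptions for illustration."""
    ll, (lh, hl, hh) = pywt.dwt2(y.astype(float), 'haar')
    cb_s = 0.25 * (cb[::2, ::2].astype(float) - 128.0)   # shrink chroma to keep texture subtle
    cr_s = 0.25 * (cr[::2, ::2].astype(float) - 128.0)
    return pywt.idwt2((ll, (cb_s, cr_s, hh)), 'haar')

def recover_color(gray_textured):
    """Decoder side: re-transform the textured gray image and read the
    chrominance back out of the bandpass subbands."""
    ll, (lh, hl, hh) = pywt.dwt2(gray_textured, 'haar')
    cb = np.clip(lh / 0.25 + 128.0, 0, 255)
    cr = np.clip(hl / 0.25 + 128.0, 0, 255)
    y = pywt.idwt2((ll, (np.zeros_like(lh), np.zeros_like(hl), hh)), 'haar')
    return y, cb, cr
```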

Journal ArticleDOI
TL;DR: In this article, the authors presented the development of a rust defect recognition method to determine whether rust defects exist in a given digital image by processing digital color information, instead of the grayscale image processing commonly used in previous research.

Journal ArticleDOI
TL;DR: The essentially nondissipative (ENoD) difference schemes for the MMC component are suggested to eliminate the impulse noise with a minimum (ideally no) introduction of dissipation to deal with the mixture of the impulse and Gaussian noises reliably.
Abstract: The paper is concerned with PDE-based image restoration. A new model is introduced by hybridizing a nonconvex variant of the total variation minimization (TVM) and the motion by mean curvature (MMC) in order to deal with the mixture of the impulse and Gaussian noises reliably. We suggest the essentially nondissipative (ENoD) difference schemes for the MMC component to eliminate the impulse noise with a minimum (ideally no) introduction of dissipation. The MMC-TVM hybrid model and the ENoD schemes are applied for both gray-scale and color images. For color image denoising, we consider the chromaticity-brightness decomposition with the chromaticity formulated in the angle domain. An incomplete Crank-Nicolson alternating direction implicit time-stepping procedure is adopted to solve those differential equations efficiently. Numerical experiments have shown that the new hybrid model and the numerical schemes can remove the mixture of the impulse and Gaussian noises, efficiently and reliably, preserving edges quite satisfactorily.

Journal ArticleDOI
TL;DR: An improved face region extraction algorithm and a light dots detection algorithm are proposed for better eye detection performance.

Patent
29 Aug 2006
TL;DR: In this paper, an identification method and process for objects from digitally captured images thereof that uses data characteristics to identify an object from a plurality of objects in a database is presented, where the data is broken down into parameters such as a Shape Comparison, Grayscale Comparison, Wavelet Comparison, and Color Cube Comparison with object data in one or more databases to identify the actual object of a digital image.
Abstract: An identification method and process for objects from digitally captured images thereof that uses data characteristics to identify an object from a plurality of objects in a database. The data is broken down into parameters such as a Shape Comparison, Grayscale Comparison, Wavelet Comparison, and Color Cube Comparison with object data in one or more databases to identify the actual object of a digital image.

Journal ArticleDOI
01 Nov 2006
TL;DR: A series of linear and nonlinear pseudocoloring maps designed and applied to single-energy X-ray luggage scans to assist airport screeners in identifying and detecting threat items, particularly hard-to-see low-density weapons in luggage, is described.
Abstract: This paper describes a series of linear and nonlinear pseudocoloring maps designed and applied to single-energy X-ray luggage scans to assist airport screeners in identifying and detecting threat items, particularly hard-to-see, low-density weapons in luggage. Considerations of the psychological and physiological processing involved in the human perception of color as well as the effects of using various color models, such as the RGB and HSI models, were explored. Original grayscale data, various enhanced images, and segmented scenes were used as input to the various color mapping schemes designed in this research. A highly interactive user-friendly graphical interface and a portable test were developed and used in a performance evaluation study involving a population of actual federal airport screeners. The study proved the advantages of using color over gray level data and also allowed the ranking of color maps and selection of the best performing color schemes. Rate improvements in weapon detection of up to 97% were achieved through the use of color.
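As a toy illustration of a nonlinear pseudocoloring map of this general kind (this particular mapping is our own, not one of the maps evaluated in the paper): drive hue with a power-law-stretched gray level, keep saturation fixed, retain the original intensity as value, and convert HSV to RGB.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def pseudocolor(gray, gamma=0.7):
    """Illustrative nonlinear pseudocoloring: gray level drives hue through a
    power law (dark levels map toward blue, bright levels toward red),
    saturation is fixed, and the original intensity is kept as value.
    The specific mapping is an assumption, not one of the paper's maps."""
    g = gray.astype(float) / 255.0
    hue = 0.66 * (1.0 - g ** gamma)          # g=0 -> hue 0.66 (blue), g=1 -> hue 0 (red)
    hsv = np.stack([hue, np.full_like(g, 0.9), g], axis=-1)
    return (hsv_to_rgb(hsv) * 255).astype(np.uint8)
```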

Patent
16 Nov 2006
TL;DR: In this paper, a computer method of creating a super-resolved grayscale image from lower-resolution images using an L1 norm data fidelity penalty term to enforce similarity between the low-resolution images and the high-resolution image estimate is provided.
Abstract: A computer method of creating a super-resolved grayscale image from lower-resolution images using an L1 norm data fidelity penalty term to enforce similarity between the low-resolution images and the high-resolution image estimate is provided. A spatial penalty term encourages sharp edges in the high-resolution image; the data fidelity penalty term is applied to space-invariant point spread function, translational, affine, projective, and dense motion models, including fusing the lower-resolution images to estimate a blurred higher-resolution image and then a deblurred image. The data fidelity penalty term uses the L1 norm in a likelihood fidelity term for motion estimation errors. The spatial penalty term uses bilateral-TV regularization with an image having horizontal and vertical pixel-shift terms, and a scalar weight between 0 and 1. The penalty terms create an overall cost function to which steepest descent optimization is applied for minimization. Direct image operator effects replace matrices for speed and efficiency.
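In the standard multiframe formulation that this description matches, the overall cost combines an L1 data-fidelity term over the low-resolution frames Y_k (with downsampling D, blur H, and per-frame motion F_k) and a bilateral-TV regularizer built from horizontal and vertical shift operators S_x, S_y with a scalar weight α between 0 and 1. The notation below is ours; the patent's exact indexing may differ.

```latex
% L1 data fidelity + bilateral-TV regularization (notation assumed):
\hat{X} \;=\; \arg\min_{X}\;
  \sum_{k} \bigl\| D\,H\,F_k X - Y_k \bigr\|_1
  \;+\; \lambda \sum_{l=-P}^{P}\;\sum_{m=0}^{P}
        \alpha^{|l|+|m|}\,\bigl\| X - S_x^{l} S_y^{m} X \bigr\|_1
```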

Journal ArticleDOI
TL;DR: This work addresses the dynamic super-resolution problem of reconstructing a high-quality set of monochromatic or color super-resolved images from low-quality monochromatic, color, or mosaiced frames by proposing a joint method for simultaneous SR, deblurring, and demosaicing.
Abstract: We address the dynamic super-resolution (SR) problem of reconstructing a high-quality set of monochromatic or color super-resolved images from low-quality monochromatic, color, or mosaiced frames. Our approach includes a joint method for simultaneous SR, deblurring, and demosaicing, thereby taking into account practical color measurements encountered in video sequences. For the case of translational motion and common space-invariant blur, the proposed method is based on a very fast and memory-efficient approximation of the Kalman filter (KF). Experimental results on both simulated and real data are supplied, demonstrating the presented algorithms and their strengths.

Journal ArticleDOI
Yu-Chen Hu
TL;DR: A novel grayscale image hiding scheme that is capable of hiding multiple secret images into a host image of the same size and provides a higher hiding capacity and a higher degree of security than that of the virtual image cryptosystem is proposed.

Journal ArticleDOI
TL;DR: A technique that involves minimal operator intervention was developed and implemented for identification and quantification of black holes on T1-weighted magnetic resonance images (T1 images) in multiple sclerosis (MS).

Book ChapterDOI
18 Sep 2006
TL;DR: Numerical and visual observations show that the performance of the proposed fuzzy wavelet shrinkage method outperforms current fuzzy non-wavelet methods and is comparable with some recent but more complex wavelet methods.
Abstract: This paper focuses on fuzzy image denoising techniques. In particular, we investigate the usage of fuzzy set theory in the domain of image enhancement using wavelet thresholding. We propose a simple but efficient new fuzzy wavelet shrinkage method, which can be seen as a fuzzy variant of a recently published probabilistic shrinkage method [1] for reducing additive Gaussian noise from digital greyscale images. Experimental results show that the proposed method can efficiently and rapidly remove additive Gaussian noise from digital greyscale images. Numerical and visual observations show that the performance of the proposed method outperforms current fuzzy non-wavelet methods and is comparable with some recent but more complex wavelet methods. We also illustrate the main differences between this version and the probabilistic version and show the main improvements in comparison to it.

Patent
09 Aug 2006
TL;DR: An image display apparatus includes an image signal processing unit which adjusts input grayscale values included in an input image signal according to a predetermined grayscale characteristic.
Abstract: An image display apparatus includes: an image signal processing unit which adjusts input grayscale values included in an input image signal according to a predetermined grayscale characteristic; a display unit which displays an image based on an image signal including the output grayscale values adjusted by the image signal processing unit; and a grayscale characteristic changing unit which changes a correlation between the input and output grayscale values defined based on the grayscale characteristic.

Journal ArticleDOI
TL;DR: It is shown that when the Bhattacharyya coefficient is applied to gray scale images, it produces biased results.
Abstract: The Bhattacharyya coefficient is a popular method that uses color histograms to correlate images. It is believed to be the absolute similarity measure for frequency-coded data and to need no bias correction. In this paper, we show that when this method is applied to grayscale images, it produces biased results. Correlation based on this measure is not adequate for common grayscale images, as the color in grayscale is not a sufficient feature. This bias is explored and demonstrated through numerous experiments with different kinds of non-rigid maneuvering objects in cluttered and less cluttered environments, in the context of object tracking. The spectral performance of the Bhattacharyya curve is compared with a spatial matching criterion, i.e., the mean square difference.
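For reference, the measure under discussion is just the sum of square roots of products of corresponding normalized histogram bins; the bias the paper analyzes arises when it is applied to one-dimensional gray-level histograms rather than color ones. A minimal version:

```python
import numpy as np

def bhattacharyya_coefficient(img_a, img_b, bins=256):
    """Bhattacharyya coefficient between the normalized gray-level histograms
    of two image patches: BC = sum_i sqrt(p_i * q_i). BC = 1 for identical
    histograms and 0 for non-overlapping ones."""
    p, _ = np.histogram(img_a, bins=bins, range=(0, 256))
    q, _ = np.histogram(img_b, bins=bins, range=(0, 256))
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))
```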

Proceedings ArticleDOI
01 Oct 2006
TL;DR: Experimental results indicate that the proposed algorithm can effectively detect shadows and highlights, adapting the background with respect to illumination changes.
Abstract: This paper presents a new adaptive background model for grayscale video sequences that includes shadow and highlight detection. In the training period, statistics are computed for each image pixel to obtain the initial background model and an estimate of the image global noise, even in the presence of several moving objects. Each new frame is then compared to this background model, and spatio-temporal features are used to obtain foreground pixels. Local statistics are then used to detect shadows and highlights, and pixels that are detected as either shadow or highlight for a certain number of frames are adapted to become part of the background. Experimental results indicate that the proposed algorithm can effectively detect shadows and highlights, adapting the background with respect to illumination changes.
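A minimal sketch of the per-pixel classification step, assuming the training phase has already produced a per-pixel mean and standard deviation: deviations beyond a few standard deviations mark candidate foreground, and the ratio of the current intensity to the background separates shadows (darker) from highlights (brighter). The specific thresholds and ratio bands are illustrative assumptions, not the paper's statistics.

```python
import numpy as np

def classify_frame(frame, mean_bg, std_bg, k=2.5, shadow=(0.5, 0.9), highlight=(1.1, 1.5)):
    """Toy per-pixel classification against a trained background model:
    foreground where the deviation exceeds k*sigma, with shadow/highlight
    flagged by the intensity ratio to the background. Thresholds are assumed."""
    frame = frame.astype(float)
    diff = np.abs(frame - mean_bg)
    candidate = diff > k * np.maximum(std_bg, 1.0)
    ratio = frame / np.maximum(mean_bg, 1.0)
    shadow_mask = candidate & (ratio >= shadow[0]) & (ratio <= shadow[1])
    highlight_mask = candidate & (ratio >= highlight[0]) & (ratio <= highlight[1])
    foreground = candidate & ~shadow_mask & ~highlight_mask
    return foreground, shadow_mask, highlight_mask
```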

Proceedings ArticleDOI
20 Aug 2006
TL;DR: This paper presents a scheme for steganalysis of LSB matching steganography based on feature extraction and pattern recognition techniques and shape parameter of generalized Gaussian distribution in the wavelet domain is introduced to measure image complexity.
Abstract: In this paper, we present a scheme for steganalysis of LSB matching steganography based on feature extraction and pattern recognition techniques. The shape parameter of the generalized Gaussian distribution (GGD) in the wavelet domain is introduced to measure image complexity. Several statistical pattern recognition algorithms are applied to train and classify the feature sets. Comparison of our method and others indicates that our method is highly competitive. It is highly efficient for color image steganalysis. It is also efficient for grayscale steganalysis in the low image complexity domain.
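One common way to obtain such a shape parameter from wavelet detail coefficients is moment matching: the ratio (E|x|)²/E[x²] determines the GGD shape uniquely, so it can be inverted numerically. The estimator below is a standard one and is only assumed to be similar to what the paper uses; the wavelet and subband choice are also assumptions.

```python
import numpy as np
import pywt
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_shape(coeffs, lo=0.05, hi=5.0):
    """Estimate the GGD shape parameter by moment matching. The bracket
    [lo, hi] assumes the moment ratio lies in the range typical of natural
    image wavelet coefficients; this is one standard estimator, not
    necessarily the paper's."""
    x = np.asarray(coeffs, dtype=float).ravel()
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    f = lambda a: gamma(2.0 / a) ** 2 / (gamma(1.0 / a) * gamma(3.0 / a)) - rho
    return brentq(f, lo, hi)

def image_complexity(img_gray):
    """Image complexity proxy: GGD shape of the finest-scale detail subbands."""
    _, (lh, hl, hh) = pywt.dwt2(img_gray.astype(float), 'db8')
    return ggd_shape(np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()]))
```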

Proceedings ArticleDOI
07 Jun 2006
TL;DR: An approach where one ant is assigned to each pixel of an image and then moves around the image seeking low grayscale regions indicates that an ant-based approach has the potential of becoming an established image thresholding technique.
Abstract: This study is an investigation of the application of ant colony optimization to image thresholding. This paper presents an approach where one ant is assigned to each pixel of an image and then moves around the image seeking low grayscale regions. Experimental results demonstrate that the proposed ant-based method performs better than two other established thresholding algorithms. Further work must be conducted to optimize the algorithm parameters, improve the analysis of the pheromone data and reduce computation time. However, the study indicates that an ant-based approach has the potential of becoming an established image thresholding technique.

Journal ArticleDOI
TL;DR: It is shown that it is possible to select 24 samples from the Munsell set that outperform the GretagMacbeth ColorChecker and that this selection can be efficiently derived using an algorithm called MAXMINC.
Abstract: The color characterization of digital cameras often requires the use of standard charts containing a fixed number of color samples. The exact choice of such characterization charts—how many (and which) known samples to include—is known to affect characterization performance. This study describes methods to select optimum color samples from a set of 1269 Munsell surface colors. The effect of sample selection on characterization performance is evaluated and compared with performance using the standard GretagMacbeth ColorChecker and GretagMacbeth ColorChecker DC colors. The work confirms that the standard charts appear to have been well selected. However, we show that it is possible to select 24 samples from the Munsell set that outperform the GretagMacbeth ColorChecker and that this selection can be efficiently derived using an algorithm called MAXMINC. It is proposed that this algorithm may have general applicability; for example, to the optimal selection of samples constrained to a subspace of the Munsell color solid. © 2006 Society for Imaging Science and Technology. DOI: 10.2352/J.ImagingSci.Technol.200650:5481

The ANSI IT8 charts are designed to be used in a color-management process with the aim of allowing a system to reproduce colors with acceptable tolerance. These charts are typically checkerboard-array targets containing a number of carefully selected and prepared squares or chips in a wide range of achromatic and chromatic colors. Many of these square patches represent the color of certain natural objects of special interest, such as human skin, foliage, and blue sky [11,12]. Primary colors for both additive and subtractive color mixing are also commonly included. A series of achromatic patches in the characterization charts provides a convenient grayscale that may be used for color balance and tone-reproduction purposes. Repeated white, midgray, and black patches around the outer edge of the chart (GretagMacbeth ColorChecker DC, for example) allow measurements of the spatial uniformity of illumination. Special colors, such as glossy surface colors in the GretagMacbeth ColorChecker DC and optional vendor colors in the ANSI IT8 charts, can also be included.
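The name MAXMINC suggests a greedy max-min selection: grow the sample set by repeatedly adding the candidate whose minimum distance to the already-chosen samples is largest. The sketch below implements that generic strategy over candidate color coordinates (e.g., the 1269 Munsell samples in a Lab-like space); the paper's actual criterion and color-difference metric are not reproduced here.

```python
import numpy as np

def maxmin_select(candidates, n=24):
    """Greedy max-min (farthest-point) selection over candidate color
    coordinates. This is the generic strategy the MAXMINC name suggests;
    the paper's exact criterion and distance metric are assumptions."""
    candidates = np.asarray(candidates, dtype=float)     # e.g. 1269 x 3 array of Lab values
    # start from the candidate farthest from the centroid
    chosen = [int(np.argmax(np.linalg.norm(candidates - candidates.mean(0), axis=1)))]
    dmin = np.linalg.norm(candidates - candidates[chosen[0]], axis=1)
    while len(chosen) < n:
        nxt = int(np.argmax(dmin))                       # candidate with largest min-distance
        chosen.append(nxt)
        dmin = np.minimum(dmin, np.linalg.norm(candidates - candidates[nxt], axis=1))
    return chosen
```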

Journal ArticleDOI
TL;DR: This paper deals with effective separation of foreground and background in low-quality document images suffering from various types of degradation, including scanning noise, aging effects, and uneven background or foreground, and shows excellent adaptability in tackling problems of uneven illumination and local changes or nonuniformity in background and foreground colors.
Abstract: This paper deals with effective separation of foreground and background in low-quality document images suffering from various types of degradation, including scanning noise, aging effects, and uneven background or foreground. The proposed algorithm shows excellent adaptability in tackling these problems of uneven illumination and local changes or nonuniformity in background and foreground colors. The approach is primarily designed for (but not restricted to) the processing of color documents, and it works well in the grayscale domain too. The test document set contains samples (in color as well as in grayscale) of old historical documents, including manuscripts of high importance. The data set used in this study consists of one hundred images. These images are selected from different sources, including image databases that had been scanned from the working notebooks of famous writers who used to write with quill or pencil, generating very low contrast between foreground and background. The foreground extraction method has been judged by computing the accuracy of extracting handwritten lines and words from the test images. This evaluation shows that the proposed method can extract lines and words with accuracies of about 84% and 93%, respectively. Apart from this quantitative evaluation, a qualitative evaluation is also presented to compare the proposed method with one popular technique for foreground/background separation in document images.

Proceedings ArticleDOI
01 Nov 2006
TL;DR: The experimental results show that after feature selection, the grayscale image based feature set achieves very competitive performance for the problem of wood defect detection relative to the color image based features.
Abstract: In this paper we address the issue of detecting defects in wood using features extracted from grayscale images. The feature set proposed here is based on the concept of texture and it is computed from the co-occurrence matrices. The features provide measures of properties such as smoothness, coarseness, and regularity. Comparative experiments using a color image based feature set extracted from percentile histograms are carried out to demonstrate the efficiency of the proposed feature set. Two different learning paradigms, neural networks and support vector machines, and a feature selection algorithm based on multi-objective genetic algorithms were considered in our experiments. The experimental results show that after feature selection, the grayscale image based feature set achieves very competitive performance for the problem of wood defect detection relative to the color image based features.
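For readers unfamiliar with co-occurrence features, the sketch below computes a gray-level co-occurrence matrix for one pixel offset and a few of the classic descriptors (contrast, energy, homogeneity) that capture coarseness, regularity, and smoothness. The quantization level, the offset, and the exact feature list the authors use are assumptions here.

```python
import numpy as np

def glcm_features(img, levels=32, offset=(0, 1)):
    """Co-occurrence matrix and classic texture descriptors of the kind the
    paper computes from grayscale images. Quantization, offset, and the
    feature list are illustrative choices."""
    q = (img.astype(float) * levels / 256.0).astype(int).clip(0, levels - 1)
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()   # reference pixels
    b = q[dy:, dx:].ravel()                             # neighbors at the given offset
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return {
        'contrast':    float(np.sum(glcm * (i - j) ** 2)),           # coarseness proxy
        'energy':      float(np.sum(glcm ** 2)),                     # regularity/uniformity
        'homogeneity': float(np.sum(glcm / (1.0 + np.abs(i - j)))),  # smoothness
    }
```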