
Showing papers in "Journal of Electronic Imaging in 2004"


Journal ArticleDOI
TL;DR: 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images, and the thresholding algorithms that perform uniformly better over nondestructive testing and document image applications are identified.
Abstract: We conduct an exhaustive survey of image thresholding methods, categorize them, express their formulas under a uniform notation, and finally carry out their performance comparison. The thresholding methods are categorized according to the information they exploit, such as histogram shape, measurement space clustering, entropy, object attributes, spatial correlation, and local gray-level surface. 40 selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images. The comparison is based on the combined performance measures. We identify the thresholding algorithms that perform uniformly better over nondestructive testing and document image applications. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1631316)

4,543 citations
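
As a concrete illustration of the clustering-based category in the survey above, the following sketch implements Otsu's method, a classic measurement-space clustering threshold. It is offered only as a representative example, not as one of the 40 benchmarked variants; the 8-bit histogram assumption and NumPy interface are the sketch's own.

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's clustering-based threshold for an 8-bit grayscale image.

    Picks the gray level that maximizes the between-class variance of the
    two classes induced by the threshold.
    """
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()        # gray-level probabilities
    omega = np.cumsum(p)                       # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))         # cumulative mean
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Usage: binary = image > otsu_threshold(image)
```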


Journal ArticleDOI
TL;DR: An overview of 3-D digitizing techniques is presented, with an emphasis on commercially available techniques and systems that are considered good representations of the key technologies that have survived the test of years.
Abstract: We review 20 years of development in the field of 3-D laser imaging. An overview of 3-D digitizing techniques is presented with an emphasis on commercial techniques and systems currently available. It covers some of the most important methods that have been developed, both at the National Research Council of Canada (NRC) and elsewhere, with a focus on commercial systems that are considered good representations of the key technologies that have survived the test of years. © 2004 SPIE and IS&T.

1,041 citations


Journal ArticleDOI
TL;DR: This work develops the Retinex computation into a full-scale automatic image enhancement algorithm—the multiscale Retinex with color restoration (MSRCR)—which combines color constancy with local contrast/lightness enhancement to transform digital images into renditions that approach the realism of direct scene observation.
Abstract: There has been a revivification of interest in the Retinex computation in the last six or seven years, especially in its use for image enhancement. In his last published concept (1986) for a Retinex computation, Land introduced a center/surround spatial form, which was inspired by the receptive field structures of neurophysiology. With this as our starting point, we develop the Retinex concept into a full-scale automatic image enhancement algorithm—the multiscale Retinex with color restoration (MSRCR)—which combines color constancy with local contrast/lightness enhancement to transform digital images into renditions that approach the realism of direct scene observation. Recently, we have been exploring the fundamental scientific questions raised by this form of image processing. 1. Is the linear representation of digital images adequate in visual terms in capturing the wide scene dynamic range? 2. Can visual quality measures using the MSRCR be developed? 3. Is there a canonical, i.e., statistically ideal, visual image? The answers to these questions can serve as the basis for automating visual assessment schemes, which, in turn, are a primitive first step in bringing visual intelligence to computers. © 2004 SPIE and IS&T.

598 citations
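
The center/surround form described above reduces, per scale, to the log of a pixel minus the log of its Gaussian-weighted surround. The sketch below shows a minimal multiscale Retinex along those lines; the scales, weights, and the small epsilon used to avoid log(0) are illustrative assumptions, and the color restoration and gain/offset stages of the full MSRCR are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(channel, sigmas=(15, 80, 250), weights=None, eps=1.0):
    """Minimal multiscale Retinex for a single color channel.

    Each scale is a center/surround comparison: log(pixel) minus
    log(Gaussian-weighted surround).  Scales and weights are illustrative,
    not the published MSRCR settings.
    """
    channel = channel.astype(float) + eps          # avoid log(0)
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    out = np.zeros_like(channel)
    for sigma, w in zip(sigmas, weights):
        surround = gaussian_filter(channel, sigma) + eps
        out += w * (np.log(channel) - np.log(surround))
    return out   # typically rescaled to the display range afterwards
```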


Journal ArticleDOI
TL;DR: This work provides concise MATLAB™ implementations of two of the spatial techniques of making pixel comparisons, along with test results on several images and a discussion of the results.
Abstract: Many different descriptions of Retinex methods of lightness computation exist. We provide concise MATLAB™ implementations of two of the spatial techniques of making pixel comparisons. The code is presented, along with test results on several images and a discussion of the results. We also discuss the calibration of input images and the post-Retinex processing required to display the output images. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1636761)

299 citations


Journal ArticleDOI
TL;DR: A comparison is presented between two color equalization algorithms: Retinex, the famous model due to Land and McCann, and Automatic Color Equalization (ACE), a new algorithm recently presented by the authors.
Abstract: We present a comparison between two color equalization algorithms: Retinex, the famous model due to Land and McCann, and Automatic Color Equalization (ACE), a new algorithm recently presented by the authors. These two algorithms share a common approach to color equalization, but different computational models. We introduce the two models focusing on differences and common points. An analysis of their computational characteristics illustrates the way the Retinex approach has influenced ACE structure, and which aspects of the first algorithm have been modified in the second one and how. Their interesting equalization properties, like lightness and color constancy, image dynamic stretching, global and local filtering, and data driven dequantization, are qualitatively and quantitatively presented and compared, together with their ability to mimic the human visual system. © 2004 SPIE and IS&T.

154 citations
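
For readers comparing the two computational models, a deliberately brute-force sketch of the ACE chromatic/spatial adjustment follows: every pixel is compared with every other pixel through a slope-limited difference weighted by inverse distance. The slope value and the final linear rescaling are assumptions; the published algorithm uses a separate dynamic tone reproduction stage and faster approximations.

```python
import numpy as np

def ace_channel(channel, slope=5.0):
    """Brute-force ACE chromatic/spatial adjustment for one channel.

    r() is a slope-limited (saturated) difference; each contribution is
    weighted by the inverse Euclidean distance between pixels.  O(N^2),
    so only practical here for very small images.
    """
    h, w = channel.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    vals = channel.ravel().astype(float)
    out = np.zeros_like(vals)
    for i in range(vals.size):
        d = np.hypot(coords[:, 0] - coords[i, 0], coords[:, 1] - coords[i, 1])
        d[i] = np.inf                              # skip self-comparison
        r = np.clip(slope * (vals[i] - vals), -1.0, 1.0)
        out[i] = np.sum(r / d)
    # Simple linear rescaling to [0, 1]; the published method uses a
    # dedicated dynamic tone reproduction step instead (assumed shortcut).
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return out.reshape(h, w)
```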


Journal ArticleDOI
TL;DR: It is proposed that the next significant advances in the field of color appearance modeling and image quality metrics will not come from evolutionary revisions of colorimetric color appearance models alone, but from a more revolutionary approach to make appearance and difference predictions for more complex stimuli in a wider array of viewing conditions.
Abstract: Traditional color appearance modeling has recently matured to the point that available, internationally recommended models such as CIECAM02 are capable of making a wide range of predictions, to within the observer variability in color matching and color scaling of stimuli, in somewhat simplified viewing conditions. It is proposed that the next significant advances in the field of color appearance modeling and image quality metrics will not come from evolutionary revisions of colorimetric color appearance models alone. Instead, a more revolutionary approach will be required to make appearance and difference predictions for more complex stimuli in a wider array of viewing conditions. Such an approach can be considered image appearance modeling, since it extends the concepts of color appearance modeling to stimuli and viewing environments that are spatially and temporally at the level of complexity of real natural and man-made scenes, and extends traditional image quality metrics into the color appearance domain. Thus, two previously parallel and evolving research areas are combined in a new way as an attempt to instigate a significant advance. We review the concepts of image appearance modeling, present iCAM as one example of such a model, and provide a number of examples of the use of iCAM in image reproduction and image quality evaluation. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1635368)

148 citations


Journal ArticleDOI
TL;DR: Experimental results, obtained with real FLIR image sequences, illustrating a wide variety of target and clutter variability, demonstrate the effectiveness and robustness of the proposed method.
Abstract: We propose a method for automatic target detection and tracking in forward-looking infrared (FLIR) image sequences. We use morphological connected operators to extract and track targets of interest and remove undesirable clutter. The design of these operators is based on general size, connectivity, and motion criteria, using spatial intraframe and temporal interframe information. In a first step, an image sequence is filtered on a frame-by-frame basis to remove background and residual clutter and to enhance the presence of targets. Detections extracted from the first step are passed to a second step for motion-based analysis. This step exploits the spatiotemporal correlation of the data, stated in terms of a connectivity criterion along the time dimension. The proposed method is suitable for pipelined implementation or time-progressive coding/transmission, since only a few frames are considered at a time. Experimental results, obtained with real FLIR image sequences illustrating a wide variety of target and clutter variability, demonstrate the effectiveness and robustness of the proposed method.

131 citations
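
The size criterion used in the frame-by-frame filtering step can be illustrated with a generic connected-component filter: enhance small bright structures, threshold, and keep only components whose area lies in a plausible target range. The structuring-element size, threshold rule, and area bounds below are illustrative assumptions, not the operators designed in the paper.

```python
import numpy as np
from scipy import ndimage

def extract_candidate_targets(frame, se_size=7, min_area=4, max_area=200):
    """Frame-by-frame size filtering in the spirit of connected operators.

    A white top-hat removes slowly varying background; thresholding and
    connected-component area bounds then keep blobs of plausible target
    size.  All parameters are illustrative, not the published design.
    """
    se = np.ones((se_size, se_size), dtype=bool)
    residue = ndimage.white_tophat(frame.astype(float), footprint=se)
    mask = residue > residue.mean() + 3.0 * residue.std()   # assumed rule
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = {i + 1 for i, a in enumerate(areas) if min_area <= a <= max_area}
    return np.isin(labels, list(keep))
```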


Journal ArticleDOI
TL;DR: The concept of Color Space and Color Solid, and the Psychophysical Scaling of Color Attributes: Stimulus and Perception, are introduced.
Abstract: Preface. Chapter 1. The Concept of Color Space and Color Solid. Chapter 2. Historical Development of Color Order Systems. Chapter 3. Psychophysics. Chapter 4. Color Attributes and Perceptual Attribute Scaling. Chapter 5. Psychophysical Scaling of Color Attributes: Stimulus and Perception. Chapter 6. Historical Development of Color Space and Color Difference Formulas. Chapter 7. Major Color Order Systems and Their Psychophysical Structure. Chapter 8. From Color-Matching Error to Large Color Differences. Chapter 9. Conclusions and Outlook. Notes. Glossary. References. Credits. Index.

124 citations


Journal ArticleDOI
Robert E. Sobol
TL;DR: In this article, a ratio modification operator is introduced to change the nature of the generated contrast mask so that it is simultaneously smooth in regions of small contrast ratios, but extremely sharp at high contrast edges.
Abstract: Digital photography systems often render an image from a scene-referred description with very wide dynamic range to an output-referred description of much lesser dynamic range. Global tone maps are often used for this purpose, but can fail when called upon to perform a large amount of range compression. A luminance formulation of the Retinex ratio-reset-product-average algorithm produces a smoothly changing contrast mask of great benefit, but it too can fail where high contrast edges are encountered. A slight but critical modification to the Retinex equation - introducing a ratio modification operator - changes the nature of the generated contrast mask so that it is simultaneously smooth in regions of small contrast ratios, but extremely sharp at high contrast edges. A mask produced in this way compresses large and undesirable contrast ratios while preserving, or optionally enhancing, small ratios critical to the sensation of image contrast. Processed images may appear to have a greater contrast despite having a shorter global contrast range. Adjusting the new operator prior to processing gives control of the degree of compression at high contrast edges. Changing the operator during processing gives control over spatial frequency response.

102 citations


Journal ArticleDOI
Ozgur Ekici, Bulent Sankur, Baris Coskun, Umut Naci, Mahmut Akcay
TL;DR: This work compares the performance of eight semifragile watermarking algorithms in terms of their miss probability under forgery attack, and in terms of false alarm probability under nonmalicious signal processing operations that preserve the content and quality of the image.
Abstract: Semifragile watermarking techniques aim to prevent tampering and fraudulent use of modified images. A semifragile watermark monitors the integrity of the content of the image but not its numerical representation. Therefore, the watermark is designed so that the integrity is proven if the content of the image has not been tampered with, despite some mild processing on the image. However, if parts of the image are replaced with the wrong key or are heavily processed, the watermark information should indicate evidence of forgery. We compare the performance of eight semifragile watermarking algorithms in terms of their miss probability under forgery attack, and in terms of false alarm probability under nonmalicious signal processing operations that preserve the content and quality of the image. We propose desiderata for semifragile watermarking algorithms and indicate the promising algorithms among existing ones. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1633285)

92 citations


Journal ArticleDOI
TL;DR: An analysis of the Frankle-McCann Retinex algorithm is presented, along with proposed extensions that result in a better rendition of the image, suppress artifacts, and reduce the number of iterations required to reach the final image.
Abstract: We present our analysis of the Frankle-McCann Retinex algorithm and propose extensions that result in a better rendition of the image. These extensions suppress artifacts and reduce the number of iterations required to get to the final image. In addition, our investigation suggests that the Retinex algorithm can be viewed as a process of estimating the per-pixel gains necessary to achieve dynamic range compression in the original image. Experimental results are provided that demonstrate the efficacy of this approach as compared to using Retinex to generate the final image.

Journal ArticleDOI
TL;DR: A linear solution to the problem of hyperbola-specific fitting is revealed for the first time and two new theorems on fitting hyperbolae are presented together with rigorous proofs.
Abstract: A new method based on quadratic constrained least-mean-square fitting to simultaneously determine both the best hyperbolic and elliptical fits to a set of scattered data is presented. Thus a linear solution to the problem of hyperbola-specific fitting is revealed for the first time. Pilu's method to fit an ellipse (with respect to distance) to observed data points is extended to select, without prejudice, both ellipses and hyperbolae as well as their degenerate forms as indicated by optimality with respect to the algebraic distance. This novel method is numerically efficient and is suitable for fitting to dense datasets with low noise. Furthermore, it is deemed highly suited to initialize a better but more computationally costly least-square minimization of orthogonal distance. Moreover, Grassmannian coordinates of the hyperbolae are introduced, and it is shown how these apply to fitting a prototypical hyperbola. Two new theorems on fitting hyperbolae are presented together with rigorous proofs. A new method to determine the spatial uncertainty of the fit from the eigen or singular values is derived and used as an indicator for the quality of fit. All proposed methods are verified using numerical simulation, and working MATLAB® programs for the implementation are made available. Further, an application of the methods to automatic industrial inspection is presented. © 2004 SPIE and IS&T.
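
The constrained least-squares machinery extended here builds on the direct conic fit of Fitzgibbon et al. used by Pilu: minimize the algebraic distance subject to a quadratic constraint, which reduces to a generalized eigenvalue problem. The sketch below shows that underlying step with the classic ellipse constraint 4AC - B^2 = 1; the paper's unprejudiced selection of hyperbolae and degenerate conics, its Grassmannian coordinates, and its uncertainty measure are not reproduced.

```python
import numpy as np
from scipy.linalg import eig

def fit_conic_direct(x, y):
    """Direct algebraic conic fit: min ||D a||  s.t.  a^T C a = 1.

    x, y are 1-D arrays of point coordinates.  D is the design matrix for
    A x^2 + B xy + C y^2 + D x + E y + F = 0, and the constraint matrix
    encodes 4AC - B^2 = 1 (the ellipse-specific constraint of
    Fitzgibbon/Pilu).  Returns the 6 conic coefficients.
    """
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                                   # scatter matrix
    C = np.zeros((6, 6))
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    evals, evecs = eig(S, C)                      # generalized eigenproblem
    evals = np.real(evals)
    # The ellipse solution is the eigenvector of the single finite,
    # positive generalized eigenvalue; infinite eigenvalues are discarded.
    idx = np.flatnonzero(np.isfinite(evals) & (evals > 0))
    a = np.real(evecs[:, idx[0]])
    return a / np.linalg.norm(a)
```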


Journal ArticleDOI
TL;DR: A fast version of the MCMC (Markov chain Monte Carlo) algorithm based on the Bartlett decomposition for the resulting data augmentation problem is proposed, and results for both synthetic and real data are shown.
Abstract: We consider the problem of the blind separation of noisy, instantaneously mixed images. The images are modeled by hidden Markov fields with unknown parameters. Given the observed images, we give a Bayesian formulation and propose a fast version of the MCMC (Markov chain Monte Carlo) algorithm based on the Bartlett decomposition for the resulting data augmentation problem. We separate the unknown variables into two categories: (1) the parameters of interest, which are the mixing matrix, the noise covariance, and the parameters of the source distributions; (2) the hidden variables, which are the unobserved sources and the unobserved pixel segmentation labels. The proposed algorithm provides, in the stationary regime, samples drawn from the posterior distributions of all the variables involved in the problem, leading to great flexibility in the cost function choice. Finally, we show results for both synthetic and real data to illustrate the feasibility of the proposed solution.

Journal ArticleDOI
TL;DR: This work extends previous work on specifying Retinex parameters by establishing methods that automatically determine values for them based on the input image, and testing these methods on the McCann-McKee-Taylor asymmetric matching data.
Abstract: Our goal is to understand how the Retinex parameters affect the predictions of the model. A simplified Retinex computation is specified in the recent MATLAB™ implementation; however, there remain several free parameters that introduce significant variability into the model's predictions. We extend previous work on specifying these parameters. In particular, instead of looking for fixed values for the parameters, we establish methods that automatically determine values for them based on the input image. These methods are tested on the McCann-McKee-Taylor asymmetric matching data, along with some previously unpublished data that include simultaneous contrast targets.

Journal ArticleDOI
TL;DR: A new adaptive center-weighted hybrid mean and median filter is formulated and used within a novel optimal-size windowing framework to reduce the effects of two types of sensor noise, namely blue-channel noise and JPEG blocking artifacts, common in high-ISO digital camera images.
Abstract: This paper presents a new methodology for the reduction of sensor noise from images acquired using digital cameras at high International Organization for Standardization (ISO) and long-exposure settings. The problem lies in the fact that the algorithm must deal with hardware-related noise that affects certain color channels more than others and is thus nonuniform over all color channels. A new adaptive center-weighted hybrid mean and median filter is formulated and used within a novel optimal-size windowing framework to reduce the effects of two types of sensor noise, namely blue-channel noise and JPEG blocking artifacts, common in high-ISO digital camera images. A third type of digital camera noise that affects long-exposure images and causes a type of sensor noise commonly known as "stuck-pixel" noise is dealt with by preprocessing the image with a new stuck-pixel prefilter formulation. Experimental results are presented with an analysis of the performance of the various filters in comparison with other standard noise reduction filters. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1668279)
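
A non-adaptive sketch of a center-weighted hybrid mean/median filter is given below to illustrate the basic operation: the center sample is replicated before the median is taken, and the result is blended with the window mean. The window size, center weight, and blend factor are assumptions; the paper's filter adapts these within an optimal-size windowing framework and adds a stuck-pixel prefilter.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def cw_hybrid_mean_median(channel, window=5, center_weight=3, alpha=0.5):
    """Center-weighted hybrid mean/median filter (non-adaptive sketch).

    For each window, the center pixel is repeated `center_weight` times in
    total before taking the median (making the filter gentler on fine
    detail), and the output blends that median with the window mean.
    """
    pad = window // 2
    padded = np.pad(channel.astype(float), pad, mode="reflect")
    wins = sliding_window_view(padded, (window, window))
    wins = wins.reshape(channel.shape[0], channel.shape[1], -1)
    center = channel.astype(float)[..., None]
    stacked = np.concatenate(
        [wins, np.repeat(center, center_weight - 1, axis=-1)], axis=-1)
    med = np.median(stacked, axis=-1)
    mean = wins.mean(axis=-1)
    return alpha * med + (1.0 - alpha) * mean
```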

Journal ArticleDOI
TL;DR: The research on capturing real-life scenes, calculating appearances, and rendering sensations on film and other limited dynamic-range media is recounted and the need for parallel studies of psychophysical measurements of human vision and computational algorithms used in commercial imaging systems is emphasized.
Abstract: This work recounts the research on capturing real-life scenes, calculating appearances, and rendering sensations on film and other limited dynamic-range media. It describes the first patents, a hardware display used in Land's Ives Medal Address in 1968, the first computer simulations using 20 × 24 pixel arrays, psychophysical experiments and computational models of color constancy and dynamic range compression, and the Frankle-McCann computationally efficient retinex image processing of 512 × 512 images. It includes several modifications of the original approach, including recent models of human vision and gamut-mapping applications. This work emphasizes the need for parallel studies of psychophysical measurements of human vision and computational algorithms used in commercial imaging systems. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1635831)

Journal ArticleDOI
TL;DR: The proposed method can eliminate the pattern noise in flat regions and the boundary effect found in some other conventional multiscale error diffusion methods and preserve the local features of the input image in the output.
Abstract: Multiscale error diffusion is superior to conventional error diffusion methods in digital halftoning as it can eliminate directional hysteresis completely. However, there is a bias to favor a particular type of dots in the course of the halftoning process. A new multiscale error diffusion method is proposed to improve the diffusion performance by reducing the aforementioned bias. The proposed method can eliminate the pattern noise in flat regions and the boundary effect found in some other conventional multiscale error diffusion methods. At the same time, it can preserve the local features of the input image in the output. This is critical to quality, especially when the resolution of the output is limited by the physical constraints of the display unit. © 2004 SPIE and IS&T. (DOI: 10.1117/1.1758728)
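
For context, the conventional single-scale error diffusion that the multiscale scheme is measured against can be written in a few lines. The sketch uses Floyd-Steinberg weights on an 8-bit input as a representative example; its fixed raster-order processing is the source of the directional hysteresis that multiscale error diffusion avoids.

```python
import numpy as np

def floyd_steinberg(gray):
    """Conventional raster-order error diffusion (Floyd-Steinberg weights).

    Serves only as the single-scale baseline; the left-to-right,
    top-to-bottom processing order is what causes directional hysteresis.
    Expects an 8-bit grayscale image; returns a 0/255 halftone.
    """
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128.0 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```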

Journal ArticleDOI
TL;DR: The Retinex idea, which required that models of color appearance evaluate all the pixels in the field of view as input, was a distinct departure from the point-by-point thinking that dominated physics and colorimetry.
Abstract: Edwin Land first described the Retinex idea in the 1963 RESA William Proctor Prize Address, Cleveland, Ohio, on December 30, 1963. He said that it was fruitful to suggest that receptors exist in sets. A Retinex is all mechanisms from retina to cortex necessary to form images in terms of lightness. This was a distinct departure from the point-by-point thinking that dominated physics and colorimetry. It required that models of color appearance evaluate all the pixels in the field of view as input. It is difficult in today’s world, dominated by digital images, to imagine just how novel this idea was in the 1960s. Nevertheless, many experiments in the 60s were fundamental to our understanding of human vision. Hubel and Wiesel’s studies of cat and monkey cortex, Land’s Mondrians, and Campbell and Robson’s work on human spatial frequency responses all made a strong

Journal ArticleDOI
TL;DR: This entry concerns the book Colour Engineering: Achieving Device Independent Colour.

Journal ArticleDOI
TL;DR: A new class of AM/FM halftoning algorithms that simultaneously modulate the dot size and dot density is presented, with the major advantages of better stability than FM methods through the formation of larger dot clusters, better moiré resistance than AM methods through irregular dot placement, and improved quality through systematic optimization of the dot size and dot density at each gray level.
Abstract: Conventional digital halftoning approaches function by modulating either the dot size (amplitude modulation (AM)) or the dot density (frequency modulation (FM)). Generally, AM halftoning methods have the advantage of low computation and good print stability, while FM halftoning methods typically have higher spatial resolution and resistance to moiré artifacts. In this paper, we present a new class of AM/FM halftoning algorithms that simultaneously modulate the dot size and density. The major advantages of AM/FM halftoning are better stability than FM methods through the formation of larger dot clusters; better moiré resistance than AM methods through irregular dot placement; and improved quality through systematic optimization of the dot size and dot density at each gray level. We present a general method for optimizing the AM/FM method for specific printers, and we apply this method to an electrophotographic printer using pulse width modulation technology.

Journal ArticleDOI
TL;DR: A general smoothing procedure with a single parameter that allows the degree of smoothing to be controlled is introduced, along with a parameter-free method for smoothing zoomed binary images in 2D or 3D while preserving topology.
Abstract: We introduce the homotopic alternating sequential filter as a new method for smoothing two-dimensional (2D) and three-dimensional (3D) objects in binary images. Unlike existing methods, our method offers a strict guarantee of topology preservation. This property is ensured by the exclusive use of homotopic transformations defined in the framework of digital topology. Smoothness is obtained by the use of morphological openings and closings by metric disks or balls of increasing radius, in the manner of an alternating sequential filter. The homotopic alternating sequential filter operates both on the object and on the background, in an equilibrated way. It takes an original image X and a control image C as input, and smooths X "as much as possible" while respecting the topology of X and geometrical constraints implicitly represented by C. Based on this filter, we introduce a general smoothing procedure with a single parameter that allows the degree of smoothing to be controlled. Furthermore, the result of this procedure presents small variations in response to small variations of the parameter value. We also propose a parameter-free method for smoothing zoomed binary images in 2D or 3D while preserving topology.
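
A plain (non-homotopic) alternating sequential filter is easy to state: openings and closings by disks of increasing radius applied in alternation. The sketch below shows that baseline only; it does not implement the homotopic transformations or the control image C that give the filter described above its topology-preservation guarantee.

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Binary disk structuring element of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def alternating_sequential_filter(binary_img, max_radius=5):
    """Plain 2D alternating sequential filter (no topology guarantee).

    Applies opening then closing with disks of radius 1..max_radius.
    The homotopic variant in the paper constrains every step so that the
    topology of the object and the background never changes.
    """
    out = binary_img.astype(bool)
    for r in range(1, max_radius + 1):
        se = disk(r)
        out = ndimage.binary_opening(out, structure=se)
        out = ndimage.binary_closing(out, structure=se)
    return out
```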

Journal ArticleDOI
TL;DR: This work enhances the registration tolerance to obtain the third image and reduces the difficulty of superimposing the image while allowing a variety of gray levels, by extending dot-clustered subpixel arrangements and enabling continuous gray-scale subpixel values.
Abstract: Extended visual cryptography [Ateniese et al., Theor. Comput. Sci. 250, 143–161 (2001)] is a method that encodes a number of images so that when the images are superimposed, a hidden image appears while the original images disappear. The decryption is done directly by human eyes without cryptographic calculations. Our proposed system takes three natural images as input and generates two images that are modifications of two of the input pictures. The third picture is viewed by superimposing the two output images. A trade-off exists between the number of gray levels and the difficulty in stacking the two sheets. Our new approach enhances the registration tolerance to obtain the third image and reduces the difficulty of superimposing the image while allowing a variety of gray levels. It is done by extending dot-clustered subpixel arrangements and enabling continuous gray-scale subpixel values. The system has considerably enhanced tolerance to the registration error. We show this by superimposing the output by computer simulation and calculating the peak SNRs with the original images.

Journal ArticleDOI
Yu-Chen Hu
TL;DR: The proposed scheme improves the performance of the MPBTC scheme and is well suited to multimedia applications with low computational cost requirements.
Abstract: In this paper, a novel image compression scheme based on the moment-preserving block truncation coding (MPBTC) scheme is proposed. In this scheme, three techniques are employed to cut down the bit rate of moment-preserving block truncation coding: the two-dimensional prediction technique, the bit map omission technique, and bit map coding using edge patterns. According to the experimental results, the proposed scheme not only achieves good image quality at low bit rate, but also requires little computational cost in the encoding/decoding procedures. In other words, the proposed scheme indeed improves the performance of the MPBTC scheme and is well suited to multimedia applications with low computational cost requirements.
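
The underlying moment-preserving BTC quantizer can be sketched directly: each block is mapped to two levels a and b, chosen in closed form so that the block mean and standard deviation are preserved. The sketch below follows the classic Delp-Mitchell formulation on a single block; the paper's bit-rate reduction techniques (two-dimensional prediction, bit map omission, edge-pattern coding) are not reproduced.

```python
import numpy as np

def mpbtc_block(block):
    """Moment-preserving BTC for a single image block.

    Pixels at or above the block mean are mapped to the high level b, the
    rest to the low level a, with a and b chosen so that the block mean
    and standard deviation are preserved (Delp-Mitchell BTC).
    Returns the reconstructed block and the bit map.
    """
    block = block.astype(float)
    m = block.size
    mean, std = block.mean(), block.std()
    bitmap = block >= mean
    q = int(bitmap.sum())                      # number of "high" pixels
    if q == 0 or q == m:                       # flat block: single level
        return np.full_like(block, mean), bitmap
    a = mean - std * np.sqrt(q / (m - q))      # low reconstruction level
    b = mean + std * np.sqrt((m - q) / q)      # high reconstruction level
    return np.where(bitmap, b, a), bitmap
```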

Journal ArticleDOI
TL;DR: This work addresses the issues involved in the watermarking of rate-scalable video streams delivered using a practical network and presents a review of streaming video, including scalable video compression and network transport, followed by a brief review of video watermarks.
Abstract: Video streaming, or the real-time delivery of video over a data network, is the underlying technology behind many applications including video conferencing, video on demand, and the delivery of educational and entertainment content. In many applications, particularly ones involving entertainment content, security issues, such as conditional access and copy protection, must be addressed. To resolve these security issues, techniques that include encryption and watermarking need to be developed. Since video sequences will often be compressed using a scalable compression technique and transported over a lossy packet network using the Internet Protocol (IP), security techniques must be compatible with the compression method and data transport and be robust to errors. We address the issues involved in the watermarking of rate-scalable video streams delivered using a practical network. Watermarking is the embedding of a signal (the watermark) into a video stream that is imperceptible when the stream is viewed, but can be detected by a watermark detector. Many watermarking techniques have been proposed for digital images and video, but the issues of streaming have not been fully investigated. A review of streaming video is presented, including scalable video compression and network transport, followed by a brief review of video watermarking and the discussion of watermarking streaming video.

Journal ArticleDOI
TL;DR: The proposed genetic-algorithm-based fuzzy-logic approach is an effective method for computer-aided diagnosis in disease classification, achieving an average accuracy of 96% for myocardial heart disease and an accuracy of 88.5% at the 100% sensitivity level for microcalcifications on mammograms.
Abstract: In this paper we present a genetic-algorithm-based fuzzy-logic approach for computer-aided diagnosis scheme in medical imaging. The scheme is applied to discriminate myocardial heart disease from echocardiographic images and to detect and classify clustered microcalcifications from mammograms. Unlike the conventional types of membership functions such as trapezoid, triangle, S curve, and singleton used in fuzzy reasoning, Gaussian-distributed fuzzy membership functions (GDMFs) are employed in the present study. The GDMFs are initially generated using various texture-based features obtained from reference images. Subsequently the shapes of GDMFs are optimized by a genetic-algorithm learning process. After optimization, the classifier is used for disease discrimination. The results of our experiments are very promising. We achieve an average accuracy of 96% for myocardial heart disease and accuracy of 88.5% at 100% sensitivity level for microcalcification on mammograms. The results demonstrated that our proposed genetic-algorithm-based fuzzy-logic approach is an effective method for computer-aided diagnosis in disease classification.

Journal ArticleDOI
TL;DR: A technique is proposed for the estimation of the regularization parameter for image resolution enhancement (superresolution), based on the assumptions that it should be a function of the regularized noise power of the data and that its choice should yield a convex functional whose minimization would give the desired high-resolution image.
Abstract: We propose a technique for the estimation of the regularization parameter for image resolution enhancement (superresolution) based on the assumptions that it should be a function of the regularized noise power of the data and that its choice should yield a convex functional whose minimization would give the desired high-resolution image. The regularization parameter acts adaptively to determine the trade-off between fidelity to the received data and prior information about the image. Experimental results are presented and conclusions are drawn.
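
In its simplest matrix form, the functional in question is ||y - Hx||^2 + lambda*||Cx||^2, and the point of the paper is that lambda should be derived from the data rather than hand-tuned. The sketch below solves that functional for a given lambda and pairs it with a crude residual-power heuristic as a stand-in for the paper's adaptive choice; H, C, and the heuristic itself are assumptions made for illustration.

```python
import numpy as np

def regularized_restore(H, C, y, lam):
    """Solve min_x ||y - Hx||^2 + lam * ||Cx||^2 via the normal equations.

    H: degradation matrix, C: regularization operator (e.g., a discrete
    Laplacian), y: observed data vector, lam: regularization parameter.
    """
    A = H.T @ H + lam * (C.T @ C)
    return np.linalg.solve(A, H.T @ y)

def heuristic_lambda(H, y, x0):
    """Crude stand-in for an adaptive choice: tie lambda to the residual
    (noise) power of an initial estimate x0.  Illustrative only; not the
    paper's rule."""
    residual = y - H @ x0
    return float(residual @ residual) / max(float(x0 @ x0), 1e-12)
```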

Journal ArticleDOI
TL;DR: On-site quantitative and qualitative evaluations of the various decluttered images by airport screeners establish that the single slice from the image hashing algorithm outperforms traditional enhancement techniques, with a noted increase of 58% in low-density threat detection rates.
Abstract: Very few image processing applications have dealt with x-ray luggage scenes in the past. Concealed threats in general, and low-density items in particular, pose a major challenge to airport screeners. A simple enhancement method for data decluttering is introduced. Initially, the method is applied using manually selected thresholds to progressively generate decluttered slices. Further automation of the algorithm, using a novel metric based on the Radon transform, is conducted to determine the optimum number and values of thresholds and to generate a single optimum slice for screener interpretation. A comparison of the newly developed metric to other known metrics demonstrates the merits of the new approach. On-site quantitative and qualitative evaluations of the various decluttered images by airport screeners further establish that the single slice from the image hashing algorithm outperforms traditional enhancement techniques, with a noted increase of 58% in low-density threat detection rates. © 2004 SPIE and IS&T.

Journal ArticleDOI
TL;DR: A new color conversion method for multi-primary display that reduces the observer metamerism is proposed and gives the multi-dimensional control value of a display device to minimize the spectral approximation error under the constraints of tristimulus match.
Abstract: In conventional color reproduction based on the colorimetric match for a standard observer, color mismatch can be perceived if the color matching functions of the observer deviate from those of the standard observer; this phenomenon is known as observer metamerism. Recently, multi-primary display, using more than three primary colors, has attracted attention as a color reproduction medium because of its expanded gamut and its potential to reduce the color mismatch caused by observer metamerism. In this paper, a new color conversion method for multi-primary displays that reduces observer metamerism is proposed. The proposed method gives the multi-dimensional control value of a display device that minimizes the spectral approximation error under the constraint of a tristimulus match. The spectrum reproduced by a seven-primary display is simulated and evaluated using the color matching functions of Stiles's 20 observers. The results confirm that the proposed method reduces the color reproduction error caused by observer variability compared to other seven-primary reproductions and conventional three-primary reproduction. Preliminary visual evaluation results with a seven-primary display using light-emitting diodes are also presented.
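
The stated optimization, minimizing the spectral approximation error subject to an exact tristimulus match, is a linearly constrained least-squares problem. The sketch below solves it with a null-space method; the primary spectra P, the observer matrix T, and the rank tolerance are placeholders, and the paper's seven-primary hardware, gamut handling, and observer-variability evaluation are not modeled.

```python
import numpy as np

def multiprimary_control(P, T, target_spectrum, target_xyz):
    """Control values w minimizing ||P w - s|| subject to T P w = XYZ.

    P: (n_wavelengths, n_primaries) primary spectra.
    T: (3, n_wavelengths) color matching functions of the standard observer.
    A particular solution enforces the tristimulus match; the remaining
    freedom (null space of T P) is used to reduce the spectral error.
    """
    TP = T @ P                                            # (3, n_primaries)
    w0, *_ = np.linalg.lstsq(TP, target_xyz, rcond=None)  # meets constraint
    _, s, vt = np.linalg.svd(TP)
    null = vt[np.sum(s > 1e-10):].T                       # null-space basis
    if null.shape[1]:
        z, *_ = np.linalg.lstsq(P @ null, target_spectrum - P @ w0,
                                rcond=None)
        w0 = w0 + null @ z                                # still matches XYZ
    return w0
```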

Journal ArticleDOI
TL;DR: An artificial vision system for the quality control of cherries based on the color as an indicator of ripeness, the presence of defects such as cracking, and the size is described.
Abstract: We describe an artificial vision system for the quality control of cherries. We develop image-processing software that determines three types of information describing the quality of the fruit: the color as an indicator of ripeness, the presence of defects such as cracking, and the size. The sorting of cherries based on all these criteria is carried out at a high cadence (20 cherries/s), so real-time image processing is necessary. A simple sensor is used to synchronize the processing and the sorting of the cherries, and air actuators are used to eject them after quality control by vision. We present the architecture of the developed system, show its efficiency through experimental results, and give some perspectives for improvement.