
Showing papers in "Journal of Electronic Imaging in 1999"


Journal ArticleDOI
TL;DR: Operations based on mathematical morphology which have been developed for binary and grayscale images are extended to color images and a set-theoretic analysis of these vector operations is presented.
Abstract: In this paper operations based on mathematical morphology which have been developed for binary and grayscale images are extended to color images. We investigate two approaches for ‘‘color morphology’’—a vector approach and a component-wise approach. New vector morphological filtering operations are defined, and a set-theoretic analysis of these vector operations is presented. We also present experimental results comparing the performance of the vector approach and the component-wise approach for multiscale color image analysis and for noise suppression in color images.

173 citations
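
For illustration, a minimal sketch of the component-wise approach compared in this paper: grayscale erosion and dilation applied to each channel independently. The vector ordering used in the paper's vector approach is not reproduced here; scipy is assumed to be available and the function names are ours.

```python
# Component-wise color morphology: apply grayscale erosion/dilation to each
# channel independently. This illustrates one of the two approaches compared
# in the paper; the vector approach (which requires an ordering of color
# vectors) is not reproduced here. Hypothetical sketch, not the authors' code.
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def componentwise_erosion(rgb, size=(3, 3)):
    """Erode each color channel separately with a flat structuring element."""
    return np.stack(
        [grey_erosion(rgb[..., c], size=size) for c in range(rgb.shape[-1])],
        axis=-1,
    )

def componentwise_opening(rgb, size=(3, 3)):
    """Opening = erosion followed by dilation, applied per channel."""
    eroded = componentwise_erosion(rgb, size)
    return np.stack(
        [grey_dilation(eroded[..., c], size=size) for c in range(eroded.shape[-1])],
        axis=-1,
    )
```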


Journal ArticleDOI
Raja Balasubramanian1
TL;DR: Techniques for optimizing the Neugebauer model include optimization of the Yule–Nielsen factor that accounts for light scattering in the paper, estimation of the dot area functions, and extension to a cellular model.
Abstract: A colorimetric printer model takes as its input a set of ink values and predicts the resulting printed color, as specified by reflectance or tristimulus values. The Neugebauer model has been widely used to predict the colorimetric response of halftone color printers. In this paper, techniques for optimizing the Neugebauer model are presented and compared. These include optimization of the Yule–Nielsen factor that accounts for light scattering in the paper, estimation of the dot area functions, and extension to a cellular model. A new technique is described for optimizing the Neugebauer primaries using weighted spectral regression. Experimental results are presented for xerographic printers using two halftone screens: the random or rotated dot, and the dot-on-dot screen. Use of the Yule–Nielsen factor, the cellular framework, and spectral regression considerably increase model accuracy.

139 citations
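
For reference, a hedged sketch of the Yule–Nielsen modified spectral Neugebauer prediction for a three-ink halftone. The Demichel dot-overlap weights and the function names are illustrative assumptions; the cellular extension and the regression-based primary optimization described in the paper are omitted.

```python
# Yule-Nielsen modified spectral Neugebauer prediction for a 3-ink (CMY)
# halftone. Primary spectra are the measured reflectances of the 8 Neugebauer
# primaries (paper, C, M, Y, CM, CY, MY, CMY); the coverage weights follow the
# Demichel equations for random dot overlap. Illustrative sketch only.
import numpy as np

def demichel_weights(c, m, y):
    """Fractional area coverage of the 8 Neugebauer primaries."""
    return np.array([
        (1 - c) * (1 - m) * (1 - y),  # paper
        c * (1 - m) * (1 - y),        # cyan
        (1 - c) * m * (1 - y),        # magenta
        (1 - c) * (1 - m) * y,        # yellow
        c * m * (1 - y),              # cyan + magenta
        c * (1 - m) * y,              # cyan + yellow
        (1 - c) * m * y,              # magenta + yellow
        c * m * y,                    # cyan + magenta + yellow
    ])

def yule_nielsen_neugebauer(coverages, primary_spectra, n=2.0):
    """Predict the printed reflectance spectrum.

    coverages       : (c, m, y) effective dot areas in [0, 1]
    primary_spectra : array (8, n_wavelengths) of primary reflectances
    n               : Yule-Nielsen factor accounting for light scattering
    """
    w = demichel_weights(*coverages)
    return (w @ primary_spectra ** (1.0 / n)) ** n
```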


Journal ArticleDOI
TL;DR: The results of this study showed that it was possible to maintain the perceived lightness contrast of the images by using sigmoidal contrast enhancement functions to selectively rescale images from a source device with a full dynamic range into a destination device with a limited dynamic range.
Abstract: In color gamut mapping of pictorial images, the lightness rendition of the mapped images plays a major role in the quality of the final image. For color gamut mapping tasks, where the goal is to produce a match to the original scene, it is important to maintain the perceived lightness contrast of the original image. Typical lightness remapping functions such as linear compression, soft compression, and hard clipping reduce the lightness contrast of the input image. Sigmoidal remapping functions were utilized to overcome the natural loss in perceived lightness contrast that results when an image from a full dynamic range device is scaled into the limited dynamic range of a destination device. These functions were tuned to the particular lightness characteristics of the images used and the selected dynamic ranges. The sigmoidal remapping functions were selected based on an empirical contrast enhancement model developed from the results of a psychophysical adjustment experiment. The results of this study showed that it was possible to maintain the perceived lightness contrast of the images by using sigmoidal contrast enhancement functions to selectively rescale images from a source device with a full dynamic range into a destination device with a limited dynamic range. © 1998 SPIE--The International Society for Optical Engineering.

129 citations
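
A minimal sketch of a sigmoidal lightness remapping of the kind the paper evaluates, assuming a logistic curve normalized to the destination lightness range. The parameters here (x0, slope) are placeholders, not the paper's empirically fitted values.

```python
# Sigmoidal lightness remapping: compress source L* into a destination range
# while boosting mid-tone contrast to preserve perceived lightness contrast.
# The logistic form and the parameter values are illustrative assumptions.
import numpy as np

def sigmoidal_remap(L, L_min_out, L_max_out, x0=50.0, slope=0.08):
    """Map CIELAB L* values in [0, 100] into [L_min_out, L_max_out] sigmoidally."""
    s = 1.0 / (1.0 + np.exp(-slope * (L - x0)))
    # Normalize so that L = 0 and L = 100 hit the ends of the output range.
    s0 = 1.0 / (1.0 + np.exp(-slope * (0.0 - x0)))
    s1 = 1.0 / (1.0 + np.exp(-slope * (100.0 - x0)))
    s = (s - s0) / (s1 - s0)
    return L_min_out + s * (L_max_out - L_min_out)
```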


Journal ArticleDOI
TL;DR: A set of multichannel camera systems and algorithms is described for recovering both the surface spectral-reflectance function and the illuminant spectral-power distribution from the data of spectral imaging.
Abstract: A set of multichannel camera systems and algorithms is described for recovering both the surface spectral-reflectance function and the illuminant spectral-power distribution from the data of spectral imaging. We show a camera system with six spectral channels of fixed wavelength bands, built from a monochrome CCD camera, six different color filters, and a personal computer. The dynamic range of the camera is extended for sensing the high intensity levels of highlights. We assume that object surfaces in the scene are inhomogeneous dielectric materials described by the dichromatic reflection model. The process for estimating the spectral information consists of several steps: (1) finite-dimensional linear model representation of wavelength functions, (2) illuminant estimation, (3) data normalization and image segmentation, and (4) reflectance estimation. The reliability of the camera system and the algorithms is demonstrated in an experiment. Finally, a new type of system using liquid crystal filters is briefly introduced.

82 citations
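
A hedged sketch of steps (1) and (4) of the pipeline above: reflectance estimation under a finite-dimensional linear model, assuming the camera sensitivities, the estimated illuminant, and a reflectance basis are already known. All symbols and function names here are ours.

```python
# Finite-dimensional linear-model reflectance estimation from multichannel
# camera responses: responses = S @ diag(E) @ B @ sigma, solved for the basis
# weights sigma by least squares. Illustrative sketch, not the authors' code.
import numpy as np

def estimate_reflectance(responses, sensitivities, illuminant, basis):
    """
    responses     : (n_channels,) camera outputs for one pixel
    sensitivities : (n_channels, n_wavelengths) spectral sensitivities
    illuminant    : (n_wavelengths,) estimated illuminant power distribution
    basis         : (n_wavelengths, n_basis) reflectance basis functions
    returns       : (n_wavelengths,) estimated surface reflectance
    """
    A = sensitivities @ np.diag(illuminant) @ basis   # (n_channels, n_basis)
    sigma, *_ = np.linalg.lstsq(A, responses, rcond=None)
    return basis @ sigma
```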


Journal ArticleDOI
TL;DR: A method of enhancing color images by applying histogram equalization to the saturation component in the color difference (C-Y) color space, taking into account the relationship that exists between luminance and saturation and how the luminance value limits the range of possible saturations.
Abstract: Histogram equalization and specification have been widely used to enhance the content of grayscale images, with histogram specification having the advantage that the output histogram can be specified, whereas histogram equalization attempts to produce an output histogram that is uniform. Unfortunately, extending histogram techniques to color images is not very straightforward. Performing histogram specification on color images in the RGB color space results in specified histograms that are hard to interpret for a particular desired enhancement. Human perception interprets a color in terms of its hue, saturation, and intensity components. In this paper, we describe a method of extending gray-level histogram specification to color images by performing histogram specification on the luminance (or intensity), saturation, and hue components in the color difference (C-Y) color space. This method takes into account the correlation between the hue, saturation, and intensity components while yielding specified histograms that have physical meaning. Histogram specification was performed on an example color image and was shown to enhance the color content and details within this image without introducing unwanted artifacts.

70 citations
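
For clarity, a generic sketch of gray-level histogram specification by CDF matching, the per-component operation applied in the C-Y space above. This helper is illustrative and is not the authors' implementation.

```python
# Histogram specification by CDF matching: remap a single channel so that its
# histogram approximates a given target histogram via inverse-CDF lookup.
import numpy as np

def specify_histogram(channel, target_hist, levels=256):
    """channel: integer array with values in [0, levels); target_hist: length
    `levels`, any scale. Returns the remapped channel."""
    src_hist = np.bincount(channel.ravel(), minlength=levels).astype(float)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    tgt_cdf = np.cumsum(np.asarray(target_hist, dtype=float))
    tgt_cdf = tgt_cdf / tgt_cdf[-1]
    # For each source level, find the target level with the closest CDF value.
    mapping = np.searchsorted(tgt_cdf, src_cdf).clip(0, levels - 1)
    return mapping[channel]
```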


Journal ArticleDOI
TL;DR: Three-dimensional gamut mapping using various color difference formulae and color spaces is considered, and it is concluded that ΔE94 in CIELUV and ΔEBFD in CIELAB are the two most useful combinations of color difference formula and color space for gamut mapping.
Abstract: Gamut mapping is a technique to transform out-of-gamut colors to the inside of the output device’s gamut. It is essential to develop effective mapping algorithms to realize ‘‘WYSIWYG’’ color reproduction. In this paper, three-dimensional gamut mapping using various color difference formulae and color spaces is considered. Visual experiments were performed to evaluate which combination of color difference formula and color space for gamut mapping was most preferred for five images. The color difference formulae used in the experiments were ΔE*ab, ΔE*uv, ΔE94, ΔECMC, ΔEBFD, and ΔEwt. The color spaces used in the experiments were CIELAB, CIELUV, CIECAM97s, IPT, and NC-IIIC. A clipping method was used that maps all out-of-gamut colors to the surface of the gamut, and no change was made to colors inside the gamut. It was found that gamut mapping using ΔE94, ΔECMC, and ΔEwt was effective in the CIELAB color space. For mapping images containing a large portion of blue colors, ΔEBFD and ΔE*uv were found to be more effective. ΔE*ab was least preferred for all images. With respect to color spaces, gamut mapping performed in the CIELUV color space was superior to the other color spaces for the blue region. We conclude that ΔE94 in CIELUV and ΔEBFD in CIELAB are the two most useful combinations of color difference formula and color space for gamut mapping, if we are to apply a single combination universally.

69 citations
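
A sketch of the clipping strategy evaluated above, assuming the gamut surface is available as a sampled point set: each out-of-gamut color is mapped to the boundary sample that minimizes ΔE94. The ΔE94 formula follows CIE94; the function names and the boundary representation are our assumptions.

```python
# CIE94 color difference and minimum-DeltaE gamut clipping to a sampled
# gamut boundary. Illustrative sketch, not the authors' implementation.
import numpy as np

def delta_e94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIE94 color difference between Lab arrays of shape (..., 3)."""
    L1, a1, b1 = np.moveaxis(lab1, -1, 0)
    L2, a2, b2 = np.moveaxis(lab2, -1, 0)
    C1 = np.hypot(a1, b1)
    C2 = np.hypot(a2, b2)
    dL, dC = L1 - L2, C1 - C2
    dH2 = np.maximum((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)
    SL, SC, SH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
    return np.sqrt((dL / (kL * SL)) ** 2 + (dC / (kC * SC)) ** 2 + dH2 / (kH * SH) ** 2)

def clip_to_gamut(lab, boundary_lab):
    """Return the boundary sample (shape (N, 3)) closest to `lab` under DeltaE94."""
    d = delta_e94(boundary_lab, lab[None, :])
    return boundary_lab[np.argmin(d)]
```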


Journal ArticleDOI
TL;DR: A new approach to bronchi segmentation in HRCT for estimating bronchial caliber is presented, and it is shown that, depending on the size of the bronchi, the estimation accuracy is up to 90%.
Abstract: Accurate estimation of bronchial caliber in high resolution computerized tomography (HRCT) is essential for physicians in the management and follow-up of patients with airway disease. Although there are at present different methods of bronchi analysis, none of them can provide an absolute diagnosis of bronchial caliber. The present paper addresses a new approach to bronchi segmentation in HRCT in order to estimate the bronchial caliber. The method developed is based on mathematical morphology theory, and relies on morphological filtering, marking techniques derived from the concept of connection cost, and conditional watershed-based segmentation. In order to evaluate the robustness of the segmentation and the accuracy of the caliber estimates, a realistic bronchi model based on physiological characteristics has been developed. It is shown that, depending on the size of the bronchi, the estimation accuracy is up to 90%.

65 citations


Journal ArticleDOI
TL;DR: A new unified radiography/fluoroscopy solid-state detector concept is presented and a digital image processing algorithm for the reduction of noise in images acquired with low x-ray dose is described.
Abstract: This contribution discusses a selection of today’s techniques and future concepts for digital x-ray imaging in medicine. Advantages of digital imaging over conventional analog methods include the possibility to archive and transmit images in digital information systems as well as to digitally process pictures before display, for example, to enhance low contrast details. After reviewing two digital x-ray radiography systems for the capture of still x-ray images, we examine the real time acquisition of dynamic x-ray images (x-ray fluoroscopy). Here, particular attention is paid to the implications of introducing charge-coupled device cameras. We then present a new unified radiography/fluoroscopy solid-state detector concept. As digital image quality is predominantly determined by the relation of signal and noise, aspects of signal transfer, noise, and noise-related quality measures like detective quantum efficiency feature prominently in our discussions. Finally, we describe a digital image processing algorithm for the reduction of noise in images acquired with low x-ray dose. © 1999 SPIE and IS&T.

62 citations
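
For reference, the detective quantum efficiency mentioned above is the spatial-frequency-dependent ratio of output to input signal-to-noise ratio; one common estimator (notation ours, not taken from the paper) is

```latex
\mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
\;=\; \frac{\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NNPS}(f)},
```

where \bar{q} is the incident x-ray quanta per unit area and NNPS(f) is the noise power spectrum normalized by the squared large-area signal.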


Journal ArticleDOI
TL;DR: The results show that the new amorphous three-channel sensors perform comparably with commercial charge coupled device cameras under the conditions tested, while the six-channel sensor's performance is striking.
Abstract: This paper describes a new type of multichannel color sensor with the special property of having all channels per pixel at the same spatial location. This arrangement is accomplished by stacking three amorphous thin film detectors on a substrate (glass or silicon), one on top of the other. It has the advantage that the color aliasing effect is avoided. This effect produces large color errors when objects of high spatial frequency are captured using multichannel imaging sensors with color filter arrays. The new technique enables the design of a three-channel sensor as well as a six-channel sensor, the latter being valuable even as a single-pixel device for metrology. In the six-channel case, color is captured in two "shots" by changing the bias voltages. The sensors are characterized colorimetrically, including methods such as multiple polynomial regression both for tristimulus and spectral reconstruction, and the smoothing inverse for spectral reconstruction. The results obtained with different types of regression polynomials, different sensors, and different characterization methods are compared. The results show that the new amorphous three-channel sensors perform comparably with commercial charge coupled device cameras under the conditions tested, while the six-channel sensor's performance is striking. © 1999 SPIE and IS&T.

40 citations
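
A hedged sketch of colorimetric characterization by multiple polynomial regression, one of the methods named above: sensor responses are expanded into polynomial terms and mapped to CIE XYZ by a least-squares fit on training patches. The expansion order and all names are illustrative assumptions.

```python
# Colorimetric characterization by multiple polynomial regression:
# channel responses -> polynomial terms -> XYZ via a fitted regression matrix.
import numpy as np
from itertools import combinations_with_replacement

def poly_expand(channels, degree=2):
    """Expand (n_samples, n_channels) responses into polynomial terms."""
    n, c = channels.shape
    terms = [np.ones(n)]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(c), d):
            terms.append(np.prod(channels[:, idx], axis=1))
    return np.column_stack(terms)

def fit_characterization(train_channels, train_xyz, degree=2):
    """Return matrix M such that poly_expand(channels) @ M approximates XYZ."""
    X = poly_expand(train_channels, degree)
    M, *_ = np.linalg.lstsq(X, train_xyz, rcond=None)
    return M

def predict_xyz(channels, M, degree=2):
    return poly_expand(channels, degree) @ M
```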


Journal ArticleDOI
TL;DR: Transform-based denoising methods for removing blocking and ringing artifacts from decompressed block transform or wavelet coded images are also applied; the method is universal and applies to any compression method used.
Abstract: A new algorithm for removing mixed noise from images, based on combining an impulse removal operation with local adaptive filtering in the transform domain, is proposed in this paper. The key point is that the impulse removal operation is designed so that it removes impulses while maintaining as much as possible of the frequency content of the original image. The second stage is an adaptive denoising operation based on a local transform. The proposed algorithm works well in denoising images corrupted by white (Gaussian, Laplacian, exponential) noise, impulsive noise, and their mixtures. Comparison of the new algorithm with known techniques for removing mixed noise from images shows the advantages of the new approach, both quantitatively and visually. In this paper we also apply transform-based denoising methods for removing blocking and ringing artifacts from decompressed block transform or wavelet coded images. The method is universal and applies to any compression method used. © 1999 SPIE and IS&T. [S1017-9909(99)00803-X]

37 citations
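
A minimal two-stage sketch in the spirit of the algorithm above: a conservative median-based impulse replacement followed by blockwise DCT hard thresholding. The thresholds, block size, and overall structure are assumptions, not the authors' design.

```python
# Stage 1: replace only pixels that deviate strongly from their local median.
# Stage 2: hard-threshold small DCT coefficients in non-overlapping blocks.
import numpy as np
from scipy.ndimage import median_filter
from scipy.fft import dctn, idctn

def remove_impulses(img, threshold=40.0):
    """Replace suspected impulse pixels with the local 3x3 median."""
    med = median_filter(img, size=3)
    mask = np.abs(img - med) > threshold
    out = img.copy()
    out[mask] = med[mask]
    return out

def dct_denoise(img, block=8, thresh=25.0):
    """Blockwise local DCT denoising by hard thresholding."""
    out = np.array(img, dtype=float)
    H, W = img.shape
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            coeffs = dctn(img[i:i + block, j:j + block], norm='ortho')
            coeffs[np.abs(coeffs) < thresh] = 0.0
            out[i:i + block, j:j + block] = idctn(coeffs, norm='ortho')
    return out

def denoise_mixed(img):
    return dct_denoise(remove_impulses(img.astype(float)))
```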


Journal ArticleDOI
Keith T. Knox1
TL;DR: The history of the algorithm and its modifications are reviewed and three watershed events in the development of error diffusion will be described, together with the lessons learned along the way.
Abstract: As we approach the new millennium, error diffusion is approaching the 25th anniversary of its invention. Because of its exceptionally high image quality, it continues to be a popular choice among digital halftoning algorithms. Over the last 24 years, many attempts have been made to modify and improve the algorithm—to eliminate unwanted textures and to extend it to printing media and color. Some of these modifications have been very successful and are in use today. This paper will review the history of the algorithm and its modifications. Three watershed events in the development of error diffusion will be described, together with the lessons learned along the way. © 1999 SPIE and IS&T. (S1017-9909(99)00104-X)
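
For readers new to the topic, a plain sketch of the original Floyd–Steinberg error diffusion that this history builds on (bitonal output only; serpentine scanning and the later modifications discussed in the paper are omitted).

```python
# Floyd-Steinberg error diffusion: quantize each pixel and distribute the
# quantization error to unprocessed neighbors with the classic 7/16, 3/16,
# 5/16, 1/16 weights.
import numpy as np

def floyd_steinberg(gray):
    """Halftone a grayscale image with values in [0, 1] to a binary image."""
    img = gray.astype(float).copy()
    out = np.zeros_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < W:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < H:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < W:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```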

Journal ArticleDOI
TL;DR: A new technique for building stochastic clustered-dot screens is proposed, which may improve the tone reproduction of printers having significant dot gain and can be adapted to the characteristics of particular printing devices.
Abstract: A new technique for building stochastic clustered-dot screens is proposed. A large dither matrix comprising thousands of stochastically laid out screen dots is constructed by first laying out the screen dot centers. Screen dot centers are obtained by placing discrete disks of a chosen radius at free cell locations when traversing the dither array cells according to either a discretely rotated Hilbert space-filling curve or a random space-filling curve. After Delaunay triangulation of the screen dot centers, the maximal surface of each screen dot is computed and isointensity regions are created. This isointensity map is converted into an antialiased grayscale image, i.e., into an array of preliminary threshold values. These threshold values are renumbered to obtain the threshold values of the final dither threshold array. By changing the disk radius, the screen dot size can be adapted to the characteristics of particular printing devices. Larger screen dots may improve the tone reproduction of printers having significant dot gain.

Journal ArticleDOI
TL;DR: An adaptive vector quantizer using a clustering technique known as adaptive fuzzy leader clustering (AFLC), similar in concept to deterministic annealing (DA) for VQ codebook design, has been developed and exhibits much better performance than JPEG and EZW.
Abstract: An adaptive vector quantizer (VQ) using a clustering technique known as adaptive fuzzy leader clustering (AFLC), similar in concept to deterministic annealing (DA) for VQ codebook design, has been developed. This vector quantizer, AFLC-VQ, has been designed to vector quantize wavelet-decomposed subimages with optimal bit allocation. The high-resolution subimages at each level have been statistically analyzed to conform to generalized Gaussian probability distributions by selecting the optimal number of filter taps. The adaptive characteristics of AFLC-VQ result from AFLC, an algorithm that uses self-organizing neural networks with fuzzy membership values of the input samples for upgrading the cluster centroids based on well known optimization criteria. By generating codebooks containing codewords of varying bits, AFLC-VQ is capable of compressing large color/monochrome medical images at extremely low bit rates (0.1 bpp and less) while yielding high fidelity reconstructed images. The quality of the reconstructed images formed by AFLC-VQ has been compared with JPEG and EZW, the standard and the well known wavelet based compression technique (using scalar quantization), respectively, in terms of statistical performance criteria as well as visual perception. AFLC-VQ exhibits much better performance than the above techniques. JPEG and EZW were chosen as comparative benchmarks since these have been used in radiographic image compression. The superior performance of AFLC-VQ over LBG-VQ has been reported in earlier papers. © 1999 SPIE and IS&T. (S1017-9909(99)01301-X)

Journal ArticleDOI
TL;DR: A new method for design of image segmentation systems is reported, in which the criterion of optimality is automatically determined by learning from border tracing examples.
Abstract: This paper provides examples of several medical image analysis applications for which single-purpose border detection approaches were developed in the past. However, the utility of these and other existing automated and semiautomated medical image analysis systems is limited by their narrow, frequently single-purpose orientation. After a general approach to graph-based optimal border detection is overviewed, a new method for design of image segmentation systems is reported, in which the criterion of optimality is automatically determined by learning from border tracing examples. Border features employed in the designed method are selected from a predefined global set using radial-basis neural networks. The method was validated in intracardiac, intravascular, and ovarian ultrasound images. The achieved performance was comparable to that of our previously reported single-purpose border detection methods. Our approach facilitates development of general multi-purpose image segmentation systems that can be trained for different types of image segmentation applications. © 1999 SPIE and IS&T. (S1017-9909(99)01201-5)

Journal ArticleDOI
TL;DR: It is shown that, with this simple overmodulation scheme, it is possible to manipulate the dot patterns around the intermediate output levels to achieve desired halftone patterns.
Abstract: Multilevel halftoning (multitoning) is an extension of bitonal halftoning, in which the appearance of intermediate tones is created by the spatial modulation of more than two tones, i.e., black, white, and one or more shades of gray. In this paper, the conventional multitoning approach and a previously proposed approach, both using stochastic screen dithering, are investigated. A human visual model is employed to measure the perceived halftone error for both algorithms. The performance of each algorithm at gray levels near the printer's intermediate output levels is compared. Based on this study, a new overmodulation algorithm is proposed. The multitone output is mean preserving with respect to the input, and the new algorithm requires little additional computation. It will be shown that, with this simple overmodulation scheme, we are able to manipulate the dot patterns around the intermediate output levels to achieve desired halftone patterns. Implementation issues related to optimal output level selection and inkjet-printing simulation for this new scheme will also be reported. © 1999 SPIE and IS&T. (S1017-9909(99)00203-2)
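
A sketch of conventional screen-based multitoning as described above: each pixel is bracketed by its two nearest printer output levels and a stochastic threshold array decides which of the two is printed, preserving the local mean. The proposed overmodulation refinement is not reproduced; the level set and screen here are placeholders.

```python
# Conventional multitoning against a stochastic screen: choose between the two
# output levels bracketing the input so that the local average is preserved.
import numpy as np

def multitone(img, levels, screen):
    """
    img    : grayscale image with values in [0, 1]
    levels : sorted array of available output levels in [0, 1], e.g. [0, 1/3, 2/3, 1]
    screen : dither/threshold array with values in [0, 1), tiled over the image
    """
    levels = np.asarray(levels, dtype=float)
    H, W = img.shape
    reps = (H // screen.shape[0] + 1, W // screen.shape[1] + 1)
    thr = np.tile(screen, reps)[:H, :W]
    idx = np.clip(np.searchsorted(levels, img, side='right') - 1, 0, len(levels) - 2)
    lo, hi = levels[idx], levels[idx + 1]
    frac = (img - lo) / (hi - lo)            # position between the two levels
    return np.where(frac > thr, hi, lo)      # mean-preserving on average
```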

Journal ArticleDOI
TL;DR: Results show that the proposed procedure gives quite satisfactory detection performance; in particular, a 93% mean true positive detection rate is achieved at the price of one false positive per image when both wavelet features and gray level statistical features are used in the first step.
Abstract: The existence of clustered microcalcifications is one of the important early signs of breast cancer. This paper presents an image processing procedure for the automatic detection of clustered microcalcifications in digitized mammograms. In particular, a sensitivity range of around one false positive per image is targeted. The proposed method consists of two main steps. First, possible microcalcification pixels in the mammograms are segmented out using wavelet features or both wavelet features and gray level statistical features, and labeled into potential individual microcalcification objects by their spatial connectivity. Second, individual microcalcifications are detected by using the structure features extracted from the potential microcalcification objects. The classifiers used in these two steps are feedforward neural networks. The method is applied to a database of 40 mammograms (Nijmegen database) containing 105 clusters of microcalcifications. A free response operating characteristics curve is used to evaluate the performance. Results show that the proposed procedure gives quite satisfactory detection performance. In particular, a 93% mean true positive detection rate is achieved at the price of one false positive per image when both wavelet features and gray level statistical features are used in the first step. © 1999 SPIE and IS&T. (S1017-9909(99)00701-1)

Journal ArticleDOI
TL;DR: The edge of the moon is used as a high contrast target to perform a visible ''knife-edge'' modulation transfer function (MTF) test on a digital imaging system in geostationary orbit, which offers a means of testing the MTF and OTF of orbiting image acquisition devices as well as enhancing satellite imagery.
Abstract: The edge of the moon is used as a high contrast target to perform a visible ''knife-edge'' modulation transfer function (MTF) test on a digital imaging system in geostationary orbit. An image of the moon is taken in the camera's normal scanning mode, and traces across the sharpest edge are used to form an edge spread function (ESF). The ESF is then used to produce an MTF estimate. In a second trial, the imaging system stares as the lunar edge drifts by, creating an edge spread function with a much higher effective spatial sampling rate. In each case, a technique of combining and resampling traces is employed to adapt the knife-edge MTF technique for use with sampled data. The resulting MTF curves track ground test frequencies to within 5%. The phase transfer function is also extracted, and the process is repeated in the north/south direction. The functions are combined to produce a two-dimensional optical transfer function (OTF) which is used as an inverse filter to restore raw images via deconvolution. The approach thus offers a means of testing the MTF and OTF of orbiting image acquisition devices as well as enhancing satellite imagery. © 1999 SPIE and IS&T. (S1017-9909(99)00102-6)
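
A hedged sketch of the standard knife-edge computation outlined above, assuming the trace combining and resampling step has already produced a uniformly sampled ESF: differentiate to obtain the line spread function, Fourier transform, and normalize at zero frequency.

```python
# Knife-edge MTF estimation: ESF -> LSF (derivative) -> |FFT|, normalized so
# that MTF(0) = 1. A Hanning window reduces spectral leakage from the finite
# trace; windowing choice is an assumption, not the paper's processing.
import numpy as np

def mtf_from_esf(esf, sample_spacing):
    """Return (frequencies, MTF) from a uniformly sampled edge spread function."""
    lsf = np.gradient(esf, sample_spacing)          # line spread function
    lsf = lsf * np.hanning(lsf.size)                # window to reduce leakage
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]
    freqs = np.fft.rfftfreq(lsf.size, d=sample_spacing)
    return freqs, mtf
```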

Journal ArticleDOI
TL;DR: Several adaptive dithering techniques based on cluster dot dithering are introduced and discussed; it is clearly advantageous to allow variability in the dither cell size, using small cell sizes in image regions of fine detail and large cell sizes in image regions where gray tones are to be accurately reproduced.
Abstract: Cluster dot dithering is one of the most common halftoning techniques. It is fast, low in complexity, and allows for variability and inconsistencies in point spreads in printer outputs. Determination of the basic dither cell size is critical for the quality of the halftoning. There is a basic tradeoff between large and small cell sizes: spatial resolution versus gray tone resolution. Large dither cell sizes produce good tone resolution but poorly reproduce spatial details in the image. Small dither cells, on the other hand, produce fine spatial resolution but lack the tone resolution which produces smooth gray tone gradients in halftone images. Typically, cluster dot dithering assumes a predefined dither cell size that compromises between fine detail reproduction and good gray tone reproduction. It is clearly advantageous to allow variability in the dither cell size, using small cell sizes in image regions of fine details and large cell sizes in image regions where gray tones are to be accurately reproduced. In this paper, we introduce and discuss several adaptive dithering techniques based on cluster dot dithering. © 1999 SPIE and IS&T. (S1017-9909(99)00402-X)
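
As background for the adaptive variants, a sketch of plain ordered dithering with a tiled clustered-dot threshold matrix. The 4x4 matrix below is a generic example, not the paper's screens; an adaptive scheme would switch between cell sizes according to local image detail.

```python
# Ordered dithering against a tiled clustered-dot threshold matrix. The dot
# grows outward from the cell center as the gray level increases.
import numpy as np

CLUSTER_4x4 = (np.array([[12,  5,  6, 13],
                         [ 4,  0,  1,  7],
                         [11,  3,  2,  8],
                         [15, 10,  9, 14]]) + 0.5) / 16.0   # thresholds in (0, 1)

def cluster_dot_halftone(gray, cell=CLUSTER_4x4):
    """Binarize a [0, 1] grayscale image against a tiled clustered-dot screen."""
    H, W = gray.shape
    ch, cw = cell.shape
    thr = np.tile(cell, (H // ch + 1, W // cw + 1))[:H, :W]
    return (gray > thr).astype(np.uint8)
```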


Journal ArticleDOI
TL;DR: This paper applies granulometric segmentation to digitized mammograms in an unsupervised framework to determine the algorithm structure that best accords to an expert radiologist’s view of a set of mammograms.
Abstract: Segmentation via morphological granulometric features is based on fitting structuring elements into image topography from below and above. Each structuring element captures a specific texture content. This paper applies granulometric segmentation to digitized mammograms in an unsupervised framework. Granulometries based on a number of flat and nonflat structuring elements are computed, local size distributions are tabulated at each pixel, granulometric-moment features are derived from these size distributions to produce a feature vector at each pixel, the Karhunen–Loeve transform is applied for feature reduction, and Voronoi-based clustering is performed on the reduced Karhunen–Loeve feature set. Various algorithmic choices are considered, including window size and shape, number of clusters, and type of structuring elements. The algorithm is applied using only granulometric texture features, using gray-scale intensity along with the texture features, and on a compressed mammogram. Segmentation results are clinically evaluated to determine the algorithm structure that best accords to an expert radiologist’s view of a set of mammograms.

Journal ArticleDOI
TL;DR: A general definition for a nonlinear diffusion process is formulated using the concept of an activity image that can be calculated for several image components, and how the final activity image is fed through a watershed algorithm, yielding the segmentation of the image.
Abstract: Nonlinear diffusion processes and watershed algorithms have been well studied for gray-scale image segmentation. In this paper we extend the use of these techniques to color or multichannel images. First, we formulate a general definition for a nonlinear diffusion process using the concept of an activity image that can be calculated for several image components. Then, we explain how the final activity image, obtained as a result of the nonlinear diffusion process, is fed through a watershed algorithm, yielding the segmentation of the image. The qualitative performance of the algorithm is illustrated with results for both gray-scale and color photographic images. Finally, we discuss the segmentation results obtained using a few well-known color spaces and demonstrate that a color principal component analysis gives the best results.

Journal ArticleDOI
TL;DR: This work accelerates the encoding process by a priori discarding the low variance domains from the pool that are unlikely to be chosen for the fractal code, and may be exploited for an improved encoding of the domain indices, in effect raising the compression ratio.
Abstract: In fractal image compression an image is partitioned into ranges, for each of which a similar subimage, called a domain, is selected from a pool of subimages. However, only a fraction of this large pool is actually used in the fractal code. This subset can be characterized in two related ways: (1) it contains domains with relatively large intensity variation; (2) the collection of used domains is localized in those image regions that show a high degree of structure. Both observations lead to improvements of fractal image compression. First, we accelerate the encoding process by a priori discarding the low variance domains from the pool that are unlikely to be chosen for the fractal code. Second, the localization of the domains may be exploited for an improved encoding of the domain indices, in effect raising the compression ratio. When considering the performance of a variable rate fractal quadtree encoder we found that a speedup by a factor of 2–3 does not degrade the rate-distortion curve over compression ratios ranging from 5 up to 30.
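
A minimal sketch of the variance-based domain-pool pruning described above; the block size, sampling step, and fraction kept are illustrative choices, not the paper's settings.

```python
# Prune the fractal-coding domain pool by keeping only high-variance blocks,
# since flat domains are rarely selected during the range-domain search.
import numpy as np

def domain_pool(image, size, step):
    """Collect square domain blocks of side `size`, sampled every `step` pixels."""
    H, W = image.shape
    blocks, positions = [], []
    for i in range(0, H - size + 1, step):
        for j in range(0, W - size + 1, step):
            blocks.append(image[i:i + size, j:j + size])
            positions.append((i, j))
    return np.array(blocks, dtype=float), positions

def prune_by_variance(blocks, positions, keep_fraction=0.3):
    """Keep only the highest-variance fraction of the domain pool."""
    variances = blocks.reshape(len(blocks), -1).var(axis=1)
    order = np.argsort(variances)[::-1]
    n_keep = max(1, int(len(blocks) * keep_fraction))
    kept = order[:n_keep]
    return blocks[kept], [positions[k] for k in kept]
```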

Journal ArticleDOI
TL;DR: This work shows that the Retinex computational model is very well suited to solve the color constancy problem without any a priori information on the illuminant spectral distribution.
Abstract: Solving the color constancy problem in many applications implies the understanding of chromatic adaptation. The Retinex theory justifies chromatic adaptation, as well as other color illusions, on visual perception principles. Based on the above theory, we have derived an algorithm to solve the color constancy problem and to simulate chromatic adaptation. The evaluation of the results depends on the kind of application considered. Since our purpose is to contribute to the problem of color rendering for photorealistic image synthesis, we have devised a specific test approach. A virtual ''Mondrian'' patchwork has been created by applying a rendering algorithm with a photorealistic light model to generate images under different light sources. Trichromatic values of the computer generated patches are the input data for the Retinex algorithm, which computes new color corrected patches. The Euclidean and the ΔE94 distances in the CIELAB space, between the original and Retinex color corrected trichromatic values, have been calculated. A preliminary analysis of the just noticeable difference has also been done on some colors compared to the closest MacAdam ellipses. Our work shows that the Retinex computational model is very well suited to solve the color constancy problem without any a priori information on the illuminant spectral distribution. © 1999 SPIE and IS&T. (S1017-9909(99)01004-1)

Journal ArticleDOI
TL;DR: This paper shows surprisingly large discrepancies between CIE L*a*b* and isotropic observation-based color spaces, such as Munsell: L*a*b* chroma exaggerates yellows and underestimates blues, and computers allow three-dimensional lookup tables to convert instantly any measured L*a*b* value to interpolated Munsell Book values.
Abstract: Before doing extensive color gamut experiments, we wanted to test the uniformity of CIE L*a*b*. This paper shows surprisingly large discrepancies between CIE L*a*b* and isotropic observation-based color spaces, such as Munsell: (1) L*a*b* chroma exaggerates yellows and underestimates blues. (2) The average discrepancy between L*a*b* and ideal is 27%. (3) Chips with identical L*a*b* hue angles are not the same color. L*a*b* introduces errors larger than many gamut mapping corrections. We have isotropic data in the Munsell Book. Computers allow three-dimensional lookup tables to convert instantly any measured L*a*b* value to interpolated Munsell Book values. We call this space ML, Ma, and Mb in honor of Munsell. LUTs have been developed for both Lab-to-MLab and MLab-to-Lab. With this zero-error, isotropic space we can return our attention to the original problem of color-gamut image processing. © 1999 SPIE and IS&T. [S1017-9909(99)00804-1]
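
A sketch of the three-dimensional lookup-table conversion described above, assuming scipy's regular-grid interpolator. The grid spacing and node values here are a placeholder identity mapping, standing in for the Munsell-derived MLab data the paper builds.

```python
# 3-D LUT conversion from (L*, a*, b*) to (ML, Ma, Mb) by grid interpolation.
# The node values below are an identity placeholder, NOT Munsell data.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Grid axes spanning the Lab volume (node spacing is illustrative).
L_axis = np.linspace(0, 100, 11)
a_axis = np.linspace(-100, 100, 21)
b_axis = np.linspace(-100, 100, 21)

# Placeholder node values: one array per output component.
Lg, ag, bg = np.meshgrid(L_axis, a_axis, b_axis, indexing='ij')
interp_ML = RegularGridInterpolator((L_axis, a_axis, b_axis), Lg)
interp_Ma = RegularGridInterpolator((L_axis, a_axis, b_axis), ag)
interp_Mb = RegularGridInterpolator((L_axis, a_axis, b_axis), bg)

def lab_to_mlab(lab):
    """Interpolate (L*, a*, b*) triplets, shape (n, 3), through the 3-D LUT."""
    lab = np.atleast_2d(lab)
    return np.column_stack([interp_ML(lab), interp_Ma(lab), interp_Mb(lab)])
```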

Journal ArticleDOI
TL;DR: An improved dithering method using a Voronoi tessellation and three criteria to select among equally likely candidates for the locations of the largest voids and the tightest clusters is presented.
Abstract: Dithering quality of the void and cluster algorithm suffers due to fixed filter width and absence of a well-defined criterion for selecting among equally likely candidates during the computation of the locations of the tightest clusters and largest voids. Various researchers have addressed the issue of fixed filter width by adaptively changing the width with experimentally determined values. This paper addresses both aforementioned issues by using a Voronoi tessellation and three criteria to select among equally likely candidates. The algorithm uses vertices of the Voronoi tessellation, and the areas of the Voronoi regions to determine the locations of the largest voids and the tightest clusters. During void and cluster operations there may be multiple equally likely candidates for the locations of the largest voids and the tightest clusters. The selection among equally likely candidates is important when the number of candidates is larger than the number of dots for a given quantization level, or if there are candidates within the local neighborhood of one of the candidate points, or if a candidate’s Voronoi region shares one or more vertices with another candidate’s Voronoi region. Use of these methods leads to more uniform dot patterns for light and dark tones. The improved algorithm is compared with other dithering methods based on power spectrum characteristics and visual evaluation.

Journal ArticleDOI
TL;DR: Approaches presented in this paper divide the filter operation into two stages and apply constraints only to the first stage; such filters are advantageous since they are fully optimal with respect to certain subsets of the filter window.
Abstract: Filter design involves a trade-off between the size of the filter class over which optimization is to be performed and the size of the training sample. As the number of parameters determining the filter class grows, so too does the size of the training sample required to obtain a given degree of precision when estimating the optimal filter from the sample data. A common way to moderate the estimation problem is to use a constrained filter requiring less parameters, but then a trade-off between the theoretical filter performance and the estimation precision arises. The overall result strongly depends on the constraint type. Approaches presented in this paper divide the filter operation into two stages and apply constraints only to the first stage. Such filters are advantageous since they are fully optimal with respect to certain subsets of the filter window. Error expression, representation, and design methodology are discussed. A generic optimization algorithm for such two-stage filters is proposed. Special attention is paid to three particular cases, for which properties, design algorithms, and experimental results are provided: two-stage filters with linearly separable preprocessing, two-stage filters with restricted window preprocessing, and two-stage iterative filters.

Journal ArticleDOI
TL;DR: This paper describes a method which uses the skull as a landmark for automatic registration of computer tomography to magnetic resonance (MR) images and uses the resulting creaseness images to build a hierarchic structure which permits a robust and fast search.
Abstract: This paper describes a method which uses the skull as a landmark for automatic registration of computer tomography to magnetic resonance (MR) images. First, the skull is extracted from both images using a new creaseness operator. Then, the resulting creaseness images are used to build a hierarchic structure which permits a robust and fast search. We have justified experimentally the performance of several choices of our algorithm, and we have thoroughly tested its accuracy and robustness against the well-known mutual information method for five different pairs of images. We have found both comparable, and for certain MR images the proposed method achieves better performance. © 1999 SPIE and IS&T. (S1017-9909(99)00403-1)



Journal ArticleDOI
TL;DR: This paper describes a system for surface recovery and visualization of the three-dimensional topography of the optic nerve head, as support of early diagnosis and follow up of glaucoma, and is being especially refined to be used with medical images for clinical evaluation of some eye diseases.
Abstract: This paper describes a system for surface recovery and visualization of the three-dimensional topography of the optic nerve head, as support of early diagnosis and follow up of glaucoma. In stereo vision, depth information is obtained from triangulation of cor- responding points in a pair of stereo images. In this paper, the use of the cepstrum transformation as a disparity measurement tech- nique between corresponding windows of different block sizes is described. This measurement process is embedded within a coarse- to-fine depth-from-stereo algorithm, providing an initial range map with the depth information encoded as gray levels. These sparse depth data are processed through a cubic B-spline interpolation technique in order to obtain a smoother representation. This meth- odology is being especially refined to be used with medical images for clinical evaluation of some eye diseases such as open angle glaucoma, and is currently under testing for clinical evaluation and analysis of reproducibility and accuracy. © 1999 SPIE and IS&T. (S1017-9909(99)01101-0)