
Showing papers on "Grayscale" published in 1998


Patent
02 Jun 1998
TL;DR: In this paper, a method for generating a velocity-indicating, tomographic image of a sample in an optical coherence tomography system includes the steps of acquiring cross-correlation data from the interferometer, generating a grayscale image from the cross-correlation data indicative of the depth-dependent positions of scatterers in the sample, processing the cross-correlation data to produce a velocity value and location of a moving scatterer in the sample, assigning a color to the velocity value, and merging the color into the grayscale image at a point indicative of the moving scatterer's location.
Abstract: A method for generating a velocity-indicating, tomographic image of a sample in an optical coherence tomography system includes the steps of (a) acquiring cross-correlation data from the interferometer; (b) generating a grayscale image from the cross-correlation data indicative of the depth-dependent positions of scatterers in the sample; (c) processing the cross-correlation data to produce a velocity value and location of a moving scatterer in the sample; (d) assigning a color to the velocity value; and (e) merging the color into the grayscale image, at a point in the grayscale image indicative of the moving scatterer's location, to produce a velocity-indicating, tomographic image. Preferably a first color is assigned for a positive velocity value and a second color is assigned for a negative velocity value.
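For illustration, the final color-merging step can be pictured as painting velocity-coded pixels over the structural grayscale image. A minimal numpy sketch, not the patented method; the function name, threshold, and red/blue coding are assumptions:

```python
import numpy as np

def merge_velocity_colors(gray, velocity, threshold=0.0):
    """Overlay signed-velocity colors on a grayscale tomogram.
    gray: HxW array in [0, 1]; velocity: HxW signed values, 0 where
    no moving scatterer was detected."""
    rgb = np.repeat(gray[..., None], 3, axis=2)        # grayscale -> RGB
    moving = np.abs(velocity) > threshold               # locations of moving scatterers
    rgb[moving & (velocity > 0)] = [1.0, 0.0, 0.0]      # first color: positive velocity
    rgb[moving & (velocity < 0)] = [0.0, 0.0, 1.0]      # second color: negative velocity
    return rgb
```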

251 citations


Journal ArticleDOI
TL;DR: The authors' results show that mean-based filtering is consistently more effective than median-based algorithms for removing inhomogeneities in MR images, and that artifacts are frequently introduced into images at the most commonly used window sizes.
Abstract: Grayscale inhomogeneities in magnetic resonance (MR) images confound quantitative analysis of these images. Homomorphic unsharp masking and its variations have been commonly used as a post-processing method to remove inhomogeneities in MR images. However, little data is available in the literature assessing the relative effectiveness of these algorithms at removing inhomogeneities, or describing how these algorithms can affect image data. In this study, the authors address these questions quantitatively using simulated images with artificially constructed and empirically measured bias fields. The authors' results show that mean-based filtering is consistently more effective than median-based algorithms for removing inhomogeneities in MR images, and that artifacts are frequently introduced into images at the most commonly used window sizes. The authors' results demonstrate dramatic improvement in the effectiveness of the algorithms with significantly larger windows than are commonly used.
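Homomorphic unsharp masking in its textbook form divides the image by a heavily smoothed copy of itself and rescales by the global mean. A minimal sketch of that idea with a mean-based (or median-based) filter; the window size and epsilon are assumptions, not the study's settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def homomorphic_unsharp_mask(img, window=129, use_mean=True, eps=1e-6):
    """Bias-field correction: divide by a low-pass estimate of the
    inhomogeneity and rescale by the global mean. The study above argues
    for windows much larger than the commonly used ones."""
    img = img.astype(np.float64)
    smooth = uniform_filter(img, size=window) if use_mean else median_filter(img, size=window)
    return img * (img.mean() / (smooth + eps))
```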

243 citations


Book ChapterDOI
TL;DR: Each handwritten digit taken from US envelopes is represented by its 256 pixel values as a feature vector, used as input to a classifier that automatically assigns a digit class based on the pixel values.
Abstract: Figure 1 shows some handwritten digits taken from US envelopes. Each image consists of 16 × 16 pixels of greyscale values ranging from 0 to 255. These 256 pixel values are regarded as a feature vector to be used as input to a classifier, which will automatically assign a digit class based on the pixel values.
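A minimal sketch of this representation, assuming a simple nearest-neighbour classifier (the chapter itself discusses classifiers generally; array names here are illustrative):

```python
import numpy as np

def nearest_neighbour_digit(train_images, train_labels, test_image):
    """Treat each 16x16 greyscale digit as a 256-dimensional feature vector
    and classify a test digit by its nearest training example (Euclidean)."""
    X = train_images.reshape(len(train_images), -1).astype(float)   # (n, 256)
    x = test_image.reshape(-1).astype(float)                         # (256,)
    distances = np.linalg.norm(X - x, axis=1)
    return train_labels[np.argmin(distances)]
```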

132 citations


Proceedings ArticleDOI
TL;DR: Owing to poor understanding of tone scale reproduction and misconceptions about nonlinear coding, gamma has acquired a poor reputation in computer graphics and image processing; this paper aims to make gamma respectable again.
Abstract: Gamma characterizes the reproduction of tone scale in an imaging system. Gamma summarizes, in a single numerical parameter, the nonlinear relationship between code value--in an 8-bit system, from 0 through 255--and physical intensity. Nearly all image coding systems are nonlinear, and so involve values of gamma different from unity. Owing to poor understanding of tone scale reproduction, and to misconceptions about nonlinear coding, gamma has acquired a terrible reputation in computer graphics and image processing. In addition, the world-wide web suffers from poor reproduction of grayscale and color images, due to poor handling of nonlinear image coding. This paper aims to make gamma respectable again.
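The nonlinear relationship between code value and physical intensity is, in its simplest power-law form, the following (a minimal sketch; real transfer functions such as sRGB add a linear segment near black, and the value 2.2 is just a typical gamma):

```python
import numpy as np

def encode_gamma(intensity, gamma=2.2):
    """Map linear-light intensity in [0, 1] to an 8-bit code value (0..255)
    using a simple power-law transfer function."""
    return np.round(255.0 * np.power(np.clip(intensity, 0.0, 1.0), 1.0 / gamma)).astype(np.uint8)

def decode_gamma(code, gamma=2.2):
    """Inverse mapping from an 8-bit code value back to linear-light intensity."""
    return np.power(code / 255.0, gamma)
```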

116 citations


Proceedings ArticleDOI
21 Jun 1998
TL;DR: Two feature extraction methods and two decision methods to retrieve images having some section in them that is like the user input image, using a Gaussian classifier and nearest neighbor classifier are presented.
Abstract: The paper presents two feature extraction methods and two decision methods to retrieve images having some section in them that is like the user input image. The features used are variances of gray level co-occurrences and line-angle-ratio statistics constituted by a 2D histogram of angles between two intersecting lines and the ratio of mean gray levels inside and outside the regions spanned by those angles. The decision method associates with any pair of images either the class "relevant" or "irrelevant"; a Gaussian classifier and a nearest neighbor classifier are used. A protocol that translates a frame throughout every image to automatically define, for any pair of images, whether they belong to the relevance class or the irrelevance class is discussed. Experiments on a database of 300 gray scale images with 9600 ground truth image pairs showed that the classifier correctly assigned to the relevance class 80% of the image pairs that were known to be relevant. The actual retrieval accuracy is greater than this lower bound of 80%.
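One of the features named above is the variance of gray level co-occurrences. A generic sketch of that kind of texture feature (the normalization, offset, and names are illustrative, not the authors' exact definitions):

```python
import numpy as np

def cooccurrence_variance(img, dx=1, dy=0, levels=256):
    """Build a grey-level co-occurrence matrix for a non-negative pixel
    offset (dx, dy) and return the variance of grey levels under the
    co-occurrence distribution."""
    img = np.asarray(img, dtype=np.intp)         # assumes values in [0, levels)
    h, w = img.shape
    a = img[0:h - dy, 0:w - dx].ravel()          # reference pixels
    b = img[dy:h,     dx:w].ravel()              # neighbours at offset (dx, dy)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(glcm, (a, b), 1.0)                 # accumulate pair counts
    glcm /= glcm.sum()                           # joint probability p(i, j)
    i = np.arange(levels)
    p_i = glcm.sum(axis=1)                       # marginal over the reference level
    mean = (i * p_i).sum()
    return ((i - mean) ** 2 * p_i).sum()         # variance of grey levels
```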

99 citations


Journal ArticleDOI
TL;DR: Numerical experiments testify to the efficiency of a particular watermarking algorithm as a reliable verification tool for proving copyright ownership of a digital image.

87 citations


Proceedings Article
01 Jan 1998
TL;DR: Gamma characterizes the reproduction of tone scale in an imaging system and summarizes, in a single numerical parameter, the nonlinear relationship between code value - in an 8-bit system, from 0 through 255 - and physical intensity.
Abstract: Gamma characterizes the reproduction of tone scale in an imaging system. Gamma summarizes, in a single numerical parameter, the nonlinear relationship between code value - in an 8-bit system, from 0 through 255 - and physical intensity. Nearly all image coding systems are nonlinear, and so involve values of gamma different from unity. Owing to poor understanding of tone scale reproduction, and to misconceptions about nonlinear coding, gamma has acquired a terrible reputation in computer graphics and image processing. In addition, the world-wide web suffers from poor reproduction of grayscale and color images, due to poor handling of nonlinear image coding. This paper aims to make gamma respectable again.

71 citations


Patent
22 Dec 1998
TL;DR: In this article, an improved technique is presented for compressing a color or gray scale pixel map representing a document using an MRC format, including a method of segmenting an original pixel map into two planes and then compressing the data of each plane in an efficient manner.
Abstract: An improved technique for compressing a color or gray scale pixel map representing a document using an MRC format, including a method of segmenting an original pixel map into two planes and then compressing the data of each plane in an efficient manner. The image is segmented such that pixels that compress well under a lossy compression technique are placed on one plane and pixels that must be compressed losslessly are placed on another plane. Lossy compression is then applied to the lossy pixel plane while lossless compression is applied to the lossless pixel plane.
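A toy illustration of the two-plane idea, not the patented segmenter: here pixels in high-variance neighbourhoods (text and line art, which lossy codecs smear) are routed to a "lossless" plane and smooth pixels to a "lossy" plane; the window size and threshold are arbitrary assumptions for 8-bit data:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def split_planes(img, var_threshold=400.0):
    """Split a grayscale pixel map into a lossy plane and a lossless plane
    based on local variance."""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=5)
    local_var = uniform_filter(img * img, size=5) - local_mean ** 2
    lossless_mask = local_var > var_threshold
    lossy_plane = np.where(lossless_mask, local_mean, img)   # fill masked holes smoothly
    lossless_plane = np.where(lossless_mask, img, 0)
    return lossy_plane, lossless_plane, lossless_mask
```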

68 citations


Journal ArticleDOI
TL;DR: In this work, lossless grayscale image compression methods are compared on a medical image database and the best methods turned out to be TMW, CALIC and JPEG-LS.

66 citations



Journal ArticleDOI
TL;DR: Using four features of relative address (Rx, Ry), normalized density, and histogram-equalized entropy, the neural network classified lungs at 92% accuracy on test images following the same rules as for the training images.
Abstract: The purposes of this research are to investigate the effectiveness of our novel image features for segmentation of anatomic regions such as the lungs and the mediastinum in chest radiographs and to develop an automatic computerized method for image processing. A total of 85 screening chest radiographs from Johns Hopkins University Hospital were digitized to 2K by 2.5K pixels with 12-bit gray scale. To reduce the amount of information, the images were smoothed and subsampled to 256 by 310 pixels with 8 bits. The determination approach classifies each pixel into two anatomic classes (lung and others) on the basis of several image features: (1) relative pixel address (Rx, Ry) based on lung edges extracted through profile-based image processing, (2) density normalized from lung and mediastinum density, and (3) histogram-equalized entropy. The combinations of image features were evaluated using an adaptive-sized hybrid neural network consisting of an input, a hidden, and an output layer. Fourteen images were used for training of the neural network and the remaining 71 images for testing. Using the four features of relative address (Rx, Ry), normalized density, and histogram-equalized entropy, the neural network classified lungs at 92% accuracy on test images following the same rules as for the training images.
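A simplified sketch of the per-pixel feature map feeding such a classifier; the exact definitions (including histogram-equalized entropy) are not reproduced here, and the lung bounding box argument is a hypothetical stand-in for the edge-extraction step:

```python
import numpy as np

def pixel_feature_map(img, lung_box):
    """Per-pixel features in the spirit of the study above: relative address
    (Rx, Ry) inside a lung bounding box and density normalized by the mean.
    lung_box is a (top, bottom, left, right) tuple."""
    h, w = img.shape
    top, bottom, left, right = lung_box
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    rx = (xs - left) / max(right - left, 1)        # relative x address
    ry = (ys - top) / max(bottom - top, 1)         # relative y address
    density = img / (img.mean() + 1e-6)            # normalized density
    return np.stack([rx, ry, density], axis=-1)    # (h, w, 3): one vector per pixel
```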

01 Jan 1998
TL;DR: This paper describes an algorithm for pedestrian recognition from sequences of grayscale stereo images taken from a pair of moving cameras using a feedforward time delay neural network (TDNN) with spatio-temporal receptive fields.
Abstract: This paper describes an algorithm for pedestrian recognition from sequences of grayscale stereo images taken from a pair of moving cameras. The algorithm consists of two parts: 1) a preliminary detection and tracking stage which consists of a real-time stereo algorithm that yields image regions which possibly contain a pedestrian; and, 2) a classification stage in which the temporal sequences of regions of interest are classified using a feedforward time delay neural network (TDNN) with spatio-temporal receptive fields.

Proceedings ArticleDOI
23 Jun 1998
TL;DR: A general-purpose method is presented for automatically segmenting an unknown number of objects at unknown locations in images; it uses a combination of image features rather than a single feature such as gradient, and a method is developed to automatically merge models corresponding to the same object.
Abstract: This paper describes a general-purpose method we have developed for automatically segmenting objects of an unknown number and unknown locations in images. Our method integrates deformable models and statistics of image cues including intensity, gradient, color and texture. By using a combination of image features rather than a single feature such as gradient, our method is more robust to noise and sparse data. To allow for the automated segmentation of an unknown number and locations of objects, we simultaneously segment objects initialized at uniformly distributed points in the image. A method is developed to automatically merge models corresponding to the same object. Results of the method are presented for several examples, including greyscale, color and noisy images.

Proceedings ArticleDOI
S. Ji, H.W. Park
04 Oct 1998
TL;DR: A two-step segmentation algorithm for color images based on region coherency is proposed: watershed segmentation followed by region merging using artificial neural networks.
Abstract: A two-step image segmentation algorithm is proposed, which is based on region coherency, for the segmentation of color images. The first step is the watershed segmentation, and the second is region merging using artificial neural networks. Spatially homogeneous regions are obtained by the first step, but the regions are oversegmented. The second step merges the oversegmented regions. The proposed method exploits the luminance and chrominance difference components of the color image to verify region coherency. The YUV color coordinate system is used in this work.
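A sketch of a coherency test between two over-segmented regions using luminance and chrominance differences in YUV; the BT.601 conversion is standard, but the weights and the combination rule here are assumptions, not the paper's trained-network decision:

```python
import numpy as np

def region_coherency(rgb, labels, a, b, y_weight=1.0, uv_weight=2.0):
    """Compare mean Y/U/V of regions a and b in a label map; a small score
    suggests the regions are coherent and could be merged."""
    rgb = rgb.astype(np.float64)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    u = -0.147 * rgb[..., 0] - 0.289 * rgb[..., 1] + 0.436 * rgb[..., 2]
    v = 0.615 * rgb[..., 0] - 0.515 * rgb[..., 1] - 0.100 * rgb[..., 2]
    ma, mb = labels == a, labels == b
    dy = abs(y[ma].mean() - y[mb].mean())
    duv = abs(u[ma].mean() - u[mb].mean()) + abs(v[ma].mean() - v[mb].mean())
    return y_weight * dy + uv_weight * duv   # small value => merge candidates
```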

Proceedings ArticleDOI
04 Oct 1998
TL;DR: This work presents a fast, non-iterative technique for producing grayscale images from error diffused and dithered halftones and compares it to the best reported statistical smoothing, wavelet, and Bayesian algorithms to show that it delivers comparable PSNR and subjective quality at a fraction of the computation and memory requirements.
Abstract: We present a fast, non-iterative technique for producing grayscale images from error diffused and dithered halftones. The first stage of the algorithm consists of a Gaussian filter and a median filter, while the second stage consists of a bandpass filter, a thresholding operation, and a median filter. The second stage enhances the rendering of edges in the inverse halftone. We compare our algorithm to the best reported statistical smoothing, wavelet, and Bayesian algorithms to show that it delivers comparable PSNR and subjective quality at a fraction of the computation and memory requirements. For error diffused halftones, our technique is seven times faster than the MAP estimation method and 75 times faster than the wavelet method. For dithered halftones, our technique is 200 times faster than the MAP estimation method. A C implementation of the algorithm is available.
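A minimal sketch following the two-stage structure described above; the filter sizes, gains, and thresholds are guesses, not the authors' tuned values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def inverse_halftone(halftone, sigma=1.5, med=3, edge_gain=0.5, thresh=0.05):
    """Stage 1 smooths the 0/1 halftone with a Gaussian and a median filter;
    stage 2 adds back thresholded band-pass detail to sharpen edges, then
    median-filters the result."""
    x = halftone.astype(np.float64)
    stage1 = median_filter(gaussian_filter(x, sigma), size=med)
    bandpass = gaussian_filter(x, sigma) - gaussian_filter(x, 2 * sigma)
    detail = np.where(np.abs(bandpass) > thresh, bandpass, 0.0)   # keep only strong responses
    stage2 = median_filter(stage1 + edge_gain * detail, size=med)
    return np.clip(stage2, 0.0, 1.0)
```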

Journal ArticleDOI
TL;DR: A two-focal-plane procedure for obtaining cell counts from nuclear-stained immunocytochemistry material is described, which allows relatively low-magnification images to be captured and cells counted with high digital figure-ground resolution.

PatentDOI
11 Dec 1998
TL;DR: A new algorithm is proposed that combines error diffusion, blue-noise dithering, and over-modulation so that high-quality multilevel printing with smooth texture transitions can be achieved.
Abstract: A new multitoning technique is proposed that combines error diffusion, blue-noise dithering and over-modulation in an adaptive algorithm to achieve high quality multilevel printing with smooth texture transition. A periodic dither signal is first added to an input digital image wherein the amplitude of the periodic dither signal is a function of the input pixel value for each input pixel. The amplitude of the periodic dither signal is larger for input pixel values near the N output levels, and the amplitude of the periodic dither signal is smaller for input pixel values intermediate to the N output levels to produce a modified input image. Then, a multi-level error diffusion halftoning algorithm is applied to the modified input image wherein the error diffusion halftoning algorithm uses a set of error feedback weights which are adjusted according to the original input pixel value for each input pixel. The sum of the error feedback weights is smaller for input pixel values near the N output levels, and the sum of the error feedback weights is larger for input pixel values intermediate to the N output levels to produce an output multi-level digital image.
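A sketch of the adaptive idea described above, not the patented algorithm: a periodic dither whose amplitude shrinks midway between output levels is added before a simple multi-level error diffusion (Floyd-Steinberg weights; the patent's adjustment of the error-feedback weights is omitted here, and all parameter values are assumptions):

```python
import numpy as np

def adaptive_multitone(img, n_levels=4, dither_period=4, max_amp=0.1):
    """Multi-level error diffusion of an 8-bit image with an amplitude-
    modulated periodic dither: amplitude is largest near output levels and
    smallest midway between them."""
    x = img.astype(np.float64) / 255.0
    levels = np.linspace(0.0, 1.0, n_levels)
    h, w = x.shape
    out = np.zeros_like(x)
    err = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            d = np.min(np.abs(levels - x[i, j]))            # distance to nearest output level
            amp = max_amp * (1.0 - 2.0 * d * (n_levels - 1))  # large near a level, ~0 midway
            dither = amp * np.sin(2 * np.pi * (i + j) / dither_period)
            v = x[i, j] + err[i, j] + dither
            q = levels[np.argmin(np.abs(levels - v))]        # quantise to nearest level
            out[i, j] = q
            e = v - q
            if j + 1 < w:               err[i, j + 1] += e * 7 / 16
            if i + 1 < h and j > 0:     err[i + 1, j - 1] += e * 3 / 16
            if i + 1 < h:               err[i + 1, j] += e * 5 / 16
            if i + 1 < h and j + 1 < w: err[i + 1, j + 1] += e * 1 / 16
    return out
```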

Journal ArticleDOI
TL;DR: This correspondence extends and modifies classified vector quantization (CVQ) to solve the problem of inverse halftoning and reconstructs a gray-scale image from a set of codeword-indices.
Abstract: This correspondence extends and modifies classified vector quantization (CVQ) to solve the problem of inverse halftoning. The proposed process consists of two phases: the encoding phase and the decoding phase. The encoding procedure needs a codebook for the encoder, which transforms a halftoned image to a set of codeword indices. The decoding process requires a different codebook for the decoder, which reconstructs a gray-scale image from the set of codeword indices. Using CVQ, the reconstructed gray-scale image is stored in compressed form and no further compression may be required. This differs from existing algorithms, which reconstruct the gray-scale image in an uncompressed form. The bit rate of encoding a reconstructed image is about 0.51 bits/pixel.

Patent
23 Dec 1998
TL;DR: In this article, a template matching approach is used to reconstruct the original grayscale image from the binarized image, which can be applied to various halftoning processes including error-diffusion processes.
Abstract: The methods and apparatus of this invention model and reconstruct binarized images. A grayscale image is modeled using a template matching approach. Each binarized grayscale image value is characterized by a unique set of templates which are rotations of each other. The set of templates allows construction of a look-up table between patterns and grayscale values. Because the templates in each set are rotations of one another, the look-up table needs only a limited number of entries. The look-up table thus obtained is used for reconstructing the original grayscale image from the binarized image. The generated image quality is good in comparison with conventional methods. The process may be applied to various halftoning processes, including error-diffusion processes.
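A sketch of the look-up-table idea in its plain form (the patent's rotational grouping of templates, which shrinks the table, is omitted; window size and the mid-gray default are assumptions): every 3×3 binary pattern observed in training halftones maps to the mean of the co-located original gray values.

```python
import numpy as np

def build_pattern_lut(halftones, originals, window=3):
    """Learn a pattern -> gray value table from pairs of 0/1 halftones and
    their original grayscale images."""
    r = window // 2
    sums = np.zeros(2 ** (window * window))
    counts = np.zeros_like(sums)
    weights = (2 ** np.arange(window * window)).reshape(window, window)
    for ht, orig in zip(halftones, originals):
        h, w = ht.shape
        for i in range(r, h - r):
            for j in range(r, w - r):
                idx = int((ht[i - r:i + r + 1, j - r:j + r + 1] * weights).sum())
                sums[idx] += orig[i, j]
                counts[idx] += 1
    return np.where(counts > 0, sums / np.maximum(counts, 1), 128)   # default mid-gray

def reconstruct(halftone, lut, window=3):
    """Each pixel's binary neighbourhood indexes a reconstructed gray value."""
    r = window // 2
    weights = (2 ** np.arange(window * window)).reshape(window, window)
    out = np.full(halftone.shape, 128.0)
    for i in range(r, halftone.shape[0] - r):
        for j in range(r, halftone.shape[1] - r):
            idx = int((halftone[i - r:i + r + 1, j - r:j + r + 1] * weights).sum())
            out[i, j] = lut[idx]
    return out
```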

Journal ArticleDOI
TL;DR: The standard operators are applied to connected sets that form maxima and minima, which are new, powerful, general tools for analysing and representing images.
Abstract: Mathematical morphology is the analysis of signals and images in terms of shape. Much is based on simple positive Boolean functions that are used to produce filters for binary and greyscale signals and images. In a previous development, the standard operators are applied to connected sets that form maxima and minima. These are new, powerful, general tools for analysing and representing images.

Patent
04 Dec 1998
TL;DR: In this article, a region-based binarization system applies adaptive thresholding and image rendering to a gray scale image to generate first and second binary images, and a final binary image can then be formed from the first binary images based on the classification map.
Abstract: A region-based binarization system applies adaptive thresholding and image rendering to a gray scale image to generate first and second binary images. The gray scale image can also be subsampled to acquire a low resolution image, and locations of photographic images are detected in the low resolution image. Further, those detected photographic images that have a rectangular shape are identified, and a classification map which distinguishes pixels in the rectangular shaped photographic images from the remaining pixels is generated. A final binary image can then be formed from the first and second binary images based on the classification map. The binarization system of the present invention is effective when the gray scale image is captured from a document which contains at least photographic and text portions.
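For context, a generic local-mean adaptive threshold of the kind the first step names (the patent's actual thresholding and rendering stages are not specified here; window and offset are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(gray, window=31, offset=10):
    """A pixel becomes foreground (ink) when it is darker than its
    neighbourhood mean by `offset` gray levels."""
    local_mean = uniform_filter(gray.astype(np.float64), size=window)
    return (gray < local_mean - offset).astype(np.uint8)   # 1 = foreground
```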

Patent
18 Nov 1998
TL;DR: In this paper, a system and a method for imaging a scene of interest utilize variable exposure periods that have durations based upon detecting a fixed voltage drop in order to determine the scene segment radiance.
Abstract: A system and a method for imaging a scene of interest utilize variable exposure periods that have durations based upon detecting a fixed voltage drop in order to determine the scene segment radiance. The rate of voltage drop corresponds to the degree of scene segment radiance, such that high radiant scene segments yield faster voltage drops than lower radiant scene segments. The variable exposure period is determined within each pixel in a pixel array of the system to gather exposure periods from different segments of the scene being imaged. The measured exposure periods are translated into grayscale information that can be used to generate a composite image having various levels of grayscale that is representative of the imaged scene. Each pixel includes a photo sensor, an analog-to-digital converter and a memory to measure, digitize and store the exposure period. The memory contains a number of memory cells having a three-transistor configuration that are each connected to a bi-directional bit line. The bi-directional bit line functions as both a read bit line and a write bit line. The three-transistor configuration allows for non-destructive read-outs of data stored in the memory cells.

Patent
Sano Toshiyuki, Keiji Toyoda
17 Mar 1998
TL;DR: In this paper, the gray scale of the input image is corrected using the gray scale characteristic associated with each area, providing a corrected image signal for the area, and the corrected image signals are combined into an output image signal.
Abstract: An input image is divided into relatively small blocks of pixels. Border data defining the areas are periodically calculated. For each pixel, a pair of selection pulses indicative of the area where the pixel is located is generated. In response to the pair of pulses, the gray scale characteristic is calculated for each area. The gray scale of the input image is corrected using the gray scale characteristic associated with each area to provide a corrected image signal for the area. In response to the pair of pulses, the corrected image signals are combined into an output image signal. An optimal gray scale correction is made for each area. The selection pulses are configured to change gradually near the borders, thereby making the reproduced image appear natural.

Patent
01 Jul 1998
TL;DR: In this article, a method for the calculation of a pattern similarity metric that is locally normalized with respect to illumination intensity, and is invariant to rigid body preserving gray scale variations, such as scale, rotation, translation, and non-linear intensity transformations, is presented.
Abstract: The present invention provides a method for the calculation of a pattern similarity metric that is locally normalized with respect to illumination intensity, and is invariant with respect to rigid body preserving gray scale variations, such as scale, rotation, translation, and non-linear intensity transformations. In one aspect, the invention provides a method for comparing a model image with a run-time image so as to provide a quantitative measure of image similarity. In another general aspect of the invention, a method is provided for searching a run-time image with a model image so as to provide at least one location of the model image within the run-time image.
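As a point of reference, the textbook zero-mean normalized cross-correlation is the standard illumination-insensitive similarity score; the patented metric itself is more general (locally normalized, invariant to further gray scale variations), so the sketch below is only the baseline form, with a brute-force search loop:

```python
import numpy as np

def normalized_correlation(model, patch, eps=1e-9):
    """Zero-mean normalized cross-correlation between a model image and an
    equally sized run-time patch; returns a value in [-1, 1]."""
    m = model.astype(np.float64) - model.mean()
    p = patch.astype(np.float64) - patch.mean()
    return float((m * p).sum() / (np.sqrt((m * m).sum() * (p * p).sum()) + eps))

def search(model, image):
    """Slide the model over the run-time image and return the best location."""
    mh, mw = model.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - mh + 1):
        for j in range(image.shape[1] - mw + 1):
            s = normalized_correlation(model, image[i:i + mh, j:j + mw])
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos, best
```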

Book ChapterDOI
30 Mar 1998
TL;DR: The utility of perceptually-tuned grayscale fonts for improving the legibility of condensed text is explored; a small advantage over bilevel fonts was found for text searching.
Abstract: We analyze the quality of condensed text on LCD displays, generated with unhinted and hinted bilevel characters, with traditional anti-aliased and with perceptually-tuned grayscale characters. Hinted bi-level characters and perceptually-tuned grayscale characters improve the quality of displayed small size characters (8pt, 6pt) up to a line condensation factor of 80%. At higher condensation factors, the text becomes partly illegible. In such situations, traditional anti-aliased grayscale characters seem to be the most robust variant. We explore the utility of perceptually-tuned grayscale fonts for improving the legibility of condensed text. A small advantage was found for text searching, compared to bilevel fonts. This advantage is consistent with human vision models applied to reading.

Journal ArticleDOI
01 May 1998
TL;DR: The first gray scale reflective bistable cholesteric display with dynamic drive is reported, achieving 8 to 16 levels of gray on a VGA format display at 133 dpi.
Abstract: We report the first gray scale reflective bistable cholesteric display with dynamic drive. We successfully achieve 8 to 16 levels of gray on a VGA format display at 133 dpi. The image shows good contrast and rich gray depth.

Patent
Tinku Acharya1
08 Dec 1998
TL;DR: In this article, a method that specifies identifying a category to which a pixel belongs and performing integrated contrast enhancement and gray scale adjustment on the pixel in accordance with the identified category is described.
Abstract: What is disclosed is a method that specifies identifying a category to which a pixel belongs and performing integrated contrast enhancement and gray scale adjustment on the pixel in accordance with the identified category. These enhanced/adjusted values may be looked up in a table constructed to contain a mapping for all such values.
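A toy version of the table idea (the patent's per-category mappings are not reproduced; the contrast and gamma values are arbitrary assumptions): precompute, for every 8-bit input value, a contrast stretch about mid-gray followed by a gray-scale adjustment, then apply it by indexing.

```python
import numpy as np

def build_enhancement_lut(contrast=1.2, gamma=0.9):
    """Look-up table combining contrast enhancement and gray scale adjustment
    for all 256 possible 8-bit input values."""
    x = np.arange(256) / 255.0
    stretched = np.clip(0.5 + contrast * (x - 0.5), 0.0, 1.0)   # contrast about mid-gray
    adjusted = np.power(stretched, gamma)                        # gray-scale adjustment
    return np.round(255.0 * adjusted).astype(np.uint8)

# usage: out = build_enhancement_lut()[img]   # img is a uint8 grayscale array
```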

Journal ArticleDOI
TL;DR: A digital halftoning method is proposed based on a multiscale error diffusion technique that can improve the diffusion performance by effectively removing pattern noise and eliminating boundary and "blackhole" effects.
Abstract: A digital halftoning method is proposed based on a multiscale error diffusion technique. It can improve the diffusion performance by effectively removing pattern noise and eliminating boundary and "blackhole" effects. A dot overlap compensation scheme is also proposed to eliminate the bias in the gray scale of the printed images.

Patent
30 Apr 1998
TL;DR: In this article, the luminance component of the raster image data is converted to a binary format to identify the working pixel using RET template matching, and then modified by utilizing luminance data of adjacent pixels to produce a new luminance value which is then assigned to the working pixels.
Abstract: A method and apparatus enhance a color or grayscale raster image in a printer by identifying a working pixel in the raster image for anti-aliasing, and then modifying luminance data of the working pixel in a luminance chrominance color space such that an anti-aliasing effect is achieved relative to the raster image. The luminance component of the raster image data is converted to a binary format to identify the working pixel using RET template matching. The luminance data of the working pixel is modified by utilizing luminance data of adjacent pixels to produce a new luminance value which is then assigned to the working pixel. One of the adjacent pixels defines an edge of the object being anti-aliased in the raster image, and the other of the adjacent pixels defines an edge of a region in the raster image that is adjacent the object. In the event chroma data is associated with the object, the chroma data is combined with the modified luminance data and also assigned to the working pixel for accurate imaging thereof.

Patent
20 Feb 1998
TL;DR: In this paper, a method for automatically identifying the range of useful digital values to be displayed on a display medium and providing an appropriate gray scale transfer to optimize the diagnostic value of the final displayed image is presented.
Abstract: A method for automatically identifying the range of useful digital values to be displayed on a display medium and providing an appropriate gray scale transfer to optimize the diagnostic value of the final displayed image. One or more gray scale transfer functions corresponding to desired display media, one or more sets of experimentally determined constants, and one or more sets of algorithms are stored in a computer memory. A smoothed histogram and its histogram integral are constructed representing the frequency of occurrence of the digital values stored in the data bank. A low point and an edge point are identified from the histogram and integral and, based on the type of radiographic examination selected, are used with appropriate constants and algorithms to calculate a maximum and a minimum value. Values lower than the minimum are replaced with the minimum, and values higher than the maximum are replaced by the maximum. The new range of values is then mapped, using the appropriate gray scale transfer function, to a set of values displayed on the display medium.
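A simplified stand-in for this procedure: derive a useful range from the histogram (here plain percentiles rather than the patent's low/edge points and modality-specific constants), clamp values to that range, and map it to the display gray scale with a linear transfer function.

```python
import numpy as np

def window_and_map(values, low_pct=1.0, high_pct=99.0, out_levels=256):
    """Clamp raw digital values to a histogram-derived useful range and map
    that range linearly onto the display gray scale."""
    vmin, vmax = np.percentile(values, [low_pct, high_pct])
    clipped = np.clip(values, vmin, vmax)
    mapped = (clipped - vmin) / max(vmax - vmin, 1e-9) * (out_levels - 1)
    return np.round(mapped).astype(np.uint16)
```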