
Showing papers on "Grayscale published in 1987"


Journal ArticleDOI
TL;DR: The new method, called dot diffusion, appears to avoid some deficiencies of other commonly used techniques, and it is well suited to parallel computation; but it requires more buffers and more complex program logic than other methods when implemented sequentially.
Abstract: This paper describes a technique for approximating real-valued pixels by two-valued pixels. The new method, called dot diffusion, appears to avoid some deficiencies of other commonly used techniques. It requires approximately the same total number of arithmetic operations as the Floyd-Steinberg method of adaptive grayscale, and it is well suited to parallel computation; but it requires more buffers and more complex program logic than other methods when implemented sequentially. A “smooth” variant of the method may prove to be useful in high-resolution printing.
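For context on the comparison above, the Floyd-Steinberg error-diffusion method that dot diffusion is measured against can be sketched in a few lines (an illustrative sketch of Floyd-Steinberg only, not Knuth's dot diffusion; the function name and the [0, 1] pixel convention are assumptions):

```python
def floyd_steinberg(img):
    """Dither a grayscale image (values in [0, 1]) to 0/1 pixels by
    diffusing each pixel's quantization error to its unvisited
    neighbors with the classic 7/16, 3/16, 5/16, 1/16 weights."""
    img = [row[:] for row in img]          # work on a copy
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1 if old >= 0.5 else 0
            out[y][x] = new
            err = old - new
            # Push the quantization error to pixels not yet visited.
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out
```

The sequential data dependence visible here (each pixel needs its left and upper neighbors finished first) is exactly what dot diffusion's block ordering relaxes to enable parallel computation.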

324 citations


Patent
24 Jul 1987
TL;DR: In this article, a method and system for efficiently generating grayscale character fonts from bi-level master character fonts decomposed into rectangles is presented, where each character is generated by performing, for each rectangle in the corresponding decomposed master character, the steps of: specifying a filter array, and its corresponding summed area filter arrays; determining the pixels in the character affected by the rectangle and a set of corresponding sampling points located inside and near the rectangle.
Abstract: A method and system for efficiently generating grayscale character fonts from bi-level master character fonts decomposed into rectangles. For each filter array to be used for converting master character fonts into grayscale characters there is generated at least one summed area filter array. Each element in each summed area filter array represents the sum of the filter array elements in a corresponding subarray of the filter array. A grayscale character is generated by performing, for each rectangle in the corresponding decomposed master character, the steps of: specifying a filter array, and its corresponding summed area filter arrays; determining the pixels in the grayscale character affected by the rectangle and a set of corresponding sampling points located inside and near the rectangle; for each grayscale character pixel affected by the rectangle, performing the steps of: assigning the pixel a predefined value corresponding to a black pixel if the corresponding sampling point is located inside the rectangle, and is offset from the perimeter of the rectangle by at least one half of the extent of the filter's support; and otherwise adding to the value of the grayscale pixel a value from the summed area filter array corresponding to the intersection of the selected filter array, centered at the sampling point corresponding to the grayscale pixel, and the rectangle.
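The summed-area lookup at the heart of the claim can be illustrated with a small sketch (a generic summed-area table with invented names, not the patented procedure): each entry holds the sum of all filter elements above and to its left, so the filter mass over any axis-aligned rectangle comes from at most four lookups.

```python
def summed_area(filt):
    """Build S where S[i][j] = sum of filt[0..i][0..j] (2-D prefix sums)."""
    h, w = len(filt), len(filt[0])
    S = [[0.0] * w for _ in range(h)]
    for i in range(h):
        run = 0.0
        for j in range(w):
            run += filt[i][j]
            S[i][j] = run + (S[i - 1][j] if i else 0.0)
    return S

def box_sum(S, r0, c0, r1, c1):
    """Sum of the filter over rows r0..r1, cols c0..c1 (inclusive), in O(1)."""
    total = S[r1][c1]
    if r0:
        total -= S[r0 - 1][c1]
    if c0:
        total -= S[r1][c0 - 1]
    if r0 and c0:
        total += S[r0 - 1][c0 - 1]
    return total
```

This is why the per-pixel cost is constant: the contribution of a character rectangle to a grayscale pixel reduces to one such box query against the filter's summed-area array.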

74 citations


Journal Article
TL;DR: Using these methods, the authors have found that both supervised and unsupervised classification techniques yielded theme maps (class maps) which demonstrated tissue-characteristic signatures, and that tissue classification errors found in computer-generated theme maps were due to subtle gray scale changes present in the original MR data sets arising from radiometric inhomogeneity and spatial nonuniformity.
Abstract: Multiecho magnetic resonance (MR) scanning produces tomographic images with approximately equal morphologic information but varying gray scales at the same anatomic level. Multispectral image classification techniques, originally developed for satellite imaging, have recently been applied to MR tissue characterization. Statistical assessment of multispectral tissue classification techniques has been used to select the most promising of several alternative methods. MR examinations of the head and body, obtained with a 0.35, 0.5, or 1.5T imager, comprised data sets with at least two pulse sequences yielding three images at each anatomical level: (1) TR = 0.3 sec, TE = 30 msec, (2) TR = 1.5, TE = 30, (3) TR = 1.5, TE = 120. Normal and pathological images have been analyzed using multispectral analysis and image classification. MR image data are first subjected to radiometric and geometric corrections to reduce error resulting from (1) instrumental variations in data acquisition, (2) image noise, and (3) misregistration. Training regions of interest (ROI) are outlined in areas of normal (gray and white matter, CSF) and pathological tissue. Statistics are extracted from these ROIs and classification maps generated using table lookup, minimum distance to means, maximum likelihood, and cluster analysis. These synthetic maps are then compared pixel by pixel with manually prepared classification maps of the same MR images. Using these methods, the authors have found that: (1) both supervised and unsupervised classification techniques yielded theme maps (class maps) which demonstrated tissue characteristic signatures and (2) tissue classification errors found in computer-generated theme maps were due to subtle gray scale changes present in the original MR data sets arising from radiometric inhomogeneity and spatial nonuniformity.
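Of the classifiers listed, minimum distance to means is the simplest to illustrate. The toy sketch below (hypothetical feature vectors standing in for the three multi-echo intensities; all names invented) assigns each pixel the label of the nearest class mean computed from training ROIs:

```python
import math

def nearest_mean_classify(training, pixels):
    """training: {label: list of feature vectors} drawn from ROIs;
    pixels: feature vectors to classify. Each pixel gets the label of
    the class whose mean vector is closest in Euclidean distance."""
    means = {}
    for label, vecs in training.items():
        n = len(vecs)
        means[label] = [sum(v[i] for v in vecs) / n
                        for i in range(len(vecs[0]))]

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return [min(means, key=lambda lbl: dist(p, means[lbl])) for p in pixels]
```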

67 citations


Patent
30 Jul 1987
TL;DR: In this article, the background image is formed on the basis of the original picture itself, so that unevenness or changes of brightness have no influence on the accurate extraction of the target image.
Abstract: An original picture taken by an ITV camera is subjected to a local maximum filter and, if necessary, further to a local minimum filter, so that a background image is formed from the original picture itself. A target image included in the original picture is extracted from the background by taking the difference between the thus-obtained background image and the original picture. According to the present invention, since the background image is formed on the basis of the original picture, unevenness or changes of brightness, which influence the target image and the background image equally, have no effect on the accurate extraction of the target image.
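The max-then-min filtering described above is, in effect, a grayscale closing. A minimal sketch (assuming dark targets on a brighter background, so that background minus original highlights the target; window size and names are invented):

```python
def local_filter(img, k, op):
    """Apply op (max or min) over a (2k+1)x(2k+1) window, clamped at edges."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - k), min(h, y + k + 1))
                    for xx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = op(vals)
    return out

def extract_dark_target(img, k):
    # Max filter then min filter fills in dark targets smaller than the
    # window, leaving an estimate of the background; subtracting the
    # original then isolates the target regardless of uneven lighting.
    background = local_filter(local_filter(img, k, max), k, min)
    return [[bg - px for bg, px in zip(brow, irow)]
            for brow, irow in zip(background, img)]
```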

60 citations


Patent
24 Nov 1987
TL;DR: In this article, a method and apparatus for processing a video image so as to feature the boundaries between light and dark regions of the image is presented, in which a center pixel is selected along with a plurality of pixels forming a two-dimensional neighborhood of the center pixel.
Abstract: A method and apparatus for processing a video image so as to feature the boundaries between light and dark regions of the image. The method and apparatus are applicable to a video image which is represented by an array of pixels, in which each pixel is associated with a gray scale intensity value. According to the invention a center pixel is selected along with a plurality of pixels forming a two-dimensional neighborhood of the center pixel. From the intensity values associated with the pixels of the neighborhood, a determination is made whether or not the center pixel lies in a transition region, in which the image undergoes a rapid variation between light and dark. A bit value is then assigned to the center pixel, indicating whether the center pixel is to be black or white in the processed image. The bit value is assigned according to one of two predetermined algorithms depending on whether the center pixel was determined to lie within or not to lie within a transition region. For center pixels lying in a transition region, a first algorithm assigns the bit value with respect to a virtual boundary between light and dark. For center pixels not lying in a transition region, a second algorithm distinct from the first algorithm and presenting no virtual boundary assigns the bit value.
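The patent does not spell out its two algorithms, so the sketch below substitutes plain stand-ins to show only the two-branch structure: a wide intensity range in the 3x3 neighborhood marks a transition region, where the pixel is thresholded against the local mean (standing in for the virtual-boundary rule); elsewhere a fixed threshold is used. All thresholds and names are invented.

```python
def binarize(img, t_range=50, t_fixed=128):
    """Two-branch bi-level conversion: local-mean threshold in
    transition regions, fixed threshold in uniform regions."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # 3x3 neighborhood, clamped at the image border.
            nb = [img[yy][xx]
                  for yy in range(max(0, y - 1), min(h, y + 2))
                  for xx in range(max(0, x - 1), min(w, x + 2))]
            if max(nb) - min(nb) > t_range:      # transition region
                thresh = sum(nb) / len(nb)       # local mean as "boundary"
            else:                                # uniform region
                thresh = t_fixed                 # stand-in second rule
            out[y][x] = 1 if img[y][x] >= thresh else 0
    return out
```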

51 citations


Proceedings ArticleDOI
01 Aug 1987
TL;DR: The performance of the implementation is such that filtering characters for grayscale displays is feasible in realtime on personal workstations, and an analysis of the efficiency of this technique, and examples of its implementation applied to various families of fonts and point sizes are given.
Abstract: While the race towards higher-resolution bitmap displays is still on, many grayscale displays have appeared on the scene. To fully utilize their capabilities, grayscale fonts are needed, and these can be produced by filtering bi-level masters. Most of the efficient filtering techniques cannot directly be applied. For example, prefiltering is impractical, due to the number of character masters and the requirement of sub-pixel positioning. Furthermore, we would like to impose as few restrictions as possible on the characteristics of the filter, in order to facilitate exploration into the quality of various filters. We describe a fast filtering technique especially adapted to this task. The characters are decomposed into rectangles, and a summed-area representation of the filter is efficiently convolved with each individual rectangle to construct the grayscale character. For a given filter, the number of operations is O(linear size of the grayscale character), which is optimal. We give an analysis of the efficiency of this technique, and examples of its implementation applied to various families of fonts and point sizes. The performance of the implementation is such that filtering characters for grayscale displays is feasible in real time on personal workstations.

49 citations


01 May 1987
TL;DR: The General Motors Research Laboratories has developed an image processing system that automatically analyzes the size distributions in fuel spray video images and can distinguish nonspherical anomalies from droplets, which allows sizing of droplets near the spray nozzle.
Abstract: An image processing system was developed which automatically analyzes the size distributions in fuel spray video images. Images are generated by using pulsed laser light to freeze droplet motion in the spray sample volume under study. This coherent illumination source produces images which contain droplet diffraction patterns representing the droplets' degree of focus. The analysis is performed by extracting feature data describing droplet diffraction patterns in the images. This allows the system to select droplets from image anomalies and measure only those droplets considered in focus. Unique features of the system are the totally automated analysis and droplet feature measurement from the grayscale image. The feature extraction and image restoration algorithms used in the system are described. Preliminary performance data are also given for two experiments. One experiment gives a comparison between a synthesized distribution measured manually and automatically. The second experiment compares a real spray distribution measured using current methods against the automatic system.

18 citations


Journal ArticleDOI
TL;DR: This article proposes G-octree as an extension of G-quadtree to three dimensions and develops two-way G-quadtree/G-octree conversion procedures based on the algorithms for the binary case to provide an integrated processing environment for hierarchically represented 2D/3D gray-scale images.
Abstract: This article proposes G-octree as an extension of G-quadtree to three dimensions. A G-octree reflects in its construction a hierarchy of gray-scale level value homogeneity, as well as a hierarchy of spatial resolution. The article also develops two-way G-quadtree/G-octree conversion procedures based on the algorithms for the binary case. These procedures provide an integrated processing environment for hierarchically represented 2D/3D gray-scale images. We demonstrate our approach with an application to the color coding of macro-autoradiography images taken from rat brains.
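The hierarchy of gray-level homogeneity can be sketched with a toy gray-scale quadtree (an illustrative recursive split on homogeneity, not the paper's exact G-quadtree encoding; the tolerance parameter and names are assumptions):

```python
def build_gquadtree(img, x, y, size, tol=0):
    """Recursively split a size x size block (size a power of two) until
    its gray values are homogeneous within tol; a leaf stores the
    block's mean gray level, an internal node its four quadrants."""
    block = [img[yy][xx]
             for yy in range(y, y + size)
             for xx in range(x, x + size)]
    if size == 1 or max(block) - min(block) <= tol:
        return sum(block) // len(block)          # leaf: mean gray level
    h = size // 2
    return [build_gquadtree(img, x,     y,     h, tol),   # NW
            build_gquadtree(img, x + h, y,     h, tol),   # NE
            build_gquadtree(img, x,     y + h, h, tol),   # SW
            build_gquadtree(img, x + h, y + h, h, tol)]   # SE
```

The G-octree described in the article applies the same idea with eight children per node over 2x2x2 sub-volumes.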

17 citations


DOI
01 Mar 1987
TL;DR: In this article, the performance of the phase-correlation image-registration algorithm when greyscale images are replaced by boundary maps is discussed in the light of several experiments, and it is found that use of contours only does not substantially degrade the algorithm's performance, while the reduced amount of information associated with each image may prove advantageous where the bandwidth of the communication channel is a concern.
Abstract: The performance of the phase-correlation image-registration algorithm when greyscale images are replaced by boundary maps is discussed in the light of several experiments. It is found that use of contours only does not substantially degrade the algorithm's performance, while the reduced amount of information associated with each image may prove advantageous wherever the bandwidth of the communication channel is a concern.
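Phase correlation itself is standard enough to sketch in 1-D (a pure-Python DFT for clarity; in practice an FFT would be used, and the same routine applies whether the inputs are grayscale rows or boundary maps):

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform; O(n^2), fine for a sketch."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def phase_correlate(a, b):
    """Estimate the circular shift taking a to b: normalize the
    cross-power spectrum to unit magnitude (keeping phase only), then
    invert; the correlation surface peaks at the shift."""
    A, B = dft(a), dft(b)
    cross = []
    for u, v in zip(A, B):
        p = v * u.conjugate()
        cross.append(p / abs(p) if abs(p) > 1e-12 else 0j)
    corr = dft(cross, inverse=True)
    return max(range(len(corr)), key=lambda i: corr[i].real)
```

Discarding magnitude and keeping only phase is what makes the method tolerant of the intensity information lost when greyscale images are replaced by contour maps.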

17 citations


Journal ArticleDOI
TL;DR: While the race towards higher-resolution bitmap displays is still on, many grayscale displays have appeared on the scene, and grayscale fonts are needed to fully utilize their capabilities.
Abstract: While the race towards higher-resolution bitmap displays is still on, many grayscale displays have appeared on the scene. To fully utilize their capabilities, grayscale fonts are needed, and these ...

12 citations


Journal ArticleDOI
TL;DR: Nine sources - reflection, edge pixels, lens quality, camera warm-up, auto black circuits, pixel aspect ratios, variation with screen location, gray scale range, and object size versus gray scale range - are identified and discussed along with results of tests to reveal the magnitude of the errors introduced.

Proceedings ArticleDOI
01 Mar 1987
TL;DR: An algorithm that addresses the recognition of partially visible two-dimensional objects in a gray scale image using the local shape of contour segments near critical points, represented in slope angle-arclength space (θ-s space), as the fundamental feature vectors.
Abstract: An important task in computer vision is the recognition of partially visible two-dimensional objects in a gray scale image. Recent works addressing this problem have attempted to match spatially local features from the image to features generated by models of the objects. However, many algorithms are less efficient than is possible. This is due primarily to insufficient attention being paid to the issues of reducing the data in features and feature matching. In this paper we discuss an algorithm that addresses both of these problems. Our algorithm uses the local shape of contour segments near critical points, represented in slope angle-arclength space (θ-s space), as the fundamental feature vectors. These fundamental feature vectors are further processed by projecting them onto a subspace of θ-s space that is obtained by applying the Karhunen-Loeve expansion to all critical points in the model set to obtain the final feature vectors. This allows the data needed to store the features to be reduced, while retaining nearly all their recognitive information. The resultant set of feature vectors from the image are matched to the model set using multidimensional range queries to a database of model feature vectors. The database is implemented using an efficient data-structure called a k-d tree. The entire recognition procedure for one image has complexity O(IlogI + IlogN), where I is the number of features in the image, and N is the number of model features. Experimental results showing our algorithm's performance on a number of test images are presented.
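The k-d tree serving the multidimensional range queries can be sketched in 2-D (an illustrative structure with invented names; the paper's actual feature vectors live in the Karhunen-Loeve-projected θ-s subspace):

```python
def build_kd(points, depth=0):
    """Balanced k-d tree over 2-D points: split on the median along
    alternating axes; each node is (point, axis, left, right)."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build_kd(points[:mid], depth + 1),
            build_kd(points[mid + 1:], depth + 1))

def range_query(node, lo, hi, found):
    """Collect points with lo[i] <= p[i] <= hi[i] on both axes,
    pruning subtrees that cannot intersect the query box."""
    if node is None:
        return found
    p, axis, left, right = node
    if all(lo[i] <= p[i] <= hi[i] for i in range(2)):
        found.append(p)
    if lo[axis] <= p[axis]:
        range_query(left, lo, hi, found)
    if hi[axis] >= p[axis]:
        range_query(right, lo, hi, found)
    return found
```

Each image feature issues one such box query against the database of model features, which is what yields the O(I log N) matching term in the complexity bound above.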

Patent
Klausz Remy1
24 Feb 1987
TL;DR: The system being designed to form images of a single class, for example radiographs of blood vessels, a predetermined relationship (31) is established between the two parameters (L, M) characterizing the window, so that the window may be modified by actuating a single actuator.
Abstract: A digital imaging system in which each point, or area, of the image is assigned a digital value, and this value is transformed so that only a range, or window, of values representing luminances is retained for a display device. Control means are provided for modifying the two parameters characterizing the window in the transformation device. Since the system is designed to form images of a single class, for example radiographs of blood vessels, a predetermined relationship (31) is established between the two parameters (L, M) characterizing the window, so that the window may be modified by actuation of a single actuator.
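The window transformation can be sketched as a simple lookup function (an illustrative sketch; the patent's point is that its two parameters are linked by a preset relationship so a single actuator moves both, which is not modeled here; names and the 0-255 output range are assumptions):

```python
def window_lut(level, width, vmax=255):
    """Return a function mapping raw values to display luminance
    through a window centred at `level` with extent `width`: values
    below the window map to 0, above it to vmax, linear in between."""
    lo, hi = level - width / 2, level + width / 2

    def f(v):
        if v <= lo:
            return 0
        if v >= hi:
            return vmax
        return round((v - lo) / (hi - lo) * vmax)

    return f
```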

01 Jan 1987
TL;DR: A new method of product simulation is described that also employs a real SAR input image for image simulation, which can be denoted 'image-based simulation'.
Abstract: SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new method of product simulation is described that also employs a real SAR input image. This can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages are discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit; the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

Journal ArticleDOI
TL;DR: The systematic gray value variations in a series of corresponding locations within the images of a moving object are investigated, and it is proven that the constraints induced by the temporal gray value variations are not sufficient in real outdoor scenes.
Abstract: Dreschler and Nagel (Comput. Graphics Image Process. 20, 1982, 199–228) and Westphal and Nagel (Comput. Vision Graphics Image Process. 34, 1986, 302–320) describe a system which derives polyhedral object descriptions of moving rigid objects from monocular image sequences. In order to explore the possibility of improving such a non-convex polyhedral object description into one with curved surface patches, the systematic gray value variations in a series of corresponding locations within the images of a moving object are investigated. Conditions are stated for a non-linear approach which estimates the local surface normal and albedo. It is proven that the constraints induced by the temporal gray value variations are not sufficient in real outdoor scenes.

Book ChapterDOI
01 Jun 1987
TL;DR: This paper proposes G-octree as an extension of G-quadtree to three dimensions and develops two-way G-quadtree/G-octree conversion procedures based on the algorithms for the binary case to provide an integrated processing environment for hierarchically represented 2D/3D images.
Abstract: This paper proposes G-octree as an extension of G-quadtree to three dimensions. A G-octree reflects in its construction a hierarchy of gray-scale level value homogeneity, as well as that of spatial resolution. The paper also develops two-way G-quadtree/G-octree conversion procedures based on the algorithms for the binary case. These procedures provide an integrated processing environment for hierarchically represented 2D/3D images. Our approach is demonstrated with an application to the color-coding of macroautoradiography images taken from rat brains.

Journal ArticleDOI
TL;DR: This paper examines the classification potential of three techniques based on spiral sampling of gray-scale, noisy images by selecting samples in a spiral manner starting from the edge of the image and proceeding toward the center.
Abstract: This paper examines the classification potential of three techniques based on spiral sampling of gray-scale, noisy images. Image pixels are rearranged into a one-dimensional sequence by selecting samples in a spiral manner starting from the edge of the image and proceeding toward the center. The properties of this sample sequence are examined by Fourier transform and correlation techniques, using images from 26 groups with varying contrast, orientation, and size. The classification ability of features extracted from spiral sequences and their accuracy are investigated.
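One plausible spiral-sampling order, walking the border inward from the edge toward the center, can be sketched as follows (the paper's exact traversal is not specified here, so the clockwise-from-top-left order is an assumption):

```python
def spiral_sequence(img):
    """Unwind a 2-D image into a 1-D sequence by traversing the
    outermost ring clockwise, then the next ring in, and so on."""
    h, w = len(img), len(img[0])
    top, bottom, left, right = 0, h - 1, 0, w - 1
    seq = []
    while top <= bottom and left <= right:
        seq.extend(img[top][x] for x in range(left, right + 1))       # top row
        seq.extend(img[y][right] for y in range(top + 1, bottom + 1)) # right col
        if top < bottom:                                              # bottom row
            seq.extend(img[bottom][x] for x in range(right - 1, left - 1, -1))
        if left < right:                                              # left col
            seq.extend(img[y][left] for y in range(bottom - 1, top, -1))
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return seq
```

The resulting 1-D sequence is what the paper's Fourier and correlation features are computed from.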