
Showing papers on "Image processing published in 1990"


Journal ArticleDOI
TL;DR: A systematic reconstruction-based method for deciding the highest-order Zernike moments required in a classification problem is developed, and the superiority of Zernike moment features over regular moments and moment invariants was experimentally verified.
Abstract: The problem of rotation-, scale-, and translation-invariant recognition of images is discussed. A set of rotation-invariant features is introduced. They are the magnitudes of a set of orthogonal complex moments of the image known as Zernike moments. Scale and translation invariance are obtained by first normalizing the image with respect to these parameters using its regular geometrical moments. A systematic reconstruction-based method for deciding the highest-order Zernike moments required in a classification problem is developed. The quality of the reconstructed image is examined through its comparison to the original one. The orthogonality property of the Zernike moments, which simplifies the process of image reconstruction, makes the suggested feature selection approach practical. Features of each order can also be weighted according to their contribution to the reconstruction process. The superiority of Zernike moment features over regular moments and moment invariants was experimentally verified.

1,971 citations
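
As a rough illustration of the feature described above (a hedged sketch, not the authors' implementation), the following Python computes Zernike-moment magnitudes over the unit disk of an image that is assumed to be already translation- and scale-normalized; the function names and the maximum order are illustrative.

```python
# Hedged sketch: rotation-invariant Zernike-moment magnitudes |Z_nm| over the
# unit disk of a pre-normalized image. Not the authors' code; normalization
# constants and the choice of max_order are illustrative assumptions.
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_nm(rho), defined for n >= |m| with n - |m| even."""
    m = abs(m)
    out = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)))
        out += c * rho ** (n - 2 * s)
    return out

def zernike_magnitudes(img, max_order=8):
    """Return |Z_nm| for all valid (n, m); the magnitudes are rotation-invariant."""
    h, w = img.shape
    y, x = np.mgrid[-1:1:complex(0, h), -1:1:complex(0, w)]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0                      # restrict support to the unit disk
    feats = {}
    for n in range(max_order + 1):
        for m in range(-n, n + 1):
            if (n - abs(m)) % 2:
                continue
            vnm = zernike_radial(n, m, rho) * np.exp(1j * m * theta)
            z = (n + 1) / np.pi * np.sum(img[mask] * np.conj(vnm[mask]))
            feats[(n, m)] = abs(z)
    return feats
```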


Journal ArticleDOI
TL;DR: An accurate, reproducible method for determining the infarct volumes of gray matter structures is presented for use with presently available image analysis systems; the approach minimizes the error introduced by edema, which distorts and enlarges the infarcted tissue and surrounding white matter.
Abstract: An accurate, reproducible method for determining the infarct volumes of gray matter structures is presented for use with presently available image analysis systems. Areas of stained sections with optical densities above that of a threshold value are automatically recognized and measured. This eliminates the potential error and bias inherent in manually delineating infarcted regions. Moreover, the volume of surviving normal gray matter is determined rather than that of the infarct. This approach minimizes the error that is introduced by edema, which distorts and enlarges the infarcted tissue and surrounding white matter.

1,570 citations
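
A minimal sketch of the measurement principle described above: count the pixels of each stained section whose optical density exceeds a threshold (taken here as surviving normal gray matter) and accumulate area times slice spacing into a volume. The parameter names and values are illustrative assumptions, not the paper's.

```python
# Hedged sketch of threshold-based volumetry over a stack of stained sections.
import numpy as np

def normal_gray_matter_volume(sections, od_threshold, pixel_area_mm2, spacing_mm):
    """sections: iterable of 2-D optical-density images from consecutive slices."""
    volume = 0.0
    for od in sections:
        area = np.count_nonzero(od > od_threshold) * pixel_area_mm2
        volume += area * spacing_mm        # simple slab summation over slices
    return volume
```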


Journal ArticleDOI
TL;DR: A definition of local band-limited contrast in images is proposed that assigns a contrast value to every point in the image as a function of the spatial frequency band and is helpful in understanding the effects of image-processing algorithms on the perceived contrast.
Abstract: The physical contrast of simple images such as sinusoidal gratings or a single patch of light on a uniform background is well defined and agrees with the perceived contrast, but this is not so for complex images. Most definitions assign a single contrast value to the whole image, but perceived contrast may vary greatly across the image. Human contrast sensitivity is a function of spatial frequency; therefore the spatial frequency content of an image should be considered in the definition of contrast. In this paper a definition of local band-limited contrast in images is proposed that assigns a contrast value to every point in the image as a function of the spatial frequency band. For each frequency band, the contrast is defined as the ratio of the bandpass-filtered image at the frequency to the low-pass image filtered to an octave below the same frequency (local luminance mean). This definition raises important implications regarding the perception of contrast in complex images and is helpful in understanding the effects of image-processing algorithms on the perceived contrast. A pyramidal image-contrast structure based on this definition is useful in simulating nonlinear, threshold characteristics of spatial vision in both normal observers and the visually impaired.

1,370 citations
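
The definition lends itself to a short sketch. The band split below uses a Gaussian low-pass stack as a stand-in for the paper's filters (an assumption), dividing each bandpass image by the low-pass image roughly an octave below it.

```python
# Hedged sketch of local band-limited contrast: bandpass image divided by the
# local luminance mean (the low-pass image about an octave below). The Gaussian
# band split and sigma values are assumptions, not the paper's exact filters.
import numpy as np
from scipy.ndimage import gaussian_filter

def band_limited_contrast(image, n_bands=4, sigma0=1.0, eps=1e-6):
    """Return one per-pixel contrast map per frequency band."""
    image = image.astype(float)
    lowpass = [image] + [gaussian_filter(image, sigma0 * 2 ** k) for k in range(n_bands)]
    contrasts = []
    for k in range(n_bands):
        bandpass = lowpass[k] - lowpass[k + 1]     # band-limited component
        local_mean = lowpass[k + 1] + eps          # local luminance mean
        contrasts.append(bandpass / local_mean)
    return contrasts
```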


Proceedings ArticleDOI
01 Sep 1990
TL;DR: A new rendering technique is proposed that produces 3-D images with enhanced visual comprehensibility and artificial enhancement processes are separated from geometric processes (projection and hidden surface removal) and physical processes (shading and texture mapping), and performed as postprocesses.
Abstract: We propose a new rendering technique that produces 3-D images with enhanced visual comprehensibility. Shape features can be readily understood if certain geometric properties are enhanced. To achieve this, we develop drawing algorithms for discontinuities, edges, contour lines, and curved hatching. All of them are realized with 2-D image processing operations instead of line tracking processes, so that they can be efficiently combined with conventional surface rendering algorithms. Data about the geometric properties of the surfaces are preserved as Geometric Buffers (G-buffers). Each G-buffer contains one geometric property such as the depth or the normal vector of each pixel. By using G-buffers as intermediate results, artificial enhancement processes are separated from geometric processes (projection and hidden surface removal) and physical processes (shading and texture mapping), and performed as postprocesses. This permits a user to rapidly examine various combinations of enhancement techniques without excessive recomputation, and easily obtain the most comprehensible image. Our method can be widely applied for various purposes. Several of these, edge enhancement, line drawing illustrations, topographical maps, medical imaging, and surface analysis, are presented in this paper.

747 citations
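
A hedged sketch of the post-processing idea: here the only G-buffer kept is a depth buffer, and discontinuity edges are detected and composited purely with 2-D image operations, leaving the rendering pass untouched. The threshold and darkening factor are illustrative assumptions.

```python
# Hedged sketch of G-buffer style enhancement: detect depth discontinuities in
# image space and darken the conventionally shaded image there, as a post-process.
import numpy as np

def depth_discontinuity_edges(depth, threshold=0.05):
    """Mark pixels whose depth differs strongly from a 4-neighbour."""
    d = depth.astype(float)
    diff = np.zeros_like(d)
    dx = np.abs(d[:, 1:] - d[:, :-1])
    dy = np.abs(d[1:, :] - d[:-1, :])
    diff[:, 1:] = np.maximum(diff[:, 1:], dx)
    diff[:, :-1] = np.maximum(diff[:, :-1], dx)
    diff[1:, :] = np.maximum(diff[1:, :], dy)
    diff[:-1, :] = np.maximum(diff[:-1, :], dy)
    return diff > threshold

def enhance(shaded, depth, darken=0.2):
    """Composite the edge map onto the shaded image without re-rendering."""
    out = shaded.astype(float).copy()
    out[depth_discontinuity_edges(depth)] *= darken
    return out
```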



Proceedings ArticleDOI
01 Sep 1990
TL;DR: Computer graphics research has concentrated on creating photo-realistic images of synthetic objects, which communicate surface shading and curvature, as well as the depth relationships of objects in a scene, which are traditionally represented by a rectangular array of pixels that tile the image plane.
Abstract: Computer graphics research has concentrated on creating photo-realistic images of synthetic objects. These images communicate surface shading and curvature, as well as the depth relationships of objects in a scene. These renderings are traditionally represented by a rectangular array of pixels that tile the image plane. As an alternative to photo-realism, it is possible to create abstract images using an ordered collection of brush strokes. These abstract images filter and refine visual information before it is presented to the viewer. By controlling the color, shape, size, and orientation of individual brush strokes, impressionistic paintings of computer generated or photographic images can easily be created.

573 citations
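
A hedged sketch of the brush-stroke idea, reduced to its simplest form: sample the source image at random points and paint fixed-size square strokes in the sampled colours. Stroke shape and orientation control, which the paper discusses, are omitted here; the parameters are illustrative.

```python
# Hedged sketch: random square "brush strokes" coloured by point samples of the
# source image. Stroke count, size, and ordering are illustrative assumptions.
import numpy as np

def paint_strokes(source, n_strokes=5000, size=4, seed=None):
    rng = np.random.default_rng(seed)
    h, w = source.shape[:2]
    canvas = np.zeros_like(source)
    for _ in range(n_strokes):
        y, x = rng.integers(0, h), rng.integers(0, w)
        colour = source[y, x]                      # colour sampled from the image
        y0, y1 = max(0, y - size), min(h, y + size)
        x0, x1 = max(0, x - size), min(w, x + size)
        canvas[y0:y1, x0:x1] = colour              # later strokes overwrite earlier ones
    return canvas
```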


Journal ArticleDOI
TL;DR: A method that combines region growing and edge detection for image segmentation is presented; its success on the tool images is thought to be because the objects shown occupy areas of many pixels, making it easy to select parameters to separate signal information from noise.
Abstract: A method that combines region growing and edge detection for image segmentation is presented. The authors start with a split-and-merge algorithm wherein the parameters have been set up so that an over-segmented image results. Region boundaries are then eliminated or modified on the basis of criteria that integrate contrast with boundary smoothness, variation of the image gradient along the boundary, and a criterion that penalizes for the presence of artifacts reflecting the data structure used during segmentation (quadtree in this case). The algorithms were implemented in the C language on a Sun 3/160 workstation running under the Unix operating system. Simple tool images and aerial photographs were used to test the algorithms. The impression of human observers is that the method is very successful on the tool images and less so on the aerial photograph images. It is thought that the success on the tool images is because the objects shown occupy areas of many pixels, making it easy to select parameters to separate signal information from noise.

567 citations


Journal ArticleDOI
TL;DR: Computer simulation results reveal that most algorithms perform consistently well on images with a bimodal histogram, however, all algorithms break down for a certain ratio of population of object and background pixels in an image, which in practice may arise quite frequently.
Abstract: A comparative performance study of five global thresholding algorithms for image segmentation was conducted. An image database with a wide variety of histogram distributions was constructed. The histogram distribution was changed by varying the object size and the mean difference between object and background. The performance of the five algorithms was evaluated using criterion functions such as the probability of error, shape, and uniformity measures. Attempts have also been made to evaluate the performance of each algorithm on noisy images. Computer simulation results reveal that most algorithms perform consistently well on images with a bimodal histogram. However, all algorithms break down for a certain ratio of population of object and background pixels in an image, which in practice may arise quite frequently. Also, our experiments show that the performances of the thresholding algorithms discussed in this paper are data-dependent. Some analysis is presented for each of the five algorithms based on the performance measures.

556 citations
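
To make the comparison concrete, here is a hedged sketch of one widely used global thresholding algorithm (Otsu's between-class-variance criterion); the five algorithms actually evaluated in the paper are not reproduced here, and the bin count is an assumption.

```python
# Hedged sketch of Otsu's global threshold: pick the grey level that maximizes
# the between-class variance of the histogram.
import numpy as np

def otsu_threshold(image, n_bins=256):
    hist, edges = np.histogram(image.ravel(), bins=n_bins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                          # class probability (background)
    mu = np.cumsum(p * np.arange(n_bins))         # cumulative mean (bin-index units)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    k = np.nanargmax(sigma_b)                     # best split between bins k and k+1
    return edges[k + 1]
```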


Journal ArticleDOI
TL;DR: Using sequences of CIF standard pictures, the interframe motion compensated prediction error with this technique is compared to the other fast methods and the computational complexity of this algorithm is compared against those methods.
Abstract: A fast block-matching algorithm for motion estimation is presented. It is based on a logarithmic step where, in each search step, only four locations are tested. For a motion displacement of w pels/frame, this technique requires 5 + 4 log2(w) computations to locate the best match. Using sequences of CIF standard pictures, the interframe motion-compensated prediction error with this technique is compared to that of the other fast methods. The computational complexity of this algorithm is also compared against those methods.

531 citations
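
A hedged sketch of a logarithmic four-location search of the general kind described above; the exact candidate pattern and the SAD cost used here are assumptions, not the paper's specification.

```python
# Hedged sketch of a logarithmic block-matching search: four candidates around
# the current best per step, step halved each time, roughly matching the
# 5 + 4 log2(w) operation count quoted above.
import numpy as np

def sad(a, b):
    return np.abs(a.astype(int) - b.astype(int)).sum()

def log_search(cur, ref, y, x, block=16, w=8):
    """Motion vector (dy, dx) for the block at (y, x) of `cur`, within about +/- w."""
    target = cur[y:y + block, x:x + block]

    def cost(dy, dx):
        yy, xx = y + dy, x + dx
        if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
            return np.inf
        return sad(target, ref[yy:yy + block, xx:xx + block])

    best, best_cost = (0, 0), cost(0, 0)
    step = max(w // 2, 1)
    while step >= 1:
        cy, cx = best
        for dy, dx in ((-step, -step), (-step, step), (step, -step), (step, step)):
            c = cost(cy + dy, cx + dx)
            if c < best_cost:
                best_cost, best = c, (cy + dy, cx + dx)
        step //= 2
    return best
```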



Journal ArticleDOI
TL;DR: A shape-based interpolation scheme for multidimensional images is presented that not only minimizes user involvement in interactive segmentation, but also leads to more accurate representation and depiction of dynamic as well as static objects.
Abstract: A shape-based interpolation scheme for multidimensional images is presented. This scheme consists of first segmenting the given image data into a binary image, converting the binary image back into a gray image wherein the gray value of a point represents its shortest distance (positive value for points of the object and negative for those outside) from the cross-sectional boundary, and then interpolating the gray image. The set of all points with nonnegative values associated with them in the interpolated image constitutes the interpolated object. The method not only minimizes user involvement in interactive segmentation, but also leads to more accurate representation and depiction of dynamic as well as static objects. The general methodology and the implementation details of the method are presented and compared on a qualitative and quantitative basis to the existing methods. The generality of the proposed scheme is illustrated with a number of medical imaging examples. >
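
A hedged sketch of the interpolation scheme for the simplest case of two binary cross-sections: convert each to a signed distance map (positive inside the object, negative outside), interpolate the grey-valued maps, and threshold at zero. The Euclidean distance transform is an assumption; any distance metric illustrates the same idea.

```python
# Hedged sketch of shape-based interpolation between two binary slices.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(binary_slice):
    b = binary_slice.astype(bool)
    inside = distance_transform_edt(b)         # distance to the boundary, inside
    outside = distance_transform_edt(~b)       # distance to the boundary, outside
    return inside - outside                    # >= 0 on the object, < 0 elsewhere

def interpolate_slices(slice_a, slice_b, t):
    """Interpolated binary slice at fractional position t in [0, 1]."""
    da, db = signed_distance(slice_a), signed_distance(slice_b)
    return (1 - t) * da + t * db >= 0
```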

Journal ArticleDOI
TL;DR: A recursive filtering structure is proposed that drastically reduces the computational effort required for smoothing, performing the first and second directional derivatives, and carrying out the Laplacian of an image.
Abstract: A recursive filtering structure is proposed that drastically reduces the computational effort required for smoothing, performing the first and second directional derivatives, and carrying out the Laplacian of an image. These operations are done with a fixed number of multiplications and additions per output point independently of the size of the neighborhood considered. The key to the approach is, first, the use of an exponentially based filter family and, second, the use of the recursive filtering. Applications to edge detection problems and multiresolution techniques are considered, and an edge detector allowing the extraction of zero-crossings of an image with only 14 operations per output element at any resolution is proposed. Various experimental results are shown. >
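
The idea of scale-independent cost can be illustrated with a much simpler recursive filter than the paper's exponentially based operators. The sketch below (an assumption, not the proposed filter) runs a first-order IIR smoother causally and anti-causally along each axis; the work per pixel is fixed regardless of the effective neighbourhood size set by alpha.

```python
# Hedged sketch of recursive (IIR) smoothing with cost independent of scale:
# a first-order exponential filter, forward and backward, per axis.
import numpy as np

def smooth_1d(x, alpha):
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):                 # causal pass
        y[n] = alpha * y[n - 1] + (1 - alpha) * x[n]
    z = np.empty_like(y)
    z[-1] = y[-1]
    for n in range(len(x) - 2, -1, -1):        # anti-causal pass
        z[n] = alpha * z[n + 1] + (1 - alpha) * y[n]
    return z

def smooth_image(img, alpha=0.7):
    rows = np.apply_along_axis(smooth_1d, 1, img.astype(float), alpha)
    return np.apply_along_axis(smooth_1d, 0, rows, alpha)
```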

Patent
29 Mar 1990
TL;DR: A panoramic image based virtual reality display system as mentioned in this paper is a closed structure having individual display units mounted in all viewable directions therein, with segments of the composite image displayed on respective display units to recreate the panoramic view gathered by the panoramic optical assembly.
Abstract: A panoramic image based virtual reality display system includes a panoramic optical assembly, preferably of substantially spherical coverage, feeding composite optical images to a light sensitive surface of a video camera for storage or further processing in image processing circuitry. Such image processing circuitry includes a special effects generator and image segment circuitry to divide a composite image into a plurality of image segments or sub-segments for display on individual displays of multiple video display assemblies. Such a multiple video display assembly preferably includes a closed structure having individual display units mounted in all viewable directions therein, with segments of the composite image displayed on respective display units to recreate the panoramic view gathered by the panoramic optical assembly. The image processing circuitry may also select a portion or portions of the composite image for display on one or two displays of a head mounted display unit.

Journal ArticleDOI
TL;DR: It is shown that the main image reconstruction methods, namely filtered backprojection and iterative reconstruction, can be directly applied to conformation therapy and first theoretical results are presented.
Abstract: The problem of optimizing the dose distribution for conformation radiotherapy with intensity modulated external beams is similar to the problem of reconstructing a 3D image from its 2D projections. In this paper we analyse the relationship between these problems. We show that the main image reconstruction methods, namely filtered backprojection and iterative reconstruction, can be directly applied to conformation therapy. We examine the features of each of these methods with regard to this new application and we present first theoretical results.

Journal ArticleDOI
TL;DR: A number of task-specific approaches to the assessment of image quality are treated, but only linear estimators or classifiers are permitted, and results are expressed as signal-to-noise ratios (SNR's).
Abstract: A number of task-specific approaches to the assessment of image quality are treated. Both estimation and classification tasks are considered, but only linear estimators or classifiers are permitted. Performance on these tasks is limited by both quantum noise and object variability, and the effects of postprocessing or image-reconstruction algorithms are explicitly included. The results are expressed as signal-to-noise ratios (SNR's). The interrelationships among these SNR's are considered, and an SNR for a classification task is expressed as the SNR for a related estimation task times four factors. These factors show the effects of signal size and contrast, conspicuity of the signal, bias in the estimation task, and noise correlation. Ways of choosing and calculating appropriate SNR's for system evaluation and optimization are also discussed.
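
For the linear-classifier case, the kind of SNR discussed above can be sketched as a Hotelling-type discriminant figure of merit. This is a hedged illustration; the paper's specific factorization into signal size and contrast, conspicuity, bias, and noise-correlation terms is not reproduced.

```python
# Hedged sketch of a linear-observer SNR: detectability of a known mean signal
# difference under a given noise-plus-object-variability covariance.
import numpy as np

def linear_observer_snr(mean_present, mean_absent, covariance):
    """SNR of the optimal linear discriminant for a two-class task."""
    delta = (mean_present - mean_absent).ravel()
    template = np.linalg.solve(covariance, delta)   # Hotelling-type template
    return np.sqrt(delta @ template)                # SNR^2 = delta^T K^-1 delta
```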

Journal ArticleDOI
TL;DR: Through extensive experimentation with noiseless as well as noisy binary images of all English characters, the following conclusions are reached: the MLP outperforms the other three classifiers, especially when noise is present; the nearest-neighbor classifier performs about the same as the NN for the noiseless case.
Abstract: A neural network (NN) based approach for classification of images represented by translation-, scale-, and rotation-invariant features is presented. The utilized network is a multilayer perceptron (MLP) classifier with one hidden layer. Back-propagation learning is used for its training. Two types of features are used: moment invariants derived from geometrical moments of the image, and features based on Zernike moments, which are the mapping of the image onto a set of complex orthogonal polynomials. The performance of the MLP is compared to the Bayes, nearest-neighbor, and minimum-mean-distance statistical classifiers. Through extensive experimentation with noiseless as well as noisy binary images of all English characters (26 classes), the following conclusions are reached: (1) the MLP outperforms the other three classifiers, especially when noise is present; (2) the nearest-neighbor classifier performs about the same as the NN for the noiseless case; (3) the NN can do well even with a very small number of training samples; (4) the NN has a good degree of fault tolerance; and (5) the Zernike-moment-based features possess strong class separability power and are more powerful than moment invariants.
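
Two of the statistical baselines named above are simple enough to sketch directly on any invariant feature vectors (a hedged illustration, not the paper's experimental code):

```python
# Hedged sketch of two baseline classifiers operating on feature vectors such
# as Zernike-moment magnitudes: nearest neighbour and minimum mean distance.
import numpy as np

def nearest_neighbour(train_x, train_y, x):
    return train_y[np.argmin(np.linalg.norm(train_x - x, axis=1))]

def minimum_mean_distance(train_x, train_y, x):
    classes = np.unique(train_y)
    means = np.array([train_x[train_y == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(means - x, axis=1))]
```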


Journal ArticleDOI
TL;DR: The investigation focuses on the design of analysis/synthesis systems for image coding and the perceptual impact of these systems at low bit rates, and the theory, design, and implementation of both recursive and nonrecursive filtering systems are discussed.
Abstract: Analysis/synthesis systems designed for low bit rate image coding, their impact on overall system quality, and their computational complexity are discussed. The investigation focuses on the design of analysis/synthesis systems for image coding and the perceptual impact of these systems at low bit rates. Two objectives are emphasized in developing these systems: confining the total size of the subband images to be equal to the original image size, and designing the filters so that perceptual distortion is not introduced by the analysis/synthesis system. Methods based on circular convolution and symmetric extensions are developed and discussed in detail. The theory, design, and implementation of both recursive and nonrecursive filtering systems are discussed. Methods are introduced which display advantages over conventional quadrature mirror filter based approaches.
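
The first objective, keeping the total subband size equal to the original image size, can be illustrated with the simplest possible two-band split (a Haar pair, applied separably to rows and columns). This is a hedged stand-in, not the recursive or symmetric-extension designs developed in the paper.

```python
# Hedged sketch: a critically sampled 2-band split, so the low and high bands
# together have exactly as many samples as the input. Haar filters are used
# only for simplicity; they are not the paper's filter designs.
import numpy as np

def analysis_1d(x):
    """Split an even-length signal into half-length low and high bands."""
    lo = (x[0::2] + x[1::2]) / 2.0
    hi = (x[0::2] - x[1::2]) / 2.0
    return lo, hi

def synthesis_1d(lo, hi):
    """Perfectly reconstruct the original signal from the two bands."""
    x = np.empty(2 * len(lo))
    x[0::2] = lo + hi
    x[1::2] = lo - hi
    return x
```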

Proceedings ArticleDOI
30 Sep 1990
TL;DR: A coding scheme for secretly embedding character information into a dithered multilevel image is presented, and this scheme is available for storing a picture in a database or communication system with secret information.
Abstract: A coding scheme for secretly embedding character information into a dithered multilevel image is presented. This scheme inputs both a monotone image and secret information, which is converted to binary sequences, and it outputs a single dithered image. This image contains character data of about 2 kbytes in a dithered bilevel image or 3 kbytes in a dithered three-level image of 256*256 pixels, and it appears to be an ordinary dithered image. This scheme is available for storing a picture with secret information in a database or communication system.
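
To make the idea of hiding data in a dithered image concrete, the sketch below embeds one bit per 4x4 block of an ordered-dithered bilevel image in the parity of white pixels, flipping the pixel closest to its threshold when the parity must change. This is an illustrative construction, not the paper's coding scheme.

```python
# Hedged illustrative scheme (not the paper's method): ordered dithering with a
# Bayer matrix, embedding one bit per 4x4 block in the parity of white pixels.
import numpy as np

BAYER4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) / 16.0

def dither_embed(gray, bits):
    """gray: float image in [0, 1], dimensions divisible by 4; bits: 0/1 values."""
    h, w = gray.shape
    thresh = np.tile(BAYER4, (h // 4, w // 4))
    out = (gray > thresh).astype(np.uint8)
    it = iter(bits)
    for by in range(0, h, 4):
        for bx in range(0, w, 4):
            try:
                b = next(it)
            except StopIteration:
                return out
            block = out[by:by + 4, bx:bx + 4]
            if block.sum() % 2 != b:
                # flip the pixel whose grey value is closest to its threshold
                d = np.abs(gray[by:by + 4, bx:bx + 4] - thresh[by:by + 4, bx:bx + 4])
                yy, xx = np.unravel_index(np.argmin(d), d.shape)
                block[yy, xx] ^= 1
    return out
```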

Journal ArticleDOI
01 Apr 1990
TL;DR: The basic theory and applications of a set-theoretic approach to image analysis called mathematical morphology are reviewed in this article, where the concepts of mathematical morphology are used to quantify geometrical structure in signals and to illuminate the ways that morphological systems can enrich the theory and applications of multidimensional signal processing.
Abstract: The basic theory and applications of a set-theoretic approach to image analysis called mathematical morphology are reviewed. The goals are to show how the concepts of mathematical morphology quantify geometrical structure in signals and to illuminate the ways that morphological systems can enrich the theory and applications of multidimensional signal processing. The topics covered include: applications to nonlinear filtering (morphological and rank-order filters, multiscale smoothing, morphological sampling, and morphological correlation); applications to image analysis (feature extraction, shape representation and description, size distributions, and fractals); and representation theorems, which show how a large class of nonlinear and linear signal operators can be realized as a combination of simple morphological operations.
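
A hedged sketch of the elementary operations the review builds on, using scipy's grey-scale morphology; the structuring-element sizes are illustrative.

```python
# Hedged sketch of basic morphological operators: opening, closing, and the
# morphological gradient, built from grey-scale erosion and dilation.
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def opening(img, size=(5, 5)):
    """Erosion then dilation: removes bright details smaller than the element."""
    return grey_dilation(grey_erosion(img, size=size), size=size)

def closing(img, size=(5, 5)):
    """Dilation then erosion: fills dark details smaller than the element."""
    return grey_erosion(grey_dilation(img, size=size), size=size)

def morphological_gradient(img, size=(3, 3)):
    """Dilation minus erosion, a simple nonlinear edge detector."""
    img = np.asarray(img, dtype=float)
    return grey_dilation(img, size=size) - grey_erosion(img, size=size)
```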

Journal Article
TL;DR: It was found that random errors in determining the brain contour are well tolerated and registration by the principal axes transformation can be accomplished with typical errors in the range of approximately 1 mm.
Abstract: We have developed a computational technique suitable for registration of sets of image data covering the whole brain volume which are translated and rotated with respect to one another. The same computational method may be used to register pairs of tomographic brain images which are rotated and translated in the transverse section plane. The technique is based on the classical theory of rigid bodies, employing as its basis the principal axes transformation. The performance of the method was studied by simulation and with image data from PET, XCT, and MRI. It was found that random errors in determining the brain contour are well tolerated. Progressively coarser axial sampling of data sets led to some degradation, but acceptable performance was obtained with axial sampling distances up to 10 mm. Given adequate digital sampling of the object volume, we conclude that registration by the principal axes transformation can be accomplished with typical errors in the range of approximately 1 mm. The advantages of the technique are simplicity and speed of computation.
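
A hedged sketch of the principal axes transformation for two binary brain masks: centroids supply the translation and the eigenvectors of the second-moment matrices supply the rotation. Eigenvector sign and ordering ambiguities, which a practical implementation must resolve, are ignored here for brevity.

```python
# Hedged sketch of registration by the principal axes transformation.
import numpy as np

def principal_axes(mask):
    """Centroid and orthonormal principal axes of a binary object mask."""
    coords = np.argwhere(mask).astype(float)
    centroid = coords.mean(axis=0)
    centred = coords - centroid
    cov = centred.T @ centred / len(centred)     # second-moment (inertia) matrix
    _, axes = np.linalg.eigh(cov)                # columns are the principal axes
    return centroid, axes

def rigid_transform(mask_moving, mask_fixed):
    """Rotation R and translation t with x_fixed ~= R @ x_moving + t."""
    c_m, a_m = principal_axes(mask_moving)
    c_f, a_f = principal_axes(mask_fixed)
    R = a_f @ a_m.T
    t = c_f - R @ c_m
    return R, t
```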

Journal ArticleDOI
TL;DR: A coding scheme is presented based on a single fixed binary encoded illumination pattern, which contains all the information required to identify the individual stripes visible in the camera image, and a prototype measurement system based on this coding principle is presented.
Abstract: The problem of stripe identification in range image acquisition systems based on triangulation with periodically structured illumination is discussed. A coding scheme is presented based on a single fixed binary encoded illumination pattern, which contains all the information required to identify the individual stripes visible in the camera image. Every sample point indicated by the light pattern is made identifiable by means of a binary signature, which is locally shared among its closest neighbors. The applied code is derived from pseudonoise sequences, and it is optimized so that it can make the identification fault-tolerant to the largest extent. A prototype measurement system based on this coding principle is presented. Experimental results obtained with the measurement system are also presented.

Journal ArticleDOI
TL;DR: It is demonstrated that the Hermite transform is in better agreement with human visual modeling than Gabor expansions, and therefore the scheme is presented as an analysis/resynthesis system.
Abstract: The author introduces a scheme for the local processing of visual information, called the Hermite transform. The problem is addressed from the point of view of image coding, and therefore the scheme is presented as an analysis/resynthesis system. The objectives of the present work, however, are not restricted to coding. The analysis part is designed so that it can also serve applications in the area of computer vision. Indeed, derivatives of Gaussians, which have found widespread application in feature detection over the past few years, play a central role in the Hermite analysis. It is also argued that the proposed processing scheme is in close agreement with current insight into the image processing that is carried out by the human visual system. In particular, it is demonstrated that the Hermite transform is in better agreement with human visual modeling than Gabor expansions. >

Patent
05 Nov 1990
TL;DR: In this article, a time series of successive relatively high-resolution frames of image data, any frame of which may or may not include a graphical representation of one or more predetermined specific members of a given generic class (e.g., human beings), is examined in order to recognize the identity of a specific member if that member's image is included in the time series.
Abstract: A time series of successive relatively high-resolution frames of image data, any frame of which may or may not include a graphical representation of one or more predetermined specific members (e.g., particular known persons) of a given generic class (e.g., human beings), is examined in order to recognize the identity of a specific member if that member's image is included in the time series. The frames of image data may be examined in real time at various resolutions, starting with a relatively low resolution, to detect whether some earlier-occurring frame includes any of a group of image features possessed by an image of a member of the given class. The image location of a detected image feature is stored and then used in a later-occurring, higher resolution frame to direct the examination only to the image region of the stored location in order to (1) verify the detection of the aforesaid image feature, and (2) detect one or more other of the group of image features, if any is present in that image region of the frame being examined. By repeating this type of examination for later and later occurring frames, the accumulated detected features can first reliably recognize the detected image region to be an image of a generic object of the given class, and later can reliably recognize the detected image region to be an image of a certain specific member of the given class.

Book
13 Dec 1990
TL;DR: The image as an analogue signal, scanning of an image by an aperture, and extension of the aperture notion are illustrated.
Abstract: Contents: 1. The image as an analogue signal. 2. Scanning of an image by an aperture. 3. Extension of the aperture notion. 4. Photographic images. 5. Digitizing and reconstructing images. 6. Basic techniques of digital image processing. 7. Algebraic operations between images. 8. Coloured images. 9. Linear processing of signals and images.

Journal ArticleDOI
TL;DR: The purpose of this paper is to familiarize the reader with the basic concepts of the algebra and to provide a general overview of its methodology.
Abstract: This paper is the first in a sequence of papers describing an algebraic structure for image processing that has become known as the AFATL Standard Image Algebra. This algebra provides a common mathematical environment for image processing algorithm development and methodologies for algorithm optimization, comparison, and performance evaluation. In addition, the image algebra provides a powerful algebraic language for image processing which, if properly embedded into a high level programming language, will greatly increase a programmer's productivity as programming tasks are greatly simplified due to replacement of large blocks of code by short algebraic statements. The purpose of this paper is to familiarize the reader with the basic concepts of the algebra and to provide a general overview of its methodology.

Journal ArticleDOI
TL;DR: The conclusion that can be drawn from this study is that it is useful to combine classifiers up to a certain order, and here it turned out that groups of four classifiers are optimal.
Abstract: This paper focuses on the problem of texture classification using statistical descriptors based on the co-occurrence matrices. A major part of the paper is dedicated to the derivation of a general model for analysis and interpretation of experimental results in texture analysis when individual and groups of classifiers are being used, and a technique for evaluating their performance. Using six representative classifiers, that is, angular second moment f1, contrast f2, inverse difference moment f5, entropy f9, and information measures of correlation I and II, f12 and f13, we give a systematic study of the discrimination power of all 63 combinations of these classifiers on 13 samples of Brodatz textures. The conclusion that can be drawn from our study is that it is useful to combine classifiers up to a certain order. Here it turned out that groups of four classifiers are optimal.
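
A hedged sketch of the descriptors involved: a grey-level co-occurrence matrix for one displacement and three of the statistics listed above (angular second moment f1, contrast f2, entropy f9). The quantization level and displacement are illustrative assumptions.

```python
# Hedged sketch: co-occurrence matrix for a non-negative displacement (dy, dx)
# and three Haralick-type statistics computed from it.
import numpy as np

def glcm(img, levels=16, dy=0, dx=1):
    """Normalised grey-level co-occurrence matrix."""
    q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    h, w = q.shape
    a = q[:h - dy, :w - dx]                      # reference pixels
    b = q[dy:, dx:]                              # displaced neighbours
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    return P / P.sum()

def cooccurrence_features(P):
    i, j = np.indices(P.shape)
    asm = np.sum(P ** 2)                               # f1: angular second moment
    contrast = np.sum((i - j) ** 2 * P)                # f2: contrast
    entropy = -np.sum(P[P > 0] * np.log(P[P > 0]))     # f9: entropy
    return asm, contrast, entropy
```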

Journal Article
TL;DR: This research compares the use of an ANN back propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set and finds the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure.
Abstract: Recent research has shown an artificial neural network (ANN) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison between conventional and ANN classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.

Journal ArticleDOI
TL;DR: The expectation-maximization algorithm is proposed to optimize the nonlinear likelihood function in an efficient way and low-order parametric image and blur models are incorporated into the identification method.
Abstract: A maximum-likelihood approach to the blur identification problem is presented. The expectation-maximization algorithm is proposed to optimize the nonlinear likelihood function in an efficient way. In order to improve the performance of the identification algorithm, low-order parametric image and blur models are incorporated into the identification method. The resulting iterative technique simultaneously identifies and restores noisy blurred images. >

Journal ArticleDOI
TL;DR: Understanding the optical behavior of the microscope system has indicated how to optimize specimen preparation, data collection, and processing protocols to obtain significantly improved images.