
Showing papers on "Image processing published in 1992"


Journal ArticleDOI
TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.
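
The constrained optimization can be stated compactly. Writing $u_0$ for the noisy image and $u$ for the restored one, the Rudin-Osher-Fatemi model minimizes total variation subject to constraints fixing the mean and variance of the residual to those of the noise (the normalization below follows later presentations and may differ from the 1992 paper):

```latex
\min_{u}\ \int_{\Omega} |\nabla u| \, dx\,dy
\quad \text{subject to} \quad
\int_{\Omega} (u - u_0) \, dx\,dy = 0,
\qquad
\int_{\Omega} (u - u_0)^2 \, dx\,dy = \sigma^2 |\Omega| .
```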

15,225 citations


Journal ArticleDOI
TL;DR: This paper organizes this material by establishing the relationship between the variations in the images and the type of registration techniques which can most appropriately be applied, and establishing a framework for understanding the merits and relationships between the wide variety of existing techniques.
Abstract: Registration is a fundamental task in image processing used to match two or more pictures taken, for example, at different times, from different sensors, or from different viewpoints. Virtually all large systems which evaluate images require the registration of images, or a closely related operation, as an intermediate step. Specific examples of systems where image registration is a significant component include matching a target with a real-time image of a scene for target recognition, monitoring global land usage using satellite images, matching stereo images to recover shape for autonomous navigation, and aligning images from different medical modalities for diagnosis. Over the years, a broad range of techniques has been developed for various types of data and problems. These techniques have been independently studied for several different applications, resulting in a large body of research. This paper organizes this material by establishing the relationship between the variations in the images and the type of registration technique which can most appropriately be applied. Three major types of variations are distinguished. The first type consists of the variations due to differences in acquisition which cause the images to be misaligned. To register images, a spatial transformation is found which will remove these variations. The class of transformations which must be searched to find the optimal transformation is determined by knowledge about the variations of this type. The transformation class in turn influences the general technique that should be taken. The second type consists of variations which are also due to differences in acquisition, but cannot be modeled easily, such as lighting and atmospheric conditions. This type usually affects intensity values, but the variations may also be spatial, such as perspective distortions. The third type consists of differences in the images that are of interest, such as object movements, growths, or other scene changes. Variations of the second and third types are not directly removed by registration, but they make registration more difficult since an exact match is no longer possible. In particular, it is critical that variations of the third type are not removed. Knowledge about the characteristics of each type of variation affects the choice of feature space, similarity measure, search space, and search strategy which will make up the final technique. All registration techniques can be viewed as different combinations of these choices. This framework is useful for understanding the merits and relationships between the wide variety of existing techniques and for assisting in the selection of the most suitable technique for a specific problem.
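
As a deliberately minimal illustration of the decomposition the survey proposes, the sketch below registers two images under pure translation: raw intensities as the feature space, normalized cross-correlation as the similarity measure, integer shifts as the search space, and exhaustive search as the strategy. All names and parameters here are illustrative, not the survey's.

```python
import numpy as np

def register_translation(reference, target, max_shift=10):
    """Brute-force search over integer shifts, scoring each candidate
    with normalized cross-correlation (NCC) on raw intensities."""
    a = reference - reference.mean()
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(target, (dy, dx), axis=(0, 1))
            b = shifted - shifted.mean()
            ncc = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            if ncc > best_score:
                best_score, best_shift = ncc, (dy, dx)
    return best_shift, best_score
```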

4,769 citations


Journal ArticleDOI
TL;DR: In this article, a new version of the Perona and Malik theory for edge detection and image restoration is proposed, which keeps all the improvements of the original model and avoids its drawbacks.
Abstract: A new version of the Perona and Malik theory for edge detection and image restoration is proposed. This new version keeps all the improvements of the original model and avoids its drawbacks: it is proved to be stable in the presence of noise, with existence and uniqueness results. Numerical experiments on natural images are presented.
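
The key regularization in this model (due to Catté, Lions, Morel, and Coll) is that the diffusivity is driven by a Gaussian-smoothed gradient rather than the raw gradient, which is what yields existence, uniqueness, and stability under noise:

```latex
\frac{\partial u}{\partial t}
  = \operatorname{div}\!\left( g\!\left( |\nabla G_\sigma * u| \right) \nabla u \right),
\qquad u(\cdot, 0) = u_0 ,
```

where $G_\sigma$ is a Gaussian kernel of width $\sigma$ and $g$ is a decreasing diffusivity such as $g(s) = 1/(1 + s^2/\lambda^2)$.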

2,565 citations


Journal ArticleDOI
TL;DR: It is found that decorrelation increases with time but that digital terrain model generation remains feasible, and such a technique could provide a global digital terrain map.
Abstract: A radar interferometric technique for topographic mapping of surfaces, implemented utilizing a single synthetic aperture radar (SAR) system in a nearly repeating orbit, is discussed. The authors characterize the various sources contributing to the echo correlation statistics, and isolate the term which most closely describes surficial change. They then examine the application of this approach to topographic mapping of vegetated surfaces which may be expected to possess varying backscatter over time. It is found that decorrelation increases with time but that digital terrain model generation remains feasible. The authors present such a map of a forested area in Oregon which also includes some nearly unvegetated lava flows. Such a technique could provide a global digital terrain map.
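
The echo correlation statistics in question are typically summarized by the sample coherence of the two co-registered complex SAR images over a local window of $N$ pixels (our notation, not necessarily the paper's):

```latex
\hat{\gamma} \;=\;
\frac{\left| \sum_{n=1}^{N} s_1^{(n)} \, s_2^{(n)\ast} \right|}
     {\sqrt{\sum_{n=1}^{N} |s_1^{(n)}|^2 \;\sum_{n=1}^{N} |s_2^{(n)}|^2}} ,
```

with $\hat{\gamma}$ near 1 for stable surfaces and temporal decorrelation driving it toward 0.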

2,167 citations


Journal ArticleDOI
TL;DR: A computer algorithm for the three-dimensional alignment of PET images is described that relies on anatomic information in the images rather than on external fiducial markers; it can be applied retrospectively, or prospectively during acquisition to reposition the scanner gantry and bed to match an earlier study.
Abstract: A computer algorithm for the three-dimensional (3D) alignment of PET images is described. To align two images, the algorithm calculates the ratio of one image to the other on a voxel-by-voxel basis and then iteratively moves the images relative to one another to minimize the variance of this ratio across voxels. Since the method relies on anatomic information in the images rather than on external fiducial markers, it can be applied retrospectively. Validation studies using a 3D brain phantom show that the algorithm aligns images acquired at a wide variety of positions with maximum positional errors that are usually less than the width of a voxel (1.745 mm). Simulated cortical activation sites do not interfere with alignment. Global errors in quantitation from realignment are less than 2%. Regional errors due to partial volume effects are largest when the gantry is rotated by large angles or when the bed is translated axially by one-half the interplane distance. To minimize such partial volume effects, the algorithm can be used prospectively, during acquisition, to reposition the scanner gantry and bed to match an earlier study. Computation requires 3-6 min on a Sun SPARCstation 2.
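
The cost being minimized is easy to state in code. A sketch of the variance-of-ratio criterion for one candidate alignment is below; the iterative search over rigid-body parameters and the paper's masking details are omitted, and the names are ours:

```python
import numpy as np

def variance_of_ratio(image_a, image_b, mask=None, eps=1e-6):
    """Alignment cost: normalized standard deviation of the voxelwise
    ratio A/B over a (crude) foreground mask; lower is better."""
    if mask is None:
        mask = (image_a > 0) & (image_b > eps)   # toy brain mask
    ratio = image_a[mask] / image_b[mask]
    return ratio.std() / ratio.mean()
```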

2,018 citations


Book
01 Jan 1992
TL;DR: The Image Processing Handbook, Seventh Edition delivers an accessible and up-to-date treatment of image processing, offering broad coverage and comparison of algorithms, approaches, and outcomes.
Abstract: Consistently rated as the best overall introduction to computer-based image processing, The Image Processing Handbook covers two-dimensional (2D) and three-dimensional (3D) imaging techniques, image printing and storage methods, image processing algorithms, image and feature measurement, quantitative image measurement analysis, and more. Incorporating image processing and analysis examples at all scales, from nano- to astro-, this Seventh Edition features a greater range of computationally intensive algorithms than previous versions; provides better organization, more quantitative results, and new material on recent developments; includes completely rewritten chapters on 3D imaging and a thoroughly revamped chapter on statistical analysis; contains more than 1700 references to theory, methods, and applications in a wide variety of disciplines; and presents 500+ entirely new figures and images, with more than two-thirds appearing in color. The Image Processing Handbook, Seventh Edition delivers an accessible and up-to-date treatment of image processing, offering broad coverage and comparison of algorithms, approaches, and outcomes.

1,858 citations


Journal ArticleDOI
TL;DR: In contrast to acquisition-based noise reduction methods, a postprocessing method based on anisotropic diffusion is proposed, which overcomes the major drawbacks of conventional filter methods, namely the blurring of object boundaries and the suppression of fine structural details.
Abstract: In contrast to acquisition-based noise reduction methods, a postprocessing method based on anisotropic diffusion is proposed. Extensions of this technique support 3-D and multiecho magnetic resonance imaging (MRI), incorporating higher spatial and spectral dimensions. The procedure overcomes the major drawbacks of conventional filter methods, namely the blurring of object boundaries and the suppression of fine structural details. The simplicity of the filter algorithm permits an efficient implementation, even on small workstations. The efficient noise reduction and sharpening of object boundaries are demonstrated by applying this image processing technique to 2-D and 3-D spin echo and gradient echo MR data. The potential advantages for MRI, diagnosis, and computerized analysis are discussed in detail.
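
A minimal 2-D discretization of the Perona-Malik-type diffusion this work builds on is sketched below; the paper's 3-D and multiecho extensions, boundary handling, and diffusivity choice are not reproduced, and the parameters are illustrative:

```python
import numpy as np

def anisotropic_diffusion(u, n_iter=20, kappa=30.0, dt=0.2):
    """Edge-preserving smoothing: conduction falls off with the local
    gradient magnitude, so boundaries stay sharp while noise is removed.
    Borders are treated as periodic (via np.roll), adequate for a sketch."""
    u = u.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)        # diffusivity function
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u             # differences to the
        ds = np.roll(u, -1, axis=0) - u            # four nearest neighbors
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```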

1,229 citations


Proceedings ArticleDOI
01 Jul 1992
TL;DR: Metamorphosis between two or more images over time is a useful visual technique, often used for educational or entertainment purposes, and this paper presents a contemporary solution to the visual transformation problem.
Abstract: 2.1 Conventional Metamorphosis Techniques. Metamorphosis between two or more images over time is a useful visual technique, often used for educational or entertainment purposes. Traditional filmmaking techniques for this effect include clever cuts (such as a character exhibiting changes while running through a forest and passing behind several trees) and optical cross-dissolve, in which one image is faded out while another is simultaneously faded in (with makeup changes, appliances, or object substitution). Several classic horror films illustrate the process: who could forget the hair-raising transformation of the Wolfman, or the dramatic metamorphosis from Dr. Jekyll to Mr. Hyde? This paper presents a contemporary solution to the visual transformation problem.
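
The optical cross-dissolve described in the excerpt is simply a time-varying convex combination of the two images; the paper's contribution is to add a feature-based geometric warp before blending. The dissolve alone, for reference:

```python
import numpy as np

def cross_dissolve(image_a, image_b, t):
    """Fade image_a out while image_b fades in; t runs from 0 to 1."""
    return (1.0 - t) * np.asarray(image_a, float) + t * np.asarray(image_b, float)
```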

1,130 citations


01 Jan 1992
TL;DR: In this article, it is shown that cross-correlation interrogation of successive single-exposure frames can be used to measure the separation of pairs of particle images between frames, and that the method can be optimized in terms of spatial resolution, detection rate, accuracy, and reliability.
Abstract: To improve the performance of particle image velocimetry in measuring instantaneous velocity fields, direct cross-correlation of image fields can be used in place of auto-correlation methods of interrogation of double- or multiple-exposure recordings. With improved speed of photographic recording and increased resolution of video array detectors, cross-correlation methods of interrogation of successive single-exposure frames can be used to measure the separation of pairs of particle images between successive frames. By knowing the extent of image shifting used in a multiple-exposure and by a priori knowledge of the mean flow-field, the cross-correlation of different sized interrogation spots with known separation can be optimized in terms of spatial resolution, detection rate, accuracy and reliability.
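
The core operation is cross-correlating an interrogation window from the first frame against the corresponding window in the second and reading the mean particle-image displacement off the correlation peak. A sketch using FFT-based circular correlation (window choice, sub-pixel peak fitting, and names are ours):

```python
import numpy as np

def window_displacement(win1, win2):
    """Displacement of win2 relative to win1, from the peak of their
    circular cross-correlation computed via the FFT."""
    f1 = np.fft.fft2(win1 - win1.mean())
    f2 = np.fft.fft2(win2 - win2.mean())
    corr = np.real(np.fft.ifft2(f1.conj() * f2))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)  # (dy, dx) in pixels
```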

1,101 citations


Book
03 Jan 1992
TL;DR: An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences and is robust in that it can handle image sequences that are quantized rather coarsely in space and time.
Abstract: Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image.
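
This is the Horn-Schunck algorithm, and its iterative scheme is compact enough to quote. A sketch with simple derivative and neighborhood-average estimates (parameter names are ours; the paper's derivative stencils differ):

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Optical flow under a global smoothness constraint.
    Returns per-pixel flow components (u, v)."""
    I1, I2 = I1.astype(float), I2.astype(float)
    Iy, Ix = np.gradient(I1)          # spatial derivatives
    It = I2 - I1                      # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    avg = lambda f: 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                            np.roll(f, 1, 1) + np.roll(f, -1, 1))
    for _ in range(n_iter):
        u_bar, v_bar = avg(u), avg(v)
        t = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_bar - Ix * t            # classic Horn-Schunck update
        v = v_bar - Iy * t
    return u, v
```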

901 citations


Journal ArticleDOI
P.B.L. Meijer
TL;DR: Computerized sampling of the system output and subsequent calculation of the approximate inverse (sound-to-image) mapping provided the first convincing experimental evidence for the preservation of visual information in sound representations of complicated images.
Abstract: An experimental system for the conversion of images into sound patterns was designed to provide auditory image representations within some of the known limitations of the human hearing system, possibly as a step towards the development of a vision substitution device for the blind. The application of an invertible (one-to-one) image-to-sound mapping ensures the preservation of visual information. The system implementation involves a pipelined special-purpose computer connected to a standard television camera. A novel design and the use of standard components have made for a low-cost portable prototype conversion system with a power dissipation suitable for battery operation. Computerized sampling of the system output and subsequent calculation of the approximate inverse (sound-to-image) mapping provided the first convincing experimental evidence for the preservation of visual information in sound representations of complicated images.
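
The published mapping scans the image column by column, encoding vertical position as frequency and brightness as loudness. A toy version of such an invertible image-to-sound mapping is below; the rates, frequency range, and names are arbitrary choices of ours, not the paper's:

```python
import numpy as np

def image_to_sound(image, col_duration=0.01, f_lo=500.0, f_hi=5000.0,
                   sample_rate=44100):
    """Columns play left to right; each row contributes a sinusoid whose
    frequency encodes height and whose amplitude encodes brightness.
    `image` is a 2-D grayscale array with values in [0, 1]."""
    n_rows, n_cols = image.shape
    freqs = np.linspace(f_hi, f_lo, n_rows)   # top of the image = high pitch
    t = np.arange(int(col_duration * sample_rate)) / sample_rate
    tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
    return np.concatenate([image[:, c] @ tones for c in range(n_cols)])
```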

Book
01 Jan 1992
TL;DR: The first book to give a unified and coherent exposition of orthogonal signal decomposition techniques, Multiresolution Signal Decomposition is intended for graduate students and research and development practitioners engaged in signal processing applications in voice and image processing, multimedia, and telecommunications.
Abstract: From the Publisher: Multiresolution Signal Decomposition: Transforms, Subbands, and Wavelets, Second Edition is the first book to give a unified and coherent exposition of orthogonal signal decomposition techniques. Advances in the field of electrical engineering/computer science have occurred since the first edition was published in 1992. This second edition addresses new developments in applications-related chapters, especially in Chapter 4, "Filterbank Families: Design and Performance," which is greatly expanded. Also included are the most recent applications of orthogonal transforms in digital communications and multimedia. Multiresolution Signal Decomposition: Transforms, Subbands, and Wavelets, Second Edition is intended for graduate students and research and development practitioners engaged in signal processing applications in voice and image processing, multimedia, and telecommunications.
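
A single level of the kind of two-channel orthogonal decomposition the book treats, using the Haar filter pair for brevity (the book covers far more general filter banks); running synthesis after analysis reproduces the input exactly:

```python
import numpy as np

def haar_analysis(x):
    """Split an even-length 1-D signal into lowpass and highpass
    half-rate subbands (orthogonal Haar filter bank)."""
    x = np.asarray(x, dtype=float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return lo, hi

def haar_synthesis(lo, hi):
    """Inverse of haar_analysis: perfect reconstruction."""
    x = np.empty(2 * lo.size)
    x[0::2] = (lo + hi) / np.sqrt(2.0)
    x[1::2] = (lo - hi) / np.sqrt(2.0)
    return x
```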


Journal ArticleDOI
TL;DR: Projection reconstruction techniques are shown to have intrinsic advantages over spin‐warp methods with respect to diminished artifacts from respiratory motion, and respiratory‐ordered view angle (ROVA) acquisition is found to diminish residual streaking significantly by reducing interview inconsistencies.
Abstract: Projection reconstruction (PR) techniques are shown to have intrinsic advantages over spin-warp (2DFT) methods with respect to diminished artifacts from respiratory motion. The benefits result from (1) portrayal of artifacts as radial streaks, with the amplitude smallest near the moving elements; (2) streak deployment perpendicular to the direction of motion of moving elements and often residing outside the anatomic boundaries of the subject; (3) inherent signal averaging of low spatial frequencies from oversampling of central k-space data. In addition, respiratory-ordered view angle (ROVA) acquisition is found to diminish residual streaking significantly by reducing interview inconsistencies. Comparisons of 2DFT and PR acquisitions are made with and without ROVA. Reconstructions from magnitude-only projections are found to have increased streaks from motion-induced phase shifts.

Patent
03 Aug 1992
TL;DR: In this article, a digital image processing system including a microprocessor, random access memory, storage memory, input means, display means, and suitable logic is presented for a conventional graphical user interface, such as a windows environment.
Abstract: A digital image processing system including a microprocessor, random access memory, storage memory, input means, display means, and suitable logic to provide existing digital image processing operations in a conventional graphical user interface, such as a windows environment, which allows the user to implement image processing operations on a selected portion of a movie without utilizing a primitive special programming language.

Journal ArticleDOI
01 Feb 1992
TL;DR: A software tool that facilitates the development of image reconstruction algorithms and the design of optimal capacitance sensors for a capacitance-based 12-electrode tomographic flow imaging system are described.
Abstract: A software tool that facilitates the development of image reconstruction algorithms, and the design of optimal capacitance sensors for a capacitance-based 12-electrode tomographic flow imaging system, are described. The core of this software tool is the finite element (FE) model of the sensor, which is implemented in the OCCAM-2 language and run on Inmos T800 transputers. Using the system model, the in-depth study of the capacitance sensing fields and the generation of flow model data are made possible, which assists, in a systematic approach, the design of an improved image-reconstruction algorithm. This algorithm is implemented on a network of transputers to achieve real-time performance. It is found that the selection of the geometric parameters of a 12-electrode sensor has significant effects on the sensitivity distributions of the capacitance fields and on the linearity of the capacitance data. As a consequence, the fidelity of the reconstructed images is affected. Optimal sensor designs can, therefore, be provided by accommodating these effects.
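
For context, the simplest reconstruction baseline in capacitance tomography is linear back-projection through a sensitivity matrix of exactly the kind an FE sensor model produces; the paper's improved algorithm is not reproduced here, and the names are ours:

```python
import numpy as np

def linear_back_projection(capacitances, sensitivity):
    """Linear back-projection: `capacitances` holds one normalized value
    per electrode pair (66 pairs for 12 electrodes); `sensitivity` is a
    (n_pairs, n_pixels) matrix from a finite element sensor model."""
    image = sensitivity.T @ capacitances
    weight = sensitivity.sum(axis=0)          # per-pixel normalization
    return image / np.maximum(weight, 1e-12)
```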

Journal ArticleDOI
01 Dec 1992
TL;DR: In this article, a family of nonlinear filters based on order statistics is presented, and the probabilistic and deterministic properties of the best known and most widely used filter, the median filter, are discussed.
Abstract: A family of nonlinear filters based on order statistics is presented. A mathematical tool derived through robust estimation theory, order statistics has allowed engineers to develop nonlinear filters with excellent robustness properties. These filters are well suited to digital image processing because they preserve the edges and the fine details of an image much better than conventional linear filters. The probabilistic and deterministic properties of the best known and most widely used filter in this family, the median filter, are discussed. In addition, the authors consider filters that, while not based on order statistics, are related to them through robust estimation theory. A table that ranks nonlinear filters under a variety of performance criteria is included. Most of the topics treated are very active research areas, and the applications are varied, including HDTV, multichannel signal processing of geophysical and ECG/EEG data, and a variety of telecommunications applications.
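
The median filter itself replaces each sample by the median of its neighborhood, which rejects impulsive noise while preserving edges. A direct, unoptimized 2-D version (window shape and border handling are our choices):

```python
import numpy as np

def median_filter(image, size=3):
    """Sliding-window median over a square window of odd `size`;
    borders are handled by reflection."""
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```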

Journal ArticleDOI
TL;DR: The document image acquisition process and the knowledge base that must be entered into the system to process a family of page images are described, as is the process by which the X-Y tree data structure converts a 2-D page-segmentation problem into a series of 1-D string-parsing problems that can be tackled using conventional compiler tools.
Abstract: Gobbledoc, a system providing remote access to stored documents, which is based on syntactic document analysis and optical character recognition (OCR), is discussed. In Gobbledoc, image processing, document analysis, and OCR operations take place in batch mode when the documents are acquired. The document image acquisition process and the knowledge base that must be entered into the system to process a family of page images are described. The process by which the X-Y tree data structure converts a 2-D page-segmentation problem into a series of 1-D string-parsing problems that can be tackled using conventional compiler tools is also described. Syntactic analysis is used in Gobbledoc to divide each page into labeled rectangular blocks. Blocks labeled text are converted by OCR to obtain a secondary (ASCII) document representation. Since such symbolic files are better suited for computerized search than for human access to the document content, and because too many visual layout clues are lost in the OCR process (including some special characters), Gobbledoc preserves the original block images for human browsing. Storage, networking, and display issues specific to document images are also discussed.
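
The X-Y tree idea, recursively cutting the page at whitespace gaps in alternating directions so that each 2-D segmentation step reduces to scanning a 1-D projection profile, can be sketched as follows (gap threshold, stopping rule, and names are ours; real layouts need more care):

```python
import numpy as np

def xy_cut(ink, min_gap=5, axis=0):
    """Recursively split a binary page image (ink == 1) at whitespace runs
    in its projection profile, alternating horizontal and vertical cuts.
    Returns a flat list of leaf blocks (sub-arrays)."""
    profile = ink.sum(axis=1 - axis)       # the 1-D problem, as in the paper
    filled = np.flatnonzero(profile > 0)
    if filled.size == 0:
        return []
    cut = lambda a, b: ink[a:b] if axis == 0 else ink[:, a:b]
    gaps = np.flatnonzero(np.diff(filled) > min_gap)
    if gaps.size == 0:                     # no whitespace gap: leaf block
        return [cut(filled[0], filled[-1] + 1)]
    blocks, start = [], filled[0]
    for g in gaps:
        blocks += xy_cut(cut(start, filled[g] + 1), min_gap, 1 - axis)
        start = filled[g + 1]
    blocks += xy_cut(cut(start, filled[-1] + 1), min_gap, 1 - axis)
    return blocks
```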

Journal ArticleDOI
TL;DR: In this article, a high-resolution map of the velocity field of the central portion of Ice Stream E in West Antarctica, generated by a displacement-measuring cross-correlation technique, is presented; the cross-correlation software is found to be a significant improvement over previous manually based photogrammetric methods for velocity measurement, and far more cost-effective than in situ methods in remote polar areas.

Book
25 Sep 1992
TL;DR: This book is designed to be of interest to optical, electrical and electronics, and electro-optic engineers, including image processing, signal processing, machine vision, and computer vision engineers; applied mathematicians; image analysts and scientists; and graduate-level students in image processing and mathematical morphology courses.
Abstract: Presents the statistical analysis of morphological filters and their automatic optimal design, the development of morphological features for image signatures, and the design of efficient morphological algorithms. Extends the morphological paradigm to include other branches of science and mathematics. This book is designed to be of interest to optical, electrical and electronics, and electro-optic engineers, including image processing, signal processing, machine vision, and computer vision engineers; applied mathematicians; image analysts and scientists; and graduate-level students in image processing and mathematical morphology courses.
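
The primitive operations beneath all of this are erosion and dilation by a structuring element, with opening (erosion then dilation) as the basic morphological filter. A minimal binary version (our names; real implementations vectorize this):

```python
import numpy as np

def erode(image, se):
    """Binary erosion: 1 where the structuring element `se`, centered
    on the pixel, fits entirely inside the foreground."""
    H, W = se.shape
    padded = np.pad(image, ((H // 2, H // 2), (W // 2, W // 2)), mode="constant")
    out = np.zeros_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + H, j:j + W]
            out[i, j] = np.all(window[se == 1] == 1)
    return out

def dilate(image, se):
    """Binary dilation, via duality with erosion."""
    return 1 - erode(1 - image, se[::-1, ::-1])

def opening(image, se):
    """Erosion followed by dilation: removes features smaller than `se`."""
    return dilate(erode(image, se), se)
```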

Book
03 Jan 1992
TL;DR: In this article, a collection of essays explores neural network applications in signal and image processing, function estimation, robotics and control, associative memories, and electrical and optical networks.
Abstract: This collection of essays explores neural network applications in signal and image processing, function estimation, robotics and control, associative memories, and electrical and optical networks. Intended as a companion to "Neural Networks and Fuzzy Systems", this reference is designed to be of use to scientists, engineers, and others working in the neural network field.

Proceedings ArticleDOI
27 Aug 1992
TL;DR: An algorithm for determining whether the goal of image fidelity is met as a function of display parameters and viewing conditions is described, intended for the design and analysis of image processing algorithms, imaging systems, and imaging media.
Abstract: Image fidelity is the subset of overall image quality that specifically addresses the visual equivalence of two images. This paper describes an algorithm for determining whether the goal of image fidelity is met as a function of display parameters and viewing conditions. Using a digital image processing approach, this algorithm is intended for the design and analysis of image processing algorithms, imaging systems, and imaging media. The visual model, which is the central component of the algorithm, comprises three parts: an amplitude nonlinearity, a contrast sensitivity function, and a hierarchy of detection mechanisms.
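
A skeleton of that three-stage structure is sketched below with deliberately toy choices at each stage; the paper's actual nonlinearity, contrast sensitivity function, and bank of detection mechanisms are all more elaborate than this, so treat it as an illustration of the architecture only:

```python
import numpy as np

def fidelity_difference_map(img_a, img_b):
    """Three-stage visual-difference skeleton: (1) compressive amplitude
    nonlinearity, (2) CSF weighting in the frequency domain, (3) a single
    crude threshold standing in for a hierarchy of detection mechanisms.
    Inputs are nonnegative grayscale arrays of the same shape."""
    def visual_response(img):
        resp = np.power(img / (img.max() + 1e-12), 1.0 / 3.0)  # toy nonlinearity
        fy = np.fft.fftfreq(img.shape[0])[:, None]
        fx = np.fft.fftfreq(img.shape[1])[None, :]
        f = np.hypot(fx, fy)
        csf = f * np.exp(-10.0 * f)                            # toy band-pass CSF
        return np.real(np.fft.ifft2(np.fft.fft2(resp) * csf))
    diff = np.abs(visual_response(img_a) - visual_response(img_b))
    return diff > diff.mean() + 2.0 * diff.std()               # toy detection
```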

Journal ArticleDOI
TL;DR: Using mammogram images digitized at high resolution (less than 0.1 mm pixel size), it is shown that the visibility of microcalcification clusters and anatomic details is considerably improved in the processed images.
Abstract: Diagnostic features in mammograms vary widely in size and shape. Classical image enhancement techniques cannot adapt to the varying characteristics of such features. An adaptive method for enhancing the contrast of mammographic features of varying size and shape is presented. The method uses each pixel in the image as a seed to grow a region. The extent and shape of the region adapt to local image gray-level variations, corresponding to an image feature. The contrast of each region is calculated with respect to its individual background. Contrast is then enhanced by applying an empirical transformation based on each region's seed pixel value, its contrast, and its background. A quantitative measure of image contrast improvement is also defined, based on a histogram of region contrast, and used for comparison of results. Using mammogram images digitized at high resolution (less than 0.1 mm pixel size), it is shown that the visibility of microcalcification clusters and anatomic details is considerably improved in the processed images.
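
The contrast measure at the heart of the method can be made concrete for a single region. The sketch below takes the region and background masks as given (the paper grows them adaptively from each seed pixel) and uses a fixed gain in place of the paper's empirical transformation:

```python
import numpy as np

def enhance_region(image, region_mask, background_mask, gain=1.5):
    """Boost one region's contrast C = (f - b) / (f + b), where f and b
    are the mean region and background intensities."""
    out = image.astype(float).copy()
    f = out[region_mask].mean()
    b = out[background_mask].mean()
    c = (f - b) / (f + b)                    # contrast before enhancement
    c_new = np.clip(gain * c, -0.99, 0.99)   # keep the inverse map finite
    f_new = b * (1 + c_new) / (1 - c_new)    # region mean achieving c_new
    out[region_mask] *= f_new / f
    return out
```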

Journal ArticleDOI
TL;DR: CRISP runs on a standard personal computer (PC), is considerably faster than previous systems for crystallographic image processing (CIP), and is designed with a strong emphasis on user friendliness.

Journal ArticleDOI
TL;DR: A new method is proposed to estimate fractal dimension in a two-dimensional (2D) image which can readily be extended to a 3D image as well.

Book
01 Jan 1992
TL;DR: Data preprocessing for pictorial pattern recognition: preprocessing in the spatial domain; pictorial data preprocessing and shape analysis; transforms and image processing in the transform domain; wavelets and wavelet transforms.
Abstract: Pattern recognition: supervised and unsupervised learning in pattern recognition; nonparametric decision-theoretic classification; nonparametric (distribution-free) training of discriminant functions; statistical discriminant functions; clustering analysis and unsupervised learning; dimensionality reduction and feature selection. Neural networks for pattern recognition: multilayer perceptron; radial basis function networks; Hamming net and Kohonen self-organizing feature map; the Hopfield model. Data preprocessing for pictorial pattern recognition: preprocessing in the spatial domain; pictorial data preprocessing and shape analysis; transforms and image processing in the transform domain; wavelets and wavelet transforms. Applications: exemplary applications. Practical concerns of image processing and pattern recognition: computer system architectures for image processing and pattern recognition. Appendices: digital images; image model and discrete mathematics; digital image fundamentals; matrix manipulation; eigenvectors and eigenvalues of an operator; notation.

Journal ArticleDOI
TL;DR: In this paper, a method for estimating the 3D shape of objects and the motion of the camera from a stream of images is proposed, based on the Singular Value Decomposition.
Abstract: We propose a method for estimating the three-dimensional shape of objects and the motion of the camera from a stream of images. The goal is to give a robot the ability to localize itself with respect to the environment, draw a map of its own surroundings, and perceive the shape of objects in order to recognize or grasp them. Solutions proposed in the past were so sensitive to noise as to be of little use in practical applications. This sensitivity is closely related to the viewer-centered representation of scene geometry known as a depth map, and to the use of stereo triangulation to infer depth from the images. In fact, when objects are more than a few focal lengths away from the camera, parallax effects become subtle, and even a small amount of noise in the images produces large errors in the final shape and motion results. In our formulation, we represent shape in object-centered coordinates, and model image formation by orthographic, rather than perspective projection. In this way, depth, the distance between viewer and scene, plays no role, and the problem's sensitivity to noise is critically reduced. We collect the image coordinates of P feature points tracked through F frames into a 2F x P measurement matrix. If these coordinates are measured with respect to their centroid, we show that the measurement matrix can be written as the product of two matrices that represent the camera rotation and the positions of the feature points in space. The bilinear nature of this model, and its matrix formulation, lead to a factorization method for the computation of shape and motion, based on the Singular Value Decomposition. Previous solutions assumed motion to be smooth, in one form or another, in an attempt to constrain the solution and achieve reliable convergence. The factorization method, on the other hand, makes no assumption about the camera motion, and can deal with the large jumps from frame to frame found, for instance, in sequences taken with a hand-held camera. To make the factorization method into a working system, we solve several corollary problems: how to select image features, how to track them from frame to frame, how to deal with occlusions, and how to cope with the noise and artifacts that corrupt images recorded with ordinary equipment. We test the entire system with a series of experiments on real images taken both in the lab, for an accurate performance evaluation, and outdoors, to demonstrate the applicability of the method in real-life situations.
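
The heart of the factorization method fits in a few lines: register the tracked coordinates to their per-frame centroid, take the SVD of the 2F x P measurement matrix, and keep its rank-3 part. The sketch below stops there; the full method additionally enforces metric (orthonormality) constraints on the motion matrix, which we omit:

```python
import numpy as np

def factorize(W):
    """Factorization method: W is the 2F x P measurement matrix of image
    coordinates of P features tracked through F frames. Returns a 2F x 3
    motion matrix M and a 3 x P shape matrix S with W_centered ~= M @ S."""
    W = W - W.mean(axis=1, keepdims=True)   # registration to the centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Under orthographic projection the centered W has rank 3.
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S
```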

Journal ArticleDOI
TL;DR: It is shown how the wavelet transform directly suggests a modeling paradigm for multiresolution stochastic modeling and related notions of multiscale stationarity in which scale plays the role of a time-like variable.
Abstract: An overview is provided of the several components of a research effort aimed at the development of a theory of multiresolution stochastic modeling and associated techniques for optimal multiscale statistical signal and image processing. A natural framework for developing such a theory is the study of stochastic processes indexed by nodes on lattices or trees in which different depths in the tree or lattice correspond to different spatial scales in representing a signal or image. In particular, it is shown how the wavelet transform directly suggests such a modeling paradigm. This perspective then leads directly to the investigation of several classes of dynamic models and related notions of multiscale stationarity in which scale plays the role of a time-like variable. The investigation of models on homogeneous trees is emphasized. The framework examined here allows for consideration, in a very natural way, of the fusion of data from sensors with differing resolutions. Also, thanks to the fact that wavelet transforms do an excellent job of 'compressing' large classes of covariance kernels, it is seen that these modeling paradigms appear to have promise in a far broader context than one might expect.

Journal ArticleDOI
TL;DR: Filtering methods are described to minimize the ghosting artifact that is typical of echo planar imaging; results from computer simulation and experiments are presented.
Abstract: Echo planar imaging is characterized by scanning the 2D k-space after a single excitation. Different sampling patterns have been proposed. A technically feasible method uses a sinusoidal readout gradient, resulting in measured data that do not sample k-space in an equidistant manner. In order to employ a conventional 2D-FFT image reconstruction, the data have to be converted to a Cartesian grid. This can be done either by interpolation or, alternatively, by a generalized transformation. Filtering methods are described to minimize the ghosting artifact that is typical of echo planar imaging. Results from both computer simulation and experiments are presented. Experimental images were obtained using a 2-T whole-body research system.
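
Under a sinusoidal readout gradient the k-space positions along the readout direction go as the sine of time, so samples taken at uniform time intervals are non-equidistant in k. The interpolation route mentioned above amounts to regridding each echo onto uniform k before the 2D-FFT; a per-line sketch (timing model and names are ours):

```python
import numpy as np

def regrid_sinusoidal_line(samples, k_max=np.pi):
    """Resample one echo, acquired under a sinusoidal readout gradient,
    onto an equidistant k-space grid by linear interpolation."""
    n = samples.size
    t = np.linspace(-np.pi / 2, np.pi / 2, n)   # uniform sample times
    k_actual = k_max * np.sin(t)                # nonuniform k positions
    k_uniform = np.linspace(-k_max, k_max, n)   # Cartesian target grid
    real = np.interp(k_uniform, k_actual, samples.real)
    imag = np.interp(k_uniform, k_actual, samples.imag)
    return real + 1j * imag

# After every line is regridded, a conventional 2D-FFT reconstructs the image.
```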