
Showing papers on "Image processing published in 1985"


Journal ArticleDOI
TL;DR: Two border following algorithms are proposed for the topological analysis of digitized binary images: the first determines the surroundness relations among the borders of a binary image, and the second follows only the outermost borders.
Abstract: Two border following algorithms are proposed for the topological analysis of digitized binary images. The first one determines the surroundness relations among the borders of a binary image. Since the outer borders and the hole borders have a one-to-one correspondence to the connected components of 1-pixels and to the holes, respectively, the proposed algorithm yields a representation of a binary image, from which one can extract some sort of features without reconstructing the image. The second algorithm, which is a modified version of the first, follows only the outermost borders (i.e., the outer borders which are not surrounded by holes). These algorithms can be effectively used in component counting, shrinking, and topological structural analysis of binary images, when a sequential digital computer is used.
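
The hierarchy these algorithms produce can be inspected directly in OpenCV, whose findContours function is based on this border-following method. A minimal sketch (the test image is illustrative):

```python
import numpy as np
import cv2  # OpenCV's findContours implements this border-following method

# Binary image: a filled square containing a square hole.
img = np.zeros((64, 64), np.uint8)
cv2.rectangle(img, (8, 8), (55, 55), 255, -1)   # connected component of 1-pixels
cv2.rectangle(img, (24, 24), (39, 39), 0, -1)   # hole inside it

# RETR_TREE retrieves all borders plus their surroundness hierarchy (the first
# algorithm); RETR_EXTERNAL would follow only the outermost borders (the second).
contours, hierarchy = cv2.findContours(img, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for i, (nxt, prv, child, parent) in enumerate(hierarchy[0]):
    # For this image, any parented contour is the hole border.
    kind = "outer border" if parent == -1 else f"hole border (inside {parent})"
    print(f"contour {i}: {len(contours[i])} points, {kind}")
```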

2,303 citations


Journal ArticleDOI
TL;DR: This article provides a general overview of VLSI array processors and a unified treatment from algorithm, architecture, and application perspectives, covering a broad range of application domains including digital filtering, spectrum estimation, adaptive array processing, image/vision processing, and seismic and tomographic signal processing.
Abstract: High speed signal processing depends critically on parallel processor technology. In most applications, general-purpose parallel computers cannot offer satisfactory real-time processing speed due to severe system overhead. Therefore, for real-time digital signal processing (DSP) systems, special-purpose array processors have become the only appealing alternative. In designing or using such array processors, most signal processing algorithms share the critical attributes of regularity, recursiveness, and local communication. These properties are effectively exploited in innovative systolic and wavefront array processors. These arrays maximize the strength of very large scale integration (VLSI) in terms of intensive and pipelined computing, and yet circumvent its main limitation on communication. The application domain of such array processors covers a very broad range, including digital filtering, spectrum estimation, adaptive array processing, image/vision processing, and seismic and tomographic signal processing. This article provides a general overview of VLSI array processors and a unified treatment from algorithm, architecture, and application perspectives.
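
A toy illustration of the regularity and purely local communication these arrays exploit: a cycle-by-cycle simulation of a systolic matrix-multiply array (the skewed input schedule is the standard textbook one, not a design from the article):

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an n x n systolic array computing C = A @ B.

    Cell (i, j) accumulates one product per cycle; operand a[i, k] arrives
    from the left and b[k, j] from above at cycle t = i + j + k, so every
    cell communicates only with its nearest neighbours.
    """
    n = A.shape[0]
    C = np.zeros((n, n))
    for t in range(3 * n - 2):          # total pipeline latency
        for i in range(n):
            for j in range(n):
                k = t - i - j           # which operand pair reaches (i, j) now
                if 0 <= k < n:
                    C[i, j] += A[i, k] * B[k, j]
    return C

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
assert np.allclose(systolic_matmul(A, B), A @ B)
```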

1,633 citations


Journal ArticleDOI
TL;DR: A model of how humans sense the velocity of moving images, using a set of spatial-frequency-tuned, direction-selective linear sensors, agrees qualitatively with human perception.
Abstract: We propose a model of how humans sense the velocity of moving images. The model exploits constraints provided by human psychophysics, notably that motion-sensing elements appear tuned for two-dimensional spatial frequency, and by the frequency spectrum of a moving image, namely, that its support lies in the plane in which the temporal frequency equals the dot product of the spatial frequency and the image velocity. The first stage of the model is a set of spatial-frequency-tuned, direction-selective linear sensors. The temporal frequency of the response of each sensor is shown to encode the component of the image velocity in the sensor direction. At the second stage, these components are resolved in order to measure the velocity of image motion at each of a number of spatial locations and spatial frequencies. The model has been applied to several illustrative examples, including apparent motion, coherent gratings, and natural image sequences. The model agrees qualitatively with human perception.
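
The spectral constraint the model builds on is easy to verify numerically: the spectrum of a translating pattern is supported where the temporal frequency equals the dot product of spatial frequency and velocity (up to a sign fixed by the Fourier convention). A small sketch with a drifting sinusoid:

```python
import numpy as np

# Drifting sinusoid I(x, t) = cos(2*pi*fx*(x - v*t)/N): fx cycles across the
# frame, moving at v pixels per frame.
fx, v, N = 5, 3, 64
x, t = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
I = np.cos(2 * np.pi * fx * (x - v * t) / N)

# The 2-D spectrum peaks where ft = -fx_hat * v (sign from the FFT convention),
# i.e. temporal frequency = spatial frequency . velocity.
S = np.abs(np.fft.fft2(I))
ix, it = np.unravel_index(np.argmax(S), S.shape)
fx_hat = np.fft.fftfreq(N)[ix] * N   # cycles per frame width
ft_hat = np.fft.fftfreq(N)[it] * N   # cycles per sequence
print(fx_hat, ft_hat, -fx_hat * v)   # ft_hat equals -fx_hat * v
```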

1,227 citations


Journal ArticleDOI
TL;DR: This report describes how to extract true intensity measurements in the presence of noise in magnetic resonance imaging.
Abstract: Power spectrum or magnitude images are frequently presented in magnetic resonance imaging. In such images, measurement of signal intensity at low signal levels is compounded with the noise. This report describes how to extract true intensity measurements in the presence of noise.
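
A commonly used first-order correction of this kind relies on the fact that, for a magnitude pixel formed from a signal A plus complex Gaussian noise of per-channel standard deviation sigma, the second moment obeys E[M^2] = A^2 + 2*sigma^2. A sketch under that assumption (not necessarily the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma, n = 20.0, 10.0, 200_000   # true intensity, noise SD, sample count

# Magnitude-image pixels: M = |A + complex Gaussian noise| (Rician distributed).
M = np.abs(A + rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n))

# The naive mean overestimates A at low SNR; correcting the second moment
# by the known noise power recovers the true intensity.
A_naive = M.mean()
A_corrected = np.sqrt(max((M ** 2).mean() - 2 * sigma ** 2, 0.0))
print(f"naive {A_naive:.2f}  corrected {A_corrected:.2f}  true {A}")
```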

1,057 citations


Proceedings ArticleDOI
05 Apr 1985
TL;DR: Each of the major classes of image segmentation techniques is defined and several specific examples of each class of algorithm are described, illustrated with examples of segmentations performed on real images.
Abstract: There are now a wide variety of image segmentation techniques, some considered general purpose and some designed for specific classes of images. These techniques can be classified as: measurement space guided spatial clustering, single linkage region growing schemes, hybrid linkage region growing schemes, centroid linkage region growing schemes, spatial clustering schemes, and split-and-merge schemes. In this paper, we define each of the major classes of image segmentation techniques and describe several specific examples of each class of algorithm. We illustrate some of the techniques with examples of segmentations performed on real images.
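
As a concrete instance of one of these classes, a minimal single-linkage region-growing sketch: a pixel joins a region when it is 4-adjacent to it and its gray level is within a threshold of its neighbour's (threshold and test image are illustrative):

```python
import numpy as np
from collections import deque

def single_linkage_segments(img, thresh):
    """Label regions grown by single linkage: neighbouring pixels are merged
    when their gray levels differ by at most `thresh`."""
    labels = np.zeros(img.shape, int)
    next_label = 0
    for seed in np.ndindex(*img.shape):
        if labels[seed]:
            continue
        next_label += 1
        labels[seed] = next_label
        q = deque([seed])
        while q:                            # breadth-first region growth
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and not labels[ny, nx]
                        and abs(int(img[ny, nx]) - int(img[y, x])) <= thresh):
                    labels[ny, nx] = next_label
                    q.append((ny, nx))
    return labels

img = np.array([[10, 12, 50, 52],
                [11, 13, 51, 53],
                [90, 91, 92, 93]], dtype=np.uint8)
print(single_linkage_segments(img, thresh=5))
```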

1,025 citations


Journal ArticleDOI
TL;DR: This paper presents a stereo matching algorithm using the dynamic programming technique that uses edge-delimited intervals as elements to be matched, and employs two searches: an inter-scanline search for possible correspondences of connected edges in the right and left images, and an intra-scanline search for correspondences of edge-delimited intervals on each scanline pair.
Abstract: This paper presents a stereo matching algorithm using the dynamic programming technique. The stereo matching problem, that is, obtaining a correspondence between right and left images, can be cast as a search problem. When a pair of stereo images is rectified, pairs of corresponding points can be searched for within the same scanlines. We call this search intra-scanline search. This intra-scanline search can be treated as the problem of finding a matching path on a two-dimensional (2D) search plane whose axes are the right and left scanlines. Vertically connected edges in the images provide consistency constraints across the 2D search planes. Inter-scanline search in a three-dimensional (3D) search space, which is a stack of the 2D search planes, is needed to utilize this constraint. Our stereo matching algorithm uses edge-delimited intervals as elements to be matched, and employs the above-mentioned two searches: one is inter-scanline search for possible correspondences of connected edges in right and left images and the other is intra-scanline search for correspondences of edge-delimited intervals on each scanline pair. Dynamic programming is used for both searches, which proceed simultaneously: the former supplies the consistency constraint to the latter while the latter supplies the matching score to the former. An interval-based similarity metric is used to compute the score. The algorithm has been tested with different types of images including urban aerial images, synthesized images, and block scenes, and its computational requirements are discussed.
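
A stripped-down version of the intra-scanline search reads as ordinary sequence alignment: here individual pixels (rather than the paper's edge-delimited intervals) are matched by dynamic programming with an occlusion penalty, and the inter-scanline consistency search is omitted. The occlusion cost is illustrative:

```python
import numpy as np

def dp_scanline_disparity(left, right, occ_cost=8.0):
    """Match one rectified scanline pair by dynamic programming.
    Returns a disparity per left pixel, or -1 where the pixel is occluded."""
    n, m = len(left), len(right)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, :] = occ_cost * np.arange(m + 1)
    cost[:, 0] = occ_cost * np.arange(n + 1)
    move = np.zeros((n + 1, m + 1), np.uint8)   # 0=match, 1=skip left, 2=skip right
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = (cost[i - 1, j - 1] + abs(left[i - 1] - right[j - 1]),
                 cost[i - 1, j] + occ_cost,
                 cost[i, j - 1] + occ_cost)
            move[i, j] = int(np.argmin(c))
            cost[i, j] = min(c)
    disp = np.full(n, -1, int)                  # backtrack the optimal path
    i, j = n, m
    while i > 0 and j > 0:
        if move[i, j] == 0:
            disp[i - 1] = (i - 1) - (j - 1)
            i, j = i - 1, j - 1
        elif move[i, j] == 1:
            i -= 1
        else:
            j -= 1
    return disp

left = np.array([10, 10, 80, 80, 10, 10, 10], float)
right = np.array([10, 80, 80, 10, 10, 10, 10], float)
print(dp_scanline_disparity(left, right))   # bright pixels match at disparity 1
```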

913 citations


Book
01 Sep 1985
Abstract: Fast algorithms for digital signal processing.
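
Representative of the book's subject matter is the radix-2 decimation-in-time FFT; a textbook recursive sketch:

```python
import numpy as np

def fft_radix2(x):
    """Radix-2 decimation-in-time FFT; the length must be a power of two."""
    x = np.asarray(x, complex)
    n = len(x)
    if n == 1:
        return x
    even, odd = fft_radix2(x[0::2]), fft_radix2(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd  # twiddle factors
    return np.concatenate([even + tw, even - tw])

x = np.random.rand(16)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```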

797 citations


Journal ArticleDOI
01 Apr 1985
TL;DR: A new class of coding methods, called second generation, is capable of achieving compression ratios as high as 70:1; two groups can be formed in this class: methods using local operators and combining their output in a suitable way, and methods using contour-texture descriptions.
Abstract: The digital representation of an image requires a very large number of bits. The goal of image coding is to reduce this number as much as possible and reconstruct a faithful duplicate of the original picture. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio, starting at 1 with the first digital picture in the early 1960s, reached a saturation level around 10:1 a couple of years ago. This certainly does not mean that the upper bound given by the entropy of the source has also been reached. First, this entropy is not known and depends heavily on the model used for the source, i.e., the digital image. Second, information theory does not take into account what the human eye sees and how it sees. Recent progress in the study of the brain mechanism of vision has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 70:1. Image quality, of course, remains an important problem to be investigated. This class of methods, which we call second generation, is the subject of this paper. Two groups can be formed in this class: methods using local operators and combining their output in a suitable way, and methods using contour-texture descriptions. Four methods, two in each class, are described in detail. They are applied to the same set of original pictures to allow a fair comparison of the quality of the decoded pictures. If more effort is devoted to this subject, a compression ratio of 100:1 is within reach.

753 citations


Journal ArticleDOI
TL;DR: A version of the Marr-Poggio-Grimson algorithm that embodies modifications to the model is presented, and its performance on a series of natural images is illustrated.
Abstract: Computational models of the human stereo system can provide insight into general information processing constraints that apply to any stereo system, either artificial or biological. In 1977 Marr and Poggio proposed one such computational model, which was characterized as matching certain feature points in difference-of-Gaussian filtered images and using the information obtained by matching coarser resolution representations to restrict the search space for matching finer resolution representations. An implementation of the algorithm and its testing on a range of images was reported in 1980. Since then a number of psychophysical experiments have suggested possible refinements to the model and modifications to the algorithm. As well, recent computational experiments applying the algorithm to a variety of natural images, especially aerial photographs, have led to a number of modifications. In this paper, we present a version of the Marr-Poggio-Grimson algorithm that embodies these modifications, and we illustrate its performance on a series of natural images.
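
The coarse-to-fine control strategy itself can be sketched with plain window matching standing in for the algorithm's filtered feature points: disparities found on a blurred, subsampled level restrict the search on the full-resolution level. Window size, pyramid depth, and the test signal are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def match_row(left, right, prior, radius, w=2):
    """Best disparity per pixel, searched around a prior estimate by
    minimizing sum-of-absolute-differences over a (2w+1)-pixel window."""
    n = len(left)
    disp = np.zeros(n, int)
    for x in range(w, n - w):
        best = np.inf
        for d in range(prior[x] - radius, prior[x] + radius + 1):
            if w <= x - d < n - w:
                err = np.abs(left[x - w:x + w + 1] - right[x - d - w:x - d + w + 1]).sum()
                if err < best:
                    best, disp[x] = err, d
    return disp

def coarse_to_fine(left_row, right_row):
    # Coarse level: blurred, 2x-subsampled copies searched widely from zero.
    lc = gaussian_filter1d(left_row, 2.0)[::2]
    rc = gaussian_filter1d(right_row, 2.0)[::2]
    coarse = match_row(lc, rc, np.zeros(len(lc), int), radius=4)
    # Fine level: the upsampled coarse result restricts a narrow search.
    prior = np.repeat(coarse * 2, 2)[:len(left_row)]
    return match_row(left_row, right_row, prior, radius=1)

sig = np.sin(np.arange(64) / 3.0) * 100
left, right = np.roll(sig, 3), sig           # left image shifted by 3 pixels
print(coarse_to_fine(left, right)[8:56])     # ~3 everywhere away from borders
```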

601 citations


Journal ArticleDOI
TL;DR: The concept of image can be applied to a political candidate, a product, or a country; it describes not individual traits or qualities, but the total impression an entity makes on the minds of others.
Abstract: The concept of “image” can be applied to a political candidate, a product, a country. It describes not individual traits or qualities, but the total impression an entity makes on the minds of others. It is a most powerful influence in the way people perceive things, and should be a crucial concept in shaping our marketing, advertising, and communications efforts. Thus, more attention must be paid to the overall impression, the “melody,” of an advertising or marketing campaign, rather than to its specific claims. An image is not anchored in just objective data and details. It is the configuration of the whole field of the object, the advertising, and, most important, the customer's disposition and the attitudinal screen through which he observes. A politician who suddenly starts wearing glasses can radically change his impression on others. Wearing dark glasses will do so even more. Yet he remains the same person. It is his aura, his image, that people have reacted to. By the same token, repackaging a product that has been on the market for decades can make it seem “young” again. The product hasn't changed, but its image has.

Journal ArticleDOI
TL;DR: This paper addresses the subproblem of identifying corresponding points in the two images by processing groups of collinear connected edge points called segments using the "minimum differential disparity" criterion, and produces a sparse array disparity map of the analyzed scene.
Abstract: Images are 2-dimensional projections of 3-dimensional scenes; therefore, depth recovery is a crucial problem in image understanding, with applications in passive navigation, cartography, surveillance, and industrial robotics. Stereo analysis provides a more direct quantitative depth evaluation than techniques such as shape from shading, and its being passive makes it more applicable than active range finding imagery by laser or radar. This paper addresses the subproblem of identifying corresponding points in the two images. The primitives we use are groups of collinear connected edge points called segments, and we base the correspondence on the "minimum differential disparity" criterion. The result of this processing is a sparse array disparity map of the analyzed scene. © 1985 Academic Press, Inc.

Journal ArticleDOI
TL;DR: In this article, the authors describe the organization of a rule-based system, SPAM, that uses map and domain-specific knowledge to interpret airport scenes, and the results of the system's analysis are characterized by the labeling of individual regions in the image and the collection of these regions into consistent interpretations of the major components of an airport model.
Abstract: In this paper, we describe the organization of a rule-based system, SPAM, that uses map and domain-specific knowledge to interpret airport scenes. This research investigates the use of a rule-based system for the control of image processing and interpretation of results with respect to a world model, as well as the representation of the world model within an image/map database. We present results on the interpretation of a high-resolution airport scene where the image segmentation has been performed by a human, and by a region-based image segmentation program. The results of the system's analysis are characterized by the labeling of individual regions in the image and the collection of these regions into consistent interpretations of the major components of an airport model. These interpretations are ranked on the basis of their overall spatial and structural consistency. Some evaluations based on the results from three evolutionary versions of SPAM are presented.

Journal ArticleDOI
TL;DR: In this paper, a transfer theory of 3D image formation is derived that relates the 3D object (complex index of refraction) to the 3-D image intensity distribution in first-order Born approximation.
Abstract: In transmission microscopy, many objects are three dimensional, that is, they are thicker than the depth of focus of the imaging system. The three-dimensional (3-D) image-intensity distribution consists of a series of two-dimensional images (optical slices) with different parts of the object in focus. First, we deal with the fundamental limitations of 3-D imaging with classical optical systems. Second, a transfer theory of 3-D image formation is derived that relates the 3-D object (complex index of refraction) to the 3-D image intensity distribution in first-order Born approximation. This theory applies to weak objects that do not scatter much light. Since, in a microscope, the illumination is neither coherent nor completely incoherent, a theory for partially coherent light is needed, but in this case the object phase distribution and the absorptive parts of the object play different roles. Finally, some experimental results are presented.

Journal ArticleDOI
TL;DR: A general-purpose performance measurement scheme for image segmentation algorithms is introduced; performance parameters that function in real time distinguish this method from previous approaches, which depended on a priori knowledge of the correct segmentation.
Abstract: This paper introduces a general purpose performance measurement scheme for image segmentation algorithms. Performance parameters that function in real-time distinguish this method from previous approaches that depended on an a priori knowledge of the correct segmentation. A low level, context independent definition of segmentation is used to obtain a set of optimization criteria for evaluating performance. Uniformity within each region and contrast between adjacent regions serve as parameters for region analysis. Contrast across lines and connectivity between them represent measures for line analysis. Texture is depicted by the introduction of focus of attention areas as groups of regions and lines. The performance parameters are then measured separately for each area. The usefulness of this approach lies in the ability to adjust the strategy of a system according to the varying characteristics of different areas. This feedback path provides the means for more efficient and error-free processing. Results from areas with dissimilar properties show a diversity in the measurements that is utilized for dynamic strategy setting.
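
One plausible formulation of the two region parameters named here, uniformity within a region and contrast between adjacent regions, might look as follows; the normalisations are illustrative, not necessarily the paper's:

```python
import numpy as np

def region_uniformity(img, labels, r):
    """Uniformity of region r: 1 minus its gray-level variance, normalised
    by the squared dynamic range of the image."""
    vals = img[labels == r]
    return 1.0 - vals.var() / (np.ptp(img) + 1e-9) ** 2

def region_contrast(img, labels, r1, r2):
    """Contrast between two regions: normalised difference of mean levels."""
    m1, m2 = img[labels == r1].mean(), img[labels == r2].mean()
    return abs(m1 - m2) / (m1 + m2 + 1e-9)

img = np.array([[10, 11, 200, 201],
                [12, 10, 199, 202]], float)
labels = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2]])
print(region_uniformity(img, labels, 1), region_contrast(img, labels, 1, 2))
```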

Journal ArticleDOI
01 Apr 1985
TL;DR: An overview of the theory of sampling and reconstruction of multidimensional signals is presented, including the role of the camera and display apertures and the human visual system, along with a class of nonlinear interpolation algorithms which adapt to the motion in the scene.
Abstract: Sampling is a fundamental operation in all image communication systems. A time-varying image, which is a function of three independent variables, must be sampled in at least two dimensions for transmission over a one-dimensional analog communication channel, and in three dimensions for digital processing and transmission. At the receiver, the sampled image must be interpolated to reconstruct a continuous function of space and time. In imagery destined for human viewing, the visual system forms an integral part of the reconstruction process. This paper presents an overview of the theory of sampling and reconstruction of multidimensional signals. The concept of sampling structures based on lattices is introduced. The important problem of conversion between different sampling structures is also treated. This theory is then applied to the sampling of time-varying imagery, including the role of the camera and display apertures, and the human visual system. Finally, a class of nonlinear interpolation algorithms which adapt to the motion in the scene is presented.
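
The motion-adaptive idea in the final sentence, in its very simplest form: where two frames agree the interpolated frame is their temporal average, and where motion is detected the sketch falls back to repeating the nearer frame, avoiding the double image a plain temporal average would create. A spatial interpolator could be substituted; threshold and frames are illustrative:

```python
import numpy as np

def motion_adaptive_interp(prev_frame, next_frame, thresh=10.0):
    """Interpolate the frame midway between two frames, switching between
    temporal averaging (static areas) and frame repetition (moving areas)."""
    prev_frame = prev_frame.astype(float)
    next_frame = next_frame.astype(float)
    moving = np.abs(next_frame - prev_frame) >= thresh
    temporal = 0.5 * (prev_frame + next_frame)
    return np.where(moving, prev_frame, temporal)

prev = np.zeros((4, 8)); nxt = np.zeros((4, 8))
prev[:, 2] = 100; nxt[:, 5] = 100           # a bright bar moving to the right
print(motion_adaptive_interp(prev, nxt))    # no half-intensity double bar
```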

Journal ArticleDOI
N. Nill
TL;DR: A new analytical solution, taking the form of a straightforward multiplicative weighting function, is developed which is readily applicable to image compression and quality assessment in conjunction with a visual model and the image cosine transform.
Abstract: Utilizing a cosine transform in image compression has several recognized performance benefits, resulting in the ability to attain large compression ratios with small quality loss. Also, incorporation of a model of the human visual system into an image compression or quality assessment technique intuitively should (and has often proven to) improve performance. Clearly, then, it should prove highly beneficial to combine the image cosine transform with a visual model. In the past, combining these two has been hindered by a fundamental problem resulting from the scene alteration that is necessary for proper cosine transform utilization. A new analytical solution to this problem, taking the form of a straightforward multiplicative weighting function, is developed in this paper. This solution is readily applicable to image compression and quality assessment in conjunction with a visual model and the image cosine transform. In the development, relevant aspects of a human visual system model are discussed, and a refined version of the mean square error quality assessment measure is given which should increase this measure's utility.
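
The overall shape of such a scheme, a multiplicative weighting applied to cosine-transform coefficients, can be sketched as below; the band-pass weight is illustrative and is not the specific function derived in the paper:

```python
import numpy as np
from scipy.fft import dctn, idctn

def visual_weight(n, peak=0.2):
    """Illustrative band-pass weight over DCT radial frequency, loosely shaped
    like a contrast-sensitivity function (not the paper's derived function)."""
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    r = np.hypot(u, v) / (n * np.sqrt(2))        # radial frequency in [0, 1]
    return (0.2 + r / peak) * np.exp(-r / peak)

n = 8
block = np.random.rand(n, n) * 255
w = visual_weight(n)
coeffs = dctn(block, norm="ortho") * w   # visually weighted DCT coefficients
recon = idctn(coeffs / w, norm="ortho")  # the weighting is exactly invertible
assert np.allclose(recon, block)
```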

Journal ArticleDOI
TL;DR: It is demonstrated that movement influences MR images locally through blurring, and also generates ghost artifacts along the phase-encoding directions of the Fourier transform imaging technique.
Abstract: Artifacts due to periodic motion during the acquisition of magnetic resonance (MR) images have been studied. A mechanical device was constructed to oscillate a small sample along any line within a 0.15-T Technicare imager. Two- and three-dimensional images were obtained using various frequencies and amplitudes of oscillation. Computer simulations of these experiments yielded images which agreed with the experiments. We demonstrated that movement influences MR images locally through blurring, and also generates ghost artifacts along the phase-encoding directions of the Fourier transform imaging technique.
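
The ghost mechanism can be reproduced in a few lines: because phase-encode lines are acquired sequentially, periodic motion amounts to a periodic modulation of k-space along that axis, which convolves the image with shifted replicas. A toy simulation (geometry and modulation depth are illustrative):

```python
import numpy as np

# Object: a bright rectangle; k-space rows play the role of phase-encode lines.
N = 64
obj = np.zeros((N, N)); obj[24:40, 24:40] = 1.0
k = np.fft.fft2(obj)

# Periodic motion during acquisition: each sequentially acquired line picks up
# a periodic amplitude modulation (8 cycles over the scan).
lines = np.arange(N)
k_moving = k * (1 + 0.3 * np.sin(2 * np.pi * 8 * lines / N))[:, None]

img = np.abs(np.fft.ifft2(k_moving))
# Ghost replicas appear displaced by +/-8 lines along the phase-encode axis.
print(np.round(img[:, 32], 2))
```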

Journal ArticleDOI
TL;DR: It is shown how Grenander's method of sieves can be used with the EM algorithm to remove the instability and thereby decrease the 'noise' artifact introduced into the images, with little or no increase in computational complexity.
Abstract: Images produced in emission tomography with the expectation-maximization (EM) algorithm have been observed to become more 'noisy' as the algorithm converges towards the maximum-likelihood estimate. We argue in this paper that there is an instability which is fundamental to maximum-likelihood estimation as it is usually applied and, therefore, is not a result of using the EM algorithm, which is but one numerical implementation for producing maximum-likelihood estimates. We show how Grenander's method of sieves can be used with the EM algorithm to remove the instability and thereby decrease the 'noise' artifact introduced into the images with little or no increase in computational complexity.
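
A toy 1-D version of the idea: plain MLEM iterations grow 'noisy' as they converge, while convolving the estimate with a Gaussian kernel each iteration, a kernel-sieve-style constraint, suppresses the roughness. The sketch reduces the method of sieves to a smoothing step and ignores border normalisation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)

# Toy 1-D emission problem: Gaussian blur plus Poisson counting noise.
n = 100
truth = np.zeros(n); truth[40:60] = 50.0
blur = lambda x: gaussian_filter1d(x, 3.0)
counts = rng.poisson(blur(truth)).astype(float)

def mlem(y, iters, sieve_sigma=0.0):
    lam = np.ones(n)
    for _ in range(iters):
        lam *= blur(y / np.maximum(blur(lam), 1e-9))   # EM multiplicative update
        if sieve_sigma > 0:                            # Grenander-style sieve
            lam = gaussian_filter1d(lam, sieve_sigma)
    return lam

plain = mlem(counts, 200)                    # near-ML estimate: rough
sieved = mlem(counts, 200, sieve_sigma=1.5)  # sieve-constrained: smoother
print(plain[40:60].std(), sieved[40:60].std())
```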

Journal ArticleDOI
TL;DR: A general analysis of the signal-to-noise ratio (SNR) of X-ray imaging with a broad spectrum is presented, and it is shown that the energy modulation of the signal and the degree to which it is matched by the energy response of the image receptor are significant determinants of the SNR for signal detection.
Abstract: A general analysis of the signal-to-noise ratio (SNR) of X-ray imaging with a broad spectrum is presented. The analysis indicates that the energy modulation of the signal, together with its degree of matching by the energy response of the image receptor, are significant determinants of the SNR for signal detection. This requires a generalisation of the interpretation of detective quantum efficiency (DQE), the transfer function appropriate for SNR, which will depend on the image detection or discrimination task. The generalised DQE is similar to the conventional DQE for the task of detecting radiation levels, but may differ substantially from it for the task of discriminating a lesion from its surround, particularly for signals of bone or iodine. The photon counter is shown to be inferior to the ideal detector for these tasks, but to be generally superior to the energy detecting scintillators used in conventional or digital radiography and computed tomography.

Journal ArticleDOI
TL;DR: In this article, a procedure for using digital image processing techniques to measure the spatial correlation functions of composite heterogeneous materials is presented, and methods for eliminating undesirable biases and warping in digitized photographs are discussed.
Abstract: A procedure for using digital image processing techniques to measure the spatial correlation functions of composite heterogeneous materials is presented. Methods for eliminating undesirable biases and warping in digitized photographs are discussed. Fourier transform methods and array processor techniques for calculating the spatial correlation functions are treated. By introducing a minimal set of lattice-commensurate triangles, a method of sorting and storing the values of three-point correlation functions in a compact one-dimensional array is developed. Examples are presented at each stage of the analysis using synthetic photographs of cross sections of a model random material (the penetrable sphere model) for which the analytical form of the spatial correlation functions is known. Although results depend somewhat on magnification and on relative volume fraction, it is found that photographs digitized with 512×512 pixels generally have sufficiently good statistics for most practical purposes. To illustrate the use of the correlation functions, bounds on conductivity for the penetrable sphere model are calculated with a general numerical scheme developed for treating the singular three-dimensional integrals which must be evaluated.
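
The Fourier-transform route to the two-point correlation function is compact enough to sketch: by the Wiener-Khinchin theorem, S2 at every lag is the inverse FFT of the squared spectrum magnitude. Periodic boundaries are assumed, and a synthetic penetrable-disk image stands in for a digitized micrograph:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic two-phase cross section: overlapping (penetrable) disks.
N = 256
y, x = np.mgrid[:N, :N]
img = np.zeros((N, N), bool)
for cy, cx in rng.integers(0, N, (40, 2)):
    img |= (x - cx) ** 2 + (y - cy) ** 2 < 10 ** 2

# Two-point correlation S2(r) = Prob(both ends of lag vector r lie in phase 1),
# for all lags at once via FFT autocorrelation (Wiener-Khinchin).
f = np.fft.fft2(img.astype(float))
S2 = np.fft.ifft2(np.abs(f) ** 2).real / img.size

phi = img.mean()                 # volume fraction = S2 at zero lag
print(phi, S2[0, 0], S2[0, 1])   # S2 decays from phi toward phi**2
```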

Journal ArticleDOI
TL;DR: Tests were performed on synthetic aperture radar images which show that the algorithm reduces speckle noise and compares favorably with a 3 × 3 median filter.
Abstract: An algorithm is described which reduces speckle noise in images. It is a nonlinear algorithm based on geometric concepts. Tests were performed on synthetic aperture radar images which show that it compares favorably with a 3 × 3 median filter.

Journal ArticleDOI
TL;DR: In this paper, the translational field lines at image locations corresponding to depth discontinuities in the scene are estimated from a simple estimate of the differential image motion, which facilitates closed-form solutions of camera motion parameters and environmental depth.
Abstract: The inference of three-dimensional camera motion parameters and the layout of a scene from image flows becomes particularly simple from a computational point of view if the scene contains depth variations. Under this condition, the differential image motion yields a simple estimate of the translational field lines at image locations corresponding to depth discontinuities in the scene. This in turn facilitates closed-form solutions of camera motion parameters and environmental depth. Our results may have relevance to human motion perception, which also seems to rely on depth variation in processing image motion.

Patent
03 Jun 1985
TL;DR: In this article, a real-time 3D display for medical imaging is presented, which includes a plurality of individual processing elements each having an image memory for storing a mini-image of a portion of the object as viewed from any given direction and a merge control means for generating a combined image of the objects including the depth thereof by selection on a pixel-by-pixel basis from each of the mini-images.
Abstract: A real-time three-dimensional display device particularly suited for medical imaging is disclosed. The device includes a plurality of individual processing elements each having an image memory for storing a mini-image of a portion of the object as viewed from any given direction and a merge control means for generating a combined image of the object including the depth thereof by selection on a pixel-by-pixel basis from each of the mini-images. In two different embodiments, priority codes are assigned to each of the processing elements reflecting the relative significance of a given pixel of the mini-image produced by a given processing element as compared to the pixels of mini-images produced by other processing elements. In one embodiment, the combined image is generated in accordance with the priority codes. In another embodiment, a Z buffer is used to provide for hidden surface removal on a pixel-by-pixel basis. Improved shadow, shading and gradient processors are provided to provide three-dimensional imaging as well as an improved scan conversion means for generating a coherent image from the combined images merged from all of the processing elements.
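
The Z-buffer embodiment reduces, per pixel, to keeping the value whose depth is nearest across all mini-images. A sketch with two processing elements (array layout is illustrative):

```python
import numpy as np

def z_merge(mini_images, z_buffers):
    """Combine mini-images pixel by pixel, keeping at each pixel the value
    with the smallest Z (nearest depth): hidden-surface removal by Z buffer."""
    imgs = np.stack(mini_images)          # shape: (elements, H, W)
    z = np.stack(z_buffers)
    nearest = np.argmin(z, axis=0)        # index of closest surface per pixel
    return np.take_along_axis(imgs, nearest[None], axis=0)[0]

H, W = 4, 4
img1, z1 = np.full((H, W), 100.0), np.full((H, W), 5.0)
img2, z2 = np.full((H, W), 200.0), np.full((H, W), 9.0)
z2[:, :2] = 1.0                           # left half of element 2 is closer
print(z_merge([img1, img2], [z1, z2]))
```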

Journal ArticleDOI
TL;DR: In this article, the authors found that images with equal pixel signal-to-noise ratio (SNRp) but different correlation properties give quite different observer-performance measures for a simple detection experiment.
Abstract: Pixel signal-to-noise ratio is one accepted measure of image quality for predicting observer performance in medical imaging. We have found, however, that images with equal pixel signal-to-noise ratio (SNRp) but different correlation properties give quite different observer-performance measures for a simple detection experiment. The SNR at the output of an ideal detector with the ability to prewhiten the noise is also a poor predictor of human performance for disk signals in high-pass noise. We have found constant observer efficiencies for humans relative to the performance of a nonprewhitening detector for this task.

Journal ArticleDOI
TL;DR: An approach is suggested by which the kinematic variables, the instantaneous motion and local structure of the object along the line of sight, may be extracted from evolving contours in an image sequence.
Abstract: This study concerns a new formulation and method of solution of the image flow problem. It is relevant to the maneuvering of a robotic system through an environment containing other moving objects or terrain. The two-dimensional image flow is generated by the relative rigid-body motion of a smooth, textured object along the line of sight to a monocular camera. By analyzing this evolving image sequence, we hope to extract the instantaneous motion (described by six degrees of freedom) and local structure (slopes and curvatures) of the object along the line of sight. The formulation relates a new local representation of an image flow to object motion and structure by twelve nonlinear algebraic equations. The representation parameters are given by the two components of image velocity, three components of rate of strain, spin, and six independent image gradients of rate of strain and spin, evaluated at the point on the line of sight. These kinematic variables are motivated by the deformation of a finite element...

Journal ArticleDOI
TL;DR: The performance of the motion compensated prediction developed here is investigated for various block sizes and is compared to other techniques.
Abstract: Interframe motion estimation of subblocks based on improved search techniques is developed. These techniques are based on minimizing the mean difference between the subblock in question in the present frame and the displaced subblock in the previous frame. The performance of the motion compensated prediction developed here is investigated for various block sizes and is compared to other techniques.
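
The baseline these improved techniques prune is exhaustive block matching: test every displacement in a search window and keep the one minimizing the mean absolute difference. A sketch (block and search sizes are illustrative; a vector points from a block in the present frame to its match in the previous frame):

```python
import numpy as np

def block_motion(prev, cur, block=8, search=4):
    """Full-search block matching by minimum mean absolute difference."""
    H, W = cur.shape
    vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            tgt = cur[by:by + block, bx:bx + block].astype(float)
            best = np.abs(tgt - prev[by:by + block, bx:bx + block]).mean()
            best_v = (0, 0)               # zero displacement as the baseline
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        mad = np.abs(tgt - prev[y:y + block, x:x + block]).mean()
                        if mad < best:
                            best, best_v = mad, (dy, dx)
            vectors[by, bx] = best_v
    return vectors

prev = np.zeros((24, 24)); prev[9:13, 10:14] = 255
cur = np.roll(prev, (2, 1), axis=(0, 1))  # object moves down 2, right 1
print(block_motion(prev, cur))            # block (8, 8) maps back to (-2, -1)
```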

Journal ArticleDOI
TL;DR: The scheme combines a nonparametric classifier, based on a clustering algorithm, with a quad-tree representation of the image; it is both simple to implement and performs well, giving satisfactory results at signal-to-noise ratios well below 1.