
Showing papers on "Image processing published in 1977"


Journal ArticleDOI
01 Mar 1977
TL;DR: A critical review is given of two kinds of Fourier descriptors (FD's) and a distance measure is proposed, in terms of FD's, that measures the difference between two boundary curves.
Abstract: Description or discrimination of boundary curves (shapes) is an important problem in picture processing and pattern recognition. Fourier descriptors (FD's) have interesting properties in this respect. First, a critical review is given of two kinds of FD's. Some properties of the FD's are given and a distance measure is proposed, in terms of FD's, that measures the difference between two boundary curves. It is shown how FD's can be used for obtaining skeletons of objects. Finally, experimental results are given in character recognition and machine parts recognition.
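As a concrete illustration of the complex-boundary formulation of FD's and the style of distance measure proposed, here is a minimal numpy sketch; the truncation length and normalization choices (dropping F[0] for translation, dividing by |F[1]| for scale, taking magnitudes to discard rotation and starting point) are common conventions, not necessarily the paper's exact definitions.

    import numpy as np

    def fourier_descriptors(boundary, k=16):
        # boundary: (N, 2) array of equally spaced (x, y) points on a closed curve, N > k + 1.
        z = boundary[:, 0] + 1j * boundary[:, 1]   # encode the curve as complex samples
        F = np.fft.fft(z)
        F = F[1:k + 1] / np.abs(F[1])              # drop DC (translation), scale-normalize
        return np.abs(F)                           # magnitudes: rotation/start-point invariant

    def fd_distance(b1, b2, k=16):
        # Euclidean distance between descriptor vectors of two boundary curves.
        return np.linalg.norm(fourier_descriptors(b1, k) - fourier_descriptors(b2, k))

Under these conventions, identical shapes at different positions, scales, and orientations map to nearly identical descriptor vectors, so fd_distance measures shape difference alone.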

1,023 citations


01 Jan 1977

933 citations


Proceedings Article
22 Aug 1977
TL;DR: The matching of image and map features is performed rapidly by a new technique, called "chamfer matching", that compares the shapes of two collections of shape fragments, at a cost proportional to linear dimension, rather than area.
Abstract: Parametric correspondence is a technique for matching images to a three dimensional symbolic reference map. An analytic camera model is used to predict the location and appearance of landmarks in the image, generating a projection for an assumed viewpoint. Correspondence is achieved by adjusting the parameters of the camera model until the appearances of the landmarks optimally match a symbolic description extracted from the image. The matching of image and map features is performed rapidly by a new technique, called "chamfer matching", that compares the shapes of two collections of shape fragments, at a cost proportional to linear dimension, rather than area. These two techniques permit the matching of spatially extensive features on the basis of shape, which reduces the risk of ambiguous matches and the dependence on viewing conditions inherent in conventional image based correlation matching.
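A minimal sketch of the chamfer-matching cost under one common formulation (the paper's chamfer distance approximation is replaced here by scipy's Euclidean distance transform as a stand-in): the distance image is computed once, after which scoring a candidate viewpoint touches only the fragment's boundary points, hence a cost proportional to linear dimension rather than area.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def chamfer_score(edge_map, fragment_points):
        # edge_map: boolean image, True where edges were detected.
        # fragment_points: (N, 2) integer (row, col) coordinates of the
        # projected map feature under the assumed camera parameters.
        dist_to_edge = distance_transform_edt(~edge_map)   # distance to nearest edge pixel
        return dist_to_edge[fragment_points[:, 0], fragment_points[:, 1]].mean()

Parameter adjustment then amounts to minimizing this score over the camera-model parameters.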

896 citations


Journal ArticleDOI
TL;DR: In this article, a number of simple and inexpensive enhancement techniques are suggested that make use of easily computed local context features to aid in the reassignment of each point's gray level during histogram transformation.

383 citations


Journal ArticleDOI
TL;DR: A set of orthogonal functions related to distinctive image features is presented, which allows efficient extraction of such boundary elements from digitized images with considerable improvements over existing techniques, at a very moderate increase in computational cost.
Abstract: We study a class of fast algorithms that extract object boundaries from digitized images. A set of orthogonal functions related to distinctive image features is presented, which allows efficient extraction of such boundary elements. The properties of these functions are used to define new criteria for edge detection, and a sequential algorithm is presented. Results indicate considerable improvements over existing techniques, with a very moderate increase of computational cost.
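The abstract does not reproduce the specific orthogonal basis, so the following numpy sketch is only a generic stand-in: it projects each 3x3 neighborhood onto two orthogonal gradient-like masks and uses the projection energy as an edge criterion; the masks and the threshold are illustrative assumptions, not the paper's functions.

    import numpy as np

    # Two 3x3 masks that are orthogonal as 9-vectors, illustrative stand-ins
    # for the paper's feature-related orthogonal functions.
    GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    GY = GX.T

    def edge_energy(img):
        # Sum of squared projections of each 3x3 neighborhood onto the masks.
        img = img.astype(float)
        e = np.zeros_like(img)
        for r in range(1, img.shape[0] - 1):
            for c in range(1, img.shape[1] - 1):
                patch = img[r - 1:r + 2, c - 1:c + 2]
                e[r, c] = (patch * GX).sum() ** 2 + (patch * GY).sum() ** 2
        return e

    # Edge map: threshold the energy (threshold choice left to the application).
    # edges = edge_energy(image) > threshold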

378 citations


Journal ArticleDOI
TL;DR: Two error measures, the percentage area misclassified and a new pixel distance error, were defined and evaluated in terms of their correlation with human observation for comparison of multiple segmentations of the same scene and multiple scenes segmented by the same technique.
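Assuming labeled segmentations, the first measure is direct; for the second, the paper's exact pixel-distance weighting is not given in the TL;DR, so the sketch below uses the mean distance from each misclassified pixel to the nearest pixel that truly carries its assigned label, as one plausible reading rather than the paper's formula.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def area_misclassified(seg, truth):
        # Percentage of pixels whose label disagrees with ground truth.
        return 100.0 * np.mean(seg != truth)

    def pixel_distance_error(seg, truth):
        # Mean distance from each misclassified pixel to the nearest pixel
        # whose true label matches the label it was wrongly given.
        total, count = 0.0, 0
        for label in np.unique(truth):
            d = distance_transform_edt(truth != label)   # distance to that true class
            wrong = (seg == label) & (truth != label)
            total += d[wrong].sum()
            count += int(wrong.sum())
        return total / max(count, 1)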

342 citations


Journal ArticleDOI
TL;DR: A major component of the computational burden of the maximum entropy procedure is shown to be a two-dimensional convolution sum, which can be efficiently calculated by fast Fourier transform techniques.
Abstract: Two-dimensional digital image reconstruction is an important imaging process in many of the physical sciences. If the data are insufficient to specify a unique reconstruction, an additional criterion must be introduced, either implicitly or explicitly before the best estimate can be computed. Here we use a principle of maximum entropy, which has proven useful in other contexts, to design a procedure for reconstruction from noisy measurements. Implementation is described in detail for the Fourier synthesis problem of radio astronomy. The method is iterative and hence more costly than direct techniques; however, a number of comparative examples indicate that a significant improvement in image quality and resolution is possible with only a few iterations. A major component of the computational burden of the maximum entropy procedure is shown to be a two-dimensional convolution sum, which can be efficiently calculated by fast Fourier transform techniques.
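The computational point in the last sentence can be made concrete: a direct N x N convolution sum costs O(N^4) operations, whereas the convolution theorem brings it to O(N^2 log N). A minimal numpy sketch follows (circular convolution; zero-pad both arrays first if a linear convolution is wanted).

    import numpy as np

    def conv2_fft(a, b):
        # 2D circular convolution via the convolution theorem:
        # multiply the spectra, then invert the transform.
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b, a.shape)))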

262 citations


ReportDOI
TL;DR: A standard approach to threshold selection for image segmentation is based on locating valleys in the image's gray level histogram, but several methods have been proposed that produce a transformed histogram in which the valley is deeper, or is converted into a peak, and is thus easier to detect.
Abstract: A standard approach to threshold selection for image segmentation is based on locating valleys in the image's gray level histogram. Several methods have been proposed that produce a transformed histogram in which the valley is deeper, or is converted into a peak, and is thus easier to detect. The transformed histograms used in these methods can all be obtained by creating (gray level, edge value) scatter plots, and computing various weighted projections of these plots on the gray level axis. Using this unified approach makes it easier to understand how the methods work and to predict when a particular method is likely to be effective. The methods are applied to a set of examples involving both real and synthetic images, and the characteristics of the resulting transformed histograms are discussed.
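A sketch of the unifying construction, assuming 8-bit images and an edge operator of the reader's choice: each pixel contributes to the gray-level histogram with a weight derived from its edge value, so low-edge weighting deepens the valley (interior pixels dominate) and high-edge weighting converts the valley into a peak near the boundary gray levels. The cutoffs t_low and t_high are illustrative, not the report's specific choices.

    import numpy as np

    def weighted_gray_histogram(img, edge, weight_fn):
        # Weighted projection of the (gray level, edge value) scatter plot
        # onto the gray-level axis. img: uint8 image; edge: same-shape array
        # of edge values; weight_fn: maps edge values to per-pixel weights.
        hist = np.zeros(256)
        np.add.at(hist, img.ravel(), weight_fn(edge).astype(float).ravel())
        return hist

    # Valley-deepening and peak-forming variants:
    # interior_hist = weighted_gray_histogram(img, edge, lambda e: e < t_low)
    # boundary_hist = weighted_gray_histogram(img, edge, lambda e: e > t_high)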

193 citations


Journal Article
TL;DR: This essay surveys recent work in vision at M.I.T. from a perspective in which the representational problems assume a primary importance.
Abstract: Vision is the construction of efficient symbolic descriptions from images of the world. An important aspect of vision is the choice of representations for the different kinds of information in a visual scene. In the early stages of the analysis of an image, the representations used depend more on what it is possible to compute from an image than on what is ultimately desirable, but later representations can be more sensitive to the specific needs of recognition. This essay surveys recent work in vision at M.I.T. from a perspective in which the representational problems assume a primary importance. An overall framework is suggested for visual information processing, in which the analysis proceeds through three representations: (1) the primal sketch, which makes explicit the intensity changes and local two-dimensional geometry of an image; (2) the 2 1/2-D sketch, which is a viewer-centered representation of the depth, orientation and discontinuities of the visible surfaces; and (3) the 3-D model representation, which allows an object-centered description of the three-dimensional structure and organization of a viewed shape. Recent results concerning processes for constructing and maintaining these representations are summarized and discussed.

156 citations


Journal Article
TL;DR: A variational method for the alignment and displacement estimation of 1D signals is proposed, posing non-flat displacement estimation as an optimization problem with similarity and smoothness terms and achieving state-of-the-art alignment quality at real-time speeds.
Abstract: In the context of signal analysis and pattern matching, alignment of 1D signals for the comparison of signal morphologies is an important problem. For image processing and computer vision, 2D optical flow (OF) methods find wide application for motion analysis and image registration, and variational OF methods have been continuously improved over the past decades. We propose a variational method for the alignment and displacement estimation of 1D signals. We pose the estimation of non-flat displacements as an optimization problem with a similarity and smoothness term, similar to variational OF estimation. To this end, we can make use of efficient optimization strategies that allow real-time applications on consumer-grade hardware. We apply our method to two applications from functional neuroimaging: the alignment of 2-photon imaging line scan recordings and the denoising of evoked and event-related potentials in single-trial matrices. We can report state-of-the-art results in terms of alignment quality and computing speeds. Existing methods for 1D alignment mostly target constant displacements, do not allow native subsample precision or precise control over regularization, or are slower than the proposed method. Our method is implemented as a MATLAB toolbox and is available online. It is suitable for 1D alignment problems where high accuracy and high speed are needed and non-constant displacements occur.

149 citations


Journal ArticleDOI
TL;DR: A theoretical and experimental extension of two-dimensional transform coding and hybrid transform/DPCM coding techniques to the coding of sequences of correlated image frames for Markovian image sources is presented.
Abstract: Two-dimensional transform coding and hybrid transform/DPCM coding techniques have been investigated extensively for image coding. This paper presents a theoretical and experimental extension of these techniques to the coding of sequences of correlated image frames. Two coding methods are analyzed: three-dimensional cosine transform coding, and two-dimensional cosine transform coding within an image frame combined with DPCM coding between frames. Theoretical performance estimates are developed for the coding of Markovian image sources. Simulation results are presented for transmission over error-free and binary symmetric channels.
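A sketch of the second method's signal path, with quantization and bit allocation omitted: a 2D cosine transform within each frame, and DPCM between frames realized as previous-frame prediction of the transform coefficients. scipy's dctn is used here; the function names are illustrative.

    import numpy as np
    from scipy.fft import dctn, idctn

    def hybrid_encode(frames):
        # 2D DCT per frame; DPCM across frames on the coefficient arrays.
        # Returns the residuals a quantizer/entropy coder would operate on.
        prev = np.zeros_like(np.asarray(frames[0], dtype=float))
        residuals = []
        for f in frames:
            coeffs = dctn(np.asarray(f, dtype=float), norm='ortho')
            residuals.append(coeffs - prev)    # interframe prediction error
            prev = coeffs
        return residuals

    def hybrid_decode(residuals):
        prev = np.zeros_like(residuals[0])
        frames = []
        for r in residuals:
            prev = prev + r                    # accumulate the DPCM differences
            frames.append(idctn(prev, norm='ortho'))
        return frames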

Journal ArticleDOI
TL;DR: The complexities encountered in applying segmentation techniques to color images of natural scenes involving complex textured objects are analyzed and new ways of using the techniques to overcome some of the problems are explored.

Journal ArticleDOI
TL;DR: A stochastic model of edge structure is proposed and the edge detection problem formulated as one of least mean-square spatial filtering, which indicates substantial advantages over conventional edge detectors in the presence of noise.

Journal Article
TL;DR: The unit allows accurate positioning of a scintillation camera's detector at any angle around a patient in order to obtain the multiple projection images needed for transaxial tomography, and it is capable of imaging any area of the body.
Abstract: An emission transaxial tomographic system using a scintillation camera as the detector is described. The unit allows accurate positioning of a scintillation camera's detector at any angle around a patient in order to obtain the multiple projection images needed for transaxial tomography, and it is capable of imaging any area of the body. The camera can also be used for all types of conventional imaging procedures. Image processing is performed by a small on-line computer. A convolution algorithm and a mathematical technique for approximate absorption correction are used to obtain high-resolution and high-contrast images with good quantitative accuracy. The operation of the system is described and representative phantom and patient studies are presented to illustrate the capabilities of the system.
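The convolution-algorithm family can be illustrated by a bare filtered-backprojection sketch: ramp-filter each projection in the Fourier domain, then smear it back across the image. The ramp filter and rotation-based backprojection are standard stand-ins; the paper's approximate absorption correction is not modeled.

    import numpy as np
    from scipy.ndimage import rotate

    def filtered_backprojection(sinogram, angles_deg):
        # sinogram: (num_angles, num_detectors); reconstruction grid is
        # assumed square with side equal to the detector count.
        n = sinogram.shape[1]
        ramp = np.abs(np.fft.fftfreq(n))                 # |f| ramp filter
        recon = np.zeros((n, n))
        for proj, theta in zip(sinogram, angles_deg):
            filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))
            smear = np.tile(filtered, (n, 1))            # constant along each ray
            recon += rotate(smear, theta, reshape=False, order=1)
        return recon * np.pi / (2 * len(angles_deg))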

Journal ArticleDOI
TL;DR: In this article the adaptive systems are divided into four categories; the theoretical and implementational problems of the optimum system are discussed, and the assumptions made to overcome these problems are outlined.
Abstract: The following is a survey of the technical literature on adaptive coding of imagery. Section 1 briefly discusses the general problem of image data compression. The optimum image data compression system, from a theoretical viewpoint, is presented in Section 1.1. The theoretical and implementational problems of the optimum system are discussed, and the assumptions made to overcome these problems are outlined. One important assumption is stationarity, which does not hold for most imagery. In adaptive systems the parameters are varied according to changes in signal statistics, optimizing system performance for nonstationary signals. In this article the adaptive systems are divided into four categories. Section 2 is a survey of adaptive transform coding systems. Section 3 discusses adaptive predictive coding systems. Sections 4 and 5 discuss adaptive cluster coding and adaptive entropy techniques, respectively.

Proceedings ArticleDOI
08 Dec 1977
TL;DR: In this paper, a digital processor has been designed and built to implement Lockheed's phase correlation technique at a rate of 30 correlations per second on 128 x 128 element images digitized to eight bits.
Abstract: A digital processor has been designed and built to implement Lockheed's Phase Correlation technique at a rate of 30 correlations per second on 128 x 128 element images digitized to eight bits. Phase Correlation involves taking the inverse Fourier transform of the appropriately filtered phase of the Fourier cross-power spectrum of a pair of images to extract their relative displacement vector. It achieves sub-pixel accuracy with relative insensitivity to scene content, illumination differences and narrow-band noise. The processor, which is designed to accept inputs from a variety of sensors, is built with conventional TTL and MOS components and employs only a moderate amount of parallelism. It uses floating point arithmetic with equal exponents for real and imaginary parts. Multiplications are performed by table lookup. Application areas for the correlator include image velocity sensing, correlation guidance and scene tracking.
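The core computation is compact enough to sketch in a few lines of numpy (the processor's fixed-point and table-lookup details, the band-limiting filter, and subpixel peak interpolation are omitted): whiten the cross-power spectrum to unit magnitude so that only phase, and hence displacement, survives.

    import numpy as np

    def phase_correlate(a, b, eps=1e-12):
        # Returns the (row, col) circular shift d with b approximately np.roll(a, d).
        R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)   # cross-power spectrum
        R /= np.abs(R) + eps                           # keep phase, discard magnitude
        surface = np.real(np.fft.ifft2(R))             # sharp peak at the displacement
        peak = np.unravel_index(np.argmax(surface), surface.shape)
        # Peaks past the midpoint wrap around to negative shifts.
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, surface.shape))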

Journal ArticleDOI
01 Nov 1977
TL;DR: In this article, a gamma-ray camera based on the Compton effect is proposed, which is able to locate successive interaction points in multicollision trajectories and the associated energy losses.
Abstract: A novel gamma-ray camera is proposed that is based on the Compton effect. The basis of the device is a segmented semiconductor detector which is able to locate successive interaction points in multicollision trajectories and the associated energy losses. In this way, a conic section can be computed on an image plane for each emission from a point source in the object. An overall image can be constructed by summation of the individual ellipses. The effect of measurement errors is investigated theoretically and by simulation, and leads to favourable estimates for the resolution and sensitivity of the camera. The question of image processing to improve performance is briefly discussed.


Journal ArticleDOI
Mitchell, Myers, Boyne
TL;DR: In this paper, the relative frequency of local extremes in grey level is used as the principal measure for image texture analysis, which is computationally simple and can be implemented in hardware for real-time analysis.
Abstract: A new technique for image texture analysis is described which uses the relative frequency of local extremes in grey level as the principal measure. This method is invariant to multiplicative gain changes (such as caused by changes in illumination level or film processing) and is invariant to image resolution and sampling rate if the image is not undersampled. The algorithm described is computationally simple and can be implemented in hardware for real-time analysis. Comparisons are made between this new method and the spatial dependence method of texture analysis using 49 samples of each of eight textures. The new method seems just as accurate and considerably faster.
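A one-dimensional (per-scanline) sketch of the measure; the paper's windowing and full 2D details are not reproduced. The gain-invariance claim is visible directly in the code: multiplying the image by a positive constant leaves the set of strict local extrema unchanged.

    import numpy as np

    def extrema_rate(img):
        # Fraction of interior pixels that are strict local extrema of gray
        # level along their row, i.e. the relative frequency of local extremes.
        left = img[:, 1:-1].astype(float) - img[:, :-2]
        right = img[:, 1:-1].astype(float) - img[:, 2:]
        extrema = ((left > 0) & (right > 0)) | ((left < 0) & (right < 0))
        return extrema.mean()

    # extrema_rate(2.0 * img) == extrema_rate(img): invariant to gain changes.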

Journal ArticleDOI
A. K. Jain
TL;DR: In this article, the fast Karhunen-Loeve transform is extended to images with nonseparable or nearly isotropic covariance functions, or both, for image restoration, data compression, edge detection, image synthesis, etc.
Abstract: Stochastic representation of discrete images by partial differential equation operators is considered. It is shown that these representations can fit random images, with nonseparable, isotropic covariance functions, better than other common covariance models. Application of these models in image restoration, data compression, edge detection, image synthesis, etc., is possible. Different representations based on classification of partial differential equations are considered. Examples on different images show the advantages of using these representations. The previously introduced notion of fast Karhunen-Loeve transform is extended to images with nonseparable or nearly isotropic covariance functions, or both.

Journal ArticleDOI
01 Jan 1977
TL;DR: A variety of incoherent optical analog techniques for performing correlation and linear transform operations is reviewed; both scanning and nonscanning systems using spatial and/or temporal inputs are considered.
Abstract: The use of optical systems in signal processing applications can offer significant advantages over an equivalent electronic approach. These advantages stem chiefly from the high-speed analog multiply and parallel processing capability inherent in an optical system. This can be used to advantage in application areas requiring large quantities of data to be processed in near real time. Presented in this paper is a review of a variety of incoherent optical analog techniques for performing correlation and linear transform operations. Both scanning and nonscanning systems using spatial and/or temporal inputs are considered.

Journal ArticleDOI
TL;DR: A freeze-drying and shadowing procedure for two-dimensional periodic biological structures that allows the subsequent application of image processing techniques is reported; the fine structure showed no systematic deformations after freeze-drying when compared to its stain exclusion pattern.

Proceedings Article
22 Aug 1977
TL;DR: A vision system is described which uses a semantic network model and a distributed control structure to accomplish the image analysis process, casting some light on fundamental problems of computer perception.
Abstract: This document describes a vision system which uses a semantic network model and a distributed control structure to accomplish the image analysis process. The system is an attempt to bring together many current ideas in artificial intelligence and vision programming and thereby to cast some light on fundamental problems of computer perception. The semantic network facilitates the interplay between geometric and other relational constraints which are used to direct and limit search.

Journal ArticleDOI
TL;DR: An optimum filter to restore images degraded by blurring and signal-dependent noise is obtained on the basis of the theory of Wiener filtering.
Abstract: An optimum filter to restore images degraded by blurring and signal-dependent noise is obtained on the basis of the theory of Wiener filtering. Computer simulations of image restoration using signal-dependent noise models are carried out. It becomes clear that the optimum filter, which makes use of a priori information on the signal-dependent nature of the noise and on the spectral densities of the signal and the noise, which show significant spatial correlation, is potentially advantageous.
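The abstract does not spell out the signal-dependent noise model itself, so the sketch below shows only the classical frequency-domain Wiener restoration filter the paper builds on, assuming a known blur PSF and a known a priori signal-to-noise power-spectrum ratio.

    import numpy as np

    def wiener_restore(degraded, psf, snr):
        # W(f) = H*(f) / (|H(f)|^2 + 1/SNR(f)); snr may be a scalar or an
        # array giving the signal-to-noise power ratio at each frequency.
        # The PSF is assumed origin-centered (wrapped), so no shift is added.
        H = np.fft.fft2(psf, degraded.shape)
        G = np.fft.fft2(degraded)
        W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
        return np.real(np.fft.ifft2(W * G))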

Journal ArticleDOI
TL;DR: An analysis is presented for selecting the important rainbow-hologram formation-setup parameters for minimization of the image blur.
Abstract: An analysis is presented for selecting the important rainbow-hologram formation-setup parameters for minimization of the image blur.

Patent
22 Apr 1977
TL;DR: In this article, a system and method are presented for reconstructing images of desired "frozen action" cross-sections of the heart or of other bodily organs or similar objects undergoing cyclic displacements.
Abstract: System and method are set forth enabling reconstruction of images of desired "frozen action" cross-sections of the heart or of other bodily organs or similar objects undergoing cyclic displacements. Utilizing a computed tomography scanning apparatus, data are acquired during one or more full rotational cycles and suitably stored. The said data, corresponding to various angular projections, can then be correlated with the desired portion of the object's cyclical motion by means of a reference signal associated with the motion, such as that derived through an electrocardiogram where a heart is the object of interest. Data taking can also be limited to only the times when the desired portion of the cyclical motion is occurring. A sequential presentation of a plurality of said frozen-action cross-sections provides a motion picture of the moving object.
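A sketch of the gating logic in the claims, with illustrative names: each stored projection is tagged with its acquisition time, its phase within the cardiac cycle is computed from a reference signal (successive ECG R-peak times here), and only projections inside the desired phase window are kept for reconstruction.

    import numpy as np

    def gate_projections(projections, sample_times, r_peak_times, lo, hi):
        # Keep projections whose cardiac phase (0..1 between successive
        # R peaks) falls in [lo, hi), the desired "frozen action" window.
        kept = []
        for proj, t in zip(projections, sample_times):
            i = np.searchsorted(r_peak_times, t) - 1       # cycle containing t
            if 0 <= i < len(r_peak_times) - 1:
                cycle = r_peak_times[i + 1] - r_peak_times[i]
                phase = (t - r_peak_times[i]) / cycle
                if lo <= phase < hi:
                    kept.append(proj)
        return kept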

ReportDOI
TL;DR: Histogram peaks can be sharpened using an iterative process in which large bins grow at the expense of nearby smaller bins; the resulting small number of gray levels facilitates compressing or segmenting the image.
Abstract: Histogram peaks can be sharpened using an iterative process in which large bins grow at the expense of nearby smaller bins. The modified histogram will consist of a few spikes corresponding to the peaks of the original histogram. The image corresponding to the modified histogram is often almost indistinguishable from the original image. The small number of different gray levels in that image can be used to facilitate compressing or segmenting it.
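One plausible reading of the iteration, sketched below with an assumed window size and stopping rule: on each pass every bin's mass migrates to the largest bin in its neighborhood, so large bins grow at the expense of nearby smaller ones and the histogram collapses to a few spikes.

    import numpy as np

    def sharpen_histogram(hist, radius=2, max_iters=50):
        h = np.asarray(hist, dtype=float).copy()
        for _ in range(max_iters):
            new = np.zeros_like(h)
            for i in range(len(h)):
                lo, hi = max(0, i - radius), min(len(h), i + radius + 1)
                target = lo + int(np.argmax(h[lo:hi]))   # dominant nearby bin
                new[target] += h[i]                      # small bins feed large ones
            if np.array_equal(new, h):                   # fixed point: spikes only
                break
            h = new
        return h

Remapping each gray level to the spike where its bin's mass ends up then yields the few-level image described in the abstract.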

Journal ArticleDOI
01 Jan 1977-Libri
TL;DR: This book spans principles (the image, the model, the purposes of graphic records, information and information science, librarianship, library-like systems) and practicalities (the mission of the public library, an approach to funding public libraries, new technologies and responsibilities, and myths).
Abstract: Introduction. Part I: Principles. The Image. The Model. The Purposes of Graphic Records. Information and Information Science. Librarianship. Library-Like Systems. Part II: Practicalities. The Mission of the Public Library. Public Libraries: An Approach to Funding. New Technologies, New Responsibilities. Myths. Conclusion. References. Author Index. Subject Index.

Journal ArticleDOI
TL;DR: In this paper, the joint probability density function of the projections was used to derive the reconstruction scheme which is optimum in the maximum likelihood sense, and it was shown that for an average number of counts detected in excess of approximately 100 per projection, the image is essentially unbiased.
Abstract: The stochastic nature of the projections used in transmission image reconstruction has received little attention to date. This paper utilizes the joint probability density function of the projections to derive the reconstruction scheme which is optimum in the maximum likelihood sense. Two regimes are examined: that where there is significant probability of a zero count projection, and that where the zero count event may be safely ignored. The former regime leads to a complicated algorithm whose performance is data dependent. The latter regime leads to a simpler algorithm. Its performance, in terms of its bias and variance, has been calculated. It is shown that, for an average number of counts detected in excess of approximately 100 per projection, the image is essentially unbiased, and for counts in excess of approximately 2500 per projection, the image approximately attains the minimum variance of any reconstruction scheme using the same measurements.
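In the simpler regime where the zero-count event can be safely ignored, the Poisson maximum-likelihood estimate of each ray's line integral has a closed form, which can be sketched directly; blank_counts, the expected unattenuated counts per projection, is an assumed known calibration input.

    import numpy as np

    def ml_line_integrals(counts, blank_counts):
        # Model: counts_i ~ Poisson(blank_i * exp(-L_i)). Setting the
        # log-likelihood derivative to zero gives L_i = ln(blank_i / counts_i),
        # valid when counts_i > 0 (the zero-count event is ignored).
        counts = np.asarray(counts, dtype=float)
        return np.log(np.asarray(blank_counts, dtype=float) / counts)

These estimated line integrals then feed any standard linear reconstruction; the bias and variance figures quoted in the abstract characterize how this estimator behaves as the detected counts grow.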

Book
01 Jan 1977
TL;DR: Noncoherent optical processing.
Abstract: Noncoherent optical processing.