
Showing papers on "Image processing published in 1981"


Journal ArticleDOI
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form; together they provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing conditions.
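The consensus idea described above can be sketched in a few lines. The following Python toy fits a 2-D line in the presence of gross outliers; the line model, tolerance, iteration count, and function names are illustrative choices for the sketch, not details taken from the paper:

```python
import random

def ransac_line(points, n_iters=200, tol=0.5, seed=0):
    """Fit y = a*x + b by RANSAC: repeatedly fit a minimal random sample
    (2 points) and keep the model with the largest consensus set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, cannot define a line
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # consensus set: points within tol of the candidate line
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

With ten collinear points and three gross outliers, an ordinary least-squares fit would be dragged off the line, while the consensus set recovered here contains only the collinear points.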

23,396 citations


Journal ArticleDOI
01 Mar 1981
TL;DR: A large variety of algorithms for image data compression are considered, starting with simple techniques of sampling and pulse code modulation (PCM), and state-of-the-art algorithms for two-dimensional data transmission are reviewed.
Abstract: With the continuing growth of modern communications technology, demand for image transmission and storage is increasing rapidly. Advances in computer technology for mass storage and digital processing have paved the way for implementing advanced data compression techniques to improve the efficiency of transmission and storage of images. In this paper a large variety of algorithms for image data compression are considered. Starting with simple techniques of sampling and pulse code modulation (PCM), state-of-the-art algorithms for two-dimensional data transmission are reviewed. Topics covered include differential PCM (DPCM) and predictive coding, transform coding, hybrid coding, interframe coding, adaptive techniques, and applications. Effects of channel errors and other miscellaneous related topics are also considered. While most of the examples and image models have been specialized for visual images, the techniques discussed here could be easily adapted more generally for multidimensional data compression. Our emphasis here is on fundamentals of the various techniques. A comprehensive bibliography with comments is included for a reader interested in further details of the theoretical and experimental results discussed here.
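As a minimal illustration of the DPCM idea mentioned above, the sketch below predicts each sample from the previously *reconstructed* sample and quantizes only the residual; the first-order predictor, step size, and names are illustrative assumptions, not details from the survey:

```python
def dpcm_encode(samples, step=4):
    """First-order DPCM: predict each sample by the previous reconstructed
    sample, quantize the prediction error uniformly, emit quantizer indices."""
    indices, recon = [], 0
    for s in samples:
        err = s - recon
        q = round(err / step)       # uniform quantization of the residual
        indices.append(q)
        recon = recon + q * step    # track the decoder's reconstruction
    return indices

def dpcm_decode(indices, step=4):
    """Invert the encoder: accumulate dequantized residuals."""
    out, recon = [], 0
    for q in indices:
        recon = recon + q * step
        out.append(recon)
    return out
```

Because the encoder predicts from the reconstructed (not the original) signal, quantization error cannot accumulate: each decoded sample stays within half a quantizer step of its input.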

810 citations


Journal ArticleDOI
01 Sep 1981

568 citations



Journal ArticleDOI
TL;DR: A highly efficient recursive algorithm is defined for simultaneously convolving an image (or other two-dimensional function) with a set of kernels which differ in width but not in shape, so that the algorithm generates a set of low-pass or band-pass versions of the image.
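One way to read the summary above is that repeatedly convolving with one small kernel is equivalent to convolving once with progressively wider kernels of the same shape. The 1-D Python sketch below illustrates only that cascading idea; it is not the paper's recursive algorithm (which also involves clever resampling), and the kernel choice is an assumption:

```python
def convolve(signal, kernel):
    """Full 1-D convolution (output length n + k - 1)."""
    n, k = len(signal), len(kernel)
    out = [0.0] * (n + k - 1)
    for i, s in enumerate(signal):
        for j, w in enumerate(kernel):
            out[i + j] += s * w
    return out

def lowpass_set(signal, kernel, depth):
    """Cascade the same small smoothing kernel: each stage equals one
    convolution with a wider, same-shaped kernel, yielding a set of
    progressively more low-pass versions of the input."""
    versions = [signal]
    for _ in range(depth):
        versions.append(convolve(versions[-1], kernel))
    return versions
```

Differencing successive low-pass versions then gives the band-pass set the TL;DR mentions.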

442 citations


Journal ArticleDOI
01 May 1981
TL;DR: Several state-of-the-art mathematical models useful in image processing are considered, including the traditional fast unitary transforms, autoregressive and state variable models as well as two-dimensional linear prediction models.
Abstract: Several state-of-the-art mathematical models useful in image processing are considered. These models include the traditional fast unitary transforms, autoregressive and state variable models as well as two-dimensional linear prediction models. These models introduced earlier [51], [52] as low-order finite difference approximations of partial differential equations are generalized and extended to higher order in the framework of linear prediction theory. Applications in several image processing problems, including image restoration, smoothing, enhancement, data compression, spectral estimation, and filter design, are discussed and examples given.

441 citations


Journal ArticleDOI
Abstract: We present a new direct method of estimating the three-dimensional motion parameters of a rigid planar patch from two time-sequential perspective views (image frames). First, a set of eight pure parameters is defined. These parameters can be determined uniquely from the two given image frames by solving a set of linear equations. Then, the actual motion parameters are determined from these pure parameters by a method which requires only the solution of a sixth-order polynomial in one variable, for which efficient algorithms exist. Aside from a scale factor for the translation parameters, the number of real solutions never exceeds two. In the special case of three-dimensional translation, the motion parameters can be expressed directly as some simple functions of the eight pure parameters. Thus, only a few arithmetic operations are needed.

391 citations


Proceedings Article
24 Aug 1981
TL;DR: This paper describes an algorithm for stereo sensing that uses an edge-based line-by-line stereo correlation scheme, and appears to be fast, robust, and parallel implementable.
Abstract: The past few years have seen a growing interest in the application of three-dimensional image processing. With the increasing demand for 3-D spatial information for tasks of passive navigation [7,12], automatic surveillance [9], aerial cartography [10,13], and inspection in industrial automation, the importance of effective stereo analysis has been made quite clear. A particular challenge is to provide reliable and accurate depth data for input to object or terrain modelling systems (such as [5]). This paper describes an algorithm for such stereo sensing. It uses an edge-based line-by-line stereo correlation scheme, and appears to be fast, robust, and parallel implementable. The processing consists of extracting edge descriptions for a stereo pair of images, linking these edges to their nearest neighbors to obtain the edge connectivity structure, correlating the edge descriptions on the basis of local edge properties, then cooperatively removing those edge correspondences determined to be in error - those which violate the connectivity structure of the two images. A further correlation process, using a technique similar to that used for the edges, is applied to the image intensity values over intervals defined by the previous correlation. The result of the processing is a full image array disparity map of the scene viewed.

389 citations


Proceedings Article
01 Jan 1981
TL;DR: In this article, an efficient single-pass adaptive bandwidth compression technique using the discrete cosine transform is described; adaptivity is achieved by using a rate buffer for channel rate equalization, and results are demonstrated for coding of color images at 0.4 bits/pixel, corresponding to real-time color television transmission over a 1.5 Mbit/s channel.
Abstract: An efficient single-pass adaptive bandwidth compression technique using the discrete cosine transform is described. The coding process involves a simple thresholding and normalization operation on the transform coefficients. Adaptivity is achieved by using a rate buffer for channel rate equalization. The buffer status and input rate are monitored to generate a feedback normalization factor. Excellent results are demonstrated for coding of color images at 0.4 bits/pixel corresponding to real-time color television transmission over a 1.5 Mbit/s channel.
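The threshold-and-normalize step on transform coefficients can be illustrated as below. The naive O(N^4) DCT-II and the fixed threshold and normalization values are illustrative stand-ins; in the paper the normalization factor is driven by rate-buffer feedback:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an N x N block (O(N^4); fine for an 8x8 sketch)."""
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = c(u) * c(v) * s
    return out

def threshold_normalize(coeffs, threshold, norm):
    """Zero coefficients below the threshold, scale the survivors by the
    normalization factor, and round to integers for transmission."""
    return [[round(c / norm) if abs(c) >= threshold else 0 for c in row]
            for row in coeffs]
```

For a smooth block, almost all energy lands in the low-order coefficients, so thresholding leaves very few nonzero values to transmit.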

388 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined geometries employing position-dependent charge partitioning to obtain a two-dimensional position signal from each detected photon or particle, and proposed two wedge-and-strip anode systems.
Abstract: The paper examines geometries employing position-dependent charge partitioning to obtain a two-dimensional position signal from each detected photon or particle. Requiring only three or four anode electrodes and signal paths, these geometries yield images with little distortion, and resolution is not limited by thermal noise. An analysis of the geometrical image nonlinearity between event centroid location and the charge partition ratios is presented. In addition, fabrication and testing of two wedge-and-strip anode systems are discussed. Images obtained with EUV radiation and microchannel plates verify the predicted performance, with further resolution improvements achieved by adopting low-noise signal circuitry. Also discussed are the designs of practical X-ray, EUV, and charged particle image systems.

353 citations


Journal ArticleDOI
J. Stoffel1, J. Moreland1
TL;DR: This paper is a tradeoff study of image processing algorithms that can be used to transform continuous tone and halftone pictorial image input into spatially encoded representations compatible with binary output processes.
Abstract: This paper is a tradeoff study of image processing algorithms that can be used to transform continuous tone and halftone pictorial image input into spatially encoded representations compatible with binary output processes. A large percentage of the electronic output marking processes utilize a binary mode of operation. The history and rationale for this are reviewed and thus the economic justification for the tradeoff is presented. A set of image quality and processing complexity metrics are then defined. Next, a set of algorithms including fixed and adaptive thresholding, orthographic pictorial fonts, electronic screening, ordered dither, and error diffusion are defined and evaluated relative to their ability to reproduce continuous tone input. Finally, these algorithms, along with random nucleated halftoning, the alias reducing image enhancement system (ARIES), and a new algorithm, selective halftone rescreening (SHARE), are defined and evaluated as to their ability to reproduce halftone pictorial input.
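Of the algorithms evaluated above, error diffusion is the easiest to sketch. The version below uses the familiar Floyd-Steinberg weights as an assumed concrete instance; the paper compares several binarization schemes and does not prescribe this particular one:

```python
def error_diffuse(image, threshold=128):
    """Floyd-Steinberg error diffusion: binarize each pixel, then push the
    quantization error onto unprocessed neighbours in fixed proportions."""
    h, w = len(image), len(image[0])
    img = [list(map(float, row)) for row in image]   # working copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= threshold else 0
            out[y][x] = new
            err = old - new
            # distribute the error to the right and the row below
            for dx, dy, wgt in ((1, 0, 7 / 16), (-1, 1, 3 / 16),
                                (0, 1, 5 / 16), (1, 1, 1 / 16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    img[ny][nx] += err * wgt
    return out
```

Because the error is carried forward rather than discarded, the binary output preserves local average intensity, which is what makes the technique attractive for continuous-tone reproduction on binary marking processes.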

Journal ArticleDOI
TL;DR: This paper reviews box-filtering techniques and describes some useful extensions of them.


Journal ArticleDOI
01 May 1981
TL;DR: Three areas in which human vision models have been successfully applied are image bandwidth compression, image quality assessment, and image enhancement; results from these areas are summarized and some example results are given.
Abstract: The mechanisms are discussed by which the human eye forms a neural image of the outside world for transmission along the optic nerve. Mathematical models of these mechanisms which can be exploited for engineering purposes are presented and their usefulness and limitations are discussed. Three areas in which human vision models have been successfully applied are image bandwidth compression, image quality assessment, and image enhancement; results from these areas are summarized and some example results are given. Some future directions are suggested.

Journal ArticleDOI
01 Jan 1981
TL;DR: Two-dimensional signal processing (including image processing) is possible, in spite of the inherent one-dimensional nature of the acousto-optic device as a spatial light modulator.
Abstract: The use of acousto-optic devices in real-time signal convolution and correlation has increased dramatically during the past decade because of improvements in device characteristics and implementation techniques. Depending on the application, processing can be implemented via spatial or temporal integration. Two-dimensional signal processing (including image processing) is possible, in spite of the inherent one-dimensional nature of the acousto-optic device as a spatial light modulator.

Journal ArticleDOI
TL;DR: A very efficient back-projection algorithm is described which results in large time savings when implemented in machine code, along with a minor modification which converts it to a re-projection procedure with comparable efficiency.
Abstract: While the computation time for reconstructing images in C.T. is not a problem in commercial systems, there are many experimental and developmental applications where resources are limited and image reconstruction places a heavy burden on the computer system. This paper describes a very efficient back-projection algorithm which results in large time savings when implemented in machine code. Also described is a minor modification to this algorithm which converts it to a re-projection procedure with comparable efficiency.
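For orientation, a minimal unfiltered back-projection in Python is shown below; it smears each 1-D projection back across the image along its viewing angle. This naive nearest-neighbour sketch illustrates the operation only and does not attempt to reproduce the paper's efficient machine-code implementation:

```python
import math

def back_project(sinogram, angles, size):
    """Unfiltered back-projection: for each (projection, angle) pair,
    accumulate the projection value along the corresponding ray through
    every pixel (nearest-neighbour sampling of the projection)."""
    img = [[0.0] * size for _ in range(size)]
    c = (size - 1) / 2.0
    for proj, theta in zip(sinogram, angles):
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        for y in range(size):
            for x in range(size):
                # signed distance of the pixel centre from the central ray
                t = (x - c) * cos_t + (y - c) * sin_t
                k = int(round(t + c))
                if 0 <= k < size:
                    img[y][x] += proj[k]
    return img
```

In a filtered-back-projection pipeline each projection would first be convolved with a reconstruction filter; the accumulation step sketched here is the part the paper optimizes.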

Journal ArticleDOI
TL;DR: In the international search for the optimal image processing computer architecture, image parallelism is the key to cost effectiveness.
Abstract: In the international search for the optimal image processing computer architecture, image parallelism is the key to cost effectiveness.

Journal ArticleDOI
TL;DR: An algorithm is presented for constructing a quadtree for a binary image given its row-by-row description that processes the image one row at a time and merges identically colored sons as soon as possible, so that a minimal size quadtree exists after processing each pixel.
Abstract: An algorithm is presented for constructing a quadtree for a binary image given its row-by-row description. The algorithm processes the image one row at a time and merges identically colored sons as soon as possible, so that a minimal size quadtree exists after processing each pixel. This method is spacewise superior to one which reads in an entire array and then attempts to build the quadtree.
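For contrast with the paper's row-by-row method, here is the naive whole-array construction it is spacewise superior to, sketched in Python; the tuple-based tree representation and names are illustrative choices:

```python
def build_quadtree(img, x=0, y=0, size=None):
    """Naive top-down quadtree over a 2^k x 2^k binary image: a node is a
    leaf colour if its quadrant is uniform, otherwise a 4-tuple of
    (NW, NE, SW, SE) subtrees. Requires the whole array in memory."""
    if size is None:
        size = len(img)
    vals = {img[y + j][x + i] for j in range(size) for i in range(size)}
    if len(vals) == 1:
        return vals.pop()                        # uniform quadrant -> leaf
    h = size // 2
    return (build_quadtree(img, x, y, h),            # NW
            build_quadtree(img, x + h, y, h),        # NE
            build_quadtree(img, x, y + h, h),        # SW
            build_quadtree(img, x + h, y + h, h))    # SE
```

The paper's algorithm reaches the same minimal tree while holding only one row at a time, merging identically coloured sons as soon as they are complete.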

Journal ArticleDOI
TL;DR: Texture transforms of multispectral Landsat-2 MSS imagery were generated and proved useful for edge detection and image enhancement, but not as features for the thematic mapping of land cover.

Journal ArticleDOI
TL;DR: The results of computer simulations show clearly how the process of forcing the image to conform to a priori object data reduces artifacts arising from limited data available in the Fourier domain.
Abstract: An iterative technique is proposed for improving the quality of reconstructions from projections when the number of projections is small or the angular range of projections is limited. The technique consists of transforming repeatedly between image and transform spaces and applying a priori object information at each iteration. The approach is a generalization of the Gerchberg-Papoulis algorithm, a technique for extrapolating in the Fourier domain by imposing a space-limiting constraint on the object in the spatial domain. A priori object data that may be applied, in addition to truncating the image beyond the known boundaries of the object, include limiting the maximum range of variation of the physical parameter being imaged. The results of computer simulations show clearly how the process of forcing the image to conform to a priori object data reduces artifacts arising from limited data available in the Fourier domain.
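The alternating-constraint loop can be sketched in 1-D with a naive DFT. The support set and known-frequency set below are illustrative; note that when the full spectrum is known the loop recovers the signal in a single pass, while the interesting (limited-data) case converges gradually:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n)) for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / n)
                for f in range(n)) / n for t in range(n)]

def gp_restore(known_spectrum, known_freqs, support, n, iters=50):
    """Gerchberg-Papoulis-style iteration: re-impose the measured Fourier
    samples, transform to the signal domain, zero everything outside the
    a priori support, and repeat."""
    x = [0.0] * n
    for _ in range(iters):
        X = dft(x)
        for f in known_freqs:                 # data constraint (Fourier domain)
            X[f] = known_spectrum[f]
        x = idft(X)
        x = [v.real if t in support else 0.0  # a priori support constraint
             for t, v in enumerate(x)]
    return x
```

Each pass is a projection onto one constraint set, so the iterate moves toward a signal consistent with both the measured Fourier data and the known object extent, which is how the artifacts from missing Fourier data are suppressed.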

Journal ArticleDOI
TL;DR: Basic concepts of connectedness, cavities, and holes are defined for subsets of three-dimensional arrays, and three-dimensional arcs and curves are defined and characterized.
Abstract: Basic concepts of connectedness, cavities, and holes are defined for subsets of three-dimensional arrays. Three-dimensional arcs and curves are also defined and characterized.

Patent
20 Mar 1981
TL;DR: In this paper, an interactive image processing system (200,300) is presented which is capable of simultaneous processing of at least two different digitized composite color images to provide a displayable resultant composite color image.
Abstract: An improved interactive image processing system (200,300) is provided which is capable of simultaneous processing of at least two different digitized composite color images to provide a displayable resultant composite color image. Each of the digitized composite color images has separate digitized red, blue and green image components and has an associated image information content. The system (200,300) includes separate image storage planes (246,346,70',72',74',70",72",74",70"',72'", 74"',370,372,374,370',372',374',370",372",374) for retrievably storing each of the digitized red, blue and green image components or other image data as well as graphic planes (78',378) for storing graphic control data for processing of the images. The digital image processing of the image components is accomplished in a digital image processing portion (208,308) which includes an image processor (210,310) which contains the various storage planes in a refresh memory (246,346) which cooperates with a pipeline processor configuration (86'), image combine circuitry (270,272,274,270',272',274') and other control circuitry to enable the simultaneous processing between each of the corresponding image planes on a pixel by pixel basis under interactive control of a keyboard (50'), data tablet (54') or other interactive device. The system may be employed for interactive video processing (200) or as an interactive film printing system (300) in which the simultaneous processing of the two different images, which may be iterative, can be monitored in real time on a television monitor (44',315). In the video system (200), the combining format of the image planes may be interactively varied on a pixel-by-pixel basis by creating different digital control masks for each pixel which are stored in refresh memory (246,346). In either system (200,300), the interactive simultaneous digital processing of the images is accomplished in an RGB format.

Proceedings ArticleDOI
01 Aug 1981
TL;DR: This paper describes an on-going project at Carnegie-Mellon University in which a frame buffer raster-scan display system is designed which has the high performance typically required for interactive display applications.
Abstract: Interactive use of a display requires the capability to update the display rapidly. This paper describes an on-going project at Carnegie-Mellon University in which we are designing a frame buffer raster-scan display system which has the high performance typically required for interactive display applications. The system is intended to be a display for personal computers, computer generated graphic images, and image processing applications. Built using smart VLSI memory chips, the system will use parallel processing techniques to provide high performance.

Journal ArticleDOI
TL;DR: An algorithm has been developed for determining the position of a robot from only one TV image: the standard square pattern in the input image is transformed into its skeleton, and the position is estimated by the method of least squares.

Journal ArticleDOI
TL;DR: A parallel procedure is described which, applied to a connected image, produces a connected skeleton formed by the union of simple digital arcs; retaining all local maxima among the skeleton elements ensures that the original image can be recovered by means of a reverse distance transform.
Abstract: In picture processing it is often convenient to deal with a stick-like version (skeleton) of binary digital images. Although skeleton connectedness is not necessary for storage and retrieval purposes, this property is desirable when a structural description of images is of interest. In this paper a parallel procedure is described which, applied to a connected image, produces a connected skeleton formed by the union of simple digital arcs. The procedure involves a step by step propagation of the background over the image. At every step, contour elements either belonging to the significant convex regions of the current image or being local maxima of the original image are selected as skeleton elements. Since the final set so obtained is not ensured to be connected, the configurations at which disconnections appear are investigated and procedures to avoid this shortcoming are given. The presence of the whole set of local maxima among the skeleton elements ensures the possibility of recovering the original image by means of a reverse distance transform. The details of the program implementing the proposed algorithm on a parallel processor are finally included.
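The recovery property mentioned at the end can be illustrated with a toy reverse distance transform. The chessboard metric and the labelling convention (a label d repaints the square ball of radius d-1) are assumptions for this sketch, not details taken from the paper:

```python
def reverse_distance_transform(labels, h, w):
    """Recover a binary image from skeleton pixels labelled with their
    distance-transform values: each labelled pixel (y, x) with label d
    repaints the chessboard ball of radius d - 1 around itself."""
    img = [[0] * w for _ in range(h)]
    for (y, x), d in labels.items():
        for j in range(max(0, y - d + 1), min(h, y + d)):
            for i in range(max(0, x - d + 1), min(w, x + d)):
                img[j][i] = 1
    return img
```

Because the skeleton keeps every local maximum of the distance transform, the union of these repainted balls covers exactly the original object, which is the recoverability property the abstract relies on.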

Journal ArticleDOI
TL;DR: A procedure for extracting a set of textural features for characterizing small areas in radar images is presented and it is shown that these features can be used for classifying segments of radar images corresponding to different geological formations.
Abstract: Texture is an important spatial feature useful for identifying objects or regions of interest in an image. While textural features have been widely used in the analysis of a variety of photographic images, they have not been used for processing radar images. In this paper, we present a procedure for extracting a set of textural features for characterizing small areas in radar images and show that these features can be used for classifying segments of radar images corresponding to different geological formations.
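Textural features of the kind described are commonly derived from grey-level co-occurrence statistics. The sketch below computes one co-occurrence matrix and a contrast measure; the displacement, level count, and the particular feature are illustrative choices, not the paper's exact feature set:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence counts for one displacement (dx, dy):
    m[i][j] counts pixel pairs where level i is followed by level j."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                m[image[y][x]][image[ny][nx]] += 1
    return m

def contrast(m):
    """Haralick-style contrast: large when co-occurring levels differ a lot,
    i.e. for busy textures; zero for a uniform region."""
    total = sum(sum(row) for row in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total
```

Computing a handful of such measures over small windows gives a feature vector per image segment, which can then feed a classifier of the kind used for the geological formations in the paper.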

01 Nov 1981
TL;DR: A new technique for representing digital pictures that greatly simplifies the problem of finding the correspondence between components in the description of two pictures, based on a new class of reversible transforms (the Difference of Low Pass or DOLP transform).
Abstract: This dissertation presents a new technique for representing digital pictures. The principal benefit of this representation is that it greatly simplifies the problem of finding the correspondence between components in the description of two pictures. This representation technique is based on a new class of reversible transforms (the Difference of Low Pass or DOLP transform). A fast algorithm for computing the DOLP transform is then presented. This algorithm, called cascade convolution with expansion, is based on the auto-convolution scaling property of Gaussian functions. Techniques are then described for constructing a structural description of an image from its sampled DOLP transform. The symbols in this description are detected by detecting local peaks and ridges in each band-pass image, and among all of the band-pass images. This description has the form of a tree of peaks, with the peaks interconnected by chains of symbols from the ridges. The tree of peaks has a structure which can be matched despite changes in size, orientation, or position of the gray scale shape that is described.

Journal ArticleDOI
R.W. Ehrich1
01 Sep 1981

Patent
16 Apr 1981
TL;DR: In this article, a feature-enhanced image processing system was proposed by the addition of outputs of a high-pass filter acting as image-feature detector and a complementary low-pass filter.
Abstract: An electronic image processing system, for image enhancement and noise suppression, from signals representing an array of picture elements, or pels. The system is of the kind providing a feature-enhanced output by the addition of outputs of a high-pass filter acting as image-feature detector and a complementary low-pass filter. The low-pass filter also acts as an image-feature detector and includes a prefilter (130 and Figure 22) and a sub-sampling filter (132) based on a set of weighting patterns in the form of sparse matrices (Figure 23, Figure 26, Figure 27). The sub-sampling filter (Figure 29) in a bandpass channel (128, 170 and Figure 22) of the low-pass filter may comprise pairs of filters (Figure 26, Figure 27) acting as detectors of selected image features.

Patent
Ian Clive Walker1
01 Dec 1981
TL;DR: In this article, a video image creation system provides intensity or color data from one or more stores and the image is created under manual control which effectively defines the coordinates of the artist's implement at any given time.
Abstract: A video image creation system provides intensity or color data from one or more stores. The image is created under manual control which effectively defines the coordinates of the artist's implement at any given time. A processor receives the incoming image data and previously derived data from a frame store and modifies this data in dependence on a parameter available from another store. The created image can be viewed on a monitor. The parameter controls the contribution made from any adjacent, previously created, parts of the image and can be such as to simulate different pencil or brush shapes or types of paint for example. Additional facilities such as pressure sensitivity and blurring can be provided.