
Showing papers on "Image processing" published in 1983


Journal ArticleDOI
TL;DR: A technique for image encoding in which local operators of many scales but identical shape serve as the basis functions; the resulting code tends to enhance salient image features and is well suited for many image analysis tasks as well as for image compression.
Abstract: We describe a technique for image encoding in which local operators of many scales but identical shape serve as the basis functions. The representation differs from established techniques in that the code elements are localized in spatial frequency as well as in space. Pixel-to-pixel correlations are first removed by subtracting a low-pass filtered copy of the image from the image itself. The result is a net data compression since the difference, or error, image has low variance and entropy, and the low-pass filtered image may be represented at reduced sample density. Further data compression is achieved by quantizing the difference image. These steps are then repeated to compress the low-pass image. Iteration of the process at appropriately expanded scales generates a pyramid data structure. The encoding process is equivalent to sampling the image with Laplacian operators of many scales. Thus, the code tends to enhance salient image features. A further advantage of the present code is that it is well suited for many image analysis tasks as well as for image compression. Fast algorithms are described for coding and decoding.
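
As a rough illustration of the encode/decode loop described above, here is a minimal Python sketch (assuming numpy and scipy; a plain Gaussian low-pass and zero-fill upsampling stand in for the paper's REDUCE/EXPAND kernels, and the function names are ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def expand(img, shape, sigma=1.0):
    # Zero-fill upsample, then low-pass; the factor 4 restores the DC gain
    # lost because only 1 of every 4 samples is nonzero after zero-filling.
    up = np.zeros(shape)
    up[::2, ::2] = img
    return gaussian_filter(up, sigma) * 4.0

def build_pyramid(image, levels=4, sigma=1.0):
    pyr, cur = [], image.astype(float)
    for _ in range(levels):
        low = gaussian_filter(cur, sigma)[::2, ::2]   # reduced sample density
        pyr.append(cur - expand(low, cur.shape, sigma))  # low-variance error image
        cur = low
    pyr.append(cur)                                   # residual low-pass image
    return pyr

def reconstruct(pyr, sigma=1.0):
    # Decode by reversing the steps: expand the coarse image, add the detail.
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = expand(cur, detail.shape, sigma) + detail
    return cur
```

Without quantization of the difference images this round-trips exactly; compression comes from quantizing the low-entropy detail layers.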

6,975 citations



01 Jun 1983
TL;DR: This thesis is an attempt to formulate a set of edge detection criteria that capture as directly as possible the desirable properties of an edge operator.
Abstract: The problem of detecting intensity changes in images is canonical in vision. Edge detection operators are typically designed to optimally estimate first or second derivative over some (usually small) support. Other criteria such as output signal-to-noise ratio or bandwidth have also been argued for. This thesis is an attempt to formulate a set of edge detection criteria that capture as directly as possible the desirable properties of an edge operator. Variational techniques are used to find a solution over the space of all linear shift-invariant operators. The first criterion is that the detector have low probability of error, i.e., of failing to mark edges or falsely marking non-edges. The second is that the marked points should be as close as possible to the center of the true edge. The third criterion is that there should be low probability of more than one response to a single edge. The technique is used to find optimal operators for step edges and for extended impulse profiles (ridges or valleys in two dimensions). The extension of the one-dimensional operators to two dimensions is then discussed. The result is a set of operators of varying width, length and orientation. The problem of combining these outputs into a single description is discussed, and a set of heuristics for the integration are given. (Author)
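
The optimal step-edge operator arrived at this way is commonly approximated by the first derivative of a Gaussian. A small one-dimensional numpy sketch of that approximation (the threshold and sigma are arbitrary illustrative choices):

```python
import numpy as np

def d_gaussian(sigma, radius=None):
    # First derivative of a Gaussian: a standard approximation to the
    # operator that jointly optimizes detection and localization for steps.
    radius = radius or int(4 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return -x * g / (sigma**2 * g.sum())

# Noisy unit step at sample 100
rng = np.random.default_rng(0)
signal = np.where(np.arange(200) < 100, 0.0, 1.0) + rng.normal(0, 0.1, 200)

response = np.convolve(signal, d_gaussian(sigma=3.0), mode="same")
# Single-response criterion in miniature: keep only local maxima of |response|
r = np.abs(response)
edges = np.where((r[1:-1] > r[:-2]) & (r[1:-1] >= r[2:]) & (r[1:-1] > 0.1))[0] + 1
print("edge marked near sample(s):", edges)
```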

986 citations


Journal ArticleDOI
TL;DR: For the last five years, the University of Michigan and the Environmental Research Institute of Michigan have conducted a unique series of studies that involve the processing of biomedical imagery on a highly parallel computer specifically designed for image processing, finding that quantification by automated image analysis not only increases diagnostic accuracy but also provides significant data not obtainable from qualitative analysis alone.
Abstract: The spread of computers into medical fields has led to the automated processing of pictorial data. Here, a device called the cytocomputer searches for genetic mutations. A computer revolution has occurred not only in technical fields but also in medicine, where vast amounts of information must be processed quickly and accurately. Nowhere is the need for image processing techniques more apparent than in clinical diagnosis or mass screening applications where data take the form of digital images. New high-resolution scanning techniques such as computed tomography, nuclear magnetic resonance, positron emission tomography, and digital radiography produce images containing immense amounts of relevant information for medical analysis. But as these scanning techniques become more vital to clinical diagnosis, the work for specialists who must visually examine the resultant images increases. In many cases, quantitative data in the form of measurements and counts are needed to supplement nonimage patient data, and the manual extraction of these data is a time-consuming and costly step in an otherwise automated process. Furthermore, subtle variants of shade and shape can be the earliest clues to a diagnosis, placing the additional burden of complete thoroughness on the examining specialist. For the last five years, the University of Michigan and the Environmental Research Institute of Michigan have conducted a unique series of studies that involve the processing of biomedical imagery on a highly parallel computer specifically designed for image processing. System designers have incorporated the requirements of extracting a verifiable answer from an image in a reasonable time into an integrated approach to hardware and software design. The system includes a parallel pipelined image processor, called a cytocomputer, and a high-level language specifically created for image processing, C-3PL, the cytocomputer parallel picture processing language. These studies have involved a great many people from both the medical and engineering communities and have highlighted the interdisciplinary aspects of biomedical image processing. The methods have been tested in anatomy, developmental biology, nuclear medicine, cardiology, and transplant rejection. The general consensus is that quantification by automated image analysis not only increases diagnostic accuracy but also provides significant data not obtainable from qualitative analysis alone. One study in particular, on which descriptions in this article are based, involves a joint effort by the University of Michigan's human genetics and electrical and computer engineering departments and is supported by a grant from the National Cancer Institute. Basically, automated image analysis is being applied via sophisticated biochemical and computer techniques to derive …

875 citations


Journal ArticleDOI
TL;DR: It is demonstrated here that the detailed time dependence of the resulting trajectory of sample points determines the relative weight and accuracy with which image information at each spatial frequency is measured, establishing theoretical limitations on image quality achievable with a given imaging method.
Abstract: The fundamental operations of nuclear magnetic resonance (NMR) imaging can be formulated, for a large number of methods, as sampling the object distribution in the Fourier spatial-frequency domain, followed by processing the digitized data (often simply by Fourier transformation) to produce a digital image. In these methods, which include reconstruction from projections, Fourier imaging, spin-warp imaging, and echo-planar imaging, controllable gradient fields determine the points in the spatial-frequency domain which are sampled at any given time during the acquisition of data (the free induction decay, or FID). The detailed time dependence of the resulting trajectory of sample points (the k trajectory) determines the relative weight and accuracy with which image information at each spatial frequency is measured, establishing theoretical limitations on image quality achievable with a given imaging method. We demonstrate here that these considerations may be used to compare the theoretical capabilities of NMR imaging methods, and to derive new imaging methods with optimal theoretical imaging properties.
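
The sampling formulation referred to here is the standard k-space relation: ignoring relaxation, the FID signal is the Fourier transform of the spin density evaluated along a trajectory set by the gradient history,

```latex
s(t) = \int \rho(\mathbf{r})\, e^{-i\,\mathbf{k}(t)\cdot\mathbf{r}}\, d\mathbf{r},
\qquad
\mathbf{k}(t) = \gamma \int_0^{t} \mathbf{G}(\tau)\, d\tau ,
```

so the choice of gradient waveform G(t) fixes which spatial frequencies are sampled, in what order, and how densely, which is exactly what the k-trajectory analysis compares across imaging methods.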

429 citations



Journal ArticleDOI
01 Sep 1983

339 citations


Proceedings ArticleDOI
17 Mar 1983
TL;DR: This work models the speckle according to the exact physical process of coherent image formation and accurately represents the higher order statistical properties of speckle that are important to the restoration procedure.
Abstract: Speckle is a granular noise that inherently exists in all types of coherent imaging systems. The presence of speckle in an image reduces the resolution of the image and the detectability of the target. Many speckle reduction algorithms assume speckle noise is multiplicative. We instead model the speckle according to the exact physical process of coherent image formation. Thus, the model includes signal-dependent effects and accurately represents the higher order statistical properties of speckle that are important to the restoration procedure. Various adaptive restoration filters for intensity speckle images are derived based on different speckle model assumptions and a nonstationary image model. These filters respond adaptively to the signal-dependent speckle noise and the nonstationary statistics of the original image.
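
For contrast, here is a minimal numpy/scipy sketch of the baseline the paper improves upon: fully developed single-look intensity speckle under the multiplicative model, restored with a classic local-statistics (Lee-style) adaptive filter rather than the filters derived in the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(1.0, 2.0, 128), (128, 1))   # toy reflectance ramp
# Multiplicative model I = R * n: unit-mean exponential noise is the classic
# choice for fully developed single-look intensity speckle
observed = clean * rng.exponential(1.0, clean.shape)

def lee_filter(img, size=7, rel_var=1.0):
    # Local linear MMSE under I = R * n with E[n] = 1, Var[n] = rel_var;
    # the noise variance is signal-dependent: approximately rel_var * mean^2.
    mean = uniform_filter(img, size)
    var = uniform_filter(img**2, size) - mean**2
    noise_var = rel_var * mean**2
    # gain stays near 1 in detailed (nonstationary) areas, near 0 in flat ones
    gain = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + gain * (img - mean)

restored = lee_filter(observed)
```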

249 citations


Journal ArticleDOI
TL;DR: The effectiveness of the theory of fuzzy sets in detecting different regional boundaries of X-ray images is demonstrated and the system performance for different parameter conditions is illustrated by application to an image of a radiograph of the wrist.
Abstract: The effectiveness of the theory of fuzzy sets in detecting different regional boundaries of X-ray images is demonstrated. The algorithm includes a prior enhancement of the contrast among the regions (having small changes in gray level) using the contrast intensification (INT) operation along with smoothing in the fuzzy property plane before detecting its edges. The property plane is extracted from the spatial domain using S, π and (1 − π) functions and the fuzzifiers. Final edge detection is achieved using the max or min operator. The system performance for different parameter conditions is illustrated by application to an image of a radiograph of the wrist.
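
A small sketch of two of the fuzzy-plane ingredients named above, the S-function mapping and the INT operator (numpy; the crossover and band parameters are arbitrary illustrative choices):

```python
import numpy as np

def s_function(x, a, b, c):
    # Zadeh's S-function maps gray levels into [0, 1] membership values;
    # here b is taken as the crossover point (a + c) / 2.
    y = np.zeros_like(x, dtype=float)
    y[x >= c] = 1.0
    m1 = (x > a) & (x <= b)
    y[m1] = 2 * ((x[m1] - a) / (c - a)) ** 2
    m2 = (x > b) & (x < c)
    y[m2] = 1 - 2 * ((x[m2] - c) / (c - a)) ** 2
    return y

def intensify(mu, passes=1):
    # Contrast intensification (INT): pushes memberships away from 0.5,
    # raising contrast between regions in the property plane.
    for _ in range(passes):
        mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)
    return mu

gray = np.linspace(0, 255, 256).reshape(16, 16)   # stand-in for a radiograph
mu = intensify(s_function(gray, a=40.0, b=90.0, c=140.0), passes=2)
```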

209 citations


Journal ArticleDOI
TL;DR: It is shown that the problem of tracking a target having a fixed velocity can be cast into a general framework of three-dimensional filter theory and the design of these filters is presented, taking into account the target, clutter, and optical detection models.
Abstract: The standard approach to the detection of a stationary target immersed within an optically observed scene is to use integration to separate the target energy from the background clutter. When the target is nonstationary and moves with fixed velocity relative to the clutter, the procedure for integrating the target signal is no longer obvious. In this paper it is shown that the problem of tracking a target having a fixed velocity can be cast into a general framework of three-dimensional filter theory. From this point of view, the target detection problem reduces to the problem of finding optimal three-dimensional filters in the three-dimensional transform domain and processing the observed scene via this filtering. The design of these filters is presented, taking into account the target, clutter, and optical detection models. Performance is computed for a basic clutter model, showing the effective increase in detectability as a function of the target velocity. The three-dimensional transform approach is readily compatible with VLSI array processing technology.
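
The key spectral fact behind this framework is standard: a pattern s translating at constant velocity (v_x, v_y) has its three-dimensional spectrum confined to a plane through the origin, so a velocity-tuned filter need only pass that plane while whitening the clutter. One common whitened matched-filter form (our notation, consistent with the abstract but not taken from the paper):

```latex
f(x, y, t) = s(x - v_x t,\; y - v_y t)
\;\Longleftrightarrow\;
F(\omega_x, \omega_y, \omega_t)
  = S(\omega_x, \omega_y)\,\delta(\omega_t + v_x \omega_x + v_y \omega_y),
\qquad
H = \frac{F^{*}}{P_c},
```

where P_c is the clutter power spectrum; detectability grows with velocity because the target plane tilts away from the low-frequency region where static clutter concentrates.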

198 citations


Patent
Josef Bille
09 Dec 1983
TL;DR: In this article, the authors used a polygonal scanner and an active image element that cooperates with a sensor to form a closed loop circuit for optical focusing of the image under adaptive control.
Abstract: In contrast to techniques currently used such as the fundus camera, the strain to which the patient is exposed during the generation of an image of the ocular fundus is reduced by moving a laser beam across the retina in the form of a scanning raster and by directing the light reflected from the retina through a photoelectric receiver to generate a television image. To achieve very high resolution, the scanning operation provides, at least in the image parts of interest, signals representative of a greater number of image points than the television standard provides. These image parts are selected sufficiently large to have each point of the raster assigned a separate image signal value when displayed on the television screen. During the time the scanning beam moves within the frame of a selected image part, its intensity is increased for improved image contrast. The apparatus for implementing this image-forming method utilizes at least one polygonal scanner and includes an active image element that cooperates with a sensor to form a closed loop circuit for optical focusing of the image under adaptive control. An electronic shutter is provided to control the intensity of the scanning beam. In addition to the formation of images, the apparatus also permits a measurement to be made of the spatial blood flow distribution in the fundus and of the degree of oxygen saturation of the blood in the retina of the eye under observation.

Patent
Carl P. Graf, Kim Fairchild, Karl M. Fant, George W. Rusler, Michael O. Schroeder
29 Jul 1983
TL;DR: In this paper, a computer controlled imaging system involving a digital image processing and display system which has the ability to compose and construct a display scene from a library of images with sufficient processing speed to permit real-time or near real time analysis of the images by a human operator or a hardware/software equivalent thereof is described.
Abstract: The disclosure relates to a computer controlled imaging system involving a digital image processing and display system which has the ability to compose and construct a display scene from a library of images with sufficient processing speed to permit real-time or near-real-time analysis of the images by a human operator or a hardware/software equivalent thereof.

Journal ArticleDOI
TL;DR: The maximum entropy algorithm revealed detail in the images that is known to exist but is not seen in the linear restorations; it also handles the reconstruction of images from sparse data, which no comparable linear algorithm can.
Abstract: A powerful iterative algorithm has been developed which produces a maximum entropy solution to the image restoration problem. It has been applied to images containing up to 1024 × 1024 pixels and has been implemented on both mini and mainframe computers. Unlike some methods, the algorithm does not require the point-spread function to have any special symmetry properties. Examples are given of the application of this method both to artificially and to experimentally blurred photographs, and also to an X-ray radiograph, blurred by the size of the radiation source. For comparison, restorations of some of these images by the linear method of constrained least squares are also shown. The maximum entropy algorithm revealed detail in the images that is not seen in the linear restorations but is known to exist. Maximum entropy is also applied to the reconstruction of images from sparse data, which no comparable linear algorithm can handle.
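
As a rough sketch of what an iterative maximum entropy restoration can look like, here is a generic positivity-preserving gradient scheme on an entropy-regularized objective. This is emphatically not the paper's algorithm; psf_fft, lam, and step are illustrative assumptions:

```python
import numpy as np

def maxent_restore(data, psf_fft, n_iter=200, lam=0.05, step=0.5):
    """Gradient ascent on Q(f) = -sum(f log f) - lam * ||A f - d||^2,
    where A is convolution with the blur (psf_fft: FFT of the padded,
    centered point-spread function). Generic sketch only."""
    f = np.full(data.shape, float(data.mean()))   # flat, maximum-entropy start
    for _ in range(n_iter):
        blurred = np.real(np.fft.ifft2(psf_fft * np.fft.fft2(f)))
        residual = blurred - data
        # gradient of the data term: 2 * A^T (A f - d)
        grad_chi2 = 2 * np.real(np.fft.ifft2(np.conj(psf_fft) * np.fft.fft2(residual)))
        grad = -(np.log(np.maximum(f, 1e-12)) + 1.0) - lam * grad_chi2
        # multiplicative exponentiated-gradient update keeps f positive
        f = np.maximum(f * np.exp(step * grad), 1e-12)
    return f
```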

Journal ArticleDOI
TL;DR: The application of a general-purpose image-processing computer system to automatic fringe analysis is presented and three areas of application have been examined where the use of a system based on a random access frame store has enabled a processing algorithm to be developed to suit a specific problem.
Abstract: The application of a general-purpose image-processing computer system to automatic fringe analysis is presented. Three areas of application have been examined where the use of a system based on a random access frame store has enabled a processing algorithm to be developed to suit a specific problem. Furthermore, it has enabled automatic analysis to be performed with complex and noisy data. The applications considered are strain measurement by speckle interferometry, position location in three axes, and fault detection in holographic nondestructive testing. A brief description of each problem is presented, followed by a description of the processing algorithm, results, and timings.

Journal ArticleDOI
TL;DR: A new form of image estimator, which takes account of linear features, is derived using a signal equivalent formulation and shows that the method can improve the quality of noisy images even when the signal-to-noise ratio is very low.
Abstract: A new form of image estimator, which takes account of linear features, is derived using a signal equivalent formulation. The estimator is shown to be a nonstationary linear combination of three stationary estimators. The relation of the estimator to human visual physiology is discussed. A method for estimating the nonstationary control information is described and shown to be effective when the estimation is made from noisy data. A suboptimal approach which is computationally less demanding is presented and used in the restoration of a variety of images corrupted by additive white noise. The results show that the method can improve the quality of noisy images even when the signal-to-noise ratio is very low.

Journal ArticleDOI
TL;DR: Three algorithms are discussed for constructing an improved composite image by setting criteria to select the in-focus segments of each image sample.
Abstract: Improvement in the depth of field is demonstrated by properly processing a succession of image samples. Due to the limited depth of field each image sample has in-focus as well as out-of-focus segments. By setting criteria for selecting the in-focus segments, an improved composite image is formed. Three algorithms for implementing this construction are discussed.
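
One plausible selection rule of the kind the paper compares: pick, per pixel, the frame with the most local high-frequency energy. A minimal numpy/scipy sketch (the sharpness criterion and window size are our choices):

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(frames, window=9):
    """Composite a focus series: at each pixel, take the frame whose local
    Laplacian energy (a simple in-focus measure) is largest."""
    stack = np.stack([f.astype(float) for f in frames])
    sharpness = np.stack([uniform_filter(laplace(f) ** 2, window) for f in stack])
    best = np.argmax(sharpness, axis=0)            # index of sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```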

BookDOI
01 Jan 1983
TL;DR: Topics reviewed include the differential method for image motion estimation; edges in visual scenes and sequences with application to filtering, sampling, and adaptive DPCM coding; and scene analysis and industrial applications.
Abstract: I Overview.- Overview on Image Sequence Analysis.- Dynamic Scene Analysis.- II Image Sequence Coding.- Recursive Motion Compensation: A Review.- The Differential Method for Image Motion Estimation.- Edges in Visual Scenes and Sequences: Application to Filtering, Sampling and Adaptive DPCM Coding.- Movement-Compensated Interframe Prediction for NTSC Color TV Signals.- Coding of Colour TV Signals with 34 MBit/s Transmission Rate.- Analysis of Different Displacement Estimation Algorithms for Digital Television Signals.- An Adaptive Gradient Approach to Displacement Estimation.- Motion Parameter Estimation in TV-Pictures.- Image Sequence Coding Using Scene Analysis and Spatio-Temporal Interpolation.- Two Motion Adaptive Interframe Coding Techniques for Air to Ground Video Signals.- Motion Estimation in a Sequence of Television Pictures.- Comparative Study Between Intra- and Inter-Frame Prediction Schemes.- A Narrow-Band Video Communication System for the Transmission of Sign Language Over Ordinary Telephone Lines.- Classification and Block Coding of the Frame Difference Signal.- Histograms of Image Sequence Spectra.- III Scene Analysis and Industrial Applications.- Determining 3-D Motion and Structure of a Rigid Body Using Straight Line Correspondences.- Comparison of Feature Operators for use in Matching Image Pairs.- Displacement Estimation for Objects on Moving Background.- Linear Filtering in Image Sequences.- Photometric Stereo For Moving Objects.- On the Selection of Critical Points and Local Curvature Extrema of Region Boundaries for Interframe Matching.- Image Segmentation Considering Properties of the Human Visual System.- A Fast Edge Detection Algorithm Matching Visual Contour Perception.- Image Sequence Analysis for Target Tracking.- Track Acquisition of Sub-Pixel Targets.- A Pre-Processor for the Real-Time Interpretation of Dynamic Scenes.- Control of an Unstable Plant by Computer Vision.- Real-time Processing of Rasterscan Images.- 3-D Kalman Filtering of Image Sequences.- Atmospheric Disturbances Tracking in Satellite Images.- Aspects of Dynamic Scene Analysis in Meteorology.- IV Biomedical Applications.- Processing and Analysis of Radiographic Image Sequences.- Image Sequence Processing and Pattern Recognition of Biomedical Pictures.- A Rule-Based System for Characterizing Blood Cell Motion.- Three Dimensional Imaging from Computed Tomograms.- Model Based Analysis of Scintigraphic Image Sequences of the Human Heart.

Journal ArticleDOI
TL;DR: It is argued in this paper that the additional information present in the three-dimensional data is useful for improving reconstructions of images.
Abstract: List-mode data collected in a positron-emission tomography system having time-of-flight measurements are three dimensional, but all algorithms which have been published to date operate on two-dimensional data derived from these three-dimensional data. We argue in this paper that the additional information present in the three-dimensional data is useful for improving reconstructions of images.

Journal ArticleDOI
TL;DR: A tree data structure for representing multidimensional digital binary images and an algorithm for constructing the tree of a d-dimensional binary image from the trees of its (d − 1)-dimensional cross sections are given.
Abstract: A tree data structure for representing multidimensional digital binary images is described. The method is based on recursive subdivision of the d-dimensional space into 2^d hyperoctants. An algorithm for constructing the tree of a d-dimensional binary image from the trees of its (d − 1)-dimensional cross sections is given. The computational advantages of the data structure and the algorithm are demonstrated both theoretically and in application to a three-dimensional reconstruction of a human brain.
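
A minimal sketch of the recursive 2^d subdivision (Python; this builds the tree directly from the d-dimensional image rather than from its (d − 1)-dimensional cross sections as in the paper's algorithm, and assumes power-of-two side lengths):

```python
import numpy as np

def build_tree(a):
    """A node is 0 or 1 if its block is uniform, else a list of 2^d child
    nodes, one per hyperoctant (a hypothetical minimal representation)."""
    if a.min() == a.max():
        return int(a.flat[0])                     # uniform block -> leaf
    mids = [s // 2 for s in a.shape]              # split every axis in half
    children = []
    for corner in range(2 ** a.ndim):             # 2^d hyperoctants
        slices = tuple(
            slice(0, m) if (corner >> i) & 1 == 0 else slice(m, s)
            for i, (m, s) in enumerate(zip(mids, a.shape))
        )
        children.append(build_tree(a[slices]))
    return children

img = np.zeros((8, 8, 8), dtype=int)
img[:4, :4, :4] = 1
tree = build_tree(img)    # eight leaves: the first octant is 1, the rest 0
```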

Journal ArticleDOI
TL;DR: Adherence to design constraints developed here is of particular importance where colors are to be judged when the original is not directly accessible to the observer as, for example, when it is on another planet.
Abstract: The problem of producing a colored image from a colored original is analyzed. Conditions are determined for the production of an image in which the colors cannot be distinguished from those in the original by a human observer. If the final image is produced by superposition of controlled amounts of colored lights, only a simple linear transform need be applied to the outputs of the image sensors to produce the control inputs required for the image generators. In systems which depend instead on control of the concentration or the fractional area covered by colored dyes, a more difficult computation is called for. This calculation may for practical purposes be expressed in table lookup form. The conditions for exact reproduction of colored images should prove useful in the design and analysis of image processing systems whose final output is intended for human viewing. Judging by the design of some existing systems, these rules are not generally known or adhered to. Modern computational techniques make it practical to tackle this problem now. Adherence to design constraints developed here is of particular importance where colors are to be judged when the original is not directly accessible to the observer as, for example, when it is on another planet.
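
The additive case reduces to one matrix solve: if the columns of a matrix P hold the tristimulus values of the three display primaries, the control inputs follow by inverting P. A tiny numpy sketch with an sRGB-like P assumed purely for illustration:

```python
import numpy as np

# Columns: tristimulus (XYZ) values of the three display primaries.
# These numbers are roughly those of sRGB primaries, not from the paper.
P = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def control_inputs(xyz):
    # Superposition of lights is linear, so matching a measured color only
    # requires solving P @ c = xyz for the primary intensities c.
    return np.linalg.solve(P, xyz)

print(control_inputs(np.array([0.5, 0.5, 0.5])))
```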

Journal ArticleDOI
TL;DR: This procedure, which places no restrictions on the direction of motion or on the location and shape of environmental objects, has been applied successfully to real-world image sequences from several different task domains.
Abstract: A procedure for processing real-world image sequences produced by relative translational motion between a sensor and environmental objects is presented. In this procedure, the determination of the direction of sensor translation is effectively combined with the determination of the displacement of image features and environmental depth. It places no restrictions on the direction of motion or on the location and shape of environmental objects. It has been applied successfully to real-world image sequences from several different task domains. The processing consists of two basic steps: feature extraction and search. The feature extraction process picks out small image areas which may correspond to distinguishing parts of environmental objects. The direction of translational motion is then found by a search which determines the image displacement paths along which a measure of feature mismatch is minimized for a set of features. The correct direction of translation will minimize this error measure and also determine the corresponding image displacement paths for which the extracted features match well.

Journal ArticleDOI
TL;DR: An objective measure, the mean-square error, is obtained for each region of the partitioned image to evaluate the performance of the smoothing scheme at the corresponding level of spatial activity content.
Abstract: A quantitative evaluation of several edge-preserving noise-smoothing techniques is presented. All of the techniques evaluated are devised to preserve edge sharpness while achieving some degree of noise cleaning. They are based on local operations on neighboring points and all of them can be iterated. They are unweighted neighbor averaging (AVE), K-nearest neighbor averaging (KAVE), the edge and line weights method (EDLN), gradient inverse weighted smoothing (GRADIN), maximum homogeneity smoothing (MAXH), slope facet model smoothing (FACET), and median filtering (MEDIAN). The evaluation procedure involves two steps. First, the image is partitioned into regions based on the amount of spatial activity in a neighborhood of a pixel, where spatial activity is defined as local gradient. In the second part of the procedure an objective measure, the mean-square error, for each region of the partitioned image is obtained to evaluate the performance of the smoothing scheme at the corresponding level of spatial activity content. This evaluation procedure provides a convenient way to compare both the edge-preserving and noise-smoothing abilities of different schemes. The smoothing schemes were tested on a specially generated image with varying degrees of added noise and different edge slopes. The results of the comparison study are presented.
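
The two-step evaluation procedure is easy to reproduce in outline. A sketch assuming numpy/scipy, with the activity partition taken from the gradient of the noise-free test image and median filtering standing in for the seven schemes compared:

```python
import numpy as np
from scipy.ndimage import sobel, median_filter

def activity_partition(clean, n_levels=4):
    # Step 1: label each pixel by local spatial activity (gradient magnitude
    # of the noise-free reference), quantized into n_levels bands.
    g = np.hypot(sobel(clean, 0), sobel(clean, 1))
    edges = np.quantile(g, np.linspace(0, 1, n_levels + 1))
    return np.digitize(g, edges[1:-1])            # labels 0 .. n_levels-1

def per_region_mse(clean, smoothed, labels, n_levels=4):
    # Step 2: mean-square error within each activity band, so edge
    # preservation and noise cleaning can be judged separately.
    return [float(np.mean((clean[labels == k] - smoothed[labels == k]) ** 2))
            for k in range(n_levels)]

# usage: labels = activity_partition(clean)
#        per_region_mse(clean, median_filter(noisy, 3), labels)
```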

01 Jun 1983
TL;DR: An implementation of hierarchical scene matching in the VISIONS image processing cone, a pyramidal processing architecture; hierarchical correlation provides both a cheaper matching algorithm and a coarse-to-fine matching strategy that overcomes textural problems by matching on gross image structures first.
Abstract: In this paper the authors present an implementation of hierarchical scene matching in the VISIONS image processing cone, a pyramidal processing architecture. The problem of scene matching is common to many applications in machine vision, including registration, motion detection, and stereo vision. Scene matching by feature correlation can solve this problem but suffers from computational expense and failure in highly textured images. Hierarchical correlation provides both a cheaper matching algorithm and a coarse-to-fine matching strategy that overcomes textural problems by matching on gross image structures first. These methods fit naturally into the processing cone or pyramid architectures that have been proposed for image processing. Presented is a discussion of the architecture of the processing cone, the construction of image pyramids, and the use of these pyramids in hierarchical correlation. A set of experiments illustrates the operation of these ideas.
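
A compact sketch of coarse-to-fine correlation matching, assuming numpy/scipy; the pyramid construction and search-window logic are illustrative, not the VISIONS cone implementation, and they assume the template is well inside the image at every level:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pyramid(img, levels):
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(gaussian_filter(pyr[-1], 1.0)[::2, ::2])
    return pyr                       # pyr[0] finest ... pyr[-1] coarsest

def coarse_to_fine_match(image, template, levels=3, radius=2):
    """Exhaustive normalized correlation only at the coarsest level, then
    refinement within a small window at each finer level."""
    ip, tp = pyramid(image, levels), pyramid(template, levels)
    pos = None
    for lvl in range(levels - 1, -1, -1):
        im, t = ip[lvl], tp[lvl]
        th, tw = t.shape
        tn = (t - t.mean()) / (t.std() + 1e-9)
        if pos is None:              # coarsest level: search everywhere
            cand = [(y, x) for y in range(im.shape[0] - th + 1)
                           for x in range(im.shape[1] - tw + 1)]
        else:                        # project estimate onto the finer grid
            cy, cx = pos[0] * 2, pos[1] * 2
            cand = [(y, x)
                    for y in range(max(cy - radius, 0),
                                   min(cy + radius + 1, im.shape[0] - th + 1))
                    for x in range(max(cx - radius, 0),
                                   min(cx + radius + 1, im.shape[1] - tw + 1))]
        def score(p):
            w = im[p[0]:p[0] + th, p[1]:p[1] + tw]
            return float(np.sum(tn * (w - w.mean()) / (w.std() + 1e-9)))
        pos = max(cand, key=score)
    return pos                       # (row, col) at full resolution
```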

Journal ArticleDOI
TL;DR: The basic techniques of image processing using two-dimensional arrays of processors, or cellular arrays, are reviewed, and various extensions and generalizations of the cellular array concept, together with their possible implementations and applications, are discussed.
Abstract: Array computers are not new to image processing, but more refined techniques have led to broader implementations. We can now construct arrays with up to 128 x 128 processors. Nearly 25 years ago, Unger suggested a two-dimensional array of processing elements as a natural computer architecture for image processing and recognition. Ideally, in this approach, each processor is responsible for one pixel, or one element of the image, with neighboring processors responsible for neighboring pixels. Thus, using hardwired communication between neighboring processors, local operations can be performed on the image, or local image features can be detected in parallel, with every processor simultaneously accessing its neighbors and computing the appropriate function for its neighborhood. Over the last two decades, several machines embodying this concept have been constructed. The Illiac III used a 36 x 36 processor array (the Illiac IV used only an 8 x 8 array) to analyze "events" in nuclear bubble-chamber images by examining 36 x 36 "windows" of the images. In later machines, such as the CLIP, DAP, and MPP, arrays of up to 128 x 128 processors were used and were applied blockwise to larger images. This article reviews the basic techniques of image processing using two-dimensional arrays of processors, or cellular arrays. It also discusses various extensions and generalizations of the cellular array concept and their possible implementations and applications. The term cellular array is used because these machines can be regarded as generalizations of bounded cellular automata, which have been studied extensively on a theoretical level. The relative merits of cellular arrays for image processing as compared to other architectures are not discussed here, but they have been studied extensively for such purposes, on levels from theory to hardware. A cellular array (Figure 1) is a two-dimensional array of processors, or cells, usually rectangular, each of which can directly communicate with its neighbors in the array. Here, for simplicity, I assume that each cell is connected to its four horizontal and vertical neighbors. Each cell on the borders of the array then has only three neighbors, and each cell in the four corners of the array has only two. I also assume that a cell can distinguish its neighbors; i.e., it can send a different message to each neighbor, and when it receives messages from them, it knows which message came from which neighbor. To use a cellular array for image processing, we give …
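
The neighborhood communication pattern is easy to emulate in a data-parallel style. A toy numpy sketch of one synchronous step in which every cell combines its four horizontal/vertical neighbors (np.roll wraps at the borders, unlike the bounded arrays described, so border behavior is only approximate):

```python
import numpy as np

def parallel_step(grid):
    # One lock-step update: every "cell" reads its four neighbors and
    # updates simultaneously; here the local operation is binary dilation
    # (a cell becomes 1 if it or any 4-neighbor is 1).
    up    = np.roll(grid, -1, axis=0)
    down  = np.roll(grid,  1, axis=0)
    left  = np.roll(grid, -1, axis=1)
    right = np.roll(grid,  1, axis=1)
    return grid | up | down | left | right

rng = np.random.default_rng(0)
grid = (rng.random((16, 16)) > 0.9).astype(np.uint8)
grown = parallel_step(grid)      # sparse points grow into crosses
```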

Journal ArticleDOI
01 May 1983
TL;DR: In this article, the authors investigate a class of texture features, introduced by Laws, based on average degrees of match of the pixel neighbourhoods with a set of standard masks; these yield better texture classification than standard features based on pairs of pixels.
Abstract: Laws has introduced a class of texture features based on average degrees of match of the pixel neighbourhoods with a set of standard masks. These features yield better texture classification than standard features based on pairs of pixels. Simplifications of these features are investigated. Their performance is not greatly affected by their exact form and also appears to remain the same if only local match maxima are used. An alternative definition of such features is also presented, based on sums and differences of Gaussian convolutions.
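
For reference, a minimal numpy/scipy rendering of Laws-style features: 2-D masks formed as outer products of the standard 1-D level/edge/spot vectors, followed by local average magnitude ("texture energy"); the window size is arbitrary here:

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

L5 = np.array([ 1,  4, 6,  4,  1], float)   # level
E5 = np.array([-1, -2, 0,  2,  1], float)   # edge
S5 = np.array([-1,  0, 2,  0, -1], float)   # spot

def laws_features(img, window=15):
    # Each 2-D mask is an outer product of two 1-D vectors; the feature is
    # the local average magnitude of the filtered image.
    feats = {}
    for n1, v1 in [("L5", L5), ("E5", E5), ("S5", S5)]:
        for n2, v2 in [("L5", L5), ("E5", E5), ("S5", S5)]:
            mask = np.outer(v1, v2)
            response = convolve(img.astype(float), mask)
            feats[n1 + n2] = uniform_filter(np.abs(response), window)
    return feats
```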

Journal ArticleDOI
TL;DR: The results make it theoretically possible to identify extremal edges of an intensity function f(x, y) of two variables by considering the gradient vector field V = ∇f; the properties needed are established and then used in three ways.
Abstract: We use rotational and curvature properties of vector fields to identify critical features of an image. Using vector analysis and differential geometry, we establish the properties needed, and then use these properties in three ways. First, our results make it theoretically possible to identify extremal edges of an intensity function f(x, y) of two variables by considering the gradient vector field V = ∇f. There is also enough information in ∇f to find regions of high curvature (i.e., high curvature of the level paths of f). For color images, we use the vector field V = (I, Q). In application, the image is partitioned into a grid of squares. On the boundary of each square, V/|V| is sampled, and these unit vectors are used as the tangents of a curve γ. The rotation number (or topological degree) ρ(γ) and the average curvature ∫|κ| are computed for each square. Analysis of these numbers yields information on edges and curvature. Experimental results from both simulated and real data are described.
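
The rotation number for one grid square can be computed directly from the sampled gradient directions. A short numpy sketch using our discretization (sum of wrapped angle increments around the boundary loop, divided by 2π):

```python
import numpy as np

def rotation_number(gx, gy):
    """Rotation number of the normalized field (gx, gy) around the border
    of a square patch; a nonzero value flags a critical point inside."""
    h, w = gx.shape
    # boundary pixels traversed once around the loop
    loop = ([(h - 1, j) for j in range(w)] +
            [(i, w - 1) for i in range(h - 2, -1, -1)] +
            [(0, j) for j in range(w - 2, -1, -1)] +
            [(i, 0) for i in range(1, h - 1)])
    ang = np.array([np.arctan2(gy[p], gx[p]) for p in loop])
    d = np.diff(np.append(ang, ang[0]))           # close the loop
    d = (d + np.pi) % (2 * np.pi) - np.pi         # wrap increments to (-pi, pi]
    return round(float(d.sum() / (2 * np.pi)))
```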

Patent
12 Oct 1983
TL;DR: In this article, a completely electronic color camera for electronically recording an image of an object is described, where the image signals from a solid-state image sensor having color filters are converted to digital form so that the digitized image signals are applied to the same number of coding compression circuits, e.g. differential pulse code modulation circuits, as the kinds of colors of the color filters per scanning line, are subjected to a color separation coding processing and are then stored directly in memory without performing any additional processing.
Abstract: A completely electronic color camera for electronically recording an image of an object. The image signals from a solid-state image sensor having color filters are converted to digital form so that the digitized image signals are applied to the same number of coding compression circuits, e.g. differential pulse code modulation circuits, as the kinds of colors of the color filters per scanning line, are subjected to a color separation coding processing and are then stored directly in a memory without performing any additional processing. Error detecting and correcting circuits are provided. Error detecting and correcting data and lens setting data are added to the image data and recorded. The camera is reduced in size, is low in power consumption and requires a reduced memory capacity per frame.
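
A toy sketch of the per-scanline DPCM coding named above (plain Python; previous-sample prediction and a uniform quantizer are our simplifying choices, and a real camera would run one such coder per filter color):

```python
import numpy as np

def dpcm_encode(line, q=8):
    # Closed-loop DPCM: quantize the difference between each sample and the
    # prediction, tracking the decoder's reconstruction so errors don't accumulate.
    codes, pred = [], 0
    for s in line:
        e = int(s) - pred
        c = int(round(e / q))       # uniform quantizer (illustrative)
        codes.append(c)
        pred += c * q               # mirror the decoder's state
    return codes

def dpcm_decode(codes, q=8):
    out, pred = [], 0
    for c in codes:
        pred += c * q
        out.append(pred)
    return np.array(out)

line = [10, 12, 15, 40, 42, 41]
print(dpcm_decode(dpcm_encode(line)))   # within +/- q/2 of the input samples
```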

Proceedings ArticleDOI
01 Jan 1983
Abstract: SeaMARC II (Sea Mapping and Remote Characterization) is both a conventional side-scan sonar and a bathymetric mapping system. It is a 12 kHz, long-range, shallow-towed, high-speed system that produces an ocean floor image up to 10 km wide. (By contrast, SeaMARC I is a 30 kHz, deep-towed, slow-speed, side-scan-only system.) Bathymetry is determined by measuring, 4000 times each second, the magnitude and arrival angle of narrow-band acoustic energy returned from each side of the ship's track. Off-line processing of this series of vectors produces a seafloor bathymetric map along the track with a width 3.4 times the water depth. A 10-knot tow capability allows more than 4000 square kilometers to be surveyed each day. In practice, the average returns from a relatively flat part of the survey area are used to generate a function relating acoustic arrival angle to measured electrical angle. This procedure corrects for local sound speed structure. With this function, the measurements from returns with adequately large magnitudes are converted to acoustic arrival angle. These angles are corrected for tow body attitude and, with the arrival times (range), are converted to horizontal (X) and vertical (Z) distances from the tow body to the reflector. Side-scan images are a combination of the small-scale reflecting properties of the bottom (micro-reflectivity) and the specular reflections from bathymetric slopes (macro-reflectivity). SeaMARC II has the unique ability to acquire synoptic data that will allow the two effects to be separated. This unique combination also allows more accurate and rapid understanding of bottom character than with either output by itself.
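
The range-and-angle-to-position step is simple trigonometry: with two-way travel time t, sound speed c, and arrival angle θ measured from the vertical after attitude correction (our symbols, not the paper's),

```latex
r = \frac{c\,t}{2}, \qquad X = r\sin\theta, \qquad Z = r\cos\theta .
```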

Proceedings ArticleDOI
13 Jun 1983
TL;DR: The architecture of a parallel computer called a pyramid machine, which combines features of tree machines and of mesh-connected parallel computers and can rapidly perform both local and global processing, is presented.
Abstract: This paper presents the architecture of a parallel computer called a pyramid machine. The system consists of a pyramidal array of processing elements, each of which executes the instructions broadcast by a controller. Each processing element except those on the outside of the array is directly connected to thirteen neighboring elements: eight on the same level, four on the next finer level and one on the next coarser level. The architecture combines features of tree machines and features of mesh-connected parallel computers. As a result it can rapidly perform both local and global processing. The main areas of application are image processing, graphics and spatial problem solving. The motivation, basic structure, and applications of the system are discussed.

Journal ArticleDOI
C. B. Chittineni
TL;DR: In this article, the multidimensional greytone surface is expanded as a weighted sum of basis functions, and expressions for the coefficients of the fitted quadratic and cubic surfaces are obtained when there is a rotation in the coordinate system.
Abstract: Detection of edges and lines in multidimensional data is an important operation in a number of image processing applications. The multidimensional picture function is a sampling of the underlying reflectance function of the objects in the scene, with noise added to the true function values. Edges and lines refer to places in the image where there are jumps in the values of the function or its derivatives. The multidimensional greytone surface is expanded as a weighted sum of basis functions. Using multidimensional orthogonal polynomial basis functions, expressions are developed for the coefficients of the fitted quadratic and cubic surfaces. The parameters of the fitted surfaces are obtained when there is a rotation in the coordinate system. Assuming the noise is Gaussian, statistical tests are devised for the detection of significant edges and lines. Directional isotropy properties of the fitted surfaces are described. For computational efficiency, recursive relations are obtained between the parameters of the fitted surfaces of successive neighborhoods. Furthermore, experimental results are presented by applying the developed theory to multiband Landsat imagery data.
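
A minimal facet-style sketch of the surface-fitting step (numpy, using a plain monomial basis rather than the paper's orthogonal polynomials, which are what make the coefficients decouple and the neighborhood recursions cheap):

```python
import numpy as np

def fit_quadratic(patch):
    """Least-squares quadratic surface z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    over a small odd-sized neighborhood; the gradient (b, c) at the center
    can feed a statistical edge test under a Gaussian noise assumption."""
    n = patch.shape[0] // 2
    y, x = np.mgrid[-n:n + 1, -n:n + 1]
    A = np.stack([np.ones_like(x), x, y, x**2, x * y, y**2], -1).reshape(-1, 6)
    coef, *_ = np.linalg.lstsq(A, patch.astype(float).ravel(), rcond=None)
    return coef   # coef[1], coef[2] estimate the first derivatives at the center

patch = np.add.outer(np.arange(5), 2 * np.arange(5)).astype(float)  # planar toy data
print(fit_quadratic(patch)[1:3])   # approximately (2, 1): d/dx, d/dy
```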