
Showing papers on "Orientation (computer vision) published in 1988"


Journal ArticleDOI
TL;DR: An algorithm for automated text string separation that is relatively independent of changes in text font style and size and of string orientation is described; in evaluation it showed superior performance compared to other techniques.
Abstract: The development and implementation of an algorithm for automated text string separation that is relatively independent of changes in text font style and size and of string orientation are described. It is intended for use in an automated system for document analysis. The principal parts of the algorithm are the generation of connected components and the application of the Hough transform in order to group components into logical character strings that can then be separated from the graphics. The algorithm outputs two images, one containing text strings and the other graphics. These images can then be processed by suitable character recognition and graphics recognition systems. The performance of the algorithm, both in terms of its effectiveness and computational efficiency, was evaluated using several test images and showed superior performance compared to other techniques.
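The grouping step can be sketched with a point Hough transform: each component centroid votes for the (θ, ρ) lines passing through it, and centroids that share a dense accumulator cell are collinear and form a candidate string. A minimal sketch assuming centroids are already extracted; the paper's full algorithm also uses component size and spacing.

```python
import math
from collections import defaultdict

def hough_group(centroids, n_theta=180, rho_step=5.0):
    """Group 2-D component centroids into collinear strings with a
    point Hough transform (a simplification of the paper's method)."""
    votes = defaultdict(list)
    for (x, y) in centroids:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(t, round(rho / rho_step))].append((x, y))
    # The densest accumulator cell holds the longest candidate string.
    return max(votes.values(), key=len)

# Three collinear "characters" plus one isolated graphics blob:
pts = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0), (5.0, 40.0)]
line = hough_group(pts)
```

The returned group contains exactly the three collinear centroids; a real system would then remove them from the image as text and leave the remaining components as graphics.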

664 citations


Journal ArticleDOI
R.K. Lenz1, Roger Y. Tsai1
TL;DR: Three groups of techniques for center calibration are presented: Group I requires using a laser and a four-degree-of-freedom adjustment of its orientation, but is simplest in concept and is accurate and reproducible; Group II is simple to perform, but is less accurate than the other two; and the most general, Group III, is accurate, but requires a good calibration plate and accurate image feature extraction of calibration points.
Abstract: Techniques are described for calibrating certain intrinsic camera parameters for machine vision. The parameters to be calibrated are the horizontal scale factor and the image center. The scale factor calibration uses a one-dimensional fast Fourier transform and is accurate and efficient. It also permits the use of only one coplanar set of calibration points for general camera calibration. Three groups of techniques for center calibration are presented: Group I requires using a laser and a four-degree-of-freedom adjustment of its orientation, but is simplest in concept and is accurate and reproducible; Group II is simple to perform, but is less accurate than the other two; and the most general, Group III, is accurate and efficient, but requires a good calibration plate and accurate image feature extraction of calibration points. Group II is recommended most highly for machine vision applications. Results of experiments are presented and compared with theoretical predictions. Accuracy and reproducibility of the calibrated parameters are reported, as well as the improvement in actual 3-D measurement due to center calibration.
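The scale-factor idea can be illustrated with a toy 1-D FFT measurement: image a grating of known spatial frequency, locate the spectral peak of a scan line, and take the ratio of nominal to observed frequency. This is only a sketch of the principle; the paper's actual pattern and procedure are more involved.

```python
import numpy as np

def dominant_frequency(scanline):
    """Index of the strongest non-DC component of a 1-D FFT."""
    spectrum = np.abs(np.fft.rfft(scanline))
    spectrum[0] = 0.0                      # ignore the DC term
    return int(np.argmax(spectrum))

# Synthetic scan line across a grating with 16 cycles. In this toy
# model a horizontal scale error would shift the observed peak, and
# the nominal/observed ratio recovers the scale factor.
n, true_cycles = 512, 16
x = np.arange(n)
scanline = np.sin(2 * np.pi * true_cycles * x / n)
observed = dominant_frequency(scanline)
scale_factor = true_cycles / observed      # 1.0 when no scale error
```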

650 citations


Journal ArticleDOI
TL;DR: An approach to illumination and imaging of specular surfaces that yields three-dimensional shape information is described and the proposed structured highlight techniques are promising for many industrial tasks.
Abstract: An approach to illumination and imaging of specular surfaces that yields three-dimensional shape information is described. The structured highlight approach uses a scanned array of point sources and images of the resulting reflected highlights to compute local surface height and orientation. A prototype structured highlight inspection system, called SHINY, has been implemented. SHINY demonstrates the determination of surface shape for several test objects including solder joints. The current SHINY system makes the distant-source assumption and requires only one camera. A stereo structured highlight system using two cameras is proposed to determine surface-element orientation for objects in a much larger field of view. Analysis and description of the algorithms are included. The proposed structured highlight techniques are promising for many industrial tasks.

194 citations


Journal ArticleDOI
TL;DR: For natural textures, it is shown that the uniform density assumption (texels are uniformly distributed) is enough to recover the orientation of a single textured plane in view, under perspective projection.
Abstract: A central goal for visual perception is the recovery of the three-dimensional structure of the surfaces depicted in an image. Crucial information about three-dimensional structure is provided by the spatial distribution of surface markings, particularly for static monocular views: projection distorts texture geometry in a manner that depends systematically on surface shape and orientation. To isolate and measure this projective distortion in an image is to recover the three-dimensional structure of the textured surface. For natural textures, we show that the uniform density assumption (texels are uniformly distributed) is enough to recover the orientation of a single textured plane in view, under perspective projection. Furthermore, when the texels cannot be found, the edges of the image are enough to determine shape, under a more general assumption, that the sum of the lengths of the contours on the world plane is about the same everywhere. Finally, several experimental results for synthetic and natural images are presented.

186 citations


Journal ArticleDOI
26 May 1988-Nature
TL;DR: This work proposes that blobs computed by a centre-surround operator are useful as texture elements, and that a simple non-parametric statistic can be used to compare local distributions of blob attributes to locate texture boundaries.
Abstract: Recent computational and psychological theories of human texture vision assert that texture discrimination is based on first-order differences in geometric and luminance attributes of texture elements, called 'textons'. Significant differences in the density, orientation, size, or contrast of line segments or other small features in an image have been shown to cause immediate perception of texture boundaries. However, the psychological theories, which are based on the perception of synthetic images composed of lines and symbols, neglect two important issues. First, how can textons be computed from grey-level images of natural scenes? And second, how, exactly, can texture boundaries be found? Our analysis of these two issues has led to an algorithm that is fully implemented and which successfully detects boundaries in natural images. We propose that blobs computed by a centre-surround operator are useful as texture elements, and that a simple non-parametric statistic can be used to compare local distributions of blob attributes to locate texture boundaries. Although designed for natural images, our computation agrees with some psychophysical findings, in particular, those of Adelson and Bergen (described in the preceding article), which cast doubt on the hypothesis that line segment crossings or termination points are textons.
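The boundary test can be sketched with one plausible non-parametric statistic, the two-sample Kolmogorov-Smirnov distance between blob-attribute samples on either side of a candidate boundary (the paper's exact statistic may differ):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of samples a and b."""
    values = sorted(set(a) | set(b))
    cdf = lambda s, v: sum(1 for x in s if x <= v) / len(s)
    return max(abs(cdf(a, v) - cdf(b, v)) for v in values)

# Blob orientations (degrees) sampled on either side of a candidate
# texture boundary, plus a second sample from the same texture:
left    = [10, 12, 9, 11, 10, 13]
right   = [80, 82, 79, 81, 80, 83]
similar = [11, 10, 12, 9, 14, 10]

d_across = ks_statistic(left, right)    # large: declare a boundary
d_within = ks_statistic(left, similar)  # small: same texture
```

A boundary is declared wherever the statistic between adjacent neighborhoods exceeds a threshold; being rank-based, the test needs no model of the attribute distribution.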

177 citations


Journal ArticleDOI
TL;DR: A model being developed to predict pilot dynamic spatial orientation in response to multisensory stimuli has shown agreement with several important qualitative characteristics of human spatial orientation, and it is felt that with further modification and additional experimental data the model can be improved and extended.
Abstract: A model is presented to predict human dynamic spatial orientation in response to multisensory stimuli. Motion stimuli are first processed by dynamic models of the visual, vestibular, tactile, and proprioceptive sensors. Central nervous system function is modeled as a steady state Kalman filter that optimally blends information from the various sensors to form an estimate of spatial orientation. Where necessary, nonlinear elements preprocess inputs to the linear central estimator in order to reflect more accurately some nonlinear human response characteristics. Computer implementation of the model has shown agreement with several important qualitative characteristics of human spatial orientation.
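The optimal blending idea can be sketched in one dimension: in the steady state, a Kalman filter weights each unbiased sensor inversely to its noise variance. A toy sketch with made-up variances, not the paper's multisensory model:

```python
import random

def fuse(visual, vestibular, var_v, var_s):
    """Statically optimal blend of two unbiased measurements:
    weights inversely proportional to sensor noise variance
    (the steady-state limit of a scalar Kalman filter)."""
    w = var_s / (var_v + var_s)           # weight on the visual cue
    return w * visual + (1 - w) * vestibular

random.seed(0)
truth = 15.0                              # true tilt angle, degrees
var_visual, var_vestib = 1.0, 4.0         # hypothetical noise levels
est = [fuse(random.gauss(truth, var_visual ** 0.5),
            random.gauss(truth, var_vestib ** 0.5),
            var_visual, var_vestib) for _ in range(2000)]
mean_est = sum(est) / len(est)
```

The fused estimate has variance var_v*var_s/(var_v+var_s), lower than either sensor alone, which is the sense in which the central estimator "optimally blends" the cues.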

152 citations


Patent
22 Apr 1988
TL;DR: In this paper, an object orientation and position patch is attached to an object to be observed, comprising substantially coplanar and non-collinear reflective locations positioned upon the patch and a reflector having the reflective properties of the surface of a regular curved surface intersecting the planar surface.
Abstract: A system for computer vision is based upon an image sensor that maps an image to memory cells in association with a computer. An object orientation and position patch is attached to an object to be observed, comprising substantially coplanar and non-collinear reflective locations positioned upon the patch and a reflector having the reflective properties of the surface of a regular curved surface intersecting the planar surface. The computer has a task stored in main memory for detecting and quantifying a change in orientation and position of the object from the location of the image of the orientation and position patch.

150 citations


Journal ArticleDOI
TL;DR: A partial-shape-recognition technique utilizing local features described by Fourier descriptors is introduced, and experimental results are discussed that indicate that partial contours can be recognized with reasonable accuracy.
Abstract: A partial-shape-recognition technique utilizing local features described by Fourier descriptors is introduced. A dynamic programming formulation for shape matching is developed, and a method for comparison of match quality is discussed. This technique is shown to recognize unknown contours that may be occluded or that may overlap other objects. Precise scale information is not required, and the unknown objects may appear at any orientation with respect to the camera. The segment-matching dynamic programming method is contrasted with other sequence-comparison techniques that utilize dynamic programming. Experimental results are discussed that indicate that partial contours can be recognized with reasonable accuracy.
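The descriptor idea can be illustrated on a whole closed contour: after scale normalization, the Fourier coefficient magnitudes are invariant to rotation and to the starting point. A sketch of the descriptors only; the paper matches descriptors of contour segments with dynamic programming.

```python
import numpy as np

def fourier_descriptors(contour, k=8):
    """First k Fourier-descriptor magnitudes of a closed contour given
    as complex points x + iy, scale-normalized by the fundamental.
    Magnitudes are unchanged by rotation and starting-point shifts."""
    F = np.fft.fft(contour)
    mags = np.abs(F[1:k + 1])
    return mags / mags[0]

# A square contour, and the same square rotated by 30 degrees:
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
square = np.sign(np.cos(t)) + 1j * np.sign(np.sin(t))
rotated = square * np.exp(1j * np.pi / 6)

d1 = fourier_descriptors(square)
d2 = fourier_descriptors(rotated)
```

Rotating the contour multiplies every Fourier coefficient by the same unit-magnitude phase, so d1 and d2 agree, which is why orientation relative to the camera does not matter.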

137 citations


Journal ArticleDOI
01 Aug 1988
TL;DR: The author provides a general introduction to computer vision by focusing on two-dimensional object recognition, i.e. recognition of an object whose spatial orientation, relative to the viewing direction, is known.
Abstract: The author provides a general introduction to computer vision. He discusses basic techniques and computer implementations, and also indicates areas in which further research is needed. He focuses on two-dimensional object recognition, i.e. recognition of an object whose spatial orientation, relative to the viewing direction, is known.

106 citations


Journal ArticleDOI
TL;DR: An approach for computer perception of outdoor scenes is presented based on integrating information extracted from thermal images and visual images, which provides information not available by processing either type of image alone.
Abstract: An approach for computer perception of outdoor scenes is presented. The approach is based on integrating information extracted from thermal images and visual images, which provides information not available by processing either type of image alone. The thermal image is analyzed to provide estimates of surface temperature. The visual image provides surface absorptivity and relative orientation. These parameters are used together to provide estimates of heat fluxes at the surfaces of viewed objects. The thermal behavior of scene objects is described in terms of surface heat fluxes. Features based on estimated values of surface heat fluxes are shown to be more meaningful and specific in distinguishing scene components.

105 citations


Journal ArticleDOI
TL;DR: A tensor-based moment function method and a principal-axes method were investigated for registering 3-D test images to a standard image using translation, scale, and orientation for quantifying the left ventricular function from gated blood pool single-photon emission computed tomographic images.
Abstract: A tensor-based moment function method and a principal-axes method were investigated for registering 3-D test images to a standard image using translation, scale, and orientation. These methods were applied at two image resolutions to test discretization effects. At the higher resolution, both methods were found to perform well in cases where the test image could be described as an affine transform of the standard. At low resolutions, however, and when the test image was not an affine transform of the standard, only the principal-axes-based method performed adequately. The problem of quantifying the left ventricular function from gated blood pool single-photon emission computed tomographic images is considered.
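The principal-axes method can be sketched in 2-D: register two point sets by aligning their centroids and the eigenvectors of their covariance matrices. A toy sketch on synthetic points, not the 3-D tomographic pipeline:

```python
import numpy as np

def principal_axes(points):
    """Centroid and covariance eigenvectors (principal axes),
    columns sorted by decreasing eigenvalue."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    w, v = np.linalg.eigh(cov)
    return c, v[:, np.argsort(w)[::-1]]

# An elongated point cloud and a translated, rotated copy of it:
rng = np.random.default_rng(1)
cloud = rng.normal(size=(500, 2)) * [5.0, 1.0]
angle = np.deg2rad(40)
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
moved = cloud @ R.T + [10.0, -3.0]

c1, a1 = principal_axes(cloud)
c2, a2 = principal_axes(moved)
# The moved cloud's major axis is the rotated major axis of the
# original (up to sign), which recovers the relative orientation.
cosang = abs(a2[:, 0] @ (R @ a1[:, 0]))
```

Centroid differences give the translation and eigenvalue ratios give the scale; the method needs distinct eigenvalues, which is why it degrades on nearly symmetric shapes.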

Journal ArticleDOI
TL;DR: A method is presented for determining the surface orientations of an object by projecting a stripe pattern onto it, estimating surface normals from the slopes and intervals of the stripes in the image.
Abstract: A method is presented for determining the surface orientations of an object by projecting a stripe pattern onto it. Assuming orthographic projection as a camera model and parallel light projection of the stripe pattern, the method obtains a 2 1/2-D representation of objects by estimating surface normals from the slopes and intervals of the stripes in the image. The 2 1/2-D image is further divided into planar or singly curved surfaces by examining the distribution of the surface normals in gradient space. A simple application to finding a planar surface and determining its orientation and shape is shown. The error in surface orientation is discussed.

Patent
15 Apr 1988
Abstract: An apparatus for use in determining the orientation and location of an image plane, particularly when imaging the head of a human being, has an elongated flexible channel provided for containing an imaging-opaque fluid which is visible in the image. First and second carriers are each provided for supporting three respective portions of the elongated flexible channel at respective orientations transverse to the image plane and in predetermined space relation with respect to one another. Preferably, the portions of the said elongated flexible channel are arranged substantially as legs of a triangle. A support arrangement maintains the first and second carriers in fixed spatial relation to one another and to the head of the human being. The present invention can be used in any of several known imaging modalities by using appropriate contrast agents. After imaging, the portions of the said elongated flexible channel appear as points in the image, the location of the plane of imaging, and its orientation, being determined by analysis of the distance between such points, and the ratios of the distances between them. Such analysis can be performed by computer. Additionally, with the use of the present invention, computer analysis can be used to reconstruct a given set of consecutive image planes, such as from MRI, to match another given set of image planes, such as from PET.

Proceedings ArticleDOI
05 Dec 1988
TL;DR: This paper describes a technique for measuring the movement of edge-lines in a sequence of images by maintaining an image plane "flow model" using a set of parameter vectors representing the center-point, orientation and length of a segment.
Abstract: This paper describes a technique for measuring the movement of edge-lines in a sequence of images by maintaining an image plane "flow model". Edge-lines are expressed as a set of parameter vectors representing the center-point, orientation and length of a segment. Each parameter vector is composed of an estimate, a temporal derivative, and their covariance matrix. Line segment parameters in the flow model are updated using a Kalman filter. The correspondence of observed edge-line segments to segments predicted from the flow model is determined by a linear complexity algorithm using distance normalized by covariance. The existence of segments in the flow model is controlled using a confidence factor. This technique is in everyday use as part of a larger system for building 3-D scene descriptions using a camera mounted on a robot arm. A near video-rate hardware implementation is currently under development.

Journal ArticleDOI
TL;DR: A family of texture features is presented that have the ability to discriminate different textures in a 3-D scene as well as theAbility to recover the range and orientation of the surfaces of the scene and are derived from the gray-level run-length matrices of an image.
Abstract: A family of texture features is presented that have the ability to discriminate different textures in a 3-D scene as well as the ability to recover the range and orientation of the surfaces of the scene. These texture features are derived from the gray-level run-length matrices (GLRLMs) of an image. The GLRLMs are first normalized so that they all have equal average gray-level run length. Features extracted from the normalized GLRLMs are independent of the surface geometry. These features can be used in three-dimensional scene analysis where textures need to be identified according to their differences. Based on the average-run-length information and the classification results, surface range as well as surface orientation of a textured surface can be recovered.
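The underlying matrix can be sketched for the horizontal direction: entry (g, r) counts maximal runs of gray level g with length r. A minimal version on a toy image; the paper also uses other directions and the normalization described above.

```python
def glrlm_horizontal(image, levels):
    """Gray-level run-length matrix for horizontal runs:
    entry [g][r-1] counts maximal runs of gray level g, length r."""
    max_run = max(len(row) for row in image)
    M = [[0] * max_run for _ in range(levels)]
    for row in image:
        run, prev = 0, None
        for g in row + [None]:            # sentinel flushes last run
            if g == prev:
                run += 1
            else:
                if prev is not None:
                    M[prev][run - 1] += 1
                run, prev = 1, g
    return M

img = [[0, 0, 1, 1, 1],
       [2, 2, 2, 2, 0]]
M = glrlm_horizontal(img, levels=3)
```

Foreshortening of a slanted textured surface compresses runs in the image, which is why average run length carries range and orientation information once the matrices are normalized.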

Journal ArticleDOI
TL;DR: In this paper, a computer-based measuring strategy to determine spatial orientation of short glass fibers from the cross-section of single fibers is presented and a way of correcting the inclination-dependent probability of hitting a fiber is shown as well as the correction of fiber-end intersections.
Abstract: A computer-based measuring strategy to determine the spatial orientation of short glass fibers from the cross-sections of single fibers is presented. A way of correcting the inclination-dependent probability of hitting a fiber is shown, as well as the correction of fiber-end intersections. The latter is done by pattern recognition and application of a set of FORTRAN subroutines.

Patent
09 Feb 1988
TL;DR: In this article, a method and apparatus for the identification of spatial patterns that occur in two or more scenes or maps is presented, each pattern comprises a set of points in a spatial coordinate system collectively represented by the geometrical figure formed by connecting all point pairs by straight lines.
Abstract: A method and apparatus for the identification of spatial patterns that occur in two or more scenes or maps. Each pattern comprises a set of points in a spatial coordinate system collectively represented by the geometrical figure formed by connecting all point pairs by straight lines. The pattern recognition process is one of recognizing congruent geometrical figures. Two geometrical figures are congruent if all the lines in one geometrical figure are of the same length as the corresponding lines in the other. This concept is valid in a spatial coordinate system of any number of dimensions. In two- or three-dimensional space, a geometrical figure may be considered as a polygon or polyhedron, respectively. Using the coordinates of the points in a pair of congruent geometrical figures, one in a scene and the other in a map, a least squares error transformation matrix may be found to map points in the scene into the map. Using the transformation matrix, the map may be updated and extended with points from the scene. If the scene is produced by the sensor system of a vehicle moving through an environment containing features at rest, the position and orientation of the vehicle may be charted, and, over a series of scenes, the course of the vehicle may be tracked. If the scenes are produced by a sensor system at rest, then moving objects and patterns in the field of view may be tracked.
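Once congruent figures are matched point-to-point, the least-squares transformation the patent describes can be computed with the standard SVD-based (Kabsch/Procrustes) method, sketched here in 2-D with hypothetical coordinates:

```python
import numpy as np

def fit_rigid(scene, map_pts):
    """Least-squares rotation R and translation t such that
    R @ scene_i + t ~= map_i (Kabsch/Procrustes method)."""
    cs, cm = scene.mean(axis=0), map_pts.mean(axis=0)
    H = (scene - cs).T @ (map_pts - cm)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution:
    D = np.diag([1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cm - R @ cs

scene = np.array([[0., 0.], [4., 0.], [0., 3.]])
theta = np.deg2rad(25)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
map_pts = scene @ R_true.T + [2.0, -1.0]

R, t = fit_rigid(scene, map_pts)
```

Applying the recovered transform maps scene points onto the map, after which the map can be updated with new scene points and the vehicle's pose tracked across scenes.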

Patent
Henry S. Baird1
16 May 1988
TL;DR: In this article, a file of picture elements is generated which depicts the image with respect to the reference angle, and the picture elements are projected onto a plurality of contiguous segments of imaginary lines at selected angles across the file.
Abstract: A method and apparatus for determining a predominant angle of orientation of an image with respect to a reference angle. A file of picture elements is generated which depicts the image with respect to the reference angle. The picture elements are projected onto a plurality of contiguous segments of imaginary lines at selected angles across the file. Each imaginary line is perpendicular to its associated direction of projection. The number of picture elements that fall into the segments for each projection are counted. An enhancement function is applied to the segment counts of each projection. The projection that generates the largest value of the enhancement function defines the angle of orientation of the image. The position of a document scanner or the document itself may be rotated to compensate for the detected skew.
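The projection scheme can be sketched on pixel coordinates: project onto candidate directions, histogram the projections into segments, and score each angle with an enhancement function. Here the function is the sum of squared segment counts, which peaks when text rows concentrate into few segments; the patent leaves the exact function open.

```python
import math
from collections import Counter

def estimate_skew(pixels, angles, bin_size=1.0):
    """Angle (degrees) whose projection profile maximizes the
    enhancement function (sum of squared segment counts)."""
    def concentration(deg):
        th = math.radians(deg)
        counts = Counter(round((y * math.cos(th) - x * math.sin(th))
                               / bin_size) for x, y in pixels)
        return sum(v * v for v in counts.values())
    return max(angles, key=concentration)

# Two synthetic "text lines" of pixels, skewed by 5 degrees:
skew = math.radians(5)
pixels = [(x * math.cos(skew), 10 * ln + x * math.sin(skew))
          for ln in range(2) for x in range(100)]
best = estimate_skew(pixels, angles=range(-10, 11))
```

At the true skew angle every text line projects into a single segment, so the profile is maximally peaked; the scanner or document is then rotated by the detected angle.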

Journal ArticleDOI
TL;DR: A comprehensive computer-graphics-based system (STERECON) is described for tracing and digitizing contours from individual or stereopair electron micrographs and organizes and displays the digitized data from successive sections as a 3-D reconstruction.
Abstract: A comprehensive computer-graphics-based system (STERECON) is described for tracing and digitizing contours from individual or stereopair electron micrographs. The contours are drawn in parallel planes within the micrographs. Provision is also made for tracing and digitizing in full three-dimensional (3-D) coordinates in any direction along linear structures such as cytoskeletal elements. The stereopair micrographs are viewed in combination with the contours being traced on a graphics terminal monitor. This is done either by projecting original electron micrograph (EM) negatives onto a screen and optically combining these images with contour lines being drawn on the monitor, or by first digitizing the images and displaying them directly on the monitor along with the contour lines. Prior image digitization allows computer enhancement of the structures to be contoured. Correction and alignment routines are included to deal with variable section thickness, section distortion and mass loss, variations in photography in the electron microscope, and terminal screen curvature when combining projected images with contour lines on the monitor. The STERECON system organizes and displays the digitized data from successive sections as a 3-D reconstruction. Reconstructions can be viewed in any orientation as contour stacks with hidden lines removed; as wire-frame models; or as shaded, solid models with variable lighting, transparency, and reflectivity. Volumes and surface areas of the reconstructed objects can be determined. Particular attention was paid to making the system convenient for the biological user. Users are given a choice of three different stereo-viewing methods.

Journal ArticleDOI
TL;DR: Evidence is presented that useful reconstructions can be obtained with only one or two extra tilts from highly disordered specimens, even if the objects are asymmetric.

Journal ArticleDOI
TL;DR: A computational model for the 3D interpretation of a 2D view based on contour classification and contour interpretation is suggested and a computer algorithm is described which attempts to interpret image contours on the following grounds.

Proceedings ArticleDOI
11 Apr 1988
TL;DR: The contribution of this work is to quantify and justify the functional relationships between image features and filter parameters so that the design process can be easily modified for different conditions of noise and scale.
Abstract: A procedure for filter design is described for enhancing fingerprint images. Four steps of this procedure are described: user specification of appropriate image features, determination of local ridge orientations throughout the image, smoothing of this orientation image, and pixel-by-pixel image enhancement by application of oriented, matched filter masks. The contribution of this work is to quantify and justify the functional relationships between image features and filter parameters so that the design process can be easily modified for different conditions of noise and scale. Application of the filter shows good ridge separation, continuity, and background noise reduction. >
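Local ridge orientation is commonly computed from image gradients with a least-squares (structure-tensor) estimate; a sketch on a synthetic ridge patch, assuming this standard formulation rather than the paper's exact one:

```python
import numpy as np

def ridge_orientation(patch):
    """Dominant ridge direction of a patch (radians from the x-axis)
    via the structure-tensor orientation of the gradients; ridges run
    perpendicular to the dominant gradient direction."""
    gy, gx = np.gradient(patch.astype(float))
    theta = 0.5 * np.arctan2(2 * (gx * gy).sum(),
                             ((gx ** 2) - (gy ** 2)).sum())
    return theta + np.pi / 2   # gradient normal -> ridge direction

yy, xx = np.mgrid[0:64, 0:64]
angle = np.deg2rad(30)
# Intensity varies across the 30-degree ridges, constant along them.
ridges = np.sin(0.3 * (xx * np.sin(angle) - yy * np.cos(angle)))
est = np.rad2deg(ridge_orientation(ridges)) % 180.0
```

The double-angle form averages gradients without the 180-degree direction ambiguity; smoothing this orientation field is the paper's third step, before applying the oriented matched filters.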

Proceedings ArticleDOI
14 Nov 1988
TL;DR: A novel synthesis-based approach for selection of rotation-invariant features of an image based on a mapping of the image onto a set of orthogonal basis functions, which gives them many useful properties.
Abstract: A method for recognizing an object in a binary image regardless of its orientation is discussed. The technique is also insensitive to slight deviation in shape and structure from a reference. The rotation-invariant features are the magnitudes of the Zernike moments of the image. Unlike classical moments, the Zernike moments are a mapping of the image onto a set of orthogonal basis functions, which gives them many useful properties. A novel synthesis-based approach for selection of these features is presented. Using this procedure, the discrimination power of features is evaluated by examining dissimilarities among images synthesized from them for different patterns. The method, applied to recognition of all English characters, yielded 95% accuracy.
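The rotation invariance follows directly from the definition: rotating the input multiplies Z_nm by a unit-magnitude phase, leaving |Z_nm| unchanged. A sketch evaluating Z_22 over a few unit-disk samples, a discrete stand-in for the image integral:

```python
import math

def zernike_moment(points, values, n, m):
    """Zernike moment Z_nm over unit-disk samples (points: (x, y)
    coordinates, values: intensities), using the standard radial
    polynomial R_nm and angular factor exp(-i*m*theta)."""
    def R(n, m, rho):
        m = abs(m)
        return sum((-1) ** s * math.factorial(n - s)
                   / (math.factorial(s)
                      * math.factorial((n + m) // 2 - s)
                      * math.factorial((n - m) // 2 - s))
                   * rho ** (n - 2 * s)
                   for s in range((n - m) // 2 + 1))
    Z = 0j
    for (x, y), f in zip(points, values):
        rho, theta = math.hypot(x, y), math.atan2(y, x)
        Z += f * R(n, m, rho) * complex(math.cos(m * theta),
                                        -math.sin(m * theta))
    return Z * (n + 1) / math.pi

# An asymmetric sample pattern, and the same pattern rotated 40
# degrees about the disk center: |Z_22| is unchanged.
pts = [(0.5, 0.1), (0.2, -0.4), (-0.3, 0.3), (0.0, 0.6)]
vals = [1.0, 2.0, 3.0, 4.0]
phi = math.radians(40)
rot = [(x * math.cos(phi) - y * math.sin(phi),
        x * math.sin(phi) + y * math.cos(phi)) for x, y in pts]

z1 = zernike_moment(pts, vals, 2, 2)
z2 = zernike_moment(rot, vals, 2, 2)
```

Because the basis functions are orthogonal over the disk, images can be synthesized back from a chosen feature subset, which is what the paper's synthesis-based selection exploits.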

Journal ArticleDOI
01 Jan 1988
TL;DR: A new approach to corner detection is described, based on the generalised Hough transform, which has the advantage that it can be used when objects have curved sides or blunt corners, as frequently happens with food products.
Abstract: A new approach to corner detection is described which is based on the generalised Hough transform. The approach has the advantage that it can be used when objects have curved sides or blunt corners, as frequently happens with food products; in addition, it can be tuned for varying degrees of corner bluntness. The method is inherently sensitive: we have shown how it may be optimised for accuracy in the measurement of object dimensions and orientation.

Journal ArticleDOI
TL;DR: Results indicate processing changes when features of texture and shape must be integrated, and texture may be a better candidate than edge orientation for early perceptual processing, with information being processed preattentively and in parallel.
Abstract: A haptic search paradigm, adapted from Treisman and Gelade’s (1980) visual search tasks, was used as an initial step in addressing issues relevant to the development of models of human and machine haptic object processing. Texture and/or edge-orientation information were presented to multiple finger locations in disjunction (Experiment 1) and conjunction (Experiment 2) search tasks. In Experiment 3, subjects performed a difficult single-feature (orientation) search. Although the disjunction task could be interpreted with parallel or serial exhaustive models of haptic processing, subjects showed a shift toward serial self-terminating processing with the more complex and difficult tasks. These results indicate processing changes when features of texture and shape must be integrated. Given other converging evidence, texture may be a better candidate than edge orientation for early perceptual processing, with information being processed preattentively and in parallel.

Book ChapterDOI
01 Jan 1988
TL;DR: In this article, a new technique that takes into account variations in subject position, orientation, pixel size and slice thickness has been developed for the registration of brain images from different modalities.
Abstract: A new technique that takes into account variations in subject position, orientation, pixel size and slice thickness has been developed for the registration of brain images from different modalities. This method uses surface-fitting algorithms which minimize the mismatch between models of the external surface as constructed from each scan. An advantage relative to techniques that have been previously reported is the absence of the requirement for identification of either internal or external landmarks. Models of the external surface of the head are derived from transmission scans in positron emission tomography (PET) and from images obtained by using X-ray computed tomography (CT) and magnetic resonance (MR). The surface models are constructed from multiple external contours determined on various transverse slices. The fitting procedure yields a set of transformation parameters that represent the translation, rotation and linear scaling factors between two sets of images. Results from preliminary studies indicate that registration accuracy of 2–3 mm can be achieved between PET and CT or MR images.

Journal ArticleDOI
TL;DR: This method has been implemented for localizing an object in a manipulator end-effector instrumented with cen troid and matrix tactile sensors by solving a weighted linear system of the optimal set of vectors in a least squares sense.
Abstract: We present a method to obtain the position and orientation of an object through measurement from multiple sensors. Raw sensor measurements are subject to limitations of sensor precision and accuracy. Although for most measurements the estimate of position parameters is a linear function of the measurements, the estimate of orientation parameters is a nonlinear function of the measurements. Thus, error in orien tation estimate depends on the distance over which the raw measurements are made. For example, the estimate of the orientation of a line is better, the farther apart two points on the line are. The problem of finding the orientation parame ters is formulated in two steps. The first step computes vec tors from sensor measurements of points. A concept of best features is developed to select an optimal set of all possible vectors. The second step relates the orientation parameters to the vectors from the first step as a linear system. The best estimate is obtained by solving a weighted linear system of...

Proceedings ArticleDOI
08 Jun 1988
TL;DR: Algorithms used to identify markedly different objects and to distinguish between those objects which appear very similar to the trained eye are discussed, which has been very successful when applied to color images of the retina.
Abstract: We are developing a system designed around an IBM PC-AT to perform automatic diagnosis of diseases from images of the retina. The system includes hardware for color image capture and display. We are developing software for performing image enhancement, image analysis, pattern recognition and artificial intelligence. The design goal of the system is to automatically segment a digitized photograph of the retina into its normal and abnormal structures, identifying these objects by various features such as color, size, shape, texture, orientation, etc., and ultimately to provide a list of possible diagnoses with varying degrees of probability. We will discuss algorithms used to identify markedly different objects and to distinguish between those objects which appear very similar to the trained eye. Implementation of these algorithms, which are typically applied to areas such as remote sensing, terrain mapping and robotics, has been very successful when applied to color images of the retina.

Proceedings ArticleDOI
01 Jan 1988
TL;DR: In this article, the authors present three quality enhancement techniques and compare their performances to that provided by a basic raster printing scheme where pixels of binary (on or off) values are printed on a cartesian grid.
Abstract: Raster scan lithography systems, such as scanned-laser and most E-beam mask writers, produce images through a mosaic of discrete picture elements (pixels). Image qualities of the printed mask (or wafer) are governed by the interplay of several printing variables, including size and shape of the writing spot, pitch and orientation of the pixel grid, relative intensities of pixels, and exposure characteristics of the resist. We will review the theoretical foundations of raster imaging and show how these variables affect several key measures of lithographic image quality, including minimum feature size, edge placement resolution and accuracy, dimensional uniformity, and edge roughness. We will present three quality enhancement techniques and compare their performances to that provided by a basic raster printing scheme where pixels of binary (on or off) values are printed on a cartesian grid. The first technique involves rotating the printing grid 45 degrees to the main axis of the data coordinate system. We will demonstrate that, for lithographic images where most edges are parallel to the data axes, this grid provides 41% more addressable edge positions than a non-rotated grid with the same pixel density. The second technique, adapted from computer-graphics "antialiasing" applications, involves modulating the intensity of pixels along the edges of features to finely control the shape of the aerial image. This provides a vernier mechanism for the placement of exposed edges between grid locations and results in finer effective addressability and smoother edges. Third, we will review how multiple pass printing (a.k.a. vote-taking) reduces random errors, and show how it also reduces systematic errors when certain printing parameters are alternated between passes. Finally, we will present a single printing strategy in which all three techniques are combined to yield high accuracy, high-resolution images with economic use of printing pixels.

01 Sep 1988
TL;DR: Continuing advances in hardware and software have improved both the speed and the range of computations that can be made to simulate high resolution electron microscope (HREM) images from various structures.
Abstract: Continuing advances in hardware and software have improved both the speed and the range of computations that can be made to simulate high resolution electron microscope (HREM) images from various structures. Use of image display systems and array processors has made the image simulation procedure much more interactive, while laser printers provide fast, high-quality hard copy output. Use of array processors has enabled the rewriting of electron scattering algorithms to include convergence effects (previously only considered after the scattered electron beams had emerged from the specimen) and upper-layer-line effects. With an array processor it is faster to compute effects of spatial and temporal coherence in real space than to use approximate solutions derived from series expansion in reciprocal space. With a frame buffer and suitable software the user has the facility to change parameters and view the results of the change almost immediately. Selected images can then be directed to hard copy output, in contrast with batch methods where series of hard copy images are produced and then selected from. Given a microdensitometer for input of experimental images from plates, or a video camera attached to the electron microscope and a frame buffer, split-screen comparisons between experimental and computed images are possible, including independent control of image contrast, magnification and orientation.