
Showing papers on "Real image published in 1990"


Book
01 Jul 1990
TL;DR: Simulations show that, compared to scalar error models, the 3D Gaussian reduces the variance in robot position estimates and better distinguishes rotational from translational motion.
Abstract: In stereo navigation, a mobile robot estimates its position by tracking landmarks with on-board cameras. Previous systems for stereo navigation have suffered from poor accuracy, in part because they relied on scalar models of measurement error in triangulation. Using three-dimensional (3D) Gaussian distributions to model triangulation error is shown to lead to much better performance. How to compute the error model from image correspondences, estimate robot motion between frames, and update the global positions of the robot and the landmarks over time are discussed. Simulations show that, compared to scalar error models, the 3D Gaussian reduces the variance in robot position estimates and better distinguishes rotational from translational motion. A short indoor run with real images supported these conclusions and computed the final robot position to within two percent of distance and one degree of orientation. These results illustrate the importance of error modeling in stereo vision for this and other applications.
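
As a rough, hypothetical illustration of the error-modeling step, the sketch below propagates isotropic image-measurement noise through rectified-stereo triangulation into a full 3-D Gaussian covariance. The focal length, baseline, and pixel variance are assumed values, not the paper's, and the rectified geometry is a simplification of its general setup.

```python
# A minimal sketch (not the authors' code): first-order propagation of
# image noise into a 3-D Gaussian triangulation error model.
import numpy as np

def triangulate_with_covariance(xl, xr, y, f=500.0, b=0.1, pixel_var=0.25):
    """Triangulate a point from a rectified stereo pair; return (point, 3x3 cov)."""
    d = xl - xr                        # disparity
    Z = f * b / d
    p = np.array([xl * Z / f, y * Z / f, Z])
    # Jacobian of (X, Y, Z) with respect to the measurements (xl, xr, y)
    J = np.array([
        [b * (-xr) / d**2,  b * xl / d**2,  0.0  ],
        [-b * y / d**2,     b * y / d**2,   b / d],
        [-f * b / d**2,     f * b / d**2,   0.0  ],
    ])
    R = pixel_var * np.eye(3)          # isotropic image-measurement noise
    return p, J @ R @ J.T              # 3-D Gaussian covariance of the point
```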

356 citations


Proceedings ArticleDOI
16 Jun 1990
TL;DR: An iterative algorithm to increase image resolution is described; it is based on the resemblance of the problem to the reconstruction of a 2-D object from its 1-D projections in computer-aided tomography and is shown, theoretically and practically, to converge quickly.
Abstract: An iterative algorithm to increase image resolution is described. Examples are shown for low-resolution gray-level pictures, with an increase of resolution clearly observed after only a few iterations. The same method can also be used for deblurring a single blurred image. The approach is based on the resemblance of the presented problem to the reconstruction of a 2-D object from its 1-D projections in computer-aided tomography. The algorithm performed well for both computer-simulated and real images and is shown, theoretically and practically, to converge quickly. The algorithm can be executed in parallel for faster hardware implementation.
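
A minimal sketch of the tomography-style iteration described above, under assumed settings (a Gaussian blur model, bilinear resampling, and a fixed iteration count) rather than the paper's exact imaging model: simulate the low-resolution image from the current high-resolution estimate and back-project the residual.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def iterative_backprojection(low_res, factor=2, iters=10, sigma=1.0):
    high = zoom(low_res.astype(float), factor, order=1)   # initial guess
    for _ in range(iters):
        # simulate the imaging process on the current estimate
        simulated = zoom(gaussian_filter(high, sigma), 1.0 / factor, order=1)
        residual = low_res - simulated                    # error in low-res domain
        high += zoom(residual, factor, order=1)           # back-project the error
    return high
```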

329 citations


Journal ArticleDOI
TL;DR: A new inference method, Highest Confidence First (HCF) estimation, is used to infer a unique labeling from the a posteriori distribution that is consistent with both prior knowledge and evidence.
Abstract: Integrating disparate sources of information has been recognized as one of the keys to the success of general purpose vision systems. Image clues such as shading, texture, stereo disparities and image flows provide uncertain, local and incomplete information about the three-dimensional scene. Spatial a priori knowledge plays the role of filling in missing information and smoothing out noise. This thesis proposes a solution to the longstanding open problem of visual integration. It reports a framework, based on Bayesian probability theory, for computing an intermediate representation of the scene from disparate sources of information. The computation is formulated as a labeling problem. Local visual observations for each image entity are reported as label likelihoods. They are combined consistently and coherently on hierarchically structured label trees with a new, computationally simple procedure. The pooled label likelihoods are fused with the a priori spatial knowledge encoded as Markov Random Fields (MRF's). The a posteriori distribution of the labelings is thus derived in a Bayesian formalism. A new inference method, Highest Confidence First (HCF) estimation, is used to infer a unique labeling from the a posteriori distribution. Unlike previous inference methods based on the MRF formalism, HCF is computationally efficient and predictable while meeting the principles of graceful degradation and least commitment. The results of the inference process are consistent with both observable evidence and a priori knowledge. The effectiveness of the approach is demonstrated with experiments on two image analysis problems: intensity edge detection and surface reconstruction. For edge detection, likelihood outputs from a set of local edge operators are integrated with a priori knowledge represented as an MRF probability distribution. For surface reconstruction, intensity information is integrated with sparse depth measurements and a priori knowledge. Coupled MRF's provide a unified treatment of surface reconstruction and segmentation, and an extension of HCF implements a solution method. Experiments using real image and depth data yield robust results. The framework can also be generalized to higher-level vision problems, as well as to other domains.
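
As a schematic illustration only, the sketch below renders a much-simplified HCF-style commitment order for a binary MRF on a 4-connected grid. The smoothness weight beta is an assumed stand-in, and where the full method re-evaluates confidences of uncommitted neighbours after every commitment, this sketch ranks sites once by their initial likelihood gap.

```python
import heapq
import numpy as np

def hcf_binary(log_lik, beta=1.0):
    """log_lik: (H, W, 2) per-pixel label log-likelihoods."""
    H, W, _ = log_lik.shape
    committed = -np.ones((H, W), dtype=int)          # -1 = uncommitted

    def local_energy(i, j, label):
        # negative log-likelihood plus Ising penalty from committed neighbours
        e = -log_lik[i, j, label]
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W and committed[ni, nj] >= 0:
                e += beta * (label != committed[ni, nj])
        return e

    # most confident sites (largest likelihood gap) commit first
    heap = [(-abs(log_lik[i, j, 0] - log_lik[i, j, 1]), i, j)
            for i in range(H) for j in range(W)]
    heapq.heapify(heap)
    while heap:
        _, i, j = heapq.heappop(heap)
        committed[i, j] = min((0, 1), key=lambda l: local_energy(i, j, l))
    return committed
```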

285 citations


Journal ArticleDOI
TL;DR: An approach for explicitly relating the shape of image contours to models of curved three-dimensional objects is presented and readily extends to parameterized models.
Abstract: An approach for explicitly relating the shape of image contours to models of curved three-dimensional objects is presented. This relationship is used for object recognition and positioning. Object models consist of collections of parametric surface patches and their intersection curves; this includes nearly all representations used in computer-aided geometric design and computer vision. The image contours considered are the projections of surface discontinuities and occluding contours. Elimination theory provides a method for constructing the implicit equation of these contours for an object observed under orthographic or perspective projection. This equation is parameterized by the object's position and orientation with respect to the observer. Determining these parameters is reduced to a fitting problem between the theoretical contour and the observed data points. The proposed approach readily extends to parameterized models. It has been implemented for a simple world composed of various surfaces of revolution and tested on several real images.
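
The final fitting stage can be pictured as ordinary nonlinear least squares. In this hypothetical sketch, `contour_eq` stands in for the object-specific implicit equation produced by elimination, and `pose0` for an initial position/orientation guess; neither name comes from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_pose(contour_eq, edge_points, pose0):
    """Minimize the implicit contour equation C(x, y; pose) over observed edge points."""
    def residual(pose):
        return np.array([contour_eq(x, y, pose) for x, y in edge_points])
    return least_squares(residual, pose0).x
```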

237 citations


Journal ArticleDOI
TL;DR: A functional minimization algorithm utilizing overlapping local charts to refine surface points and curvature estimates is presented, and an implementation as an iterative constraint satisfaction procedure based on local surface smoothness properties is developed.
Abstract: Early image understanding seeks to derive analytic representations from image intensities. The authors present steps towards this goal by considering the inference of surfaces from three-dimensional images. Only smooth surfaces are considered and the focus is on the coupled problems of inferring the trace points (the points through which the surface passes) and estimating the associated differential structure given by the principal curvature and direction fields over the estimated smooth surfaces. Computation of these fields is based on determining an atlas of local charts or parameterizations at estimated surface points. Algorithm robustness and the stability of results are essential for analyzing real images; to this end, the authors present a functional minimization algorithm utilizing overlapping local charts to refine surface points and curvature estimates, and develop an implementation as an iterative constraint satisfaction procedure based on local surface smoothness properties. Examples of the recovery of local structure are presented for synthetic images degraded by noise and for clinical magnetic resonance images.
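
One ingredient of such an approach, sketched under simplifying assumptions: estimate principal curvatures at a surface point by fitting a local quadric chart z = ax^2 + bxy + cy^2 + dx + ey + f to neighbouring trace points already expressed in an assumed local tangent frame (the paper's chart construction and refinement loop are not reproduced).

```python
import numpy as np

def principal_curvatures(pts):
    """pts: (N, 3) neighbours in a frame with the surface point at the origin."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    # Monge-patch fundamental forms at the origin: gradient (d, e), Hessian [[2a, b], [b, 2c]]
    g = 1.0 + d**2 + e**2
    II = np.array([[2 * a, b], [b, 2 * c]]) / np.sqrt(g)
    I = np.array([[1 + d**2, d * e], [d * e, 1 + e**2]])
    k1, k2 = np.linalg.eigvals(np.linalg.solve(I, II))   # shape operator eigenvalues
    return k1, k2
```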

219 citations


Proceedings ArticleDOI
16 Jun 1990
TL;DR: The authors empirically compare three algorithms for segmenting simple, noisy images and conclude that contextual information from MRF models improves segmentation when the number of categories and the degradation model are known and that parameters can be effectively estimated.
Abstract: The authors empirically compare three algorithms for segmenting simple, noisy images: simulated annealing (SA), iterated conditional modes (ICM), and maximizer of the posterior marginals (MPM). All use Markov random field (MRF) models to include prior contextual information. The comparison is based on artificial binary images which are degraded by Gaussian noise. Robustness is tested with correlated noise and with textured object and background. The ICM algorithm is evaluated when the degradation and model parameters must be estimated, in both supervised and unsupervised modes and on two real images. The results are assessed by visual inspection and through a numerical criterion. It is concluded that contextual information from MRF models improves segmentation when the number of categories and the degradation model are known and that parameters can be effectively estimated. None of the three algorithms is consistently best, but the ICM algorithm is the most robust. The energy of the a posteriori distribution is not always minimized at the best segmentation.
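
Of the three algorithms compared, ICM is the simplest to sketch. The version below assumes a known two-class Gaussian degradation model and an Ising-type prior; sigma and beta are illustrative values, not the paper's estimated parameters.

```python
import numpy as np

def icm_binary(img, means=(0.0, 1.0), sigma=0.5, beta=1.5, sweeps=5):
    # initialize with the maximum-likelihood (no-prior) labeling
    labels = (np.abs(img - means[1]) < np.abs(img - means[0])).astype(int)
    H, W = img.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                best, best_e = labels[i, j], np.inf
                for l in (0, 1):
                    # Gaussian data term plus Ising smoothness term
                    e = (img[i, j] - means[l])**2 / (2 * sigma**2)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            e += beta * (l != labels[ni, nj])
                    if e < best_e:
                        best, best_e = l, e
                labels[i, j] = best
    return labels
```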

162 citations


Patent
15 Jun 1990
TL;DR: In this paper, a method and apparatus is presented for determining the distance of an object from a camera system, for focusing a surface patch of the object, and for obtaining an improved-focus image of that patch.
Abstract: The present invention is a method and apparatus for determining the distance of a surface patch of an object from a camera system and also for focusing a surface patch of such object as well as obtaining an improved focus image of the surface patch. The present invention also includes a method of determining a set of unknown parameters of a linear shift-invariant system. The camera system of the present invention has an aperture through which light enters, an image detector, an image forming optical system having first and second principal planes and a focal length, the second principal plane arranged closer to the image detector than the first principal plane, a light filter, a camera controller, and an image processor operatively connected to the image detector and to the camera controller. The camera is set to a first set of camera parameters which include the distance (s) between the second principal plane and the image detector, the diameter (D) of the camera aperture, the focal length (f) of the camera system and the spectral characteristic (λ) of light transmitted by the light filter. The apparatus and the method of the present invention are widely applicable and they significantly enhance the efficiency of image processing to provide the distance of an object and required changes in the camera parameters to focus the object.
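
As a hedged illustration of the thin-lens geometry such depth-from-defocus systems build on (the patent's actual procedure compares images taken under different camera-parameter settings through a linear shift-invariant system model, which this sketch does not reproduce): given detector distance s, focal length f, aperture D, and a measured blur-circle diameter d, the lens equation 1/f = 1/u + 1/v yields the object distance u.

```python
def depth_from_defocus(s, f, D, d, near_side=True):
    # blur-circle diameter: d = D * s * |1/f - 1/u - 1/s|; solve for 1/u.
    # near_side selects the solution nearer than the in-focus plane.
    k = d / (D * s)
    inv_u = (1.0 / f - 1.0 / s) + (k if near_side else -k)
    return 1.0 / inv_u

# Example with assumed numbers: s = 35.5 mm, f = 35 mm, D = 10 mm, d = 0.05 mm
print(depth_from_defocus(35.5, 35.0, 10.0, 0.05))   # object distance in mm
```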

139 citations


Journal ArticleDOI
TL;DR: It is shown that a generic optical flow can be locally approximated by a constant term plus a suitable combination of four elementary deformations of the time-varying image brightness, namely a uniform expansion, a pure rotation, and two orthogonal components of shear.
Abstract: We show that optical flow, i.e., the apparent motion of the time-varying brightness over the image plane of an imaging device, can be estimated by means of simple differential techniques. Linear algebraic equations for the two components of optical flow at each image location are derived. The coefficients of these equations are combinations of spatial and temporal derivatives of the image brightness. The equations are suggested by an analogy with the theory of deformable bodies and are exactly true for particular classes of motion or elementary deformations. Locally, a generic optical flow can be approximated by using a constant term and a suitable combination of four elementary deformations of the time-varying image brightness, namely, a uniform expansion, a pure rotation, and two orthogonal components of shear. When two of the four equations that correspond to these deformations are satisfied, optical flow can more conveniently be computed by assuming that the spatial gradient of the image brightness is stationary. In this case, it is also possible to evaluate the difference between optical flow and motion field—that is, the two-dimensional vector field that is associated with the true displacement of points on the image plane. Experiments on sequences of real images are reported in which the obtained optical flows are used successfully for the estimate of three-dimensional motion parameters, the detection of flow discontinuities, and the segmentation of the image in different moving objects.
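
The stationary-spatial-gradient case mentioned in the abstract has a particularly compact form: when d/dt of the brightness gradient vanishes, the two flow components solve a 2x2 linear system built from second derivatives of the brightness E. A minimal per-pixel sketch:

```python
import numpy as np

def flow_from_stationary_gradient(Exx, Exy, Eyy, Ext, Eyt):
    """Second derivatives of brightness at one pixel -> optical flow (u, v)."""
    M = np.array([[Exx, Exy],
                  [Exy, Eyy]])
    b = -np.array([Ext, Eyt])
    return np.linalg.solve(M, b)   # undefined where the brightness Hessian is singular
```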

127 citations


Journal ArticleDOI
TL;DR: To facilitate measurement of the navigation parameters, a constrained egomotion strategy is adopted in which the position of the fixation point is stabilized during navigation (in an anthropomorphic fashion).
Abstract: The extraction of depth information from a sequence of images is investigated. An algorithm that exploits the constraint imposed by active motion of the camera is described. Within this framework, in order to facilitate measurement of the navigation parameters, a constrained egomotion strategy was adopted in which the position of the fixation point is stabilized during the navigation (in an anthropomorphic fashion). This constraint reduces the dimensionality of the parameter space without increasing the complexity of the equations. A further distinctive point is the use of two sampling rates: the faster (related to the computation of the instantaneous optical flow) is fast enough to allow the local operator to sense the passing edge (or, in other words, to allow the tracking of moving contour points), while the slower (used to perform the triangulation procedure necessary to derive depth) is slow enough to provide a sufficiently large baseline for triangulation. Experimental results on real image sequences are presented.

123 citations


Patent
Barry Bronson1
02 Aug 1990
TL;DR: In this article, a technique is presented for interacting with a projected video image using a light pen and/or target marks on the projection screen, in which light reflected from the projected image is compared with the video image to detect the position of a spot.
Abstract: A technique is described for interacting with a projected video image using a light pen and/or target marks on the projection screen, in which light reflected from the projected image is compared with the video image to detect the position of a spot placed on the projected image by the light pen and/or the reflection of the target marks. The computer used to generate the video image is then caused to position a cursor in the video image in response to the spot position and/or otherwise modify the generated and/or projected video image.

82 citations


Proceedings ArticleDOI
16 Jun 1990
TL;DR: A camera model is presented which accounts for major sources of camera distortion: radial, decentering, and thin-prism distortions, and a type of measure is introduced which can be used to directly evaluate the performance of the calibration and compare calibrations among different systems.
Abstract: A camera model is presented which accounts for major sources of camera distortion: radial, decentering, and thin-prism distortions. The proposed calibration procedure consists of two steps. In the first step, calibration parameters are estimated using a closed-form solution based on a distortion-free camera model. In the second step, the parameters estimated in the first step are improved iteratively through nonlinear optimization, taking into account camera distortions. According to minimum-variance estimation, the objective function to be minimized is the mean-square discrepancy between the observed image points and their inferred image projections computed with the estimated calibration parameters. A type of measure is introduced which can be used to directly evaluate the performance of the calibration and compare calibrations among different systems. The validity and the performance of the calibration procedure are tested on real images taken by wide-angle lenses. Results consistently show significant improvements over less complete camera models.
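
A compressed sketch of the two-step structure, with an assumed parameter layout and generic radial (k1), decentering (p1, p2), and thin-prism (s1, s2) distortion terms; `project_linear` stands in for the closed-form, distortion-free projection of step one, and none of these names come from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def distort(xy, k1, p1, p2, s1, s2):
    """Apply radial, decentering and thin-prism distortion to ideal points."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x**2 + y**2
    xd = x + x * k1 * r2 + p1 * (3 * x**2 + y**2) + 2 * p2 * x * y + s1 * r2
    yd = y + y * k1 * r2 + 2 * p1 * x * y + p2 * (x**2 + 3 * y**2) + s2 * r2
    return np.column_stack([xd, yd])

def refine(params0, project_linear, observed):
    """Step 2: refine [pose/intrinsics..., k1, p1, p2, s1, s2] by reprojection error."""
    def residual(params):
        ideal = project_linear(params[:-5])      # step-1 style projection
        return (distort(ideal, *params[-5:]) - observed).ravel()
    return least_squares(residual, params0).x
```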

Proceedings ArticleDOI
04 Dec 1990
TL;DR: The key idea is that the local spatial structure of optical flow, with the exception of surface boundaries, is usually rather coherent and can thus be appropriately approximated by a linear vector field.
Abstract: A method is presented for the recovery of optical flow. The key idea is that the local spatial structure of optical flow, with the exception of surface boundaries, is usually rather coherent and can thus be appropriately approximated by a linear vector field. According to the proposed method, the optical flow components and their first order spatial derivatives are computed at the central points of rather large and overlapping patches which cover the image plane as the solution to a highly overconstrained system of linear algebraic equations. The equations, which are solved through the use of standard least mean square techniques, are derived from the assumptions that the changing image brightness is stationary everywhere over time and that optical flow is, locally, a linear vector field. The method has been tested on many sequences of synthetic and real images and the obtained optical flow has been used to estimate three-dimensional motion parameters with very good results.
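
A minimal sketch of the patch-wise idea: within each large patch, model the flow as an affine (linear) vector field and solve the stacked brightness-constancy equations by least squares. Inputs are the derivative images over the patch and the pixel coordinate grids; the patch size and coordinate conventions are assumptions.

```python
import numpy as np

def affine_flow(Ex, Ey, Et, xs, ys):
    """Solve Ex*(u0 + a*x + b*y) + Ey*(v0 + c*x + d*y) + Et = 0 over a patch."""
    Ex, Ey, Et = Ex.ravel(), Ey.ravel(), Et.ravel()
    x, y = xs.ravel(), ys.ravel()
    A = np.column_stack([Ex, Ey, Ex * x, Ex * y, Ey * x, Ey * y])
    u0, v0, a, b, c, d = np.linalg.lstsq(A, -Et, rcond=None)[0]
    return u0, v0, np.array([[a, b], [c, d]])   # flow at the centre and its Jacobian
```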

Journal ArticleDOI
TL;DR: An image coding method for low bit rates based on alternate use of the discrete cosine transform and the discrete sine transform on image blocks achieves the removal of redundancies in the correlation between neighboring blocks as well as the preservation of continuity across the block boundaries.
Abstract: An image coding method for low bit rates is proposed. It is based on alternate use of the discrete cosine transform (DCT) and the discrete sine transform (DST) on image blocks. This procedure achieves the removal of redundancies in the correlation between neighboring blocks as well as the preservation of continuity across the block boundaries. An outline of the mathematical justification of the method, assuming a certain first-order Gauss-Markov model, is given. The resulting coding method is then adapted to nonstationary real images by locally adapting the model parameters and improving the block classification technique. Simulation results are shown and compared with the performance of related previous methods, namely adaptive DCT and fast Karhunen-Loeve transform (FKLT).
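
A hedged illustration of the alternation itself; the paper's scheme adapts the transform choice via block classification, whereas the fixed checkerboard rule and block size below are assumptions made only to keep the sketch short.

```python
import numpy as np
from scipy.fft import dctn, dstn

def alternate_transform(img, B=8):
    """Transform each BxB block, alternating DCT and DST in a checkerboard pattern."""
    H, W = img.shape
    out = np.empty_like(img, dtype=float)
    for bi in range(0, H, B):
        for bj in range(0, W, B):
            block = img[bi:bi + B, bj:bj + B]
            use_dst = ((bi // B) + (bj // B)) % 2 == 1
            out[bi:bi + B, bj:bj + B] = (dstn(block, norm='ortho') if use_dst
                                         else dctn(block, norm='ortho'))
    return out
```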

Journal ArticleDOI
TL;DR: Three-dimensional imaging of biological objects becomes possible through confocal scanning laser microscopy, although the lateral resolution is better than the axial resolution and, thus, the microscope provides orientation-dependent images.
Abstract: Confocal scanning laser microscopy (CSLM) provides optical sectioning of a fluorescent sample and improved resolution with respect to conventional optical microscopy. As a result, three-dimensional (3-D) imaging of biological objects becomes possible. A difficulty is that the lateral resolution is better than the axial resolution and, thus, the microscope provides orientation-dependent images. However, a theoretical investigation of the process of image formation in CSLM shows that it must be possible to improve the resolution obtained in practice. We present two methods for achieving such a result in the case of 3-D fluorescent objects. The first method applies to conventional CSLM, where the image is detected only on the optical axis for any scanning position. Since the resulting 3-D image is the convolution of the object with the impulse-response function of the instrument, the problem of image restoration is a deconvolution problem and is affected by numerical instability. A short introduction to the linear methods developed for obtaining stable solutions of these problems (the so-called regularization theory of ill-posed problems) is given and an application to a real image is discussed. The second method applies to a new version of CSLM proposed in recent years. In such a case the full image must be measured by a suitable array of detectors. For each scanning position the data are not single numbers but vectors. Then, in order to recover the object, one must solve a Fredholm integral equation of the first kind. A method for the solution of this equation is presented and the possibility of achieving super-resolution is demonstrated. More precisely, we show that it is possible to improve by about a factor of 2 the resolution of conventional CSLM both in the lateral and axial directions.
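
One standard stabilized inverse of the kind regularization theory provides, sketched in the Fourier domain: the 3-D image is the object convolved with the point-spread function, so a Tikhonov-regularized inverse filter gives a stable estimate. The quadratic stabilizer and the weight mu are illustrative choices, not the paper's specific method.

```python
import numpy as np

def tikhonov_deconvolve(image, psf, mu=1e-3):
    """image, psf: equally shaped 3-D arrays (psf centred at index 0)."""
    H = np.fft.fftn(psf)
    G = np.fft.fftn(image)
    F = np.conj(H) * G / (np.abs(H)**2 + mu)   # regularized inverse filter
    return np.real(np.fft.ifftn(F))
```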

01 Mar 1990
TL;DR: In this paper, a method for the estimation of scene structure and camera motion from a sequence of images is presented, which combines the "direct" motion vision approach with the theory of recursive estimation.
Abstract: This paper presents a method for the estimation of scene structure and camera motion from a sequence of images. This approach is fundamentally new. No computation of optical flow or feature correspondences is required. The method processes image sequences of arbitrary length and exploits the redundancy for a significant reduction in error over time. No assumptions are made about camera motion or surface structure. Both quantities are fully recovered. Our method combines the "direct" motion vision approach with the theory of recursive estimation. Each step is illustrated and evaluated with results from real images.

Patent
22 Jan 1990
TL;DR: A holographic diffuser provides a high degree of chromatic correction, or color balance, within a selected eyebox and the ability to generate more than one specific eyebox for multiple observer applications.
Abstract: A holographic diffuser provides a high degree of chromatic correction, or color balance, within a selected eyebox and the ability to generate more than one specific eyebox for multiple observer applications. The strength of these gratings can be varied to modify the diffraction efficiency for each color so that the balance of colors can be varied within the eyebox. This balancing of colors can be used, for example, to compensate for color imbalance within the light source or image generator. Illuminating a first holographic medium produces a first real image of a diffusing screen in a defined eyebox. A second hologram is recorded in the holographic medium using the real image produced by the first hologram as an object such that when the holographic medium is illuminated, it produces a second real image of a diffusing screen in the defined eyebox. A third hologram is recorded in the holographic medium using the second real image as an object. The third hologram is formed by multiple exposures of the holographic medium with a plurality of selected spectral components. The plurality of selected spectral components comprises optical wavelengths that may correspond to the colors red, green and blue, respectively. The third hologram may be formed by exposing the holographic medium with a single optical wavelength with first, second and third angles of incidence being selected for each exposure of the holographic medium to the optical wavelength.

Proceedings ArticleDOI
04 Dec 1990
TL;DR: In this paper, an algorithm for reconstructing the surface shape of a nonrigid transparent object, such as water, from the apparent motion of the observed pattern is described. The algorithm is based on the optical and statistical analysis of the distortions.
Abstract: An algorithm is described for reconstructing the surface shape of a nonrigid transparent object, such as water, from the apparent motion of the observed pattern. This algorithm is based on the optical and statistical analysis of the distortions. It consists of the following parts: extraction of optical flow, averaging of each point trajectory obtained from the optical flow sequence, calculation of the surface normal using optical characteristics, and reconstruction of the surface. The algorithm is applied to synthetic and real images to demonstrate its performance.

Proceedings ArticleDOI
16 Jun 1990
TL;DR: The authors present an efficient approach for reliably matching a set of points extracted from the environment of a mobile robot by means of passive stereo vision using two or three cameras.
Abstract: The authors present an efficient approach for reliably matching a set of points extracted from the environment of a mobile robot by means of passive stereo vision using two or three cameras. First, feature points corresponding to points with high curvature are extracted from each image using an efficient approach. The epipolar geometry and some powerful configuration constraints are then combined to match these points. A correspondence between curves is then established using figural continuity. Results obtained on real images are given.
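
A minimal sketch of the epipolar test at the core of such matchers: a candidate pair (p, q) is kept only if q lies close to the epipolar line Fp of p. The fundamental matrix F is assumed known here (the paper derives the geometry from the calibrated rig), and the configuration and figural-continuity constraints are omitted.

```python
import numpy as np

def epipolar_matches(pts_l, pts_r, F, tol=1.5):
    """pts_l, pts_r: (N, 2) and (M, 2) feature points; F: 3x3 fundamental matrix."""
    matches = []
    for i, p in enumerate(pts_l):
        l = F @ np.array([p[0], p[1], 1.0])   # epipolar line in the right image
        l /= np.hypot(l[0], l[1])             # normalize for point-line distance
        d = np.abs(pts_r[:, 0] * l[0] + pts_r[:, 1] * l[1] + l[2])
        j = int(np.argmin(d))
        if d[j] < tol:
            matches.append((i, j))
    return matches
```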

Journal ArticleDOI
TL;DR: A general, robust evaluator for edge detectors, based on local edge coherence, is presented; it can be incorporated in a feedback mechanism to automatically adjust edge detection parameters (e.g. edge thresholds) for adaptive detection of edges in real images.

Proceedings Article
01 Jan 1990
TL;DR: In this article, a camera model which accounts for major sources of camera distortion, namely radial, decentering and thin prism distortions, is proposed for stereo camera calibration, where the calibration parameters are estimated using a closed-form solution based on a distortion-free camera model.
Abstract: The objective of stereo camera calibration is to estimate the internal and external parameters of each camera. Using these parameters, the three-dimensional position of a point in the scene, identified and matched in two stereo images, can be determined by the method of triangulation. We present in this paper a camera model which accounts for major sources of camera distortion: radial, decentering, and thin-prism distortions. The proposed calibration procedure consists of two steps. In the first step, calibration parameters are estimated using a closed-form solution based on a distortion-free camera model. In the second step, the parameters estimated in the first step are improved iteratively through nonlinear optimization, taking into account camera distortions. According to minimum variance estimation, the objective function to be minimized is the mean-square discrepancy between the observed image points and their inferred image projections computed with the estimated calibration parameters. We introduce a type of measure which can be used to directly evaluate the performance of calibration and compare calibrations among different systems. The validity and the performance of our calibration procedure are tested on real images taken by wide-angle lenses. Results consistently show significant improvements over less complete camera models.

Proceedings ArticleDOI
04 Dec 1990
TL;DR: Tests of the algorithm on real image sequences show that various multispectral constraints from the visible and infrared spectrum can be used to compute optical flow fields in the presence of noise.
Abstract: Multispectral constraints are exploited for optical flow computation. The theoretical basis and conditions for using multispectral images are described. An optical flow algorithm using multispectral constraints is outlined. Tests of the algorithm on real image sequences show that various multispectral constraints from the visible and infrared spectrum can be used to compute optical flow fields in the presence of noise.
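
The underlying trick can be sketched in a few lines: each spectral band contributes one brightness-constancy equation in the same unknowns (u, v), so two or more bands make the flow solvable pointwise by least squares. The function and argument names below are illustrative, not from the paper.

```python
import numpy as np

def multispectral_flow(Ex, Ey, Et):
    """Ex, Ey, Et: (n_bands,) derivative values at one pixel, one row per band."""
    A = np.column_stack([Ex, Ey])
    uv, *_ = np.linalg.lstsq(A, -np.asarray(Et), rcond=None)
    return uv   # ill-conditioned where all band gradients are parallel
```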

Proceedings ArticleDOI
03 Apr 1990
TL;DR: The combined segmentation and motion estimation algorithm performs very well in a very noisy environment; mean filtering is more effective for white Gaussian noise, while median filtering is more effective for salt-and-pepper and burst noise.
Abstract: A motion compensated image sequence enhancement algorithm is presented. A combined segmentation and motion estimation algorithm is employed. A temporal or a spatiotemporal low-pass filter is then applied. Mean and median filters are presented as low-pass filters. The temporal filtering is performed over the motion path of each pixel, which is provided by the motion-estimation algorithm. The spatial filtering does not blur the boundaries of the moving objects because the boundary locations are provided by the segmentation algorithm. The performance of the combined algorithm is examined using computer-generated and real image sequences corrupted by additive white Gaussian noise. The algorithm performs very well in a very noisy environment. Mean filtering is more effective in the case of white Gaussian noise, and median filtering is more effective in the case of salt-and-pepper noise and burst noise.
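
A minimal sketch of filtering along the motion path: for each pixel of a reference frame, gather the intensities that the motion estimates say correspond to it in every frame, then take the mean (Gaussian noise) or median (salt-and-pepper or burst noise). The `paths` array is a stand-in for whatever the motion estimator produces.

```python
import numpy as np

def filter_along_motion(frames, paths, use_median=False):
    """frames: (T, H, W); paths: (T, H, W, 2) integer (row, col) positions of
    each reference-frame pixel in every frame, from the motion estimator."""
    T = frames.shape[0]
    samples = np.stack([frames[t, paths[t, ..., 0], paths[t, ..., 1]]
                        for t in range(T)])
    return np.median(samples, axis=0) if use_median else samples.mean(axis=0)
```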

Proceedings ArticleDOI
04 Dec 1990
TL;DR: It is shown that for 'small' field of view imaging systems, incorrect knowledge of the camera center does not affect the determination of the location of the camera significantly, and a model of error based on the amount of error in placing the image center conforms to the errors obtained for experiments with synthetic and real data.
Abstract: A study is made of the effect of errors in estimates of the image center and focal length on pose refinement and other related (3-D inference from 2-D images) problems/algorithms. The authors show that for 'small' field of view imaging systems, incorrect knowledge of the camera center does not affect the determination of the location of the camera significantly. Incorrect estimates of the focal length only significantly affect the determination of the z-component (i.e. parallel to the optical axis) of the translation in camera coordinates. The output of the pose refinement algorithm is used to calculate the relative orientation between the coordinate frames of the same camera in two or more different positions as a prelude to computation of 3-D depths of new points by pseudo-triangulation. A model of error for this depth based on the amount of error in placing the image center conforms to the errors obtained for experiments with synthetic and real data. New points are located to an average accuracy of 1.5 mm and 0.3 ft for the two real image sequences respectively.

Proceedings ArticleDOI
01 Mar 1990
TL;DR: In this paper, a method for the recovery of environment structure and camera motion from a sequence of images taken by a camera in motion is presented, which explicitly models the perceived temporal variation of the scene structure in the form of a dynamical system.
Abstract: We present a method for the recovery of environment structure and camera motion from a sequence of images taken by a camera in motion. Unlike previous approaches, our method of Dynamic Motion Vision explicitly models the perceived temporal variation of the scene structure in the form of a dynamical system. We use the Kalman Filter algorithm to optimally estimate depth values at every picture cell from optical flow. We interleave a least-squares motion estimation with the stages of the Kalman Filter. Our algorithm can therefore estimate both the structure of a scene and the camera motion simultaneously in an incremental fashion which improves the estimates as new images become available. Results of experiments on synthetic and real images are presented.
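
The per-cell recursion can be pictured as a scalar Kalman update. The sketch below assumes an inverse-depth state (a common choice because it is closer to linear in the image data, though the paper's exact parameterization is not restated here) and abstracts away how measurements are formed from the optical flow.

```python
def kalman_depth_update(state, meas, meas_var, process_var=1e-4):
    """state = (inv_depth, var); meas = an inverse-depth measurement for this cell."""
    inv_depth, var = state
    var += process_var                      # predict: camera motion adds uncertainty
    k = var / (var + meas_var)              # Kalman gain
    inv_depth += k * (meas - inv_depth)     # correct toward the measurement
    return inv_depth, (1.0 - k) * var
```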

Journal ArticleDOI
TL;DR: The application of an anthropomorphic, retina-like visual sensor for optical flow and depth estimation is presented, and the main advantage is considerable data reduction, while a high spatial resolution is preserved in the part of the field of view corresponding to the focus of attention.

Journal ArticleDOI
TL;DR: An algorithm to compute axes of zero-curvature from disparities alone is developed and shown to be quite robust against violations of its basic assumptions for synthetic data with relatively large controlled deviations.
Abstract: Obtaining exact depth from binocular disparities is hard if camera calibration is needed. We will show that qualitative information can be obtained from stereo disparities with little computation and without prior knowledge (or computation) of camera parameters. First, we derive two expressions that order all matched points in the images by depth in two distinct ways from image coordinates only. Using one for tilt estimation and point separation (in depth) demonstrates some anomalies observed in psychophysical experiments, most notably the “induced size effect.” We apply the same approach to detect qualitative changes in the curvature of a contour on the surface of an object, with either x- or y-coordinate fixed. Second, we develop an algorithm to compute axes of zero-curvature from disparities alone. The algorithm is shown to be quite robust against violations of its basic assumptions for synthetic data with relatively large controlled deviations. It performs almost as well on real images, as demonstrated on an image of four cans at different orientations.

Journal ArticleDOI
TL;DR: It is shown that a binocular observer can recover the depth and three-dimensional motion of a rigid planar patch without using any correspondences between the left and right image frames (static) or between the successive dynamic frames (dynamic).
Abstract: It is shown that a binocular observer can recover the depth and three-dimensional motion of a rigid planar patch without using any correspondences between the left and right image frames (static) or between the successive dynamic frames (dynamic). Uniqueness and robustness issues are studied with respect to this problem and experimental results are given from the application of the theory to real images.

Proceedings ArticleDOI
04 Dec 1990
TL;DR: A novel technique is presented for reconstructing the 3-D structure and motion of a scene undergoing relative rotational motion with respect to the camera; a grouping algorithm is developed which exploits spatio-temporal constraints of the common motion to achieve a reliable description of discrete point correspondences as curved trajectories in the image plane.
Abstract: A novel technique is presented for reconstructing the 3-D structure and motion of a scene undergoing relative rotational motion with respect to the camera. Given image correspondences of point features tracked over many frames, first, a grouping algorithm is developed which exploits spatio-temporal constraints of the common motion to achieve a reliable description of discrete point correspondences as curved trajectories (general conics in the case of rotational motion) in the image plane. In contrast, trajectories fitted to points independently of each other lead to arbitrary image descriptions and very inaccurate 3-D parameters. Second, a novel closed-form solution, under perspective projection, for the 3-D motion and location of points from the computed image trajectories is presented. Both stages are applied to real image sequences with good results.
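
The trajectory-fitting stage can be sketched as a standard algebraic conic fit: a tracked point's image positions are fitted with a general conic ax^2 + bxy + cy^2 + dx + ey + f = 0 via the smallest singular vector of the design matrix. This is only the generic fit, without the paper's common-motion grouping constraints.

```python
import numpy as np

def fit_conic(pts):
    """pts: (N, 2) image positions of one tracked point over many frames."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]   # conic coefficients (a, b, c, d, e, f), unit norm
```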

Patent
31 May 1990
TL;DR: In this paper, a method and apparatus for aligning a reflective surface with an alignment axis in a representative environment of an interferometer is described, where an image of the reflective surface is focused onto a diffuse screen to form a spot image thereon.
Abstract: A method and apparatus is disclosed for aligning a reflective surface with an alignment axis in a representative environment of an interferometer. An image of the reflective surface is focused onto a diffuse screen to form a spot image thereon. Rays of the spot image emanating from the diffuse screen are collimated. Some of the collimated rays are focused onto a detector to form a non-inverted image spot. A portion of the collimated rays are intercepted and inverted by means of an image inverter aligned with the alignment axis. The inverted rays are focused onto the detector to form an inverted image spot. The reflecting surface is moved so as to cause the inverted image spot and the non-inverted image spot to coincide, at which point the reflecting surface is aligned with the alignment axis.

Proceedings ArticleDOI
04 Dec 1990
TL;DR: In this article, the intensity values recorded from multiple images of moving objects acquired simultaneously under different conditions of illumination are used to compute a dense, local representation of optical flow, where each image is assumed to satisfy the standard optical flow constraint equation.
Abstract: A novel method is described to compute a dense, local representation of optical flow. The idea is to use the intensity values recorded from multiple images of moving objects acquired simultaneously under different conditions of illumination. Each image is assumed to satisfy the standard optical flow constraint equation. Multiple images give rise to multiple constraint equations. When the optical flow and the 2-D motion field coincide, these multiple equations are in the same unknowns. A description is given of the basic theory, and the theory is illustrated on a real image motion sequence. All computations are local, independent and relatively simple. No iteration steps are required. It is suggested that the requirement to obtain simultaneous images under different conditions of illumination be satisfied by using spectrally distinct illumination and sensing.