
Showing papers on "Image plane published in 1994"


Journal ArticleDOI
TL;DR: Under the assumption of the radial distortion model, this paper presents a computationally efficient method for explicitly correcting the distortion of image coordinates in the frame buffer without involving the computation of camera position and orientation.
Abstract: By implicit camera calibration, we mean the process of calibrating a camera without explicitly computing its physical parameters. Implicit calibration can be used for both three-dimensional (3-D) measurement and generation of image coordinates. In this paper, we present a new implicit model based on the generalized projective mappings between the image plane and two calibration planes. The back-projection and projection processes are modelled separately to ease the computation of distorted image coordinates from known world points. A set of perspectivity constraints is derived to relate the transformation parameters of the two calibration planes. Under the assumption of the radial distortion model, we present a computationally efficient method for explicitly correcting the distortion of image coordinates in the frame buffer without involving the computation of camera position and orientation. Combined with any linear calibration technique, this method makes the camera's physical parameters explicit. An extensive experimental comparison of our methods with the classic photogrammetric method and Tsai's (1986) method, covering 3-D measurement (both absolute and relative errors), prediction of image coordinates, and the effect of the number of calibration points, is made using real images from 15 different depth values.
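As an illustration of the kind of radial distortion correction this abstract describes, here is a minimal sketch of inverting the one-coefficient radial model by fixed-point iteration. It is not the paper's method; the coefficient k1, the distortion center (cx, cy), and the iteration count are assumed values for the example.

```python
import numpy as np

def undistort_radial(xd, yd, cx, cy, k1, iterations=5):
    """Invert the radial model x_d = x_u * (1 + k1 * r_u**2) by
    fixed-point iteration, mapping a distorted pixel (xd, yd) back to
    its undistorted position."""
    xd0, yd0 = xd - cx, yd - cy   # distorted offset from the center
    xu, yu = xd0, yd0             # initial guess: no distortion
    for _ in range(iterations):
        r2 = xu ** 2 + yu ** 2
        xu = xd0 / (1.0 + k1 * r2)
        yu = yd0 / (1.0 + k1 * r2)
    return xu + cx, yu + cy

# Example with assumed calibration values:
print(undistort_radial(610.0, 455.0, cx=320.0, cy=240.0, k1=1e-7))
```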

350 citations


Proceedings ArticleDOI
01 Jan 1994
TL;DR: This paper presents a methodology for producing accurate camera models for systems with automated, variable-parameter lenses and applies it to produce an "adjustable," perspective-projection camera model based on Tsai's fixed camera model.
Abstract: Camera systems with automated zoom lenses are inherently more useful than those with fixed-parameter lenses. Variable-parameter lenses enable us to produce better images by matching the camera's sensing characteristics to the conditions in a scene. They also allow us to make measurements by noting how the scene's image changes as the parameters are varied. The reason variable-parameter lenses are not more commonly used in machine vision is that they are difficult to model for continuous ranges of lens settings. We show in this thesis that traditional modeling approaches cannot capture the complex relationships between control parameters and imaging processes. Furthermore, we demonstrate that the assumption of idealized behavior in traditional models can lead to significant performance problems in color imaging and focus ranging. By using more complex models and control strategies we were able to reduce or eliminate these performance problems. The principal contribution of our research is a methodology for empirically producing accurate camera models for systems with variable-parameter lenses. We also developed a comprehensive taxonomy for the property of "image center." To demonstrate the effectiveness of our methodology we applied it to produce an "adjustable," perspective-projection camera model based on Tsai's fixed camera model. We calibrated and tested our model on two different automated camera systems. In both cases the calibrated model operated across continuous ranges of focus and zoom with an average error of less than 0.14 pixels between the predicted and the measured positions of features in the image plane. We also calibrated and tested our model on one automated camera system across a continuous range of aperture and achieved similar results.

248 citations


Book ChapterDOI
01 Jun 1994
TL;DR: This contribution investigates local differential techniques for estimating optical flow and its derivatives based on the brightness change constraint; using the tensor calculus representation, it builds the Taylor expansion of the gray-value derivatives as well as of the optical flow in a spatiotemporal neighborhood.
Abstract: This contribution investigates local differential techniques for estimating optical flow and its derivatives based on the brightness change constraint. By using the tensor calculus representation we build the Taylor expansion of the gray-value derivatives as well as of the optical flow in a spatiotemporal neighborhood. Such a formulation provides a unifying framework for all existing local differential approaches and allows us to derive new systems of equations to estimate the optical flow and its derivatives. We also tested various optical flow estimation approaches on real image sequences recorded by a calibrated camera fixed on the arm of a robot. By moving the arm of the robot along a precisely defined trajectory we can determine the true displacement rate of scene surface elements projected into the image plane and compare it quantitatively with the results of different optical flow estimators.
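For readers unfamiliar with the brightness change constraint mentioned here, the following is a minimal least-squares sketch in the spirit of local differential methods (a plain Lucas-Kanade-style solve, not the paper's tensor formulation); the window size and forward temporal difference are arbitrary choices.

```python
import numpy as np

def local_flow(I0, I1, x, y, w=7):
    """Estimate optical flow (u, v) at pixel (x, y) from the brightness
    change constraint Ix*u + Iy*v + It = 0, solved by least squares
    over a (2w+1)^2 neighborhood of two consecutive frames I0, I1."""
    Iy, Ix = np.gradient(I0.astype(float))        # spatial derivatives
    It = I1.astype(float) - I0.astype(float)      # temporal derivative
    sl = (slice(y - w, y + w + 1), slice(x - w, x + w + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```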

237 citations


Patent
Peter de Groot
13 Jan 1994
TL;DR: In this paper, an optical system for measuring the topography of an object includes an interferometer (1) with a multiple-color or white-light source (4), a mechanical scanning apparatus (13) for varying the optical path difference between the object and a reference surface, a two-dimensional detector array (9), and digital signal processing apparatus (2) for determining surface height from interference data.
Abstract: An optical system for measuring the topography of an object (3) includes an interferometer (1) with a multiple-color or white-light source (4), a mechanical scanning apparatus (13) for varying the optical path difference between the object and a reference surface, a two-dimensional detector array (9), and digital signal processing apparatus (2) for determining surface height from interference data. Interferograms for each of the detector image points in the field of view are generated simultaneously by scanning the object in a direction approximately perpendicular to the illuminated object surface while recording detector data in digital memory. These recorded interferograms for each image point are then transformed into the spatial frequency domain by Fourier analysis, and the surface height for each corresponding object surface point is obtained by examination of the complex phase as a function of spatial frequency. A complete three-dimensional image of the object surface is then constructed from the height data and corresponding image plane coordinates.
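The per-pixel analysis described here (Fourier transform of the scan-axis interferogram, height from the complex phase as a function of frequency) can be sketched as follows. This is a hedged illustration, not the patented algorithm; the band-selection threshold and sign convention are assumptions.

```python
import numpy as np

def height_from_interferogram(signal, scan_step):
    """Estimate one pixel's surface height from a white-light
    interferogram sampled at uniform scan positions: transform to the
    frequency domain and fit the slope of the unwrapped spectral phase,
    which locates the interferogram along the scan axis."""
    spectrum = np.fft.rfft(signal - np.mean(signal))
    freqs = np.fft.rfftfreq(len(signal), d=scan_step)
    mag = np.abs(spectrum)
    band = mag > 0.5 * mag.max()               # dominant interference band
    phase = np.unwrap(np.angle(spectrum[band]))
    # A delay h contributes phase -2*pi*f*h, so height = -slope
    slope = np.polyfit(2.0 * np.pi * freqs[band], phase, 1)[0]
    return -slope
```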

192 citations


Patent
28 Sep 1994
TL;DR: In this paper, the relative distances of the reflecting structure from the corresponding elements of the arrays are compared to determine the elevation of reflecting points with respect to an image plane, and a three-dimensional image of the reflectors is generated.
Abstract: Elongated arrays of individual ultrasonic transducer elements at spaced locations emit wedge-shaped beams of ultrasonic energy such that energy emitted by corresponding elements of the arrays travels into a common region of the object to be imaged. Energy emitted by the elements of both arrays is reflected by the same structure within the object. The relative distances of the reflecting structure from the corresponding elements of the arrays are compared to determine the elevation of the reflecting points with respect to an image plane, and a three-dimensional image of the reflectors is generated. A virtual image including information responsive to the elevation of the points can be derived, and used to recalculate the visible display corresponding to a user-selected image plane. The transducer may include a parallel pair of arrays.

172 citations


Journal ArticleDOI
TL;DR: A technique for measuring the motion of a rigid, textured plane in the frontoparallel plane is developed and tested on synthetic and real image sequences and offers a simple, novel way of tackling the ‘aperture’ problem.
Abstract: A technique for measuring the motion of a rigid, textured plane in the frontoparallel plane is developed and tested on synthetic and real image sequences. The parameters of motion — translation in two dimensions, and rotation about a previously unspecified axis perpendicular to the plane — are computed by a single-stage, non-iterative process which interpolates the position of the moving image with respect to a set of reference images. The method can be extended to measure additional parameters of motion, such as expansion or shear. Advantages of the technique are that it does not require tracking of features, measurement of local image velocities or computation of high-order spatial or temporal derivatives of the image. The technique is robust to noise, and it offers a simple, novel way of tackling the ‘aperture’ problem. An application to the computation of robot egomotion is also described.

163 citations


Journal ArticleDOI
01 Jun 1994
TL;DR: This paper introduces visual compliance, a new vision-based control scheme that lends itself to task-level specification of manipulation goals and derives the hybrid Jacobian matrix that is used to effect visual compliance.
Abstract: This paper introduces visual compliance, a new vision-based control scheme that lends itself to task-level specification of manipulation goals. Visual compliance is effected by a hybrid vision/position control structure. Specifically, the two degrees of freedom parallel to the image plane of a supervisory camera are controlled using visual feedback, and the remaining degree of freedom (perpendicular to the camera image plane) is controlled using position feedback provided by the robot joint encoders. With visual compliance, the motion of the end effector is constrained so that the tool center of the end effector maintains "contact" with a specified projection ray of the imaging system. This type of constrained motion can be exploited for grasping, parts mating, and assembly. The authors begin by deriving the projection equations for the vision system. They then derive equations used to position the manipulator prior to the execution of visual compliant motion. Following this, the authors derive the hybrid Jacobian matrix that is used to effect visual compliance. Experimental results are given for a number of scenarios, including grasping using visual compliance.
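To make the hybrid split concrete, here is a hedged sketch of one control step under a pinhole model: the in-plane motion comes from image feedback, the depth motion from encoder feedback. The proportional law and gains are assumptions for illustration, not the authors' hybrid Jacobian formulation.

```python
import numpy as np

def project(p_cam, f):
    """Pinhole projection of a 3-D point in camera coordinates."""
    X, Y, Z = p_cam
    return np.array([f * X / Z, f * Y / Z])

def visual_compliance_step(p_cam, target_px, z_goal, f, k_img=0.5, k_z=0.5):
    """One illustrative control step: the two DOFs parallel to the
    image plane are driven by the image-plane error (visual feedback),
    the DOF along the optical axis by the encoder-measured depth error."""
    err_px = np.asarray(target_px) - project(p_cam, f)
    Z = p_cam[2]
    # In-plane image Jacobian of a pinhole camera: du = (f/Z) dX,
    # so the commanded Cartesian step is dX = (Z/f) du
    dX, dY = k_img * (Z / f) * err_px
    dZ = k_z * (z_goal - Z)
    return np.array([dX, dY, dZ])
```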

156 citations


Journal ArticleDOI
TL;DR: Experimental results, obtained by computer simulations, confirm that the presented coding algorithm is a promising scheme for the application at extremely low transmission bit rates.
Abstract: An object-based analysis-synthesis image sequence coder for transmission bit rates between 8 and 16 kbit/s is presented. Each moving object is described by three sets of parameters defining its shape, motion, and colour. Coding is based on the source model of flexible 2D objects which move translationally in the image plane, as it has been used in an implementation for a 64 kbit/s ISDN videophone by Hotter (1990). In order to cut the bit rate from 64 kbit/s to 8 kbit/s, QCIF image resolution is applied instead of CIF resolution. Image analysis and coding of object parameters have been adapted to the reduced resolution and to the changed parameter statistics, respectively. Going beyond Hotter's coder, predictive coding is used for encoding polygons and splines to improve the coding efficiency of shapes. Vector quantization is applied instead of the DCT for coding the luminance and chrominance parameters of the object textures. Uncovered background regions are encoded by applying adaptive prediction from either the neighbouring static background or a special background memory. Experimental results, obtained by computer simulations, confirm that the presented coding algorithm is a promising scheme for applications at extremely low transmission bit rates. This is shown by comparing the picture qualities obtained with the presented algorithm and a block-based hybrid-DCT scheme corresponding to H.261/RM8 at 11 kbit/s.

139 citations


Journal ArticleDOI
08 May 1994
TL;DR: Algorithms for 3D robotic visual tracking of moving targets whose motion is 3D and consists of translational and rotational components are presented to track selected features on moving objects and to place their projections on the image plane at desired positions by appropriate camera motion.
Abstract: Algorithms for 3D robotic visual tracking of moving targets whose motion is 3D and consists of translational and rotational components are presented. The objective of the system is to track selected features on moving objects and to place their projections on the image plane at desired positions by appropriate camera motion. The most important characteristics of the proposed algorithms are the use of a single camera mounted on the end-effector of a robotic manipulator (eye-in-hand configuration), and the fact that these algorithms do not require accurate knowledge of the relative distance of the target object from the camera frame. This fact makes these algorithms particularly useful in environments that are difficult to calibrate. The camera model used introduces a number of parameters that are estimated on-line, further reducing the algorithms' reliance on precise calibration of the system. An adaptive control algorithm compensates for modeling errors, tracking errors, and unavoidable computational delays which result from time-consuming image processing. Experimental results are presented to verify the efficacy of the proposed algorithms. These experiments were performed using a multi-robotic system consisting of Puma 560 manipulators.

113 citations


Patent
10 Jan 1994
TL;DR: In this paper, an arrangement for generating reconstruction information to facilitate reconstruction of three-dimensional features of objects in a scene, based on two-dimensional images of the scene taken from a plurality of locations, is presented.
Abstract: An arrangement for generating reconstruction information to facilitate reconstruction of three-dimensional features of objects in a scene based on two-dimensional images of the scene taken from a plurality of locations. The arrangement includes a plurality of elements including an epipole generating means, a homography generating means and a depth value generating means. The epipole generating means identifies the location of the epipoles, that is, the coordinates in each image plane, in which the images were recorded, of the point of intersection of the line interconnecting the centers of projection of the image recorders that record the images. The homography generating means uses the epipoles and the coordinates in the respective images of selected reference points to generate a homography that relates the coordinates of all points in the images. Finally, the depth value generating means uses the homography generated by the homography generating means and the coordinates of the projection of at least one other point, other than the selected reference points, in the images to generate a depth value representative of the distance of the other point relative to the location of at least one of the image recorders. Using the depth values generated for a number of points of objects in the scene and the coordinates of the projections of the points in at least one of the image planes, as determined from the images, the three-dimensional (Euclidean) structures of the objects can be determined.
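The relation the abstract sketches (a reference-plane homography plus an epipole determining a per-point depth) is often written in homogeneous coordinates as p2 ~ H p1 + k e2. Below is a minimal sketch of recovering k linearly; the formulation and the normalization are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def projective_depth(p1, p2, H, e2):
    """Given matching homogeneous image points p1, p2 (3-vectors), a
    reference-plane homography H (3x3) and the epipole e2 in the second
    image, recover the scalar k in  p2 ~ H @ p1 + k * e2  (a projective
    depth relative to the reference plane).  Solved linearly for (s, k)
    in  s * p2 = H @ p1 + k * e2."""
    A = np.stack([p2, -e2], axis=1)       # 3x2 system in (s, k)
    b = H @ p1
    (s, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    return k / s                          # scale so p2 itself has weight 1
```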

93 citations


Proceedings ArticleDOI
08 May 1994
TL;DR: The number of unknown parameters to be calibrated is drastically reduced, enabling simple and useful calibration in the geometric camera calibration of a fish-eye lens mounted on a CCD TV camera.
Abstract: Presents a new algorithm for the geometric camera calibration of a fish-eye lens (a high distortion lens) mounted on a CCD TV camera. The algorithm determines a mapping between points in the world coordinate system and their corresponding point locations in the image plane. The parameters to be calibrated are effective focal length, one-pixel width on the image plane, image distortion center, and distortion coefficients. A simple calibration pattern consisting of equally spaced dots is introduced as a reference for calibration. Some parameters to be calibrated are eliminated by setting up the calibration pattern precisely and assuming negligible distortion at the image distortion center. Thus, the number of unknown parameters to be calibrated is drastically reduced, enabling simple and useful calibration. The method employs a polynomial transformation between points in the world coordinate system and their corresponding image plane locations. The coefficients of the polynomial are determined using the Lagrangian estimation. Furthermore, the effectiveness of the proposed calibration method is confirmed by experimentation.
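To illustrate the polynomial radial mapping the abstract mentions, here is a hedged sketch using an ordinary least-squares fit (numpy's polyfit) in place of the paper's Lagrangian estimation; the dot measurements and the degree-3 choice are invented for the example.

```python
import numpy as np

# Hypothetical measurements: radial distance of each calibration dot
# from the distortion center in the image (pixels), against the true
# radial position of the dot on the calibration pattern (mm).
r_image = np.array([0.0, 55.2, 109.1, 160.3, 207.9, 251.0])
r_world = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])

# Fit a polynomial image->world radial mapping (degree is an assumption).
coeffs = np.polyfit(r_image, r_world, deg=3)
mapping = np.poly1d(coeffs)

# World-space radius for an observed pixel radius of 130 px:
print(mapping(130.0))
```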

Journal ArticleDOI
TL;DR: This work presents a method for exploiting this type of information to map the fiber orientations in the image plane using three diffusion-weighted images with sensitizing gradients along x, y, and u, an axis at 45 degrees with respect to x and y.

Patent
17 Nov 1994
TL;DR: In this paper, a method of producing a mask for use within a photolithographic illumination system characterized by a transmission function in which light is transmitted through non-opaque portions of the mask positioned in an object plane and in which an image is formed on an image plane is disclosed.
Abstract: A systematic method of producing a mask for use within a photolithographic illumination system characterized by a transmission function in which light is transmitted through non-opaque portions of the mask positioned in an object plane and in which an image is formed on an image plane is disclosed herein. The method includes the steps of defining a binary image pattern to be formed by the illumination system on the image plane; generating a continuous mask function of continuously-varying phase which satisfies predetermined error criteria based on the transmission function and the binary image pattern; transforming the mask function into a quadrature-phase mask function by dividing the continuously-varying phase into four phase levels; and generating the mask in accordance with the quadrature-phase mask function, wherein the mask includes a plurality of pixel regions each of which has a transmittance corresponding to one of the four phase levels.
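The quadrature-phase step (dividing a continuously-varying phase into four levels) reduces to snapping each phase sample to the nearest of {0, pi/2, pi, 3*pi/2}. A minimal sketch, with the nearest-level rounding rule as an assumption:

```python
import numpy as np

def quantize_phase(mask_phase):
    """Snap a continuously-varying mask phase (radians) to the nearest
    of the four quadrature levels {0, pi/2, pi, 3*pi/2}, wrapping
    circularly so that phases near 2*pi map to level 0."""
    phase = np.mod(np.asarray(mask_phase, dtype=float), 2 * np.pi)
    idx = np.round(phase / (np.pi / 2)).astype(int) % 4
    return idx * (np.pi / 2)

# Example: 6.1 rad is near 2*pi and correctly snaps to level 0.
print(quantize_phase(np.array([0.1, 1.6, 3.0, 6.1])))
```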

Patent
22 Apr 1994
TL;DR: In this article, a pixel compensated electro-optical display system utilizing a pixel compensator to correct image problems is presented, where pixels have elongated geometric shapes that compensate for distortions normally occurring in optical systems utilizing one or more reflective surfaces.
Abstract: A pixel compensated electro-optical display system utilizing a pixel compensator to correct image problems. The pixel compensator comprises pixels having elongated geometric shapes that compensate for distortions normally occurring in optical systems utilizing one or more reflective surfaces. The pixels are configured so that when the image plane is viewed at a particular angle, the image is substantially corrected without complex optical refiguring. The pixel compensated electro-optical display system of the present invention accordingly minimizes the need to correct the reflected image. The optical system of the present invention finds utility in applications that require a reflected image because of spatial and other constraints. The optical system of the present invention finds particular application in vehicle "heads up" display systems as well as in virtual reality and total immersion display systems.

Patent
04 Feb 1994
TL;DR: In this article, a system and method for 3D imaging utilize a lens to perform a two-dimensional Fourier transform of an interference pattern while focusing the pattern on a two-dimensional detector array which is positioned in the image plane of the lens.
Abstract: A system and method for three-dimensional imaging utilize a lens to perform a two-dimensional Fourier transform of an interference pattern while focusing the pattern on a two-dimensional detector array which is positioned in the image plane of the lens. This allows immediate previewing of the imaged object for proper positioning. Coherent energy beams are utilized to create a series of interference patterns, or image-plane holograms, each at a different frequency of the source energy beams. Furthermore, at each frequency, the relative phase between an object and a reference energy beam is varied to capture the complex values associated with the interference patterns. After capturing and storing the various interference patterns, a computer performs a one-dimensional Fourier transform, or other simplified processing to generate the three-dimensional image of the object. The image resolution extends to the micron range making this system and method easily adaptable to a variety of three-dimensional inspection applications.
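The final reconstruction step (a one-dimensional transform across the stack of frequency-stepped, complex-valued holograms) can be sketched as below. The array layout, the FFT direction, and the range-bin spacing formula are assumptions in the spirit of stepped-frequency imaging, not the patent's exact processing.

```python
import numpy as np

def range_profile(holograms, freq_step, c=3e8):
    """holograms: complex array of shape (n_freq, H, W) holding the
    image-plane hologram recorded at each source frequency (complex
    values recovered from the phase-shifted interference patterns).
    A 1-D FFT along the frequency axis yields, per pixel, intensity
    versus range; the bin spacing is c / (2 * n_freq * freq_step)."""
    profile = np.fft.fft(holograms, axis=0)
    n = holograms.shape[0]
    depths = np.arange(n) * c / (2.0 * n * freq_step)
    return depths, np.abs(profile)
```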

Patent
26 Jul 1994
TL;DR: In this paper, an auto-leveling system is used to detect a deviation between the focus plane of the projection optical system and the surface of the shot area at each of multiple points.
Abstract: In an apparatus which positions the average plane of a wafer parallel to the best focus plane of a projection optical system even if the wafer surface is uneven, a leveling stage is tilted on the basis of a detection signal from an auto-leveling system, and the surface of a shot area on the wafer is positioned at a predetermined tilt relative to the focus plane of the projection optical system. While this position is kept unchanged, the deviation between the focus plane of the projection optical system and the surface of the shot area is detected at each of multiple points within the shot area by means of an auto-focus system. The relative tilt between the average plane of the shot area, obtained from these deviations, and the focus plane of the projection optical system is then calculated, and using this calculated tilt together with the detection signal of the auto-leveling system, the focus plane of the projection optical system is positioned parallel to the average plane of the shot area.
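The "average plane of the shot area obtained from plural deviations" is, in effect, a least-squares plane fit. A minimal sketch with invented sample measurements; names and the fitting choice are assumptions standing in for the apparatus's internal calculation.

```python
import numpy as np

def average_plane(points):
    """Least-squares plane z = a*x + b*y + c through multi-point
    (x, y, deviation) focus measurements; (a, b) is the residual tilt
    to level out."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs   # a, b, c

# Four hypothetical focus deviations (mm, mm, um) across a shot area:
print(average_plane([(0, 0, 0.02), (10, 0, 0.05), (0, 10, -0.01), (10, 10, 0.03)]))
```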

Proceedings ArticleDOI
Cox
21 Jun 1994
TL;DR: This paper extends results of a maximum likelihood two-frame stereo algorithm to the case of N cameras; the algorithm explicitly models occlusion between features of the principal pair, and the possibility of occlusions in the N-2 additional views is also modelled.
Abstract: This paper extends results of a maximum likelihood two-frame stereo algorithm to the case of N cameras. The N-camera stereo algorithm determines the "best" set of correspondences between a given pair of cameras, referred to as the principal cameras. Knowledge of the relative positions of the cameras allows the 3D point hypothesized by an assumed correspondence of two features in the principal pair to be projected onto the image plane of the remaining N-2 cameras. These N-2 points are then used to verify proposed matches. Not only does the algorithm explicitly model occlusion between features of the principal pair, but the possibility of occlusions in the N-2 additional views is also modelled. The benefits and importance of this are experimentally verified. Like other multi-frame stereo algorithms, the computational and memory costs of this approach increase linearly with each additional view. Experimental results are shown for two outdoor scenes. It is clearly demonstrated that the number of correspondence errors is significantly reduced as the number of views/cameras is increased.
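The verification step (projecting the 3-D point hypothesized by a principal-pair match into the remaining N-2 views) can be sketched as follows. Treating a missing feature as possible occlusion rather than a veto echoes the occlusion modelling above, but the threshold and all names are assumptions.

```python
import numpy as np

def verify_match(X, projections, features, tol=2.0):
    """Project a hypothesized 3-D point X into the remaining views and
    count supporting evidence.  `projections` are 3x4 camera matrices,
    `features` a list of Nx2 feature arrays per view; a view with no
    feature within `tol` pixels is counted as a possible occlusion."""
    Xh = np.append(np.asarray(X, dtype=float), 1.0)
    support, occlusions = 0, 0
    for P, feats in zip(projections, features):
        x = P @ Xh
        uv = x[:2] / x[2]                       # pixel coordinates
        d = np.min(np.linalg.norm(feats - uv, axis=1)) if len(feats) else np.inf
        if d < tol:
            support += 1
        else:
            occlusions += 1
    return support, occlusions
```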

Patent
18 Feb 1994
TL;DR: In this paper, a tunable interferometer is used to produce a monochromatic continuous image at an image plane and includes two mirrors having substantially parallel surfaces and an adjustable spacing therebetween, a radiation detector (54) located at the image plane for recording the image, a filter arrangement (56) for allowing at least one predetermined range of wavelengths to pass to the detector, and a lens (50) arrangement for collecting radiation and limiting radiation incident on the interferometers to an angle which is substantially perpendicular to the substantially parallel surface of the two mirrors.
Abstract: A spectrometer comprises a tunable interferometer for producing a monochromatic continuous image at an image plane and including two mirrors (48) having substantially parallel surfaces and an adjustable spacing therebetween, a radiation detector (54) located at the image plane for recording the image, a filter arrangement (56) for allowing at least one predetermined range of wavelengths to pass to the detector, and a lens (50) arrangement for collecting radiation and limiting radiation incident on the interferometer to an angle which is substantially perpendicular to the substantially parallel surfaces of the two mirrors.

Journal ArticleDOI
TL;DR: In this paper, a linearized reflectance map based on the perspective projection model is proposed to estimate the surface height from a set of nonoverlapping triangular domains by solving a system of equations parameterized by nodal heights.
Abstract: Most conventional SFS (shape from shading) algorithms have been developed under the assumption of orthographic projection. However, the assumption is not valid when an object is not far away from the camera and, therefore, it causes severe reconstruction error in many real applications. In this research, we develop a new iterative algorithm for recovering surface heights from shaded images obtained with perspective projection. By dividing an image into a set of nonoverlapping triangular domains and approximating a smooth surface by the union of triangular surface patches, we can relate image brightness in the image plane directly to surface nodal heights in the world space via a linearized reflectance map based on the perspective projection model. To determine the surface height, we consider the minimization of a cost functional defined to be the sum of squares of the brightness error by solving a system of equations parameterized by nodal heights. Furthermore, we apply a successive linearization scheme in which the linearization of the reflectance map is performed with respect to surface nodal heights obtained from the previous iteration so that the approximation error of the reflectance map is reduced and accuracy of the reconstructed surface is improved iteratively. The proposed method reconstructs surface heights directly and does not require any additional integrability constraint. Simulation results for synthetic and real images are demonstrated to show the performance and efficiency of our new method.

Patent
16 Mar 1994
TL;DR: In this article, an optical arrangement for a flow cytometer was presented in which intense light is focused by a microscope objective of numerical aperture NA_i onto cells carried by a flow of water through the focal plane of the objective, with a second microscope lens of significantly larger numerical aperture NA_O situated opposite the objective, its optical axis and object plane coinciding with those of the objective.
Abstract: An optical arrangement for a flow cytometer, wherein intense light is focused by a microscope objective having a numerical aperture NA_i onto the cells carried by a flow of water through the focal plane of the objective, with another microscope lens situated opposite the objective, with an optical axis and object plane coinciding with those of the objective and with a numerical aperture NA_O which is significantly larger than that of the objective. The objective contains a circular field stop in, or close to, its secondary focal plane, with a diameter corresponding to a numerical aperture NA_df which is slightly larger than NA_i and much less than NA_O. The fluorescence and scattered light from the stream of cells are separated by a dichroic mirror on the basis of their different wavelengths, so that they give rise to separate images in separate image planes of the objective. A telescope is situated behind the image plane and creates an image of the field stop in a plane containing two concentric mirrors of different diameters, which separate light scattered from the cells according to the different scattering angles and direct them onto separate light detectors.

Patent
28 Dec 1994
TL;DR: In this article, a movable lensing system is adapted to focus an image on the image plane by using piezoelectric positioners to move the system a predetermined amount and on a predetermined plane parallel to the view plane so that the image focused by the lens on the array of sensing elements is displaced by the movement of the lens.
Abstract: An imaging system for implementation of a microscan includes an array of sensing elements that define an image plane. A movable lensing system is adapted to focus an image on the image plane. Piezoelectric positioners are provided to move the lensing system a predetermined amount on a predetermined plane parallel to the image plane, so that the image focused by the lensing system on the array of sensing elements is displaced on the image plane by the movement of the lensing system. As the positioners move the lens, and with it the lens's optical axis, the image passing through the lens also moves on the image plane. Moving the lens back and forth or in a predetermined pattern provides dithering of the image so that microscanning can be employed.

Patent
13 May 1994
TL;DR: A volume flow meter for displaying two-dimensional volume flow through a vessel, comprising an ultrasound instrument with scan head, a location and orientation sensor mounted to the scan head and a computer connected to the ultrasound instrument and the sensor as mentioned in this paper.
Abstract: A volume flow meter for displaying two-dimensional volume flow through a vessel, comprising an ultrasound instrument with scan head, a location and orientation sensor mounted to the scan head, and a computer connected to the ultrasound instrument and the sensor. The scan head is adapted to be positioned adjacent the vessel under investigation, for generating a raster of pixels which defines a color image representing flow velocities in the vessel through an image plane of the scan head. The sensor measures position and orientation of the scan head in three dimensions and generates a signal representative thereof to the computer. The computer receives said raster of pixels and the signal from the sensor and in response calculates the position and orientation of the vessel axis in three dimensions responsive to orientation of the image plane longitudinally of the vessel. The computer then determines an angle θ between this axis and the image plane responsive to orientation of the image plane transversally to the vessel. Finally, the computer calculates and displays the volume flow as a summation of the flow velocities scaled by the tangent of the angle θ.
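The final computation lends itself to a one-line formula; the sketch below follows the abstract's wording (flow velocities summed and scaled by tan θ). The function name, units, and the per-pixel area factor are assumptions for illustration, not the patent's exact calculation.

```python
import numpy as np

def volume_flow(velocities, pixel_area, theta):
    """Sum the per-pixel flow velocities over the vessel cross-section
    in the image plane, scale by pixel area, and correct for the angle
    theta (radians) between vessel axis and image plane with tan(theta),
    per the abstract.  Units: velocities in cm/s, pixel_area in cm^2,
    result in cm^3/s."""
    return np.sum(velocities) * pixel_area * np.tan(theta)
```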

Patent
05 Jan 1994
TL;DR: In this paper, a backlight comprising a serpentine fluorescent tube nested in a uniquely shaped reflector is used to create a uniform image at the image plane, and the reflector reflects a substantial portion of the rest of the light emitted by the tube to the image planes in such a way that a very bright and uniform image is formed.
Abstract: A backlight comprising a serpentine fluorescent tube nested in a uniquely shaped reflector. Much of the light emitted from the tube is directly emitted to a bounded image plane. The reflector reflects a substantial portion of the rest of the light emitted by the tube to the image plane in such a manner that a very bright and uniform image is formed at the image plane.

Patent
13 Dec 1994
TL;DR: A device for viewing an object field containing intense light sources which could be disturbing or harmful to the human eye is described.
Abstract: A device for viewing an object field containing intense light sources which could be disturbing or harmful to the human eye. Light from the object field is focused at a focal plane located within a layer of photochromic material. An intense light source produces an opaque mask in the photochromic layer. This mask matches the location and shape of the image of the intense light source at the image plane. An eyepiece permits a viewer to view the image plane to obtain an image of the object field with light from the intense light source dimmed by the opaque mask in the photochromic layer.

Journal ArticleDOI
TL;DR: The process of image plane holography with incoherent illumination and the process of confocal imaging have similar properties; the similarities and differences between the two processes are described.
Abstract: The process of image plane holography with incoherent illumination has many significant properties. The process can produce extremely high-quality, low-noise images, section slicing, image formation through inhomogeneities, and high-resolution image formation through small apertures. The process of confocal imaging has similar properties. We describe the similarities and differences between the two processes.

Patent
23 Nov 1994
TL;DR: A stereoscopic imaging arrangement comprises an optical device (1) having an objective (2) and further lens means (3) located remotely from but in the optical path of the objective, and a stereoscopic device (4) arranged to receive light from said further lens and form an image on a photosensitive image plane as discussed by the authors.
Abstract: A stereoscopic imaging arrangement comprises a) an optical device (1) having an objective (2) and further lens means (3) located remotely from but in the optical path of the objective and b) a stereoscopic imaging device (4) arranged to receive light from said further lens means and form an image on a photosensitive image plane (7), the stereoscopic imaging device having shutter means (5) arranged to selectively occlude light exiting from left and right regions of said further lens means to form right and left images on said image plane and having means for combining said right and left images to form a stereoscopic representation of the field of view of said objective. The image may be displayed on a monitor (9) and viewed stereoscopically with switching spectacles (10).

Patent
16 Dec 1994
TL;DR: In this article, a three-dimensional correction memory is used to store deviation information relating to system inhomogeneities, which is indexed by x and y coordinates of the image plane and an attenuation coordinate associated with a plurality of standard samples.
Abstract: An x-ray system and method capable of producing very accurate RTR images corrected for x-ray system inhomogeneities. A three-dimensional correction memory is used to store deviation information relating to system inhomogeneities. The three-dimensional correction memory is indexed by x and y coordinates of the image plane and an attenuation coordinate associated with a plurality of standard samples. The standard samples are exposed in turn, and deviations between the measured attenuation and the nominal attenuation are noted and stored at the x and y coordinates in the attenuation plane for the sample then exposed. The procedure is repeated for a plurality of samples to create a three-dimensional correction array. An object to be imaged creates a set of raw intensity information. The raw information is corrected by accessing the correction memory at the x and y coordinates of each pixel to be corrected and the attenuation coordinate associated with the raw measurement. The deviation information is applied to the raw measurement to produce and store a corrected attenuation factor for each pixel.
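The correction step described (a lookup indexed by pixel position and an attenuation bin) might look like the following sketch; the additive form of the stored deviations, the binning scheme, and all names are assumptions.

```python
import numpy as np

def correct_image(raw, correction, att_edges):
    """Apply a three-dimensional correction memory: for each pixel,
    pick the attenuation bin its raw value falls in and add the stored
    deviation for (y, x, bin).  `correction` has shape (H, W, n_bins);
    `att_edges` are bin boundaries from the standard-sample exposures."""
    bins = np.clip(np.digitize(raw, att_edges) - 1, 0, correction.shape[2] - 1)
    yy, xx = np.indices(raw.shape)
    return raw + correction[yy, xx, bins]
```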

Book ChapterDOI
01 Jun 1994
TL;DR: The computation of the optical flow field from an image sequence requires the definition of constraints on the temporal change of image features to limit the motion of the body in space and of the features on the image plane.
Abstract: The computation of the optical flow field from an image sequence requires the definition of constraints on the temporal change of image features. In general, these constraints limit the motion of the body in space and/or of the features on the image plane.

Journal ArticleDOI
TL;DR: When the pulse sequence timings are carefully optimized, the mixed imaging sequence in combination with the RLSQ algorithm used in this MRI system is a reliable and precise means of obtaining relaxation time data.

Book ChapterDOI
01 Jan 1994
TL;DR: The generalized geodesic distance between two points is the length of the shortest path(s) linking these points in a minimum amount of time; this distance is used for defining a propagation function.
Abstract: The time necessary to follow a path defined on a grey scale image is defined as the sum of the image values along the path. The geodesic time associated with two points of the image is nothing but the smallest amount of time necessary to link these two points. Starting from this notion, we define a new geodesic metric on the image plane: the generalized geodesic distance. The generalized geodesic distance between two points is the length of the shortest path(s) linking these points in a minimum amount of time. This distance is used for defining a propagation function. Applications to shape description and interpolation from contour data are provided.
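Since the geodesic time is the minimal path sum of grey values, it can be computed with Dijkstra's algorithm on the pixel grid. A minimal sketch; 4-connectivity and charging each entered pixel's value are discretization choices, not the authors' definition.

```python
import heapq
import numpy as np

def geodesic_time(image, start):
    """Smallest geodesic time (sum of grey values along a path) from
    `start` to every pixel, via Dijkstra on the 4-connected grid."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    dist[start] = float(image[start])
    heap = [(dist[start], start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                      # stale entry
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + float(image[ny, nx])
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return dist
```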