scispace - formally typeset

Showing papers on "Orientation (computer vision) published in 2000"


Patent
30 Jun 2000
TL;DR: In this article, a system and method for recognizing gestures is presented. The method comprises obtaining image data, determining a hand pose estimation, and producing a frontal view of the hand.
Abstract: A system and method for recognizing gestures. The method comprises obtaining image data and determining a hand pose estimation. A frontal view of a hand is then produced. The hand is then isolated from the background. The resulting image is then classified as a type of gesture. In one embodiment, determining a hand pose estimation comprises performing background subtraction and computing a hand pose estimation based on an arm orientation determination. In another embodiment, a frontal view of a hand is produced by performing perspective unwarping and scaling. The system that implements the method may be a personal computer with a stereo camera coupled thereto.

421 citations
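The background-subtraction step in the pipeline above can be sketched minimally with NumPy; the function name and threshold value here are illustrative assumptions, not details from the patent:

```python
import numpy as np

def subtract_background(frame, background, threshold=30):
    """Binary foreground mask: pixels that differ from the background
    model by more than `threshold` intensity levels."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Toy example: a uniform background and a frame with a bright 2x2 "hand".
background = np.full((4, 4), 10, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200  # foreground region
mask = subtract_background(frame, background)
```

The resulting mask isolates the candidate hand/arm region on which the arm-orientation determination would then operate.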


Journal ArticleDOI
TL;DR: A technique for performing three-dimensional pattern recognition by use of in-line digital holography, where the complex amplitude distribution generated by a 3D object at an arbitrary plane located in the Fresnel diffraction region is recorded by phase-shifting interferometry.
Abstract: We present a technique for performing three-dimensional (3D) pattern recognition by use of in-line digital holography. The complex amplitude distribution generated by a 3D object at an arbitrary plane located in the Fresnel diffraction region is recorded by phase-shifting interferometry. The digital hologram contains information about the 3D object's shape, location, and orientation. This information allows us to perform 3D pattern-recognition techniques with high discrimination and to measure 3D orientation changes. Experimental results are presented.

394 citations


01 Jan 2000
TL;DR: This thesis describes a statistical method for 3D object detection that has yielded the first algorithm that can reliably detect faces varying from frontal view to full profile view, and the first algorithm that can reliably detect cars over a wide range of viewpoints.
Abstract: In this thesis, we describe a statistical method for 3D object detection. In this method, we decompose the 3D geometry of each object into a small number of viewpoints. For each viewpoint, we construct a decision rule that determines if the object is present at that specific orientation. Each decision rule uses the statistics of both object appearance and “non-object” visual appearance. We represent each set of statistics using a product of histograms. Each histogram represents the joint statistics of a subset of wavelet coefficients and their position on the object. Our approach is to use many such histograms representing a wide variety of visual attributes. Using this method, we have developed the first algorithm that can reliably detect faces that vary from frontal view to full profile view and the first algorithm that can reliably detect cars over a wide range of viewpoints.

288 citations
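The product-of-histograms decision rule amounts to summing log-likelihood ratios of object versus non-object statistics; a toy sketch (the histograms and quantised feature values below are invented for illustration, not taken from the thesis):

```python
import numpy as np

def detection_score(features, obj_hists, bg_hists):
    """Sum of log-likelihood ratios: each quantised feature value indexes
    a pair of histograms (object vs. non-object statistics)."""
    return sum(np.log(obj_hists[i][f] / bg_hists[i][f])
               for i, f in enumerate(features))

# Two hypothetical visual attributes, each quantised into 4 bins.
obj_hists = [np.array([0.1, 0.2, 0.3, 0.4]), np.array([0.4, 0.3, 0.2, 0.1])]
bg_hists = [np.array([0.25, 0.25, 0.25, 0.25]), np.array([0.25, 0.25, 0.25, 0.25])]
score_obj = detection_score([3, 0], obj_hists, bg_hists)  # object-like evidence
score_bg = detection_score([0, 3], obj_hists, bg_hists)   # non-object evidence
```

A positive score favours the object hypothesis; thresholding it yields the per-viewpoint decision rule the abstract describes.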


Proceedings ArticleDOI
16 Jun 2000
TL;DR: Omni-directional images provide adequate representations to support both accurate and qualitative navigation, since landmarks remain visible in all images, in contrast to a standard small field-of-view camera.
Abstract: We describe a method for vision-based robot navigation with a single omni-directional (catadioptric) camera. We show how omni-directional images can be used to generate the representations needed for two main navigation modalities: Topological Navigation and Visual Path Following. Topological Navigation relies on the robot's qualitative global position, estimated from a set of omni-directional images obtained during a training stage (compressed using PCA). To deal with illumination changes, an eigenspace approximation to the Hausdorff measure is exploited. We present a method to transform omni-directional images to Bird's Eye Views that correspond to scaled orthographic views of the ground plane. These images are used to locally control the orientation of the robot through visual servoing. Visual Path Following is used to accurately control the robot along a prescribed trajectory, using bird's eye views to track landmarks on the ground plane. Due to the simplified geometry of these images, the robot's pose can be estimated easily and used for accurate trajectory following. Omni-directional images facilitate landmark-based navigation, since landmarks remain visible in all images, in contrast to a standard small field-of-view camera. Omni-directional images also provide adequate representations to support both accurate and qualitative navigation. Results are described in the paper.

220 citations
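The Bird's Eye View construction is a planar homography of the ground plane; a minimal sketch of mapping points through a 3x3 homography (the matrix here is a toy scaling, not a calibrated catadioptric warp):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography H via homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Euclidean

# A pure scaling homography, standing in for the ground-plane rectification.
H = np.diag([2.0, 2.0, 1.0])
out = apply_homography(H, [[1.0, 3.0], [0.0, 0.0]])
```

In the paper's setting, H would be derived from the catadioptric geometry so that the warped image is a scaled orthographic view of the ground plane.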


Journal ArticleDOI
01 Jul 2000
TL;DR: A set of algorithms for the creation of underwater mosaics is presented and their use as visual maps for underwater vehicle navigation is illustrated and the problem of pose estimation is tackled, using the available information on the camera intrinsic parameters.
Abstract: This paper presents a set of algorithms for the creation of underwater mosaics and illustrates their use as visual maps for underwater vehicle navigation. First, we describe the automatic creation of video mosaics, which deals with the problem of image motion estimation in a robust and automatic way. The motion estimation is based on an initial matching of corresponding areas over pairs of images, followed by the use of a robust matching technique, which can cope with a high percentage of incorrect matches. Several motion models, established under the projective geometry framework, allow for the creation of high-quality mosaics where no assumptions are made about the camera motion. Several tests were run on underwater image sequences, testifying to the good performance of the implemented matching and registration methods. Next, we deal with the issue of determining the 3D position and orientation of a vehicle from new views of a previously created mosaic. The problem of pose estimation is tackled using the available information on the camera intrinsic parameters. This information ranges from full knowledge of the parameters to the case where they are estimated using a self-calibration technique based on the analysis of an image sequence captured under pure rotation. The performance of the 3D positioning algorithms is evaluated using images for which accurate ground truth is available.

173 citations


Proceedings ArticleDOI
18 Mar 2000
TL;DR: A method for augmented reality with a stereo vision sensor and a video see-through head-mounted display (HMD) that can synchronize the display timing between the virtual and real worlds so that the alignment error is reduced.
Abstract: In an augmented reality system, it is required to obtain the position and orientation of the user's viewpoint in order to display the composed image while maintaining a correct registration between the real and virtual worlds. All the procedures must be done in real time. This paper proposes a method for augmented reality with a stereo vision sensor and a video see-through head-mounted display (HMD). It can synchronize the display timing between the virtual and real worlds so that the alignment error is reduced. The method calculates camera parameters from three markers in image sequences captured by a pair of stereo cameras mounted on the HMD. In addition, it estimates the real-world depth from a pair of stereo images in order to generate a composed image maintaining consistent occlusions between real and virtual objects. The depth estimation region is efficiently limited by calculating the position of the virtual object by using the camera parameters. Finally, we have developed a video see-through augmented reality system which mainly consists of a pair of stereo cameras mounted on the HMD and a standard graphics workstation. The feasibility of the system has been successfully demonstrated with experiments.

163 citations


Proceedings ArticleDOI
03 Sep 2000
TL;DR: A novel motion estimation algorithm is presented, which starts by computing 3D orientation tensors from the image sequence; these are combined under the constraints of a parametric motion model to produce velocity estimates.
Abstract: Motion estimation in image sequences is an important step in many computer vision and image processing applications. Several methods for solving this problem have been proposed, but very few manage to achieve a high level of accuracy without sacrificing processing speed. This paper presents a novel motion estimation algorithm, which gives excellent results on both counts. The algorithm starts by computing 3D orientation tensors from the image sequence. These are combined under the constraints of a parametric motion model to produce velocity estimates. Evaluated on the well-known Yosemite sequence, the algorithm shows an accuracy substantially better than those obtained using previously published methods. Computationally, the algorithm is simple and can be implemented by means of separable convolutions, which also makes it fast.

158 citations
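The core idea, that for a translating pattern the spatiotemporal gradients all lie in the plane orthogonal to the direction (vx, vy, 1), can be sketched with a 3D structure tensor. This is a bare-bones illustration of the tensor step only, not the paper's full parametric-model algorithm:

```python
import numpy as np

def tensor_velocity(vol):
    """Estimate a constant translation (vx, vy) from an x-y-t volume:
    build the 3D orientation (structure) tensor from spatiotemporal
    gradients; the velocity direction is its null-space eigenvector,
    normalised so the temporal component equals 1."""
    gy, gx, gt = np.gradient(vol.astype(float))  # vol indexed [y, x, t]
    g = np.stack([gx.ravel(), gy.ravel(), gt.ravel()])
    T = g @ g.T                                  # 3x3 orientation tensor
    w, V = np.linalg.eigh(T)                     # eigenvalues ascending
    v = V[:, 0]                                  # smallest-eigenvalue vector
    return v[0] / v[2], v[1] / v[2]

# A textured pattern translating by (1, 0) pixels per frame.
y, x, t = np.meshgrid(np.arange(8), np.arange(8), np.arange(4), indexing="ij")
vol = np.sin(0.5 * (x - t)) + np.sin(0.5 * y)
vx, vy = tensor_velocity(vol)
```

With central differences the recovered velocity is close to (1, 0); the paper replaces these plain gradients with separable convolutions and folds the per-pixel tensors into a parametric motion model.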


Journal ArticleDOI
TL;DR: Evidence is presented suggesting that diffusion tensor tertiary eigenvectors may specify the orientation of laminar sheets in the cardiac ventricles, and DTMRI-based estimates of fiber orientation are shown to agree closely with those measured using histological techniques.
Abstract: An imaging method for the rapid reconstruction of fiber orientation throughout the cardiac ventricles is described. In this method, gradient-recalled acquisition in the steady state (GRASS) imaging is used to measure ventricular geometry in formaldehyde-fixed hearts at high spatial resolution. Diffusion-tensor magnetic resonance imaging (DTMRI) is then used to estimate fiber orientation as the principal eigenvector of the diffusion tensor measured at each image voxel in these same hearts. DTMRI-based estimates of fiber orientation in formaldehyde-fixed tissue are shown to agree closely with those measured using histological techniques, and evidence is presented suggesting that diffusion tensor tertiary eigenvectors may specify the orientation of ventricular laminar sheets. Using a semiautomated software tool called HEARTWORKS, a set of smooth contours approximating the epicardial and endocardial boundaries in each GRASS short-axis section is estimated. These contours are then interconnected to form a volumetric model of the cardiac ventricles. DTMRI-based estimates of fiber orientation are interpolated into these volumetric models, yielding reconstructions of cardiac ventricular fiber orientation based on at least an order of magnitude more sampling points than can be obtained using manual reconstruction methods. © 2000 Biomedical Engineering Society.

158 citations


Proceedings ArticleDOI
03 Sep 2000
TL;DR: Gabor-like space-variant filters are used for iteratively expanding an initially empty image containing just one or a few seeds, and a directional image model is used for tuning the filters according to the underlying ridge orientation.
Abstract: Introduces a method for the generation of synthetic fingerprint images. Gabor-like space-variant filters are used for iteratively expanding an initially empty image containing just one or a few seeds. A directional image model, whose inputs are the number and location of the fingerprint cores and deltas, is used for tuning the filters according to the underlying ridge orientation. Very realistic fingerprint images are obtained after the final noising-and-rendering stage.

158 citations


Journal ArticleDOI
TL;DR: The proposed morphological technique is insensitive to noise, skew and text orientation, and is also free from artifacts that are usually introduced by both fixed/optimal global thresholding and fixed-size block-based local thresholding.
Abstract: This paper presents a morphological technique for text extraction from images. The proposed morphological technique is insensitive to noise, skew and text orientation. It is also free from artifacts that are usually introduced by both fixed/optimal global thresholding and fixed-size block-based local thresholding. Examples are presented to illustrate the performance of the proposed method.

157 citations


Patent
27 Apr 2000
TL;DR: In this paper, a computer-assisted technique is presented for constructing a 3D model on top of one or more images (e.g., photographs) such that the model's parameters automatically match those of the real-world object depicted in the photograph(s).
Abstract: A computer-assisted technique for constructing a three-dimensional model on top of one or more images (e.g., photographs) such that the model's parameters automatically match those of the real world object depicted in the photograph(s). Camera parameters such as focal length, position, and orientation in space may be determined from the images such that the projection of a three-dimensional model through the calculated camera parameters matches the projection of the real world object through the camera onto the image surface. Modeling is accomplished using primitives, such as boxes or pyramids, which may be intuitively manipulated to construct the three-dimensional model on a video display or other display screen of a computer system with a two-dimensional input controller (e.g., a mouse, joystick, etc.) such that the displayed three-dimensional object manipulation emulates physical three-dimensional object manipulation. Camera and primitive parameters are incrementally updated to provide visual feedback of the effect of additional constraints on the three-dimensional model, making apparent which user action may have been responsible for any failure to provide a modeling solution and, thus, allowing for rapid reversal and correction thereof. Surface properties (i.e., textures) may be extracted from the images for use in the three-dimensional model.

Proceedings ArticleDOI
10 Sep 2000
TL;DR: The problem of detecting straight lines in gray-scale images is posed as an inverse problem based on the inverse Radon operator, which relates the parameters determining the location and orientation of the lines to the noisy input image.
Abstract: The problem of determining the location and orientation of straight lines in images is often encountered in the fields of computer vision and image processing. Traditionally, the Hough transform has been widely used to solve this problem for binary images, due to its simplicity and effectiveness. In this paper we pose the problem of detecting straight lines in gray-scale images as an inverse problem. We treat the input image as noisy observations, which are related to the underlying transform-domain image through the inverse Hough transform operator. We then regularize this inverse problem using constraints that accentuate peaks in the Hough parameter space. We present four different forms of such constraints and demonstrate their effectiveness. Finally, we show how our scheme can alternatively be viewed as one of finding an optimal representation of the image in terms of elements chosen from a redundant dictionary of lines, and thus is a form of adaptive signal representation.
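The classical Hough transform that this formulation builds on can be sketched in a few lines: each binary edge point votes in a (rho, theta) accumulator, and peaks correspond to lines. The paper's regularised inverse formulation is not reproduced here:

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    """Classical Hough accumulator: each (y, x) point votes for all
    (rho, theta) pairs satisfying rho = x*cos(theta) + y*sin(theta)."""
    diag = int(np.ceil(np.hypot(*shape)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    cols = np.arange(n_theta)
    for y, x in points:
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rho + diag, cols] += 1  # offset rho so indices are non-negative
    return acc, thetas, diag

# A horizontal line y = 3, sampled as ten points.
pts = [(3, x) for x in range(10)]
acc, thetas, diag = hough_lines(pts, (10, 10))
r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
rho, theta = r_idx - diag, thetas[t_idx]
```

The accumulator peak recovers rho = 3 and theta near 90 degrees, the normal parameterisation of the line y = 3.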

Patent
29 Sep 2000
TL;DR: In this paper, a computer executed control loop recognizes features in the image and finds a corresponding position and orientation of a CAD model by projecting the CAD representation onto a virtual camera and moving the virtual camera to track the relative motion of the real imaging device, according to an efficient "visual servoing" algorithm.
Abstract: The invention displays computer graphics in combination with imagery of real objects (24), while maintaining apparent alignment notwithstanding any changes of viewpoint of an imaging device (20) relative to the real object (24). A computer executed control loop recognizes features (26) in the image and finds a corresponding position and orientation of a CAD model by projecting the CAD representation onto a 'virtual camera' and 'moving' the virtual camera to track the relative motion of the real imaging device, according to an efficient 'visual servoing' algorithm. In an alternate embodiment of the invention, computing tasks are divided between an 'image processing host' (30) and one or more 'display hosts' (260, 264, 266) which communicate over a channel (262). Bandwidth is conserved by performing image registration locally at the display host(s) using the 'visual servoing' algorithm.

Patent
13 Nov 2000
TL;DR: In this paper, an image metrology reference target (120) is provided that, when placed in a scene of interest, facilitates image analysis for various measurement purposes, such as automatic detection and bearing determination.
Abstract: In one example, an orientation dependent radiation source (122A, 122B) emanates radiation having at least one detectable property that varies as a function of a rotation of the orientation dependent radiation source (122A, 122B) and/or an observation distance from the orientation dependent radiation source (e.g., a distance between the source and a radiation detection device). In one example, an image metrology reference target (120) is provided that when placed in a scene of interest facilitates image analysis for various measurement purposes. Such a reference target (120) may include automatic detection means for facilitating an automatic detection of the reference target (120) in an image of the reference target (120) obtained by a camera, and bearing determination means for facilitating a determination of position and/or orientation of the reference target with respect to the camera. In one example, the bearing determination means of the reference target (120) includes one or more orientation dependent radiation sources (122A, 122B).

Proceedings ArticleDOI
24 Apr 2000
TL;DR: This paper presents a graph theoretic method that is applicable to data association problems where the features are observed via a batch process and described in the context of two possible navigation applications: metric map building with simultaneous localisation, and topological map based localisation.
Abstract: Data association is the process of relating features observed in the environment to features viewed previously or to features in a map. This paper presents a graph theoretic method that is applicable to data association problems where the features are observed via a batch process. Batch observations detect a set of features simultaneously or with sufficiently small temporal difference that, with motion compensation, the features can be represented with precise relative coordinates. This data association method is described in the context of two possible navigation applications: metric map building with simultaneous localisation, and topological map based localisation. Experimental results are presented using an indoor mobile robot with a 2D scanning laser sensor. Given two scans from different unknown locations, the features common to both scans are mapped to each other and the relative change in pose (position and orientation) of the vehicle between the two scans is obtained.

Patent
14 Nov 2000
TL;DR: In this paper, a user can automatically calibrate a projector-camera system to recover the mapping from a given point in the source image and its corresponding point in a camera image, and vice-versa.
Abstract: The present invention enables a user to automatically calibrate a projector-camera system (14/10) to recover the mapping from a given point in the source (pre-projection) image to its corresponding point in the camera image, and vice versa. One or more calibration patterns are projected onto a flat surface (18) with possibly unknown location and orientation by a projector (14) with possibly unknown location, orientation and focal length. Images of these patterns are captured by a camera (10) mounted at a possibly unknown location and orientation and with possibly unknown focal length. Parameters for mapping between the source image and the camera image are computed (22). The present invention can become an essential component of a projector-camera system, supporting applications such as automatic keystone correction and vision-based control of computer systems.

Proceedings Article
01 Jan 2000
TL;DR: The general applicability of the so-called "Manhattan world" assumption about the scene statistics of city and indoor scenes is explored and it is shown that it holds in a large variety of less structured environments including rural scenes.
Abstract: Preliminary work by the authors made use of the so-called "Manhattan world" assumption about the scene statistics of city and indoor scenes. This assumption stated that such scenes were built on a Cartesian grid, which led to regularities in the image edge gradient statistics. In this paper we explore the general applicability of this assumption and show that, surprisingly, it holds in a large variety of less structured environments, including rural scenes. This enables us, from a single image, to determine the orientation of the viewer relative to the scene structure and also to detect target objects which are not aligned with the grid. These inferences are performed using a Bayesian model with probability distributions (e.g., on the image gradient statistics) learnt from real data.

Proceedings ArticleDOI
01 Jan 2000
TL;DR: A new approach for geometric distortion correction based on image normalization is presented, in which the watermark is embedded and detected in the normalized image regardless of its size, orientation and flipping direction.
Abstract: A new approach for geometric distortion correction based on image normalization is presented in this paper. By normalization we mean geometrically transforming the image into a standard form. The parameters by which the image is normalized are estimated from the geometric moments of the image. This paper presents a system in which the watermark is embedded and detected in the normalized image, so that it can be recovered regardless of the image's size, orientation and flipping direction.
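Estimating orientation from geometric moments, the basis of the normalisation step, can be sketched as follows (the blob image and function name are toy illustrations):

```python
import numpy as np

def orientation_from_moments(img):
    """Principal-axis orientation of an intensity image, computed from
    its second-order central moments."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - cx) ** 2 * img).sum()
    mu02 = ((y - cy) ** 2 * img).sum()
    mu11 = ((x - cx) * (y - cy) * img).sum()
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

# An elongated blob along the main diagonal: orientation is 45 degrees.
img = np.eye(8)
theta = orientation_from_moments(img)
```

Normalisation then rotates the image by -theta (and rescales using the moment magnitudes) so the watermark is always embedded and detected in the same canonical frame.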

Journal ArticleDOI
01 Sep 2000
TL;DR: A system capable of automatically reconstructing a prototype 3D model from a minimum number of range images of an object, and each set of results was found to be reasonably consistent with an intuitive human search.
Abstract: The focus of this paper is to design and implement a system capable of automatically reconstructing a prototype 3D model from a minimum number of range images of an object. Given an ideal 3D object model, the system iteratively renders range and intensity images of the model from a specified position, assimilates the range information into a prototype model, and determines the sensor pose (position and orientation) from which an optimal amount of previously unrecorded information may be acquired. Reconstruction is terminated when the model meets a given threshold of accuracy. Such a system has applications in the context of robot navigation, manufacturing, or hazardous materials handling. The system has been tested successfully on several synthetic data models, and each set of results was found to be reasonably consistent with an intuitive human search. The number of views necessary to reconstruct an adequate 3D prototype depends on the complexity of the object or scene and the initial data collected. The prototype models which the system recovers compare well with the ideal models.

Patent
22 Dec 2000
TL;DR: In this article, a surgical navigation system has a computer with a memory and display connected to a surgical instrument or pointer and position tracking system, so that the location and orientation of the pointer are tracked in real time and conveyed to the computer.
Abstract: A surgical navigation system has a computer with a memory and display connected to a surgical instrument or pointer and position tracking system, so that the location and orientation of the pointer are tracked in real time and conveyed to the computer. The computer memory is loaded with data from an MRI, CT, or other volumetric scan of a patient, and this data is utilized to dynamically display 3-dimensional perspective images in real time of the patient's anatomy from the viewpoint of the pointer. The images are segmented and displayed in color to highlight selected anatomical features and to allow the viewer to see beyond obscuring surfaces and structures. The displayed image tracks the movement of the instrument during surgical procedures. The instrument may include an imaging device such as an endoscope or ultrasound transducer, in which case the system also displays the image from this device from the same viewpoint and enables the two images to be fused so that a combined image is displayed. The system is adapted for easy and convenient operating room use during surgical procedures.

Journal ArticleDOI
01 Aug 2000
TL;DR: A novel lane detection algorithm for visual traffic surveillance applications under the auspice of intelligent transportation systems that works well under practical visual surveillance conditions and is efficient as it only requires one image frame to determine the road center lines.
Abstract: This paper describes a novel lane detection algorithm for visual traffic surveillance applications under the auspices of intelligent transportation systems. Traditional lane detection methods for vehicle navigation typically use spatial masks to isolate instantaneous lane information from on-vehicle camera images. When surveillance is concerned, complete lane and multiple-lane information is essential for tracking vehicles and monitoring lane-change frequency from overhead cameras, where traditional methods become inadequate. The algorithm presented in this paper extracts complete multiple-lane information by utilizing prominent orientation and length features of lane markings and curb structures to discriminate against other minor features. Essentially, edges are first extracted from the background of a traffic sequence, then thinned and approximated by straight lines. From the resulting set of straight lines, orientation and length discriminations are carried out three-dimensionally with the aid of two-dimensional (2-D) to three-dimensional (3-D) coordinate transformation and K-means clustering. By doing so, edges with strong orientation and length affinity are retained and clustered, while short and isolated edges are eliminated. Overall, the merits of this algorithm are as follows. First, it works well under practical visual surveillance conditions. Second, using K-means for clustering offers a robust approach. Third, the algorithm is efficient, as it requires only one image frame to determine the road center lines. Fourth, it computes multiple-lane information simultaneously. Fifth, the center lines determined are accurate enough for the intended application.
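The orientation-clustering step can be illustrated with a plain 1-D K-means over edge orientations (the angle values below are invented; the paper clusters jointly on 3-D orientation and length):

```python
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Plain 1-D K-means (Lloyd's algorithm): groups line segments whose
    orientations agree, separating lane-like clusters from outliers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, np.sort(centers)

# Edge orientations in degrees: two lane-like clusters near 5 and 85 degrees.
angles = np.array([4.0, 5.0, 6.0, 84.0, 85.0, 86.0])
labels, centers = kmeans_1d(angles, k=2)
```

Segments falling in the same cluster are retained as candidate lane structure; isolated orientations end up in sparse clusters and can be discarded.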

Patent
04 Feb 2000
TL;DR: In this paper, a three-dimensional position and orientation sensing apparatus is presented, including an image input section which inputs a single image, acquired by an image acquisition apparatus, showing at least three markers having color or geometric characteristics, where the three-dimensional positional information of the markers with respect to the object to be measured is known in advance.
Abstract: A three-dimensional position and orientation sensing apparatus including: an image input section which inputs an image acquired by an image acquisition apparatus and showing at least three markers having color or geometric characteristics as one image, three-dimensional positional information of the markers with respect to an object to be measured being known in advance; a region extracting section which extracts a region corresponding to each marker in the image; a marker identifying section which identifies the individual markers based on the color or geometric characteristics of the markers in the extracted regions; and a position and orientation calculating section which calculates the three-dimensional position and orientation of the object to be measured with respect to the image acquisition apparatus, by using positions of the identified markers in the image input to the image input section, and the positional information of the markers with respect to the object to be measured.

Proceedings ArticleDOI
TL;DR: In this paper, the authors extend their previous work on robust image digest functions describing ideas how to make the hash function independent of image orientation and size, which can be clearly utilized for other applications, such as a search index for an efficient image database search.
Abstract: Digital watermarks have recently been proposed for authentication and fingerprinting of both video data and still images and for integrity verification of visual multimedia. In such applications, the watermark must be oblivious and has to depend on a secret key and on the original image. It is important that the dependence on the key be sensitive, while the dependence on the image be continuous (robust). Both requirements can be satisfied using special image digest (hash) functions that return the same bit-string for a whole class of images derived from an original image using common processing operations, including rotation and scaling. It is further required that two completely different images produce completely different bit-strings. In this paper, we extend our previous work on robust image digest functions, describing ideas for making the hash function independent of image orientation and size. The robust image digest can clearly be utilized for other applications, such as a search index for efficient image database search. © (2000) COPYRIGHT SPIE--The International Society for Optical Engineering.
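The paper's actual construction is not reproduced here, but the idea of a digest that survives reorientation can be shown with a deliberately crude toy: a global intensity histogram is invariant to 90-degree rotations and flips, and thresholding its bins yields a stable bit-string:

```python
import numpy as np

def toy_digest(img, bins=8):
    """Toy orientation-insensitive digest: the global intensity histogram
    is unchanged by 90-degree rotations and flips, so thresholding its
    bins against their median yields a stable bit-string."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return (hist > np.median(hist)).astype(int)

# A simple two-tone image: top half bright, bottom half dark.
img = np.zeros((8, 8))
img[:4] = 200.0
d1 = toy_digest(img)
d2 = toy_digest(np.rot90(img))  # rotated copy: same digest
d3 = toy_digest(img.T)          # transposed (flipped) copy: same digest
```

A real robust digest must additionally survive scaling, cropping and compression, and must depend sensitively on a secret key; this sketch only illustrates the invariance idea.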

Journal ArticleDOI
TL;DR: This paper addresses the problem of scanning both the color and geometry of real objects and displaying realistic images of the scanned objects from arbitrary viewpoints with a complete system that uses a stereo camera setup with active lighting to scan the object surface geometry and color.
Abstract: This paper addresses the problem of scanning both the color and geometry of real objects and displaying realistic images of the scanned objects from arbitrary viewpoints. We describe a complete system that uses a stereo camera setup with active lighting to scan the object surface geometry and color. Scans expressed in sensor coordinates are registered into a single object-centered coordinate system by aligning both the color and geometry where the scans overlap. The range data are integrated into a surface model using a robust hierarchical space carving method. The fit of the resulting approximate mesh to data is improved and the mesh structure is simplified using mesh optimization methods. In addition, a method for view-dependent texturing of the reconstructed surfaces is described. The method projects the color data from the input images onto the surface model and blends the various images depending on the location of the viewpoint and other factors such as surface orientation.

Journal ArticleDOI
TL;DR: A method for preserving the intrinsic orientation of the data during nonrigid warps of the image and a number of similarity measures are proposed, based on the DT itself, on the DT deviatoric, and on indices derived from the DT, to drive an elastic matching algorithm applied to the task of registration of 3D images of the human brain.

Patent
15 Jun 2000
TL;DR: In this paper, an automatic vision guidance system for an agricultural vehicle is described, which uses a K-means clustering algorithm in image processing to distinguish between crop and non-crop features.
Abstract: An automatic vision guidance system for an agricultural vehicle is disclosed and described. The vision guidance system uses a K-means clustering algorithm in image processing to distinguish between crop and non-crop features. The vision guidance system utilizes moment algorithms to determine the location and orientation of crop rows, from which desired wheel angles are determined and steering is commanded. The vision guidance system may adjust the location and orientation of visually determined crop rows according to a predetermined distance between crop rows. Further, the vision guidance system may utilize a redundant number of regions of interest in determining crop row locations and orientations.

Patent
Sebastien Roy1
30 Aug 2000
TL;DR: In this paper, the authors present a method for computing the location and orientation of an object in 3D space, which comprises the steps of marking a plurality of feature points on a 3D model and corresponding feature points in a 2D query image; for all possible subsets of three two-dimensional feature points marked in step (a), computing the four possible three-dimensional rigid motion solutions of a set of three points in three-dimensional space such that after each of the four rigid motions, under a fixed perspective projection, the three three-dimensional points are mapped precisely to the three corresponding two-dimensional points.
Abstract: A method for computing the location and orientation of an object in three-dimensional space. The method comprises the steps of: (a) marking a plurality of feature points on a three-dimensional model and corresponding feature points on a two-dimensional query image; (b) for all possible subsets of three two-dimensional feature points marked in step (a), computing the four possible three-dimensional rigid motion solutions of a set of three points in three-dimensional space such that after each of the four rigid motions, under a fixed perspective projection, the three three-dimensional points are mapped precisely to the three corresponding two-dimensional points; (c) for each solution found in step (b), computing an error measure derived from the errors in the projections of all three-dimensional marked points in the three-dimensional model which were not among the three points used in the solution, but which did have corresponding marked points in the two-dimensional query image; (d) ranking the solutions from step (c) based on the computed error measure; and (e) selecting the best solution based on the ranking in step (d). Also provided is a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform the method steps of the present invention and a computer program product embodied in a computer-readable medium for carrying out the methods of the present invention.
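Steps (c) through (e), scoring each candidate rigid motion by the reprojection error of the marked points not used to compute it, then ranking and selecting, can be sketched as follows. The four-fold solution generation of step (b) is assumed done elsewhere; `project` uses a unit focal length purely for illustration.

```python
import numpy as np

def project(points3d, R, t, f=1.0):
    """Fixed perspective projection of (N, 3) points under pose (R, t)."""
    p = points3d @ R.T + t
    return f * p[:, :2] / p[:, 2:3]

def rank_solutions(solutions, holdout3d, holdout2d):
    """Steps (c)-(e): score each candidate rigid motion (R, t) by the
    summed squared reprojection error of the marked points NOT used to
    compute it, and return the candidates sorted best-first."""
    def err(sol):
        R, t = sol
        return np.sum((project(holdout3d, R, t) - holdout2d) ** 2)
    return sorted(solutions, key=err)
```

The first element of the returned list is the selected best solution of step (e); a correct pose drives the hold-out reprojection error to zero under noise-free correspondences.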

Patent
22 Dec 2000
TL;DR: In this article, a method and a system for visualizing the position and orientation of an object that is penetrating, or that has penetrated, into a subject: a first set of image data is produced from the interior of the subject before the object has penetrated, a second set is produced during or after the penetration, and the two sets are registered and superimposed to form a fused set of image data.
Abstract: In a method and a system for visualizing the position and orientation of an object that is penetrating, or that has penetrated, into a subject, a first set of image data is produced from the interior of the subject before the object has penetrated into the subject, a second set of image data is produced from the interior of the subject during or after the penetration of the object into the subject, the two sets of image data are registered and superimposed to form a fused set of image data, and an image obtained from the fused set of image data is displayed. The system has an x-ray computed tomography apparatus for producing the first set of image data, and an x-ray apparatus and/or an ultrasound apparatus for producing the second set.
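The superimposition of the two image sets can be illustrated with a simple convex blend of registered volumes. The patent does not fix the combination rule; the function name and the `alpha` weight here are assumptions for the sketch, and the inputs are assumed already registered to a common grid.

```python
import numpy as np

def fuse(pre_image, intra_image, alpha=0.5):
    """Convex blend of the pre-interventional and intra-interventional
    image sets (assumed already registered to a common grid).
    `alpha` is an assumed blending weight; the patent leaves the
    exact combination rule open."""
    pre = np.asarray(pre_image, dtype=float)
    intra = np.asarray(intra_image, dtype=float)
    return alpha * pre + (1.0 - alpha) * intra
```

In practice the blend would be displayed slice by slice, letting the penetrating object (visible in the second set) appear in the anatomical context of the first.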

Journal ArticleDOI
TL;DR: The authors aim to demonstrate that many processes which have on occasion been viewed as the exclusive province of automated, high-precision vision metrology are indeed suited to more general application across a broad range of fields that involve 3D object recording via close-range imagery.
Abstract: Through the adoption of recent innovations in automation in vision metrology, it can be demonstrated that rigorous, yet user-friendly digital photogrammetric processes of calibration and orientation/triangulation can be incorporated into a computational scheme which on the one hand meets the demands of high metric quality, while on the other offers the facilities necessary to support wider application by non-specialist users. The software system Australis, developed for image mensuration and restitution of off-line close-range photogrammetric networks, is featured to illustrate these processes and procedures. By describing the structure and components of Australis, the authors aim to demonstrate that many processes which have on occasion been viewed as the exclusive province of automated, high-precision vision metrology are indeed suited to more general application across a broad range of fields that involve 3D object recording via close-range imagery.

Patent
14 Nov 2000
TL;DR: In this article, an uncalibrated camera is used to observe the projected image and the image to be displayed is pre-warped so that the distortions induced by the misaligned projection system will exactly undo the distortion.
Abstract: The present invention provides automatic correction of any distortions produced when computer projection displays are misaligned with respect to the projection surface (such as keystoning). Although sophisticated LCD projectors now offer partial solutions to this problem, they require specialized hardware and time-consuming manual adjustment. The two key concepts in the present invention are: (1) using an uncalibrated camera to observe the projected image; and (2) pre-warping the image to be displayed so that the pre-warp exactly cancels the distortion induced by the misaligned projection system. The result is that an arbitrarily mounted projector (in an unknown orientation) still displays a perfectly aligned and rectilinear image.
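The pre-warping idea reduces to composing homographies: if the misaligned projector maps framebuffer coordinates onto the wall by some 3x3 homography (recoverable from the uncalibrated camera's view), warping the image by the inverse homography first makes the two transformations cancel. A minimal NumPy sketch under that assumption:

```python
import numpy as np

def apply_h(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def prewarp(H_proj):
    """If the misaligned projector maps framebuffer coordinates to the
    wall by H_proj, warping the displayed image by the inverse first
    cancels the keystone distortion."""
    return np.linalg.inv(H_proj)
```

Composing the projector's distortion with the pre-warp returns every point to where a perfectly aligned projector would have placed it, which is exactly the "distortions undo each other" claim in the abstract.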