
Showing papers on "3D reconstruction published in 2002"


Journal ArticleDOI
TL;DR: This article presents a detailed review of some of the most widely used camera calibration techniques; the principal aim has been to present them all with the same notation.

536 citations


Journal ArticleDOI
TL;DR: It is shown that complete surfel-based reconstructions can be created by repeatedly applying an algorithm called Surfel Sampling that combines sampling and parameter estimation to fit a single surfel to a small, bounded region of space-time.
Abstract: In this paper we study the problem of recovering the 3D shape, reflectance, and non-rigid motion properties of a dynamic 3D scene. Because these properties are completely unknown and because the scene's shape and motion may be non-smooth, our approach uses multiple views to build a piecewise-continuous geometric and radiometric representation of the scene's trace in space-time. A basic primitive of this representation is the dynamic surfel, which (1) encodes the instantaneous local shape, reflectance, and motion of a small and bounded region in the scene, and (2) enables accurate prediction of the region's dynamic appearance under known illumination conditions. We show that complete surfel-based reconstructions can be created by repeatedly applying an algorithm called Surfel Sampling that combines sampling and parameter estimation to fit a single surfel to a small, bounded region of space-time. Experimental results with the Phong reflectance model and complex real scenes (clothing, shiny objects, skin) illustrate our method's ability to explain pixels and pixel variations in terms of their underlying causes—shape, reflectance, motion, illumination, and visibility.

154 citations


Journal Article
TL;DR: In this article, two methods for RFM-based 3D reconstruction, the inverse RFM method and the forward RFM method, were investigated, and the results showed that the forward method can achieve a better reconstruction accuracy than the inverse method.
Abstract: The rational function model (RFM) is an alternative sensor model allowing users to perform photogrammetric processing. The RFM has been used as a replacement sensor model in some commercial photogrammetric systems due to its capability of maintaining the accuracy of the physical sensor models and its generic characteristic of supporting sensor-independent photogrammetric processing. With RFM parameters provided, end users are able to perform photogrammetric processing including ortho-rectification, 3D reconstruction, and DEM generation with an absence of the physical sensor model. In this research, we investigate two methods for RFM-based 3D reconstruction, the inverse RFM method and the forward RFM method. Detailed derivations of the algorithmic procedure are described. The emphasis is placed on the comparison of these two reconstruction methods. Experimental results show that the forward RFM can achieve a better reconstruction accuracy. Finally, real Ikonos stereo pairs were employed to verify the applicability and the performance of the reconstruction method.
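The RFM described above maps ground coordinates to image coordinates as a ratio of two polynomials. A minimal sketch of that mapping, with a made-up 4-term (affine) basis rather than the 20-term cubic polynomials and scale/offset normalization of real RPC files:

```python
import numpy as np

def rfm_coord(num, den, lat, lon, h):
    """One RFM image coordinate as a ratio of two polynomials in
    normalized ground coordinates. Real RPC files use 20-term cubic
    polynomials; this sketch keeps only a 4-term basis."""
    basis = np.array([1.0, lat, lon, h])
    return (basis @ num) / (basis @ den)

# Hypothetical coefficients, for demonstration only
num = np.array([0.1, 2.0, 0.5, -0.3])
den = np.array([1.0, 0.0, 0.0, 0.001])
line = rfm_coord(num, den, lat=0.2, lon=-0.1, h=0.05)
```

The forward reconstruction compared in the paper works from two such image-side mappings of a stereo pair; the coefficients here are purely illustrative.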

143 citations


Journal ArticleDOI
TL;DR: It is demonstrated that it is possible to perform 3D segmentation of 3D objects in a scene and it is shown that the 3D correlation is more discriminant than the two-dimensional correlation.
Abstract: We use integral images of a three-dimensional (3D) scene to estimate the longitudinal depth of multiple objects present in the scene. With this information, we digitally reconstruct the objects in three dimensions and compute 3D correlations of input objects. We investigate the use of nonlinear techniques for 3D correlations. We present experimental results for 3D reconstruction and correlation of 3D objects. We demonstrate that it is possible to perform 3D segmentation of 3D objects in a scene. We finally present experiments to demonstrate that the 3D correlation is more discriminant than the two-dimensional correlation.
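Nonlinear correlation of the kind investigated above is commonly implemented in the Fourier domain by reshaping the cross-spectrum magnitude while keeping its phase. A 1D sketch of a kth-law correlator (a generic illustration, not the paper's exact operator):

```python
import numpy as np

def kth_law_correlation(a, b, k=0.3):
    """k-th law nonlinear correlation: the phase of the cross
    spectrum is kept while its magnitude is raised to the power k
    (k = 1 is classical matched filtering, k = 0 phase-only)."""
    spec = np.fft.fft(a) * np.conj(np.fft.fft(b))
    mag = np.abs(spec)
    mag[mag == 0] = 1.0           # empty bins contribute nothing
    return np.real(np.fft.ifft(spec / mag * mag**k))

sig = np.zeros(64)
sig[10:14] = 1.0                  # a small 1D "object"
corr = kth_law_correlation(np.roll(sig, 5), sig)
peak = int(np.argmax(corr))       # correlation peak at the shift
```

Lowering k sharpens the correlation peak, which is the sense in which the nonlinear correlation is more discriminant than the classical (k = 1) one.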

136 citations


Journal ArticleDOI
TL;DR: A distributed relational database for managing complex datasets and its integration into the high-resolution software package IMIRS (Image Management and Icosahedral Reconstruction System), which automates the tedious tasks of data management, enables data coherence, and facilitates information sharing in a distributed computer and user environment.

94 citations


Book ChapterDOI
25 Sep 2002
TL;DR: This work describes a technique whereby the 3D reconstruction occurs in real-time as the data is acquired, and where the operator can view the progress of the reconstruction on three orthogonal slice views through the ultrasound volume.
Abstract: The most attractive feature of 2D B-mode ultrasound for intra-operative use is that it is both a real-time and a highly interactive modality. Most 3D freehand reconstruction methods, however, are not fully interactive because they do not allow the display of any part of the 3D ultrasound image until all data collection and reconstruction is finished. We describe a technique whereby the 3D reconstruction occurs in real time as the data is acquired, and where the operator can view the progress of the reconstruction on three orthogonal slice views through the ultrasound volume. Capture of the ultrasound data can be immediately followed by a straightforward, interactive nonlinear registration of a pre-operative MRI volume to match the intra-operative ultrasound. We demonstrate our system on a deformable, multi-modal PVA-cryogel phantom and during a clinical surgery.
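The incremental reconstruction described above amounts to scattering each tracked 2D slice into a voxel volume as it arrives. A rough sketch, assuming a hypothetical 4x4 tracker pose that maps pixel coordinates to voxel coordinates (nearest-neighbour placement, no hole filling):

```python
import numpy as np

def insert_slice(volume, slice_2d, pose):
    """Scatter one tracked 2D ultrasound slice into a voxel volume.
    `pose` maps homogeneous pixel coordinates (x, y, 0, 1) into
    voxel coordinates; out-of-volume pixels are dropped."""
    h, w = slice_2d.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel().astype(float), ys.ravel().astype(float),
                    np.zeros(h * w), np.ones(h * w)])
    vox = np.round(pose @ pix).astype(int)[:3]
    ok = np.all((vox >= 0) & (vox < np.array(volume.shape)[:, None]), axis=0)
    volume[vox[0, ok], vox[1, ok], vox[2, ok]] = slice_2d.ravel()[ok]

vol = np.zeros((16, 16, 16))
pose = np.eye(4)
pose[2, 3] = 8.0                      # this slice sits at z = 8
insert_slice(vol, np.ones((8, 8)), pose)
```

Because each slice is inserted as soon as its pose is known, the three orthogonal slice views mentioned above can simply re-read the partially filled volume after every insertion.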

81 citations


Proceedings ArticleDOI
07 Nov 2002
TL;DR: A method of 3D reconstruction of a human face from video is proposed in which the 3D reconstruction algorithm and the generic model are handled separately; the main advantage of this algorithm is that it is able to retain the specific features of the face in the video sequence even when these features are different from those of the generic model.
Abstract: Reconstructing a 3D model of a human face from a video sequence is an important problem in computer vision, with applications to recognition, surveillance, multimedia etc. However, the quality of 3D reconstructions using structure from motion (SfM) algorithms is often not satisfactory. One common method of overcoming this problem is to use a generic model of a face. Existing work using this approach initializes the reconstruction algorithm with this generic model. The problem with this approach is that the algorithm can converge to a solution very close to this initial value, resulting in a reconstruction which resembles the generic model rather than the particular face in the video which needs to be modeled. We propose a method of 3D reconstruction of a human face from video in which the 3D reconstruction algorithm and the generic model are handled separately. A 3D estimate is obtained purely from the video sequence using SfM algorithms without use of the generic model. The final 3D model is obtained after combining the SfM estimate and the generic model using an energy function that corrects for the errors in the estimate by comparing local regions in the two models. The optimization is done using a Markov chain Monte Carlo (MCMC) sampling strategy. The main advantage of our algorithm over others is that it is able to retain the specific features of the face in the video sequence even when these features are different from those of the generic model. The evolution of the 3D model through the various stages of the algorithm is presented.

63 citations


Proceedings ArticleDOI
01 Jul 2002
TL;DR: It is demonstrated that light field morphing is an effective and easy-to-use technique that can generate convincing 3D morphing effects.
Abstract: We present a feature-based technique for morphing 3D objects represented by light fields. Our technique enables morphing of image-based objects whose geometry and surface properties are too difficult to model with traditional vision and graphics techniques. Light field morphing is not based on 3D reconstruction; instead it relies on ray correspondence, i.e., the correspondence between rays of the source and target light fields. We address two main issues in light field morphing: feature specification and visibility changes. For feature specification, we develop an intuitive and easy-to-use user interface (UI). The key to this UI is feature polygons, which are intuitively specified as 3D polygons and are used as a control mechanism for ray correspondence in the abstract 4D ray space. For handling visibility changes due to object shape changes, we introduce ray-space warping. Ray-space warping can fill arbitrarily large holes caused by object shape changes; these holes are usually too large to be properly handled by traditional image warping. Our method can deal with non-Lambertian surfaces, including specular surfaces (with dense light fields). We demonstrate that light field morphing is an effective and easy-to-use technique that can generate convincing 3D morphing effects.

60 citations


Proceedings ArticleDOI
09 Dec 2002
TL;DR: A novel system is presented that combines depth-from-stereo and visual hull reconstruction for acquiring dynamic real-world scenes at interactive rates, using the silhouettes from multiple views to construct a polyhedral visual hull.
Abstract: In this paper, we present a novel system which combines depth-from-stereo and visual hull reconstruction for acquiring dynamic real-world scenes at interactive rates. First, we use the silhouettes from multiple views to construct a polyhedral visual hull, which is then used to limit the disparity range during depth-from-stereo computation. The restricted search range improves both speed and quality of the stereo reconstruction. In return, the stereo information can compensate for some of the shortcomings of the visual hull method, such as its inability to reconstruct surface details and concave regions. Our system achieves a reconstruction frame rate of 4 fps.
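The visual hull idea above can be sketched with a voxel grid: a voxel survives only if it projects inside every silhouette. A toy version with two orthographic views (the paper itself builds a polyhedral hull from calibrated cameras; voxels and orthographic projectors are used here only for illustration):

```python
import numpy as np

def visual_hull(silhouettes, projectors, voxels):
    """Keep only voxels whose projection falls inside every silhouette.
    `projectors` are caller-supplied functions mapping (n, 3) voxel
    centers to (n, 2) pixel coordinates."""
    keep = np.ones(len(voxels), dtype=bool)
    for sil, proj in zip(silhouettes, projectors):
        uv = proj(voxels).astype(int)
        u, v = uv[:, 0], uv[:, 1]
        inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]] > 0
        keep &= hit
    return voxels[keep]

# 4x4x4 grid carved by two orthogonal binary silhouettes
voxels = np.stack(np.meshgrid(*[np.arange(4)] * 3, indexing="ij"), -1).reshape(-1, 3)
sil_xy = np.zeros((4, 4)); sil_xy[1:3, 1:3] = 1   # view along z: rows=y, cols=x
sil_xz = np.zeros((4, 4)); sil_xz[1:3, 1:3] = 1   # view along y: rows=z, cols=x
hull = visual_hull([sil_xy, sil_xz],
                   [lambda p: p[:, [0, 1]], lambda p: p[:, [0, 2]]],
                   voxels)
```

The surviving voxels bound the object from outside, which is exactly what lets the system restrict the stereo disparity search to the hull's depth interval.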

57 citations


Book ChapterDOI
28 May 2002
TL;DR: In this paper the problem of synthesizing virtual views from scene points within the scene, i.e., from scene points which are imaged by the real cameras, is examined, and a simple way of defining the position of the virtual camera in an uncalibrated setting is provided.
Abstract: In this paper we examine the problem of synthesizing virtual views from scene points within the scene, i.e., from scene points which are imaged by the real cameras. On one hand this provides a simple way of defining the position of the virtual camera in an uncalibrated setting. On the other hand, it implies extreme changes in viewpoint between the virtual and real cameras. Such extreme changes in viewpoint are not typical of most New-View-Synthesis (NVS) problems. In our algorithm the virtual view is obtained by aligning and comparing all the projections of each line-of-sight emerging from the "virtual camera" center in the input views. In contrast to most previous NVS algorithms, our approach does not require prior correspondence estimation nor any explicit 3D reconstruction. It can handle any number of input images while simultaneously using the information from all of them. However, very few images are usually enough to provide reasonable synthesis quality. We show results on real images as well as synthetic images with ground-truth.

53 citations


Proceedings ArticleDOI
02 Jun 2002
TL;DR: In this article, a set of hand-detected correspondences are established across some (but not necessarily all) images, and a heuristic method is suggested for selecting the point pairs which are most reliable for 3D reconstruction.
Abstract: This paper proposes a scheme for the reliable reconstruction of indoor scenes from a few catadioptric images. A set of hand-detected correspondences is established across some (but not necessarily all) images. Our improved method is used for the estimation of the essential matrix from appropriately normalized point coordinates. Motion parameters are computed using Hartley's (1993) decomposition. A heuristic method is suggested for selecting the point pairs which are most reliable for 3D reconstruction. The well-known mid-point method is applied for computing the 3D model of a real scene. The parameters of the catadioptric sensor are approximately known, but no precise self-calibration method is performed. The experiments show that a reliable 3D reconstruction is possible even without complicated non-linear self-calibration and/or reconstruction methods.
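The mid-point method mentioned above triangulates a 3D point as the point midway between the closest points of the two back-projected rays. A minimal sketch:

```python
import numpy as np

def midpoint_triangulate(c1, d1, c2, d2):
    """Mid-point method: return the point halfway between the closest
    points of two rays (centers c, direction vectors d)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Stationarity of |(c1 + t1 d1) - (c2 + t2 d2)|^2 in t1, t2
    A = np.array([[1.0, -(d1 @ d2)], [d1 @ d2, -1.0]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Two rays that intersect exactly at (1, 1, 5)
p = midpoint_triangulate(np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 5.0]),
                         np.array([2.0, 0.0, 0.0]), np.array([-1.0, 1.0, 5.0]))
```

With noisy rays the two closest points no longer coincide, and the gap between them is a natural reliability score of the kind the heuristic point-pair selection can exploit.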

Proceedings ArticleDOI
07 Nov 2002
TL;DR: A fast but accurate method for the estimation of the carving depth at each vertex of the 3D mesh is proposed for the reconstruction of a 3D real object from a sequence of high-definition images.
Abstract: We present a method for the reconstruction of a 3D real object from a sequence of high-definition images. We combine two different procedures: a shape from silhouette technique which provides a coarse 3D initial model followed by a multi-stereo carving technique. We propose a fast but accurate method for the estimation of the carving depth at each vertex of the 3D mesh. The quality of the final textured 3D reconstruction models allows us to validate the method.

Book ChapterDOI
01 Jan 2002
TL;DR: In this paper, the authors presented a method for fully automatic 3D reconstruction of coronary artery centerlines using three X-ray angiogram projections from a single rotating monoplane acquisition.
Abstract: We present a method for fully automatic 3D reconstruction of coronary artery centerlines using three X-ray angiogram projections from a single rotating monoplane acquisition. The reconstruction method consists of three steps: (1) filtering and segmenting the images using a multiscale analysis, (2) matching points in two of the segmented images using the information from the third image, and (3) reconstructing the matched points in 3D. This method needs good calibration of the system geometry and requires breath-held acquisitions. The final algorithm is formulated as an energy minimization problem that we solve using dynamic programming optimization. This method provides a fast and automatic way to compute 3D models of vessel centerlines. It has been applied to both phantoms, for validation purposes, and patient data sets.

Proceedings ArticleDOI
11 Aug 2002
TL;DR: It is established that the problem of 3D reconstruction from a single perspective view of a mirror symmetric scene is geometrically equivalent to observing the scene with two cameras, the cameras being symmetrical with respect to the unknown 3D symmetry plane.
Abstract: We address the problem of 3D reconstruction from a single perspective view of a mirror symmetric scene. We establish the fundamental result that it is geometrically equivalent to observing the scene with two cameras, the cameras being symmetrical with respect to the unknown 3D symmetry plane. All traditional tools of classical 2-view stereo can then be applied, and the concepts of fundamental/essential matrix, epipolar geometry, rectification and disparity hold. However, the problems are greatly simplified here, as the rectification process and the computation of epipolar geometry can be easily performed from the original view only. If the camera is calibrated, we show how to synthesize the symmetric image generated by the same physical camera. A Euclidean reconstruction of the scene can then be computed from the resulting stereo pair. To validate this novel formulation, we have processed many real images, and show examples of 3D reconstruction.
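The virtual second camera in the formulation above is obtained by reflecting the real camera center across the symmetry plane. A sketch of that reflection (the plane n·x = d here is a hypothetical example, not taken from the paper):

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect a 3D point across the plane n.x = d (n need not be unit)."""
    n = n / np.linalg.norm(n)
    return p - 2.0 * (p @ n - d) * n

c = np.array([3.0, 0.0, 0.0])          # real camera center
n, d = np.array([1.0, 0.0, 0.0]), 1.0  # hypothetical symmetry plane x = 1
c_virtual = reflect_point(c, n, d)
```

Reflecting the virtual center back across the same plane recovers the real one, as expected for an involution; the same Householder-style reflection applied to the camera orientation completes the virtual camera.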

Proceedings ArticleDOI
30 Jul 2002
TL;DR: Inverse Synthetic Aperture Radar (ISAR) imagery provides an opportunity for 3D reconstruction, because it relies on target motion to provide cross-range resolution and is derived as a temporal sequence.
Abstract: Inverse Synthetic Aperture Radar (ISAR) imagery provides an opportunity for 3D reconstruction, because it relies on target motion to provide cross-range resolution and is derived as a temporal sequence. As it moves, the target presents different aspects, which can be integrated to derive the third dimension. Tomasi and Kanade introduced a robust technique for recovering object shape and motion, based on factorization of a matrix that represents the 2D projection equations for a set of points on the target object, as observed in an image sequence. The technique has been applied to orthographic projection (Tomasi and Kanade), paraperspective projection (Poelman and Kanade), and perspective projection (Han and Kanade), but it encounters nonlinearities when applied to point perspective projection, which require an iterative solution. ISAR projection is naturally well suited for application of the factorization technique because the projection equations are linear. 3D reconstruction may lead to improved performance for automatic target recognition (ATR) procedures and may also be used to enhance human visualization of imaged targets.
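The factorization technique referenced above stacks the 2D projections of all tracked points over all frames into a measurement matrix, which has rank 3 under linear projection. A sketch of the rank-3 factorization step on synthetic orthographic data (the metric-upgrade step of the full method is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(3, 20))              # 20 synthetic 3D points
views = []
for _ in range(6):                           # 6 orthographic views
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    views.append(R[:2] @ pts)                # top two rows of R project
W = np.vstack(views)                         # 12 x 20 measurement matrix
W -= W.mean(axis=1, keepdims=True)           # register to the centroid

# Under linear projection W has rank 3, so a truncated SVD factors it
U, s, Vt = np.linalg.svd(W, full_matrices=False)
M = U[:, :3] * np.sqrt(s[:3])                # motion (camera rows)
S = np.sqrt(s[:3])[:, None] * Vt[:3]         # structure (3D points)
residual = np.linalg.norm(W - M @ S)
```

The linearity of the ISAR projection equations is what makes this single SVD sufficient, where point-perspective projection would force the iterative schemes mentioned above.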

Journal ArticleDOI
TL;DR: The objective of the work presented in this paper is to generate complete, high-resolution models of real world scenes from passive intensity images and active range sensors by fusing range and intensity data.

Patent
30 Jul 2002
TL;DR: In this paper, a 3D reconstruction of the object is generated from the edges, and tests are performed on height and 3D shape of the reconstruction to classify the object as the ambulatory person or the wheelchair user.
Abstract: A method classifies an object in a scene as either an ambulatory person or a wheelchair user. Images of the scene are acquired with a set of stereo cameras. Moving objects in the scene are segmented from the background using detected edges. A 3D reconstruction of the object is generated from the edges, and tests are performed on height and 3D shape of the 3D reconstruction to classify the object as the ambulatory person or the wheelchair user.

Journal ArticleDOI
TL;DR: A quantitative approach to 3D reconstruction of the refractive index inhomogeneity of a static phase object is presented, and the results prove the high performance of the proposed measuring procedure.

Journal ArticleDOI
TL;DR: To enable spatially accurate 3D reconstruction, the system must correct for geometrical distortion in the image intensifier television system as well as for deviations in gantry motion; the resulting 3D images were found to have negligible visual distortion, enabling subjective assessments to be made with confidence to aid intervention.
Abstract: The purpose of this project was to determine the degree of geometrical distortion in a three-dimensional (3D) image volume generated by a digital fluorography system with rotational image acquisition capabilities. 3D imaging is a valuable adjunct in neuroangiography for visualization and measurement of cerebral aneurysms and for determination of the optimum projection for intervention. To enable spatially accurate 3D reconstruction the system must correct for geometrical distortion in the image intensifier television system as well as for deviations in gantry motion. 3D volumes were reconstructed from 100 X-ray projections acquired over a 180 degrees arc over a period of 8 s. A phantom was constructed to assess geometrical distortion in the three dimensions. The phantom consisted of 1 mm diameter ball bearings embedded in Perspex in a cubic lattice configuration. The ball bearings were placed at 20 mm intervals over a 140 mm cubic volume. Distortion was assessed by taking measurements between points of known separation and using a differential distortion measurement. The maximum error in the 3D location of objects was found to be 1.4 mm, while the differential distortion was found to range from -1.0% to +2.3%. The 3D images were found to have negligible visual distortion, enabling subjective assessments to be made with confidence to aid intervention.
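The differential distortion percentages reported above reduce to a relative-error computation on separations of known phantom spacing. A trivial sketch (the 20.46 mm measurement below is made up for illustration):

```python
def differential_distortion(measured_mm, true_mm):
    """Percent differential distortion between a measured point
    separation and the known phantom separation."""
    return 100.0 * (measured_mm - true_mm) / true_mm

# Ball bearings at a known 20 mm spacing, measured at 20.46 mm
err_pct = differential_distortion(20.46, 20.0)
```

Evaluating this over every pair of neighbouring ball bearings in the 140 mm cube yields the reported -1.0% to +2.3% range.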

Journal ArticleDOI
TL;DR: A simplified version of the stereo reconstruction model of O. Faugeras and R. Keriven (1998) is developed, and a weighted area measure is applied as part of a solution to correspondence extraction in the shape-from-stereo and shape-from-autostereogram problems.

Book ChapterDOI
21 Nov 2002
TL;DR: An appearance-based learning method that uses a shape descriptor of the 2D silhouette for classifying and identifying human posture and complements the articulated body model since it can define a mapping between the observed shape and the learned descriptions for inferring the articulatedBody model.
Abstract: In this paper we present an approach for capturing 3D body motion and inferring human body posture from detected silhouettes. We show that the integration of two or more silhouettes allows us to perform a 3D body reconstruction while each silhouette can be used for identifying human body postures. The 3D reconstruction is based on the representation of body parts using Generalized Cylinders providing an estimation of the 3D shape of the human body. The 3D shape description is refined by fitting an articulated body model using a particle filter technique. Identifying human body posture from the 2D silhouettes can reduce the complexity of the particle filtering by reducing the search space. We present an appearance-based learning method that uses a shape descriptor of the 2D silhouette for classifying and identifying human posture. The proposed method does not require an articulated body model fitted onto the reconstructed 3D geometry of the human body: It complements the articulated body model since we can define a mapping between the observed shape and the learned descriptions for inferring the articulated body model.

01 Jan 2002
TL;DR: A new motion-based segmentation algorithm is presented that is able to automatically detect and reconstruct planar regions; the non-linear estimator is extended to incorporate the optical flow covariance matrix (maximum likelihood), and it is shown that it is possible to locally time-integrate the reconstruction process for increased robustness.
Abstract: Reconstructing the 3D shape of a scene from its 2D images is a problem that has attracted a great deal of research. 3D models are nowadays widely used for scientific visualization, entertainment and engineering tasks. Most of the approaches developed by the computer vision community can be roughly classified as feature based or flow based, according to whether the data they use is a set of feature matches or an optical flow field. While a dense optical flow field, due to its noisy nature, is not extremely suitable for tracking, finding corresponding features between different views of large baseline is still an open problem. The system we develop in this thesis is of a hybrid type. We track sparse features over sequences acquired at 25 Hz from a hand-held camera. During the tracking, good features can be selected as those lying in highly textured areas: this guarantees higher precision in the estimation of feature displacements. Such displacements are used to approximate optical flow. We demonstrate that this approximation is a good one for our working conditions. Using this approach we bypass the matching problem of stereo and the complexity and time-integration problems of optical flow based reconstruction. Time integration is obtained by an optimal predict-update procedure that merges measurements by re-weighting by the respective covariances. Most of the research effort of this thesis is focused on the robust estimation of structure and motion from a pair of images and the related optical flow field. We first test a linear solution that has the appealing property of being of closed form but the problem of returning biased estimates. We propose a non-linear refinement to the linear estimator, showing convergence properties and improvements in bias and variance. We further extend the non-linear estimator to incorporate the optical flow covariance matrix (maximum likelihood) and, moreover, we show that, in the case of dense sequences, it is possible to locally time-integrate the reconstruction process for increased robustness. We experimentally investigate the possibility of introducing geometrical constraints in the structure and motion estimation. Such constraints are of bilinear type, i.e., planes, lines and incidence of these primitives are used. For this purpose we present a new motion-based segmentation algorithm able to automatically detect and reconstruct planar regions. To assess the efficacy of our solution, the algorithms were tested on a variety of real and simulated sequences. ISBN 91-7283-308-4 • TRITA-02-11 • ISSN 0348-2952 • ISRN KTH/NA/R 02-11

Proceedings ArticleDOI
09 May 2002
TL;DR: In this article, a simple backprojection with an order statistics-based operator is used for combining the backprojected images into a reconstructed slice, which yields high contrast reconstructions with reduced artifacts at a relatively low computational complexity.
Abstract: Digital tomosynthesis mammography is an advanced x-ray application that can provide detailed 3D information about the imaged breast. We introduce a novel reconstruction method based on simple backprojection, which yields high contrast reconstructions with reduced artifacts at a relatively low computational complexity. The first step in the proposed reconstruction method is a simple backprojection with an order statistics-based operator (e.g., minimum) used for combining the backprojected images into a reconstructed slice. Accordingly, a given pixel value does generally not contribute to all slices. The percentage of slices where a given pixel value does not contribute, as well as the associated reconstructed values, are collected. Using a form of re-projection consistency constraint, one now updates the projection images, and repeats the order statistics backprojection reconstruction step, but now using the enhanced projection images calculated in the first step. In our digital mammography application, this new approach enhances the contrast of structures in the reconstruction, and allows in particular to recover the loss in signal level due to reduced tissue thickness near the skinline, while keeping artifacts to a minimum. We present results obtained with the algorithm for phantom images.
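The order-statistics backprojection described above replaces the mean of plain simple backprojection with, e.g., a per-pixel minimum over the backprojected images, which suppresses streaks that appear in only some views. A toy sketch of just that combination step:

```python
import numpy as np

def order_statistic_backprojection(backprojections, stat=np.min):
    """Combine backprojected images into one reconstructed slice with
    an order-statistics operator (minimum here) instead of the mean
    used by plain simple backprojection."""
    return stat(np.stack(backprojections), axis=0)

# Toy 2x2 slice from three views: the value 3 at pixel (1, 1) is
# consistent across views; the other entries are streak artifacts
bp = [np.array([[0., 5.], [0., 3.]]),
      np.array([[4., 0.], [0., 3.]]),
      np.array([[0., 0.], [7., 3.]])]
slice_min = order_statistic_backprojection(bp)
```

Only the structure seen in every projection survives the minimum, which is why a pixel value "does generally not contribute to all slices" and motivates the re-projection consistency update of the full method.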

Proceedings ArticleDOI
15 May 2002
TL;DR: A set-up for high-resolution imaging with a conventional microscope is described that allows producing a 3D image set from which a 3D model can be derived; careful bookkeeping needs to be maintained at acquisition.
Abstract: This paper describes a set-up for high-resolution imaging with a conventional microscope that allows producing a 3D image set from which a 3D model can be derived. The 3D image set is, de facto, a gray-value voxel model. In order to obtain such a 3D image set, careful bookkeeping needs to be maintained at acquisition and, moreover, the acquisition must be realized in two steps. High-resolution images are built of image tiles, and sophisticated algorithms are required to build a coherent image from the tiles.

Proceedings ArticleDOI
24 Jun 2002
TL;DR: An industrial application of a vision-based 3D scene reconstruction process, used in the Poseidon system to produce an automatic CAD model of a swimming pool, proves that computer vision modeling can be used efficiently for real-time applications.
Abstract: The article describes an industrial application of a vision based 3D scene reconstruction process. The images come from independent cameras whose locations are unknown and which are placed indifferently in two different media (i.e. underwater or in the air). This process is currently used in the Poseidon system (see http://www.poseidon-tech.com) to realize an automatic CAD model of a swimming pool. Poseidon is the first computer system in the world to help in the prevention of drownings in public swimming pools. We have evaluated the reconstruction accuracy based on synthetic scenes as well as real ones. In the case of real data, results are compared with measurements made by a surveyor (with a telemeter) and demonstrate high accuracy in 3D localization and reconstruction. Lastly, it proves that computer vision modeling can be efficiently used for real time applications.

Journal ArticleDOI
TL;DR: An automatic processing pipeline is presented that analyses an image sequence and automatically extracts camera motion, calibration and scene geometry and a dense estimate of the surface geometry of the observed scene is computed using stereo matching.
Abstract: This paper contains two parts. In the first part an automatic processing pipeline is presented that analyses an image sequence and automatically extracts camera motion, calibration and scene geometry. The system combines state-of-the-art algorithms developed in computer vision, computer graphics and photogrammetry. The approach consists of two stages. Salient features are extracted and tracked throughout the sequence to compute the camera motion and calibration and the 3D structure of the observed features. Then a dense estimate of the surface geometry of the observed scene is computed using stereo matching. The second part of the paper discusses how this information can be used for visualization. Traditionally, a textured 3D model is constructed from the computed information and used to render new images. Alternatively, it is also possible to avoid the need for an explicit 3D model and to obtain new views directly by combining the appropriate pixels from recorded views. It is interesting to note that even when there is an ambiguity on the reconstructed geometry, correct new images can often still be generated. Copyright © 2002 John Wiley & Sons, Ltd.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: An algorithm for the automatic construction of a 3D model of archaeological vessels using two different 3D reconstruction algorithms is presented; the exact volume of a vessel provides information about its manufacturer and usage.
Abstract: An algorithm for the automatic construction of a 3D model of archaeological vessels using two different 3D reconstruction algorithms is presented. In archaeology the determination of the exact volume of arbitrary vessels is of importance, since this provides information about the manufacturer and the usage of the vessel. Acquiring the 3D shape of objects with handles is complicated, since occlusions of the object's surface are introduced by the handle and can only be resolved by taking multiple views. Therefore, the 3D reconstruction is based on a sequence of images of the object taken from different viewpoints with two different algorithms: shape from silhouette and shape from structured light. The outputs of both algorithms are then used to construct a single 3D model. Results of the algorithm developed are presented for both synthetic and real input images.

Proceedings ArticleDOI
Fabio Remondino1
TL;DR: This paper presents a system for the reconstruction of 3-D models of articulated objects, like human bodies, from uncalibrated images, and a stereo matching process based on least squares matching extracts a dense set of image correspondences from the sequence.
Abstract: The goal of computing realistic 3-D models from image sequences is still a challenging problem. In recent years the demand for realistic reconstruction and modeling of objects and human bodies is increasing both for animation and medical applications. In this paper a system for the reconstruction of 3-D models of articulated objects, like human bodies, from uncalibrated images is presented. The scene is seen from different viewpoints and no pre-set knowledge is considered. To extract precise 3-D information from imagery, a calibration procedure must be performed. Therefore, first a camera calibration with Direct Linear Transformation (DLT) is done assuming few control points on the subject. The initial approximations of the interior and exterior orientation computed with DLT are then improved in a general photogrammetric bundle adjustment with self-calibration. Additionally a stereo matching process based on least squares matching extracts a dense set of image correspondences from the sequence. Finally a 3-D point cloud is computed by forward intersection using the calibration data and the matched points. The resulting 3-D model of human body is presented.
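The DLT calibration step described above estimates a 3x4 projection matrix linearly from control-point correspondences before the bundle adjustment refines it. A sketch of the basic noise-free DLT (no coordinate normalization, which real implementations should add):

```python
import numpy as np

def dlt(X, x):
    """Direct Linear Transformation: estimate the 3x4 projection
    matrix (up to scale) from n >= 6 world points X (n x 3) and
    their image points x (n x 2)."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        P = np.array([Xw, Yw, Zw, 1.0])
        rows.append(np.concatenate([P, np.zeros(4), -u * P]))
        rows.append(np.concatenate([np.zeros(4), P, -v * P]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)   # right null-space vector

# Synthetic check: project points with a known camera, recover it
rng = np.random.default_rng(1)
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
P_true = np.hstack([R, np.array([[0.1], [0.2], [5.0]])])
X = rng.uniform(-1, 1, size=(8, 3))
proj = P_true @ np.hstack([X, np.ones((8, 1))]).T
x = (proj[:2] / proj[2]).T
P_est = dlt(X, x)
P_est /= P_est[2, 3] / P_true[2, 3]   # remove the arbitrary scale
```

Decomposing the recovered matrix gives the initial interior and exterior orientation that the self-calibrating bundle adjustment then improves.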

PatentDOI
TL;DR: In this paper, a method for local 3D reconstruction from 2-dimensional (2D) ultrasound images is proposed, which includes deriving a 2D image of an object, defining a target region within the 2D image, and defining a volume scan period.
Abstract: A method for local 3-dimensional (3D) reconstruction from 2-dimensional (2D) ultrasound images includes deriving a 2D image of an object; defining a target region within said 2D image; defining a volume scan period; during the volume scan period, deriving further 2D images of the target region and storing respective pose information for the further 2D images; and reconstructing a 3D image representation for the target region by utilizing the 2D images and the respective pose information.

Journal Article
TL;DR: The 3D reconstruction procedure is greatly sped up by the suggested algorithm, which simplifies and improves the original volume rendering algorithm (ray casting) and has been implemented in VC++ and OpenGL.
Abstract: The paper analyses the features of seismic data. Based on seismic data characteristics the paper simplified and improved the Ray-casting algorithm, therefore,3D Reconstruction procedure is greatly speeded up by the suggested algorithm, which is based on the original volume rendering algorithm (Ray casting).The algorithm has been come true in environment of VC++ and OpenGL