
Showing papers on "3D reconstruction published in 2007"


Journal ArticleDOI
TL;DR: A survey of the most common techniques for fusing 3D surfaces by determining the motion between the different views, providing a useful guide for the interested reader, including a Matlab toolbox available on the authors' webpage.

654 citations


Journal ArticleDOI
TL;DR: A multilevel quality-guided phase unwrapping algorithm for real-time 3D shape measurement is presented; it is demonstrated that this algorithm improves significantly on the previous scan-line phase unwrapping algorithm while reducing its processing speed only slightly.
Abstract: A multilevel quality-guided phase unwrapping algorithm for real-time 3D shape measurement is presented. The quality map is generated from the gradient of the phase map. Multilevel thresholds are used to unwrap the phase level by level. Within the data points of each level, a fast scan-line algorithm is employed. The processing time of this algorithm is approximately 18.3 ms for an image size of 640x480 pixels on an ordinary computer. We demonstrate that this algorithm can be integrated into our real-time 3D shape measurement system for real-time 3D reconstruction. Experiments show that this algorithm improves significantly on the previous scan-line phase unwrapping algorithm while reducing its processing speed only slightly.
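The quality-guided idea can be sketched in a few lines: derive a quality map from the phase gradient, then unwrap pixels in order of decreasing quality. The sketch below (plain Python/NumPy, not the paper's multilevel scan-line implementation, and far from its real-time speed) uses a priority-queue flood fill on a synthetic phase ramp:

```python
import numpy as np
import heapq

def quality_map(wrapped):
    """Quality = negative local phase-gradient magnitude (a hypothetical
    stand-in for the paper's gradient-based quality map)."""
    gy, gx = np.gradient(wrapped)
    return -np.hypot(gx, gy)

def unwrap_quality_guided(wrapped):
    """Unwrap highest-quality pixels first via a priority-queue flood fill."""
    h, w = wrapped.shape
    quality = quality_map(wrapped)
    unwrapped = np.zeros_like(wrapped)
    visited = np.zeros((h, w), dtype=bool)
    si, sj = np.unravel_index(np.argmax(quality), quality.shape)
    unwrapped[si, sj] = wrapped[si, sj]
    visited[si, sj] = True
    heap = []

    def push_neighbours(i, j):
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < h and 0 <= nj < w and not visited[ni, nj]:
                heapq.heappush(heap, (-quality[ni, nj], ni, nj, i, j))

    push_neighbours(si, sj)
    while heap:
        _, i, j, pi, pj = heapq.heappop(heap)
        if visited[i, j]:
            continue
        d = wrapped[i, j] - wrapped[pi, pj]
        d -= 2 * np.pi * np.round(d / (2 * np.pi))  # wrap difference into (-pi, pi]
        unwrapped[i, j] = unwrapped[pi, pj] + d
        visited[i, j] = True
        push_neighbours(i, j)
    return unwrapped

# Smooth synthetic phase ramp, wrapped into (-pi, pi], then unwrapped again.
true_phase = np.add.outer(0.05 * np.arange(32), 0.12 * np.arange(32))
wrapped = np.angle(np.exp(1j * true_phase))
recovered = unwrap_quality_guided(wrapped)
residual = np.ptp(recovered - true_phase)  # constant 2*pi*k offset cancels in ptp
```

For a smooth surface whose neighbouring-pixel phase differences stay below pi, this recovers the true phase up to a constant 2π multiple.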

201 citations


Journal ArticleDOI
TL;DR: Line detectors integrate the measured acoustic pressure over a straight line and can be realized by a thin line of piezoelectric film or by a laser beam forming part of an interferometer.
Abstract: Line detectors integrate the measured acoustic pressure over a straight line and can be realized by a thin line of a piezoelectric film or by a laser beam as part of an interferometer. Photoacoustic imaging with integrating line detectors is performed by rotating a sample or the detectors around an axis perpendicular to the line detectors. The subsequent reconstruction is a two-step procedure: first, two-dimensional (2D) projections parallel to the line detector are reconstructed, then the three-dimensional (3D) initial pressure distribution is obtained by applying the 2D inverse Radon transform. The first step involves an inverse problem for the 2D wave equation. Wave propagation in two dimensions is significantly different from 3D wave propagation and reconstruction algorithms from 3D photoacoustic imaging cannot be used directly. By integrating recently established 3D formulae in the direction parallel to the line detector we obtain novel back-projection formulae in two dimensions. Numerical simulations demonstrate the capability of the derived reconstruction algorithms, also for noisy measurement data, limited angle problems and 3D reconstruction with integrating line detectors.

167 citations


Journal ArticleDOI
TL;DR: An overview of the current state of the programs and some applications to cryo-electron tomography is given; the design philosophy is to write standalone programs for simple tasks that are combined through shell scripting to provide more complex functionality, and to communicate with other software via common image formats.

154 citations


Proceedings ArticleDOI
26 Dec 2007
TL;DR: This paper presents an extremely efficient, inherently out-of-core bundle adjustment algorithm that decouples the original problem into several submaps, each with its own local coordinate system, which can be optimized in parallel.
Abstract: Large-scale 3D reconstruction has recently received much attention from the computer vision community. Bundle adjustment is a key component of 3D reconstruction problems. However, traditional bundle adjustment algorithms require a considerable amount of memory and computational resources. In this paper, we present an extremely efficient, inherently out-of-core bundle adjustment algorithm. We decouple the original problem into several submaps that have their own local coordinate systems and can be optimized in parallel. A key contribution of our algorithm is making as much progress towards optimizing the global non-linear cost function as possible using the fragments of the reconstruction that are currently in core memory. This allows us to converge with very few global sweeps (often only two) through the entire reconstruction. We present experimental results on large-scale 3D reconstruction datasets, both synthetic and real.

153 citations


Proceedings ArticleDOI
26 Dec 2007
TL;DR: Algorithmic aspects of this parametric maximum flow problem previously unknown in vision are reviewed, such as the ability to compute all breakpoints of lambda and the corresponding optimal configurations in finite time.
Abstract: The maximum flow algorithm for minimizing energy functions of binary variables has become a standard tool in computer vision. In many cases, unary costs of the energy depend linearly on a parameter lambda. In this paper we study vision applications for which it is important to solve the maxflow problem for different lambda's. An example is the weighting between data and regularization terms in image segmentation or stereo: it is desirable to vary it both during training (to learn lambda from ground truth data) and testing (to select the best lambda using high-level constraints, e.g. user input). We review algorithmic aspects of this parametric maximum flow problem previously unknown in vision, such as the ability to compute all breakpoints of lambda and the corresponding optimal configurations in finite time. These results allow us, in particular, to minimize the ratio of certain geometric functionals, such as flux of a vector field over length (or area). Previously, such functionals were tackled with shortest path techniques applicable only in 2D. We give theoretical improvements for "PDE cuts" [5]. We present experimental results for image segmentation, 3D reconstruction, and the cosegmentation problem.
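The structural property behind these results can be illustrated by brute force on a tiny energy: when the unary cost of label 1 grows linearly with lambda, the (minimal) optimal set of 1-labeled nodes shrinks monotonically as lambda increases, so there are finitely many breakpoints. The toy below is an exhaustive search, not a parametric maxflow implementation, and the costs `a`, `b`, `w` are made up:

```python
from itertools import product

# Toy energy over binary labels x on a 5-node chain:
#   E(x; lam) = sum_i [lam*a[i]*x[i] + b[i]*(1-x[i])] + w * sum_edges |x[i]-x[i+1]|
a = [1, 2, 1, 3, 1]   # hypothetical data costs for label 1 (scaled by lambda)
b = [3, 1, 2, 1, 2]   # hypothetical data costs for label 0
w = 1.0               # smoothness weight on chain edges
n = len(a)

def energy(x, lam):
    unary = sum(lam * a[i] * x[i] + b[i] * (1 - x[i]) for i in range(n))
    pair = w * sum(abs(x[i] - x[i + 1]) for i in range(n - 1))
    return unary + pair

def minimal_minimizer(lam):
    # Tie-break towards the smallest 1-set so nestedness holds at breakpoints.
    return min(product((0, 1), repeat=n), key=lambda x: (energy(x, lam), sum(x)))

lams = [k / 20 for k in range(81)]   # sweep lambda over [0, 4]
sets = [frozenset(i for i in range(n) if minimal_minimizer(l)[i]) for l in lams]
nested = all(s2 <= s1 for s1, s2 in zip(sets, sets[1:]))  # 1-sets only shrink
num_distinct = len(set(sets))        # = number of breakpoints + 1, at most n + 1
```

Because the optimal sets are nested, at most n breakpoints can occur; a parametric maxflow algorithm exploits exactly this to enumerate them all in finite time.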

151 citations


Journal ArticleDOI
TL;DR: In this paper, the authors summarize techniques of data processing and image correction to eliminate residual drawbacks of pixel detectors and show how to extend these methods to handle further physical effects such as hardening of the beam and edge enhancement by deflection.
Abstract: Semiconductor single-particle-counting pixel detectors offer many advantages for radiation imaging: high detection efficiency, energy discrimination, noiseless digital integration (counting), high frame rate and virtually unlimited dynamic range. All these properties make it possible to achieve high-quality images. Examples of transmission images and 3D tomographic reconstructions using X-rays and slow neutrons are presented, demonstrating effects that can affect image quality. A number of obstacles can limit detector performance if not handled. The pixel detector is in fact an array of individual detectors (pixels), each of which has its own efficiency, energy calibration and also noise. The common effort is to make all these parameters uniform across pixels. However, ideal uniformity can never be reached. Moreover, it is often seen that the signal in one pixel affects neighboring pixels for various reasons (charge sharing, crosstalk, etc.). All such effects have to be taken into account during data processing to avoid false data interpretation. The main intention of this contribution is to summarize techniques of data processing and image correction that eliminate residual drawbacks of pixel detectors. It is shown how to extend these methods to handle further physical effects such as hardening of the beam and edge enhancement by deflection. In addition, more advanced methods of data processing such as tomographic 3D reconstruction are discussed. All methods are demonstrated on real experiments from biology and materials science performed mostly with the Medipix2 pixel device. A brief outlook on the future of pixel detectors and their applications, including spectroscopy and particle tracking, is also given.

130 citations


Journal ArticleDOI
TL;DR: High-resolution digital holography and pattern projection techniques such as coded light or fringe projection for real-time extraction of 3D object positions and color information could manifest themselves as an alternative to traditional camera-based methods.
Abstract: Advances in image sensors and the evolution of digital computation are a strong stimulus for the development and implementation of sophisticated methods for capturing, processing and analysis of 3D data from dynamic scenes. Research on perspective time-varying 3D scene capture technologies is important for the upcoming 3DTV displays. Methods such as shape-from-texture, shape-from-shading, shape-from-focus, and shape-from-motion extraction can restore 3D shape information from single-camera data. The existing techniques for 3D extraction from single-camera video sequences are especially useful for conversion of the already available vast mono-view content to 3DTV systems. Scene-oriented single-camera methods such as human face reconstruction and facial motion analysis, body modeling and body motion tracking, and motion recognition solve a variety of tasks efficiently. 3D multicamera dynamic acquisition and reconstruction, its hardware specifics including calibration and synchronization, and its software demands form another area of intensive research. Different classes of multiview stereo algorithms, such as those based on cost-function computation and optimization, fusing of multiple views, and feature-point reconstruction, are possible candidates for dynamic 3D reconstruction. High-resolution digital holography and pattern projection techniques such as coded light or fringe projection for real-time extraction of 3D object positions and color information could manifest themselves as an alternative to traditional camera-based methods. Apart from all of these approaches, there are also some active imaging devices capable of 3D extraction, such as the 3D time-of-flight camera, which provides 3D image data of its environment by means of a modulated infrared light source.

130 citations


Journal ArticleDOI
TL;DR: This paper provides algorithms for 3D surface reconstruction that process the raw data and deliver detail-preserving 3D models with accurate depth information for characterization and visualization of cracks, a significant improvement over contemporary commercial video-based vision systems.

125 citations


Book ChapterDOI
29 Mar 2007
TL;DR: In this paper, a Markov random field (MRF) model is used to identify the different planes and edges in the scene, as well as their orientations, and an iterative optimization algorithm is applied to infer the most probable position of all the planes, and thereby obtain a 3D reconstruction.
Abstract: 3D reconstruction from a single image is inherently an ambiguous problem. Yet when we look at a picture, we can often infer 3D information about the scene. Humans perform single-image 3D reconstruction by using a variety of single-image depth cues, for example, by recognizing objects and surfaces, and reasoning about how these surfaces are connected to each other. In this paper, we focus on the problem of automatic 3D reconstruction of indoor scenes, specifically ones (sometimes called “Manhattan worlds”) that consist mainly of orthogonal planes. We use a Markov random field (MRF) model to identify the different planes and edges in the scene, as well as their orientations. Then, an iterative optimization algorithm is applied to infer the most probable position of all the planes, and thereby obtain a 3D reconstruction. Our approach is fully automatic: given an input image, no human intervention is necessary to obtain an approximate 3D reconstruction.

110 citations


Proceedings ArticleDOI
26 Dec 2007
TL;DR: The belief propagation algorithm is modified to operate on a 3D graph that includes both spatial and temporal neighbors and to be able to discard messages from outlying neighbors, and methods for introducing a bias and for suppressing noise typically observed in uniform regions are proposed.
Abstract: We present an approach for 3D reconstruction from multiple video streams taken by static, synchronized and calibrated cameras that is capable of enforcing temporal consistency on the reconstruction of successive frames. Our goal is to improve the quality of the reconstruction by finding corresponding pixels in subsequent frames of the same camera using optical flow, and also to at least maintain the quality of the single time-frame reconstruction when these correspondences are wrong or cannot be found. This allows us to process scenes with fast motion, occlusions and self-occlusions where optical flow fails for large numbers of pixels. To this end, we modify the belief propagation algorithm to operate on a 3D graph that includes both spatial and temporal neighbors and to be able to discard messages from outlying neighbors. We also propose methods for introducing a bias and for suppressing noise typically observed in uniform regions. The bias encapsulates information about the background and aids in achieving a temporally consistent reconstruction and in the mitigation of occlusion-related errors. We present results on publicly available real video sequences. We also present quantitative comparisons with results obtained by other researchers.

Proceedings ArticleDOI
18 Jun 2007
TL;DR: In this article, a stereovision system is used to acquire images in order to obtain several shots of an object, at regular intervals according to a predefined trajectory, and a complete methodology of 3D reconstruction is exposed to perform a dense 3D model with texture mapping.
Abstract: The aim of this study is to propose a 3D reconstruction method for small-scale scenes, improved by a new image acquisition method for quantitative measurements. A stereovision system is used to acquire several shots of an object at regular intervals along a predefined trajectory. A complete 3D reconstruction methodology is presented to produce a dense 3D model with texture mapping. A first result on natural images collected with the stereovision system during sea trials has been obtained.

Book ChapterDOI
27 Aug 2007
TL;DR: A robust energy model for multiview 3D reconstruction is introduced that fuses silhouette- and stereo-based image information and copes with significant amounts of noise without manual pre-segmentation of the input images.
Abstract: In this work, we introduce a robust energy model for multiview 3D reconstruction that fuses silhouette- and stereo-based image information. It copes with significant amounts of noise without manual pre-segmentation of the input images. Moreover, we suggest a method that can globally optimize this energy up to the visibility constraint. While similar global optimization has been presented in the discrete context in the form of the maxflow-mincut framework, we suggest the use of a continuous counterpart. In contrast to graph cut methods, discretizations of the continuous optimization technique are consistent and independent of the choice of grid connectivity. Our experiments demonstrate that this leads to visible improvements. Moreover, memory requirements are reduced, allowing for global reconstructions at higher resolutions.

Journal ArticleDOI
TL;DR: A non-linear optimization method to refine the motion and shape estimates which minimizes the image reprojection error and imposes the correct structure onto the motion matrix by choosing an appropriate parameterization is presented.

Journal ArticleDOI
TL;DR: A new method for recovering shape from shadows is proposed which is robust with respect to errors in shadow detection and allows the reconstruction of objects in the round, rather than just bas-reliefs.
Abstract: Cast shadows are an informative cue to the shape of objects. They are particularly valuable for discovering an object's concavities, which are not available from other cues such as occluding boundaries. We propose a new method for recovering shape from shadows which we call shadow carving. Given a conservative estimate of the volume occupied by an object, it is possible to identify and carve away regions of this volume that are inconsistent with the observed pattern of shadows. We prove a theorem guaranteeing that when these regions are carved away from the shape, the shape still remains conservative. Shadow carving overcomes limitations of previous studies on shape from shadows because it is robust with respect to errors in shadow detection and it allows the reconstruction of objects in the round, rather than just bas-reliefs. We propose a reconstruction system to recover shape from silhouettes and shadow carving. The silhouettes are used to reconstruct the initial conservative estimate of the object's shape, and shadow carving is used to carve out the concavities. We have simulated our reconstruction system with a commercial rendering package to explore the design parameters and assess the accuracy of the reconstruction. We have also implemented our reconstruction scheme in a table-top system and present the results of scanning several objects.

Journal ArticleDOI
TL;DR: This paper addresses the problem of vision-based navigation, proposes an original control law that drives the robot to its desired position using an image path, and proposes and uses specific visual features which ensure that the robot navigates along the path while keeping it visible.

Journal ArticleDOI
TL;DR: An approach using a new calibration jacket and explicit calibration algorithm is proposed in this paper that yields accurate results and compensates for involuntary motion occurring between X-ray exposures.
Abstract: The main objective of this study was to develop a 3D X-ray reconstruction system of the spine and rib cage for accurate 3D clinical assessment of spinal deformities. The system currently used at Sainte-Justine Hospital in Montreal relies on an implicit calibration technique based on a direct linear transform (DLT), using a sufficiently large rigid object incorporated in the positioning apparatus to locate any anatomical structure to be reconstructed within its bounds. During the time lapse between the two successive X-ray acquisitions required for the 3D reconstruction, involuntary patient motion introduces errors due to the incorrect epipolar geometry inferred from the stationary object. An approach using a new calibration jacket and an explicit calibration algorithm is proposed in this paper. This approach yields accurate results and compensates for involuntary motion occurring between X-ray exposures.
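For context, the DLT-style calibration the paper starts from can be sketched as follows: each 3D-2D correspondence contributes two linear equations in the 12 entries of the projection matrix, and the null-space vector of the stacked system gives the matrix up to scale. This is the textbook DLT, not the authors' explicit calibration algorithm:

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Estimate a 3x4 projection matrix P from n >= 6 correspondences
    X (n,3) world points <-> x (n,2) image points, via the standard DLT."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)  # null-space vector = P up to scale

def project(P, X):
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    xh = Xh @ P.T
    return xh[:, :2] / xh[:, 2:3]

# Synthetic check with a made-up camera: noise-free points -> exact recovery.
rng = np.random.default_rng(0)
P_true = np.array([[800.0, 0.0, 320.0, 100.0],
                   [0.0, 800.0, 240.0, 50.0],
                   [0.0, 0.0, 1.0, 2.0]])
X = rng.uniform(-1, 1, size=(12, 3))
x = project(P_true, X)
P_est = dlt_projection_matrix(X, x)
reproj_err = np.max(np.abs(project(P_est, X) - x))
```

An *explicit* calibration, by contrast, would recover physically meaningful parameters (focal length, pose) rather than this opaque 3x4 matrix, which is what lets the paper's method model patient motion between exposures.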

Journal ArticleDOI
TL;DR: An integrated surface imaging system provides for the reconstruction of high-quality 3D images by inherently preserving image registration; it eliminates section distortion, removing the need for complex realignment and correction, and also ensures full capture of all image planes.
Abstract: Three-dimensional reconstruction of large tissue volumes using histological thin sections poses difficulties because of registration of sections, section distortion, and the possibility of incomplete data-set collection due to section loss. We have constructed an integrated surface imaging system that successfully addresses these problems. Embedded tissue is mounted on a high-precision XYZ stage and the upper surface is iteratively: (i) stained to provide an effective optical section, (ii) imaged using a digital camera, and (iii) removed with an ultramiller. This approach provides for the reconstruction of high-quality 3D images by inherently preserving image registration, eliminates section distortion, thus removing the need for complex realignment and correction, and also ensures full capture of all image planes. The system has the capacity to acquire images of tissue structure with voxel sizes from 0.5 to 50 μm over dimensions ranging from micrometers to tens of millimeters. The ultramiller enables large samples to be imaged by reliably removing tissue over their full extent. The ability to visualize key features of 3D tissue structure across such a range of scale and resolution will facilitate the development of a greater understanding of the relationship between structure and function. This understanding is essential for better analyses of the structural changes associated with different disease states, and for the development of structure-based computer models of biological function.

Proceedings ArticleDOI
26 Dec 2007
TL;DR: This work introduces a new disparity smoothing algorithm that preserves depth discontinuities and enforces smoothness on a sub-pixel level, and presents a novel stereo constraint (the gravitational constraint) that assumes sorted disparity values in the vertical direction and guides global algorithms to reduce false matches, especially in low-textured regions.
Abstract: Dense stereo algorithms are able to estimate disparities at all pixels including untextured regions. Typically these disparities are evaluated at integer disparity steps. A subsequent sub-pixel interpolation often fails to propagate smoothness constraints on a sub-pixel level. The determination of sub-pixel accurate disparities is an active field of research; however, most sub-pixel estimation algorithms focus on textured image areas in order to show their precision. We propose to increase the sub-pixel accuracy in low-textured regions in three possible ways: First, we present an analysis that shows the benefit of evaluating the disparity space at fractional disparities. Second, we introduce a new disparity smoothing algorithm that preserves depth discontinuities and enforces smoothness on a sub-pixel level. Third, we present a novel stereo constraint (gravitational constraint) that assumes sorted disparity values in vertical direction and guides global algorithms to reduce false matches, especially in low-textured regions. Our goal in this work is to obtain an accurate 3D reconstruction. Large-scale 3D reconstruction will benefit heavily from these sub-pixel refinements, especially with a multi-baseline extension. Results based on semi-global matching, obtained with the above-mentioned algorithmic extensions, are shown for the Middlebury stereo ground truth data sets. The presented improvements, called ImproveSubPix, turn out to be one of the top-performing algorithms when evaluating the set on a sub-pixel level while being computationally efficient. Additional results are presented for urban scenes. The three improvements are independent of the underlying type of stereo algorithm and can also be applied to sparse stereo algorithms.
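As a point of reference for sub-pixel disparity estimation, the most common baseline is a parabola fit through the matching cost at the integer winner and its two neighbours. The sketch below shows that baseline (not the paper's smoothing or gravitational-constraint methods) recovering an exact sub-pixel minimum for a quadratic cost:

```python
def subpixel_disparity(cost, d):
    """Refine an integer disparity d by fitting a parabola through the
    matching costs at d-1, d, d+1 (a standard sub-pixel interpolation,
    not the specific method of the paper)."""
    c0, c1, c2 = cost(d - 1), cost(d), cost(d + 1)
    denom = c0 - 2 * c1 + c2
    if denom <= 0:              # no convex minimum: keep the integer result
        return float(d)
    return d + (c0 - c2) / (2 * denom)

# Exact recovery for a quadratic cost with true minimum at disparity 3.3.
cost = lambda d: (d - 3.3) ** 2
d_int = min(range(10), key=cost)          # integer winner-take-all: 3
d_sub = subpixel_disparity(cost, d_int)   # parabola fit: 3.3
```

The fit is exact only when the cost is locally quadratic; real matching costs are not, which is one reason methods like the paper's sub-pixel smoothing are needed in low-textured regions.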

Journal ArticleDOI
TL;DR: A novel segmental calibration approach, i.e., dividing the whole working range into two parts and calibrating each with its corresponding system parameters, is proposed to effectively improve the measurement accuracy of a large depth-of-view 3D laser scanner.

05 Dec 2007
TL;DR: The process chain of individual anatomy reconstruction is described, which consists of segmentation of medical image data and geometrical reconstruction of all relevant tissue interfaces, up to the generation of geometric approximations of three-dimensional anatomy suited for finite element analysis.
Abstract: For medical diagnosis, visualization, and model-based therapy planning, three-dimensional geometric reconstructions of individual anatomical structures are often indispensable. Computer-assisted, model-based planning procedures typically cover specific modifications of “virtual anatomy” as well as numeric simulations of associated phenomena, such as mechanical loads, fluid dynamics, or diffusion processes, in order to evaluate a potential therapeutic outcome. Since internal anatomical structures cannot be measured optically or mechanically in vivo, three-dimensional reconstruction of tomographic image data remains the method of choice. In this work the process chain of individual anatomy reconstruction is described, which consists of segmentation of medical image data and geometrical reconstruction of all relevant tissue interfaces, up to the generation of geometric approximations (boundary surfaces and volumetric meshes) of three-dimensional anatomy suited for finite element analysis. All results presented herein were generated with Amira®, a highly interactive software system for 3D data analysis, visualization and geometry reconstruction.

Journal ArticleDOI
10 Apr 2007
TL;DR: A method to estimate a patient-specific 3D shape of a femur from only two fluoroscopic images using a parametric femoral model is proposed and the usefulness of the proposed method is verified.
Abstract: In medical diagnostic imaging, X-ray CT scanners and MRI systems have been widely used to examine 3D shapes and internal structures of living organisms and bones. However, these apparatuses are generally very expensive and large. A prior arrangement is also required before an examination, which makes them unsuitable for urgent fracture diagnosis in emergency treatment. This paper proposes a method to estimate the patient-specific 3D shape of a femur from only two fluoroscopic images using a parametric femoral model. First, we develop a parametric femoral model by statistical analysis of a number of 3D femoral shapes created from CT images of 51 patients. Then, the pose and shape parameters of the parametric model are estimated from two 2D fluoroscopic images using a distance map constructed by the level set method. Experiments using synthesized images and fluoroscopic images of a phantom femur are successfully carried out, and the usefulness of the proposed method is verified.
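The statistical-analysis step can be sketched as a PCA shape model: stack the training shapes, extract the principal modes of variation, and represent any new shape by a few mode coefficients. The toy below uses synthetic "shapes" generated from three made-up modes (not real femur data) and shows that the fitted model reproduces a new shape from the same family:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: 51 shapes, each 40 flattened landmark coordinates,
# generated from 3 true modes of variation around a mean shape.
n_shapes, dim, n_modes = 51, 40, 3
true_mean = rng.normal(size=dim)
true_modes = np.linalg.qr(rng.normal(size=(dim, n_modes)))[0]  # orthonormal modes
weights = rng.normal(size=(n_shapes, n_modes)) * [3.0, 2.0, 1.0]
shapes = true_mean + weights @ true_modes.T

# Build the parametric model: mean shape + principal components (PCA via SVD).
mean = shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
components = Vt[:n_modes]            # the model's shape parameters span these

# A new shape from the same 3-mode family is reproduced by the model.
new_shape = true_mean + np.array([1.0, -2.0, 0.5]) @ true_modes.T
coeffs = (new_shape - mean) @ components.T      # shape parameters
reconstruction = mean + coeffs @ components
err = np.max(np.abs(reconstruction - new_shape))
```

In the paper's setting, the coefficients are not computed from a known 3D shape but optimized so that the projected model matches the two fluoroscopic views.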

Journal ArticleDOI
TL;DR: A point-by-point back projection (BP) method is described and compared with traditional SAA for the important clinical task of evaluating the morphology of small objects such as microcalcifications; it demonstrated improved rendition of microcalcifications in the direction perpendicular to the tube motion direction.
Abstract: Digital breast tomosynthesis is a three-dimensional imaging technique that provides an arbitrary set of reconstruction planes in the breast from a limited-angle series of projection images acquired while the x-ray tube moves. Traditional shift-and-add (SAA) tomosynthesis reconstruction is a common mathematical method that lines up each projection image based on its shift amount to generate reconstruction slices. With parallel-path geometry of tube motion, the path of the tube lies in a plane parallel to the plane of the detector. The traditional SAA algorithm gives shift amounts for each projection image calculated only along the direction of x-ray tube movement. However, with the partial isocentric motion of the x-ray tube in breast tomosynthesis, small objects such as microcalcifications appear blurred (for instance, about 1-4 pixels of blur for a microcalcification in a human breast) in traditional SAA images in the direction perpendicular to the direction of tube motion. Some digital breast tomosynthesis algorithms reported in the literature utilize a traditional one-dimensional SAA method that is not wholly suitable for isocentric motion. In this paper, a point-by-point back projection (BP) method is described and compared with traditional SAA for the important clinical task of evaluating the morphology of small objects such as microcalcifications. Impulse responses at different three-dimensional locations with five different combinations of imaging acquisition parameters were investigated. Reconstruction images of microcalcifications in a human subject were also evaluated. Results showed that with traditional SAA and a 45° view angle of tube movement with respect to the detector, at the same height above the detector the in-plane blur artifacts were obvious for objects farther away from the x-ray source. In a human subject, the appearance of calcifications was blurred in the direction orthogonal to the tube motion with traditional SAA. With point-by-point BP, the appearance of calcifications was sharper. The point-by-point BP method demonstrated improved rendition of microcalcifications in the direction perpendicular to the tube motion direction. With wide angles or for imaging of larger breasts, this point-by-point BP rather than the traditional SAA should also be considered as the basis of further deblurring algorithms that work in conjunction with the BP method.
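The geometric idea behind shift-and-add can be demonstrated in a 1D parallel-path toy: shift each projection so that points at the chosen reconstruction height align, then average. A point object produces a sharp peak when the refocused plane matches its true height and a blurred, low-amplitude response otherwise. All geometry values below are made up; this is the parallel-path idealization, not the isocentric geometry whose shortcomings the paper addresses:

```python
import numpy as np

# 1D toy of parallel-path tomosynthesis: a point at height z0 above the
# detector, with the x-ray tube moving along x at height H (arbitrary units).
H, z0, x0 = 100.0, 10.0, 0.0
tube_xs = np.arange(-20.0, 21.0, 5.0)
bins = np.arange(-30.0, 30.0, 0.5)          # detector sample positions

def detector_coord(x, z, t):
    """Where a point (x, z) projects on the detector for tube position t."""
    return (H * x - t * z) / (H - z)

# Each projection is an impulse at the point's projected position.
projections = []
for t in tube_xs:
    p = np.zeros(len(bins))
    p[np.abs(bins - detector_coord(x0, z0, t)).argmin()] = 1.0
    projections.append(p)

def saa_reconstruct(z, xs):
    """Shift-and-add: average each projection, shifted to refocus plane z."""
    recon = np.zeros(len(xs))
    for i, x in enumerate(xs):
        for t, p in zip(tube_xs, projections):
            recon[i] += p[np.abs(bins - detector_coord(x, z, t)).argmin()]
    return recon / len(tube_xs)

xs = np.arange(-5.0, 5.0, 0.5)
in_focus = saa_reconstruct(z0, xs).max()        # all shifts align: peak of 1.0
off_focus = saa_reconstruct(z0 + 20, xs).max()  # misaligned: blurred, low peak
```

With the real partial isocentric motion, the point's detector trace also moves perpendicular to the tube path, which a purely one-dimensional shift cannot compensate; that residual blur is what the paper's per-voxel back projection removes.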

Proceedings ArticleDOI
23 Sep 2007
TL;DR: This work proposes a robust technique for segmenting all sorts of graphics and texts in any orientation from document pages, essential for better OCR performance and vectorization in computer vision applications.
Abstract: Text, graphics and half-tones are the major constituents of any document page. While half-tone can be characterised by its inherent intensity variation, text and graphics share common characteristics except for a difference in spatial distribution. The success of document image analysis systems depends on the proper segmentation of text and graphics, as text is further subdivided into other classes such as heading, table and math-zones. Segmentation of graphics is essential for better OCR performance and for vectorization in computer vision applications. Graphics segmentation from text is particularly difficult in the context of graphics made of small components (dashed or dotted lines etc.) which have many features similar to text. Here we propose a robust technique for segmenting all sorts of graphics and text in any orientation from document pages.

Journal ArticleDOI
TL;DR: This work presents an approach for automatic 3D reconstruction of outdoor scenes using computer vision techniques, which is able to achieve faster than real-time performance, while maintaining an accuracy of a few cm.
Abstract: We present an approach for automatic 3D reconstruction of outdoor scenes using computer vision techniques. Our system collects video, GPS and INS data which are processed in real-time to produce geo-registered, detailed 3D models that represent the geometry and appearance of the world. These models are generated without manual measurements or markers in the scene and can be used for visualization from arbitrary viewpoints, and for documentation and archiving of large areas. Our system consists of a data acquisition system and a processing system that generates 3D models from the video sequences off-line but in real time. The GPS/INS measurements allow us to geo-register the pose of the camera at the time each frame was captured. The following stages of the processing pipeline perform dense reconstruction and generate the 3D models, which are in the form of a triangular mesh and a set of images that provide texture. By leveraging the processing power of the GPU, we are able to achieve faster than real-time performance, while maintaining an accuracy of a few cm.

Journal ArticleDOI
TL;DR: A methodology that allows such a reconstruction of CSD by generalizing the one-dimensional inverse CSD method is presented, illustrated on potentials evoked by stimulation of a bunch of whiskers, recorded in a slab of the rat forebrain.
Abstract: Estimation of the continuous current-source density in bulk tissue from a finite set of electrode measurements is a daunting task. Here we present a methodology which allows such a reconstruction by generalizing the one-dimensional inverse CSD method. The idea is to assume a particular plausible form of CSD within a class described by a number of parameters which can be estimated from the available data, for example a set of cubic splines in 3D spanned on a fixed grid of the same size as the set of measurements. To avoid specificity of a particular choice of reconstruction grid we add random jitter to the point positions and show that this leads to a correct reconstruction. We propose different ways of improving the quality of reconstruction which take into account the sources located outside the recording region through appropriate boundary treatment. The efficiency of the traditional CSD and variants of inverse CSD methods is compared using several fidelity measures on different test data to investigate when one of the methods is superior to the others. The methods are illustrated with reconstructions of CSD from potentials evoked by stimulation of a bunch of whiskers, recorded in a slab of the rat forebrain on a grid of 4×5×7 positions.
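The inverse-CSD recipe (parameterize the CSD by amplitudes on a grid, build a linear forward model from sources to electrode potentials, then invert it) can be sketched in 1D. The sketch below uses a simple point-source potential kernel with a made-up conductivity `sigma` and lateral offset `h`, not the paper's 3D spline parameterization, jitter, or boundary treatment:

```python
import numpy as np

sigma = 0.3                       # assumed tissue conductivity (arbitrary units)
n = 8
z = np.arange(n, dtype=float)     # electrode depths along the probe
h = 0.5                           # assumed lateral offset of sources from electrodes

# Linear forward model phi = F @ c from the point-source potential
# 1 / (4*pi*sigma*r), evaluated between every source and every electrode.
F = 1.0 / (4 * np.pi * sigma *
           np.sqrt((z[:, None] - z[None, :]) ** 2 + h ** 2))

rng = np.random.default_rng(2)
c_true = rng.normal(size=n)       # ground-truth source amplitudes
phi = F @ c_true                  # potentials "recorded" at the electrodes
c_est = np.linalg.solve(F, phi)   # inverse CSD: invert the forward model
err = np.max(np.abs(c_est - c_true))
```

With noise-free data the inversion is exact; in practice the forward matrix is ill-conditioned and noise amplification is the central difficulty, which is why the choice of CSD parameterization and boundary handling studied in the paper matters.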

Proceedings ArticleDOI
28 May 2007
TL;DR: A robust, efficient, and easy-to-implement pixel classification algorithm that correctly establishes the lower and upper bounds of the possible intensity values of an illuminated pixel and of a non-illuminated pixel, and that can be easily applied to previously captured data.
Abstract: Modeling 3D objects and scenes is an important part of computer graphics. One approach to modeling is projecting binary patterns onto the scene in order to obtain correspondences and reconstruct a densely sampled 3D model. In such structured light systems, determining whether a pixel is directly illuminated by the projector is essential to decoding the patterns. In this paper, we introduce a robust, efficient, and easy-to-implement pixel classification algorithm for this purpose. Our method correctly establishes the lower and upper bounds of the possible intensity values of an illuminated pixel and of a non-illuminated pixel. Based on these two intervals, our method classifies a pixel by determining whether its intensity is within one interval and not in the other. Experiments show that our method improves both the quantity of decoded pixels and the quality of the final reconstruction, producing a dense set of 3D points even for complex scenes with indirect lighting effects. Furthermore, our method does not require newly designed patterns; therefore, it can be easily applied to previously captured data.
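A minimal sketch of the interval idea, with assumed thresholds rather than the paper's derived bounds: given each pixel's intensity under the all-white and all-black reference patterns, a pixel is labeled only if its intensity falls in exactly one of the two intervals.

```python
import numpy as np

def classify(I, I_on, I_off, eps=2.0):
    """Interval-based pixel classification (a sketch; the interval bounds
    here are assumptions, not the paper's exact rule).
    I_on / I_off: per-pixel intensities under the all-white and all-black
    reference patterns. Returns +1 (lit), -1 (unlit), 0 (undecidable)."""
    mid = (I_on + I_off) / 2.0
    in_lit = (I >= mid) & (I <= I_on + eps)       # illuminated interval
    in_unlit = (I >= I_off - eps) & (I <= mid)    # non-illuminated interval
    out = np.zeros(np.shape(I), dtype=int)
    out[in_lit & ~in_unlit] = 1    # in exactly one interval: confident label
    out[in_unlit & ~in_lit] = -1
    return out                     # pixels in both intervals stay 0

# Three example pixels: clearly lit, clearly unlit, and ambiguous
labels = classify(np.array([200.0, 40.0, 120.0]),
                  np.array([210.0, 205.0, 200.0]),
                  np.array([30.0, 35.0, 40.0]))
```

The "in one interval and not the other" test is what makes the rule robust to indirect light: ambiguous pixels are flagged rather than force-decoded.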

Patent
10 Oct 2007
TL;DR: In this article, the original 2D rotational projections are combined in an overlaying manner with corresponding views of a 3D reconstruction; the combined visualization allows an easy check of whether findings in the 3D RA volume, such as stenoses or aneurysms, are over- or underestimated due to, e.g., incomplete filling with contrast agent and/or spectral beam hardening during the rotational scan.
Abstract: An improved visualization of an object under examination (107) is described. Original 2D rotational projections are combined, preferably in an overlaying manner, with corresponding views of a 3D reconstruction. By showing the 2D rotational projections in combination with the 3D reconstruction, 3D vessel information can be compared with the original 2D rotational image information over different rotational angles. In a clinical setup, the combined visualization allows an easy check of whether findings in the 3D RA volume, such as stenoses or aneurysms, are over- or underestimated due to, e.g., incomplete filling with contrast agent and/or spectral beam hardening during the rotational scan.
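The overlay itself can be as simple as an alpha blend of an original 2D rotational projection with a rendering of the 3D reconstruction taken at the same rotational angle; the blend weight below is an assumption for illustration, not the patent's specific method.

```python
import numpy as np

def overlay(projection_2d, rendering_3d, alpha=0.5):
    """Per-pixel convex combination of a 2D rotational projection and a
    same-angle rendering of the 3D reconstruction (same shape, float)."""
    return alpha * projection_2d + (1.0 - alpha) * rendering_3d

# Toy constant-intensity images standing in for a projection and a rendering
proj = np.full((4, 4), 100.0)
rend = np.full((4, 4), 200.0)
mix = overlay(proj, rend, alpha=0.25)
```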

Proceedings ArticleDOI
28 May 2007
TL;DR: The potential of bi-dimensional pseudo-random color patterns for structured lighting is demonstrated in terms of pattern computation, ease of extraction, matching confidence level, and density of depth estimation for 3D reconstruction.
Abstract: As an extension to classical structured lighting techniques, the use of bi-dimensional pseudo-random color codes is explored to perform range sensing with variable density from a stereo-calibrated rig and a projector. Pseudo-random codes are used to create artificial textures on a scene, which are extracted and grouped in a confidence map to ensure reliable feature matching between pairs of images taken from the two cameras. Depth estimation is performed on corresponding points with progressive refinement as the pseudo-random pattern projection is marched over the scene to increase the density of matched features and achieve dense 3D reconstruction. The potential of bi-dimensional pseudo-random color patterns for structured lighting is demonstrated in terms of pattern computation, ease of extraction, matching confidence level, and density of depth estimation for 3D reconstruction.
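One way to obtain such a bi-dimensional pseudo-random code is to require every small window of the projected color grid to be unique, so that any observed window identifies its projector coordinates. The brute-force rejection sampler below is an illustrative assumption; practical perfect maps are usually built constructively.

```python
import numpy as np

def make_code(h, w, n_colors=4, seed=0, max_tries=10000):
    """Draw random color grids until every 2x2 window is unique,
    yielding a (small) bi-dimensional pseudo-random color code."""
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        grid = rng.integers(0, n_colors, size=(h, w))
        windows = [tuple(grid[i:i + 2, j:j + 2].ravel())
                   for i in range(h - 1) for j in range(w - 1)]
        if len(set(windows)) == len(windows):   # all windows distinct
            return grid
    raise RuntimeError("no valid code found; enlarge palette or window")

code = make_code(6, 6)   # 6x6 grid of color indices, 25 unique 2x2 windows
```

Rejection sampling only scales to small grids; for projector-sized patterns, constructive perfect-map or De Bruijn-torus techniques are the practical choice.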

Journal ArticleDOI
TL;DR: In this paper, the authors estimate the 3D shape and reflectance properties of an object made of a single material from a set of calibrated views using the View Independent Reflectance Map (VIRM).
Abstract: We consider the problem of estimating the 3D shape and reflectance properties of an object made of a single material from a set of calibrated views. To model the reflectance, we propose to use the View Independent Reflectance Map (VIRM), which is a representation of the joint effect of the diffuse+specular Bidirectional Reflectance Distribution Function (BRDF) and the environment illumination. The object shape is parameterized using a triangular mesh. We pose the estimation problem as minimizing the cost of matching the input images against the images synthesized using the shape and VIRM estimates. We show that, by enforcing a constant value of the VIRM as a global constraint, we can minimize the cost function by iterating between VIRM estimation and shape estimation. Experimental results on both synthetic and real objects show that our algorithm can recover both the 3D shape and the diffuse/specular reflectance information. Our algorithm does not require the light sources to be known or calibrated. The estimated VIRM can be used to predict the appearance of objects made of the same material from novel viewpoints and under transformed illumination.
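The alternation between VIRM and shape estimation can be illustrated on a one-parameter toy problem (a stand-in, not the authors' mesh-based algorithm): with the shape fixed, the best reflectance value has a closed least-squares form; with the reflectance fixed, the shape parameter is refined, here by a simple grid search.

```python
import numpy as np

angles = np.linspace(0.2, 1.2, 20)          # toy viewing directions (1 DoF)
true_r, true_s = 0.8, 0.5                   # "VIRM" value and shape parameter
images = true_r * np.cos(true_s * angles)   # synthetic observations

def cost(r, s):
    """Image-matching cost between synthesized and observed intensities."""
    return np.sum((r * np.cos(s * angles) - images) ** 2)

s_grid = np.linspace(0.0, 2.0, 401)
r, s = 1.0, 0.1                             # deliberately wrong initial guess
for _ in range(10):
    # reflectance step: closed-form least squares given the current shape
    basis = np.cos(s * angles)
    r = (images @ basis) / (basis @ basis)
    # shape step: global 1-D grid search on the matching cost
    s = s_grid[int(np.argmin([cost(r, x) for x in s_grid]))]
```

Each half-step can only decrease the matching cost, which is the same monotonicity argument that justifies the paper's alternation between VIRM and mesh updates.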