
Showing papers on "Point (geometry) published in 2005"


Journal ArticleDOI
TL;DR: This paper is a general description of spatstat and an introduction for new users.
Abstract: spatstat is a package for analyzing spatial point pattern data. Its functionality includes exploratory data analysis, model-fitting, and simulation. It is designed to handle realistic datasets, including inhomogeneous point patterns, spatial sampling regions of arbitrary shape, extra covariate data, and "marks" attached to the points of the point pattern. A unique feature of spatstat is its generic algorithm for fitting point process models to point pattern data. The interface to this algorithm is a function ppm that is strongly analogous to lm and glm. This paper is a general description of spatstat and an introduction for new users.
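
spatstat and its ppm function live in R; purely as a hedged illustration of the likelihood machinery such a fitter rests on, the Python sketch below fits a log-linear trend to a point pattern by maximizing the inhomogeneous Poisson likelihood, with the window integral approximated on a quadrature grid (in the spirit of the Berman-Turner device). The pattern, window, and trend form are toy assumptions, not spatstat's API.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
pts = rng.random((200, 2))        # hypothetical point pattern in the unit square

def neg_loglik(beta, pts, n_grid=50):
    """Negative log-likelihood of an inhomogeneous Poisson process with
    log-linear intensity lambda(u) = exp(b0 + b1*u_x + b2*u_y) on [0,1]^2;
    the integral term is approximated on a quadrature grid."""
    b0, b1, b2 = beta
    log_lam = b0 + b1 * pts[:, 0] + b2 * pts[:, 1]
    g = (np.arange(n_grid) + 0.5) / n_grid
    gx, gy = np.meshgrid(g, g)
    integral = np.exp(b0 + b1 * gx + b2 * gy).mean()   # window area |W| = 1
    return -(log_lam.sum() - integral)

fit = minimize(neg_loglik, x0=np.zeros(3), args=(pts,))
print("fitted trend coefficients:", fit.x)   # in spirit, ppm(X ~ x + y)
```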

2,268 citations


Proceedings ArticleDOI
04 Jul 2005
TL;DR: This work presents an algorithm for the automatic alignment of two 3D shapes (data and model), without any assumptions about their initial positions, and develops a fast branch-and-bound algorithm based on distance matrix comparisons to select the optimal correspondence set and bring the two shapes into a coarse alignment.
Abstract: We present an algorithm for the automatic alignment of two 3D shapes (data and model), without any assumptions about their initial positions. The algorithm computes for each surface point a descriptor based on local geometry that is robust to noise. A small number of feature points are automatically picked from the data shape according to the uniqueness of the descriptor value at the point. For each feature point on the data, we use the descriptor values of the model to find potential corresponding points. We then develop a fast branch-and-bound algorithm based on distance matrix comparisons to select the optimal correspondence set and bring the two shapes into a coarse alignment. The result of our alignment algorithm is used as the initialization to ICP (iterative closest point) and its variants for fine registration of the data to the model. Our algorithm can be used for matching shapes that overlap only over parts of their extent, for building models from partial range scans, as well as for simple symmetry detection, and for matching shapes undergoing articulated motion.
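
The geometric fact behind the distance-matrix comparisons is that a rigid motion preserves pairwise distances, so two candidate correspondences whose data-side and model-side distances disagree can never coexist in a valid set. A hedged sketch of that pairwise test, which is what lets branch-and-bound prune whole subtrees (the bound bookkeeping of the actual search is omitted):

```python
import numpy as np

def consistent(corrs, data_pts, model_pts, tol=0.05):
    """Distance-matrix consistency test for a candidate correspondence set.
    `corrs` is a list of (data_index, model_index) pairs; a rigid motion
    preserves pairwise distances, so every pair of correspondences must
    agree on its mutual distance up to `tol` (an assumed noise bound)."""
    for a in range(len(corrs)):
        for b in range(a + 1, len(corrs)):
            i, u = corrs[a]
            j, v = corrs[b]
            dd = np.linalg.norm(data_pts[i] - data_pts[j])
            dm = np.linalg.norm(model_pts[u] - model_pts[v])
            if abs(dd - dm) > tol:
                return False   # this branch of the search can be pruned
    return True
```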

634 citations


Journal ArticleDOI
TL;DR: In this paper, a projection algorithm is proposed that minimizes a proximity function measuring the distance of a point from all the sets; the formulation generalizes the convex feasibility problem as well as the two-sets split feasibility problem.
Abstract: The multiple-sets split feasibility problem requires finding a point closest to a family of closed convex sets in one space such that its image under a linear transformation will be closest to another family of closed convex sets in the image space. It can be a model for many inverse problems where constraints are imposed on the solutions in the domain of a linear operator as well as in the operator's range. It generalizes the convex feasibility problem as well as the two-sets split feasibility problem. We propose a projection algorithm that minimizes a proximity function that measures the distance of a point from all sets. The formulation, as well as the algorithm, generalize earlier work on the split feasibility problem. We offer also a generalization to proximity functions with Bregman distances. Application of the method to the inverse problem of intensity-modulated radiation therapy treatment planning is studied in a separate companion paper and is here only described briefly.
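
Concretely, with proximity function p(x) = (1/2) sum_i alpha_i ||x - P_Ci(x)||^2 + (1/2) sum_j beta_j ||Ax - P_Qj(Ax)||^2, the gradient sum_i alpha_i (x - P_Ci(x)) + A^T sum_j beta_j (Ax - P_Qj(Ax)) is available whenever the individual projections are. Below is a minimal sketch with Euclidean balls standing in for the closed convex sets; the sets, weights, and step size are toy assumptions (the step must stay below 2/L, where L is the gradient's Lipschitz constant):

```python
import numpy as np

def proj_ball(x, center, radius):
    """Orthogonal projection onto a Euclidean ball."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def msfp_step(x, A, C, Q, alphas, betas, step):
    """One gradient step on the proximity function for the multiple-sets
    split feasibility problem (balls as the convex sets)."""
    grad = sum(a * (x - proj_ball(x, c, r)) for a, (c, r) in zip(alphas, C))
    Ax = A @ x
    grad += A.T @ sum(b * (Ax - proj_ball(Ax, c, r)) for b, (c, r) in zip(betas, Q))
    return x - step * grad

# toy instance in R^2: two domain balls C, one image-space ball Q
A = np.array([[1.0, 0.0], [0.0, 2.0]])
C = [(np.array([0.0, 0.0]), 1.0), (np.array([1.5, 0.0]), 1.0)]
Q = [(np.array([1.0, 1.0]), 0.5)]
x = np.array([5.0, 5.0])
for _ in range(200):
    x = msfp_step(x, A, C, Q, alphas=[0.5, 0.5], betas=[1.0], step=0.2)
print(x)
```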

608 citations


Journal ArticleDOI
TL;DR: This surface matching technique is a generalization of the least squares image matching concept and offers high flexibility for any kind of 3D surface correspondence problem, as well as statistical tools for the analysis of the quality of final matching results.
Abstract: The automatic co-registration of point clouds, representing 3D surfaces, is a relevant problem in 3D modeling. This multiple registration problem can be defined as a surface matching task. We treat it as least squares matching of overlapping surfaces. The surface may have been digitized/sampled point by point using a laser scanner device, a photogrammetric method or other surface measurement techniques. Our proposed method estimates the transformation parameters of one or more 3D search surfaces with respect to a 3D template surface, using the Generalized Gauss–Markoff model, minimizing the sum of squares of the Euclidean distances between the surfaces. This formulation gives the opportunity of matching arbitrarily oriented 3D surface patches. It fully considers 3D geometry. Besides the mathematical model and execution aspects we address the further extensions of the basic model. We also show how this method can be used for curve matching in 3D space and matching of curves to surfaces. Some practical examples based on the registration of close-range laser scanner and photogrammetric point clouds are presented for the demonstration of the method. This surface matching technique is a generalization of the least squares image matching concept and offers high flexibility for any kind of 3D surface correspondence problem, as well as statistical tools for the analysis of the quality of final matching results.
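
The full Generalized Gauss-Markoff formulation is considerably richer (observation weights, patch parameterizations, statistical testing), but the core iteration of least-squares surface matching can be sketched as a Gauss-Newton step that linearizes a small rigid motion and minimizes squared distances to the template. The point-to-plane residual and the known correspondences below are simplifying assumptions of this sketch:

```python
import numpy as np

def lsq_match_step(src, dst, dst_normals):
    """One Gauss-Newton step of least-squares surface matching (hedged,
    point-to-plane simplification): linearize a small rigid motion
    x -> x + r x x + t and minimize the sum of squared distances along
    the template normals. src[k] is assumed matched to dst[k]."""
    J = np.zeros((len(src), 6))
    res = np.zeros(len(src))
    for k, (p, q, n) in enumerate(zip(src, dst, dst_normals)):
        J[k, :3] = np.cross(p, n)   # rotation part (small-angle)
        J[k, 3:] = n                # translation part
        res[k] = np.dot(p - q, n)   # signed distance along the normal
    x, *_ = np.linalg.lstsq(J, -res, rcond=None)
    r, t = x[:3], x[3:]
    R = np.eye(3) + np.array([[0, -r[2], r[1]],
                              [r[2], 0, -r[0]],
                              [-r[1], r[0], 0]])   # linearized rotation
    return src @ R.T + t            # updated search-surface points
```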

569 citations


Journal ArticleDOI
TL;DR: This paper generalizes a successful static model of relationships into a dynamic model that accounts for friendships drifting over time and shows how to make it tractable to learn such models from data, even as the number of entities n gets large.
Abstract: This paper explores two aspects of social network modeling. First, we generalize a successful static model of relationships into a dynamic model that accounts for friendships drifting over time. Second, we show how to make it tractable to learn such models from data, even as the number of entities n gets large. The generalized model associates each entity with a point in p-dimensional Euclidean latent space. The points can move as time progresses but large moves in latent space are improbable. Observed links between entities are more likely if the entities are close in latent space. We show how to make such a model tractable (sub-quadratic in the number of entities) by the use of appropriate kernel functions for similarity in latent space; the use of low dimensional KD-trees; a new efficient dynamic adaptation of multidimensional scaling for a first pass of approximate projection of entities into latent space; and an efficient conjugate gradient update rule for non-linear local optimization in which amortized time per entity during an update is O(log n). We use both synthetic and real-world data on up to 11,000 entities which indicate near-linear scaling in computation time and improved performance over four alternative approaches. We also illustrate the system operating on twelve years of NIPS co-authorship data.
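
A hedged sketch of the model's objective for a single time step: link probabilities decay with latent distance, and a Gaussian random-walk prior penalizes large moves. The paper's kernelized likelihood, KD-trees, and O(log n) update machinery are omitted, and the logistic link and parameter names are assumptions of this sketch:

```python
import numpy as np

def log_posterior(Z_t, Z_prev, links, alpha=1.0, sigma=0.1):
    """Log posterior of latent positions Z_t given the previous positions
    Z_prev and observed dyads `links` (tuples (i, j, 0/1)). Links are more
    likely when entities are close in latent space; large moves between
    time steps are improbable under the random-walk prior."""
    lp = 0.0
    for i, j, linked in links:
        d = np.linalg.norm(Z_t[i] - Z_t[j])
        p = 1.0 / (1.0 + np.exp(d - alpha))     # logistic in latent distance
        lp += np.log(p if linked else 1.0 - p)
    lp -= ((Z_t - Z_prev) ** 2).sum() / (2 * sigma ** 2)   # movement penalty
    return lp
```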

426 citations


Journal ArticleDOI
TL;DR: Both an in-memory and a disk-based implementation of the HilOut algorithm are presented, together with a thorough scaling analysis for real and synthetic data sets showing that the algorithm scales well in both cases.
Abstract: A new definition of distance-based outlier and an algorithm, called HilOut, designed to efficiently detect the top n outliers of a large and high-dimensional data set are proposed. Given an integer k, the weight of a point is defined as the sum of the distances separating it from its k nearest neighbors. Outliers are those points scoring the largest values of weight. The algorithm HilOut makes use of the notion of space-filling curve to linearize the data set, and it consists of two phases. The first phase provides an approximate solution, within a rough factor, after the execution of at most d + 1 sorts and scans of the data set, with temporal cost quadratic in d and linear in N and in k, where d is the number of dimensions of the data set and N is the number of points in the data set. During this phase, the algorithm isolates candidate outlier points and reduces this set at each iteration. If the size of this set becomes n, then the algorithm stops, reporting the exact solution. The second phase calculates the exact solution with a final scan examining further the candidate outliers that remained after the first phase. Experimental results show that the algorithm always stops, reporting the exact solution, during the first phase after much less than d + 1 steps. We present both an in-memory and a disk-based implementation of the HilOut algorithm and a thorough scaling analysis for real and synthetic data sets showing that the algorithm scales well in both cases.
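
The outlier definition itself is compact; a brute-force reference version is below, and HilOut's whole point is to avoid exactly this O(N^2) distance computation by sorting along Hilbert space-filling curves:

```python
import numpy as np

def top_n_outliers(X, k, n):
    """Reference (brute-force) version of the HilOut outlier score: the
    weight of a point is the sum of the distances to its k nearest
    neighbors, and the points with the top-n weights are the outliers."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D.sort(axis=1)                        # row-wise ascending distances
    weights = D[:, 1:k + 1].sum(axis=1)   # skip column 0 (self-distance)
    return np.argsort(weights)[::-1][:n], weights
```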

348 citations


Proceedings ArticleDOI
04 Jul 2005
TL;DR: A novel approach to the surface reconstruction problem that takes as its input an oriented point set and returns a solid, water-tight model by using Stokes' Theorem to compute the characteristic function of the solid model.
Abstract: In this paper we present a novel approach to the surface reconstruction problem that takes as its input an oriented point set and returns a solid, water-tight model. The idea of our approach is to use Stokes' Theorem to compute the characteristic function of the solid model (the function that is equal to one inside the model and zero outside of it). Specifically, we provide an efficient method for computing the Fourier coefficients of the characteristic function using only the surface samples and normals, we compute the inverse Fourier transform to get back the characteristic function, and we use iso-surfacing techniques to extract the boundary of the solid model. The advantage of our approach is that it provides an automatic, simple, and efficient method for computing the solid model represented by a point set without requiring the establishment of adjacency relations between samples or iteratively solving large systems of linear equations. Furthermore, our approach can be directly applied to models with holes and cracks, providing a method for hole-filling and zippering of disconnected polygonal models.
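
As a hedged sketch of the idea (not the paper's exact discretization): since the gradient of the characteristic function chi is a surface distribution of normals, chi's Fourier coefficients can be accumulated directly from the oriented samples and inverted with an FFT. Points in [0,1)^3, outward unit normals, roughly uniform sampling, and the omission of per-sample area weights are all assumptions here, and the sign convention depends on the normal orientation.

```python
import numpy as np

def indicator_from_oriented_points(points, normals, res=32):
    """Accumulate the Fourier coefficients of the characteristic function
    chi from oriented surface samples, then invert with an FFT. Assumes
    points in [0,1)^3, outward unit normals, roughly uniform sampling."""
    f = np.fft.fftfreq(res, d=1.0 / res)               # integer frequencies
    xi = np.stack(np.meshgrid(f, f, f, indexing="ij"), axis=-1)
    xi_sq = (xi ** 2).sum(-1)
    coeffs = np.zeros((res, res, res), dtype=complex)
    for p, n in zip(points, normals):
        coeffs += (xi @ n) * np.exp(-2j * np.pi * (xi @ p))
    with np.errstate(divide="ignore", invalid="ignore"):
        coeffs *= 1j / (2 * np.pi * xi_sq)
    coeffs[0, 0, 0] = 0.0        # DC term is fixed later by thresholding
    chi = np.fft.ifftn(coeffs).real
    return chi   # iso-surface (e.g. marching cubes) at chi's mean gives the boundary
```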

204 citations


Journal ArticleDOI
TL;DR: Techniques which allow us to use triharmonic radial basis functions for real-time freeform shape editing are presented, and an incremental least-squares method enables us to approximately solve the involved linear systems in a robust and efficient manner.
Abstract: Current surface-based methods for interactive freeform editing of high resolution 3D models are very powerful, but at the same time require a certain minimum tessellation or sampling quality in order to guarantee sufficient robustness. In contrast to this, space deformation techniques do not depend on the underlying surface representation and hence are affected neither by its complexity nor by its quality aspects. However, while high quality deformations can be derived from variational optimization analogously to surface-based methods, the major drawback lies in the computation and evaluation, which is considerably more expensive for volumetric space deformations. In this paper we present techniques which allow us to use triharmonic radial basis functions for real-time freeform shape editing. An incremental least-squares method enables us to approximately solve the involved linear systems in a robust and efficient manner, and by precomputing a special set of deformation basis functions we are able to significantly reduce the per-frame costs. Moreover, evaluating these linear basis functions on the GPU finally allows us to deform highly complex polygon meshes or point-based models at a rate of 30M vertices or 13M splats per second, respectively.
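
For orientation, phi(r) = r^3 is the triharmonic radial basis function in 3D, and a basic (non-incremental) fit interpolates handle displacements with it plus an affine term; the paper's contributions (incremental solver, precomputed deformation bases, GPU evaluation) are not reproduced in this hedged sketch:

```python
import numpy as np

def fit_triharmonic_rbf(handles, displacements):
    """Fit a space deformation d(x) = sum_i w_i |x - c_i|^3 + affine(x)
    from handle points and their prescribed displacements; phi(r) = r^3
    is the triharmonic RBF in 3D. Direct dense solve, no incremental or
    GPU machinery."""
    m = len(handles)
    K = np.linalg.norm(handles[:, None] - handles[None, :], axis=-1) ** 3
    P = np.hstack([handles, np.ones((m, 1))])          # affine polynomial part
    A = np.block([[K, P], [P.T, np.zeros((4, 4))]])
    rhs = np.vstack([displacements, np.zeros((4, 3))])
    sol = np.linalg.solve(A, rhs)
    return sol[:m], sol[m:]                            # RBF weights, affine coeffs

def deform(points, handles, weights, affine):
    """Apply the fitted space deformation to arbitrary points."""
    K = np.linalg.norm(points[:, None] - handles[None, :], axis=-1) ** 3
    P = np.hstack([points, np.ones((len(points), 1))])
    return points + K @ weights + P @ affine
```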

194 citations


Proceedings ArticleDOI
01 Dec 2005
TL;DR: This paper devises provable approximation schemes for locating a base station and constructing a network among a set of sensors, each of which has a data stream to get to the base station.
Abstract: This paper studies two problems that arise in the optimization of sensor networks. First, we devise provable approximation schemes for locating a base station and constructing a network among a set of sensors, each of which has a data stream to get to the base station. Subject to power constraints at the sensors, our goal is to locate the base station and establish a network in order to maximize the lifespan of the network. Second, we study optimal sensor placement problems for quality coverage of given domains cluttered with obstacles. We assume "line-of-sight" sensors, which sense a point only if the straight segment connecting the sensor to this point (the "line of sight") does not cross any obstacle; since obstacles occlude area from line-of-sight sensors, the goal is to minimize the number of sensors required to have each point "well covered" according to precise criteria (e.g., that each point is seen by two sensors that form at least an angle α, or that each point is seen by three sensors that form a triangle containing the point).
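
A hedged sketch of the second problem's coverage criterion, with obstacles reduced to 2D segments (an assumption of this sketch): a point is well covered when at least two line-of-sight sensors see it at a sufficiently wide angle.

```python
import numpy as np

def seg_intersect(p1, p2, q1, q2):
    """True if segments p1p2 and q1q2 properly intersect (2D)."""
    def orient(a, b, c):
        return np.sign((b[0] - a[0]) * (c[1] - a[1])
                       - (b[1] - a[1]) * (c[0] - a[0]))
    return (orient(p1, p2, q1) != orient(p1, p2, q2) and
            orient(q1, q2, p1) != orient(q1, q2, p2))

def well_covered(point, sensors, obstacles, min_angle=np.pi / 6):
    """Check the 'two sensors at angle >= min_angle' criterion under the
    line-of-sight model; `obstacles` is a list of segment endpoints (a, b)."""
    visible = [s for s in sensors
               if not any(seg_intersect(s, point, a, b) for a, b in obstacles)]
    for i in range(len(visible)):
        for j in range(i + 1, len(visible)):
            u, v = visible[i] - point, visible[j] - point
            cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            if np.arccos(np.clip(cosang, -1, 1)) >= min_angle:
                return True
    return False
```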

161 citations


Journal ArticleDOI
TL;DR: A new method based on fitted directional tangent vectors at the data point has been developed to determine its normal vector, and it is demonstrated that the present method is robust and estimates normal vectors with reliable consistency in comparison with the existing plane fitting, quadric surface fitting, triangle-based area weighted average, and triangle-based angle weighted average methods.
Abstract: Reliable estimation of the normal vector at a discrete data point in a scanned cloud data set is essential to the correct implementation of modern CAD/CAM technologies when the continuous CAD model representation is not available. A new method based on fitted directional tangent vectors at the data point has been developed to determine its normal vector. A local Voronoi mesh, based on the 3D Voronoi diagram and the proposed mesh growing heuristic rules, is first created to identify the neighboring points that characterize the local geometry. These local Voronoi mesh neighbors are used to fit a group of quadric curves through which the directional tangent vectors are obtained. The normal vector is then determined by minimizing the variance of the dot products between a normal vector candidate and the associated directional tangent vectors. Implementation results from extensive simulated and practical point cloud data sets have demonstrated that the present method is robust and estimates normal vectors with reliable consistency in comparison with the existing plane fitting, quadric surface fitting, triangle-based area weighted average, and triangle-based angle weighted average methods.
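
The final minimization has a closed form: for a unit candidate normal n, the variance of the dot products n . t_i equals n^T Cov(t) n, so the minimizer is the eigenvector of the tangent covariance with the smallest eigenvalue. A sketch of just this step (the Voronoi-mesh neighbor selection and quadric curve fitting are assumed already done):

```python
import numpy as np

def normal_from_tangents(tangents):
    """Given the fitted directional tangent vectors at a data point, return
    the unit normal minimizing the variance of its dot products with them:
    Var(n . t_i) = n^T Cov(t) n, minimized by the smallest eigenvector."""
    T = np.asarray(tangents)
    C = np.cov(T, rowvar=False)          # 3x3 covariance of tangent vectors
    evals, evecs = np.linalg.eigh(C)     # ascending eigenvalues
    return evecs[:, 0]                   # unit normal estimate
```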

156 citations


Journal ArticleDOI
TL;DR: A new method for anisotropic fairing of a point-sampled surface using an anisotropic geometric mean curvature flow is presented, which removes noise from a point set while it detects and enhances geometric features of the surface such as edges and corners.

Journal ArticleDOI
TL;DR: In this paper, 3D laser scanning data are used to characterize discontinuous rock masses in an unbiased, rapid, and accurate manner that is less expensive than traditional manual survey and analysis methods.
Abstract: Three-dimensional (3D) laser scanning data can be used to characterize discontinuous rock masses in an unbiased, rapid, and accurate manner. With 3D laser scanning, it is now possible to measure rock faces whose access is restricted or rock slopes along highways or railway lines where working conditions are hazardous. The proposed method is less expensive than traditional manual survey and analysis methods. Laser scanning is a relatively new surveying technique that yields a so-called point cloud set of data; every single point represents a point in 3D space of the scanned rock surface. Because the density of the point cloud can be high (on the order of 5 mm to 1 cm), it allows for an accurate reconstruction of the original rock surface in the form of a 3D interpolated and meshed surface using different interpolation techniques. Through geometric analysis of this 3D mesh and plotting of the facet orientations in a polar plot, it is possible to observe clusters that represent different rock mass discontinuity sets.

Proceedings ArticleDOI
21 Jun 2005
TL;DR: In this article, a mean-shift based clustering procedure is proposed for robust filtering of a noisy set of points sampled from a smooth surface using a kernel density estimation technique for point clustering.
Abstract: In this paper, we develop a method for robust filtering of a noisy set of points sampled from a smooth surface. The main idea of the method consists of using a kernel density estimation technique for point clustering. Specifically, we use a mean-shift based clustering procedure. With every point of the input data we associate a local likelihood measure capturing the probability that a 3D point is located on the sampled surface. The likelihood measure takes into account the normal directions estimated at the scattered points. Our filtering procedure suppresses noise of different amplitudes and allows for an easy detection of outliers, which are then automatically removed by simple thresholding. The remaining set of maximum likelihood points delivers an accurate point-based approximation of the surface. We also show that while some established meshing techniques often fail to reconstruct the surface from original noisy point scattered data, they work well in conjunction with our filtering method.
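
A hedged sketch of the clustering core: plain mean-shift moves every point to a mode of a Gaussian kernel density estimate of the samples. The paper's normal-aware likelihood weighting and the outlier thresholding are omitted, and the bandwidth is an assumed parameter.

```python
import numpy as np

def mean_shift_filter(X, bandwidth=0.1, iters=20):
    """Move every point toward a mode of the Gaussian kernel density
    estimate of the input samples X (shape (N, 3)); points that converge
    to the same mode form one cluster on the sampled surface."""
    Y = X.astype(float).copy()
    for _ in range(iters):
        for i in range(len(Y)):
            w = np.exp(-((X - Y[i]) ** 2).sum(1) / (2 * bandwidth ** 2))
            Y[i] = (w[:, None] * X).sum(0) / w.sum()   # mean-shift update
    return Y
```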

Patent
28 Sep 2005
TL;DR: A system and method for managing storage space are presented, comprising detecting a free storage space threshold condition for a storage volume and automatically applying a space management technique to achieve a free storage space threshold condition.
Abstract: A system and method are provided to manage storage space. The method comprises detecting a free storage space threshold condition for a storage volume and automatically applying a space management technique to achieve a free storage space threshold condition. Space management techniques comprise deleting selected backup data (e.g., persistent consistency point images) and automatically increasing the size of the storage volume.

Journal ArticleDOI
TL;DR: This paper proposes a hierarchical approach to 3D scattered data interpolation and approximation with compactly supported radial basis functions that integrates the best aspects of scattered data fitting with locally and globally supported basis functions.
Abstract: In this paper, we propose a hierarchical approach to 3D scattered data interpolation and approximation with compactly supported radial basis functions. Our numerical experiments suggest that the approach integrates the best aspects of scattered data fitting with locally and globally supported basis functions. Employing locally supported functions leads to an efficient computational procedure, while a coarse-to-fine hierarchy makes our method insensitive to the density of scattered data and allows us to restore large parts of missing data. Given a point cloud distributed over a surface, we first use spatial downsampling to construct a coarse-to-fine hierarchy of point sets. Then we fit the sets starting from the coarsest level: each point set of the hierarchy is interpolated (approximated) as an offset of the interpolating function computed at the previous level. The resulting fitting procedure is fast, memory efficient, and easy to implement.
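
A hedged sketch of coarse-to-fine residual fitting of a scalar field using Wendland's compactly supported C^2 kernel; the particular kernel, supports, and regularization are assumptions, but the pattern (each level fits only what the previous levels left over) is the one described above.

```python
import numpy as np

def wendland(r, support):
    """Wendland's compactly supported C^2 RBF, zero for r >= support."""
    t = np.clip(1.0 - r / support, 0.0, None)
    return t ** 4 * (4.0 * r / support + 1.0)

def fit_hierarchy(levels, values, supports):
    """Coarse-to-fine fitting: `levels` is a list of point arrays
    (coarse -> fine), `values` the target scalars at each level's points,
    `supports` the (shrinking) kernel supports. Returns an evaluator."""
    fitted = []                                # (centers, weights, support)
    def evaluate(X):
        out = np.zeros(len(X))
        for c, w, s in fitted:
            out += wendland(np.linalg.norm(X[:, None] - c[None, :], axis=-1), s) @ w
        return out
    for pts, vals, s in zip(levels, values, supports):
        resid = vals - evaluate(pts)           # offset from coarser levels
        K = wendland(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1), s)
        w = np.linalg.solve(K + 1e-9 * np.eye(len(pts)), resid)
        fitted.append((pts, w, s))
    return evaluate
```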

Proceedings Article
30 Jul 2005
TL;DR: This paper investigates methods to detect and repair concavities in ROC curves by manipulating model predictions: if a point or a set of points lies below the line spanned by two other points in ROC space, a hybrid model can be built that combines the two better models with an inversion of the poorer model.
Abstract: In this paper we investigate methods to detect and repair concavities in ROC curves by manipulating model predictions. The basic idea is that, if a point or a set of points lies below the line spanned by two other points in ROC space, we can use this information to repair the concavity. This effectively builds a hybrid model combining the two better models with an inversion of the poorer models; in the case of ranking classifiers, it means that certain intervals of the scores are identified as unreliable and candidates for inversion. We report very encouraging results on 23 UCI data sets, particularly for naive Bayes where the use of two validation folds yielded significant improvements on more than half of them, with only one loss.
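
A hedged sketch of the detection step: sort the ROC points, build the upper convex hull, and every point falling below it marks a concavity whose score interval is a candidate for inversion (the model-repair step itself is not shown):

```python
import numpy as np

def concave_points(fpr, tpr):
    """Return the ROC points lying strictly below the upper convex hull of
    the curve; their score intervals are the candidates for inversion."""
    pts = sorted(zip(fpr, tpr))
    hull = []                                  # upper hull, left to right
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop hull[-1] if it lies on or below segment hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return [p for p in pts if p not in hull]   # points creating concavities
```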

Proceedings ArticleDOI
13 Jun 2005
TL;DR: The problem of automatic data-driven scale selection to improve point cloud classification is investigated and the approach is validated with results using data from different sensors in various environments classified into different terrain types.
Abstract: Three-dimensional ladar data are commonly used to perform scene understanding for outdoor mobile robots, specifically in natural terrain. One effective method is to classify points using features based on local point cloud distribution into surfaces, linear structures or clutter volumes. But the local features are computed using 3D points within a support-volume. Local and global point density variations and the presence of multiple manifolds make the problem of selecting the size of this support volume, or scale, challenging. In this paper, we adopt an approach inspired by recent developments in computational geometry (Mitra et al., 2005) and investigate the problem of automatic data-driven scale selection to improve point cloud classification. The approach is validated with results using data from different sensors in various environments classified into different terrain types (vegetation, solid surface and linear structure).
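
For concreteness, the per-scale features in question are covariance eigenvalue saliencies; a hedged sketch of their computation at one support radius is below (the data-driven selection among radii, following Mitra et al., is the paper's subject and is not reproduced):

```python
import numpy as np

def saliency_features(cloud, center, radius):
    """Eigenvalue saliencies of the local covariance within a spherical
    support volume: large 'linear' for wires/branches, large 'planar' for
    solid surfaces, large 'scatter' for vegetation-like clutter. The open
    question the paper addresses is which `radius` (scale) to use."""
    nbrs = cloud[np.linalg.norm(cloud - center, axis=1) < radius]
    if len(nbrs) < 4:
        return None                      # too few points at this scale
    lam3, lam2, lam1 = np.linalg.eigvalsh(np.cov(nbrs, rowvar=False))
    return {"linear": lam1 - lam2, "planar": lam2 - lam3, "scatter": lam3}
```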

Journal ArticleDOI
TL;DR: It is proved that surface position and shape up to third order can be derived as a function of local position, orientation and local scale measurements in the image when two orientations are available at the same point.
Abstract: We study the problem of recovering the 3D shape of an unknown smooth specular surface from a single image. The surface reflects a calibrated pattern onto the image plane of a calibrated camera. The pattern is such that points are available in the image where position, orientations, and local scale may be measured (e.g. checkerboard). We first explore the differential relationship between the local geometry of the surface around the point of reflection and the local geometry in the image. We then study the inverse problem and give necessary and sufficient conditions for recovering surface position and shape. We prove that surface position and shape up to third order can be derived as a function of local position, orientation and local scale measurements in the image when two orientations are available at the same point (e.g. a corner). Information equivalent to scale and orientation measurements can be also extracted from the reflection of a planar scene patch of arbitrary geometry, provided that the reflections of (at least) 3 distinctive points may be identified. We validate our theoretical results with both numerical simulations and experiments with real surfaces.

Journal ArticleDOI
TL;DR: It is shown how geometrical constraints can be implemented in an approach based on nonredundant curvilinear coordinates, avoiding the inclusion of the constraints in the set of redundant coordinates used to define the internal coordinates.
Abstract: A modification of the constrained geometry optimization method by Anglada and Bofill (Anglada, J. M.; Bofill, J. M. J. Comput. Chem. 1997, 18, 992-1003) is designed and implemented. The changes include the choice of projection, quasi-line-search, and the use of a Rational Function Optimization approach rather than a reduced-restricted-quasi-Newton-Raphson method in the optimization step. Furthermore, we show how geometrical constraints can be implemented in an approach based on nonredundant curvilinear coordinates, avoiding the inclusion of the constraints in the set of redundant coordinates used to define the internal coordinates. The behavior of the new implementation is demonstrated in geometry optimizations featuring single or multiple geometrical constraints (bond lengths, angles, etc.), optimizations on hyperspherical cross sections (as in the computation of steepest descent paths), and location of energy minima on the intersection subspace of two potential energy surfaces (i.e. minimum energy crossing points). In addition, a novel scheme to determine the crossing point geometrically nearest to a given molecular structure is proposed.
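
A generic sketch of the Rational Function Optimization step mentioned above, without the paper's constraint projection or quasi-line-search: the lowest eigenpair of the gradient-augmented Hessian yields a level-shifted Newton step.

```python
import numpy as np

def rfo_step(grad, hess):
    """Rational Function Optimization step for minimization: build the
    (n+1)x(n+1) augmented Hessian, take its lowest eigenvector, and scale
    by the last component; this behaves like a trust-region-shifted
    Newton step when the Hessian has negative eigenvalues."""
    n = len(grad)
    aug = np.zeros((n + 1, n + 1))
    aug[:n, :n] = hess
    aug[:n, n] = grad
    aug[n, :n] = grad
    evals, evecs = np.linalg.eigh(aug)
    v = evecs[:, 0]                 # lowest eigenpair
    return v[:n] / v[n]             # displacement in the coordinates
```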

Journal ArticleDOI
TL;DR: In this paper, a method for determining the initial contact point and nanoindentation load–indentation depth characteristics of soft materials is presented; the method is applied to the prediction of the load–indentation depth characteristics of polydimethylsiloxane.
Abstract: In this paper, we present a method for determining the initial contact point and nanoindentation load–indentation depth characteristics for soft materials. The method is applied to the prediction of the load–indentation depth characteristics of polydimethylsiloxane. It involves the combined use of Johnson–Kendall–Roberts and Maugis–Dugdale adhesion theories and nonlinear least squares fitting in the determination of the initial contact point, the transition parameter, and the contact radius at zero contact load. The elastic modulus and the work of adhesion are also extracted from the load–indentation depth curves.
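
A hedged sketch of the fitting strategy using the JKR branch only (the paper blends JKR with Maugis-Dugdale through a transition parameter, which is omitted here). The parametric JKR load and depth relations in terms of contact radius a, reduced modulus K = (4/3)E*, and work of adhesion w are standard; the starting values, bounds, and grid below are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def jkr_curves(a, R, K, w):
    """Parametric JKR relations with contact radius a as the parameter:
    load P(a) and indentation depth delta(a) for tip radius R,
    reduced modulus K = (4/3)E*, and work of adhesion w."""
    P = K * a**3 / R - np.sqrt(6 * np.pi * w * K * a**3)
    delta = a**2 / R - np.sqrt(8 * np.pi * w * a / (3 * K))
    return P, delta

def fit_jkr(depth, load, R):
    """Fit K, w, and the initial-contact offset d0 by nonlinear least
    squares on measured (depth, load) arrays; JKR branch only."""
    a = np.linspace(1e-9, 1e-5, 4000)            # contact-radius grid (m)
    def resid(x):
        K, w, d0 = x
        P, d = jkr_curves(a, R, K, w)
        i = np.argmin(d)                         # keep the advancing branch
        return np.interp(depth - d0, d[i:], P[i:]) - load
    return least_squares(resid, x0=[1e6, 0.05, 0.0],
                         bounds=([1e3, 1e-4, -1e-6], [1e10, 1.0, 1e-6])).x
```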

Patent
08 Mar 2005
TL;DR: In this article, the user specifies at least one seed point from the set that lies on a surface of the structure of interest, and using the context and point data, the system loads points in a region near the seed point(s), and determines the dimensions and orientation of an initial surface component in the context that corresponds to those points.
Abstract: A computer model of a physical structure (or object) can be generated using context-based hypothesis testing. For a set of point data, a user selects a context specifying a geometric category corresponding to the structure shape. The user specifies at least one seed point from the set that lies on a surface of the structure of interest. Using the context and point data, the system loads points in a region near the seed point(s), and determines the dimensions and orientation of an initial surface component in the context that corresponds to those points. If the selected component is supported by the points, that component can be added to a computer model of the surface. The system can repeatedly find points near a possible extension of the surface model, using the context and current surface component(s) to generate hypotheses for extending the surface model to these points. Well-supported components can be added to the surface model until the surface of the structure of interest has been modeled as far as is well-supported by the point data.
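
A hedged sketch of the hypothesize-and-extend loop for the simplest context, a plane: fit the component to points near the seed, then repeatedly accept ring-shaped extensions only while enough of the new points support the hypothesis. All thresholds are assumptions, and real contexts (cylinders, etc.) would swap in other fits.

```python
import numpy as np

def grow_plane(points, seed_idx, radius=0.05, step=0.02,
               support_tol=0.01, min_support=0.8):
    """Context-based hypothesis testing, 'planar' context: returns a mask
    of points accepted into the component plus the plane (centroid, normal)."""
    region = np.linalg.norm(points - points[seed_idx], axis=1) < radius
    while True:
        pts = points[region]
        centroid = pts.mean(axis=0)
        normal = np.linalg.svd(pts - centroid)[2][-1]   # smallest principal dir
        near = (np.linalg.norm(points - centroid, axis=1) < radius + step) & ~region
        if not near.any():
            return region, centroid, normal             # nothing left to test
        ok = np.abs((points[near] - centroid) @ normal) < support_tol
        if ok.mean() < min_support:
            return region, centroid, normal             # extension unsupported
        region[np.where(near)[0][ok]] = True            # accept supported points
        radius += step
```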

Journal ArticleDOI
TL;DR: In this paper, a point geometry parameterization based on the drill grinding parameters is used to ensure manufacturability of the optimized geometry, and a significant reduction is shown in the drilling forces for the optimized drill point profiles.
Abstract: This paper investigates the optimization of twist drill point geometries in order to minimize thrust and torque in drilling. A point geometry parameterization based on the drill grinding parameters is used to ensure manufacturability of the optimized geometry. Three commonly used drill point geometries, namely, conical, Racon® and helical, are optimized for drilling forces while maintaining the inherent characteristics of each of the profiles. A significant reduction is shown in the drilling forces for the optimized drills. Drills with the optimized conical point profile are produced and tests run to validate the reduction in thrust and torque.

Journal ArticleDOI
TL;DR: This work summarizes the theoretical foundations needed to deal with the pose problem; it contains mainly basics of Euclidean, projective, and conformal geometry, the last of which is not well known in computer science.
Abstract: 2D-3D pose estimation means to estimate the relative position and orientation of a 3D object with respect to a reference camera system. This work has its main focus on the theoretical foundations of the 2D-3D pose estimation problem: We discuss the involved mathematical spaces and their interaction within higher order entities. To cope with the pose problem (how to compare 2D projective image features with 3D Euclidean object features), the principle we propose is to reconstruct image features (e.g. points or lines) to one dimension higher entities (e.g. 3D projection rays or 3D reconstructed planes) and express constraints in the 3D space. It turns out that the stratification hierarchy [11] introduced by Faugeras is involved in the scenario. But since the stratification hierarchy is based on pure point concepts, a new algebraic embedding is required when dealing with higher order entities. The conformal geometric algebra (CGA) [24] is well suited to solve this problem, since it subsumes the involved mathematical spaces. Operators are defined to switch entities between the algebras of the conformal space and its Euclidean and projective subspaces. This leads to another interpretation of the stratification hierarchy, which is not restricted to be based solely on point concepts. This work summarizes the theoretical foundations needed to deal with the pose problem. Therefore it contains mainly basics of Euclidean, projective and conformal geometry. Since especially conformal geometry is not well known in computer science, we recapitulate the mathematical concepts in some detail. We believe that this geometric model is useful for many other computer vision tasks as well and has been ignored so far. Applications of these foundations are presented in Part II [36].

Journal ArticleDOI
TL;DR: The problem of how to retrieve Euclidean entities of a 3D scene from a single uncalibrated image is studied, and two methods to compute the camera projection matrix through the homography of a reference space plane and its vertical vanishing point are presented.

Patent
08 Jul 2005
TL;DR: In this paper, a navigation apparatus and method that store route information, the stored route information including, for each of a plurality of routes starting at a first point and ending at a second point, a required travel time and a traffic condition is presented.
Abstract: A navigation apparatus and method that store route information, the stored route information including, for each of a plurality of routes starting at a first point and ending at a second point, a required travel time and a traffic condition. The apparatus and method determine whether a guidance route includes the first point and the second point of a stored route. If so, the apparatus and method read out the stored route information for the stored route that shares the first point and the second point with the guidance route, selecting the read-out route information whose traffic condition is the same as or similar to the current traffic condition. The apparatus and method then output the time required to travel the stored route from the read route information.

01 Jan 2005
TL;DR: In this paper, the authors describe the interim results of a study to characterize discontinuous rock masses using 3D laser scanning data, which yields a so-called "point cloud" set of data, where every single point represents a point in 3D space of the scanned rock surface.
Abstract: This paper describes the interim results of a study to characterize discontinuous rock masses using 3D laser scanning data. One of the main advantages of this method is that an unbiased, rapid and accurate discontinuity analysis can now be done. With 3D laser scanning it is now also possible to measure rock faces whose access is restricted or rock slopes along highways or railway lines where working conditions are hazardous. It is also shown that the proposed method is cheaper than traditional manual survey and analysis methods. Laser scanning is a relatively new surveying technique, which yields a so-called ‘point cloud’ set of data, where every single point represents a point in 3D space of the scanned rock surface. Since the density of the point cloud can be high (on the order of 5 mm to 1 cm), it allows for an accurate reconstruction of the original rock surface in the form of a 3D interpolated and meshed surface, using different interpolation techniques. Through geometric analysis of this 3D mesh and plotting of the facet orientations in a polar plot, it is possible to observe clusters, which represent different rock mass discontinuity sets. With fuzzy k-means clustering algorithms, individual discontinuity sets can be outlined automatically and the mean orientations of these identified sets can be computed. Assuming a Fisher distribution, it is subsequently demonstrated that the facet outliers can be removed. Finally, it is shown that discontinuity set spacings can be calculated as well.
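
A hedged sketch of the set-identification step: fuzzy k-means on the unit facet normals, with centers renormalized to the sphere so they remain valid mean orientations (orientation antipode symmetry and the Fisher-based outlier removal are ignored here for brevity):

```python
import numpy as np

def fuzzy_cmeans(normals, n_sets=3, m=2.0, iters=100, seed=0):
    """Fuzzy k-means on unit facet normals (N, 3): returns the mean
    orientation of each discontinuity set and the membership matrix u,
    where u[i, k] is the degree to which facet i belongs to set k."""
    rng = np.random.default_rng(seed)
    centers = normals[rng.choice(len(normals), n_sets, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(normals[:, None] - centers[None, :], axis=-1) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
        centers = (u.T ** m) @ normals
        centers /= np.linalg.norm(centers, axis=1, keepdims=True)
    return centers, u
```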

Proceedings ArticleDOI
29 Jun 2005
TL;DR: This work presents a technique for capturing high-resolution 4D reflectance fields using the reciprocity property of light transport, and demonstrates how this technique reproduces sharp specular reflections and self-shadowing more accurately than previous approaches.
Abstract: We present a technique for capturing high-resolution 4D reflectance fields using the reciprocity property of light transport. In our technique we place the object inside a diffuse spherical shell and scan a laser across its surface. For each incident ray, the object scatters a pattern of light onto the inner surface of the sphere, and we photograph the resulting radiance from the sphere's interior using a camera with a fisheye lens. Because of reciprocity, the image of the inside of the sphere corresponds to the reflectance function of the surface point illuminated by the laser, that is, the color that point would appear to a camera along the laser ray when the object is lit from each direction on the surface of the sphere. The measured reflectance functions allow the object to be photorealistically rendered from the laser's viewpoint under arbitrary directional illumination conditions. Since each captured reflectance function is a high-resolution image, our data reproduces sharp specular reflections and self-shadowing more accurately than previous approaches. We demonstrate our technique by scanning objects with a wide range of reflectance properties and show accurate renderings of the objects under novel illumination conditions.
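
Rendering from the captured data reduces to a weighted sum: each camera pixel's reflectance function is integrated against the desired directional illumination. A hedged sketch with assumed array shapes:

```python
import numpy as np

def relight(reflectance, env_map, solid_angles):
    """Relight the laser-viewpoint image from captured reflectance
    functions. Assumed shapes: reflectance (n_pixels, n_dirs, 3) holds one
    spherical image per pixel; env_map (n_dirs, 3) is the desired
    directional illumination; solid_angles (n_dirs,) are quadrature
    weights for the sphere sampling. Returns (n_pixels, 3) colors."""
    weights = env_map * solid_angles[:, None]             # per-direction light
    return np.einsum("pdc,dc->pc", reflectance, weights)  # integrate per pixel
```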

Journal ArticleDOI
TL;DR: A fast centroid molecular dynamics methodology is proposed in which the effective centroid forces are predetermined through a force-matching algorithm applied to a standard path integral molecular dynamics simulation, which greatly reduces the computational cost of generating centroid trajectories, thus extending the applicability of CMD.
Abstract: A fast centroid molecular dynamics (CMD) methodology is proposed in which the effective centroid forces are predetermined through a force-matching algorithm applied to a standard path integral molecular dynamics simulation. The resulting method greatly reduces the computational cost of generating centroid trajectories, thus extending the applicability of CMD. The method is applied to the study of liquid para-hydrogen at two state points and liquid ortho-deuterium at one state point. The static and dynamical results are compared to those obtained from full adiabatic CMD simulations and found to be in excellent agreement for all three systems; the transport properties are also compared to experiment and found to have a similar level of agreement.
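
A hedged sketch of the force-matching step with an assumed pairwise Gaussian basis for the effective centroid force (the actual basis and constraints used in the paper may differ): the coefficients solve a linear least-squares problem matching the reference path-integral forces.

```python
import numpy as np

def force_match(configs, ref_forces, centers, width=0.2):
    """Fit an effective pair force f(r) = sum_k c_k exp(-(r - centers[k])^2
    / (2 width^2)) so that the summed pair forces reproduce the reference
    forces. configs, ref_forces: (n_frames, n_atoms, 3)."""
    A_blocks, b = [], []
    for X, F in zip(configs, ref_forces):
        for i in range(len(X)):
            block = np.zeros((3, len(centers)))
            for j in range(len(X)):
                if i == j:
                    continue
                rij = X[i] - X[j]
                r = np.linalg.norm(rij)
                g = np.exp(-((r - centers) ** 2) / (2 * width ** 2))
                block += np.outer(rij / r, g)     # force directed along the bond
            A_blocks.append(block)
            b.append(F[i])
    A = np.vstack(A_blocks)                       # (3 * n_eq, n_basis)
    coeffs, *_ = np.linalg.lstsq(A, np.concatenate(b), rcond=None)
    return coeffs
```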

Journal ArticleDOI
Gilbert Crombez
TL;DR: In this article, the authors distinguish classes of operators T with fixed points on a real Hilbert space by comparing the distances of a point x and its image Tx to the (set of) fixed points of T; this leads to a ranking of those classes, based on a nonnegative parameter.
Abstract: We distinguish classes of operators T with fixed points on a real Hilbert space by comparing the distances of a point x and its image Tx to the (set of) fixed points of T; this leads to a ranking of those classes, based on a nonnegative parameter. That same parameter also lets us conclude about the sign of and an upper bound for a characteristic inner product result that arises in iterative processes to obtain a common fixed point of a set of operators. We use that parameter as the starting point for a geometrically-inclined study of specific iterative algorithms intended to find a common fixed point of operators belonging to such class.
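
One standard way to make such a ranking precise, given here as an illustrative assumption rather than the paper's exact definition, indexes the classes by a nonnegative k:

```latex
% T with Fix(T) nonempty belongs to the class of parameter k >= 0 when
\[
  \|Tx - q\|^{2} \;\le\; \|x - q\|^{2} \;-\; k\,\|x - Tx\|^{2}
  \qquad \text{for all } x \text{ and all } q \in \operatorname{Fix} T.
\]
% Expanding \|Tx - q\|^{2} shows this is equivalent to
%   2\langle x - q,\ x - Tx\rangle \ge (1 + k)\,\|x - Tx\|^{2},
% which yields both the sign of and a bound for the characteristic inner
% product driving the iterative schemes; larger k means stronger
% attraction toward the fixed-point set.
```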

Proceedings ArticleDOI
25 Jun 2005
TL;DR: A version of the particle swarm algorithm that is both dynamic and Gaussian looks very promising, and probability distributions around a center can be used instead of the usual trajectory approach.
Abstract: The particle swarm algorithm is usually a dynamic process, where a point in the search space to be tested depends on the previous point and the direction of movement. The process can be decomposed, and probability distributions around a center can be used instead of the usual trajectory approach. A version that is both dynamic and Gaussian looks very promising.
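
A hedged sketch of such a Gaussian, trajectory-free variant (in the spirit of "bare bones" sampling; the exact distribution and dynamics studied in the paper may differ): each particle draws its next position from a normal centered between its personal best and the global best, with spread equal to their separation.

```python
import numpy as np

def gaussian_swarm(f, dim, n_particles=20, iters=200, seed=0):
    """Minimize f: instead of velocity updates, sample each particle's next
    position from N(mu, sigma) with mu midway between its personal best and
    the global best, and sigma equal to their componentwise separation."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (n_particles, dim))
    pbest = X.copy()
    pval = np.array([f(x) for x in X])
    for _ in range(iters):
        g = pbest[pval.argmin()]                 # global best position
        mu = (pbest + g) / 2.0
        sigma = np.abs(pbest - g) + 1e-12
        X = rng.normal(mu, sigma)                # probabilistic "move"
        val = np.array([f(x) for x in X])
        better = val < pval
        pbest[better], pval[better] = X[better], val[better]
    return pbest[pval.argmin()], pval.min()

# usage: gaussian_swarm(lambda x: (x ** 2).sum(), dim=5)
```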