
Showing papers on "Point (geometry) published in 2011"


Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper proposes a novel closed-form solution to the P3P problem, which computes the aligning transformation directly in a single stage, without the intermediate derivation of the points in the camera frame, at much lower computational cost.
Abstract: The Perspective-Three-Point (P3P) problem aims at determining the position and orientation of the camera in the world reference frame from three 2D-3D point correspondences. This problem is known to provide up to four solutions that can then be disambiguated using a fourth point. All existing solutions attempt to first solve for the position of the points in the camera reference frame, and then compute the position and orientation of the camera in the world frame, which aligns the two point sets. In contrast, in this paper we propose a novel closed-form solution to the P3P problem, which computes the aligning transformation directly in a single stage, without the intermediate derivation of the points in the camera frame. This is made possible by introducing intermediate camera and world reference frames, and expressing their relative position and orientation using only two parameters. The projection of a world point into the parametrized camera pose then leads to two conditions and finally a quartic equation for finding up to four solutions for the parameter pair. A subsequent backsubstitution directly leads to the corresponding camera poses with respect to the world reference frame. We show that the proposed algorithm offers accuracy and precision comparable to a popular, standard, state-of-the-art approach but at much lower computational cost (15 times faster). Furthermore, it provides improved numerical stability and is less affected by degenerate configurations of the selected world points. The superior computational efficiency is particularly suitable for any RANSAC-outlier-rejection step, which is always recommended before applying PnP or non-linear optimization of the final solution.
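The final algebraic step described above reduces P3P to finding the real roots of a quartic. The coefficients depend on the three correspondences and are derived in the paper; the root-finding step itself can be sketched as follows (illustrative code, not the authors' implementation):

```python
import numpy as np

def real_quartic_roots(coeffs, tol=1e-9):
    """Real roots of a quartic a4*x^4 + a3*x^3 + a2*x^2 + a1*x + a0 = 0.

    In the single-stage P3P formulation, each real root of such a quartic
    yields one candidate camera pose via backsubstitution; the coefficient
    derivation from the 2D-3D correspondences is omitted here.
    """
    roots = np.roots(coeffs)  # eigenvalues of the companion matrix
    return sorted(r.real for r in roots if abs(r.imag) < tol)

# Toy check with known factorization: (x - 1)(x + 2)(x^2 + 1)
print(real_quartic_roots([1, 1, -1, 1, -2]))  # two real roots: -2 and 1
```

As in the paper, up to four real roots (hence up to four poses) can occur; a fourth correspondence then disambiguates among them.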

563 citations


01 Jan 2011
TL;DR: It is argued that the same basic structural principles that constrain lexical primitives and the lexicon-syntax interface also operate on primitives of a Sentience Domain, and restrict the pragmatics-syntax interface.
Abstract: The pragmatic force of a sentence and the pragmatic roles of discourse participants have traditionally been considered to be peripheral to the syntactic component of Grammar. Recently, there have been a variety of proposals for syntactic projections that encode information relevant to the interface between syntax and pragmatics. At the same time, linguists have been exploring the various notions of pragmatic prominence or point of view that are relevant to that interface. Studies of this sort naturally raise questions about the extent to which pragmatic information is syntactically represented. After all, the idea that syntax encodes extensive pragmatic information was rejected as being too unconstrained in the 1970s. On a separate track, linguists have observed that sentience (also variously described as animacy, subjectivity or experiencer-hood) plays an interesting role in the grammar. However, these phenomena have been treated as involving pragmatics or Discourse Representation; syntactic representation of sentience has been largely limited to treatments such as associating lexical features for animacy or logophoricity with individual lexical items. Our proposal will unify both tracks, representation of sentience and representation of pragmatic properties, under one syntactic approach. We will argue that basic syntactic principles constrain projections of pragmatic force as well as the inventory of grammatically relevant pragmatic roles. We take our inspiration from authors who have explored constraints on the mapping from Lexical Conceptual Structure (LCS) to syntactic structure. Although there are interesting differences among the proposals made by these authors, they seem to be converging on two points: syntactic principles impose constraints on possible lexical items and their projections, and semantic roles are not primitive, but are determined within these basic asymmetric projections.
We will argue that the same basic structural principles that constrain lexical primitives and the lexicon-syntax interface also operate on primitives of a Sentience Domain, and restrict the pragmatics-syntax interface. The above authors have offered theories of what can count as a "grammatically relevant" thematic property. Our goal is to use their insights to restrict what will count as a "grammatically relevant" pragmatic property. We will not be proposing a new theory of the specific structural restrictions on the lexicon-syntax interface, and we do not offer much insight into how one might choose among the existing theories. What we will do is use Hale and Keyser's theory as a point of departure, and show how the constraints they propose mediate the interaction between …

349 citations


Journal ArticleDOI
TL;DR: In this paper, the influence of the scan geometry on the individual point precision or local measurement noise is considered, and the dependence of the measurement noise on range and incidence angle can be successfully modeled if planar surfaces are observed.
Abstract: A terrestrial laser scanner measures the distance to an object surface with a precision in the order of millimeters. The quality of the individual points in a point cloud, although directly affecting standard processing steps like point cloud registration and segmentation, is still not well understood. The quality of a scan point is influenced by four major factors: instrument mechanism, atmospheric conditions, object surface properties and scan geometry. In this paper, the influence of the scan geometry on the individual point precision or local measurement noise is considered. The local scan geometry depends on the distance and the orientation of the scanned surface, relative to the position of the scanner. The local scan geometry is parameterized by two main parameters: the range, i.e. the distance from the object to the scanner, and the incidence angle, i.e. the angle between the incoming laser beam and the local surface normal. In this paper, it is shown that by studying the influence of the local scan geometry on the signal to noise ratio, the dependence of the measurement noise on range and incidence angle can be successfully modeled if planar surfaces are observed. The implications of this model are demonstrated further by comparing two point clouds of a small room, obtained from two different scanner positions: a center position and a corner position. The influence of incidence angle on the noise level is quantified on scans of this room: moving the scanner by 2 m reduces the noise level by 20%. The improvement of the standard deviation is significant, going from 3.23 to 2.55 mm. It is possible to optimize measurement setups in such a way that the measurement noise due to bad scanning geometry is minimized, thereby contributing to a more efficient acquisition of point clouds of better quality.
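The qualitative dependence described above (noise grows with range and with incidence angle) can be illustrated with a toy model. The functional form and all coefficients below are hypothetical stand-ins, not the paper's fitted model:

```python
import math

def expected_noise(sigma0, range_m, incidence_rad, c_range=0.0005):
    """Illustrative scan-noise model: base noise sigma0 grows linearly with
    range and with 1/cos(incidence angle), reflecting the weaker return
    signal at long range and grazing incidence. Coefficients are made up;
    the paper derives its model from the signal-to-noise ratio instead."""
    return (sigma0 + c_range * range_m) / math.cos(incidence_rad)

# A surface hit at grazing incidence is noisier than one hit head-on:
head_on = expected_noise(0.002, 5.0, 0.0)
grazing = expected_noise(0.002, 5.0, math.radians(70))
print(head_on < grazing)  # True
```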

317 citations


Journal ArticleDOI
TL;DR: In this article, the Riemannian/Alexandrov geometry of Gaussian measures, from the viewpoint of the L2-Wasserstein geometry, is studied.
Abstract: This paper concerns the Riemannian/Alexandrov geometry of Gaussian measures, from the viewpoint of the L2-Wasserstein geometry. The space of Gaussian measures is of finite dimension, which allows us to write down the explicit Riemannian metric which in turn induces the L2-Wasserstein distance. Moreover, its completion as a metric space provides a complete picture of the singular behavior of the L2-Wasserstein geometry. In particular, the singular set is stratified according to the dimension of the support of the Gaussian measures, providing an explicit nontrivial example of an Alexandrov space with extremal sets.
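The L2-Wasserstein distance between Gaussians that underlies this geometry has a well-known closed form, which can be sketched numerically (a standard formula used for illustration, not code from the paper):

```python
import numpy as np

def spd_sqrt(A):
    """Principal square root of a symmetric positive semi-definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

def w2_gaussian(m1, S1, m2, S2):
    """L2-Wasserstein distance between N(m1, S1) and N(m2, S2), via the
    closed form  W2^2 = |m1 - m2|^2 + tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})."""
    r1 = spd_sqrt(S1)
    cross = spd_sqrt(r1 @ S2 @ r1)
    return float(np.sqrt(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross)))

# Equal covariances: the distance reduces to the Euclidean distance of the means.
print(w2_gaussian(np.zeros(2), np.eye(2), np.array([3.0, 4.0]), np.eye(2)))  # ~5.0
```

The singular behavior discussed in the abstract appears when the covariances degenerate (supports of lower dimension), where this finite-dimensional metric space must be completed.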

195 citations


Journal ArticleDOI
TL;DR: The classical computer vision problems of rigid and nonrigid structure from motion (SFM) with occlusion are addressed and a novel 3D shape trajectory approach is proposed that solves for the deformable structure as the smooth time trajectory of a single point in a linear shape space.
Abstract: We address the classical computer vision problems of rigid and nonrigid structure from motion (SFM) with occlusion. We assume that the columns of the input observation matrix W describe smooth 2D point trajectories over time. We then derive a family of efficient methods that estimate the column space of W using compact parameterizations in the Discrete Cosine Transform (DCT) domain. Our methods tolerate high percentages of missing data and incorporate new models for the smooth time trajectories of 2D-points, affine and weak-perspective cameras, and 3D deformable shape. We solve a rigid SFM problem by estimating the smooth time trajectory of a single camera moving around the structure of interest. By considering a weak-perspective camera model from the outset, we directly compute Euclidean 3D shape reconstructions without requiring postprocessing steps such as Euclidean upgrade and bundle adjustment. Our results on real SFM data sets with high percentages of missing data compared positively to those in the literature. In nonrigid SFM, we propose a novel 3D shape trajectory approach that solves for the deformable structure as the smooth time trajectory of a single point in a linear shape space. A key result shows that, compared to state-of-the-art algorithms, our nonrigid SFM method can better model complex articulated deformation with higher frequency DCT components while still maintaining the low-rank factorization constraint. Finally, we also offer an approach for nonrigid SFM when W is presented with missing data.
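The core compactness argument above — a smooth trajectory over T frames needs only K ≪ T DCT coefficients — can be sketched directly (the basis construction is standard DCT-II; the trajectory is a synthetic example, not data from the paper):

```python
import numpy as np

def dct_basis(T, K):
    """First K DCT-II basis vectors of length T (as columns): the compact
    parameterization used for smooth point trajectories over T frames."""
    t = np.arange(T)[:, None]
    k = np.arange(K)[None, :]
    B = np.cos(np.pi * (t + 0.5) * k / T)
    B[:, 0] /= np.sqrt(2.0)
    return B * np.sqrt(2.0 / T)

# A smooth trajectory that is exactly low-rank in the DCT domain is
# recovered from just K = 5 coefficients instead of T = 100 samples:
T = 100
t = np.arange(T)
traj = 3.0 + 2.0 * np.cos(np.pi * 2 * (t + 0.5) / T)
B = dct_basis(T, 5)
coeffs, *_ = np.linalg.lstsq(B, traj, rcond=None)
print(np.max(np.abs(B @ coeffs - traj)) < 1e-8)  # True: 5 numbers encode 100
```

In the paper this idea is applied per column of the observation matrix W, which is what makes the factorization tolerant of missing entries.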

181 citations


Journal ArticleDOI
TL;DR: A numerical scheme for computing quantities involving gradients of shape functions is introduced for the material point method, so that the quantities are continuous as material points move across cell boundaries; the scheme is proved to conserve mass and momentum exactly.

144 citations


Journal ArticleDOI
TL;DR: A new adaptive simplification method to reduce the number of scanned dense points, which employs the k-means clustering algorithm to gather similar points together in the spatial domain and uses the maximum normal vector deviation as a measure of cluster scatter to partition the gathered point sets into a series of sub-clusters in the feature field.
Abstract: 3D scanning devices usually produce huge amounts of dense points, which require excessively large storage space and long post-processing times. This paper presents a new adaptive simplification method to reduce the number of the scanned dense points. An automatic recursive subdivision scheme is designed to pick out representative points and remove redundant points. It employs the k-means clustering algorithm to gather similar points together in the spatial domain and uses the maximum normal vector deviation as a measure of cluster scatter to partition the gathered point sets into a series of sub-clusters in the feature field. To maintain the integrity of the original boundary, a special boundary detection algorithm is developed, which is run before the recursive subdivision procedure. To prevent the final distribution of the simplified points from becoming locally greedy and unbalanced, a refinement algorithm is put forward, which is run after the recursive subdivision procedure. The proposed method may generate uniformly distributed sparse sampling points in the flat areas and necessary higher density in the high curvature regions. The effectiveness and performance of the novel simplification method are validated and illustrated through experimental results and comparison with other point sampling methods.
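The spatial-domain clustering step above can be sketched with a tiny k-means pass that keeps one representative per cluster. This is a toy illustration: the paper's normal-deviation subdivision, boundary detection and refinement are omitted, the data are synthetic, and seeds are evenly spaced for determinism:

```python
import numpy as np

def simplify_points(points, k, iters=15):
    """Toy cluster-based simplification: k-means gathers similar points,
    then each cluster is replaced by one representative (the member nearest
    its centroid). Only the spatial-clustering idea is shown here."""
    centers = points[:: max(1, len(points) // k)][:k].astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    reps = [points[labels == j][
                np.linalg.norm(points[labels == j] - centers[j], axis=1).argmin()]
            for j in range(k) if np.any(labels == j)]
    return np.array(reps)

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(c, 0.05, (50, 3))
                 for c in ([0, 0, 0], [1, 0, 0], [0, 1, 0])])
print(simplify_points(pts, 3).shape)  # (3, 3): 150 points -> 3 representatives
```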

136 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel active learning algorithm that takes into account the local structure of the data space, together with a transductive learning algorithm called Locally Linear Reconstruction (LLR) that reconstructs every other point from the selected points.
Abstract: We consider the active learning problem, which aims to select the most representative points. Out of many existing active learning techniques, optimum experimental design (OED) has received considerable attention recently. The typical OED criteria minimize the variance of the parameter estimates or predicted value. However, these methods see only global Euclidean structure, while the local manifold structure is ignored. For example, I-optimal design selects those data points such that other data points can be best approximated by linear combinations of all the selected points. In this paper, we propose a novel active learning algorithm which takes into account the local structure of the data space. That is, each data point should be approximated by the linear combination of only its neighbors. Given the local reconstruction coefficients for every data point and the coordinates of the selected points, a transductive learning algorithm called Locally Linear Reconstruction (LLR) is proposed to reconstruct every other point. The most representative points are thus defined as those whose coordinates can be used to best reconstruct the whole data set. The sequential and convex optimization schemes are also introduced to solve the optimization problem. The experimental results have demonstrated the effectiveness of our proposed method.
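The local reconstruction coefficients mentioned above can be computed per point in the standard LLE-style way: minimize the reconstruction error subject to the weights summing to one, via a regularized local Gram matrix. A small sketch of that local step (not the paper's full selection algorithm):

```python
import numpy as np

def local_reconstruction_weights(x, neighbors, reg=1e-3):
    """Weights w minimizing ||x - sum_i w_i * n_i||^2 subject to sum(w) = 1,
    solved through the regularized local Gram matrix. This is the per-point
    building block; selecting the most representative points on top of such
    coefficients is what the paper's LLR criterion does."""
    Z = neighbors - x                    # shift neighbors to the query point
    G = Z @ Z.T                          # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(G))   # regularize (G may be singular)
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()

x = np.array([0.5, 0.5])
nbrs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = local_reconstruction_weights(x, nbrs)
print(np.allclose(w @ nbrs, x, atol=1e-6))  # neighbors reconstruct x exactly
```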

122 citations


Journal ArticleDOI
TL;DR: A new method is presented to improve the performance of maximum power point tracking in solar panels using a combination of two loops, set point calculation and fine tuning loops.

115 citations


Journal ArticleDOI
TL;DR: In this paper, a parametric level set method for reconstruction of obstacles in general inverse problems is considered, where the level set function is parameterized in terms of adaptive compactly supported radial basis functions.
Abstract: In this paper, a parametric level set method for reconstruction of obstacles in general inverse problems is considered. General evolution equations for the reconstruction of unknown obstacles are derived in terms of the underlying level set parameters. We show that using the appropriate form of parameterizing the level set function results in a significantly lower dimensional problem, which bypasses many difficulties with traditional level set methods, such as regularization, reinitialization, and use of a signed distance function. Moreover, we show that from a computational point of view, the low order representation of the problem paves the way for easier use of Newton and quasi-Newton methods. Specifically for the purposes of this paper, we parameterize the level set function in terms of adaptive compactly supported radial basis functions, which, used in the proposed manner, provide flexibility in representing a larger class of shapes with fewer terms. They also provide a “narrow-banding” advantage, which can further reduce the number of active unknowns at each step of the evolution. The performance of the proposed approach is examined in three examples of inverse problems, i.e., electrical resistance tomography, X-ray computed tomography, and diffuse optical tomography.
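The parameterization idea can be sketched concretely: the level set function is a weighted sum of compactly supported RBFs, so the shape is described by a handful of centers and weights rather than a dense grid. The Wendland kernel and all values below are illustrative choices, not necessarily those used in the paper:

```python
import numpy as np

def wendland(r):
    """Compactly supported Wendland C2 RBF: (1 - r)^4 (4r + 1) for r < 1, else 0."""
    r = np.clip(r, 0.0, 1.0)
    return (1 - r) ** 4 * (4 * r + 1)

def level_set(x, centers, alphas, scale):
    """Parametric level set phi(x) = sum_j alpha_j * psi(|x - c_j| / scale).
    The obstacle is the region {phi > 0}; centers and weights are the
    low-dimensional unknowns evolved during reconstruction."""
    r = np.linalg.norm(x[None, :] - centers, axis=1) / scale
    return float(alphas @ wendland(r))

centers = np.array([[0.0, 0.0], [2.0, 0.0]])
alphas = np.array([1.0, -1.0])
print(level_set(np.array([0.0, 0.0]), centers, alphas, 1.0) > 0)  # inside bump
print(level_set(np.array([2.0, 0.0]), centers, alphas, 1.0) < 0)  # negative bump
```

Because each basis function vanishes outside its support, only the few RBFs overlapping a query region are "active" — the narrow-banding advantage mentioned above.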

107 citations


Proceedings ArticleDOI
05 Aug 2011
TL;DR: This work presents a flexible and simple optimization strategy based on the idea of increasing the mutual distances by successively moving each point to the "farthest point," i.e., the location that has the maximum distance from the rest of the point set.
Abstract: Efficient sampling often relies on irregular point sets that uniformly cover the sample space. We present a flexible and simple optimization strategy for such point sets. It is based on the idea of increasing the mutual distances by successively moving each point to the "farthest point," i.e., the location that has the maximum distance from the rest of the point set. We present two iterative algorithms based on this strategy. The first is our main algorithm which distributes points in the plane. Our experimental results show that the resulting distributions have almost optimal blue noise properties and are highly suitable for image plane sampling. The second is a variant of the main algorithm that partitions any point set into equally sized subsets, each with large mutual distances; the resulting partitionings yield improved results in more general integration problems such as those occurring in physically based rendering.
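The "move each point to the farthest point" iteration can be sketched in a few lines. The exact farthest point is a Voronoi/Delaunay vertex; here it is approximated by the best site on a candidate grid over the unit square, purely for illustration:

```python
import numpy as np

def farthest_point_iteration(points, grid_n=64, iters=10):
    """Sketch of the farthest-point strategy: each point in turn is moved to
    the location with maximum distance from the rest of the point set,
    approximated here by the best site on a regular candidate grid."""
    g = (np.arange(grid_n) + 0.5) / grid_n
    cand = np.array(np.meshgrid(g, g)).reshape(2, -1).T   # candidate sites
    pts = points.copy()
    for _ in range(iters):
        for i in range(len(pts)):
            rest = np.delete(pts, i, axis=0)
            d = np.linalg.norm(cand[:, None] - rest[None], axis=2).min(axis=1)
            pts[i] = cand[d.argmax()]       # move to the "farthest point"
    return pts

def min_mutual_dist(p):
    d = np.linalg.norm(p[:, None] - p[None], axis=2)
    return d[d > 0].min()

rng = np.random.default_rng(0)
p0 = rng.random((10, 2))
p1 = farthest_point_iteration(p0)
print(min_mutual_dist(p1) > min_mutual_dist(p0))  # mutual distances increased
```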

Book ChapterDOI
01 Jan 2011
TL;DR: A nontrivial integration of the ideas of the hybrid steepest descent method and the Moreau–Yosida regularization is proposed, yielding a useful approach to the challenging problem of nonsmooth convex optimization over Fix(T).
Abstract: The first aim of this paper is to present a useful toolbox of quasi-nonexpansive mappings for convex optimization from the viewpoint of using their fixed point sets as constraints. Many convex optimization problems have been solved through elegant translations into fixed point problems. The underlying principle is to operate a certain quasi-nonexpansive mapping T iteratively and generate a convergent sequence to its fixed point. However, such a mapping often has infinitely many fixed points, meaning that a selection from the fixed point set Fix(T) should be of great importance. Nevertheless, most fixed point methods can only return an “unspecified” point from the fixed point set, which requires many iterations. It therefore seems unrealistic to wish for an “optimal” one from the fixed point set. Fortunately, considering the collection of quasi-nonexpansive mappings as a toolbox, we can accomplish this challenging mission simply by the hybrid steepest descent method, provided that the cost function is smooth and its derivative is Lipschitz continuous. A question arises: how can we deal with “nonsmooth” cost functions? The second aim is to propose a nontrivial integration of the ideas of the hybrid steepest descent method and the Moreau–Yosida regularization, yielding a useful approach to the challenging problem of nonsmooth convex optimization over Fix(T). The key is the smoothing of the original nonsmooth cost function by its Moreau–Yosida regularization, whose derivative is always Lipschitz continuous. The field of application of the hybrid steepest descent method can thus be extended to the minimization, over Fix(T), of this ideal smooth approximation.
We present the mathematical ideas of the proposed approach together with its application to a combinatorial optimization problem: the minimal antenna-subset selection problem under a highly nonlinear capacity-constraint for efficient multiple input multiple output (MIMO) communication systems.
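The smoothing step above can be illustrated on the simplest nonsmooth cost, f(t) = |t|, whose Moreau–Yosida regularization is the Huber function (a standard closed form, shown here for illustration only, independent of the MIMO application):

```python
import numpy as np

def moreau_envelope_abs(x, gamma):
    """Moreau-Yosida regularization of f(t) = |t|:
    env(x) = x^2 / (2*gamma) if |x| <= gamma, else |x| - gamma/2 (Huber).
    Its derivative is (1/gamma)-Lipschitz, which is exactly the property
    that makes the hybrid steepest descent method applicable."""
    ax = np.abs(x)
    return np.where(ax <= gamma, x ** 2 / (2 * gamma), ax - gamma / 2)

x = np.linspace(-2, 2, 401)
env = moreau_envelope_abs(x, 0.5)
print(bool(np.all(env <= np.abs(x) + 1e-12)))   # True: envelope lower-bounds f
print(float(moreau_envelope_abs(0.0, 0.5)))     # 0.0: same minimizer and value
```

The envelope is smooth everywhere, touches |t| outside the smoothing band, and keeps the same minimizer, which is why minimizing the envelope over Fix(T) approximates the original nonsmooth problem.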

Proceedings ArticleDOI
03 Jun 2011
TL;DR: It is shown how the modified K-means algorithm decreases the complexity and the effort of numerical calculation while maintaining the ease of implementing the K-means algorithm.
Abstract: This paper presents a data clustering approach using a modified K-means algorithm based on improving the sensitivity of the initial centers (seed points) of clusters. This algorithm partitions the whole space into different segments and calculates the frequency of data points in each segment. The segment that shows the maximum frequency of data points has the maximum probability of containing the centroid of a cluster. The number of cluster centroids (k) is provided by the user, as in the traditional K-means algorithm, and the number of divisions is k×k (k vertically as well as k horizontally). If the highest frequency of data points is the same in different segments and the upper bound of a segment crosses the threshold k, merging of different segments becomes mandatory; the highest k segments are then taken for calculating the initial centroids (seed points) of clusters. We also define a threshold distance for each cluster centroid; comparing the distance between a data point and a cluster centroid against this threshold minimizes the computational effort of the distance calculation. It is shown how the modified K-means algorithm decreases the complexity and the effort of numerical calculation while maintaining the ease of implementing the K-means algorithm, and assigns data points to their appropriate cluster more effectively.
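The frequency-based seeding idea can be sketched as follows: split the bounding box into a k×k grid, count points per cell, and seed from the densest cells. The merging and threshold-distance rules described above are omitted, and the data are synthetic:

```python
import numpy as np

def frequency_based_seeds(points, k):
    """Sketch of density-based seeding: partition the bounding box into a
    k x k grid, count points per cell, and return the centroids of the k
    densest cells as initial cluster centers (seed points)."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    cells = np.minimum(((points - lo) / (hi - lo + 1e-12) * k).astype(int), k - 1)
    flat = cells[:, 0] * k + cells[:, 1]
    counts = np.bincount(flat, minlength=k * k)
    top = np.argsort(counts)[::-1][:k]         # k densest cells
    return np.array([points[flat == c].mean(axis=0) for c in top])

rng = np.random.default_rng(0)
blobs = np.vstack([rng.normal(c, 0.05, (40, 2))
                   for c in ([0.1, 0.1], [0.9, 0.1], [0.5, 0.9])])
seeds = frequency_based_seeds(blobs, 3)
print(seeds.shape)  # (3, 2): one seed per dense region
```

Compared to random seeding, such seeds already sit inside dense regions, which is what reduces the number of K-means iterations.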

Journal ArticleDOI
TL;DR: This study used a five-point rating scale to assess the appropriateness of two types of speech acts (requests and opinions) produced by 48 Japanese EFL students, and found similarities and differences in the raters' use of pragmatic norms and social rules.
Abstract: This study addresses variability among native speaker raters who evaluated pragmatic performance of learners of English as a foreign language. Using a five-point rating scale, four native English speakers of mixed cultural background (one African American, one Asian American, and two Australians) assessed the appropriateness of two types of speech acts (requests and opinions) produced by 48 Japanese EFL students. To explore norms and the reasoning behind the raters’ assessment practice, individual introspective verbal interviews were conducted. Eight students' speech act productions (64 speech acts in total) were selected randomly, and the raters were asked to rate each speech act and then explain their rating decision. Interview data revealed similarities and differences in their use of pragmatic norms and social rules in evaluating appropriateness.

Journal ArticleDOI
TL;DR: A scale space strategy for orienting and meshing exactly and completely a raw point set based on the intrinsic heat equation, also called mean curvature motion (MCM), and a mathematical proof of its consistency with MCM is given.
Abstract: This paper develops a scale space strategy for orienting and meshing exactly and completely a raw point set. The scale space is based on the intrinsic heat equation, also called mean curvature motion (MCM). A simple iterative scheme implementing MCM directly on the raw point set is described, and a mathematical proof of its consistency with MCM is given. Points evolved by this MCM implementation can be trivially backtracked to their initial raw position. Therefore, both the orientation and mesh of the data point set obtained at a smooth scale can be transported back on the original. The gain in visual accuracy is demonstrated on archaeological objects by comparison with several state of the art meshing methods.
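The iterate-and-backtrack structure described above can be sketched with a simplified smoothing step. Note the simplification: the paper's scheme projects each point onto a local regression plane (which provably implements MCM); the barycenter step below is only a Laplacian-like stand-in that shows the iteration and the backtracking bookkeeping:

```python
import numpy as np

def mcm_smooth(points, radius, steps=5):
    """Sketch of scale-space smoothing directly on a raw point set: each
    step moves every point to the barycenter of its neighbors within
    `radius`, and the per-step displacements are stored so that points at
    a smooth scale can be backtracked to their raw positions."""
    pts = points.copy()
    history = []
    for _ in range(steps):
        new = pts.copy()
        for i, p in enumerate(pts):
            nbr = pts[np.linalg.norm(pts - p, axis=1) < radius]
            new[i] = nbr.mean(axis=0)
        history.append(new - pts)      # motion record enables backtracking
        pts = new
    return pts, history

# Noisy circle: smoothing reduces the radial noise of the samples.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
noisy = np.c_[np.cos(t), np.sin(t)] + rng.normal(0, 0.03, (200, 2))
smooth, hist = mcm_smooth(noisy, radius=0.3)
print(np.linalg.norm(smooth, axis=1).std() < np.linalg.norm(noisy, axis=1).std())
```

Summing the stored displacements and subtracting them recovers the raw points exactly, which is the "trivial backtracking" property the paper exploits to transport orientation and mesh back to the original data.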

Book
15 May 2011
TL;DR: By improving the waiting environment, passengers will find waiting more pleasant and the waiting time will appear to be shorter, and by adding the right environmental stimuli at the right moment, both the station and the wait are more positively evaluated.
Abstract: In the railway sector there is a great deal of interest in objective time but hardly any in passengers’ subjective experience of time. The focus of this publication is thus not on (shortening) objective time but on how time itself is experienced and how this can be improved. Aware that a journey must not only be quick but also pleasant, Netherlands Railways (NS) consequently sets itself the following objective: “To transport our passengers safely, on time and in comfort via appealing stations.” Particularly the wait is found to be unpleasant, with passengers regarding stations and especially platforms as sombre, boring and grey places, devoid of atmosphere and colour. By improving the waiting environment, we can kill two birds with one stone: passengers will find waiting more pleasant and the waiting time will appear to be shorter. The practical question in this research thus reads: “Which measures are effective to make the waiting time at stations more pleasant and/or to shorten the perception of waiting time?” The conclusion is that by adding the right environmental stimuli at the right moment, both the station and the wait are more positively evaluated, resulting in the score for the general appraisal of the platform increasing by half to one full point.

Patent
24 Jun 2011
TL;DR: In this paper, a point group data processing device is provided with a non-planar region elimination unit (101) that eliminates point group data pertaining to non-planar regions entailing a large calculation burden.
Abstract: A point group data processing device is provided with: a non-planar region elimination unit (101) that eliminates point group data pertaining to non-planar regions entailing a large calculation burden from point group data that associate two-dimensional images that are to be measured with three-dimensional coordinate data for a plurality of points that constitute the two-dimensional images; a surface labeling part (102) that applies labels that identify surfaces, after non-planar region data have been eliminated from the point group data; and an outline calculation part (106) that calculates the outline of a subject using local planes based on local regions that are contiguous to the labeled surfaces.

Journal ArticleDOI
01 Aug 2011
TL;DR: In order to support spatial applications that involve large flow of queries and require fast response, an extremely efficient algorithm is proposed to find a high-quality near-optimal meeting point, which is orders of magnitude faster than the exact OMP algorithms.
Abstract: Given a set of points Q on a road network, an optimal meeting point (OMP) query returns the point on a road network G = (V, E) with the smallest sum of network distances to all the points in Q. This problem has many real world applications, such as minimizing the total travel cost for a group of people who want to find a location for gathering. While this problem has been well studied in the Euclidean space, the recently proposed state-of-the-art algorithm for solving this problem in the context of road networks is still not efficient. In this paper, we propose a new baseline algorithm for the OMP query, which reduces the search space from |Q| · |E| to |V| + |Q|. We also present two effective pruning techniques that further accelerate the baseline algorithm. Finally, in order to support spatial applications that involve large flow of queries and require fast response, an extremely efficient algorithm is proposed to find a high-quality near-optimal meeting point, which is orders of magnitude faster than the exact OMP algorithms. Extensive experiments are conducted to verify the efficiency of our algorithms.
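The objective of the OMP query can be sketched with a brute-force baseline restricted to vertices: run one shortest-path computation per query point and pick the vertex with the smallest summed distance. This only illustrates the objective; the paper also handles points interior to edges and adds the pruning that makes the query fast:

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest-path distances on a weighted graph."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def omp_vertex(adj, Q):
    """Vertex-restricted optimal meeting point: the vertex minimizing the
    sum of network distances to all query points (brute-force sketch)."""
    dists = [dijkstra(adj, q) for q in Q]
    verts = set().union(*[d.keys() for d in dists])
    return min(verts, key=lambda v: sum(d.get(v, float("inf")) for d in dists))

# Tiny path network a-b-c-d with unit edges:
adj = {u: [] for u in "abcd"}
for u, v in [("a", "b"), ("b", "c"), ("c", "d")]:
    adj[u].append((v, 1.0))
    adj[v].append((u, 1.0))
print(omp_vertex(adj, ["a", "b", "d"]))  # 'b': total distance 1 + 0 + 2 = 3
```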

Proceedings ArticleDOI
20 Jun 2011
TL;DR: The proposed approach, firmly grounded on the geometry of the multiple views, introduces a calibration procedure that is efficient, accurate, highly innovative but also practical and easy and can run online with little intervention from the user.
Abstract: A novel approach to 3D gaze estimation for wearable multi-camera devices is proposed and its effectiveness is demonstrated both theoretically and empirically. The proposed approach, firmly grounded on the geometry of the multiple views, introduces a calibration procedure that is efficient, accurate, highly innovative but also practical and easy. Thus, it can run online with little intervention from the user. The overall gaze estimation model is general, as no particular complex model of the human eye is assumed in this work. This is made possible by a novel approach, which can be sketched as follows: each eye is imaged by a camera; two conics are fitted to the imaged pupils and a calibration sequence, consisting in the subject gazing at a known 3D point while moving his/her head, provides information to 1) estimate the optical axis in the 3D world; 2) compute the geometry of the multi-camera system; 3) estimate the Point of Regard in the 3D world. The resultant model is being used effectively to study visual attention by means of gaze estimation experiments, involving people performing natural tasks in wide-field, unstructured scenarios.

Journal ArticleDOI
TL;DR: This work will present a simple deterministic scheme to generate nearly uniform point sets with antipodal symmetry, which is of special importance to many scientific and engineering applications.

Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper defines 2.5D building topology as a set of roof features, wall features, and point features, together with the associations between them; based on this definition, 2.5D dual contouring is extended into a 2.5D modeling method with topology control.
Abstract: 2.5D building reconstruction aims at creating building models composed of complex roofs and vertical walls. In this paper, we define 2.5D building topology as a set of roof features, wall features, and point features; together with the associations between them. Based on this definition, we extend 2.5D dual contouring into a 2.5D modeling method with topology control. Comparing with the previous method, we put less restrictions on the adaptive simplification process. We show results under intense geometry simplifications. Our results preserve significant topology structures while the number of triangles is comparable to that of manually created models or primitive-based models.

Posted Content
TL;DR: The planner is shown to improve performance on a variety of planning problems, by focusing sampling on more challenging regions of a planning problem, including collision boundary areas such as narrow passages.
Abstract: A simple sample-based planning method is presented which approximates connected regions of free space with volumes in configuration space instead of points. The algorithm produces very sparse trees compared to point-based planning approaches, yet it maintains probabilistic completeness guarantees. The planner is shown to improve performance on a variety of planning problems by focusing sampling on more challenging regions of a planning problem, including collision boundary areas such as narrow passages.

Journal ArticleDOI
TL;DR: In this paper, a method for generating five-axis toolpaths with smooth tool motion and high efficiency based on the accessibility map (A-map) of the cutter at a point on the part surface is presented.
Abstract: In five-axis high speed milling, one of the key requirements to ensure the quality of the machined surface is that the tool-path must be smooth, i.e., the cutter posture change from one cutter contact point to the next needs to be minimized. This paper presents a new method for generating five-axis tool-paths with smooth tool motion and high efficiency based on the accessibility map (A-map) of the cutter at a point on the part surface. The cutter’s A-map at a point refers to its posture range in terms of the two rotational angles, within which the cutter does not have any interference with the part and the surrounding objects. By using the A-map at a point, the posture change rates along the possible cutting directions (called the smoothness map or S-map) at the point are estimated. Based on the A-maps and S-maps of all the sampled points of the part surface, the initial tool-path with the smoothest posture change is generated first. Subsequently, the adjacent tool-paths are generated one at a time by considering both path smoothness and machining efficiency. Compared with traditional tool-path generation methods, e.g., iso-planar, the proposed method can generate tool-paths with smaller posture change rate and yet shorter overall path length. The developed techniques can be used to automate five-axis tool-path generation, in particular for high speed machining (finish cut).

Proceedings ArticleDOI
05 Dec 2011
TL;DR: This paper describes a fast plane extraction algorithm for 3D range data that takes advantage of the point neighborhood structure in data acquired from 3D sensors such as range cameras, laser range finders and the Microsoft Kinect, dividing the plane-segment extraction task into three steps.
Abstract: This paper describes a fast plane extraction algorithm for 3D range data. Taking advantage of the point neighborhood structure in data acquired from 3D sensors like range cameras, laser range finders and Microsoft Kinect, it divides the plane-segment extraction task into three steps. The first step is a 2D line segment extraction from raw sensor data, interpreted as 2D data, followed by a line segment based connected component search. The final step finds planes based on connected segment component sets. The first step inspects 2D subspaces only, leading to a line segment representation of the 3D scan. Connected components of segments represent candidate sets of coplanar segments. Line segment representation and connected components vastly reduce the search space for the plane-extraction step. A region growing algorithm is utilized to find coplanar segments and their optimal (least-squares) plane approximation. Region growing contains a fast plane update technique at its core, which combines sets of coplanar segments to form planar elements. Experiments are performed on real-world data from different sensors.
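The "fast plane update" at the core of the region growing can be illustrated with running sums: fitting z = a·x + b·y + c by least squares only needs a handful of accumulated moments, so merging two coplanar segment sets is a constant-time addition of sums. This is a sketch under assumed notation, not the paper's exact formulation.

```python
class PlaneAccumulator:
    """Running sums for the least-squares plane z = a*x + b*y + c.
    Merging two coplanar point sets is O(1): just add the sums."""
    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sz = 0.0
        self.sxx = self.sxy = self.syy = 0.0
        self.sxz = self.syz = 0.0

    def add(self, x, y, z):
        self.n += 1
        self.sx += x; self.sy += y; self.sz += z
        self.sxx += x * x; self.sxy += x * y; self.syy += y * y
        self.sxz += x * z; self.syz += y * z

    def merge(self, other):
        """Constant-time combination of two accumulators."""
        for k in vars(other):
            setattr(self, k, getattr(self, k) + getattr(other, k))

    def fit(self):
        """Solve the 3x3 normal equations with Cramer's rule; returns (a, b, c)."""
        A = [[self.sxx, self.sxy, self.sx],
             [self.sxy, self.syy, self.sy],
             [self.sx,  self.sy,  float(self.n)]]
        rhs = [self.sxz, self.syz, self.sz]
        def det3(m):
            return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                  - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                  + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
        d = det3(A)
        sol = []
        for i in range(3):
            m = [row[:] for row in A]
            for r in range(3):
                m[r][i] = rhs[r]
            sol.append(det3(m) / d)
        return tuple(sol)
```

Note the limitation of the z = a·x + b·y + c parametrization: it cannot represent vertical planes, for which an eigenvector-based fit of the full scatter matrix would be needed.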

Journal ArticleDOI
TL;DR: To alleviate the effects of corresponding-feature-point matching and the extraction error of a single feature point, a monocular measurement method using a single camera, based on image processing, is presented.
Abstract: To alleviate the effects of corresponding-feature-point matching and the extraction error of a single feature point, a monocular measurement method using a single camera is presented, based on image processing. First, the paper sets up the mapping relationship between an image point and a target point and establishes the pinhole imaging model. Second, it describes the mapping relationship between the object's real area and its image area using image analysis, and establishes a model of distance measurement along the optical direction. The principle of measuring the distance between the optical center and a feature point is then derived after image processing and feature-point extraction are carried out. Finally, verification experiments are presented, and the cause of the error growth with increasing distance is analyzed. Analysis of the experimental data shows that the error is related to the optical-axis deviation. With a maximum relative error of 1.68% for the revised data, the marked improvement demonstrates the feasibility and effectiveness of the proposed principle.
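The area-based distance model reduces to similar triangles in the pinhole model: image lengths scale as f/Z, hence areas scale as (f/Z)². A minimal sketch of that relationship follows; a fronto-parallel target on the optical axis and a focal length given in pixels are assumptions of the sketch, not claims about the paper's full model.

```python
import math

def distance_from_area(focal_px, real_area, image_area):
    """Pinhole model: image lengths scale as f/Z, so areas scale as
    (f/Z)^2.  Solving for depth gives Z = f * sqrt(A_real / A_image)."""
    return focal_px * math.sqrt(real_area / image_area)

# Hypothetical target: a 0.2 m x 0.2 m marker (area 0.04 m^2) imaged
# over 400 px^2 with an 800 px focal length.
z = distance_from_area(800.0, 0.04, 400.0)   # depth in metres
```

The abstract's observation that error grows with distance is visible here too: at large Z the same pixel-area quantization error translates into a larger depth error.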

Journal ArticleDOI
Shaoyi Du1, Jihua Zhu1, Nanning Zheng1, Yuehu Liu1, Ce Li1 
TL;DR: A novel robust iterative closest point (ICP) algorithm is proposed which can compute rigid transformation, correspondence, and overlapping percentage automatically at each iterative step.
Abstract: The problem of registering point sets with outliers including noises and missing data is discussed in this paper. To solve this problem, a novel objective function is proposed by introducing an overlapping percentage for partial registration. Moreover, a novel robust iterative closest point (ICP) algorithm is proposed which can compute rigid transformation, correspondence, and overlapping percentage automatically at each iterative step. This new algorithm uses as many point pairs as possible to yield a more reliable and accurate registration result between two m-D point sets with outliers. Experimental results demonstrate that our algorithm is more robust than the traditional ICP and the state-of-the-art algorithms.
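A much-simplified 2D sketch of one iteration in this spirit: match points by nearest neighbour, keep only the best-fitting fraction of pairs (here a fixed overlap parameter rather than the paper's automatic estimate), and solve the rigid transform in closed form. All names and the 2D restriction are assumptions of the sketch.

```python
import math

def icp_step(src, dst, overlap=0.8):
    """One trimmed-ICP-style iteration in 2D.  Returns (theta, (tx, ty))
    such that dst ~ R(theta) * src + t for the retained pairs."""
    # Nearest-neighbour correspondences (brute force).
    pairs = []
    for p in src:
        q = min(dst, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)
        d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        pairs.append((d2, p, q))
    # Keep only the best `overlap` fraction of pairs (trimming outliers).
    pairs.sort(key=lambda t: t[0])
    kept = [(p, q) for _, p, q in pairs[:max(3, int(overlap * len(pairs)))]]
    # Closed-form 2D rigid alignment (Kabsch-style) on the kept pairs.
    n = len(kept)
    cp = (sum(p[0] for p, _ in kept) / n, sum(p[1] for p, _ in kept) / n)
    cq = (sum(q[0] for _, q in kept) / n, sum(q[1] for _, q in kept) / n)
    s_dot = s_cross = 0.0
    for p, q in kept:
        px, py = p[0] - cp[0], p[1] - cp[1]
        qx, qy = q[0] - cq[0], q[1] - cq[1]
        s_dot += px * qx + py * qy
        s_cross += px * qy - py * qx
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cq[0] - (c * cp[0] - s * cp[1])
    ty = cq[1] - (s * cp[0] + c * cp[1])
    return theta, (tx, ty)
```

The paper's contribution is precisely that the trimming fraction (the overlapping percentage) is estimated automatically at each step instead of being fixed as it is here.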

Journal ArticleDOI
TL;DR: The concept of optimal realization, which can be a helpful concept in practice, is introduced for transfer functions having rational orders and an algorithm is suggested to obtain the optimal realizations of rational order transfer functions.
Abstract: In this paper, the concept of minimal state space realization for a fractional order system is defined from the inner dimension point of view. Some basic differences of the minimal realization concept in the fractional and integer order systems are discussed. Five lower bounds are obtained for the inner dimension of a minimal state space realization of a fractional order transfer function. Also, the concept of optimal realization, which can be a helpful concept in practice, is introduced for transfer functions having rational orders. An algorithm is suggested to obtain the optimal realizations of rational order transfer functions. The introduced concept might be used to get minimal realizations of rational order transfer functions. This point is illustrated by presenting some examples.
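As a concrete illustration of the inner-dimension idea (the example and notation below are assumed for illustration, not taken from the paper): a commensurate fractional transfer function of base order $q$ admits a pseudo-state-space realization whose dimension equals the degree of the denominator as a polynomial in $s^q$. For instance,

```latex
G(s) = \frac{1}{s^{1/2} + 1}
\quad\Longrightarrow\quad
D^{1/2} x(t) = -x(t) + u(t), \qquad y(t) = x(t),
```

a realization of inner dimension one, even though exactly reproducing the same input-output behavior with integer-order states is impossible and approximating it requires arbitrarily many of them.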

Proceedings Article
07 Aug 2011
TL;DR: It is shown that integral kernels can be directly incorporated into a Gaussian process classification (GPC) framework to provide a continuous non-parametric Bayesian estimation of occupancy.
Abstract: We address the problem of building a continuous occupancy representation of the environment with ranging sensors. Observations from such sensors provide two types of information: a line segment or a beam indicating no returns along them (free-space); a point or return at the end of the segment representing an occupied surface. To model these two types of observations in a principled statistical manner, we propose a novel methodology based on integral kernels. We show that integral kernels can be directly incorporated into a Gaussian process classification (GPC) framework to provide a continuous non-parametric Bayesian estimation of occupancy. Directly handling line segment and point observations avoids the need to discretise segments into points, reducing the computational cost of GPC inference and learning. We present experiments on 2D and 3D datasets demonstrating the benefits of the approach.
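To see what "discretising segments into points" costs, here is the naive baseline that the integral-kernel approach avoids: each beam is chopped into free-space samples plus one occupied endpoint, and a kernel-weighted average (a crude stand-in for full GP classification, used here only to keep the sketch self-contained) interpolates a continuous occupancy field.

```python
import math

def beam_to_points(origin, endpoint, step=0.5):
    """Discretise a range beam into labelled points: free-space samples
    (-1) along the ray plus one occupied sample (+1) at the return.
    The paper's integral kernels avoid exactly this discretisation."""
    ox, oy = origin
    ex, ey = endpoint
    length = math.hypot(ex - ox, ey - oy)
    pts = []
    d = step
    while d < length:
        t = d / length
        pts.append(((ox + t * (ex - ox), oy + t * (ey - oy)), -1.0))
        d += step
    pts.append(((ex, ey), +1.0))
    return pts

def occupancy(query, data, lengthscale=0.3):
    """RBF-weighted average of the labels: a continuous occupancy
    estimate in [-1, 1] (negative = free, positive = occupied)."""
    num = den = 0.0
    for (x, y), label in data:
        w = math.exp(-((query[0] - x) ** 2 + (query[1] - y) ** 2)
                     / (2 * lengthscale ** 2))
        num += w * label
        den += w
    return num / den if den else 0.0

# One beam from the origin hitting a surface 4 m away along the x-axis.
data = beam_to_points((0.0, 0.0), (4.0, 0.0))
```

Note that the number of pseudo-points grows with beam length and inversely with the step size, which is the computational cost the paper's line-segment observations eliminate.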

Posted Content
TL;DR: In this paper, the authors studied radial solutions of the Emden-Fowler equation on the hyperbolic space and determined the exact asymptotic behavior of wide classes of finite and infinite energy solutions.
Abstract: We study the Emden-Fowler equation $-\Delta u=|u|^{p-1}u$ on the hyperbolic space ${\mathbb H}^n$. We are interested in radial solutions, namely solutions depending only on the geodesic distance from a given point. The critical exponent for such equation is $p=(n+2)/(n-2)$ as in the Euclidean setting, but the properties of the solutions show striking differences with the Euclidean case. While the papers \cite{mancini, bhakta} consider finite energy solutions, we shall deal here with infinite energy solutions and we determine the exact asymptotic behavior of wide classes of finite and infinite energy solutions.
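For orientation, the radial reduction behind the abstract can be written out explicitly (a standard fact about the Laplace-Beltrami operator on ${\mathbb H}^n$, not stated in the abstract itself). For solutions depending only on the geodesic distance $r$ from the given point, the equation becomes

```latex
-u''(r) - (n-1)\coth(r)\,u'(r) = |u(r)|^{p-1}u(r), \qquad r > 0.
```

Since $\coth(r) \to 1$ as $r \to \infty$, the damping coefficient stays bounded away from zero, in contrast with the Euclidean coefficient $(n-1)/r \to 0$; this is one source of the striking differences from the Euclidean case that the abstract mentions.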

01 Jan 2011
TL;DR: In this paper, results on the existence of best proximity points for cyclic φ-contractions in ordered metric spaces are established.
Abstract: In this paper, we give some results on the existence of best proximity points for cyclic φ-contractions in ordered metric spaces. AMS Subject Classifications: 47H04, 47H10.