
Showing papers on "Point (geometry)" published in 2006


01 Jan 2006
TL;DR: A method for segmentation of point clouds using a smoothness constraint is presented; it finds smoothly connected areas in point clouds using only local surface normals and point connectivity, which can be enforced with either k-nearest or fixed-distance neighbours.
Abstract: Segmentation is one of the most important processes for the automatic processing of point clouds. Methods based on curvature and other higher-level derivatives often lead to over-segmentation, which later requires a lot of manual editing. We present a method for segmentation of point clouds using a smoothness constraint, which finds smoothly connected areas in point clouds. It uses only local surface normals and point connectivity, which can be enforced using either k-nearest or fixed-distance neighbours. The presented method requires a small number of intuitive parameters, which provide a trade-off between under- and over-segmentation. The application of the presented algorithm to industrial point clouds shows its effectiveness compared to curvature-based approaches.
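As a concrete illustration of the entry above, here is a minimal region-growing sketch under assumptions the abstract leaves open: k-nearest-neighbour connectivity, a single angle threshold between normals, and precomputed unit normals. It is not the authors' implementation, and all names are illustrative.

```python
# Minimal sketch of smoothness-constrained region growing on a point cloud.
# Assumes points (N, 3) and unit normals (N, 3) are given.
import numpy as np
from scipy.spatial import cKDTree

def segment(points, normals, k=10, angle_thresh_deg=15.0):
    tree = cKDTree(points)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(len(points), dtype=int)   # -1 = unassigned
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            # Connectivity via k-nearest neighbours (a fixed-distance query works too).
            _, nbrs = tree.query(points[i], k=k)
            for j in np.atleast_1d(nbrs):
                # Nearly parallel normals -> same smooth region.
                if labels[j] == -1 and abs(np.dot(normals[i], normals[j])) >= cos_thresh:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels
```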

515 citations


Proceedings ArticleDOI
29 Jul 2006
TL;DR: A progressive compression method for point sampled models that is specifically apt at dealing with densely sampled surface geometry and it is demonstrated that additional point attributes, such as color, can be well integrated and efficiently encoded in this framework.
Abstract: In this paper we present a progressive compression method for point sampled models that is specifically apt at dealing with densely sampled surface geometry. The compression is lossless and therefore is also suitable for storing the unfiltered, raw scan data. Our method is based on an octree decomposition of space. The point-cloud is encoded in terms of occupied octree-cells. To compress the octree we employ novel prediction techniques that were specifically designed for point sampled geometry and are based on local surface approximations to achieve high compression rates that outperform previous progressive coders for point-sampled geometry. Moreover we demonstrate that additional point attributes, such as color, which are of great importance for point-sampled geometry, can be well integrated and efficiently encoded in this framework.
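A toy version of the octree occupancy coding described above may help fix the idea; the paper's surface-based occupancy prediction and entropy coding, which provide the actual compression gains, are omitted, and all names are illustrative.

```python
# Toy octree occupancy coder: each non-empty cell emits one byte whose bits
# mark its occupied children, traversing depth-first. Decoding the byte
# stream reproduces cell centers level by level, i.e. progressively.
import numpy as np

def encode_octree(points, center, half, depth, out):
    if depth == 0 or len(points) == 0:
        return
    bits = (points >= center).astype(int)     # per-axis child bit for each point
    idx = bits @ np.array([1, 2, 4])          # child index 0..7 per point
    mask = 0
    buckets = []
    for child in range(8):
        sel = idx == child
        if sel.any():
            mask |= 1 << child
            offs = np.array([(child >> a) & 1 for a in range(3)]) * 2 - 1
            buckets.append((points[sel], center + offs * (half / 2)))
    out.append(mask)                          # one occupancy byte per cell
    for pts, c in buckets:
        encode_octree(pts, c, half / 2, depth - 1, out)

# Usage: out = []; encode_octree(pts, center=np.zeros(3), half=1.0, depth=8, out=out)
```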

406 citations


Proceedings Article
04 Dec 2006
TL;DR: This work formulates the motion coherence constraint and derives a solution of the regularized ML estimation through the variational approach, which leads to an elegant kernel form; it also derives the EM algorithm for the penalized ML optimization with deterministic annealing.
Abstract: We introduce Coherent Point Drift (CPD), a novel probabilistic method for non-rigid registration of point sets. The registration is treated as a Maximum Likelihood (ML) estimation problem with motion coherence constraint over the velocity field such that one point set moves coherently to align with the second set. We formulate the motion coherence constraint and derive a solution of regularized ML estimation through the variational approach, which leads to an elegant kernel form. We also derive the EM algorithm for the penalized ML optimization with deterministic annealing. The CPD method simultaneously finds both the non-rigid transformation and the correspondence between two point sets without making any prior assumption of the transformation model except that of motion coherence. This method can estimate complex non-linear non-rigid transformations, and is shown to be accurate on 2D and 3D examples and robust in the presence of outliers and missing points.
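The following sketch shows one EM iteration in the spirit of CPD, using the published non-rigid update equations in simplified form; the uniform outlier term, the sigma^2 re-estimation, and the annealing schedule are omitted, and `lam`, `beta` are assumed hyperparameters.

```python
# One simplified CPD EM step: Y moves to fit X; the displacement field G @ W
# is smooth because G is a Gaussian kernel over Y (motion coherence).
import numpy as np

def cpd_step(X, Y, sigma2, lam=2.0, beta=2.0):
    # E-step: soft correspondences P[m, n] ~ p(y_m generated x_n)
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)          # (M, N)
    P = np.exp(-d2 / (2 * sigma2))
    P /= P.sum(axis=0, keepdims=True) + 1e-12
    # M-step: solve (diag(P1) G + lam sigma2 I) W = P X - diag(P1) Y
    G = np.exp(-((Y[None] - Y[:, None]) ** 2).sum(-1) / (2 * beta ** 2))
    P1 = P.sum(axis=1)
    A = P1[:, None] * G + lam * sigma2 * np.eye(len(Y))
    W = np.linalg.solve(A, P @ X - P1[:, None] * Y)
    return Y + G @ W, P
```

Iterating `cpd_step` while gradually lowering `sigma2` mimics the deterministic annealing mentioned in the abstract.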

330 citations


Journal ArticleDOI
TL;DR: This paper introduces the notion of a neighborhood structure for the general point matching problem, and formulates point matching as an optimization problem to preserve local neighborhood structures during matching.
Abstract: In previous work on point matching, a set of points is often treated as an instance of a joint distribution to exploit global relationships in the point set. For nonrigid shapes, however, the local relationship among neighboring points is stronger and more stable than the global one. In this paper, we introduce the notion of a neighborhood structure for the general point matching problem. We formulate point matching as an optimization problem to preserve local neighborhood structures during matching. Our approach has a simple graph matching interpretation, where each point is a node in the graph, and two nodes are connected by an edge if they are neighbors. The optimal match between two graphs is the one that maximizes the number of matched edges. Existing techniques are leveraged to search for an optimal solution with the shape context distance used to initialize the graph matching, followed by relaxation labeling updates for refinement. Extensive experiments show the robustness of our approach under deformation, noise in point locations, outliers, occlusion, and rotation. It outperforms the shape context and TPS-RPM algorithms on most scenarios.
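A small sketch of the paper's objective, stated as counting preserved neighbourhood edges under a candidate correspondence; the optimization itself (shape-context initialization plus relaxation labeling) is not reproduced, and names are illustrative.

```python
# Score a candidate correspondence by the number of matched k-NN edges.
import numpy as np
from scipy.spatial import cKDTree

def knn_edges(points, k=5):
    _, nbrs = cKDTree(points).query(points, k=k + 1)
    return {(i, j) for i, row in enumerate(nbrs) for j in row[1:]}  # skip self

def matched_edges(P, Q, corr, k=5):
    """corr[i] = index in Q matched to point i in P."""
    ep, eq = knn_edges(P, k), knn_edges(Q, k)
    return sum((corr[i], corr[j]) in eq for (i, j) in ep)
```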

323 citations


Journal Article
TL;DR: In this paper, a multi-view multi-hypothesis approach to segmenting and tracking multiple (possibly occluded) persons on a ground plane is proposed, where several iterations of segmentation are performed using information from human appearance models and ground plane homography.
Abstract: A multi-view multi-hypothesis approach to segmenting and tracking multiple (possibly occluded) persons on a ground plane is proposed. During tracking, several iterations of segmentation are performed using information from human appearance models and ground plane homography. To more precisely locate the ground location of a person, all center vertical axes of the person across views are mapped to the top-view plane and their intersection point on the ground is estimated. To tackle the explosive state space due to multiple targets and views, iterative segmentation-searching is incorporated into a particle filtering framework. By searching for people's ground point locations from segmentations, a set of a few good particles can be identified, resulting in low computational cost. In addition, even if all the particles are away from the true ground point, some of them move towards the true one through the iterated process as long as they are located nearby. We demonstrate the performance of the approach on several video sequences.
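One step described above lends itself to a compact sketch: given each view's person axis already mapped into the top-view plane (homographies assumed applied beforehand), the ground point is the least-squares intersection of the resulting 2D lines. Names are illustrative.

```python
# Least-squares intersection point of 2D lines (point + unit direction each).
import numpy as np

def intersect_lines(points, dirs):
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, dirs):
        M = np.eye(2) - np.outer(d, d)   # projector onto the line's normal space
        A += M
        b += M @ p
    # Needs at least two non-parallel lines, otherwise A is singular.
    return np.linalg.solve(A, b)         # minimizes summed squared distance to all lines
```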

200 citations


Proceedings Article
13 Jul 2006
TL;DR: In this article, the authors highlight two components of the condition used in constraint-based algorithms, which they call "Adjacency-Faithfulness" and "Orientation-Faithfulness."
Abstract: Most causal discovery algorithms in the literature exploit an assumption usually referred to as the Causal Faithfulness or Stability Condition. In this paper, we highlight two components of the condition used in constraint-based algorithms, which we call "Adjacency-Faithfulness" and "Orientation-Faithfulness." We point out that assuming Adjacency-Faithfulness is true, it is possible to test the validity of Orientation-Faithfulness. Motivated by this observation, we explore the consequence of making only the Adjacency-Faithfulness assumption. We show that the familiar PC algorithm has to be modified to be correct under the weaker, Adjacency-Faithfulness assumption. The modified algorithm, called Conservative PC (CPC), checks whether Orientation-Faithfulness holds in the orientation phase, and if not, avoids drawing certain causal conclusions the PC algorithm would draw. However, if the stronger, standard causal Faithfulness condition actually obtains, the CPC algorithm outputs the same pattern as the PC algorithm does in the large sample limit. We also present a simulation study showing that the CPC algorithm runs almost as fast as the PC algorithm, and outputs significantly fewer false causal arrowheads than the PC algorithm does on realistic sample sizes.
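A hedged sketch of the kind of orientation test CPC performs on an unshielded triple; `indep` is an assumed conditional-independence oracle, and the exact search over separating sets in the published algorithm may differ.

```python
# For an unshielded triple (x, y, z): orient a collider only if y appears in
# no separating set, a non-collider only if y appears in all of them, and
# mark the triple ambiguous otherwise (Orientation-Faithfulness suspect).
from itertools import chain, combinations

def classify_triple(x, y, z, adj, indep):
    cands = sorted((adj[x] | adj[z]) - {x, z})
    subsets = chain.from_iterable(combinations(cands, r) for r in range(len(cands) + 1))
    sepsets = [set(S) for S in subsets if indep(x, z, set(S))]
    if not sepsets:
        return "not separable"
    if all(y in S for S in sepsets):
        return "non-collider"
    if all(y not in S for S in sepsets):
        return "collider"
    return "ambiguous"
```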

151 citations


Patent
08 Aug 2006
TL;DR: In this article, a surgical navigation system is used to position the femoral component of a prosthetic hip during hip joint replacement surgery without separately affixing a marker to the femur.
Abstract: Methods and apparatus using a surgical navigation system to position the femoral component of a prosthetic hip during hip joint replacement surgery without separately affixing a marker to the femur. The navigation system acquires the center of rotation of the hip joint as well as at least one point on the femur in the pelvic frame of reference. From these two points, the navigation system calculates the position and length of a first line between the center of rotation of the hip joint and the point on the femur. Optionally, a second point on the femur that is not on the first line is palpated. The system can calculate the position and length of a second line that is perpendicular to the first line and that runs from the first line to the second palpated point on the femur. The prosthetic cup is implanted and its center of rotation is recorded. A tool for forming the bore within which the stem of the femoral implant component will be placed is tracked by the navigation system. While the tool is fixed to the femur, the surgeon re-palpates the same point(s) on the femur that were previously palpated. The navigation system calculates the position and length of a first line between the center of rotation of the prosthetic cup and the re-palpated first point. If a second point on the femur was re-palpated, the navigation system also calculates the position and length of a perpendicular line between the first line and the second point. The surgical navigation system uses this information to calculate and display to the surgeon relevant information about the surgery, such as change in the patient's leg length and/or medialization/lateralization of the joint.
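The patent's geometric bookkeeping reduces to simple vector arithmetic; the sketch below is illustrative only (names and sign conventions are assumptions, not the patent's notation).

```python
# Length of the line from the joint's center of rotation to a palpated femur
# point, plus the perpendicular distance of an optional second point to that
# line. Comparing these before and after implantation gives the changes the
# patent displays (leg length; medialization/lateralization).
import numpy as np

def line_and_offset(center, p1, p2=None):
    axis = p1 - center
    length = np.linalg.norm(axis)
    offset = None
    if p2 is not None:
        u = axis / length
        v = p2 - center
        offset = np.linalg.norm(v - (v @ u) * u)  # perpendicular distance to the line
    return length, offset
```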

138 citations


Journal ArticleDOI
TL;DR: The paper makes the first attempt to show how the ant colony optimization (ACO) algorithm can be applied to DLP with the budget constraints, and results are obtained from the solution of several test problems.
Abstract: The main characteristic of today's manufacturing environments is volatility. Under a volatile environment, demand is not stable; it changes from one production period to another. To operate efficiently under such environments, the facilities must be adaptive to changing production requirements. From a layout point of view, this situation requires the solution of the dynamic layout problem (DLP). DLP is a computationally complex combinatorial optimization problem for which optimal solutions can only be found for small-size problems. It is known that classical optimization procedures are not adequate for this problem. Therefore, several heuristics, including tabu search, simulated annealing and genetic algorithms, have been applied to this problem to find a good solution. This work makes use of the ant colony optimization (ACO) algorithm to solve the DLP by considering the budget constraints. The paper makes the first attempt to show how the ACO can be applied to DLP with the budget constraints. In the paper, example applications are presented and computational experiments are performed to demonstrate the suitability of the ACO to solve DLP problems. Promising results are obtained from the solution of several test problems.

135 citations


Journal ArticleDOI
TL;DR: In this article, Japanese college students at two different proficiency levels were evaluated for their ability to produce a speech act of request in a spoken role play task and their performance was analyzed quantitatively by rating performance on a six-point scale for overall appropriateness, as well as identifying the directness levels of the linguistic expressions used to produce requests.
Abstract: Fifty-nine Japanese college students of English at two different proficiency levels were evaluated for their ability to produce a speech act of request in a spoken role play task. Learners’ production was analyzed quantitatively by rating performance on a six-point scale for overall appropriateness, as well as qualitatively by identifying the directness levels of the linguistic expressions used to produce requests. Results revealed a significant L2 proficiency influence on overall appropriateness, but only a marginal difference in the types of linguistic expressions used between the two proficiency groups. Moreover, grammatical and discourse control encoded in the rating scale seemed to have affected the quality of speech acts.

125 citations


Book ChapterDOI
07 May 2006
TL;DR: A multi-view multi-hypothesis approach to segmenting and tracking multiple (possibly occluded) persons on a ground plane is proposed and a set of a few good particles can be identified, resulting in low computational cost.
Abstract: A multi-view multi-hypothesis approach to segmenting and tracking multiple (possibly occluded) persons on a ground plane is proposed. During tracking, several iterations of segmentation are performed using information from human appearance models and ground plane homography. To more precisely locate the ground location of a person, all center vertical axes of the person across views are mapped to the top-view plane and their intersection point on the ground is estimated. To tackle the explosive state space due to multiple targets and views, iterative segmentation-searching is incorporated into a particle filtering framework. By searching for people's ground point locations from segmentations, a set of a few good particles can be identified, resulting in low computational cost. In addition, even if all the particles are away from the true ground point, some of them move towards the true one through the iterated process as long as they are located nearby. We demonstrate the performance of the approach on several video sequences.

120 citations


Journal ArticleDOI
01 Aug 2006
TL;DR: This paper proves that the dynamic system converges in a small number of iterations to a state where the minimal distance to a target is recorded at each grid point and shows that this robot-path-planning algorithm can be made to always choose an optimal path.
Abstract: This paper presents a simple yet efficient dynamic-programming (DP) shortest path algorithm for real-time collision-free robot-path planning applicable to situations in which targets and barriers are permitted to move. The algorithm works in real time and requires no prior knowledge of target or barrier movements. In the case that the barriers are stationary, this paper proves that this algorithm always results in the robot catching the target, provided it moves at a greater speed than the target, and the dynamic-system update frequency is sufficiently large. Like most robot-path-planning approaches, the environment is represented by a topologically organized map. Each grid point on the map has only local connections to its neighboring grid points from which it receives information in real time. The information stored at each point is a current estimate of the distance to the nearest target and the neighbor from which this distance was determined. Updating the distance estimate at each grid point is done using only the information gathered from the point's neighbors, that is, each point can be considered an independent processor, and the order in which grid points are updated is not determined based on global knowledge of the current distances at each point or the previous history of each point. The robot path is determined in real time completely from the information at the robot's current grid-point location. The computational effort to update each point is minimal, allowing for rapid propagation of the distance information outward along the grid from the target locations. In the static situation, where both the targets and the barriers do not move, this algorithm is a DP solution to the shortest path problem, but is restricted by lack of global knowledge. In this case, this paper proves that the dynamic system converges in a small number of iterations to a state where the minimal distance to a target is recorded at each grid point and shows that this robot-path-planning algorithm can be made to always choose an optimal path. The effectiveness of this algorithm is demonstrated through a number of simulations.
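The local update rule described above can be sketched directly; this toy version uses 4-connectivity and unit edge costs (both assumptions) and a synchronous sweep in place of independent per-point processors.

```python
# Local distance propagation on a grid: each free cell takes the minimum of
# its neighbours' estimates plus one; repeated sweeps converge from the
# targets outward, Bellman-Ford style, with no global knowledge.
import numpy as np

def sweep(dist, free):
    """One synchronous update pass; returns True if any estimate changed."""
    changed = False
    H, W = dist.shape
    for y in range(H):
        for x in range(W):
            if not free[y, x] or dist[y, x] == 0:   # barrier or target cell
                continue
            best = dist[y, x]
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W and free[ny, nx]:
                    best = min(best, dist[ny, nx] + 1)
            if best < dist[y, x]:
                dist[y, x] = best
                changed = True
    return changed

# Initialize dist = np.inf everywhere and 0 at targets, free = ~barriers;
# repeat sweep() until False, then step the robot to its lowest-valued neighbour.
```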

Proceedings ArticleDOI
17 Jun 2006
TL;DR: This paper uses an automatic segmentation algorithm to identify the set of rigid points which in turn is used to estimate the internal camera calibration parameters and the overall rigid motion and formalises the problem of non-rigid shape estimation as a constrained non-linear minimization adding priors on the degree of deformability of each point.
Abstract: In this paper we focus on the estimation of the 3D Euclidean shape and motion of a non-rigid object which is moving rigidly while deforming and is observed by a perspective camera. Our method exploits the fact that it is often a reasonable assumption that some of the points are deforming throughout the sequence while others remain rigid. First we use an automatic segmentation algorithm to identify the set of rigid points which in turn is used to estimate the internal camera calibration parameters and the overall rigid motion. Finally we formalise the problem of non-rigid shape estimation as a constrained non-linear minimization adding priors on the degree of deformability of each point. We perform experiments on synthetic and real data which show firstly that even when using a minimal set of rigid points it is possible to obtain reliable metric information and secondly that the shape priors help to disambiguate the contribution to the image motion caused by the deformation and the perspective distortion.

Book ChapterDOI
01 Jan 2006
TL;DR: Practical techniques for fitting stochastic models to spatial point pattern data in the statistical package R are described and demonstrated on two example datasets.
Abstract: We describe practical techniques for fitting stochastic models to spatial point pattern data in the statistical package R. The techniques have been implemented in our package spatstat in R. They are demonstrated on two example datasets.

Journal ArticleDOI
TL;DR: In this article, an explicit material point finite element (FE) method is proposed and a computer code EMPFE-3D is developed for simulating hypervelocity impact, which combines the advantages of Eulerian and Lagrangian descriptions of motion while eliminating their drawbacks due to element entanglement and numerical dissipation.
Abstract: In this paper, an explicit material point finite element (FE) method is proposed and a computer code EMPFE-3D is developed for simulating hyper-velocity impact. The material domain is discretized by a mesh of finite elements. The momentum equations are solved on a predefined computational grid (like the material point method) in the large deformation zone, and on the FE mesh (like the traditional FE method) elsewhere. The grid may be fixed in space or moved in a predefined way. The nodes covered by the grid are treated as material particles, and the remaining nodes are treated as FE nodes. The proposed method yields the same results as the traditional FE method if the grid vanishes. On the other hand, it yields the same results as the material point method if the grid covers the entire material domain at all time steps. The method combines the advantages of Eulerian and Lagrangian descriptions of motion while eliminating their drawbacks due to element entanglement and numerical dissipation. The method is computationally efficient and can be easily implemented in an existing explicit FE code like DYNA3D. Copyright © 2005 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This work creates a framework for voxel-less multiresolution representation of images in nuclear medicine by developing a method for three-dimensional tomographic reconstruction using a point cloud image representation.
Abstract: Medical images in nuclear medicine are commonly represented in three dimensions as a stack of two-dimensional images that are reconstructed from tomographic projections. Although natural and straightforward, this may not be an optimal visual representation for performing various diagnostic tasks. A method for three-dimensional (3-D) tomographic reconstruction is developed using a point cloud image representation. A point cloud is a set of points (nodes) in space, where each node of the point cloud is characterized by its position and intensity. The density of the nodes determines the local resolution allowing for the modeling of different parts of the image with different resolution. The reconstructed volume, which in general could be of any resolution, size, shape, and topology, is represented by a set of nonoverlapping tetrahedra defined by the nodes. The intensity at any point within the volume is defined by linearly interpolating inside a tetrahedron from the values at the four nodes that define the tetrahedron. This approach creates a continuous piecewise linear intensity over the reconstruction domain. The reconstruction provides a distinct multiresolution representation, which is designed to accurately and efficiently represent the 3-D image. The method is applicable to the acquisition of any tomographic geometry, such as parallel-, fan-, and cone-beam; and the reconstruction procedure can also model the physics of the image detection process. An efficient method for evaluating the system projection matrix is presented. The system matrix is used in an iterative algorithm to reconstruct both the intensity and location of the distribution of points in the point cloud. Examples of the reconstruction of projection data generated by computer simulations and projection data experimentally acquired using a Jaszczak cardiac torso phantom are presented. This work creates a framework for voxel-less multiresolution representation of images in nuclear medicine.
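The interpolation rule stated above is barycentric blending inside each tetrahedron; a minimal sketch (names illustrative):

```python
# Piecewise-linear intensity: barycentric blend of the four node intensities.
import numpy as np

def interp_in_tet(p, verts, vals):
    """verts: (4, 3) tetrahedron nodes; vals: (4,) node intensities."""
    T = np.column_stack([verts[1] - verts[0], verts[2] - verts[0], verts[3] - verts[0]])
    l123 = np.linalg.solve(T, p - verts[0])          # coords w.r.t. nodes 1..3
    lam = np.concatenate([[1.0 - l123.sum()], l123]) # node-0 coord closes the sum to 1
    assert np.all(lam >= -1e-9), "p lies outside the tetrahedron"
    return lam @ vals
```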

01 Oct 2006
TL;DR: In this article, a Reproducing Kernel Hilbert Space (RKHS) norm is induced on the space of Borel measures and an efficient alternating conjugate gradient descent algorithm is derived.
Abstract: A central notion in Computational Anatomy is the generation of registration maps, mapping a large set of anatomical data to a common coordinate system to study intra-population variability and inter-population differences. In previous work, methods for estimating the common coordinate system, or template, given a collection of imaging data were presented, based on the notion of Fréchet mean estimation using a metric on the space of diffeomorphisms. In this paper we extend the methodology to the estimation of a template given a collection of unlabeled point sets and surfaces. Using a representation of points and surfaces as currents, a Reproducing Kernel Hilbert Space (RKHS) norm is induced on the space of Borel measures. Using this norm and a metric on the space of diffeomorphisms, the template estimation problem is posed as a minimum mean squared error estimation problem. An efficient alternating conjugate gradient descent algorithm is derived and results exemplifying the methodology are presented.
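The matching term behind this construction can be sketched for the simplest case of unweighted point sets: each set becomes a sum of Diracs and the squared RKHS norm of their difference expands into three kernel sums. A Gaussian kernel is assumed here purely for illustration.

```python
# || mu_A - mu_B ||^2 in the RKHS induced by a Gaussian kernel.
import numpy as np

def gauss(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def rkhs_dist2(A, B, sigma=1.0):
    return (gauss(A, A, sigma).sum()
            + gauss(B, B, sigma).sum()
            - 2 * gauss(A, B, sigma).sum())
```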

Patent
16 Aug 2006
TL;DR: In this article, a method for recognizing a dual-point user input on a touch-based user input device is presented, in which a first user input relating to a first position and a second user input relating to a second position are used to determine, on the basis of the first and second position signals, whether the second user input has its source in a simultaneous dual-point input.
Abstract: There is disclosed a method for recognizing a dual point user input on a touch based user input device, comprising receiving a first user input to said input device relating to a first position, forming a first position signal relating to said first user input, receiving a second user input to said input device relating to a second position, forming a second position signal relating to said first input and said second input, determining on the basis of said first position signal and said second position signal, if said second user input has its source in a simultaneous dual point user input, generating a third position based on said first position and said second position, and using said first and third positions, as the coordinates of said dual point user input. Further there is provided a touch based input device controller for a touch based user input device, wherein said input device is only capable of outputting a single input position signal that depends on the actual user input, comprising, an input connectable to said touch based user input device to receive successive position signals each representing a position on said touch based user input device, which a user has touched, a memory, connected to said input, to store at least one of said position signals, a differentiator to detect time dependent transition properties between two different successive positions, a first evaluation circuit connected to said differentiator to determine, if a position following a preceding position is caused by a single point user input or by a dual point user input, a second evaluation circuit, connected to said input, said memory and said first evaluation circuit, wherein said second evaluation circuit is provided to calculate a dual point user input by reflecting a first input position at a successive second position, and an output, connected to said second evaluation unit, connectable to a processing unit.
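A toy illustration of the reflection step in the claims, assuming the single-point controller reports the midpoint of two simultaneous contacts (an assumption consistent with "reflecting a first input position at a successive second position"):

```python
# The second contact is the first touch mirrored about the reported position.
def second_contact(first, reported):
    return (2 * reported[0] - first[0], 2 * reported[1] - first[1])

# First finger held at (100, 100); controller then reports (150, 120)
# -> estimated second finger position: (200, 140).
print(second_contact((100, 100), (150, 120)))
```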

Journal ArticleDOI
TL;DR: The computational problem of multiple common point set detection and, particularly, the matching problem in K-partite-ε graphs, where K partitions are associated with K structures and edges are defined between ε-close points, is discussed, and it is shown that the K-partite-ε matching problem is NP-hard in the Euclidean space with dimension larger than one.
Abstract: Recognition of binding patterns common to a set of protein structures is important for recognition of function, prediction of binding, and drug design. We consider protein binding sites represented by a set of 3D points with assigned physico-chemical and geometrical properties important for protein–ligand interactions. We formulate the multiple binding site alignment problem as detection of the largest common set of such 3D points. We discuss the computational problem of multiple common point set detection and, particularly, the matching problem in K-partite-ε graphs, where K partitions are associated with K structures and edges are defined between ε-close points. We show that the K-partite-ε matching problem is NP-hard in the Euclidean space with dimension larger than one. Consequently, we show that the largest common point set problem between three point sets is NP-hard. On the practical side, we present a novel computational method, MultiBind, for recognition of binding patterns common to a set of protein structures.

Proceedings Article
01 Jan 2006
TL;DR: Properties of point sets are investigated to derive criteria for automatic hole detection; for each point, several criteria are combined into an integrated boundary probability, and a final boundary loop extraction step uses this probability and exploits additional coherence properties of the boundary to yield a robust and automatic hole detection algorithm.
Abstract: Models of non-trivial objects resulting from a 3d data acquisition process (e.g. Laser Range Scanning) often contain holes due to occlusion, reflectance or transparency. As point set surfaces are unstructured surface representations with no adjacency or connectivity information, defining and detecting holes is a non-trivial task. In this paper we investigate properties of point sets to derive criteria for automatic hole detection. For each point, we combine several criteria into an integrated boundary probability. A final boundary loop extraction step uses this probability and exploits additional coherence properties of the boundary to derive a robust and automatic hole detection algorithm.
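One plausible criterion of the kind the paper combines is an angle-gap test: project a point's neighbours into its tangent plane and look for a large empty angular sector. This sketch assumes precomputed normals and is illustrative, not the paper's exact criterion set.

```python
# Angle-gap boundary criterion for a single point i of an unstructured point set.
import numpy as np
from scipy.spatial import cKDTree

def angle_gap(points, normals, i, k=12):
    _, nbrs = cKDTree(points).query(points[i], k=k + 1)  # nbrs[0] is i itself
    n = normals[i]
    u = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:                 # n happened to be parallel to x
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)                           # (u, v) spans the tangent plane
    d = points[nbrs[1:]] - points[i]
    ang = np.sort(np.arctan2(d @ v, d @ u))
    gaps = np.diff(np.concatenate([ang, [ang[0] + 2 * np.pi]]))
    return gaps.max()   # ~2*pi/k for interior points, large on a boundary
```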

Patent
08 Feb 2006
TL;DR: In this paper, a method of generating an evidence grid representing an environment comprises the steps of collecting a set of stereo images at multiple locations within the environment using stereo sensors, and processing the stereo images to determine occupancy and distance data associated with each point in space represented in the set of stereo images.
Abstract: A method of generating an evidence grid representing an environment comprises the steps of collecting a set of stereo images at multiple locations within the environment using stereo sensors, and processing the set of stereo images to determine occupancy and distance data associated with each point in space represented in the set of stereo images; applying a sensor model to each point in space to determine a probability of occupancy of each voxel representing each point in space; and generating an evidence grid of the environment by combining the probabilities for each voxel.
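The evidence-combination step admits a standard sketch: treat each measurement's per-voxel occupancy probability as independent evidence and accumulate it in log-odds form. The sensor model itself is assumed given.

```python
# Evidence-grid fusion in log-odds form.
import numpy as np

def fuse(prior_logodds, p_occ):
    """Add one measurement's per-voxel occupancy evidence to the grid."""
    p = np.clip(p_occ, 1e-6, 1 - 1e-6)
    return prior_logodds + np.log(p / (1 - p))

def to_prob(logodds):
    return 1.0 / (1.0 + np.exp(-logodds))
```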

Journal ArticleDOI
Xiaohu Guo1, Xin Li1, Yunfan Bao1, Xianfeng Gu1, Hong Qin1 
TL;DR: This paper proposes a meshless surface computational paradigm in which the partial differential equations can be applied and solved directly over point samples via moving least squares (MLS) shape functions defined on the global parametric domain without explicit connectivity information.
Abstract: This paper presents a new approach to the physically-based thin-shell simulation of point-sampled geometry via explicit, global conformal point-surface parameterization and meshless dynamics. The point-based global parameterization is founded upon the rigorous mathematics of Riemann surface theory and Hodge theory. The parameterization is globally conformal everywhere except for a minimum number of zero points. Within our parameterization framework, any well-sampled point surface is functionally equivalent to a manifold, enabling popular and powerful surface-based modeling and physically-based simulation tools to be readily adapted for point geometry processing and animation. In addition, we propose a meshless surface computational paradigm in which the partial differential equations (for dynamic physical simulation) can be applied and solved directly over point samples via moving least squares (MLS) shape functions defined on the global parametric domain without explicit connectivity information. The global conformal parameterization provides a common domain to facilitate accurate meshless simulation and efficient discontinuity modeling for complex branching cracks. Through our experiments on thin-shell elastic deformation and fracture simulation, we demonstrate that our integrative method is very natural, and that it has great potential to further broaden the application scope of point-sampled geometry in graphics and relevant fields.
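A minimal sketch of moving least squares over a 2D parametric domain, of the kind the paper applies to point samples; the linear basis and Gaussian weight are assumptions, and the global conformal parameterization is assumed to supply the 2D coordinates.

```python
# MLS evaluation of a scattered field at a query location x in the 2D domain.
import numpy as np

def mls_value(x, nodes, vals, h=0.3):
    P = np.column_stack([np.ones(len(nodes)), nodes])   # linear basis [1, u, v]
    w = np.exp(-((nodes - x) ** 2).sum(1) / h ** 2)     # moving (local) weights
    A = (P * w[:, None]).T @ P                          # weighted moment matrix
    c = np.linalg.solve(A, (P * w[:, None]).T @ vals)   # local least-squares fit
    return np.array([1.0, *x]) @ c
```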

Patent
06 Feb 2006
TL;DR: In this paper, the authors describe a method, an apparatus, and a carrier medium carrying computer readable code segments to instruct a processor to execute the method in a wireless network that includes at least one access point.
Abstract: A method, an apparatus, and a carrier medium carrying computer readable code segments to instruct a processor to execute the method. The method is in a wireless network that includes at least one access point. The method includes, from time-to-time, measuring a first set of at least one property of each access point of a set of at least one classified access point of the wireless network. The method further includes re-classifying each access point based on at least one function of a second set of at least one property of the access point, the second set of properties including the first set of properties. The set of at least one access point is classified according to a set of AP classifications, and the re-classifying is into one of the AP classifications.

Journal ArticleDOI
TL;DR: In this paper, a neural network model for simulating tides at multiple points, taking tide-generating forces into account, is presented. The proposed model is shown to be as efficient as the harmonic method for estimating the tide at a single point.

Journal ArticleDOI
TL;DR: In this paper, the L_q-discrepancy of point distributions in the multi-dimensional unit cube with respect to certain p-ary arithmetic operations has been studied, where point distributions can be thought of as finite subgroups in a compact totally disconnected group of the Cantor type.
Abstract: The goal of this paper is to study point distributions in the multi-dimensional unit cube which possess the structure of finite abelian groups with respect to certain p-ary arithmetic operations. Such distributions can be thought of as finite subgroups in a compact totally disconnected group of the Cantor type. We apply the methods of L_q harmonic analysis to estimate very precisely the L_q-discrepancies for such distributions. Following this approach, we explicitly construct point distributions with the minimal order of the L_q-discrepancy for each q, 1 < q < ∞.
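For reference, a common form of the L_q-discrepancy of an N-point set P in the unit cube is the following (notation assumed; the paper's normalization may differ):

```latex
\[
  \mathcal{L}_q(P) \;=\;
  \left( \int_{[0,1]^d}
    \Bigl| \tfrac{1}{N}\,\#\{\, p \in P : p \in [0,x) \,\} - \prod_{i=1}^{d} x_i \Bigr|^{q}
  \, dx \right)^{1/q}.
\]
```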

Patent
30 Nov 2006
TL;DR: In this article, a system, method and computer program product are provided for generating backup sets for a specific point in time from backup data stored in a backup storage device, where the backup set is a collection of the files and data that would have been the most recently backed-up version of a computer's file and data at a particular point in the past.
Abstract: A system, method and computer program product are provided for generating backup sets for a specific point in time from backup data stored in a backup storage device. A backup set, in accordance with an illustrative embodiment, is a collection of the files and data that would have been the most recently backed-up version of a computer's files and data at the particular point in time if the backup set had been created at that point in time. In order to generate the backup set, versions of data objects are identified in the backup storage system that have backup timestamps equal to or prior to the designated point in time and that have deactivation times that are after the designated point in time. In addition, a mechanism for deleting versions of data objects based on a retention time and an inactive version limit is provided.
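The selection rule stated above translates directly into code; field names here are illustrative, not the product's schema.

```python
# A version belongs to the backup set for time T if it was backed up at or
# before T and was not deactivated until after T; per object, keep the most
# recent qualifying version.
def backup_set(versions, T):
    """versions: iterable with .name, .backup_time, .deactivated
    (deactivated is None while the version is still active)."""
    chosen = {}
    for v in versions:
        if v.backup_time <= T and (v.deactivated is None or v.deactivated > T):
            cur = chosen.get(v.name)
            if cur is None or v.backup_time > cur.backup_time:
                chosen[v.name] = v
    return list(chosen.values())
```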

Journal ArticleDOI
TL;DR: The data constitute the first tri-dimensional assessment of the subjective vertical using a common measurement procedure and provide the basis for the tri-axial modeling of vestibular function presented in the companion paper.
Abstract: Localization of the subjective vertical during body tilt in pitch and in roll has been extensively studied because of the relevance of these axes for aviation and control of posture. Studies of yaw orientation relative to gravity are lacking. Our goal was to perform the first thorough evaluation of static orientation in recumbent yaw and to collect as efficiently as possible roll and pitch orientation data which would be consistent with the literature, using the same technique as our yaw tests. This would create the first comprehensive, coherent data set for all three axes suitable for quantitative tri-dimensional modeling of spatial orientation. We tested localization of the vertical for subjects tilted in pitch (−100° to +130°), in roll (−90° to +90°), and in yaw while recumbent (−80° to +80°). We had subjects point a gravity-neutral probe to the gravitational vertical (haptically indicated vertical) and report verbally their perceived tilt. Subjects underestimated their body tilts in recumbent yaw and pitch and overestimated their tilts in roll. The haptic settings for pitch and roll were consistent with data in the literature obtained with haptic and visual indications. Our data constitute the first tri-dimensional assessment of the subjective vertical using a common measurement procedure and provide the basis for the tri-axial modeling of vestibular function presented in the companion paper.

Journal ArticleDOI
TL;DR: Different algorithms are compared, and it is shown that using an integration rule based on Gauss points for simplices is of interest: it can be handled efficiently and recovers the theoretically expected convergence rate, even with curved elements.
Abstract: Spectral element approximations for triangles are not yet as mature as those for quadrilaterals. Here we compare different algorithms and show that using an integration rule based on Gauss points for simplices is of interest. We point out that this can be handled efficiently and allows one to recover the theoretically expected convergence rate, even with curved elements.
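For flavour, the simplest Gauss-type rule on a triangle is the classical three-point, degree-2 rule at the edge midpoints; the rules the paper uses are higher order, but the usage pattern is the same.

```python
# Three-point midpoint rule on a triangle: exact for polynomials of degree <= 2.
import numpy as np

def quad_triangle(f, v0, v1, v2):
    e1, e2 = v1 - v0, v2 - v0
    area = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])
    mids = [(v0 + v1) / 2, (v1 + v2) / 2, (v2 + v0) / 2]
    return area / 3.0 * sum(f(*m) for m in mids)
```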

Journal ArticleDOI
TL;DR: In this paper, reachable sets from a point for the Heisenberg sub-Lorentzian metric on R3 were computed and an estimate for the distance function was given.
Abstract: In this paper we compute reachable sets from a point for the Heisenberg sub-Lorentzian metric on R3 and give an estimate (from below) for the distance function.

Journal ArticleDOI
TL;DR: This paper finishes determining the minimum distance of two-point codes on a Hermitian curve.
Abstract: This is a continuation of the previous papers [3, 4, 5]. We finish determining the minimum distance of two-point codes on a Hermitian curve.

Journal ArticleDOI
01 Dec 2006
TL;DR: Several Hardy-type inequalities with explicit constants are proved for compactly supported smooth functions on open sets in the Euclidean space ℝn.
Abstract: Several Hardy-type inequalities with explicit constants are proved for compactly supported smooth functions on open sets in the Euclidean space ℝn.
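For orientation, the classical model case of such inequalities on the whole space (n ≥ 3), with its sharp explicit constant, reads:

```latex
% Classical Hardy inequality with sharp constant (2/(n-2))^2, n >= 3:
\[
  \int_{\mathbb{R}^n} \frac{|u(x)|^2}{|x|^2}\,dx
  \;\le\; \Bigl(\frac{2}{n-2}\Bigr)^{2} \int_{\mathbb{R}^n} |\nabla u(x)|^2\,dx,
  \qquad u \in C_0^\infty(\mathbb{R}^n).
\]
```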