Dissertation

Multi-sources fusion based vehicle localization in urban environments under a loosely coupled probabilistic framework

17 Jul 2013
TL;DR: When GPS information is unavailable for a long period, the trajectory estimated by relative approaches alone tends to diverge, because the error on the vehicle's position accumulates over time.
Abstract: In some dense urban environments (e.g., streets lined with tall buildings), the vehicle localization provided by a Global Positioning System (GPS) receiver may be inaccurate or even unavailable due to signal reflection (multi-path) or poor satellite visibility. To improve the accuracy and robustness of assisted navigation systems, and thus guarantee driving safety and service continuity on the road, this thesis presents a vehicle localization approach that exploits the redundancy and complementarity of multiple sensors. First, the GPS localization method is complemented by an onboard dead-reckoning (DR) method (inertial measurement unit, odometer, gyroscope), a stereovision-based visual odometry method, a horizontal laser range finder (LRF) based scan-alignment method, and a map-matching method based on a 2D GIS road-network map, to provide a coarse vehicle pose estimate. A sensor-selection step validates the coherence of the observations from the multiple sensors, and only information from the validated sensors is combined under a loosely coupled probabilistic framework with an information filter. Then, if the GPS receiver encounters long-term outages, the accumulated localization error of the DR-only method is bounded by adding a GIS building-map layer. Two onboard LRF systems (one horizontal, one vertical) are mounted on the roof of the vehicle and used to detect building facades in the urban environment. The detected facades are projected onto the 2D ground plane and associated with the GIS building-map layer to correct the vehicle pose error, especially the lateral error. The facade landmarks extracted from the vertical LRF scan are stored in a new GIS map layer. The proposed approach is tested and evaluated on real data sequences.
Experimental results with real data show that fusing the stereoscopic system and the LRF can keep localizing the vehicle during short GPS outages and can correct GPS positioning errors such as GPS jumps; that the road map helps obtain an approximate estimate of the vehicle position by projecting it onto the corresponding road segment; and that integrating the building information helps refine the initial pose estimate when GPS signals are lost for a long time.
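The loosely coupled fusion step can be sketched as follows. This is a minimal illustration, not the thesis's implementation: it assumes each validated sensor yields a Gaussian 2-D position estimate (mean and covariance), and fuses them with the core update of an information filter, i.e., by summing information matrices (inverse covariances) and information vectors. The sensor values below are hypothetical.

```python
import numpy as np

def fuse_information_filter(estimates):
    """Fuse Gaussian (mean, covariance) position estimates from validated
    sensors by summing information matrices and information vectors."""
    Y = np.zeros((2, 2))   # fused information matrix
    y = np.zeros(2)        # fused information vector
    for mean, cov in estimates:
        info = np.linalg.inv(cov)   # information contribution of one sensor
        Y += info
        y += info @ mean
    fused_cov = np.linalg.inv(Y)
    fused_mean = fused_cov @ y
    return fused_mean, fused_cov

# Hypothetical 2-D position fixes: a noisy GPS fix and a tighter
# visual-odometry fix; the fusion is pulled toward the more certain one.
gps = (np.array([10.0, 5.0]), np.diag([4.0, 4.0]))
vo = (np.array([10.4, 4.8]), np.diag([1.0, 1.0]))
mean, cov = fuse_information_filter([gps, vo])
```

A convenience of the information form is that adding or dropping a sensor (after the coherence check) is just adding or omitting one term in the sums.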


Citations
Dissertation
05 Dec 2008
TL;DR: In this paper, the authors present an approach for estimating the relative pose of a vehicle, which is corrected by GPS measurements whenever they are available.
Abstract: This thesis investigates the contribution of a 3D virtual urban model to the ego-localization of intelligent vehicles, as well as to obstacle detection and localization. Vehicle localization relies on several information sources: a GPS, proprioceptive sensors (odometers and a gyrometer), a camera, and a 3D virtual model of the environment. The proprioceptive sensors provide a quasi-continuous estimate of the vehicle's relative pose. This pose estimate is corrected by GPS measurements when they are available. To compensate for the drift of dead-reckoning localization during long GPS outages, a 3D cartographic observation is constructed, based on registering the 3D virtual urban model against the images acquired by the camera. Experimental work illustrates the developed approach. The contribution of a 3D virtual urban model is also studied for obstacle detection and localization. Once the vehicle is localized in the 3D model, infrastructure obstacles such as buildings are known and localized. To detect obstacles that do not belong to the infrastructure (vehicles, pedestrians, ...), the real image is compared with the virtual image, considering that this type of obstacle is present in the real image but absent from the virtual one. Using the depth information available from the 3D model, the detected obstacles are then geolocalized. The experimental results are compared and validated against a laser rangefinder.

11 citations

Patent
12 Aug 2016
TL;DR: In this paper, a fixed light source is identified in each of the first and second images, and the first image data and the second image data are transformed to provide an inverse perspective map (IPM) comprising a first transformed intersection and a second transformed intersection, respectively.
Abstract: First and second image data is captured comprising a first and second image, respectively. A fixed light source is identified in each of the first and second images. A first ground plane is determined in the first image data. A first (second) intersection is determined, wherein the first (second) intersection is a point in the first image where a virtual lamp post corresponding to the fixed light source in the first (second) image intersects with the first (second) ground plane. The first image data and the second image data are transformed to provide a first and second inverse perspective map (IPM) comprising a first transformed intersection and a second transformed intersection, respectively. Movement parameters are determined based on the location of the first transformed intersection in the first IPM and the location of the second transformed intersection in the second IPM.
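The inverse-perspective-map step can be illustrated with a ground-plane homography. This is a minimal sketch, not the patent's method: it assumes a known, purely hypothetical image-to-ground homography `H` and two hypothetical pixel locations for the lamp-post/ground intersection, and measures the displacement between their ground-plane projections as a movement estimate.

```python
import numpy as np

def to_ground(H, pixel):
    """Map an image pixel to ground-plane coordinates via a homography
    (the core operation behind an inverse perspective map)."""
    u, v = pixel
    x, y, w = H @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])   # dehomogenize

# Hypothetical calibration homography (image pixels -> metres on the ground)
H = np.array([[0.02, 0.0, -6.4],
              [0.0, 0.05, -12.0],
              [0.0, 0.001, 1.0]])

p1 = to_ground(H, (320, 400))   # intersection seen in the first image
p2 = to_ground(H, (320, 430))   # same intersection in the second image
displacement = np.linalg.norm(p2 - p1)   # ego-motion on the ground plane
```

The point of working in the IPM rather than in the raw images is visible here: on the ground plane, pixel motion converts directly into metric displacement.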

10 citations

Proceedings ArticleDOI
01 Nov 2016
TL;DR: Advanced Driver Assistance Systems (ADAS) towards autonomous driving require an ego vehicle localization on maps to be able to use the map data for e.g. behavior and trajectory prediction of traffic participants.
Abstract: Advanced Driver Assistance Systems (ADAS) towards autonomous driving require an ego vehicle localization on maps to be able to use the map data for e.g. behavior and trajectory prediction of traffic participants.

9 citations


Cites background from "Multi-sources fusion based vehicle ..."

  • ...Several sensor types can be combined to increase robustness [8], [9]....

    [...]

Patent
24 Jan 2017
TL;DR: In this paper, an image is received that was captured by an image capturing device in communication with a probe apparatus on board a vehicle, wherein the image comprises at least a portion of a roadway.
Abstract: Methods, apparatuses, computer program products, and systems are provided for training a network to act as an overhanging structure detector using an unsupervised machine learning technique. An image is received that was captured by an image capturing device in communication with a probe apparatus on board a vehicle, wherein the image comprises at least a portion of a roadway. A sky projection is generated based on at least a portion of the image. It is determined whether the sky projection comprises a feature that defines a feature direction that is substantially non-vertical. Responsive to determining that the sky projection does comprise a feature that defines a feature direction that is substantially non-vertical, it is determined that the image comprises an overhanging structure.

4 citations

Journal Article
TL;DR: In this article, the small-foreground-moving-object assumption is relaxed, and the observations from the motion field and from image alignment are integrated to provide a robust moving-object detection solution in unconstrained indoor environments.
Abstract: Most moving object detection methods rely on approaches similar to background subtraction or frame differencing, which require the camera to be fixed at a certain position. On mobile robots, however, a background model cannot be maintained because of the camera motion introduced by the robot's motion. To overcome this obstacle, some researchers have proposed methods that use optical flow and stereo vision to detect moving objects from moving platforms. These methods work under the assumption that the areas belonging to the foreground moving objects of interest are small compared with the areas belonging to the uninteresting background scene. In many situations, however, a moving object may closely approach the robot on which the camera is mounted, and the small-foreground assumption is violated. This paper presents a framework showing that the small-foreground-moving-object assumption can be relaxed. Further, it integrates the observations from the motion field and from image alignment to provide a robust moving-object detection solution in unconstrained indoor environments.

4 citations

References
Journal ArticleDOI
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Abstract: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
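The fast nearest-neighbour matching step mentioned above is usually paired with Lowe's distance-ratio test, which keeps a match only when the best candidate is clearly closer than the second best, rejecting ambiguous matches. A minimal sketch with hypothetical 3-D descriptors (real SIFT descriptors are 128-D, and practical systems use approximate nearest-neighbour search rather than this brute-force loop):

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test:
    accept a match only if the best distance is well below the
    second-best distance."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # brute-force distances
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Hypothetical toy descriptors
desc2 = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
desc1 = np.array([[0.9, 0.1, 0.0],    # clearly closest to desc2[0]
                  [0.0, 0.0, 0.95],   # clearly closest to desc2[2]
                  [0.5, 0.5, 0.0]])   # equidistant: rejected by the test
matches = ratio_test_match(desc1, desc2)
```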

46,906 citations

Journal ArticleDOI
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form; these results provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): given an image depicting a set of landmarks with known locations, determine the point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing conditions.
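The hypothesize-and-verify loop at the heart of RANSAC can be sketched on the simplest model, a 2-D line. This is an illustrative toy, not the paper's LDP solver: sample a minimal set (two points), count inliers within a tolerance, keep the largest consensus set, then refit it by least squares.

```python
import random
import numpy as np

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Fit y = a*x + b to data with gross outliers: repeatedly fit a line
    to a random minimal sample and keep the largest consensus set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        if x1 == x2:
            continue                                  # degenerate sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Final least-squares refit on the consensus set only
    xs = np.array([x for x, _ in best_inliers])
    ys = np.array([y for _, y in best_inliers])
    a, b = np.polyfit(xs, ys, 1)
    return a, b, best_inliers

# Points on y = 2x + 1, plus two gross outliers
pts = [(x, 2.0 * x + 1.0) for x in range(10)] + [(3, 40.0), (7, -25.0)]
a, b, inliers = ransac_line(pts)
```

Note the contrast with ordinary least squares, which the paper argues against for contaminated data: fitting all twelve points directly would be dragged far off by the two outliers, while the consensus-set refit ignores them entirely.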

23,396 citations

Journal ArticleDOI
Paul J. Besl, H.D. McKay
TL;DR: In this paper, the authors describe a general-purpose representation-independent method for the accurate and computationally efficient registration of 3D shapes including free-form curves and surfaces, based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point.
Abstract: The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces.
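The ICP loop described above can be sketched in 2-D: alternate nearest-neighbour association with a closed-form (SVD/Kabsch) rigid alignment of the matched sets. This is a minimal point-to-point illustration on a synthetic scan, not the paper's full 3-D, multi-representation formulation; the scan offset is hypothetical.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Point-to-point ICP in 2-D: nearest-neighbour association followed
    by a closed-form (SVD) rigid alignment, repeated to convergence."""
    src = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # Associate each source point with its closest destination point
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Closed-form rigid transform (Kabsch) between the matched sets
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:       # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t            # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src

# Destination "scan" and a slightly rotated/translated copy as the source
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta), np.cos(theta)]])
dst = np.random.default_rng(0).uniform(0, 5, (40, 2))
src = (dst - np.array([0.2, 0.3])) @ R_true.T   # hypothetical offset scan
R, t, aligned = icp_2d(src, dst)
```

As the abstract notes, each iteration cannot increase the mean-square nearest-neighbour distance, which is why the alternation converges to a local minimum; a good initial guess (here, a small offset) keeps that minimum the correct one.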

17,598 citations

Book
01 Jan 2000
TL;DR: In this book, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, covering the geometric principles and how to represent objects algebraically so they can be computed and applied.
Abstract: From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly.

15,558 citations