Showing papers by "Paul Newman" published in 2017


Journal ArticleDOI
TL;DR: Collected by traversing the same route frequently over the period of a year, this dataset enables research into long-term localization and mapping for autonomous vehicles in real-world, dynamic urban environments.
Abstract: We present a challenging new dataset for autonomous driving: the Oxford RobotCar Dataset. Over the period of May 2014 to December 2015 we traversed a route through central Oxford twice a week on average…

1,285 citations


Journal ArticleDOI
TL;DR: A set of five ethical principles, together with seven high-level messages, is proposed as a basis for responsible robotics.
Abstract: This paper proposes a set of five ethical principles, together with seven high-level messages, as a basis for responsible robotics. The Principles of Robotics were drafted in 2010 and published online in 2011. Since then the principles have influenced, and continue to influence, a number of initiatives in robot ethics but have not, to date, been formally published. This paper remedies that omission.

125 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: This work proposes a direct monocular SLAM algorithm based on the Normalised Information Distance (NID) metric, which provides comparable localisation accuracy to state-of-the-art photometric methods but significantly outperforms both direct and feature-based methods in robustness to appearance changes.
Abstract: We propose a direct monocular SLAM algorithm based on the Normalised Information Distance (NID) metric. In contrast to current state-of-the-art direct methods based on photometric error minimisation, our information-theoretic NID metric provides robustness to appearance variation due to lighting, weather and structural changes in the scene. We demonstrate successful localisation and mapping across changes in lighting with a synthetic indoor scene, and across changes in weather (direct sun, rain, snow) using real-world data collected from a vehicle-mounted camera. Our approach runs in real-time on a consumer GPU using OpenGL, and provides comparable localisation accuracy to state-of-the-art photometric methods but significantly outperforms both direct and feature-based methods in robustness to appearance changes.
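
For intuition, the NID between two aligned image patches is NID(X,Y) = (H(X,Y) - I(X;Y)) / H(X,Y): a true metric on [0,1] computed from the joint intensity histogram which, unlike raw photometric error, is insensitive to changes in illumination. The following is a minimal NumPy sketch of the metric itself, not the paper's real-time OpenGL/GPU pipeline; the 8-bit intensity range and bin count are illustrative assumptions.

    import numpy as np

    def nid(ref_patch, live_patch, bins=32):
        # Joint intensity histogram of the two patches (8-bit range assumed).
        joint, _, _ = np.histogram2d(ref_patch.ravel(), live_patch.ravel(),
                                     bins=bins, range=[[0, 256], [0, 256]])
        p_xy = joint / joint.sum()                     # joint distribution
        p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)  # marginals

        def entropy(p):
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        h_xy = entropy(p_xy.ravel())
        mi = entropy(p_x) + entropy(p_y) - h_xy        # mutual information
        return (h_xy - mi) / h_xy if h_xy > 0 else 0.0 # NID in [0, 1]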

51 citations


Posted Content
TL;DR: A novel method for fitting multiple geometric models to multi-structural data via convex relaxation, whose energy minimisation runs as much as two orders of magnitude faster on comparable architectures, bringing real-time, robust performance to a wider set of geometric multi-model fitting problems.
Abstract: We propose a novel method to fit and segment multi-structural data via convex relaxation. Unlike greedy methods --which maximise the number of inliers-- this approach efficiently searches for a soft assignment of points to models by minimising the energy of the overall classification. Our approach is similar to state-of-the-art energy minimisation techniques which use a global energy. However, we deal with the scaling factor (as the number of models increases) of the original combinatorial problem by relaxing the solution. This relaxation brings two advantages: first, by operating in the continuous domain we can parallelize the calculations. Second, it allows for the use of different metrics which results in a more general formulation. We demonstrate the versatility of our technique on two different problems of estimating structure from images: plane extraction from RGB-D data and homography estimation from pairs of images. In both cases, we report accurate results on publicly available datasets, in most of the cases outperforming the state-of-the-art.
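
As a toy illustration of the relaxation: replacing the hard point-to-model assignment with a probability vector per point turns the combinatorial search into a continuous problem that decomposes across points. The sketch below uses an entropy regulariser, which yields a closed-form softmax solution; this is an assumed stand-in for the paper's actual energy and solver, chosen because it makes the soft assignment and the parallelism explicit.

    import numpy as np

    def soft_assign(residuals, lam=0.5):
        # residuals[p, m]: fitting error of point p against candidate model m.
        # Minimise  sum_pm u_pm * r_pm + lam * sum_pm u_pm * log(u_pm)
        # subject to each row u_p lying on the probability simplex.
        # The entropy term makes the minimiser a per-row softmax, so the
        # computation is embarrassingly parallel across points.
        z = -residuals / lam
        z -= z.max(axis=1, keepdims=True)        # numerical stability
        u = np.exp(z)
        return u / u.sum(axis=1, keepdims=True)  # soft point-to-model weights

    # Four points scored against two candidate planes.
    r = np.array([[0.1, 2.0], [1.8, 0.2], [0.5, 0.6], [3.0, 0.1]])
    print(soft_assign(r))  # rows sum to 1; lower residual -> higher weight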

19 citations


Proceedings ArticleDOI
01 May 2017
TL;DR: This paper demonstrates distraction suppression as a front-end process to a large-scale localiser, incrementally adding 50 km of error data to the base map, and shows that robustness improves over the base system on a further 10 km of urban driving.
Abstract: This paper addresses a difficulty in large-scale, long-term laser localisation: how to deal with scene change. We pose this as a distraction suppression problem. Urban driving environments are frequently subject to large dynamic outliers, such as buses and trucks. These objects can mask the static elements of the prior map that we rely on for localisation. At the same time, some objects change shape in a way that is less dramatic but equally pernicious for localisation: for example, trees across seasons and in wind, shop fronts and doorways. In this paper, we show how to learn, in high resolution and in a place-dependent way, the areas of our map that are subject to such distractions (low-value data). We demonstrate how to use this model to select individual laser measurements for localisation. Specifically, by leveraging repeated operation over weeks and months, for each point in our map pointcloud we build distributions of the errors associated with that point over multiple localisation passes. These distributions are then used to judge the legitimacy of laser measurements before they are used in localisation. We demonstrate distraction suppression as a front-end process to a large-scale localiser by incrementally adding 50 km of error data to our base map, and show that robustness improves over the base system on a further 10 km of urban driving.
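
A minimal sketch of the idea, with illustrative (assumed) thresholds rather than the paper's tuned values: accumulate per-map-point error statistics across localisation passes with an online estimator, then gate out measurements that associate to untrustworthy points before they reach the localiser.

    import numpy as np

    class PointErrorModel:
        def __init__(self, n_points):
            self.n = np.zeros(n_points)     # localisation passes seen per point
            self.mean = np.zeros(n_points)  # running mean error per point
            self.m2 = np.zeros(n_points)    # running sum of squared deviations

        def add_pass(self, point_ids, errors):
            # Fold one localisation pass into the running statistics
            # (Welford's online algorithm).
            for i, e in zip(point_ids, errors):
                self.n[i] += 1
                d = e - self.mean[i]
                self.mean[i] += d / self.n[i]
                self.m2[i] += d * (e - self.mean[i])

        def trusted(self, max_mean=0.2, max_var=0.05, min_obs=3):
            # Keep only points that are well observed, accurate on average,
            # and stable; measurements matched to the rest are suppressed.
            var = np.where(self.n > 1,
                           self.m2 / np.maximum(self.n - 1, 1), np.inf)
            return (self.n >= min_obs) & (self.mean < max_mean) & (var < max_var)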

10 citations


Journal Article
TL;DR: This paper presents a mercantile framework for the decentralised sharing of navigation expertise amongst a fleet of robots that perform regular missions into a common but variable environment, and suggests some obligatory properties that a formalisation of the distributed versioning of experience maps should exhibit.
Abstract: This paper presents a mercantile framework for the decentralised sharing of navigation expertise amongst a fleet of robots which perform regular missions into a common but variable environment. We build on our earlier work and allow individual agents to intermittently initiate trades based on a real-time assessment of the nature of their missions or demand for localisation capability, and to choose trading partners with discrimination based on an internally evolving set of beliefs in the expected value of trading with each other member of the team. To this end, we suggest some obligatory properties that a formalisation of the distributed versioning of experience maps should exhibit, to ensure the eventual convergence of the state of each agent's map under a sequence of pairwise exchanges, as well as the uninterrupted integrity of the representation under versioning operations. To mitigate limitations in hardware and network resources, the "data market" is catalogued by distinct sections of the world, which the agents treat as "products" for appraisal and purchase. We demonstrate and evaluate our system using the publicly available Oxford RobotCar Dataset and a hand-labelled data-market catalogue (approaching 446 km of fully indexed sections-of-interest), which we plan to release alongside the existing raw stereo imagery. We show that, by refining market policies over time, agents achieve improved localisation in a directed and accelerated manner.
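
The convergence requirement can be made concrete with a small sketch: if merging two agents' catalogues is commutative, associative and idempotent, then any sequence of pairwise trades that eventually touches every pair drives all agents to the same map state. The keyed-union merge below (section identifiers as the market's "products", values as version/payload pairs) is an assumed illustration of that property, not the paper's formalisation.

    def merge(catalogue_a, catalogue_b):
        # Keyed union keeping the newest version of each map section.
        # Commutative, associative and idempotent, so repeated pairwise
        # exchanges converge regardless of trade order.
        merged = dict(catalogue_a)
        for section, (version, payload) in catalogue_b.items():
            if section not in merged or merged[section][0] < version:
                merged[section] = (version, payload)
        return merged

    # Two agents trade; the fixed point is independent of exchange order.
    a = {"high_st": (3, "sections_v3"), "parks_rd": (1, "sections_v1")}
    b = {"high_st": (2, "sections_v2"), "banbury_rd": (4, "sections_v4")}
    assert merge(a, b) == merge(b, a)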

5 citations