Author

Sebastian Thrun

Other affiliations: University of Pittsburgh, ETH Zurich, Carnegie Mellon University
Bio: Sebastian Thrun is an academic researcher from Stanford University. The author has contributed to research in the topics Mobile robot and Robot. The author has an h-index of 146 and has co-authored 434 publications receiving 98,124 citations. Previous affiliations of Sebastian Thrun include University of Pittsburgh and ETH Zurich.


Papers
01 Jan 2006
TL;DR: This thesis introduces a set of discrete search algorithms that efficiently repair the current solution when new information about the environment is received, together with an approach that provides better solutions by reasoning about the uncertainty in the agent's initial information.
Abstract: As autonomous agents make the transition from solving simple, well-behaved problems to being useful entities in the real world, they must deal with the added complexity and uncertainty inherent in real environments. In particular, agents navigating through the real world can be confronted with imperfect information (e.g. when prior maps are absent or incomplete), limited deliberation time (e.g. when agents need to act quickly), and dynamic elements (e.g. when there are humans or other agents in the environment). This thesis addresses the problem of path planning and replanning in realistic scenarios involving these three challenges. For single agent planning we present a set of discrete search algorithms that efficiently repair the current solution when new information is received concerning the environment. We also introduce an approach that is able to provide better solutions through reasoning about the uncertainty in the initial information held by the agent. To cope with both imperfect information and limited deliberation time, we provide an additional algorithm that is able to improve the solution while time allows and repair the solution when new information is received. We further show how this algorithm can be used to plan and replan paths in dynamic environments. For multi-agent planning we present a set of sampling-based search algorithms that provide similar behavior to the above approaches but that can handle much higher dimensional search spaces. These sampling-based algorithms extend current approaches to perform efficient repair when new information is received and to provide higher quality solutions given limited deliberation time. We show how our culminating algorithm, which is able to both improve and repair its solution over time, can be used for multi-agent planning and replanning in dynamic environments. Together, the collection of planning algorithms introduced in this thesis enable single agents and multi-agent teams to navigate and coordinate in a wide range of realistic scenarios.
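The plan-execute-repair loop described above can be illustrated with a minimal sketch (Python, assuming a 4-connected grid world and a user-supplied sense callback that updates the map). For simplicity the sketch replans from scratch with A* whenever the map changes; the thesis algorithms instead repair the previous search incrementally, which is what makes them efficient under limited deliberation time.

import heapq

def astar(grid, start, goal):
    """4-connected grid A*; grid[r][c] == 1 marks a blocked cell."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    g, came_from = {start: 0}, {}
    frontier = [(h(start), start)]
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:                                      # reconstruct the path
            path = [node]
            while path[-1] in came_from:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_g = g[node] + 1
                if new_g < g.get(nxt, float("inf")):
                    g[nxt], came_from[nxt] = new_g, node
                    heapq.heappush(frontier, (new_g + h(nxt), nxt))
    return None                                               # goal unreachable

def navigate(grid, start, goal, sense):
    """Follow the current plan one step at a time; repair it whenever sensing changes the map."""
    pos, path = start, astar(grid, start, goal)
    while path is not None and pos != goal:
        if sense(grid, pos):                # sense() mutates grid and reports whether it changed
            path = astar(grid, pos, goal)   # repair step (here: a full replan from the robot's cell)
            if path is None:
                return None                 # the goal has become unreachable
        pos = path[path.index(pos) + 1]     # take one step along the current plan
    return pos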

8 citations

Proceedings ArticleDOI
01 Jan 1996
TL;DR: A selective approach to lifelong learning is described, the task clustering (TC) algorithm, which transfers knowledge across multiple tasks by adjusting the distance metric in nearest neighbour generalization and is more robust than its unselective counterpart.
Abstract: Learning more accurate functions from less data is a key issue in robot learning. This paper investigates robot learning in a lifelong learning framework. In lifelong learning, the learner faces an entire collection of learning tasks, not just a single one, which provides the opportunity for synergy among multiple tasks. To obtain this synergy, the central question in lifelong learning is how the learner can transfer knowledge across multiple tasks. In this paper we describe a selective approach to lifelong learning, the task clustering (TC) algorithm. TC transfers knowledge across multiple tasks by adjusting the distance metric in nearest neighbour generalization. To increase robustness to unrelated tasks, TC arranges all learning tasks hierarchically. When a new learning task arrives, TC relates it to the task hierarchy in order to transfer knowledge selectively from the most related tasks only. As a result, TC is more robust than its unselective counterpart. Thus far, TC has been successfully applied to perception tasks involving visual and ultrasonic input, using our mobile robot XAVIER.
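A compact sketch of the selective-transfer idea (Python with NumPy), not the authors' TC implementation: each task induces a per-feature weight vector for a weighted nearest-neighbour metric, and a new task borrows a metric only from a sufficiently related previous task. The relevance score, the cosine-similarity test and the 0.9 threshold are illustrative assumptions, and the flat task comparison below stands in for the hierarchical task clustering used by TC.

import numpy as np

def feature_weights(X, y):
    """Crude per-feature relevance: |covariance| between each feature and the numeric label."""
    w = np.abs((X - X.mean(axis=0)).T @ (y - y.mean()))
    return w / (w.sum() + 1e-12)

def weighted_1nn(X_train, y_train, x, w):
    """1-nearest-neighbour prediction under the weighted Euclidean metric defined by w."""
    d = np.sqrt(((X_train - x) ** 2 * w).sum(axis=1))
    return y_train[int(np.argmin(d))]

def transfer_weights(new_task, old_tasks, min_similarity=0.9):
    """Borrow the metric of the most related old task; otherwise keep the new task's own."""
    X_new, y_new = new_task
    w_best = w_new = feature_weights(X_new, y_new)
    best_sim = min_similarity
    for X_old, y_old in old_tasks:
        w_old = feature_weights(X_old, y_old)
        sim = w_new @ w_old / (np.linalg.norm(w_new) * np.linalg.norm(w_old) + 1e-12)
        if sim > best_sim:                  # transfer only from sufficiently related tasks
            w_best, best_sim = w_old, sim
    return w_best

A new task with only a few labelled examples would then classify with weighted_1nn using the transferred weights.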

8 citations

01 Jan 2004
TL;DR: The Groundhog robot as mentioned in this paper explored and mapped a main corridor of the abandoned Mathies mine near Courtney, Pennsylvania, using a set of software tools, enabling robots to acquire maps of unprecedented size and accuracy.
Abstract: Abandoned mines pose significant threats to society, yet a large fraction of them lack accurate maps. This article discusses the software architecture of an autonomous robotic system designed to explore and map abandoned mines. A new set of software tools is presented, enabling robots to acquire maps of unprecedented size and accuracy. On 30 May 2003, our robot “Groundhog” successfully explored and mapped a main corridor of the abandoned Mathies mine near Courtney, Pennsylvania. This article also discusses some of the challenges that arise in subterranean environments and some of the difficulties of building truly autonomous robots. In recent years, the quest to find and explore new, unexplored terrain has led to the deployment of more and more sophisticated robotic systems, designed to traverse increasingly remote locations. Robotic systems have successfully explored volcanoes [5], searched for meteorites in Antarctica [1], [44], traversed deserts [3], explored and mapped the sea bed [12], and even explored other planets [26]. This article presents a robot system designed to explore spaces much closer to us: abandoned underground mines. According to a recent survey [6], “tens of thousands, perhaps even hundreds of thousands, of abandoned mines exist today in the United States. Not even the U.S. Bureau of Mines knows the exact number, because federal recording of mining claims was not required until 1976.” Shockingly, we are unaware of the location of many mines, despite the fact that most mines were built just a few generations ago!

8 citations

Book ChapterDOI
01 Jan 2009
TL;DR: A novel method for answering count queries over a large database approximately and quickly; it implements an approximate DataCube of the application domain, which can be used to answer any conjunctive count query that the user can form.
Abstract: We present a novel method for answering count queries from a large database approximately and quickly. Our method implements an approximate DataCube of the application domain, which can be used to answer any conjunctive count query that can be formed by the user. The DataCube is a conceptual device that in principle stores the number of matching records for all possible such queries. However, because its size and generation time are inherently exponential, our approach uses one or more Bayesian networks to implement it approximately. Bayesian networks are statistical graphical models that can succinctly represent the underlying joint probability distribution of the domain, and can therefore be used to calculate approximate counts for any conjunctive query combination of attribute values and “don’t cares.” The structure and parameters of these networks are learned from the database in a preprocessing stage. By means of such a network, the proposed method, called NetCube, exploits correlations and independencies among attributes to answer a count query quickly without accessing the database. Our preprocessing algorithm scales linearly with the size of the database and is thus scalable; it is also parallelizable with a straightforward parallel implementation. We give an algorithm for estimating the count result of arbitrary queries that is fast (constant in the database size). Our experimental results show that NetCubes…
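The core estimate can be illustrated with a toy sketch (Python): a conjunctive count query is approximated as N * P(query), where P is read off a small Bayesian network instead of the database. The chain network A -> B -> C over binary attributes, its conditional probability tables and the record count N are illustrative stand-ins for a model learned in the preprocessing stage, and the brute-force enumeration below replaces proper Bayesian-network inference.

from itertools import product

N = 1_000_000                                                          # number of records the network summarizes
p_a = {1: 0.3, 0: 0.7}                                                 # P(A)
p_b_given_a = {(1, 1): 0.8, (0, 1): 0.2, (1, 0): 0.1, (0, 0): 0.9}     # key (b, a): P(B=b | A=a)
p_c_given_b = {(1, 1): 0.6, (0, 1): 0.4, (1, 0): 0.25, (0, 0): 0.75}   # key (c, b): P(C=c | B=b)

def joint(a, b, c):
    """P(A=a, B=b, C=c) under the chain factorization P(A) P(B|A) P(C|B)."""
    return p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]

def approx_count(query):
    """query maps attribute name -> required value; attributes left out are 'don't cares'."""
    prob = 0.0
    for a, b, c in product((0, 1), repeat=3):          # sum out the unconstrained attributes
        assignment = {"A": a, "B": b, "C": c}
        if all(assignment[k] == v for k, v in query.items()):
            prob += joint(a, b, c)
    return round(N * prob)

print(approx_count({"A": 1, "C": 1}))                  # estimated number of records with A=1 AND C=1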

8 citations

Book ChapterDOI
01 Jan 2003
TL;DR: The core of the system is a real-time version of the popular expectation maximization algorithm, developed for extracting planar surfaces from sets of range scans; the maps it generates consist of a small number of planar rectangular surfaces, augmented by fine-grained polygons for non-flat environmental features.
Abstract: This paper summarizes recent research on developing autonomous robot systems that can acquire volumetric 3D maps with mobile robots in real time. The core of our system is a real-time version of the popular expectation maximization algorithm, developed for extracting planar surfaces from sets of range scans (Martin, Thrun, 2002). Maps generated by this algorithm consist of a small number of planar rectangular surfaces, which are augmented by fine-grained polygons for non-flat environmental features. Experimental results obtained in a corridor-type environment illustrate that compact and accurate maps can be acquired in real time from range and camera data.
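A minimal "hard-EM" sketch of the assign-then-refit loop behind this kind of surface extraction (Python with NumPy); it is not the authors' real-time implementation, and the number of planes k, the iteration count and the random initialization are arbitrary assumptions. In the full system, points belonging to non-flat structure are kept as fine-grained polygons rather than forced onto a plane.

import numpy as np

def fit_plane(points):
    """Least-squares plane through the points: returns (unit normal n, offset d) with n . x = d."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                                    # direction of smallest variance = plane normal
    return n, float(n @ centroid)

def em_planes(points, k=3, iters=20, seed=0):
    """Alternate hard assignment of points to planes (E-step) with least-squares refitting (M-step)."""
    rng = np.random.default_rng(seed)
    planes = [fit_plane(points[rng.choice(len(points), 3, replace=False)]) for _ in range(k)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # E-step (hard): assign each point to the plane with the smallest point-to-plane distance
        dists = np.stack([np.abs(points @ n - d) for n, d in planes], axis=1)
        labels = dists.argmin(axis=1)
        # M-step: refit every plane to the points currently assigned to it
        for j in range(k):
            members = points[labels == j]
            if len(members) >= 3:
                planes[j] = fit_plane(members)
    return planes, labels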

8 citations


Cited by
Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
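As a toy illustration of the "hierarchy of concepts" idea, the sketch below (Python with NumPy) runs a forward pass through a small feedforward network in which each layer builds on the features computed by the one before it. The layer sizes, ReLU activation and random weights are arbitrary assumptions, not anything prescribed by the book.

import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [784, 256, 64, 10]             # input -> two hidden layers -> output
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Each layer composes the simpler features computed by the layer before it."""
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ w + b)       # ReLU hidden layers
    return h @ weights[-1] + biases[-1]      # linear output layer (logits)

print(forward(rng.normal(size=784)).shape)   # (10,)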

38,208 citations

Journal ArticleDOI
TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
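The three-level generative process described above can be sketched directly (Python with NumPy): draw per-document topic proportions theta from a Dirichlet, then for each word draw a topic from theta and a word from that topic's distribution. The toy vocabulary, the Dirichlet parameter alpha, the topic-word matrix beta and the document length are illustrative assumptions; in the paper beta is estimated from data, and inference over theta and the topic assignments is carried out with variational methods rather than by forward sampling.

import numpy as np

rng = np.random.default_rng(0)
vocab = ["gene", "cell", "robot", "sensor", "ball", "team"]
alpha = np.array([0.5, 0.5, 0.5])                  # Dirichlet prior over 3 topics
beta = np.array([                                  # beta[k][v] = P(word v | topic k)
    [0.45, 0.45, 0.02, 0.02, 0.03, 0.03],          # a "biology" topic
    [0.02, 0.03, 0.45, 0.45, 0.02, 0.03],          # a "robotics" topic
    [0.02, 0.02, 0.03, 0.03, 0.45, 0.45],          # a "sports" topic
])

def generate_document(n_words=12):
    theta = rng.dirichlet(alpha)                          # per-document topic proportions
    words = []
    for _ in range(n_words):
        z = rng.choice(len(alpha), p=theta)               # sample a topic for this word
        words.append(str(rng.choice(vocab, p=beta[z])))   # sample the word from that topic
    return words

print(" ".join(generate_document()))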

30,570 citations

Proceedings Article
03 Jan 2001
TL;DR: This paper proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models, including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI).
Abstract: We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams [6], and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI) [3]. In the context of text modeling, our model posits that each document is generated as a mixture of topics, where the continuous-valued mixture proportions are distributed as a latent Dirichlet random variable. Inference and learning are carried out efficiently via variational algorithms. We present empirical results on applications of this model to problems in text modeling, collaborative filtering, and text classification.

25,546 citations

Book
25 Oct 1999
TL;DR: This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining.
Abstract: Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques of today as well as methods at the leading edge of contemporary research.
* Provides a thorough grounding in machine learning concepts as well as practical advice on applying the tools and techniques to your data mining projects.
* Offers concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods.
* Includes the downloadable Weka software toolkit, a collection of machine learning algorithms for data mining tasks, in an updated, interactive interface. Algorithms in the toolkit cover data pre-processing, classification, regression, clustering, association rules, and visualization.

20,196 citations

28 Jul 2005
TL;DR: PfPMP1 interacts with one or more receptors on infected red blood cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion.
Abstract: Antigenic variation enables many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte surface protein 1 (PfPMP1), expressed on the surface of infected red blood cells, interacts with one or more receptors on infected red blood cells, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion. The var gene family encodes about 60 members per haploid genome, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations