Author

Sebastian Thrun

Other affiliations: University of Pittsburgh, ETH Zurich, Carnegie Mellon University
Bio: Sebastian Thrun is an academic researcher from Stanford University. The author has contributed to research in topics: Mobile robot & Robot. The author has an h-index of 146 and has co-authored 434 publications receiving 98,124 citations. Previous affiliations of Sebastian Thrun include University of Pittsburgh & ETH Zurich.


Papers
Proceedings ArticleDOI
06 Nov 2014
TL;DR: This paper presents an automatic method for the synchronization and calibration of RGBD and thermal cameras in arbitrary environments and demonstrates that it consistently converges to the correct transform and results in high-quality RGBDT data.
Abstract: Reasoning about a scene's thermal signature, in addition to its visual appearance and spatial configuration, would facilitate significant advances in perceptual systems. Applications involving the segmentation and tracking of persons, vehicles, and other heat-emitting objects, for example, could benefit tremendously from even coarsely accurate relative temperatures. With the increasing affordability of commercially available thermal cameras, as well as the imminent introduction of new, mobile form factors, such data will be readily and widely accessible. However, in order for thermal processing to complement existing methods in RGBD, there must be an effective procedure for calibrating RGBD and thermal cameras to create RGBDT (red, green, blue, depth, and thermal) data. In this paper, we present an automatic method for the synchronization and calibration of RGBD and thermal cameras in arbitrary environments. While traditional calibration methods fail in our multimodal setting, we leverage invariant features visible by both camera types. We first synchronize the streams with a simple optimization procedure that aligns their motion statistic time series. We then find the relative poses of the cameras by minimizing an objective that measures the alignment between edge maps from the two streams. In contrast to existing methods that use special calibration targets with key points visible to both cameras, our method requires nothing more than some edges visible to both cameras, such as those arising from humans. We evaluate our method and demonstrate that it consistently converges to the correct transform and that it results in high-quality RGBDT data.
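The abstract does not spell out the synchronization step; as a rough, hypothetical sketch (all function names, parameters, and the use of normalized correlation below are assumptions for illustration, not the paper's actual procedure), a per-frame motion statistic can be aligned across the two streams by searching for the lag that maximizes their correlation:

```python
import numpy as np

def motion_statistic(frames):
    """Mean absolute difference between consecutive frames: one scalar per frame pair."""
    frames = np.asarray(frames, dtype=np.float64)   # shape (T, H, W)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def estimate_offset(stat_a, stat_b, max_lag=100):
    """Find the integer frame lag that best aligns two motion-statistic time series."""
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = stat_a[lag:], stat_b
        else:
            a, b = stat_a, stat_b[-lag:]
        n = min(len(a), len(b))
        if n < 2:
            continue
        # normalized correlation is insensitive to the different gains of the two sensors
        score = np.corrcoef(a[:n], b[:n])[0, 1]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```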

16 citations

Book ChapterDOI
01 Jan 2009
TL;DR: The notion of motion evidence is presented, which allows the algorithm to overcome the low signal-to-noise ratio that arises during rapid detection of moving vehicles in noisy urban environments.
Abstract: Fast detection of moving vehicles is crucial for safe autonomous urban driving. We present the vehicle detection algorithm developed for our entry in the Urban Grand Challenge, an autonomous driving race organized by the U.S. Government in 2007. The algorithm provides reliable detection of moving vehicles from a high-speed moving platform using laser range finders. We present the notion of motion evidence, which allows us to overcome the low signal-to-noise ratio that arises during rapid detection of moving vehicles in noisy urban environments. We also present and evaluate an array of optimization techniques that enable accurate detection in real time. Experimental results show empirical validation on data from the most challenging situations presented at the Urban Grand Challenge as well as other urban settings.
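As a loose illustration of the motion-evidence idea (not the paper's actual algorithm; the thresholds, names, and noise model are assumptions), a tracked object could be declared moving only once its net displacement clearly exceeds the expected position noise:

```python
import numpy as np

def is_moving(track_xy, track_t, noise_std=0.3, min_sigmas=3.0, min_speed=1.0):
    """
    Toy 'motion evidence' test: a track counts as a moving object only if its
    net displacement exceeds several standard deviations of position noise
    AND implies a plausible speed. Thresholds are invented for illustration.
    """
    xy = np.asarray(track_xy, dtype=float)   # (T, 2) tracked positions in metres
    t = np.asarray(track_t, dtype=float)     # (T,) timestamps in seconds
    displacement = np.linalg.norm(xy[-1] - xy[0])
    elapsed = max(t[-1] - t[0], 1e-6)
    evidence = displacement / noise_std      # displacement measured in noise sigmas
    return evidence > min_sigmas and displacement / elapsed > min_speed
```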

15 citations

Proceedings ArticleDOI
18 Apr 2005
TL;DR: The ability of activity-based models to improve the performance of an object motion tracker as well as their applicability to global registration of video sequences is demonstrated.
Abstract: We present a method for learning activity-based ground models based on a multiple particle filter approach to motion tracking in video acquired from a moving aerial platform. Such models offer a number of potential benefits. In this paper we demonstrate the ability of activity-based models to improve the performance of an object motion tracker as well as their applicability to global registration of video sequences.
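The abstract gives no implementation detail; the following is a generic bootstrap particle-filter step for 2-D position tracking, included only to illustrate the kind of tracker the paper builds on (all names and noise parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement, motion_std=2.0, meas_std=5.0):
    """
    One predict/update/resample cycle of a basic bootstrap particle filter
    for 2-D position tracking. particles: (N, 2) array, weights: (N,) array.
    """
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(scale=motion_std, size=particles.shape)
    # Update: weight each particle by the Gaussian likelihood of the observed position.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum() + 1e-12
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```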

15 citations

Patent
01 Jun 2010
TL;DR: A Markov Random Field (MRF) model is defined for estimating the number of mobile devices being used within a geographic area, and the estimated population density can then be used to provide location-based services.
Abstract: The population density for a geographic area is predicted using a Markov Random Field (MRF) model. A MRF model is defined for estimating the number of mobile devices being used within a geographic area. The MRF model includes a set of rules describing how to combine current data describing mobile devices currently observed in the area with historical data describing mobile devices historically observed in the area to produce the estimate. Values of weight parameters in the MRF model are learned using the historical data. The current and historical data are applied to the MRF model having the learned weight parameters, and cost minimization is used to estimate the number of mobile devices currently being used within the area. This estimate is used to predict the population density for the area. The predicted population density can then be used to provide location-based services.
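As a hypothetical, simplified illustration of the cost-minimization idea (a hand-weighted quadratic cost on a grid, not the patent's learned MRF), per-cell device counts can be estimated by trading off agreement with current observations, agreement with historical counts, and spatial smoothness:

```python
import numpy as np

def estimate_density(current, historical, w_data=1.0, w_hist=0.5, w_smooth=0.2,
                     iters=500, lr=0.1):
    """
    Toy grid-MRF-style estimate of device counts per cell (2-D arrays).
    Cost = w_data * ||x - current||^2 + w_hist * ||x - historical||^2
         + w_smooth * sum of squared differences between 4-neighbour cells.
    Minimized by plain gradient descent. Weights are fixed by hand here;
    in a real system they would be learned from historical data.
    """
    cur = np.asarray(current, dtype=float)
    hist = np.asarray(historical, dtype=float)
    x = cur.copy()
    for _ in range(iters):
        grad = 2 * w_data * (x - cur) + 2 * w_hist * (x - hist)
        # smoothness gradient: each cell is pulled toward its grid neighbours
        grad[:-1, :] += 2 * w_smooth * (x[:-1, :] - x[1:, :])
        grad[1:, :]  += 2 * w_smooth * (x[1:, :] - x[:-1, :])
        grad[:, :-1] += 2 * w_smooth * (x[:, :-1] - x[:, 1:])
        grad[:, 1:]  += 2 * w_smooth * (x[:, 1:] - x[:, :-1])
        x -= lr * grad
    return x
```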

15 citations

01 Jan 2002
TL;DR: The Sum-of-Gaussian (SOG) method is used to approximate more general (arbitrary) probability distributions, which permits the generalizations made possible by Monte-Carlo methods, while inheriting the real-time computational advantages of the Kalman filter.
Abstract: This paper describes a Bayesian formulation of the Simultaneous Localisation and Mapping (SLAM) problem. Previously, the SLAM problem could only be solved in real time through the use of the Kalman Filter. This generally restricts the application of SLAM methods to domains with straightforward (analytic) environment and sensor models. In this paper the Sum-of-Gaussian (SOG) method is used to approximate more general (arbitrary) probability distributions. This representation permits the generalizations made possible by Monte-Carlo methods, while inheriting the real-time computational advantages of the Kalman filter. The method is demonstrated by its application to sub-sea field data. The sub-sea data consists of both sonar and visual information of near-field landmarks. This is a particularly challenging problem incorporating diverse sensing modalities, amorphous environment features, and poorly known vehicle dynamics, none of which can be easily handled by Kalman filter-based SLAM algorithms.
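A minimal 1-D sketch of a Gaussian-sum measurement update conveys the idea: each mixture component receives a standard Kalman update and is re-weighted by how well it predicted the measurement. This is an illustrative simplification, not the paper's multi-landmark formulation:

```python
import numpy as np

def sog_update(weights, means, variances, z, meas_var):
    """
    Measurement update for a 1-D sum-of-Gaussians belief.
    Each component gets a standard Kalman update; component weights are
    rescaled by how well that component predicted the measurement z.
    """
    weights = np.asarray(weights, dtype=float).copy()
    means = np.asarray(means, dtype=float).copy()
    variances = np.asarray(variances, dtype=float).copy()

    for i in range(len(weights)):
        s = variances[i] + meas_var                 # innovation variance
        k = variances[i] / s                        # Kalman gain
        innovation = z - means[i]
        # likelihood of z under this component, evaluated before the update
        lik = np.exp(-0.5 * innovation ** 2 / s) / np.sqrt(2 * np.pi * s)
        means[i] += k * innovation
        variances[i] *= (1 - k)
        weights[i] *= lik

    weights /= weights.sum()                        # renormalize mixture weights
    return weights, means, variances
```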

14 citations


Cited by
Book
18 Nov 2016
TL;DR: Deep learning, as presented in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
TL;DR: This work proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model.
Abstract: We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
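For a concrete sense of the document-topic mixtures the abstract describes, scikit-learn ships a variational-inference implementation of LDA (this is not the authors' original code, and the toy corpus is invented purely for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the robot navigates the corridor using laser range data",
    "topic models describe documents as mixtures of topics",
    "the vehicle detects moving cars with laser range finders",
    "variational inference approximates the posterior over topics",
]

# Bag-of-words counts: the discrete data LDA is defined over.
counts = CountVectorizer(stop_words="english").fit_transform(docs)

# Two topics, fit with scikit-learn's variational Bayes implementation.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Per-document topic proportions: each row is that document's mixture over topics.
print(lda.transform(counts))
```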

30,570 citations

Proceedings Article
03 Jan 2001
TL;DR: This paper proposes a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams, and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI).
Abstract: We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams [6], and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI) [3]. In the context of text modeling, our model posits that each document is generated as a mixture of topics, where the continuous-valued mixture proportions are distributed as a latent Dirichlet random variable. Inference and learning are carried out efficiently via variational algorithms. We present empirical results on applications of this model to problems in text modeling, collaborative filtering, and text classification.

25,546 citations

Book
25 Oct 1999
TL;DR: This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining.
Abstract: Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining. Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors. Witten, Frank, and Hall include both tried-and-true techniques of today as well as methods at the leading edge of contemporary research. * Provides a thorough grounding in machine learning concepts as well as practical advice on applying the tools and techniques to your data mining projects * Offers concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods * Includes the downloadable Weka software toolkit, a collection of machine learning algorithms for data mining tasks, in an updated, interactive interface. Algorithms in the toolkit cover: data pre-processing, classification, regression, clustering, association rules, visualization

20,196 citations

28 Jul 2005
TL;DR: Plasmodium falciparum erythrocyte membrane protein 1 (PfPMP1) interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion.

Abstract: Antigenic variation allows many pathogenic microorganisms to evade the host immune response. Plasmodium falciparum erythrocyte membrane protein 1 (PfPMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion. The var gene family, encoding about 60 members per haploid genome, provides the molecular basis for antigenic variation through transcriptional switching among different var gene variants.

18,940 citations