Sebastian Thrun
Researcher at Stanford University
Publications - 437
Citations - 108035
Sebastian Thrun is an academic researcher at Stanford University who has contributed to research topics including mobile robots and robotics. He has an h-index of 146 and has co-authored 434 publications receiving 98,124 citations. Previous affiliations of Sebastian Thrun include the University of Pittsburgh and ETH Zurich.
Papers
Proceedings ArticleDOI
A campaign in autonomous mine mapping
Christopher R. Baker, A. Morris, Dave Ferguson, Scott M. Thayer, Chuck Whittaker, Zachary Omohundro, Carlos F. Reverte, William Whittaker, Dirk Hähnel, Sebastian Thrun, +9 more
TL;DR: The mechanisms, algorithms, and analysis tools that enable autonomous mine exploration and mapping are presented, along with extensive experimental results from eight successful deployments into the abandoned Mathies coal mine near Pittsburgh, PA.
Proceedings ArticleDOI
Active mobile robot localization by entropy minimization
TL;DR: The approach provides rational criteria for setting the robot's motion direction (exploration) and for determining the pointing direction of its sensors so as to localize the robot most efficiently.
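The core idea of entropy-minimizing active localization can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal discrete analog in which the robot greedily picks the sensing direction whose expected posterior belief entropy is lowest. All states, observation models, and probabilities below are hypothetical.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete belief, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_entropy_after_sensing(belief, likelihoods):
    """Expected posterior entropy of the belief after one sensor reading.

    likelihoods[z, x] = P(z | x) for each possible observation z and state x.
    """
    exp_h = 0.0
    for lz in likelihoods:
        unnorm = lz * belief           # P(z | x) * bel(x)
        pz = unnorm.sum()              # marginal P(z) under the current belief
        if pz > 0:
            exp_h += pz * entropy(unnorm / pz)
    return exp_h

# Hypothetical world with 4 states and two candidate sensing directions:
# direction A discriminates state 0 from the rest, direction B is uninformative.
belief = np.array([0.25, 0.25, 0.25, 0.25])
sense_A = np.array([[0.9, 0.1, 0.1, 0.1],   # P(z=0 | x)
                    [0.1, 0.9, 0.9, 0.9]])  # P(z=1 | x)
sense_B = np.array([[0.5, 0.5, 0.5, 0.5],
                    [0.5, 0.5, 0.5, 0.5]])

h_A = expected_entropy_after_sensing(belief, sense_A)
h_B = expected_entropy_after_sensing(belief, sense_B)
# Greedy rule: point the sensor in the direction with lower expected entropy.
best = "A" if h_A < h_B else "B"
```

Direction A wins here because its readings actually shift the belief, while B leaves the belief uniform at the maximum entropy of log 4.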
Journal ArticleDOI
Reverse Optical Flow for Self-Supervised Adaptive Autonomous Robot Navigation
TL;DR: Improvements are demonstrated by augmenting an existing self-supervised image segmentation procedure with an additional supervisory input that provides representations of the region of interest at multiple scales, allowing the robot to better determine where more examples of that region's class appear in the image.
Proceedings ArticleDOI
Probabilistic mapping of an environment by a mobile robot
TL;DR: A statistical approach is proposed that casts the map-building problem as a constrained maximum-likelihood estimation problem, for which a practical algorithm is devised; experimental results illustrate the appropriateness of the approach.
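To see why maximum-likelihood estimation is a natural frame for map building, consider a deliberately tiny 1-D analog (not the paper's algorithm): under Gaussian noise assumptions, the ML estimate of the robot's poses given odometry and landmark constraints reduces to a least-squares problem. All measurements below are hypothetical numbers.

```python
import numpy as np

# Unknowns: poses x1, x2 (x0 is fixed at 0 to anchor the map).
# Hypothetical constraints:
#   odometry:  x1 - x0 = 1.1,   x2 - x1 = 0.9
#   a landmark at known position l = 2.0, observed from x2 at range 0.05,
#   giving l - x2 = 0.05, i.e. -x2 = 0.05 - 2.0
A = np.array([[1.0, 0.0],    # x1 - x0
              [-1.0, 1.0],   # x2 - x1
              [0.0, -1.0]])  # -x2  (landmark position moved to the right side)
b = np.array([1.1, 0.9, 0.05 - 2.0])

# Least-squares solution = maximum-likelihood poses under unit Gaussian noise.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The solver balances the slightly inconsistent constraints (odometry alone would put x2 at 2.0; the landmark suggests 1.95), which is exactly the reconciliation a probabilistic mapper performs at scale.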
ReportDOI
Using EM to Classify Text from Labeled and Unlabeled Documents
TL;DR: It is shown that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents; the use of unlabeled data reduces classification error by up to 30%.
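The technique combines naive Bayes with EM: fit a classifier on the labeled documents, use it to soft-label the unlabeled pool (E-step), then refit on everything (M-step), and repeat. Below is a minimal from-scratch sketch of that loop, not the paper's implementation; the word-count matrices and labels are toy, hypothetical data.

```python
import numpy as np

def nb_fit(X, R, alpha=1.0):
    """Multinomial naive Bayes from (soft) class responsibilities R (n_docs, n_classes)."""
    prior = R.sum(axis=0) + alpha
    prior /= prior.sum()
    counts = R.T @ X + alpha                       # per-class word counts, smoothed
    cond = counts / counts.sum(axis=1, keepdims=True)
    return np.log(prior), np.log(cond)

def nb_posterior(X, log_prior, log_cond):
    """P(class | doc) for each row of word-count matrix X."""
    log_joint = X @ log_cond.T + log_prior
    log_joint -= log_joint.max(axis=1, keepdims=True)
    p = np.exp(log_joint)
    return p / p.sum(axis=1, keepdims=True)

def em_semisupervised(X_l, y_l, X_u, n_classes=2, n_iter=10):
    R_l = np.eye(n_classes)[y_l]                   # hard labels for labeled docs
    lp, lc = nb_fit(X_l, R_l)                      # initialize from labeled data only
    X_all = np.vstack([X_l, X_u])
    for _ in range(n_iter):
        R_u = nb_posterior(X_u, lp, lc)            # E-step: soft-label unlabeled docs
        lp, lc = nb_fit(X_all, np.vstack([R_l, R_u]))  # M-step: refit on everything
    return lp, lc

# Toy corpus (hypothetical): class 0 uses words {0,1}, class 1 uses words {2,3}.
X_l = np.array([[3, 2, 0, 0], [0, 0, 2, 3]], dtype=float)
y_l = np.array([0, 1])
X_u = np.array([[4, 1, 0, 0], [0, 0, 1, 4], [2, 3, 0, 0]], dtype=float)

lp, lc = em_semisupervised(X_l, y_l, X_u)
pred = nb_posterior(X_u, lp, lc).argmax(axis=1)
```

Initializing from the labeled documents anchors the class identities, so the soft labels produced by EM sharpen the word distributions rather than relabeling the classes.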