Author

Philip Fong

Bio: Philip Fong is an academic researcher from Stanford University. He has contributed to research on topics including background noise and the random walker algorithm. He has an h-index of 6 and has co-authored 7 publications receiving 2,098 citations. Previous affiliations of Philip Fong include Lawrence Livermore National Laboratory.

Papers
Journal ArticleDOI
TL;DR: The robot Stanley, which won the 2005 DARPA Grand Challenge, was developed for high-speed desert driving without human intervention and relied predominantly on state-of-the-art artificial intelligence technologies, such as machine learning and probabilistic reasoning.
Abstract: This article describes the robot Stanley, which won the 2005 DARPA Grand Challenge. Stanley was developed for high-speed desert driving without human intervention. The robot's software system relied predominantly on state-of-the-art AI technologies, such as machine learning and probabilistic reasoning. The article presents the major components of this architecture and discusses the results of the Grand Challenge race.

2,011 citations

Journal IssueDOI
TL;DR: The robot Stanley, which won the 2005 DARPA Grand Challenge, was developed for high-speed desert driving without manual intervention using state-of-the-art artificial intelligence technologies, such as machine learning and probabilistic reasoning.
Abstract: This article describes the robot Stanley, which won the 2005 DARPA Grand Challenge. Stanley was developed for high-speed desert driving without manual intervention. The robot's software system relied predominantly on state-of-the-art artificial intelligence technologies, such as machine learning and probabilistic reasoning. This paper describes the major components of this architecture and discusses the results of the Grand Challenge race. © 2006 Wiley Periodicals, Inc.

306 citations

Journal ArticleDOI
Philip Fong
TL;DR: An automated system builds models of elastic deformable objects and renders them in an interactive virtual environment; haptic model acquisition is enabled through active probing with a haptic device.
Abstract: In this paper we describe the design and implementation of an automated system to build models of elastic deformable objects and to render these models in an interactive virtual environment. By automating model creation, a greater number of models and models that are more realistic can be included in these interactive simulations. Model geometry is acquired by a novel range sensor. Techniques for contact detection, slip detection on deformable surfaces, and enhanced open-loop force control enable haptic model acquisition through active probing with a haptic device. For model rendering, the force field is approximated from these measured samples.
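The final step above — approximating a force field from measured samples — can be sketched with a simple scattered-data interpolator. The inverse-distance weighting scheme, function names, and parameters below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def render_force(probe_pos, sample_pos, sample_force, k=4, eps=1e-9):
    """Approximate the force at probe_pos from measured (position, force)
    samples by inverse-distance weighting over the k nearest samples.
    This interpolation scheme is an illustrative stand-in."""
    # Distances from the probe to every measured sample point.
    d = np.linalg.norm(sample_pos - probe_pos, axis=1)
    idx = np.argsort(d)[:k]          # k nearest samples
    w = 1.0 / (d[idx] + eps)         # closer samples weigh more
    w /= w.sum()
    # Weighted blend of the measured force vectors.
    return (w[:, None] * sample_force[idx]).sum(axis=0)
```

Querying at a measured point returns (approximately) the measured force, while queries between samples blend their neighbors smoothly — the behavior one would want when replaying probed measurements through a haptic device.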

41 citations

Proceedings ArticleDOI
TL;DR: This algorithm was designed, optimized and is in daily use for the accurate and rapid inspection of optics from a large laser system, which includes images with background noise, ghost reflections, different illumination and other sources of variation.
Abstract: Many automated image-based applications have need of finding small spots in a variably noisy image. For humans, it is relatively easy to distinguish objects from local surroundings no matter what else may be in the image. We attempt to capture this distinguishing capability computationally by calculating a measurement that estimates the strength of signal within an object versus the noise in its local neighborhood. First, we hypothesize various sizes for the object and corresponding background areas. Then, we compute the Local Area Signal to Noise Ratio (LASNR) at every pixel in the image, resulting in a new image with LASNR values for each pixel. All pixels exceeding a pre-selected LASNR value become seed pixels, or initiation points, and are grown to include the full area extent of the object. Since growing the seed is a separate operation from finding the seed, each object can be any size and shape. Thus, the overall process is a 2-stage segmentation method that first finds object seeds and then grows them to find the full extent of the object. This algorithm was designed, optimized and is in daily use for the accurate and rapid inspection of optics from a large laser system (National Ignition Facility (NIF), Lawrence Livermore National Laboratory, Livermore, CA), which includes images with background noise, ghost reflections, different illumination and other sources of variation.
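The two-stage procedure described above — score every pixel against its local background, threshold to get seeds, then grow each seed to the object's full extent — can be sketched as follows. The window sizes, thresholds, and the uniform-filter formulation are illustrative assumptions, not the parameters of the deployed NIF system:

```python
import numpy as np
from scipy import ndimage

def lasnr_segment(img, obj_size=3, bg_size=9, seed_thresh=5.0, grow_thresh=2.0):
    """Two-stage spot segmentation: find LASNR seeds, then grow them."""
    img = img.astype(float)
    n_obj, n_bg = obj_size ** 2, bg_size ** 2
    n_ann = n_bg - n_obj
    # Window sums via mean filters (sum = mean * window area).
    sum_obj = ndimage.uniform_filter(img, obj_size) * n_obj
    sum_bg = ndimage.uniform_filter(img, bg_size) * n_bg
    sq_obj = ndimage.uniform_filter(img ** 2, obj_size) * n_obj
    sq_bg = ndimage.uniform_filter(img ** 2, bg_size) * n_bg
    # Background annulus statistics (hypothesized object window excluded).
    ann_mean = (sum_bg - sum_obj) / n_ann
    ann_var = (sq_bg - sq_obj) / n_ann - ann_mean ** 2
    ann_std = np.sqrt(np.maximum(ann_var, 1e-12))
    # Local Area Signal to Noise Ratio at every pixel.
    lasnr = (sum_obj / n_obj - ann_mean) / ann_std
    # Stage 1: seeds are pixels exceeding the pre-selected LASNR value.
    seeds = lasnr > seed_thresh
    # Stage 2: grow seeds over a looser mask, so objects take any size/shape.
    labels, _ = ndimage.label(lasnr > grow_thresh)
    keep = np.unique(labels[seeds])
    objects = np.isin(labels, keep[keep != 0])
    return objects, lasnr
```

Because growing (stage 2) is decoupled from seeding (stage 1), a strict seed threshold controls false detections while the looser growth threshold recovers each object's full area, matching the separation the abstract describes.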

34 citations

Proceedings ArticleDOI
05 Dec 2005
TL;DR: This paper presents a technique to compute high-resolution range maps from single images of moving and deforming objects based on observing the deformation of a projected light pattern that combines a set of parallel colored stripes and a perpendicular set of sinusoidal intensity stripes.
Abstract: In applications like motion capture, high speed collision testing and robotic manipulation of deformable objects there is a critical need for capturing the 3D geometry of fast moving and/or deforming objects. Although there exists many 3D sensing techniques, most cannot deal with dynamic scenes (e.g., laser scanning). Others, like stereovision, require that object surfaces be appropriately textured. Few, if any, build high-resolution 3D models of dynamic scenes. This paper presents a technique to compute high-resolution range maps from single images of moving and deforming objects. This method is based on observing the deformation of a projected light pattern that combines a set of parallel colored stripes and a perpendicular set of sinusoidal intensity stripes. While the colored stripes allow the sensor to compute absolute depths at coarse resolution, the sinusoidal intensity stripes give dense relative depths. This twofold pattern makes it possible to extract a high-resolution range map from each image in a video sequence. The sensor has been implemented and tested on several deforming objects.
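The role of the twofold pattern can be illustrated with a small sketch: the colored stripes pin down which period a pixel falls in (coarse, absolute), while the sinusoid's phase gives a dense fractional position within that period (fine, relative). The function name, units, and the assumption that both quantities are already decoded per pixel are hypothetical:

```python
import numpy as np

def unwrap_with_coarse(phase, stripe_index, period_px=16):
    """Combine coarse absolute stripe indices with fine sinusoidal phase.

    phase: wrapped phase of the intensity sinusoid, in [0, 2*pi)
    stripe_index: absolute period index decoded from the colored stripes
    Returns an absolute, sub-stripe position along the pattern axis, in
    (illustrative) pixel units of the projected pattern.
    """
    frac = phase / (2 * np.pi)  # fine position within one period
    # The colored stripes resolve the phase ambiguity, promoting the
    # fractional phase to an absolute coordinate.
    return (stripe_index + frac) * period_px
```

This is the standard phase-unwrapping idea behind such hybrid patterns: the coarse code removes the 2π ambiguity, so each single image yields an absolute, high-resolution range map after triangulation.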

21 citations


Cited by
Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year-old remains elusive. Why is computer vision such a challenging problem, and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/.
Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

4,146 citations

Journal ArticleDOI
13 Jun 2016
TL;DR: In this article, the authors present a survey of the state of the art on planning and control algorithms with particular regard to the urban environment, along with a discussion of their effectiveness.
Abstract: Self-driving vehicles are a maturing technology with the potential to reshape mobility by enhancing the safety, accessibility, efficiency, and convenience of automotive transportation. Safety-critical tasks that must be executed by a self-driving vehicle include planning of motions through a dynamic environment shared with other vehicles and pedestrians, and their robust executions via feedback control. The objective of this paper is to survey the current state of the art on planning and control algorithms with particular regard to the urban setting. A selection of proposed techniques is reviewed along with a discussion of their effectiveness. The surveyed approaches differ in the vehicle mobility model used, in assumptions on the structure of the environment, and in computational requirements. The side by side comparison presented in this survey helps to gain insight into the strengths and limitations of the reviewed approaches and assists with system level design choices.

1,437 citations

01 Jan 2009
TL;DR: Boss, an autonomous vehicle that combines mission, behavioral, and motion planning in a three-layer system to drive in urban environments, qualified first in the National Qualification Event and won the DARPA Urban Challenge.
Abstract: Boss is an autonomous vehicle that uses on-board sensors (global positioning system, lasers, radars, and cameras) to track other vehicles, detect static obstacles, and localize itself relative to a road model. A three-layer planning system combines mission, behavioral, and motion planning to drive in urban environments. The mission planning layer considers which street to take to achieve a mission goal. The behavioral layer determines when to change lanes and precedence at intersections and performs error recovery maneuvers. The motion planning layer selects actions to avoid obstacles while making progress toward local goals. The system was developed from the ground up to address the requirements of the DARPA Urban Challenge using a spiral system development process with a heavy emphasis on regular, regressive system testing. During the National Qualification Event and the 85-km Urban Challenge Final Event, Boss demonstrated some of its capabilities, qualifying first and winning the challenge. © 2008 Wiley Periodicals, Inc.
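The division of labor between the mission and behavioral layers described above can be sketched roughly as follows; the graph representation, BFS routing, and blocked-edge check are illustrative assumptions, not Boss's actual implementation:

```python
from collections import deque

def mission_plan(road_graph, start, goal):
    """Mission layer: choose which streets to take toward the goal
    (here, an unweighted BFS shortest path over intersections).
    road_graph maps intersection -> list of neighboring intersections."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:   # walk back to the start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in road_graph.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None  # goal unreachable

def behavioral_plan(path, blocked):
    """Behavioral layer: monitor the planned route and trigger an
    error-recovery replan when an upcoming segment is blocked."""
    for a, b in zip(path, path[1:]):
        if (a, b) in blocked:
            return "replan"
    return "follow"
```

The motion layer (selecting obstacle-avoiding actions toward local goals) would sit below these two, consuming the behavioral decision each cycle.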

1,275 citations

Journal IssueDOI
TL;DR: Boss is an autonomous vehicle that uses on-board sensors to track other vehicles, detect static obstacles, and localize itself relative to a road model; it was developed using a spiral system development process with a heavy emphasis on regular, regressive system testing.
Abstract: Boss is an autonomous vehicle that uses on-board sensors (global positioning system, lasers, radars, and cameras) to track other vehicles, detect static obstacles, and localize itself relative to a road model. A three-layer planning system combines mission, behavioral, and motion planning to drive in urban environments. The mission planning layer considers which street to take to achieve a mission goal. The behavioral layer determines when to change lanes and precedence at intersections and performs error recovery maneuvers. The motion planning layer selects actions to avoid obstacles while making progress toward local goals. The system was developed from the ground up to address the requirements of the DARPA Urban Challenge using a spiral system development process with a heavy emphasis on regular, regressive system testing. During the National Qualification Event and the 85-km Urban Challenge Final Event, Boss demonstrated some of its capabilities, qualifying first and winning the challenge. © 2008 Wiley Periodicals, Inc.

1,201 citations

Journal ArticleDOI
TL;DR: The effectiveness of the proposed MPC formulation is demonstrated by simulation and experimental tests up to 21 m/s on icy roads, and two approaches with different computational complexities are presented.
Abstract: In this paper, a model predictive control (MPC) approach for controlling an active front steering system in an autonomous vehicle is presented. At each time step, a trajectory is assumed to be known over a finite horizon, and an MPC controller computes the front steering angle in order to follow the trajectory on slippery roads at the highest possible entry speed. We present two approaches with different computational complexities. In the first approach, we formulate the MPC problem by using a nonlinear vehicle model. The second approach is based on successive online linearization of the vehicle model. Discussions on computational complexity and performance of the two schemes are presented. The effectiveness of the proposed MPC formulation is demonstrated by simulation and experimental tests up to 21 m/s on icy roads.
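A toy receding-horizon steering loop in the spirit of the abstract can be sketched with a kinematic bicycle model and a brute-force search over candidate front steering angles, standing in for the paper's nonlinear/linearized optimization; all model parameters and cost weights below are illustrative:

```python
import numpy as np

def bicycle_step(state, delta, v=10.0, L=2.7, dt=0.05):
    """Kinematic bicycle model: state = (x, y, heading), delta = front
    steering angle. Speed, wheelbase, and step are illustrative."""
    x, y, th = state
    x += v * np.cos(th) * dt
    y += v * np.sin(th) * dt
    th += v / L * np.tan(delta) * dt
    return np.array([x, y, th])

def mpc_steer(state, ref_y, horizon=10,
              candidates=np.linspace(-0.3, 0.3, 31)):
    """Receding-horizon steering: pick the angle minimizing tracking
    error of a reference lateral position over the horizon, holding the
    angle constant (a crude stand-in for the paper's optimization)."""
    best, best_cost = 0.0, np.inf
    for delta in candidates:
        s, cost = state.copy(), 0.0
        for _ in range(horizon):
            s = bicycle_step(s, delta)
            # Quadratic cost: lateral tracking error + steering effort.
            cost += (s[1] - ref_y) ** 2 + 0.1 * delta ** 2
        if cost < best_cost:
            best, best_cost = delta, cost
    return best
```

In a real MPC controller the inner search is replaced by solving a constrained optimization over a steering sequence (nonlinear, or a QP after linearizing the model around the current operating point, as in the paper's second approach), and only the first move is applied before replanning.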

1,184 citations