Author

Manfred Lau

Bio: Manfred Lau is an academic researcher at City University of Hong Kong. He has contributed to research in topics including Computer Science and Sketch, has an h-index of 16, and has co-authored 41 publications receiving 1,562 citations. His previous affiliations include Carnegie Mellon University and the Hong Kong University of Science and Technology.

Papers
Proceedings ArticleDOI
18 Apr 2005
TL;DR: A footstep planner for the Honda ASIMO humanoid robot is presented that plans a sequence of footstep positions to navigate toward a goal location while avoiding obstacles.
Abstract: Despite the recent achievements in stable dynamic walking for many humanoid robots, relatively little navigation autonomy has been achieved. In particular, the ability to autonomously select foot placement positions to avoid obstacles while walking is an important step towards improved navigation autonomy for humanoids. We present a footstep planner for the Honda ASIMO humanoid robot that plans a sequence of footstep positions to navigate toward a goal location while avoiding obstacles. The possible future foot placement positions are dependent on the current state of the robot. Using a finite set of state-dependent actions, we use an A* search to compute optimal sequences of footstep locations up to a time-limited planning horizon. We present experimental results demonstrating the robot navigating through both static and dynamic known environments that include obstacles moving on predictable trajectories.

417 citations
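The footstep planner described above searches a finite, state-dependent action set with A* up to a time-limited planning horizon. Below is a minimal sketch of that idea on a 2D grid; the action set, step costs, and expansion budget are illustrative stand-ins, not ASIMO's actual step catalogue or transition model.

```python
import heapq
import math

# Hypothetical discrete action set: (dx, dy) displacements of the swing foot,
# standing in for a state-dependent step catalogue.
ACTIONS = [(1, 0), (1, 1), (1, -1), (0, 1), (0, -1)]

def astar_footsteps(start, goal, obstacles, max_expansions=10_000):
    """A*-style footstep search on a grid.

    start, goal : (x, y) cells
    obstacles   : set of blocked (x, y) cells
    Returns a list of footstep cells from start to goal, or None.
    """
    def heuristic(p):
        return math.hypot(goal[0] - p[0], goal[1] - p[1])

    frontier = [(heuristic(start), 0.0, start, [start])]
    best_cost = {start: 0.0}
    expansions = 0
    while frontier and expansions < max_expansions:  # time-limited horizon
        _, g, pos, path = heapq.heappop(frontier)
        expansions += 1
        if pos == goal:
            return path
        for dx, dy in ACTIONS:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt in obstacles:
                continue
            ng = g + math.hypot(dx, dy)  # cost of taking this step
            if ng < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = ng
                heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None  # no plan found within the expansion budget

if __name__ == "__main__":
    plan = astar_footsteps((0, 0), (6, 3), obstacles={(3, 1), (3, 2), (3, 3)})
    print(plan)
```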

Proceedings ArticleDOI
22 Jan 2010
TL;DR: The concepts and details of SketchChair are presented, both miniature and full-sized chairs are designed using the application, and a workshop is held in which novice users design their own model chairs.
Abstract: SketchChair is an application that allows novice users to control the entire process of designing and building their own chairs. Chairs are designed using a simple 2D sketch-based interface and design validation tools, and are then fabricated from sheet materials cut by a laser cutter or CNC milling machine. This paper presents the concepts and details of SketchChair, along with both miniature and full-sized chairs designed using the application. We conclude with results and insights from a workshop in which novice users designed their own model chairs.

194 citations

Proceedings ArticleDOI
29 Jul 2005
TL;DR: This paper explores a behavior planning approach to automatically generate realistic motions for animated characters and shows results of synthesized animations involving up to one hundred human and animal characters planning simultaneously in both static and dynamic environments.
Abstract: This paper explores a behavior planning approach to automatically generate realistic motions for animated characters. Motion clips are abstracted as high-level behaviors and associated with a behavior finite-state machine (FSM) that defines the movement capabilities of a virtual character. During runtime, motion is generated automatically by a planning algorithm that performs a global search of the FSM and computes a sequence of behaviors for the character to reach a user-designated goal position. Our technique can generate interesting animations using a relatively small amount of data, making it attractive for resource-limited game platforms. It also scales efficiently to large motion databases, because the search performance is primarily dependent on the complexity of the behavior FSM rather than on the amount of data. Heuristic cost functions that the planner uses to evaluate candidate motions provide a flexible framework from which an animator can control character preferences for certain types of behavior. We show results of synthesized animations involving up to one hundred human and animal characters planning simultaneously in both static and dynamic environments.

171 citations
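The behavior-planning approach above abstracts motion clips into a finite-state machine (FSM) and runs a global search over it toward a user-designated goal, with heuristic costs as the animator's control knob. Below is a minimal sketch of that structure; the FSM, clip displacements, and cost values are hypothetical placeholders.

```python
import heapq

# Hypothetical behaviour FSM: each behaviour lists the behaviours it may transition to
# and the planar displacement its motion clip produces (a stand-in for real clip data).
FSM = {
    "walk":      {"next": ["walk", "jog", "turn_left"], "delta": (1.0, 0.0), "cost": 1.0},
    "jog":       {"next": ["jog", "walk"],              "delta": (2.0, 0.0), "cost": 1.5},
    "turn_left": {"next": ["walk"],                     "delta": (0.5, 1.0), "cost": 1.2},
}

def plan_behaviors(start_pos, goal_pos, start_behavior="walk",
                   goal_radius=0.75, max_expansions=20_000):
    """Global search over the behaviour FSM for a sequence reaching goal_pos."""
    def h(pos):  # straight-line distance-to-goal heuristic
        return ((goal_pos[0] - pos[0]) ** 2 + (goal_pos[1] - pos[1]) ** 2) ** 0.5

    frontier = [(h(start_pos), 0.0, start_pos, start_behavior, [])]
    seen = set()
    for _ in range(max_expansions):
        if not frontier:
            break
        _, g, pos, beh, seq = heapq.heappop(frontier)
        if h(pos) <= goal_radius:
            return seq
        key = (round(pos[0], 1), round(pos[1], 1), beh)
        if key in seen:
            continue
        seen.add(key)
        for nxt in FSM[beh]["next"]:
            dx, dy = FSM[nxt]["delta"]
            npos = (pos[0] + dx, pos[1] + dy)
            ng = g + FSM[nxt]["cost"]  # tunable cost steers preference between behaviours
            heapq.heappush(frontier, (ng + h(npos), ng, npos, nxt, seq + [nxt]))
    return None

if __name__ == "__main__":
    print(plan_behaviors((0.0, 0.0), (6.0, 2.0)))
```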

Proceedings ArticleDOI
26 Apr 2014
TL;DR: This paper describes the design and implementation of MixFab, a mixed-reality environment that lowers the barrier for users to engage in personal fabrication, and reports a user study evaluating the system's prototype.
Abstract: Personal fabrication machines, such as 3D printers and laser cutters, are becoming increasingly ubiquitous. However, designing objects for fabrication still requires 3D modeling skills, thereby rendering such technologies inaccessible to a wide user-group. In this paper, we introduce MixFab, a mixed-reality environment for personal fabrication that lowers the barrier for users to engage in personal fabrication. Users design objects in an immersive augmented reality environment, interact with virtual objects in a direct gestural manner and can introduce existing physical objects effortlessly into their designs. We describe the design and implementation of MixFab, a user-defined gesture study that informed this design, show artifacts designed with the system and describe a user study evaluating the system's prototype.

151 citations

Proceedings ArticleDOI
02 Sep 2006
TL;DR: A novel approach for interactively synthesizing motions for characters navigating in complex environments, which precomputes search trees of motion clips that can be applied to arbitrary environments through a series of table lookups.
Abstract: We present a novel approach for interactively synthesizing motions for characters navigating in complex environments. We focus on the runtime efficiency for motion generation, thereby enabling the interactive animation of a large number of characters simultaneously. The key idea is to precompute search trees of motion clips that can be applied to arbitrary environments. Given a navigation goal relative to a current body position, the best available solution paths and motion sequences can be efficiently extracted during runtime through a series of table lookups. For distant start and goal positions, we first use a fast coarse-level planner to generate a rough path of intermediate sub-goals to guide each iteration of the runtime lookup phase. We demonstrate the efficiency of our technique across a range of examples in an interactive application with multiple autonomous characters navigating in dynamic environments. Each character responds in real-time to arbitrary user changes to the environment obstacles or navigation goals. The runtime phase is more than two orders of magnitude faster than existing planning methods or traditional motion synthesis techniques. Our technique is not only useful for autonomous motion generation in games, virtual reality, and interactive simulations, but also for animating massive crowds of characters offline for special effects in movies.

113 citations
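The key idea above is to move the expensive search offline: precompute trees of motion-clip sequences once, then answer runtime queries with table lookups keyed by the goal position relative to the current body. A toy sketch of that offline/online split is given below; the clip set, quantization cell, and tree depth are assumptions made for illustration, and a `None` result marks where the paper's coarse-level planner would supply an intermediate sub-goal.

```python
from collections import deque

# Hypothetical clip set: each clip displaces the character by (dx, dy).
CLIPS = {"step": (1.0, 0.0), "strafe": (0.0, 1.0), "long_step": (2.0, 0.0)}

def precompute_lookup(max_depth=4, cell=0.5):
    """Offline phase: expand a search tree of clip sequences rooted at the origin
    and index the resulting displacements in a table keyed by quantized offset."""
    table = {}
    queue = deque([((0.0, 0.0), [])])
    while queue:
        (x, y), seq = queue.popleft()
        key = (round(x / cell), round(y / cell))
        if key not in table or len(seq) < len(table[key]):
            table[key] = seq                      # keep the shortest sequence per cell
        if len(seq) < max_depth:
            for name, (dx, dy) in CLIPS.items():
                queue.append(((x + dx, y + dy), seq + [name]))
    return table

def runtime_query(table, goal_offset, cell=0.5):
    """Online phase: a single table lookup for the goal expressed relative
    to the character's current body position."""
    key = (round(goal_offset[0] / cell), round(goal_offset[1] / cell))
    return table.get(key)    # None: defer to a coarse planner for a nearer sub-goal

if __name__ == "__main__":
    table = precompute_lookup()
    print(runtime_query(table, (3.0, 1.0)))  # e.g. a step/long_step/strafe sequence
```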


Cited by
MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, and covers planning under the differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.

6,340 citations
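Among the topics listed above, planning under uncertainty via Markov decision processes lends itself to a compact illustration. The value-iteration sketch below on a toy corridor MDP is not drawn from the book; the states, rewards, and slip probability are invented purely to show the machinery.

```python
# Minimal value iteration on a toy 1-D corridor MDP. States 0..4, state 4 is the goal.
N_STATES, GOAL, GAMMA = 5, 4, 0.95
ACTIONS = (-1, +1)   # move left / move right
SLIP = 0.1           # with probability SLIP the move fails and the agent stays put

def value_iteration(n_iters=100):
    V = [0.0] * N_STATES
    for _ in range(n_iters):
        new_V = V[:]
        for s in range(N_STATES):
            if s == GOAL:
                new_V[s] = 0.0
                continue
            best = float("-inf")
            for a in ACTIONS:
                s_next = min(max(s + a, 0), N_STATES - 1)
                reward = 1.0 if s_next == GOAL else -0.04
                q = (1 - SLIP) * (reward + GAMMA * V[s_next]) \
                    + SLIP * (-0.04 + GAMMA * V[s])
                best = max(best, q)
            new_V[s] = best
        V = new_V
    return V

if __name__ == "__main__":
    print([round(v, 3) for v in value_iteration()])
```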

01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance, and describes numerous important application areas such as image-based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and sufficient detail to build useful applications. Users learn techniques that have proven useful through first-hand experience and a wide range of mathematical methods. A CD-ROM included with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

Proceedings Article
01 Jan 1999

2,010 citations

Proceedings ArticleDOI
25 Jul 2011
TL;DR: A novel face tracking algorithm is introduced that combines geometry and texture registration with pre-recorded animation priors in a single optimization, demonstrating that compelling 3D facial dynamics can be reconstructed in realtime without the use of face markers, intrusive lighting, or complex scanning hardware.
Abstract: This paper presents a system for performance-based character animation that enables any user to control the facial expressions of a digital avatar in realtime. The user is recorded in a natural environment using a non-intrusive, commercially available 3D sensor. The simplicity of this acquisition device comes at the cost of high noise levels in the acquired data. To effectively map low-quality 2D images and 3D depth maps to realistic facial expressions, we introduce a novel face tracking algorithm that combines geometry and texture registration with pre-recorded animation priors in a single optimization. Formulated as a maximum a posteriori estimation in a reduced parameter space, our method implicitly exploits temporal coherence to stabilize the tracking. We demonstrate that compelling 3D facial dynamics can be reconstructed in realtime without the use of face markers, intrusive lighting, or complex scanning hardware. This makes our system easy to deploy and facilitates a range of new applications, e.g. in digital gameplay or social interactions.

580 citations
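The tracking above is formulated as a single MAP optimization that balances geometry registration, texture registration, and a pre-recorded animation prior in a reduced parameter space. The sketch below mirrors only that structure: the data terms are synthetic linear stand-ins and the prior is a simple Gaussian, so this is a structural illustration rather than the paper's actual solver.

```python
import numpy as np

# Toy MAP-style objective in a reduced (blendshape-like) parameter space.
# A_geom/b_geom and A_tex/b_tex are fake linearized geometry and texture terms;
# only the overall structure (data terms plus a Gaussian animation prior,
# solved jointly) follows the abstract.
rng = np.random.default_rng(0)
DIM = 8                               # reduced expression parameters
A_geom = rng.normal(size=(30, DIM))
b_geom = rng.normal(size=30)
A_tex  = rng.normal(size=(20, DIM))
b_tex  = rng.normal(size=20)
prior_mean = np.zeros(DIM)            # e.g. the previous frame's estimate
prior_info = 4.0 * np.eye(DIM)        # inverse covariance of the prior

def solve_map(w_geom=1.0, w_tex=0.5):
    """Normal-equations solve of the quadratic MAP objective:
    w_geom*|A_geom x - b_geom|^2 + w_tex*|A_tex x - b_tex|^2
    + (x - prior_mean)^T prior_info (x - prior_mean)."""
    H = (w_geom * A_geom.T @ A_geom
         + w_tex * A_tex.T @ A_tex
         + prior_info)
    g = (w_geom * A_geom.T @ b_geom
         + w_tex * A_tex.T @ b_tex
         + prior_info @ prior_mean)
    return np.linalg.solve(H, g)

if __name__ == "__main__":
    print(np.round(solve_map(), 3))
```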

Journal ArticleDOI
TL;DR: This paper aims to learn a variety of environment-aware locomotion skills with a limited amount of prior knowledge by adopting a two-level hierarchical control framework and training both levels using deep reinforcement learning.
Abstract: Learning physics-based locomotion skills is a difficult problem, leading to solutions that typically exploit prior knowledge of various forms. In this paper we aim to learn a variety of environment-aware locomotion skills with a limited amount of prior knowledge. We adopt a two-level hierarchical control framework. First, low-level controllers are learned that operate at a fine timescale and which achieve robust walking gaits that satisfy stepping-target and style objectives. Second, high-level controllers are then learned which plan at the timescale of steps by invoking desired step targets for the low-level controller. The high-level controller makes decisions directly based on high-dimensional inputs, including terrain maps or other suitable representations of the surroundings. Both levels of the control policy are trained using deep reinforcement learning. Results are demonstrated on a simulated 3D biped. Low-level controllers are learned for a variety of motion styles and demonstrate robustness with respect to force-based disturbances, terrain variations, and style interpolation. High-level controllers are demonstrated that are capable of following trails through terrains, dribbling a soccer ball towards a target location, and navigating through static or dynamic obstacles.

518 citations
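The two-level hierarchy above separates a fine-timescale low-level controller, which tracks step targets, from a step-timescale high-level controller, which chooses those targets from terrain input; both are trained with deep reinforcement learning in the paper. The sketch below shows only the control-loop structure, with random stubs in place of the trained policies and toy point-mass dynamics.

```python
import numpy as np

# Structural sketch of the two-level control loop only; both "policies" below are
# random stubs standing in for trained deep-RL networks.
rng = np.random.default_rng(1)

def high_level_policy(terrain_window, char_state):
    """Called once per step: maps terrain features and character state to a
    desired footstep target (placeholder: a small random forward offset)."""
    return char_state[:2] + rng.uniform(0.2, 0.5, size=2)

def low_level_policy(char_state, step_target):
    """Called every fine-timescale tick: outputs an action driving the
    character toward the current step target (placeholder: proportional pull)."""
    return 0.3 * (step_target - char_state[:2])

def run_episode(n_steps=5, ticks_per_step=10):
    char_state = np.zeros(4)                     # [x, y, vx, vy], toy dynamics
    terrain = rng.uniform(0.0, 0.1, size=16)     # fake local height map
    for _ in range(n_steps):
        target = high_level_policy(terrain, char_state)    # step-timescale decision
        for _ in range(ticks_per_step):
            action = low_level_policy(char_state, target)  # fine-timescale control
            char_state[2:] = action
            char_state[:2] += char_state[2:] * 0.1         # integrate toy dynamics
    return char_state

if __name__ == "__main__":
    print(np.round(run_episode(), 3))
```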