
Showing papers by "Nancy M. Amato published in 2008"


Book ChapterDOI
20 Oct 2008
TL;DR: Automatic motion planning has applications ranging from traditional robotics to computer-aided design to computational biology and chemistry, but there are still many scenarios in which better methods are needed, e.g., problems involving narrow passages or which contain multiple regions that are best suited to different planners.
Abstract: Automatic motion planning has applications ranging from traditional robotics to computer-aided design to computational biology and chemistry. While randomized planners, such as probabilistic roadmap methods (prms) or rapidly-exploring random trees (rrt), have been highly successful in solving many high degree of freedom problems, there are still many scenarios in which we need better methods, e.g., problems involving narrow passages or which contain multiple regions that are best suited to different planners.
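To make the prm idea concrete, here is a minimal sketch of a basic probabilistic roadmap for a 2-D point robot among circular obstacles. The world bounds, obstacle model, and all parameter values are illustrative assumptions, not the planners evaluated in the paper.

```python
import math
import random

# Illustrative world: one circular obstacle in a 10x10 workspace.
OBSTACLES = [((5.0, 5.0), 2.0)]  # (center, radius) pairs

def collision_free(p):
    """A configuration is valid if it lies outside every obstacle."""
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def edge_free(p, q, step=0.05):
    """Validate the straight-line edge p->q by dense interpolation."""
    n = max(1, int(math.dist(p, q) / step))
    return all(
        collision_free((p[0] + (q[0] - p[0]) * i / n,
                        p[1] + (q[1] - p[1]) * i / n))
        for i in range(n + 1)
    )

def build_roadmap(n_samples=150, k=8, seed=7):
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < n_samples:          # sampling phase
        p = (rng.uniform(0, 10), rng.uniform(0, 10))
        if collision_free(p):
            nodes.append(p)
    edges = {i: set() for i in range(n_samples)}
    for i, p in enumerate(nodes):          # connection phase: k nearest
        nearest = sorted(range(n_samples),
                         key=lambda j: math.dist(p, nodes[j]))[1:k + 1]
        for j in nearest:
            if edge_free(p, nodes[j]):
                edges[i].add(j)
                edges[j].add(i)
    return nodes, edges
```

Queries would then connect start and goal to the roadmap and run a graph search; the "narrow passage" difficulty the abstract mentions shows up here as uniform sampling rarely placing nodes inside thin corridors of free space.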

80 citations


Journal ArticleDOI
TL;DR: This paper explores an alternative partitioning strategy that decomposes a given model into "approximately convex" pieces that may provide benefits similar to those of convex components, while the resulting decomposition is both significantly smaller (typically by orders of magnitude) and can be computed more efficiently.
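The core idea can be illustrated with a toy 2-D version of the "approximately convex" test: a polygon counts as approximately convex when every vertex lies within a tolerance tau of its convex hull. The distance-to-hull measure and all names below are a simplified stand-in for the paper's actual concavity measure.

```python
import math

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half_hull(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower = half_hull(pts)
    upper = half_hull(reversed(pts))
    return lower[:-1] + upper[:-1]

def _point_segment_dist(p, a, b):
    dx, dy = b[0] - a[0], b[1] - a[1]
    if dx == dy == 0:
        return math.dist(p, a)
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.dist(p, (a[0] + t * dx, a[1] + t * dy))

def concavity(polygon):
    """Deepest notch: max over vertices of the distance to the hull."""
    hull = convex_hull(polygon)
    hull_edges = list(zip(hull, hull[1:] + hull[:1]))
    def depth(v):
        if v in hull:
            return 0.0
        return min(_point_segment_dist(v, a, b) for a, b in hull_edges)
    return max(depth(v) for v in polygon)

def approximately_convex(polygon, tau):
    return concavity(polygon) <= tau
```

A decomposition in this spirit would split only pieces whose concavity exceeds tau, which is why shallow features survive intact and the result is far smaller than an exact convex decomposition.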

65 citations


Journal ArticleDOI
TL;DR: The method accurately computes the kinetics-based functional rates of wild-type and mutant ColE1 RNAII and MS2 phage RNAs, showing excellent agreement with experiment, and compares favorably with other computational methods that begin with a comprehensive free-energy landscape.

56 citations


Book ChapterDOI
20 Oct 2008
TL;DR: This paper proposes a new prm-based framework called Incremental Map Generation (img), along with general evaluation criteria, and shows how to apply them to construct different types of roadmaps, e.g., roadmaps that coarsely or more finely map the space.
Abstract: Probabilistic roadmap methods (prms) have been highly successful in solving many high degree of freedom motion planning problems arising in diverse application domains such as traditional robotics, computer-aided design, and computational biology and chemistry. One important practical issue with prms is that they do not provide an automated mechanism to determine how large a roadmap is needed for a given problem. Instead, users typically determine this by trial and error and as a consequence often construct larger roadmaps than are needed. In this paper, we propose a new prm-based framework called Incremental Map Generation (img) to address this problem. Our strategy is to break the map generation into several processes, each of which generates samples and connections, and to continue adding the next increment of samples and connections to the evolving roadmap until it stops improving. In particular, the process continues until a set of evaluation criteria determine that the planning strategy is no longer effective at improving the roadmap. We propose some general evaluation criteria and show how to apply them to construct different types of roadmaps, e.g., roadmaps that coarsely or more finely map the space. In addition, we show how img can be integrated with previously proposed adaptive strategies for selecting sampling methods. We provide results illustrating the power of img.
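The control loop the abstract describes can be sketched in a few lines. Below, the 1-D "roadmap" (points on a line, connected when close, tracked by union-find) and the component-count stopping criterion are simplified stand-ins of our own; img itself supports pluggable criteria and sampling strategies.

```python
import random

class Roadmap:
    """Nodes on a line, connected when within `radius` (union-find)."""
    def __init__(self, radius=0.05):
        self.radius = radius
        self.coords = []
        self.parent = []

    def _find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def add(self, x):
        idx = len(self.coords)
        self.coords.append(x)
        self.parent.append(idx)
        for j, y in enumerate(self.coords[:-1]):  # connect to close nodes
            if abs(x - y) <= self.radius:
                self.parent[self._find(idx)] = self._find(j)

    def num_components(self):
        return len({self._find(i) for i in range(len(self.coords))})

def img(increment_size=25, patience=3, max_increments=100, seed=3):
    """Grow the map in increments; stop when it stops improving."""
    rng = random.Random(seed)
    roadmap = Roadmap()
    best, stale = float("inf"), 0
    for _ in range(max_increments):
        for _ in range(increment_size):        # one increment of samples
            roadmap.add(rng.random())
        n = roadmap.num_components()
        if n < best:
            best, stale = n, 0
        else:
            stale += 1                          # criterion: no improvement
        if stale >= patience:
            break
    return roadmap
```

The point of the pattern is that roadmap size is decided by the evaluation criterion at run time rather than fixed in advance by trial and error.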

22 citations



Book ChapterDOI
28 Nov 2008
TL;DR: A methodology that enables third party libraries to be used with stapl is described; it allows a developer to specify when these specialized libraries can correctly be used, and provides mechanisms to transparently invoke them when appropriate.
Abstract: The Standard Template Adaptive Parallel Library ( stapl ) is a high-productivity parallel programming framework that extends C++ and stl with unified support for shared and distributed memory parallelism. stapl provides distributed data structures ( pContainers ) and parallel algorithms ( pAlgorithms ) and a generic methodology for extending them to provide customized functionality. To improve productivity and performance, it is essential for stapl to exploit third party libraries, including those developed in programming languages other than C++. In this paper we describe a methodology that enables third party libraries to be used with stapl . This methodology allows a developer to specify when these specialized libraries can correctly be used, and provides mechanisms to transparently invoke them when appropriate. It also provides support for using stapl pAlgorithms and pContainers in external codes. As a concrete example, we illustrate how third party libraries, namely BLAS and PBLAS, can be transparently embedded into stapl to provide efficient linear algebra algorithms for the stapl pMatrix , with negligible slowdown with respect to the optimized libraries themselves.
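The dispatch idea — register a predicate stating when an optimized third-party kernel is applicable, then invoke it transparently with a generic fallback — can be sketched in Python (stapl itself is C++). `fast_dot` below is a hypothetical stand-in for an external BLAS routine; nothing here is stapl's actual API.

```python
import math

_SPECIALIZATIONS = []

def specializes(predicate):
    """Register a function as usable whenever `predicate` holds."""
    def register(fn):
        _SPECIALIZATIONS.append((predicate, fn))
        return fn
    return register

def generic_dot(a, b):
    """Portable fallback implementation."""
    return sum(x * y for x, y in zip(a, b))

@specializes(lambda a, b: len(a) == len(b)
             and all(isinstance(x, float) for x in list(a) + list(b)))
def fast_dot(a, b):
    # Stand-in for calling out to an optimized external library.
    return math.fsum(x * y for x, y in zip(a, b))

def dot(a, b):
    for predicate, fn in _SPECIALIZATIONS:
        if predicate(a, b):        # transparently use the specialization
            return fn(a, b)
    return generic_dot(a, b)
```

Callers only ever see `dot`; whether the optimized path runs depends entirely on the registered applicability condition, which mirrors the "specify when the library can correctly be used" requirement in the paper.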

9 citations


Proceedings ArticleDOI
25 Jun 2008
TL;DR: This work introduces a filtering strategy for Probabilistic Roadmap Methods (PRMs) with the aim of improving roadmap construction performance by selecting only the samples that are likely to produce roadmap structure improvement.
Abstract: Sampling-based motion planning methods have been highly successful in solving many high degree of freedom motion planning problems arising in diverse application domains such as traditional robotics, computer-aided design, and computational biology and chemistry. Recent work in metrics for sampling-based planners provides tools to analyze the model building process at three levels of detail: sample level, region level, and global level. These tools are useful for comparing the evolution of sampling methods and have shown promise for improving the process altogether [15], [17], [24]. Here, we introduce a filtering strategy for Probabilistic Roadmap Methods (PRMs) with the aim of improving roadmap construction performance by selecting only the samples that are likely to produce roadmap structure improvement. By measuring a new sample’s maximum potential structural improvement with respect to the current roadmap, we can choose to accept only samples that have an adequate potential for improvement. We show how this approach can improve the standard PRM framework in a variety of motion planning situations using popular sampling techniques.
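A toy version of the filtering step: score each candidate by its potential to improve the roadmap before paying for expensive connection attempts, and discard low-potential samples. The scoring rule below (distance to the nearest accepted node, so samples in already-covered space are rejected) is our own simplified stand-in for the paper's structural-improvement measure.

```python
import math

def potential_improvement(sample, nodes):
    """Crude potential score: how far the sample is from covered space."""
    if not nodes:
        return float("inf")
    return min(math.dist(sample, p) for p in nodes)

def filtered_prm_samples(candidates, min_gain=0.5):
    """Accept only candidates whose potential exceeds the threshold."""
    accepted = []
    for s in candidates:
        if potential_improvement(s, accepted) >= min_gain:
            accepted.append(s)
    return accepted
```

The payoff is that the (cheap) scoring test runs on every candidate, while the (expensive) local planner only ever sees the survivors.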

9 citations


01 Jan 2008
TL;DR: It is shown that dimensionality reduction can be effectively applied, even to discrete conformation spaces (as for RNA secondary structure) that do not typically lend themselves to reduction techniques.
Abstract: Molecular motions, including both protein and RNA, play an essential role in many biochemical processes. Simulations have attempted to study these detailed large-scale molecular motions, but they are often limited by the expense of representing complex molecular structures. For example, enumerating all possible RNA conformations with valid contacts is an exponential endeavor, and the complexity of protein motion increases with the model’s detail and protein length. In this paper, we explore the use of dimensionality reduction techniques to better approximate protein and RNA motions. We present two new methods to study motions: (1) an evaluation technique to compare different distributions of conformations and (2) a way to identify likely local motion transitions. We combine these two methods in an existing motion framework to study large-scale motions for both proteins and RNA. We show that dimensionality reduction can be effectively applied, even to discrete conformation spaces (as for RNA secondary structure) that do not typically lend themselves to reduction techniques.
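The dimensionality-reduction step can be sketched with ordinary PCA: each row of the input is one conformation flattened into a coordinate vector, and the data is projected onto its top principal components via an SVD of the centered matrix. This is a generic illustration under that assumption, not the paper's specific pipeline or its handling of discrete RNA conformation spaces.

```python
import numpy as np

def pca_project(conformations, n_components=2):
    """Project conformation vectors onto their top principal components."""
    X = np.asarray(conformations, dtype=float)
    Xc = X - X.mean(axis=0)                           # center the data
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)  # rows of vt = PCs
    return Xc @ vt[:n_components].T                   # low-dim embedding
```

For motions that are effectively low-dimensional (as the paper argues molecular motions often are), a handful of components can stand in for thousands of raw coordinates.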

6 citations


Journal ArticleDOI
TL;DR: This special issue contains expanded versions of 12 papers presented at the Seventh WAFR, held at the Tribeca Grand Hotel in New York City in July 2006; the workshop featured 32 contributed papers, each of which underwent a rigorous selection process.
Abstract: Algorithms are a fundamental component of robotic systems – they control or reason about motion and perception in the physical world, they receive input from noisy sensors, consider geometric and physical constraints, and operate on the world through imprecise actuators. Increasingly, robotics algorithms are finding use in areas far beyond the traditional scope of robots, such as computer animation and gaming, virtual environments, sensor networks, manufacturing, medical robotics, and computational biology. The International Workshop on the Algorithmic Foundations of Robotics (WAFR) is a multi-disciplinary, single-track workshop with submitted and invited papers on advances on algorithmic problems in robotics. WAFR has been held every other year since 1994 and has an established reputation as one of the most important venues for presenting algorithmic work related to robotics. Previous WAFRs have been held at Zeist, The Netherlands (2004); Nice, France (2002); Hanover, NH, USA (2000); Houston, TX, USA (1998); Toulouse, France (1996); and Stanford, CA, USA (1994). This special issue contains expanded versions of 12 papers that were presented at the Seventh WAFR, which was held at the Tribeca Grand Hotel in New York City in July 2006. WAFR 2006 had a record number of submissions and attendees, and a very strong and interesting technical program with six stellar invited speakers – James Gimzewski (UCLA), Jessica Hodgins (CMU), Jean-Claude Latombe (Stanford), Tomas Lozano-Perez (MIT), Jacob Schwartz (NYU), and Sebastian Thrun (Stanford) – and 32 contributed papers, each of which underwent a rigorous selection process in which each submission was reviewed by at least three members of the program committee. The authors of some of the best and most interesting papers were invited to submit expanded versions of their WAFR 2006 papers to this special issue. These papers underwent the journal's rigorous review process.