
Showing papers by "Nancy M. Amato" published in 2007


Proceedings ArticleDOI
04 Jun 2007
TL;DR: This paper explores an alternative partitioning strategy that decomposes a given model into "approximately convex" pieces that may provide benefits similar to those of convex components, while the resulting decomposition is both significantly smaller (typically by orders of magnitude) and can be computed more efficiently.
Abstract: Decomposition is a technique commonly used to partition complex models into simpler components. While decomposition into convex components results in pieces that are easy to process, such decompositions can be costly to construct and can result in representations with an unmanageable number of components. In this paper we explore an alternative partitioning strategy that decomposes a given model into "approximately convex" pieces that may provide benefits similar to those of convex components, while the resulting decomposition is both significantly smaller (typically by orders of magnitude) and can be computed more efficiently. Indeed, for many applications, an approximate convex decomposition (ACD) can more accurately represent the important structural features of the model by providing a mechanism for ignoring less significant features, such as surface texture. We describe a technique for computing ACDs of three-dimensional polyhedral solids and surfaces of arbitrary genus. We provide results illustrating that our approach results in high quality decompositions with very few components and applications showing that comparable or better results can be obtained using ACDs in place of exact convex decompositions (ECDs) that are several orders of magnitude larger.
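To make the decompose-until-approximately-convex loop concrete, here is a minimal 2D sketch in Python (not the paper's 3D algorithm): it uses the distance from each boundary vertex to the convex hull as a simple concavity proxy and recursively splits any piece whose concavity exceeds a tolerance tau. The function names, the use of scipy, and especially the naive cut selection are illustrative assumptions; the actual method chooses cuts that resolve the most significant notches.

```python
import numpy as np
from scipy.spatial import ConvexHull

def concavity(poly):
    """Distance from each polygon vertex to its convex hull boundary
    (a simple concavity proxy: hull vertices score ~0, deep notches score high)."""
    hull = ConvexHull(poly)
    # hull.equations rows are [a, b, c] with a*x + b*y + c <= 0 for interior points
    d = -(poly @ hull.equations[:, :2].T + hull.equations[:, 2])
    return d.min(axis=1)

def acd(poly, tau):
    """Recursively split `poly` (an (n, 2) vertex array) until every piece is
    approximately convex, i.e. its maximum concavity is at most tau."""
    dists = concavity(poly)
    worst = int(np.argmax(dists))
    if dists[worst] <= tau or len(poly) <= 4:
        return [poly]                        # piece is close enough to convex
    # Naive placeholder cut: a diagonal from the deepest notch to the vertex
    # halfway around the boundary (the real method picks cuts that resolve notches).
    other = (worst + len(poly) // 2) % len(poly)
    i, j = sorted((worst, other))
    piece_a = poly[i:j + 1]
    piece_b = np.vstack([poly[j:], poly[:i + 1]])
    return acd(piece_a, tau) + acd(piece_b, tau)
```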

133 citations


Journal ArticleDOI
TL;DR: A novel method based on rigidity theory to sample conformation space more effectively is proposed and extensions of the framework to automate the process and to map transitions between specified conformations are described.
Abstract: Protein motions, ranging from molecular flexibility to large-scale conformational change, play an essential role in many biochemical processes. Despite the explosion in our knowledge of structural and functional data, our understanding of protein movement is still very limited. In previous work, we developed and validated a motion-planning-based method for mapping protein folding pathways from unstructured conformations to the native state. In this paper, we propose a novel method based on rigidity theory to sample conformation space more effectively, and we describe extensions of our framework to automate the process and to map transitions between specified conformations. Our results show that these additions both improve the accuracy of our maps and enable us to study a broader range of motions for larger proteins. For example, we show that rigidity-based sampling results in maps that capture subtle folding differences between protein G and its mutants, NuG1 and NuG2, and we illustrate how our technique can be used to study large-scale conformational changes in calmodulin, a 148-residue signaling protein known to undergo conformational changes when binding to Ca(2+). Finally, we announce our web-based protein folding server, which includes a publicly available archive of protein motions: (http://parasol.tamu.edu/foldingserver/).
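The rigidity-biased sampling idea can be sketched in a few lines: residues that an external rigidity analysis (e.g., a pebble-game computation, not shown here) labels rigid are perturbed less than flexible ones when generating new conformations. The function name, the phi/psi representation, and the sigma values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rigidity_biased_sample(phi_psi, rigid, sigma_rigid=2.0, sigma_flex=20.0, rng=None):
    """Perturb backbone dihedrals, moving flexible residues more than rigid ones.

    phi_psi : (n, 2) array of phi/psi angles in degrees
    rigid   : (n,) boolean array, True where rigidity analysis marks the residue rigid
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.where(rigid[:, None], sigma_rigid, sigma_flex)     # per-residue step size
    perturbed = phi_psi + rng.normal(0.0, sigma, size=phi_psi.shape)
    return (perturbed + 180.0) % 360.0 - 180.0                    # wrap into [-180, 180)
```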

73 citations


Proceedings ArticleDOI
10 Apr 2007
TL;DR: This paper presents a general framework for biasing samplers that is easily extendable to new distributions and can handle an arbitrary number of parent distributions by chaining them together and shows that by combining distributions, it can out-perform existing planners.
Abstract: With the success of randomized sampling-based motion planners such as probabilistic roadmap methods, much work has been done to design new sampling techniques and distributions. To date, there is no sampling technique that outperforms all other techniques for all motion planning problems. Instead, each proposed technique has different strengths and weaknesses. However, little work has been done to combine these techniques to create new distributions. In this paper, we propose to bias one sampling distribution with another such that the resulting distribution outperforms either of its parent distributions. We present a general framework for biasing samplers that is easily extendable to new distributions and can handle an arbitrary number of parent distributions by chaining them together. Our experimental results show that by combining distributions, we can outperform existing planners. Our results also indicate that no single distribution combination performs best in all problems, and we identify which combinations perform better for the specific application domains studied.
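The biasing framework can be pictured with a toy sketch in which every sampler exposes the same sample() interface, so one distribution can be layered on top of another. The 2D unit-square C-space, the class names, and the choice of a Gaussian-style filter as the biasing layer are assumptions for illustration, not the paper's implementation.

```python
import random

class UniformSampler:
    """Root distribution: uniform samples in a toy 2D unit-square C-space."""
    def sample(self):
        return (random.random(), random.random())

class GaussianBias:
    """Bias a parent distribution toward obstacle boundaries: keep a parent sample
    only when a nearby Gaussian-perturbed copy differs from it in validity."""
    def __init__(self, parent, is_valid, sigma=0.05, max_tries=100):
        self.parent, self.is_valid = parent, is_valid
        self.sigma, self.max_tries = sigma, max_tries

    def sample(self):
        for _ in range(self.max_tries):
            q = self.parent.sample()
            p = tuple(c + random.gauss(0.0, self.sigma) for c in q)
            if self.is_valid(q) != self.is_valid(p):
                return q if self.is_valid(q) else p
        return self.parent.sample()          # fall back to the parent distribution

# Because every sampler exposes sample(), distributions chain freely, e.g.
#   GaussianBias(GaussianBias(UniformSampler(), free, 0.1), free, 0.02)
# where free(q) is any validity test over the toy C-space.
```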

27 citations


Proceedings ArticleDOI
10 Dec 2007
TL;DR: A heuristic approach to planning in an environment with moving obstacles that assumes that the robot has no knowledge of the future trajectory of the moving objects and distinguishes between two types of moving objects in the environment: hard and soft objects.
Abstract: In this paper we present a heuristic approach to planning in an environment with moving obstacles. Our approach assumes that the robot has no knowledge of the future trajectory of the moving objects. Our framework also distinguishes between two types of moving objects in the environment: hard and soft objects. We make this distinction because some application domains can tolerate limited collisions with certain types of moving objects. For example, a robot planning a path in an environment with people could have the people modeled as circular disks with a safe zone surrounding each person. Although the robot may try to stay out of each safe zone, violating that criterion would not necessarily result in planning failure. We show the effectiveness of our planner in general dynamic environments with the soft objects having varying behaviors.
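One simple way to encode the hard/soft distinction is as a configuration cost the planner consults: hard-obstacle collisions are forbidden outright, while entering a soft obstacle's safe zone only incurs a penalty. The circular-disk obstacle model follows the example above; the function names and the penalty value are assumptions.

```python
import math

def config_cost(q, hard_obstacles, soft_obstacles, soft_penalty=10.0):
    """Cost of placing a point robot at q = (x, y) among circular moving obstacles.

    Obstacles are (center_x, center_y, radius) tuples sampled at the current time.
    Hard collisions are never allowed; soft 'safe zone' violations are only penalized.
    """
    def inside(obs):
        cx, cy, r = obs
        return math.hypot(q[0] - cx, q[1] - cy) <= r

    if any(inside(o) for o in hard_obstacles):
        return math.inf                                  # hard collision: reject
    return soft_penalty * sum(inside(o) for o in soft_obstacles)
```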

27 citations


Journal ArticleDOI
01 Jul 2007
TL;DR: Two new techniques are presented, map-based master equation solution and map-based Monte Carlo simulation, to study protein kinetics through folding rates and population kinetics from approximate folding landscapes, models called maps.
Abstract: Motivation: Protein motions play an essential role in many biochemical processes. Lab studies often quantify these motions in terms of their kinetics such as the speed at which a protein folds or the population of certain interesting states like the native state. Kinetic metrics give quantifiable measurements of the folding process that can be compared across a group of proteins such as a wild-type protein and its mutants. Results: We present two new techniques, map-based master equation solution and map-based Monte Carlo simulation, to study protein kinetics through folding rates and population kinetics from approximate folding landscapes, models called maps. From these two new techniques, interesting metrics that describe the folding process, such as reaction coordinates, can also be studied. In this article we focus on two metrics, formation of helices and structure formation around tryptophan residues. These two metrics are often studied in the lab through circular dichroism (CD) spectra analysis and tryptophan fluorescence experiments, respectively. The approximated landscape models we use here are the maps of protein conformations and their associated transitions that we have presented and validated previously. In contrast to other methods such as the traditional master equation and Monte Carlo simulation, our techniques are both fast and can easily be computed for full-length detailed protein models. We validate our map-based kinetics techniques by comparing folding rates to known experimental results. We also look in depth at the population kinetics, helix formation and structure near tryptophan residues for a variety of proteins. Availability: We invite the community to help us enrich our publicly available database of motions and kinetics analysis by submitting to our server: http://parasol.tamu.edu/foldingserver/ Contact: [email protected]
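The two techniques can be sketched on a small map, i.e., a graph of conformations with transition rates or probabilities between them: the master equation dp/dt = K p gives population kinetics over the map's nodes, and a random walk over the transition probabilities gives Monte Carlo folding trajectories. The explicit-Euler step size and the function names are assumptions; the paper's formulation and validation are considerably more involved.

```python
import numpy as np

def population_kinetics(K, p0, times, dt=1e-3):
    """Map-based master equation: integrate dp/dt = K p by explicit Euler.

    K  : (n, n) rate matrix between map nodes (each column sums to zero)
    p0 : (n,) initial population over the map's conformations
    """
    p, out, t = p0.astype(float).copy(), [], 0.0
    for t_target in times:
        while t < t_target:
            p += dt * (K @ p)
            t += dt
        out.append(p.copy())
    return np.array(out)

def map_monte_carlo(P, start, n_steps, rng=None):
    """Map-based Monte Carlo: a random walk over map nodes driven by the
    transition probability matrix P (each row sums to one)."""
    rng = np.random.default_rng() if rng is None else rng
    path, node = [start], start
    for _ in range(n_steps):
        node = int(rng.choice(len(P), p=P[node]))
        path.append(node)
    return path
```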

23 citations


Book ChapterDOI
21 Apr 2007
TL;DR: This work provides a new sampling strategy called Probabilistic Boltzmann Sampling (PBS) that enables us to approximate the folding landscape with much smaller maps, typically by several orders of magnitude, and describes a new analysis technique, Map-based Monte Carlo (MMC) simulation, to stochastically extract folding pathways from the map.
Abstract: It has recently been found that some RNA functions are determined by the actual folding kinetics and not just the RNA's nucleotide sequence or its native structure. We present new computational tools for simulating and analyzing RNA folding kinetic metrics such as population kinetics, folding rates, and the folding of particular subsequences. Our method first builds an approximate representation (called a map) of the RNA's folding energy landscape, and then uses specialized analysis techniques to extract folding kinetics from the map. We provide a new sampling strategy called Probabilistic Boltzmann Sampling (PBS) that enables us to approximate the folding landscape with much smaller maps, typically by several orders of magnitude. We also describe a new analysis technique, Map-based Monte Carlo (MMC) simulation, to stochastically extract folding pathways from the map. We demonstrate that our technique can be applied to large RNA (e.g., 200+ nucleotides), where representing the full landscape is infeasible, and that our tools provide results comparable to other simulation methods that work on complete energy landscapes. We present results showing that our approach computes the same relative functional rates as seen in experiments for the relative plasmid replication rates of ColE1 RNAII and its mutants, and for the relative gene expression rates of MS2 phage RNA and its mutants.
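The Probabilistic Boltzmann Sampling idea, keeping a map that is much smaller than the full landscape while still being dominated by low-energy conformations, can be sketched as a Boltzmann-weighted subsample of candidate structures. The acceptance rule, the target size k, and the kB/T constants below are illustrative assumptions, not the paper's exact procedure.

```python
import math
import random

def boltzmann_subsample(structures, energies, k, T=310.0, kB=0.0019872):
    """Keep roughly k structures, each retained with probability proportional to
    its Boltzmann weight exp(-E / (kB*T)), so low-energy conformations dominate
    the reduced map.  Energies are in kcal/mol; kB is in kcal/(mol*K)."""
    weights = [math.exp(-e / (kB * T)) for e in energies]
    total = sum(weights)
    keep = []
    for s, w in zip(structures, weights):
        if random.random() < min(1.0, k * w / total):
            keep.append(s)
    return keep
```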

18 citations


Proceedings ArticleDOI
10 Apr 2007
TL;DR: This paper proposes planning with reachable distance (PRD) to overcome this challenge by first precomputing the subspace satisfying the closure constraints, then directly sampling in it, representing the chain as a hierarchy of sub-chains.
Abstract: Motion planning for closed-chain systems is particularly difficult due to additional closure constraints placed on the system. In fact, the probability of randomly selecting a set of joint angles that satisfy the closure constraints is zero. We propose planning with reachable distance (PRD) to overcome this challenge by first precomputing the subspace satisfying the closure constraints, then directly sampling in it. To do so, we represent the chain as a hierarchy of sub-chains. Then we calculate the "closure" sub-space as appropriate reachable distance ranges of sub-chains satisfying the closure constraints. This provides two distinct advantages over traditional approaches: (1) configurations are quickly sampled and converted to joint angles using basic trigonometric functions instead of more expensive inverse kinematics solvers, and (2) configurations are guaranteed to be closed. In this paper, we describe this hierarchical chain representation and give a sampling algorithm with complexity linear in the number of links. We provide the necessary motion planning primitives for most sampling-based motion planners. Our experimental results show our method is fast, making sampling closed configurations comparable to sampling open-chain configurations that ignore closure constraints. Our method is general, easy to implement, and extends to other distance-related constraints besides the ones demonstrated here.
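The reachable-distance idea can be illustrated with a small recursive sketch: the chain is split into two sub-chains, each sub-chain's reachable-distance interval is intersected with the triangle inequality imposed by the closure distance D, a virtual length is sampled inside that interval, and the angle between the two sub-chains follows from the law of cosines. Function names and the nested return value are assumptions; the paper's hierarchy and planner primitives are richer.

```python
import math
import random

def reach(lengths):
    """Reachable end-to-end distance interval [lo, hi] of a serial sub-chain."""
    hi = sum(lengths)
    lo = max(0.0, 2.0 * max(lengths) - hi)
    return lo, hi

def sample_closed(lengths, D):
    """Sample a configuration of a chain whose endpoints must be exactly D apart.

    Returns a nested (left, right, angle) structure; closure holds by construction
    because every virtual length is drawn from its feasible interval."""
    if len(lengths) == 1:
        return lengths[0]
    left, right = lengths[:len(lengths) // 2], lengths[len(lengths) // 2:]
    (llo, lhi), (rlo, rhi) = reach(left), reach(right)
    # L must be realizable by the left sub-chain AND admit some feasible R that
    # closes the triangle (|L - R| <= D <= L + R).
    lo, hi = max(llo, rlo - D, D - rhi), min(lhi, rhi + D)
    if lo > hi:
        raise ValueError("closure constraint cannot be satisfied")
    L = random.uniform(lo, hi)
    R = random.uniform(max(rlo, abs(D - L)), min(rhi, D + L))
    cos_a = (L * L + R * R - D * D) / max(2.0 * L * R, 1e-12)
    angle = math.acos(max(-1.0, min(1.0, cos_a)))       # law of cosines
    return (sample_closed(left, L), sample_closed(right, R), angle)
```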

13 citations


Proceedings ArticleDOI
10 Apr 2007
TL;DR: This work proposes the use of local metrics that provide insight into the complexity of the different regions in the model and global metrics that describe the process as a whole and shows how these metrics model the efficiency of C-space exploration and help to identify different modeling stages.
Abstract: Many sampling methods for motion planning explore the robot's configuration space (C-space) starting from a set of configuration(s) and incrementally explore surrounding areas to produce a growing model of the space. Although there is a common understanding of the strengths and weaknesses of these techniques, metrics for analyzing the incremental exploration process and for evaluating the performance of incremental samplers have been lacking. We propose the use of local metrics that provide insight into the complexity of the different regions in the model and global metrics that describe the process as a whole. These metrics require only local information and can be efficiently computed. We illustrate the use of our proposed metrics to analyze representative incremental strategies including rapidly-exploring random trees (RRT), expansive-space trees (EST), and the original randomized path planner (RPP). We show how these metrics model the efficiency of C-space exploration and help to identify different modeling stages. In addition, these metrics are ideal for adapting space exploration to improve performance.
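As a concrete example of what such metrics can look like, the sketch below tracks one local metric (the failed-expansion ratio around each node, a proxy for how constrained that region is) and one global metric (the overall failure ratio of the growth process). These particular definitions are illustrative stand-ins, not necessarily the metrics proposed in the paper.

```python
class ExplorationStats:
    """Track simple local and global metrics for an incremental sampler (e.g. a tree
    planner): record() is called after every attempted expansion from a node."""

    def __init__(self):
        self.attempts, self.failures = {}, {}
        self.total_attempts = self.total_failures = 0

    def record(self, node_id, success):
        self.attempts[node_id] = self.attempts.get(node_id, 0) + 1
        self.failures[node_id] = self.failures.get(node_id, 0) + (not success)
        self.total_attempts += 1
        self.total_failures += not success

    def local_difficulty(self, node_id):
        """Local metric: fraction of failed expansions attempted from this node."""
        return self.failures.get(node_id, 0) / max(1, self.attempts.get(node_id, 0))

    def global_difficulty(self):
        """Global metric: overall fraction of failed expansions so far."""
        return self.total_failures / max(1, self.total_attempts)
```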

13 citations


Proceedings ArticleDOI
16 Sep 2007
TL;DR: This work presents the STAPL pArray, the parallel equivalent of the sequential STL valarray, a fixed-size data structure optimized for storing and accessing data based on one-dimensional indices, and describes the pArray design and shows how it can support a variety of underlying data distribution policies currently available in STAPL.
Abstract: The Standard Template Adaptive Parallel Library (STAPL) is a parallel programming framework that extends C++ and STL with support for parallelism. STAPL provides parallel data structures (pContainers) and generic parallel algorithms (pAlgorithms), and a methodology for extending them to provide customized functionality. STAPL pContainers are thread-safe, concurrent objects, i.e., shared objects that provide parallel methods that can be invoked concurrently. They provide views as a generic means to access data that can be passed as input to generic pAlgorithms. In this work, we present the STAPL pArray, the parallel equivalent of the sequential STL valarray, a fixed-size data structure optimized for storing and accessing data based on one-dimensional indices. We describe the pArray design and show how it can support a variety of underlying data distribution policies currently available in STAPL, such as blocked or block-cyclic. We provide experimental results showing that pAlgorithms using the pArray scale well to more than 2,000 processors. We also provide results using different data distributions that illustrate that the performance of pAlgorithms and pArray methods is usually sensitive to the underlying data distribution, and moreover, that there is no one data distribution that performs best for all pAlgorithms, processor counts, or machines.
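The blocked and block-cyclic policies mentioned above reduce to simple index-to-location maps. The sketch below (written in Python purely for illustration; STAPL itself is a C++ library and this is not its interface) shows both mappings; the function names are assumptions.

```python
def blocked_location(i, n, nlocs):
    """Blocked distribution: contiguous chunks of about n/nlocs elements per location."""
    block = -(-n // nlocs)              # ceil(n / nlocs)
    return i // block

def block_cyclic_location(i, block, nlocs):
    """Block-cyclic distribution: blocks of `block` elements dealt out round-robin."""
    return (i // block) % nlocs

# With n = 16 elements over 4 locations:
#   blocked           -> 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3
#   block-cyclic(b=2) -> 0 0 1 1 2 2 3 3 0 0 1 1 2 2 3 3
```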

11 citations


Book ChapterDOI
01 Oct 2007
TL;DR: The design and implementation of the stapl associative pContainers, a collection of parallel data structures and algorithms that provide optimal insert, search, and delete operations for a distributed collection of elements based on keys, are presented.
Abstract: The Standard Template Adaptive Parallel Library (stapl) is a parallel programming framework that extends C++ and stl with support for parallelism. stapl provides a collection of parallel data structures (pContainers) and algorithms (pAlgorithms) and a generic methodology for extending them to provide customized functionality. stapl pContainers are thread-safe, concurrent objects, i.e., shared objects that provide parallel methods that can be invoked concurrently. They also provide appropriate interfaces that can be used by generic pAlgorithms. In this work, we present the design and implementation of the stapl associative pContainers: pMap, pSet, pMultiMap, pMultiSet, pHashMap, and pHashSet. These containers provide optimal insert, search, and delete operations for a distributed collection of elements based on keys. Their methods include counterparts of the methods provided by the stl associative containers, and also some asynchronous (non-blocking) variants that can provide improved performance in parallel. We evaluate the performance of the stapl associative pContainers on an IBM Power5 cluster, an IBM Power3 cluster, and on a Linux-based Opteron cluster, and show that the new pContainer asynchronous methods, generic pAlgorithms (e.g., pfind), and a sort application based on associative pContainers all provide good scalability on more than 10^3 processors.
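The key-based partitioning behind such containers can be pictured with a small conceptual sketch: each key hashes to a home shard, and insert, find, and erase all run at that shard. This is written in Python for illustration only and is not STAPL's API; class and method names are assumptions (the asynchronous variants mentioned above would simply not wait for the remote operation to complete).

```python
class DistributedHashMap:
    """Conceptual key-partitioned associative container: `nlocs` independent
    sub-maps (shards), with each key owned by exactly one shard."""

    def __init__(self, nlocs):
        self.shards = [dict() for _ in range(nlocs)]

    def _home(self, key):
        return hash(key) % len(self.shards)   # which location owns this key

    def insert(self, key, value):
        self.shards[self._home(key)][key] = value

    def find(self, key):
        return self.shards[self._home(key)].get(key)

    def erase(self, key):
        self.shards[self._home(key)].pop(key, None)
```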

11 citations


Book
01 Jan 2007
TL;DR: Novel metrics are introduced for the analysis of problem features and planner performance at three levels (node, global, and region), and a set of general metrics that can be applied in both graph-based and tree-based planners is presented.
Abstract: A motion planner finds a sequence of potential motions for a robot to transit from an initial to a goal state. To deal with the intractability of this problem, a class of methods known as sampling-based planners build approximate representations of potential motions through random sampling. This selective random exploration of the space has produced many remarkable results, including solving many previously unsolved problems. Sampling-based planners usually represent the motions as a graph (e.g., the Probabilistic Roadmap Methods or PRMs), or as a tree (e.g., the Rapidly-exploring Random Tree or RRT). Although many sampling-based planners have been proposed, we do not know how to select among them because their different sampling biases make their performance depend on the features of the planning space. Moreover, since a single problem can contain regions with vastly different features, there may not exist a simple exploration strategy that will perform well in every region. Unfortunately, we lack quantitative tools to analyze problem features and planner performance that would enable us to match planners to problems. We introduce novel metrics for the analysis of problem features and planner performance at multiple levels: node level, global level, and region level. At the node level, we evaluate how new samples improve coverage and connectivity of the evolving model. At the global level, we evaluate how new samples improve the structure of the model. At the region level, we identify groups or regions that share similar features. This is a set of general metrics that can be applied in both graph-based and tree-based planners. We show several applications for these tools to compare planners, to decide whether to stop planning or to switch strategies, and to adjust sampling in different regions of the problem.
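The node-level metric can be illustrated with a union-find sketch that classifies every new sample by how it changes the coverage and connectivity of the evolving model; the label names below are illustrative, not necessarily the paper's terminology.

```python
class NodeClassifier:
    """Classify each new roadmap/tree sample by its effect on the model:
    creating a new component, merging components, expanding one component,
    or redundantly oversampling inside one component."""

    def __init__(self):
        self.parent = {}

    def _find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def add(self, node, neighbors):
        """`neighbors` are existing nodes the new sample connects to."""
        self.parent[node] = node
        roots = {self._find(n) for n in neighbors}
        for r in roots:                                     # merge touched components
            self.parent[r] = node
        if not neighbors:
            return "new-coverage"        # starts a brand-new component
        if len(roots) > 1:
            return "new-connectivity"    # merges two or more components
        return "expand" if len(neighbors) == 1 else "oversample"
```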