Author

Michael H. Goldwasser

Bio: Michael H. Goldwasser is an academic researcher from Saint Louis University. The author has contributed to research in topics: Python (programming language) & Scheduling (computing). The author has an h-index of 18 and has co-authored 37 publications receiving 1324 citations. Previous affiliations of Michael H. Goldwasser include Loyola University Chicago & Stanford University.

Papers
Journal ArticleDOI
TL;DR: In this paper, the authors studied the complexity of tile self-assembly under several natural generalizations of the tile self-assembly model and, for thin rectangles, provided a tighter lower bound of $\Omega(\frac{N^{1/k}}{k})$ for the standard model.
Abstract: In this paper, we study the complexity of self-assembly under models that are natural generalizations of the tile self-assembly model. In particular, we extend Rothemund and Winfree's study of the tile complexity of tile self-assembly [Proceedings of the 32nd Annual ACM Symposium on Theory of Computing, Portland, OR, 2000, pp. 459--468]. They provided a lower bound of $\Omega(\frac{\log N}{\log\log N})$ on the tile complexity of assembling an $N\times N$ square for almost all N. Adleman et al. [Proceedings of the 33rd Annual ACM Symposium on Theory of Computing, Heraklion, Greece, 2001, pp. 740--748] gave a construction which achieves this bound. We consider whether the tile complexity for self-assembly can be reduced through several natural generalizations of the model. One of our results is a tile set of size $O(\sqrt{\log N})$ which assembles an $N\times N$ square in a model which allows flexible glue strength between nonequal glues. This result is matched for almost all N by a lower bound dictated by Kolmogorov complexity. For three other generalizations, we show that the $\Omega(\frac{\log N}{\log\log N})$ lower bound applies to $N\times N$ squares. At the same time, we demonstrate that there are some other shapes for which these generalizations allow reduced tile sets. Specifically, for thin rectangles with length N and width k, we provide a tighter lower bound of $\Omega(\frac{N^{1/k}}{k})$ for the standard model, yet we also give a construction which achieves $O(\frac{\log N}{\log\log N})$ complexity in a model in which the temperature of the tile system is adjusted during assembly. We also investigate the problem of verifying whether a given tile system uniquely assembles into a given shape; we show that this problem is NP-hard for three of the generalized models.
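
For intuition, a minimal Python sketch of the standard temperature-based tile assembly model that these generalizations extend is given below. The tile set and parameters are hypothetical toys, not constructions from the paper: a tile may attach to a growing assembly wherever the total strength of its matching glues meets the temperature threshold.

from collections import namedtuple

# Each side carries a glue: a (label, strength) pair.
Tile = namedtuple("Tile", ["north", "east", "south", "west"])

def can_attach(assembly, pos, tile, temperature):
    """In the standard model, a tile may attach at pos if the glues it
    shares with occupied neighbors have total strength >= temperature."""
    x, y = pos
    strength = 0
    # neighbor offset -> (this tile's facing side, neighbor's facing side)
    sides = {(0, 1): ("north", "south"), (1, 0): ("east", "west"),
             (0, -1): ("south", "north"), (-1, 0): ("west", "east")}
    for (dx, dy), (mine, theirs) in sides.items():
        nbr = assembly.get((x + dx, y + dy))
        if nbr is None:
            continue
        my_glue, nbr_glue = getattr(tile, mine), getattr(nbr, theirs)
        if my_glue[0] == nbr_glue[0]:      # equal labels bind ...
            strength += my_glue[1]         # ... with the glue's strength
    return strength >= temperature

def grow(seed, tiles, temperature, steps):
    """Grow greedily from a seed tile at the origin for a few rounds."""
    assembly = {(0, 0): seed}
    for _ in range(steps):
        frontier = {(x + dx, y + dy) for (x, y) in assembly
                    for dx, dy in ((0, 1), (1, 0), (0, -1), (-1, 0))}
        for pos in frontier - assembly.keys():
            for t in tiles:
                if can_attach(assembly, pos, t, temperature):
                    assembly[pos] = t
                    break
    return assembly

# Toy tile set: a single tile with one strength-1 glue on every side
# spreads outward from the seed at temperature 1.
g = ("a", 1)
T = Tile(north=g, east=g, south=g, west=g)
print(len(grow(seed=T, tiles=[T], temperature=1, steps=3)), "tiles placed")

The generalized models in the paper change exactly this attachment rule: flexible glue strengths let nonequal labels bind, and a temperature that is adjusted during assembly changes the threshold over time, which is what permits the smaller tile sets described above.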

225 citations

Journal ArticleDOI
TL;DR: A detailed survey of the field of buffer management policies in the context of packet transmission for network switches is provided, describing various models of the problem that have been studied, and summarizing the known results.
Abstract: Over the past decade, there has been great interest in the study of buffer management policies in the context of packet transmission for network switches. In a typical model, a switch receives packets on one or more input ports, with each packet having a designated output port through which it should be transmitted. An online policy must consider bandwidth limits on the rate of transmission, memory constraints impacting the buffering of packets within a switch, and variations in packet properties used to differentiate quality of service. With so many constraints, a switch may not be able to deliver all packets, in which case some will be dropped. In the online algorithms community, researchers have used competitive analysis to evaluate the quality of an online policy in maximizing the value of those packets it is able to transmit. In this article, we provide a detailed survey of the field, describing various models of the problem that have been studied, and summarizing the known results.
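
As a concrete illustration of the kind of policy such models study, here is a minimal Python sketch of a single bounded buffer with one transmission per time step and a natural greedy rule. The instance and the policy are hypothetical, not taken from the survey; competitive analysis would compare the value a policy like this transmits against an offline optimum that sees the entire arrival sequence in advance.

import heapq

def greedy_policy(arrivals, capacity):
    """arrivals[t] is the list of packet values arriving at step t.
    One packet is transmitted per step; when the buffer overflows,
    the least valuable packets are dropped (a natural greedy rule)."""
    buffer = []                       # min-heap of buffered packet values
    transmitted = 0
    t = 0
    while t < len(arrivals) or buffer:
        for v in (arrivals[t] if t < len(arrivals) else []):
            heapq.heappush(buffer, v)
        while len(buffer) > capacity:     # overflow: drop cheapest packets
            heapq.heappop(buffer)
        if buffer:                        # transmit the most valuable packet
            best = max(buffer)
            buffer.remove(best)
            heapq.heapify(buffer)
            transmitted += best
        t += 1
    return transmitted

# A burst that overflows a 2-packet buffer, then a quiet step, then more.
print(greedy_policy([[1, 1, 1, 5], [], [2]], capacity=2))   # -> 8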

139 citations

Proceedings ArticleDOI
17 Sep 1995
TL;DR: The Stanford Assembly Analysis Tool (STAAT), a software system which can automatically determine how to assemble a product from its parts given only a geometric description of the assembly, could provide immediate feedback to a team of product designers about the complexity of assembling the product being designed.
Abstract: In this paper, we present a software system which can automatically determine how to assemble a product from its parts, given only a geometric description of the assembly. Incorporated into a larger CAD tool, this system, the Stanford Assembly Analysis Tool (STAAT), could thus provide immediate feedback to a team of product designers about the complexity of assembling the product being designed. This would be particularly useful in complex assemblies where each designer may not be fully aware of the impact of his design changes on the assemblability of the product as a whole. STAAT’s underlying data structure is an efficient version of the non-directional blocking graph (NDBG), a compact representation of the blocking relationships in an assembly. STAAT implements several techniques using this structure, under a unified approach in which the same software “machinery” can analyze the product under different assembly constraints. In initial experiments conducted on relatively small polyhedral assemblies of 20 to 40 parts and 500 to 1500 faces, using one-step translational motions, STAAT generated assembly sequences much more quickly than did previous NDBG-based systems. We are now working on extending both these results and the underlying theory to more sophisticated cases.
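
The Python sketch below illustrates how one-step translational disassembly planning can be driven by directional blocking information of the kind an NDBG encodes. The blocking data here is hypothetical and hand-written; a system like STAAT derives it from the geometry and also reasons about subassemblies rather than only single parts.

def disassembly_sequence(parts, blocking):
    """blocking[d] is a set of pairs (a, b) meaning part a is blocked by
    part b when a translates in direction d. Repeatedly remove a part
    that nothing still present blocks in some direction; reversing the
    result gives a candidate assembly sequence."""
    remaining = set(parts)
    order = []
    while remaining:
        removable = None
        for p in sorted(remaining):            # sorted for reproducibility
            for d, pairs in blocking.items():
                if not any(a == p and b in remaining for a, b in pairs):
                    removable = (p, d)
                    break
            if removable:
                break
        if removable is None:
            raise ValueError("no part is removable by a single translation")
        p, d = removable
        remaining.discard(p)
        order.append((p, d))
    return order

# Toy stack: part C rests on B, which rests on A; only +z removal modeled.
blocking = {"+z": {("A", "B"), ("B", "C")}}    # A is blocked by B, B by C
print(disassembly_sequence(["A", "B", "C"], blocking))
# -> [('C', '+z'), ('B', '+z'), ('A', '+z')]; reverse it to assemble.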

129 citations

Proceedings ArticleDOI
01 Feb 2000
TL;DR: A new external memory data structure, the buffered repository tree, is described and used to provide the first non-trivial external memory algorithm for directed breadth-first search (BFS) and an improved external algorithm for directed depth-first search.
Abstract: We describe a new external memory data structure, the buffered repository tree, and use it to provide the first non-trivial external memory algorithm for directed breadth-first search (BFS) and an improved external algorithm for directed depth-first search. We also demonstrate the equivalence of various formulations of external undirected BFS, and we use these to give the first I/O-optimal BFS algorithm for undirected trees.
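
A minimal in-memory Python sketch of the buffered-repository-tree idea follows: each node buffers insertions and flushes them lazily toward the leaves, so an insertion is cheap and an extraction pays only along one root-to-leaf path. This is a simplified model for intuition, not the external-memory structure of the paper, whose node and buffer sizes are tuned to disk blocks.

class Node:
    def __init__(self, lo, hi, cap):
        self.lo, self.hi, self.cap = lo, hi, cap
        self.buffer = []                       # pending (key, value) pairs
        self.left = self.right = None          # children built lazily

    def mid(self):
        return (self.lo + self.hi) // 2

    def is_leaf(self):
        return self.lo == self.hi

    def flush(self):
        """Push buffered pairs one level down toward the leaves."""
        if self.is_leaf():
            return                             # leaves just accumulate
        if self.left is None:
            m = self.mid()
            self.left = Node(self.lo, m, self.cap)
            self.right = Node(m + 1, self.hi, self.cap)
        for k, v in self.buffer:
            child = self.left if k <= self.mid() else self.right
            child.buffer.append((k, v))
        self.buffer = []
        for child in (self.left, self.right):
            if len(child.buffer) > child.cap:
                child.flush()

def insert(root, key, value):
    root.buffer.append((key, value))
    if len(root.buffer) > root.cap:
        root.flush()                           # amortized: rare and batched

def extract_all(root, key):
    """Collect every value filed under key, flushing only the nodes on
    the path from the root to key's leaf."""
    node = root
    while not node.is_leaf():
        node.flush()
        node = node.left if key <= node.mid() else node.right
    out = [v for k, v in node.buffer if k == key]
    node.buffer = [(k, v) for k, v in node.buffer if k != key]
    return out

root = Node(0, 15, cap=4)
for k, v in [(3, "a"), (3, "b"), (9, "c"), (3, "d")]:
    insert(root, k, v)
print(extract_all(root, 3))                    # -> ['a', 'b', 'd']

In the directed BFS algorithm of the paper, this pattern of cheap batched updates with occasional targeted extractions is what keeps the total I/O cost low.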

127 citations

Proceedings ArticleDOI
27 Feb 2002
TL;DR: Experiences are discussed in which students of a programming course were asked to submit both an implementation and a test set, an approach that introduces implicit principles of software testing together with a bit of fun competition.
Abstract: We discuss our experiences in which students of a programming course were asked to submit both an implementation as well as a test set. A portion of a student's grade was then devoted both to the validity of a student's program on others' test sets, as well as how that student's test set performed in uncovering flaws in others' programs. The advantages are many, as this introduces implicit principles of software testing together with a bit of fun competition. The major complication is that such an all-pairs execution of tests grows quadratically with the number of participants, necessitating a fully automated scoring system.
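
A minimal sketch of the all-pairs execution this requires is shown below, with submissions modeled as plain Python callables; the harness and scoring rule are hypothetical, not the authors' system. The nested loops over testers and authors make the quadratic growth in the number of participants plain.

def all_pairs_scores(programs, test_sets):
    """programs: {student: function}; test_sets: {student: [(input, expected)]}.
    A program scores a point for each test it passes; a test set scores
    a point for each flaw it uncovers in another student's program."""
    program_score = {s: 0 for s in programs}
    test_score = {s: 0 for s in test_sets}
    for tester, tests in test_sets.items():
        for author, prog in programs.items():
            for x, expected in tests:
                try:
                    ok = prog(x) == expected
                except Exception:
                    ok = False                 # a crash counts as a failure
                if ok:
                    program_score[author] += 1
                elif author != tester:         # uncovered a flaw in a peer
                    test_score[tester] += 1
    return program_score, test_score

# Toy assignment: absolute value. One submission mishandles negatives.
programs = {"ann": abs, "bob": lambda x: x}
tests = {"ann": [(2, 2), (-2, 2)], "bob": [(3, 3)]}
print(all_pairs_scores(programs, tests))
# -> ({'ann': 3, 'bob': 2}, {'ann': 1, 'bob': 0})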

87 citations


Cited by
Journal ArticleDOI
TL;DR: This work developed, implemented, and thoroughly tested rapid bootstrap heuristics in RAxML (Randomized Axelerated Maximum Likelihood) that are more than an order of magnitude faster than current algorithms and can contribute to resolving the computational bottleneck and improve current methodology in phylogenetic analyses.
Abstract: Despite recent advances achieved by application of high-performance computing methods and novel algorithmic techniques to maximum likelihood (ML)-based inference programs, the major computational bottleneck still consists in the computation of bootstrap support values. Conducting a probably insufficient number of 100 bootstrap (BS) analyses with current ML programs on large datasets—either with respect to the number of taxa or base pairs—can easily require a month of run time. Therefore, we have developed, implemented, and thoroughly tested rapid bootstrap heuristics in RAxML (Randomized Axelerated Maximum Likelihood) that are more than an order of magnitude faster than current algorithms. These new heuristics can contribute to resolving the computational bottleneck and improve current methodology in phylogenetic analyses. Computational experiments to assess the performance and relative accuracy of these heuristics were conducted on 22 diverse DNA and AA (amino acid), single gene as well as multigene, real-world alignments containing from 125 up to 7764 sequences. The standard BS (SBS) and rapid BS (RBS) values drawn on the best-scoring ML tree are highly correlated and show almost identical average support values. The weighted RF (Robinson-Foulds) distance between SBS- and RBS-based consensus trees was smaller than 6% in all cases (average 4%). More importantly, RBS inferences are between 8 and 20 times faster (average 14.73) than SBS analyses with RAxML and between 18 and 495 times faster than BS analyses with competing programs, such as PHYML or GARLI. Moreover, this performance improvement increases with alignment size. Finally, we have set up two freely accessible Web servers for this significantly improved version of RAxML that provide access to the 200-CPU cluster of the Vital-IT unit at the Swiss Institute of Bioinformatics and the 128-CPU cluster of the CIPRES project at the San Diego Supercomputer Center. These Web servers offer the possibility to conduct large-scale phylogenetic inferences to a large part of the community that does not have access to, or the expertise to use, high-performance computing resources. (Maximum likelihood; phylogenetic inference; rapid bootstrap; RAxML; support values.)
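
For intuition about the resampling at the heart of any bootstrap analysis, here is a minimal Python sketch: each replicate redraws alignment columns with replacement, and one tree is then inferred per replicate. The toy alignment is hypothetical, and the maximum-likelihood tree inference that the RAxML heuristics accelerate is deliberately left out.

import random

def bootstrap_replicates(alignment, n_replicates, seed=0):
    """alignment: {taxon: sequence}, all sequences the same length.
    Yields resampled alignments, one per bootstrap replicate."""
    rng = random.Random(seed)
    length = len(next(iter(alignment.values())))
    for _ in range(n_replicates):
        cols = [rng.randrange(length) for _ in range(length)]
        yield {taxon: "".join(seq[c] for c in cols)
               for taxon, seq in alignment.items()}

alignment = {"human": "ACGTACGT", "mouse": "ACGAACGA", "frog": "TCGAACTA"}
for i, rep in enumerate(bootstrap_replicates(alignment, 2)):
    print(i, rep)   # in a real analysis: infer one ML tree per replicate,
                    # then draw support values on the best-scoring tree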

6,585 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, treating topics from robot motion planning to planning under the differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.
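
As a small taste of the planning-under-uncertainty material, here is a minimal Python sketch of value iteration on a toy two-state Markov decision process; the MDP is a hypothetical illustration, not an example drawn from the book.

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """P[s][a] is a list of (probability, next_state) pairs; R[s][a] is
    the immediate reward. Returns the optimal value of each state."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

states, actions = ["safe", "risky"], ["stay", "switch"]
P = {"safe":  {"stay": [(1.0, "safe")], "switch": [(1.0, "risky")]},
     "risky": {"stay": [(0.5, "risky"), (0.5, "safe")],
               "switch": [(1.0, "safe")]}}
R = {"safe": {"stay": 1.0, "switch": 0.0},
     "risky": {"stay": 2.0, "switch": 0.0}}
print(value_iteration(states, actions, P, R))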

6,340 citations

05 Mar 2013
TL;DR: For many applications, a randomized algorithm is either the simplest or the fastest algorithm available, and sometimes both; as discussed by the authors, this book introduces the basic concepts in the design and analysis of randomized algorithms and provides a comprehensive and representative selection of the algorithms that might be used in each of these areas.
Abstract: For many applications, a randomized algorithm is either the simplest or the fastest algorithm available, and sometimes both. This book introduces the basic concepts in the design and analysis of randomized algorithms. The first part of the text presents basic tools such as probability theory and probabilistic analysis that are frequently used in algorithmic applications. Algorithmic examples are also given to illustrate the use of each tool in a concrete setting. In the second part of the book, each chapter focuses on an important area to which randomized algorithms can be applied, providing a comprehensive and representative selection of the algorithms that might be used in each of these areas. Although written primarily as a text for advanced undergraduates and graduate students, this book should also prove invaluable as a reference for professionals and researchers.
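
As one small example of the kind of algorithm the book analyzes, here is randomized quickselect in Python; its expected linear running time is a standard exercise in probabilistic analysis, and the data below is illustrative.

import random

def quickselect(items, k, rng=random.Random(0)):
    """Return the k-th smallest element (0-indexed) in expected O(n)."""
    pivot = rng.choice(items)
    lo = [x for x in items if x < pivot]
    eq = [x for x in items if x == pivot]
    if k < len(lo):
        return quickselect(lo, k, rng)
    if k < len(lo) + len(eq):
        return pivot
    return quickselect([x for x in items if x > pivot],
                       k - len(lo) - len(eq), rng)

data = [9, 1, 8, 2, 7, 3, 6, 4, 5]
print(quickselect(data, 4))   # -> 5, the median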

785 citations