
Showing papers in "SIGACT News in 1999"


Journal ArticleDOI
TL;DR: There is no fully satisfying mathematical language for discussing the semantics of object-oriented programming languages (e.g. Java or Smalltalk); the goal of this book is to fill that void.
Abstract: When a new programming language is designed it is sometimes useful to have a formal language in which we can describe with mathematical rigor what a program in that language is doing. This mathematical language can be thought of as a semantics of the programming language. If the semantics is sufficiently well developed we can use it to prove that a program does what we think it does. No actual programmer ever does this, but it is comforting to know that it can be done. There are several different mathematical languages for giving the semantics of imperative programming languages (e.g. C or Pascal): control flow graphs or logical invariants can be used for this. There are also several different λ-calculi for describing the semantics of functional programming languages (e.g. Lisp or ML). Likewise we can use classical logic and resolution as a semantics of logical languages (e.g. Prolog). However, there is no mathematical language for discussing the semantics of object-oriented programming languages (e.g. Java or Smalltalk) that is fully satisfying. The goal of this book is to fill this void, or at least to make a start. The authors come from the functional programming and formal semantics side of the field, rather than the software engineering side; therefore they propose a calculus that is based on the λ-calculus rather than on any actual programming language in use. However, in this new calculus, which they call the ς-calculus, objects are primitive instead of functions. The authors produce a family of ς-calculi in order to represent more and more complicated object-oriented techniques and structures.
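To give a flavour of the notation (a minimal sketch following the standard presentation of the untyped object calculus, not a quotation from the book): an object is a record of methods, each binding a self parameter with ς, and the primitive operations are method invocation and method update:

    \[ o \;\triangleq\; [\, l_i = \varsigma(x_i)\, b_i \,]_{i \in 1..n} \qquad \text{(an object with methods } l_1,\dots,l_n\text{)} \]
    \[ o.l_j \;\longrightarrow\; b_j\{x_j \mapsto o\} \qquad \text{(invocation: the self variable is bound to the whole object)} \]
    \[ o.l_j \Leftarrow \varsigma(y)\, b \;\longrightarrow\; [\, l_j = \varsigma(y)\, b,\; l_i = \varsigma(x_i)\, b_i \,]_{i \neq j} \qquad \text{(method update)} \]

Functions and the λ-calculus can then be encoded on top of these primitives, rather than the other way around.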

85 citations


Journal ArticleDOI
TL;DR: Computing Reviews is a monthly journal that publishes critical reviews on a broad range of computing subjects, including models of computation, formal languages, computational complexity theory, analysis of algorithms, and logics and semantics of programs.
Abstract: As a service to our readers, SIGACT News has an agreement with Computing Reviews to reprint reviews of books and articles of interest to the theoretical computer science community. Computing Reviews is a monthly journal that publishes critical reviews on a broad range of computing subjects, including models of computation, formal languages, computational complexity theory, analysis of algorithms, and logics and semantics of programs. ACM members can receive a subscription to Computing Reviews for $45 per year by writing to ACM headquarters.

74 citations


Journal ArticleDOI
TL;DR: The quantum analogue of classical communication complexity, the quantum communication complexity model, was defined and studied, and some of the main results in the area are presented.
Abstract: Classical communication complexity has been intensively studied since its conception two decades ago. Recently, its quantum analogue, the quantum communication complexity model, was defined and studied. In this paper we present some of the main results in the area.

29 citations


Journal ArticleDOI
TL;DR: This work analyzes WWW hits recorded on the Stony Brook Algorithms Repository to determine the relative level of interest among 75 algorithmic problems and the extent to which publicly available algorithm implementations satisfy this demand.
Abstract: We present "market research" for the field of combinatorial algorithms and algorithm engineering, attempting to determine which algorithmic problems are most in demand in applications. We analyze 1,503,135 WWW hits recorded on the Stony Brook Algorithms Repository (http://www.cs.sunysb.edu/~algorith), to determine the relative level of interest among 75 algorithmic problems and the extent to which publicly available algorithm implementations satisfy this demand.

27 citations


Journal ArticleDOI
TL;DR: Inspired by Ian Parberry's "How to present a paper in theoretical computer science" (SIGACT News 19, 2 (1988), pp. 42-47), some advice is provided on how to present results from experimental and empirical research on algorithms.
Abstract: Inspired by Ian Parberry's "How to present a paper in theoretical computer science" (SIGACT News 19, 2 (1988), pp. 42-47), we provide some advice on how to present results from experimental and empirical research on algorithms.

1 Introduction

This note is written primarily for researchers in algorithms who find themselves called upon to present the results of computational experiments. While there has been much recent growth in the amount and quality of experimental research on algorithms, there is still some uncertainty about how to describe the research and present the conclusions. For general advice on presenting papers in theoretical computer science, read Ian Parberry's excellent paper [6]. Here we focus on aspects directly relevant to experimentation and data analysis. Of course, the quality of the talk depends on the quality of the research. For advice on conducting respectable experimental research on algorithms, read McGeoch [5], or Barr et al. [1], or the articles on methodology to appear in [4] and available on-line.

20 citations


Journal ArticleDOI
Jacob Lurie
TL;DR: This book defines the Laplacian of a graph, a matrix closely related to the adjacency matrix, in analogy with the continuous case, and studies the eigenvalues of this Laplacian, which are related to many other more "discrete" invariants.
Abstract: Specifying a graph is equivalent to specifying its adjacency relation, which may be encoded in the form of a matrix. This suggests that study of the adjacency matrix from a linear-algebraic point of view might yield valuable information about graphs. In particular, any invariant associated to the matrix is also an invariant associated to the graph, and might have combinatorial meaning. Spectral graph theory is the study of the relationship between a graph and the eigenvalues of matrices (such as the adjacency matrix) naturally associated to that graph. This book looks at the subject from a geometric point of view, exploiting an analogy between a graph and a Riemannian manifold: Chung defines the Laplacian of a graph, a matrix closely related to the adjacency matrix, in analogy with the continuous case and studies the eigenvalues of this Laplacian. There are several reasons that these eigenvalues may be of interest. On the purely mathematical level, the eigenvalues have the advantage of being an extremely natural invariant which behaves nicely under operations such as Cartesian product and disjoint union. From a combinatorial point of view, the eigenvalues of a graph are related to many other more "discrete" invariants. From a geometric point of view, there are many respects in which the eigenvalues of a graph behave like the spectrum of a compact Riemannian manifold. For the computationally minded, the eigenvalues of a graph are easy to compute, and their relationship to other invariants can often yield good approximations to less tractable computations.
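For the computationally minded reader, the eigenvalues really are easy to compute; the following is a minimal sketch (assuming NumPy, and using a 4-cycle as the example graph) of the normalized Laplacian L = I - D^{-1/2} A D^{-1/2} that Chung works with:

    # A minimal sketch (not from the book): eigenvalues of the normalized
    # Laplacian of a small graph, here the 4-cycle C4.
    import numpy as np

    # Adjacency matrix of C4; any symmetric 0/1 matrix without isolated
    # vertices would do here.
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)

    d = A.sum(axis=1)                        # vertex degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # D^{-1/2}
    L = np.eye(len(d)) - D_inv_sqrt @ A @ D_inv_sqrt

    eigvals = np.linalg.eigvalsh(L)          # L is symmetric, so eigvalsh applies
    print(np.round(eigvals, 6))              # for C4: approximately [0, 1, 1, 2]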

19 citations


Journal ArticleDOI
TL;DR: The purpose of this article is to help the reader uncompress the CPS transform by way of a rational reconstruction from jumps; the focus is on continuation-passing style (CPS) because it brings out the commonalities among the many guises of continuations.
Abstract: Practically all programming languages have some form of control structure or jumping. The more advanced forms of control structure tend to resemble function calls, so much so that they are usually not even described as jumps. Consider for example the library function exit in C. Its use is much like a function, in that it may be called with an argument; but the whole point of exit is of course that its behaviour is utterly non-functional, in that it jumps out of arbitrarily many surrounding blocks and pending function calls. Such a "non-returning function" or "jump with arguments" is an example of a continuation in the sense in which we are interested here.

On the other hand, a simple but fundamental idea in compiling is that a function call is broken down into two jumps: one from the caller to the callee for the call itself, and another jump back from the callee to the caller upon returning. (The return statement in C is in fact listed among the "jump statements" [5].) This is most obvious for void-accepting and -returning functions, but it generalizes to other functions as well, if one is willing to understand "jump" in the broader sense of jump with arguments, i.e. continuation.

In this view, then, continuations are everywhere. Continuations have been used in many different settings, in which they appear in different guises, ranging from mathematical functions to machine addresses. Rather than confine ourselves to a definition of what a continuation is, we will focus on continuation-passing style (CPS), as it brings out the commonalities. The CPS transform compresses a great deal of insight into three little equations in λ-calculus. Making sense of it intuitively, however, requires some background knowledge and a certain fluency. The purpose of this article, therefore, is to help the reader uncompress the CPS transform by way of a rational reconstruction from jumps.

In the sequel, we will attempt to illustrate the correspondence between continuations and jumps (even in the guise of the abhorred goto). The intent is partly historical, to retrace some of the analysis of jumps that led to the discoveries of continuations. At the same time, the language of choice for many researchers during the (pre)history of continuations, ALGOL 60, is not so different from today's mainstream languages (i.e. C); we hope that occasional snippets of C may be more easily accessible to many readers than a functional language would be. So in each of the four sections (Sections 2-5 below) that make up the body of this paper, some C code will be used to give a naive but concrete example of the issue under consideration, before generalizing to a more abstract setting.
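A minimal sketch in Python (the article's own snippets are in C and ALGOL 60; the function names here are invented for illustration) of the two ideas above: a return is a jump back to the caller, made explicit as a continuation argument k, and an exit-like escape is simply another continuation:

    # Not from the article: the caller's return point becomes an explicit
    # continuation argument k instead of an implicit return.
    def add_direct(x, y):
        return x + y                 # implicit jump back to the caller

    def add_cps(x, y, k):
        k(x + y)                     # explicit jump-with-arguments: call k instead of returning

    # Product of a list in CPS, with an extra continuation `abort` playing the
    # role of exit(): on a zero we jump straight out, skipping all pending work.
    def product_cps(xs, k, abort):
        if not xs:
            k(1)
        elif xs[0] == 0:
            abort(0)                 # like exit(): the pending multiplications are discarded
        else:
            product_cps(xs[1:], lambda r: k(xs[0] * r), abort)

    product_cps([2, 3, 4], print, print)   # prints 24
    product_cps([2, 0, 4], print, print)   # prints 0 via the abort continuation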

18 citations


Journal ArticleDOI
Joseph O'Rourke
TL;DR: In this article, the subquadratic algorithm of Kapoor for finding shortest paths on the surface of a polyhedron is described.
Abstract: The subquadratic algorithm of Kapoor for finding shortest paths on a polyhedron is described.

18 citations


Journal ArticleDOI
TL;DR: Five open problems are included that are concerned with the design of efficient algorithms and computational complexity, spanning topics such as sequence comparison, the reconstruction of evolutionary trees, physical mapping, and genetic drug target search.
Abstract: Computational molecular biology has emerged as one of the most exciting interdisciplinary fields in recent years, riding on the success of the ongoing Human Genome Project. The field has not only benefited from the many concepts and techniques developed in theoretical computer science, but also provided many interesting research problems. In this column, we include five open problems that are concerned with the design of efficient algorithms and computational complexity. Some of these problems have existed in the literature for a while but most are relatively new. The topics covered span several main branches of computational molecular biology such as sequence comparison, the reconstruction of evolutionary trees, physical mapping, and genetic drug target search. To save space, for each problem we will give only the necessary mathematical definitions and a brief description of some relevant existing results. The reader is referred to appropriate references for more detailed information about the problems, such as their background, motivation, and relation to other (solved or unsolved) problems. For a general treatment of algorithmic issues in computational molecular biology, see [9, 21, 18].
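As a taste of the most classical of these topics, sequence comparison (this is textbook background, not one of the column's open problems), a minimal sketch of the quadratic-time edit-distance dynamic program:

    # Not one of the column's open problems; just the basic dynamic program
    # behind "sequence comparison": the edit (Levenshtein) distance between
    # two strings in O(|a|*|b|) time.
    def edit_distance(a: str, b: str) -> int:
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i                           # delete all of a[:i]
        for j in range(n + 1):
            d[0][j] = j                           # insert all of b[:j]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # match / substitution
        return d[m][n]

    print(edit_distance("ACGT", "AGT"))   # 1 (delete the C)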

15 citations


Journal ArticleDOI
TL;DR: Results from generalized operators in the context of polynomial-time machines, and gates computing arbitrary groupoidal functions in the context of Boolean circuits, are surveyed, and relationships to a generalized quantifier concept from finite model theory are presented.
Abstract: In the past few years, generalized operators (a.k.a. leaf languages) in the context of polynomial-time machines, and gates computing arbitrary groupoidal functions in the context of Boolean circuits, have raised much interest. We survey results from both areas, point out connections between them, and present relationships to a generalized quantifier concept from finite model theory.

12 citations


Journal ArticleDOI
TL;DR: It was an open question whether one-way functions having the algebraic and security properties that these protocols require could be created from any given one-way function; recently, Hemaspaandra and Rothe resolved this question in the affirmative, by showing that one-way functions exist if and only if strong, total, commutative, associative one-way functions exist.

Journal ArticleDOI
TL;DR: "Logic for Applications" is a rare kind of a textbook which tries to combine together classical and non-classical logics and logic programming to provide a flexible textbook which could be used for teaching a course with a stress on logic programming.
Abstract: Among the books on foundations of mathematics and theoretical computer science, textbooks on logic and logic programming are usually quite distinct . The former introduce propositional and predicate logic, study their properties and prove the soundness and completeness results togethe r with some more advanced results, such as the NP-completeness of the satisfiability problem fo r propositional case and undecidability of this problem for first order logic . The proof system s discussed in these books are usually either Hilbertor Gentzen-style . The latter category of the textbooks usually start with the introduction of the Horn fragment of propositional (and firs t order) logics. Herbrand models are described as the semantics and SLD-resolution – as the proo f procedure for this fragment . There is also a third category of textbooks, those devoted to non-classical logics, such as modal , intuitionistic, temporal or linear . \"Logic for Applications\", the book under review is a rare kind of a textbook which tries t o combine together classical and non-classical logics and logic programming . This in fact was one of the two main reasons for writing the book as stated by the authors . The other reason (closely connected with the first one) was to provide a flexible textbook which could be used for teaching a course with a stress on logic programming as well as a pure introduction into classical mathematica l logic or an introduction in non-classical logics . The book starts with the description of propositional and first-order predicate logics, providin g all the regular results (soundness, completeness, compactness) and introducing the concepts whic h are usually associated with logic programming (such as Herbrand models and resolution) . Then a more formal and detailed description of logic programming and PROLOG follows . Two parts of the book that follow are devoted to two non-classical logical frameworks : modal logic and intuitionistic logic, with the remainder of the book occupied by a chapter devoted to the introduction int o axiomatic set theory and a historical overview of logic starting from Ancient Greece and ending a t present times .

Journal ArticleDOI
TL;DR: This thesis constructs algorithms for stable partitioning and stable selection which are the first linear-time algorithms that are both stable and in-place, and presents in-place algorithms for unstable and stable merging.
Abstract: An algorithm is said to operate in-place if it uses only a constant amount of extra memory for storing local variables besides the memory reserved for the input elements. In other words, the size of the extra memory does not grow as the number of input elements, n, gets larger, but it is bounded by a constant. An algorithm reorders the input elements stably if the original relative order of equal elements is retained. In this thesis, we devise in-place algorithms for sorting and related problems. We measure the efficiency of the algorithms by calculating the number of element comparisons and element moves performed in the worst case. The amount of index manipulation operations is closely related to these quantities, so it is omitted in our calculations. When no precise figures are needed, we denote the sum of all operations by a general expression "time". The thesis consists of five separate articles, the main contributions of which are described below. We construct algorithms for stable partitioning and stable selection which are the first linear-time algorithms that are both stable and in-place. Moreover, we define the problems of stable unpartitioning and restoring selection and devise linear-time algorithms for these problems. The algorithm for stable unpartitioning is in-place while that for restoring selection uses O(n) extra bits. By using these algorithms as subroutines we construct an adaptation of Quicksort that sorts a multiset stably in O(∑_{i=1}^{k} m_i log(n/m_i)) time, where m_i is the multiplicity of the i-th distinct element for i = 1, ..., k. This is the first in-place algorithm that sorts a multiset stably in asymptotically optimal time. We present in-place algorithms for unstable and stable merging. The algorithms are asymptotically more efficient than earlier ones: the number of moves is 3(n + m) + o(m) for the unstable algorithm and 5n + 12m + o(m) for the stable algorithm, and the number of comparisons is at most m(t + 1) + n/2^t + o(m), where m ≤ n and t = ⌊log(n/m)⌋. The previous best results were 1.125(n + m) + o(n) comparisons and 5(n + m) + o(n) moves for unstable merging, and 16.5(n + m) + o(n) moves for stable merging. Finally, we devise two in-place algorithms for sorting. Both algorithms are adaptations of Mergesort. The first performs n log_2 n + O(n) comparisons and εn log_2 n + O(n log log n) moves for any fixed ε > 0; the second performs n log_2 n + O(n) comparisons and O(n log n / log log n) moves. This is the first in-place sorting algorithm that performs o(n log n) moves in the worst case while guaranteeing O(n log n) comparisons.
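The thesis's in-place algorithms are intricate; as a point of reference only, here is a minimal sketch (not from the thesis) of what stable partitioning asks for, using the easy O(n)-extra-space solution that the in-place, linear-time algorithms improve upon:

    # Illustrative only -- not the thesis's algorithm. Stable partitioning with
    # an O(n) buffer is easy; the thesis achieves the same result in-place
    # (O(1) extra space) in linear time.
    def stable_partition_with_buffer(xs, pred):
        """Move elements satisfying pred to the front, preserving the original
        relative order within both groups (that is what "stable" means)."""
        left = [x for x in xs if pred(x)]
        right = [x for x in xs if not pred(x)]
        return left + right

    print(stable_partition_with_buffer([5, 2, 8, 1, 6, 3], lambda x: x % 2 == 0))
    # [2, 8, 6, 5, 1, 3]: the evens keep their order, and so do the odds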

Journal ArticleDOI
TL;DR: This column reviews four books: Knuth's use of the stable marriage problem as motivation for algorithm analysis, Chaitin on the limits of mathematics, Diffie and Landau on privacy, and Abadi and Cardelli's new calculus for formalizing the semantics of object-oriented languages.
Abstract:
1. Stable Marriage and its Relation to Other Combinatorial Problems: An Introduction to Algorithm Analysis by Donald Knuth. Reviewed by Tim McNichol. This book uses the stable marriage problem as motivation to look at some mathematics of interest. It would be useful for undergrads; however, for a serious study of matching there are more advanced and more up-to-date books available.
2. The Limits of Mathematics by Gregory J. Chaitin. Reviewed by Vladimir Tasic. This book is on algorithmic information theory and randomness as they relate to Berry's Paradox ("the smallest number that cannot be described in fewer than 1000 characters" has just been described by that phrase in quotes, yet that phrase is fewer than 1000 characters).
3. Privacy on the Line by Whitfield Diffie and Susan Landau. Reviewed by Joseph Malkevitch. This book is about the balance between the citizen's need for privacy and the government's need to intrude to prevent or solve crimes. These issues are relevant now because of cryptography and computers. The authors are respected theorists who have worked in cryptography, hence their comments are worth considering. This book has caused some controversy in the math community; see the June-July 1998 issue of Notices of the AMS, also available at http://www.ams.org/notices. Or, better yet, read the book!
4. A Theory of Objects by Martin Abadi and Luca Cardelli. Reviewed by Brian Postow. This book is about formalizing the semantics of object-oriented languages. To do this, a new calculus is introduced.

Journal ArticleDOI
TL;DR: The current column may save you some time on this!
Abstract: And so summer comes around again. What a good time to relax on the beach (with your theorem notebook), to go on that long-planned family vacation (certainly with your theorem notebook, and don't forget to bring the family) and, above all, to catch up on all the papers and progress that have been piling up during the academic year on your "if only I had the time" list.The current column may save you some time on this! In 1997, Mitsunori Ogihara and Animesh Ray, jointly with Kimberly Smith, wrote a guest column on biomolecular computing. Of course, this is an area of intense activity, and in the current issue, they have contributed a delightful column catching us all up on some recent advances in biomolecular computing. (P.S. Coming soon to a future issue will be a guest column by Amnon Ta-Shma on classical versus quantum communication complexity.)

Journal ArticleDOI
Leonid Libkin
TL;DR: A new type of expressivity bounds, collapse results, is given and explained, together with how they can be applied in the setting of constraint databases.
Abstract: Can we store an infinite set in a database? Clearly not, but instead we can store a finite representation of an infinite set and write queries as if the entire infinite set were stored. This is the key idea behind constraint databases, which emerged relatively recently as a very active area of database research. The primary motivation comes from geographical and temporal databases: how does one store a region in a database? More importantly, how does one design a query language that makes the user view a region as if it were an infinite collection of points stored in the database? Finite representations used in constraint databases are first-order formulae; in geographical applications, one often uses Boolean combinations of linear or polynomial inequalities. One of the most challenging questions in the development of the theory of constraint databases was that of the expressive power: what are the limitations of query languages for constraint databases? These questions were easily reduced to those on the expressiveness of query languages over ordinary relational databases, with the additional condition that databases may store numbers and arithmetic operations may be used in queries. It turned out that the classical techniques for analyzing the expressive power of relational query languages no longer work in this new setting. In the past several years, however, most questions on the expressive power have been settled, by using new techniques that mix the finite and the infinite, and bring together results from a number of fields such as model theory, algebraic geometry and symbolic computation. In this column we briefly survey some of the results on expressiveness of query languages for constraint databases. Mathematically, these can be viewed as results on expressiveness of logics over finite or definable sets embedded in certain structures. We first deal with the finite case, which is formalized by embedded finite models. We give a new type of expressivity bounds, collapse results, and explain how they can be applied in the setting of constraint databases.
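A minimal sketch of the basic idea (illustrative only, not drawn from the column; the region and function names are invented for the example): a region of the plane is stored as a conjunction of linear inequalities, and point membership is decided by evaluating the formula rather than by looking the point up in a necessarily infinite table:

    # Not from the column: a "constraint database" view of a region.
    # Each tuple (a, b, c) encodes the linear inequality a*x + b*y <= c.
    triangle = [(-1.0, 0.0, 0.0),   # -x     <= 0, i.e. x >= 0
                (0.0, -1.0, 0.0),   #     -y <= 0, i.e. y >= 0
                (1.0, 1.0, 1.0)]    #  x + y <= 1

    def contains(region, x, y):
        # A membership query evaluates the stored formula at the point.
        return all(a * x + b * y <= c for (a, b, c) in region)

    print(contains(triangle, 0.2, 0.3))   # True: inside the triangle
    print(contains(triangle, 0.9, 0.9))   # False: x + y > 1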

Journal ArticleDOI
TL;DR: The tool JFLAP is described, along with its interactive use in experimenting with automata, grammars, and regular expressions; special features include building and executing nondeterministic machines and studying the proofs of theorems that focus on conversions of languages from one form to another.
Abstract: An automata theory course can be taught in an interactive manner using tools, allowing students to receive immediate feedback. We describe the tool JFLAP [4] and its interactive use in experimenting with automata, grammars, and regular expressions. Special features include building and executing nondeterministic machines and studying the proofs of theorems that focus on conversions of languages from one form to another. We also mention several other tools for increasing the interaction in this course.

Journal ArticleDOI
TL;DR: This book by Borodin and El-Yaniv is the first comprehensive guide to the body of research in competitive analysis of online algorithms, and considers competitive analysis in the general context of decision making under uncertainty.
Abstract: Since roughly 1985, theoretical research in the competitive analysis of online algorithms (a subfield of theoretical computer science) has grown tremendously. This book by Borodin and El-Yaniv is the first comprehensive guide to this body of research. The book covers some of the fundamental problems that have been considered during this period: list accessing, paging, the k-server problem, metrical task systems, as well as some more application-oriented topics in scheduling (bin packing, virtual circuit routing, load balancing) and in investment (portfolio selection). The authors use these problems to summarize the important results in the field (including proofs), as well as to introduce general methods, including the use of potential functions, randomized algorithms (very important in competitive analysis!), and game theory. The final chapter considers competitive analysis in the general context of decision making under uncertainty and formally compares competitive analysis to other theoretical approaches. Each of the 15 chapters contains a few (typically 5-10) exercises, usually having the reader expand on, or prove some detail of, the analysis presented in the text. The end of each chapter also presents historical notes and several interesting open problems. The book has an extensive bibliography and index, and presents some, but not many, empirical results.
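As a taste of the simplest of these problems, list accessing, here is a minimal sketch (not taken from the book) of the classical online move-to-front rule, which Sleator and Tarjan showed to be constant-competitive against the optimal offline algorithm:

    # Not from the book: the move-to-front rule for list accessing.
    # Each access costs the position of the requested item; the online
    # algorithm then moves that item to the front of the list.
    def move_to_front_cost(items, requests):
        lst = list(items)
        total = 0
        for r in requests:
            i = lst.index(r)               # find the requested item
            total += i + 1                 # access cost = its position, counting from 1
            lst.insert(0, lst.pop(i))      # move-to-front: the accessed item goes first
        return total

    print(move_to_front_cost(['a', 'b', 'c', 'd'], ['d', 'd', 'a', 'd']))
    # 4 + 1 + 2 + 2 = 9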

Journal ArticleDOI
TL;DR: Two results in "computational origami" are illustrated and it is shown how the model can be modified for different levels of complexity.
Abstract: Two results in "computational origami" are illustrated.

Journal ArticleDOI
TL;DR: This volume is a "snapshot" of this active area of research that collects overview and research papers from the special year, as well as including five papers following the associated DIMACS Algorithm Implementation Challenge, which aimed to bridge the gaps between theory, computer implementation, and practical application.
Abstract: Mathematics has grown and developed through its association with the natural sciences, of which, historically, physics has been most influential. Currently new scientific knowledge and new mathematical inspiration is coming from the more recent collaborations between mathematics and the biological sciences. This is exemplified by the growing field of computational molecular biology, which relies on discrete mathematics and theoretical computer science for its analyses, especially of the evolution and properties of biological sequences such as those of DNA (and its cousin RNA, which are each comprised of sequences of four possible nucleotides) and proteins (amino acid chains, abstractable for some purposes as sequences over a 20-letter alphabet). This important domain is addressed by the volume under review, but the reader should know that the scope of mathematical contact with biology (bioinformatics or mathematical biology) is much broader, including, for instance, mathematical modelling of morphogenesis [5, 4], population genetics, ecology and spatial effects [1, 7], regulatory control within medicine and biological systems [4, 2], mathematical hierarchies in evolution and in the structure of biological systems [3, 5], or areas of engineered sequences (such as DNA Computing, e.g. [8]), among many other areas. This volume features fourteen papers from the "DIMACS Special Year on Mathematical Support for Molecular Biology". Interestingly this "special year" spans the five years 1994-1998. The NSF Science and Technology Center in Discrete Mathematics and Theoretical Computer Science (DIMACS) hosted it to expose discrete mathematicians and theoretical computer scientists to problems of molecular biology that their methods could usefully address, to make molecular scientists aware of mathematical scientists eager to help them solve their problems, to support concentrated exposure of young researchers to this interdisciplinary area, and to forge long-lasting collaborative links between the biological (especially molecular) and mathematical scientific communities via a series of workshops, visitor programs, and postdoctoral fellowships. This volume is a "snapshot" of this active area of research that collects overview and research papers from the special year, as well as five papers following the associated DIMACS Algorithm Implementation Challenge, which aimed to bridge the gaps between theory, computer implementation, and practical application. W. M. Fitch's contribution, "An introduction to molecular biology for mathematicians and computer programmers", provides a valuable introduction to biological concepts for understanding the problems of constructing evolutionary trees from linear biological sequences.

Journal ArticleDOI
TL;DR: This book consists of a series of introductory survey articles on topics in probabilistic combinatorics and its applications, including random and random-like graphs, discrete isoperimetric inequalities, rapidly mixing Markov chains, and finite Fourier methods.
Abstract: Probabilistic arguments have been applied in many areas of combinatorics and theoretical computer science ever since Erdős first used one to prove bounds on Ramsey numbers. Applications range from constructing graphs with properties useful in building communication networks to almost uniform generation of random structures for the purpose of approximately solving intractable counting problems. This book consists of a series of introductory survey articles on topics in probabilistic combinatorics and its applications. (The articles are derived from lectures given in one of a series of short courses sponsored by the American Mathematical Society.) The topics covered include random and random-like graphs, discrete isoperimetric inequalities, rapidly mixing Markov chains, and finite Fourier methods. The emphasis throughout the book is on techniques, with sufficient examples to show their usefulness.
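The flavour of the Erdős argument mentioned above can be captured in a one-line first-moment calculation (a standard textbook sketch, not quoted from this book): colour the edges of $K_n$ red or blue independently and uniformly at random; the expected number of monochromatic $k$-cliques is

    \[ \binom{n}{k} \cdot 2^{\,1 - \binom{k}{2}}, \]

and whenever this expectation is less than 1, some colouring has no monochromatic $K_k$, hence $R(k,k) > n$. Taking $n = \lfloor 2^{k/2} \rfloor$ makes the expectation less than 1 for all $k \ge 3$, giving the classical lower bound $R(k,k) > 2^{k/2}$.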

Journal ArticleDOI
TL;DR: An integrated set of sites, the Theory Web, is described that, in a manner similar to a site like Yahoo! or Excite!, collects and classifies links to theoretical computer science.
Abstract: There are now numerous sites on the web that contain useful information for theoretical computer scientists. Among the more popular sites are those maintaining bibliographies and surveys for specific subject areas; many journals and conferences now maintain an online presence as well, allowing easy access to published papers. It is now common to perform literature searches directly on the web, as well as on specialized databases like INSPEC. There are sites that maintain links for specific subject areas [3, 5, 7], as well as sites that maintain information about conference announcements and deadlines [24, 7, 13]. In addition, there are paper and bibliography databases like the Hypertext Bibliography Project [14], the Computing Research Repository [17], and the Computer Science Research Paper Search Engine [19]. Searching for relevant material, however, is still a time-consuming task, given the volume of information available and the lack of contextual precision of most general-purpose search engines. A natural solution to this problem is to maintain sites that act as hubs, collecting in one place different sites that pertain to specific subject areas. Yet, the collections are generally without structure, and the choice of subjects represented is essentially ad hoc. In this proposal, we describe an integrated set of sites, the Theory Web, that in a manner akin to a site like Yahoo! [2] or Excite! [1] collects and classifies links pertaining to theoretical computer science. There are many benefits of such a presence: