scispace - formally typeset
Author

Juris Hartmanis

Bio: Juris Hartmanis is an academic researcher from Cornell University. The author has contributed to research in topics: Structural complexity theory & Computational complexity theory. The author has an h-index of 46, co-authored 171 publications receiving 10,705 citations. Previous affiliations of Juris Hartmanis include National Research Council & General Electric.


Papers
BookDOI
01 Jan 2000
TL;DR: Amortized fully-dynamic polylogarithmic algorithms for connectivity, minimum spanning trees (MST), 2-edge- and biconnectivity, and improved static algorithms for finding unique matchings in graphs are reviewed.
Abstract: First we review amortized fully-dynamic polylogarithmic algorithms for connectivity, minimum spanning trees (MST), 2-edge- and biconnectivity. Second we discuss how they yield improved static algorithms: connectivity for constructing a tree from homeomorphic subtrees, 2-edge connectivity for finding unique matchings in graphs, and MST for packing spanning trees in graphs. The application of MST for spanning tree packing is new and, when bootstrapped, it yields a fully-dynamic polylogarithmic algorithm for approximating general edge connectivity within a factor √2 + o(1). Finally, on the more practical side, we discuss how output-sensitive algorithms for dynamic shortest paths have been applied successfully to speed up local search algorithms for improving routing on the internet, roughly doubling the capacity.

1 Dynamic Graph Algorithms

In this talk, we will discuss some simple dynamic graph algorithms and their applications within static graph problems. As a new result, we will derive a fully dynamic polylogarithmic algorithm approximating the edge connectivity λ within a factor √2 + o(1); that is, the algorithm will output a value between λ/(√2 + o(1)) and λ · (√2 + o(1)). The talk is not intended as a general survey of dynamic graph algorithms and their applications. Rather, its goal is just to present a few nice illustrations of the potent relationship between dynamic graph algorithms and their applications in static graph problems, showing contexts in which dynamic graph algorithms play a role similar to that played by priority queues for greedy algorithms.

In a fully dynamic graph problem, we consider a graph G over a fixed vertex set V, |V| = n. The graph G may be updated by insertions and deletions of edges. Unless otherwise stated, we assume that we start with an empty edge set. We will review the fully dynamic graph algorithms of Holm et al. [11] for connectivity, minimum spanning trees (MST), 2-edge-, and biconnectivity in undirected graphs. For the connectivity-type problems, the updates may be interspersed with queries on (2-edge-/bi-) connectivity of the graph or between specified vertices. For MST, the fully dynamic algorithm should update the MST in connection with each update to the graph: an inserted edge might have to go into the MST, and if an MST edge is deleted, we should replace it with the lightest possible edge. Both updates and queries are presented on-line, meaning that we have to respond to an update or query without knowing anything about the future. The time bounds for these operations are polylogarithmic but amortized, meaning that we only bound the average operation time over any sequence of operations, starting with no edges. In our later applications to static graph problems, we only care about the total time spent over all dynamic graph operations, and hence the amortized time bounds suffice.

The above-mentioned results are all for undirected graphs. For directed graphs there are very few results. In a recent breakthrough, King [16] showed how to maintain the full transitive closure of a graph in Õ(n²) amortized time per update. Further, she showed how to maintain all-pairs shortest paths in O(n^2.5 √(log C)) time per update if C is the maximum weight in the graph. However, if one is just interested in maintaining whether t can be reached from s for two fixed vertices s and t, nobody knows how to do this in o(m) time. On the more practical side, Ramalingam and Reps [24] have suggested a lazy implementation of Dijkstra's [4] single-source shortest paths algorithm for a dynamic directed graph. If X is the set of vertices that change distance from the source s in connection with an arc insertion or deletion, they can update a shortest path tree from s in Õ(Σ_{v∈X} degree(v)) time. Although this does not in general improve over the Õ(m) time it takes to compute a single-source shortest path tree from scratch, there has been experimental evidence suggesting that this kind of laziness is worthwhile in connection with internet-like topologies [7].

[From M. Thorup and D.R. Karger, in M.M. Halldórsson (Ed.): SWAT 2000, LNCS 1851, pp. 1–9. © Springer-Verlag Berlin Heidelberg 2000.]
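The MST insertion rule described in the abstract (an inserted edge may displace the heaviest edge on the tree cycle it closes) can be illustrated with a naive sketch. The function name and edge representation here are mine, not from the paper, and the point of Holm et al.'s algorithms is to achieve the same effect in polylogarithmic amortized time rather than by scanning the tree as this toy does:

```python
import collections

def mst_insert(tree_edges, u, v, w):
    """Naive fully dynamic MST insertion rule: if the new edge (u, v, w)
    closes a cycle in the current tree, it replaces the heaviest edge on
    the tree path between u and v when it is strictly lighter; otherwise
    the tree is unchanged (or simply extended if u, v were disconnected)."""
    adj = collections.defaultdict(list)
    for (a, b, wt) in tree_edges:
        adj[a].append((b, (a, b, wt)))
        adj[b].append((a, (a, b, wt)))
    # Find the tree path u -> v with BFS, remembering the edge used.
    parent = {u: None}
    queue = collections.deque([u])
    while queue:
        x = queue.popleft()
        for (y, e) in adj[x]:
            if y not in parent:
                parent[y] = (x, e)
                queue.append(y)
    if v not in parent:                       # u and v not yet connected
        return tree_edges + [(u, v, w)]
    path_edges = []
    x = v
    while parent[x] is not None:
        px, e = parent[x]
        path_edges.append(e)
        x = px
    heaviest = max(path_edges, key=lambda e: e[2])
    if w < heaviest[2]:                       # swap in the lighter edge
        return [e for e in tree_edges if e != heaviest] + [(u, v, w)]
    return tree_edges
```

Deletions are the genuinely hard direction: after removing a tree edge, a polylogarithmic algorithm must find the lightest replacement edge without examining every non-tree edge.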

1 citations

Book ChapterDOI
TL;DR: How the major computational complexity classes, P, NP and PSPACE, capture different computational properties of mathematical proofs and reveal new quantitative aspects of mathematics are discussed.
Abstract: This paper discusses how the major computational complexity classes, P, NP and PSPACE, capture different computational properties of mathematical proofs and reveal new quantitative aspects of mathematics.
BookDOI
01 Jan 1998
TL;DR: This paper presents data-retrieval scheduling techniques that dynamically resolve congestion on the storage subsystem caused by expected and unexpected changes in I/O bandwidth demand, for a scalable video server in a multi-disk environment.
Abstract: In this paper, we consider a Video-on-Demand storage server which operates in a heterogeneous environment and faces (a) fluctuations in workload, (b) network congestion, and (c) failure of server and/or network components. We present data-retrieval scheduling techniques which are capable of dynamically resolving congestion that occurs on the storage subsystem due to expected and unexpected changes in I/O bandwidth demand, for a scalable video server in a multi-disk environment.
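The abstract does not spell out the scheduling techniques, so as a purely hypothetical illustration (names and approach are mine, not the paper's), one simple way to keep fluctuating I/O demand balanced across a multi-disk server is a greedy least-loaded assignment of block retrievals:

```python
import heapq

def assign_retrievals(requests, num_disks):
    """Hypothetical sketch of congestion-avoiding retrieval scheduling:
    assign each block request, given its bandwidth demand, to the
    currently least-loaded disk, keeping per-disk I/O demand balanced
    as the workload fluctuates.  requests is a list of (id, demand)."""
    heap = [(0.0, d) for d in range(num_disks)]   # (accumulated load, disk id)
    heapq.heapify(heap)
    assignment = {}
    for req_id, demand in requests:
        load, disk = heapq.heappop(heap)          # least-loaded disk first
        assignment[req_id] = disk
        heapq.heappush(heap, (load + demand, disk))
    return assignment
```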
Proceedings ArticleDOI
11 Nov 1964
TL;DR: This paper defines and discusses generalizations of partition pairs on sequential machines to suggest a unified approach to problems of information flow and machine structure.
Abstract: In this paper, we define and discuss generalizations of partition pairs on sequential machines. The object of these generalizations is to suggest a unified approach to problems of information flow and machine structure.
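The classical notion being generalized here can be made concrete. A partition pair (π, π') on a sequential machine requires that, for every input, states lying in the same block of π map under the transition function into the same block of π'. This direct check is a sketch of my own (names and representation are not from the paper):

```python
def is_partition_pair(pi, pi_prime, delta, inputs):
    """Check the defining property of a partition pair (pi, pi'):
    for every input x, states in the same block of pi must move under
    the transition function delta into a single block of pi_prime.
    pi and pi_prime are lists of frozensets of states; delta maps
    (state, input) -> state."""
    block_of = {s: i for i, blk in enumerate(pi_prime) for s in blk}
    for blk in pi:
        for x in inputs:
            # All images of this block must land in one block of pi_prime.
            images = {block_of[delta[(s, x)]] for s in blk}
            if len(images) > 1:
                return False
    return True
```

Partition pairs of this kind are what make it possible to decompose a machine so that one component can be computed with only partial information about the state, which is the "information flow" view the paper unifies.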

Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
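The mail-filtering example above can be sketched as a minimal word-count naive Bayes classifier. This is a toy with names of my choosing, assuming equal class priors and Laplace smoothing; it is not the mechanism any particular mail system uses:

```python
import math
from collections import Counter

def train_filter(labeled_messages):
    """Learn per-word frequencies from (text, label) pairs, where label
    is 'keep' or 'reject', and return a classifier for new messages."""
    counts = {'keep': Counter(), 'reject': Counter()}
    totals = {'keep': 0, 'reject': 0}
    for text, label in labeled_messages:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1

    def classify(text):
        scores = {}
        for label in ('keep', 'reject'):
            score = 0.0
            for word in text.lower().split():
                # Laplace smoothing so unseen words don't zero out a class.
                p = (counts[label][word] + 1) / (totals[label] + 2)
                score += math.log(p)
            scores[label] = score
        return max(scores, key=scores.get)

    return classify
```

The filter updates itself simply by retraining on the growing log of the user's keep/reject decisions, which is exactly the "maintain the rules automatically" behavior the passage describes.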

13,246 citations

Book
01 Jan 1974
TL;DR: This text introduces the basic data structures and programming techniques often used in efficient algorithms, and covers use of lists, push-down stacks, queues, trees, and graphs.
Abstract: From the Publisher: With this text, you gain an understanding of the fundamental concepts of algorithms, the very heart of computer science. It introduces the basic data structures and programming techniques often used in efficient algorithms, and covers the use of lists, push-down stacks, queues, trees, and graphs. Later chapters go into sorting, searching, and graph algorithms; string-matching algorithms; and the Schönhage–Strassen integer-multiplication algorithm. Numerous graded exercises are provided at the end of each chapter.
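As a small taste of the push-down stack material the book covers, a classic stack application is checking that brackets in a string are properly nested (this particular exercise is my illustration, not quoted from the book):

```python
def balanced(text):
    """Use a push-down stack to verify that every closing bracket
    matches the most recently opened, still-unclosed bracket."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in text:
        if ch in '([{':
            stack.append(ch)            # push an opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False            # mismatched or unopened bracket
    return not stack                    # leftover openers mean imbalance
```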

9,262 citations

Journal ArticleDOI
TL;DR: In this paper, the authors considered factoring integers and finding discrete logarithms on a quantum computer and gave efficient randomized algorithms for these two problems, taking a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
Abstract: A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
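Only the period-finding step of the factoring algorithm requires the hypothetical quantum computer; the post-processing that turns a period into a factor is ordinary number theory. Given the period r of a^x mod N, an even r whose square root a^(r/2) is nontrivial yields a factor via a gcd. This sketch (function name mine) shows that classical step:

```python
import math

def factor_from_period(N, a, r):
    """Classical post-processing of quantum period finding: given the
    period r of a^x mod N, try to extract a nontrivial factor of N.
    Returns a factor, or None when this (a, r) pair is unlucky and the
    algorithm would retry with a fresh random a."""
    if r % 2 != 0:
        return None                      # need an even period
    half = pow(a, r // 2, N)             # a^(r/2) mod N
    if half == N - 1:
        return None                      # trivial square root of 1; retry
    for cand in (math.gcd(half - 1, N), math.gcd(half + 1, N)):
        if 1 < cand < N:
            return cand
    return None
```

For example, 7 has period 4 modulo 15, and 7² − 1 = 48 shares the factor 3 with 15.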

7,427 citations

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
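The core of the explicit-state approach described above (systematically and automatically exploring a model's reachable states and checking a property) can be sketched for the simplest property class, invariants. This toy checker, with my own naming, does a breadth-first search and returns a counterexample trace when a safety property fails; real model checkers add the automata-based machinery, fairness, and state-space reductions the book develops:

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Breadth-first search over the reachable states of a transition
    system.  Returns a shortest counterexample path from the initial
    state to an invariant-violating state, or None if the invariant
    holds in every reachable state."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            # Reconstruct the counterexample trace back to the start.
            trace = []
            while s is not None:
                trace.append(s)
                s = parent[s]
            return list(reversed(trace))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None
```

The returned trace is exactly the debugging value the abstract emphasizes: a violation comes with a concrete run of the system leading to it.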

4,905 citations

Journal ArticleDOI
Gerard J. Holzmann1
01 May 1997
TL;DR: An overview of the design and structure of the verifier is given, along with its theoretical foundation and significant practical applications.
Abstract: SPIN is an efficient verification system for models of distributed software systems. It has been used to detect design errors in applications ranging from high-level descriptions of distributed algorithms to detailed code for controlling telephone exchanges. The paper describes the design and structure of the verifier, reviews its theoretical foundation, and surveys significant practical applications.

4,159 citations