Author

Juris Hartmanis

Bio: Juris Hartmanis is an academic researcher from Cornell University. The author has contributed to research in topics including structural complexity theory and computational complexity theory. The author has an h-index of 46 and has co-authored 171 publications receiving 10,705 citations. Previous affiliations of Juris Hartmanis include the National Research Council and General Electric.


Papers
Journal ArticleDOI
TL;DR: Techniques are developed for studying complexity classes that are not covered by known recursive enumerations of machines; they are used to examine the probabilistic class BPP, and it is shown that there is a relativized world where BPP^A has no complete languages.

94 citations

Journal ArticleDOI
TL;DR: It is shown that the deterministic computation time for sets in NP can depend on their density if and only if there is a collapse or partial collapse of the corresponding higher nondeterministic and deterministic time-bounded complexity classes.

94 citations

Book
11 Jun 1999
TL;DR: The book defines RML and describes its compiler, covering reducing nondeterminism, compiling pattern matching, compiling continuations, and simulating tail calls in C.
Abstract: 1 Introduction. 2 Preliminaries. 3 The Design of RML. 4 Examples. 5 Implementation Overview. 6 Reducing Nondeterminism. 7 Compiling Pattern Matching. 8 Compiling Continuations. 9 Simulating Tailcalls in C. 10 Performance Evaluation. 11 Concluding Remarks. A The Definition of RML.

93 citations
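
The chapter titles above name a classic compilation problem: running tail calls on top of a language, like C, that does not guarantee tail-call elimination. One standard device for this is a trampoline; the minimal Python sketch below (my illustration, with names of my own choosing, not code from the book) shows the idea: a tail call returns a zero-argument thunk, and a driver loop invokes thunks until a plain value comes back, so the native stack never grows.

```python
def trampoline(result):
    # Keep invoking thunks until a non-callable value comes back.
    while callable(result):
        result = result()
    return result

def even(n):
    # The tail call to odd() is returned as a thunk, not performed.
    return True if n == 0 else (lambda: odd(n - 1))

def odd(n):
    return False if n == 0 else (lambda: even(n - 1))

# Deep mutual recursion runs on a flat stack: no RecursionError.
print(trampoline(even(100000)))  # True
```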

Book
01 Jan 1996
TL;DR: Lower bounds are presented showing that a number of problems, including graph reachability, dataflow analysis, and algebraic path problems, are unbounded with respect to a model of computation called the sparsely-aliasing pointer machine model.
Abstract: Incremental computation concerns the re-computation of output after a change in the input. Incremental algorithms, also called dynamic or on-line algorithms, process a change in the input by identifying the "affected output", that is, the part of the previous output that is no longer "correct", and "updating" it. This thesis presents results--upper bound results, lower bound results, and experimental results--for several incremental computation problems. The common theme in all these results is that the complexity of an algorithm or problem is analyzed in terms of a parameter $\Vert\delta\Vert$ that measures the size of the change in the input and output. An incremental algorithm is said to be bounded if the time it takes to update the output depends only on the size of the change in the input and output (i.e., $\Vert\delta\Vert$), and not on the size of the entire current input. A problem is said to be bounded (unbounded) if it has (does not have) a bounded incremental algorithm. The results in this thesis, summarized below, illustrate a complexity hierarchy for incremental computation from this point of view.

We present $O(\Vert\delta\Vert \log \Vert\delta\Vert)$ incremental algorithms for several shortest-path problems and a generalization of the shortest-path problem, establishing that these problems are polynomially bounded. We present an $O(2^{\Vert\delta\Vert})$ incremental algorithm for the circuit value annotation problem, which matches a previous $\Omega(2^{\Vert\delta\Vert})$ lower bound for this problem and establishes that the circuit value annotation problem is exponentially bounded. We also present experimental results that show that our algorithm, in spite of a worst-case complexity of $\Theta(2^{\Vert\delta\Vert})$, appears to work well in practice.

We present lower bounds showing that a number of problems, including graph reachability, dataflow analysis, and algebraic path problems, are unbounded with respect to a model of computation called the sparsely-aliasing pointer machine model. We present an $O(\Vert\delta\Vert \log n)$ incremental algorithm for the reachability problem in reducible flowgraphs, and an algorithm for maintaining the dominator tree of a reducible flowgraph.

91 citations
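
The notion of a bounded incremental algorithm can be made concrete with a small sketch. The Python fragment below (an illustrative simplification with names of my own, not the thesis's algorithm) repairs single-source shortest-path distances after an edge weight decreases; the Dijkstra-style repair visits only vertices whose distances actually improve, so its cost is governed by the size of the change rather than by the size of the whole graph.

```python
import heapq

def decrease_edge(graph, dist, u, v, new_w):
    """Repair shortest-path distances `dist` (from a fixed source)
    after lowering the weight of edge (u, v) to new_w.
    graph is an adjacency dict: graph[x][y] = weight of edge (x, y)."""
    graph[u][v] = new_w
    if dist[u] + new_w >= dist[v]:
        return  # no shortest path improves: constant work
    dist[v] = dist[u] + new_w
    heap = [(dist[v], v)]
    while heap:  # Dijkstra-style repair over affected vertices only
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue  # stale queue entry
        for y, w in graph[x].items():
            if d + w < dist[y]:
                dist[y] = d + w
                heapq.heappush(heap, (dist[y], y))

# After dist is computed once, an update touches only the region
# whose distances change, not the entire graph.
graph = {"s": {"a": 10, "b": 1}, "a": {}, "b": {"a": 1}}
dist = {"s": 0, "a": 2, "b": 1}          # current shortest distances
decrease_edge(graph, dist, "s", "a", 1)  # dist["a"] improves to 1
print(dist)
```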

Proceedings ArticleDOI
01 Dec 1983
TL;DR: The paper exploits the recently discovered upward separation method and uses relativization techniques to determine logical possibilities, to expose limitations of these proof techniques, and, for the first time, to exhibit structural differences between relativized NP and coNP.
Abstract: This paper investigates the structural properties of sets in NP-P and shows that the computational difficulty of lower density sets in NP depends explicitly on the relations between higher deterministic and nondeterministic time-bounded complexity classes. The paper exploits the recently discovered upward separation method, which shows for example that there exist sparse sets in NP-P if and only if EXPTIME ≠ NEXPTIME.

88 citations
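
Stated formally, the upward separation result referenced above (due to Hartmanis, Immerman, and Sewelson) is commonly written as follows:

```latex
% Upward separation: a set is sparse if it contains at most
% polynomially many strings of each length.
\[
  \exists\, S \in \mathrm{NP} \setminus \mathrm{P},\ S \text{ sparse}
  \quad\Longleftrightarrow\quad
  \mathrm{EXPTIME} \neq \mathrm{NEXPTIME}
\]
```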


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories.

First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules.

Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs.

Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.

Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically.

Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
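
The mail-filtering example in the fourth category is easy to make concrete. Below is a toy Python sketch (my own illustration, not from the article; class and method names are hypothetical): a per-user filter that learns naive-Bayes-style word scores from the messages a user keeps or rejects.

```python
import math
from collections import Counter

class LearnedMailFilter:
    """Toy per-user filter: log-odds scores learned from the
    messages the user has kept or rejected (illustrative only)."""

    def __init__(self):
        self.counts = {"keep": Counter(), "reject": Counter()}
        self.totals = {"keep": 0, "reject": 0}

    def learn(self, message, label):
        # label is "keep" or "reject", taken from the user's behavior
        for word in message.lower().split():
            self.counts[label][word] += 1
            self.totals[label] += 1

    def score(self, message):
        # Positive score = resembles mail the user rejects.
        # Laplace smoothing (+1) handles words never seen before.
        s = 0.0
        for word in message.lower().split():
            p_rej = (self.counts["reject"][word] + 1) / (self.totals["reject"] + 2)
            p_keep = (self.counts["keep"][word] + 1) / (self.totals["keep"] + 2)
            s += math.log(p_rej / p_keep)
        return s

f = LearnedMailFilter()
f.learn("win a free prize now", "reject")
f.learn("meeting agenda for monday", "keep")
print(f.score("free prize inside") > 0)  # True: looks like rejected mail
```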

Book
01 Jan 1974
TL;DR: This text introduces the basic data structures and programming techniques often used in efficient algorithms, and covers use of lists, push-down stacks, queues, trees, and graphs.
Abstract: From the Publisher: With this text, you gain an understanding of the fundamental concepts of algorithms, the very heart of computer science. It introduces the basic data structures and programming techniques often used in efficient algorithms. Covers use of lists, push-down stacks, queues, trees, and graphs. Later chapters go into sorting, searching, and graph algorithms, string-matching algorithms, and the Schönhage–Strassen integer-multiplication algorithm. Provides numerous graded exercises at the end of each chapter.

9,262 citations
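
As a small illustration of two of the structures named above (mine, not the book's, which long predates Python), here is a queue driving breadth-first search over a graph stored as adjacency lists, computing the set of vertices reachable from a start vertex.

```python
from collections import deque

def reachable(adj, start):
    """Vertices reachable from `start` in a graph given as
    adjacency lists, explored breadth-first with a queue."""
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(reachable(adj, "a"))  # {'a', 'b', 'c', 'd'}
```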

Journal ArticleDOI
TL;DR: In this paper, the author considers factoring integers and finding discrete logarithms on a quantum computer and gives efficient randomized algorithms for these two problems, taking a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.
Abstract: A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored.

7,427 citations
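
The division of labor in the factoring algorithm is worth making explicit: only the period-finding step needs the quantum computer, and everything around it is classical. The sketch below (illustrative; a brute-force, exponential-time period finder stands in for the quantum step) shows the classical skeleton for odd composite N that is not a prime power: pick a random a, obtain the period r of a^x mod N, and extract a factor with a gcd.

```python
import math
import random

def find_period(a, N):
    """Smallest r > 0 with a**r % N == 1 (requires gcd(a, N) == 1).
    Brute force and exponential: this is the step the quantum
    computer replaces with an efficient procedure."""
    x, r = a % N, 1
    while x != 1:
        x, r = (x * a) % N, r + 1
    return r

def factor(N):
    """Classical skeleton of the factoring algorithm."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g  # lucky draw: a already shares a factor with N
        r = find_period(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            # a**(r/2) is a nontrivial square root of 1 mod N, so
            # gcd(a**(r/2) - 1, N) is a proper factor of N.
            return math.gcd(pow(a, r // 2, N) - 1, N)

print(factor(15))  # 3 or 5
```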

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.

4,905 citations
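
The core of the systematic, automatic check described above fits in a few lines. Here is a minimal Python sketch (mine, not the book's notation; function names are hypothetical) of explicit-state invariant checking: breadth-first exploration of the reachable states of a model, returning a counterexample trace to the first state that violates an invariant.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Explore all states reachable from `initial` breadth-first;
    return a counterexample trace to the first state violating
    `invariant`, or None if it holds in every reachable state."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            trace = []
            while s is not None:  # walk parents back to the initial state
                trace.append(s)
                s = parent[s]
            return list(reversed(trace))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None

# Toy model: two counters, each incremented up to 3.
def succ(s):
    x, y = s
    return ([(x + 1, y)] if x < 3 else []) + ([(x, y + 1)] if y < 3 else [])

# The invariant "x + y < 6" fails at the reachable state (3, 3);
# the printed trace is a shortest path there from (0, 0).
print(check_invariant((0, 0), succ, lambda s: s[0] + s[1] < 6))
```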

Journal ArticleDOI
Gerard J. Holzmann
01 May 1997
TL;DR: An overview of the design and structure of the verifier is given, its theoretical foundation is reviewed, and significant practical applications are surveyed.
Abstract: SPIN is an efficient verification system for models of distributed software systems. It has been used to detect design errors in applications ranging from high-level descriptions of distributed algorithms to detailed code for controlling telephone exchanges. The paper gives an overview of the design and structure of the verifier, reviews its theoretical foundation, and gives an overview of significant practical applications.

4,159 citations