Author

John E. Savage

Bio: John E. Savage is an academic researcher from Brown University. The author has contributed to research in topics including memory hierarchy and very-large-scale integration (VLSI). The author has an h-index of 23 and has co-authored 90 publications receiving 1,970 citations. Previous affiliations of John E. Savage include Bell Labs and the California Institute of Technology.


Papers
Book
01 Jan 1998
TL;DR: In Models of Computation, John Savage re-examines theoretical computer science, offering a fresh approach that gives priority to resource tradeoffs and complexity classifications over the structure of machines and their relationships to languages.
Abstract: From the Publisher: "Your book fills the gap which all of us felt existed too long. Congratulations on this excellent contribution to our field." --Jan van Leeuwen, Utrecht University "This is an impressive book. The subject has been thoroughly researched and carefully presented. All the machine models central to the modern theory of computation are covered in depth; many for the first time in textbook form. Readers will learn a great deal from the wealth of interesting material presented." --Andrew C. Yao, Professor of Computer Science, Princeton University "'Models of Computation' is an excellent new book that thoroughly covers the theory of computation, including significant recent material, and presents it all with insightful new approaches. This long-awaited book will serve as a milestone for the theory community." --Akira Maruoka, Professor of Information Sciences, Tohoku University "This is computer science." --Elliot Winard, Student, Brown University In Models of Computation: Exploring the Power of Computing, John Savage re-examines theoretical computer science, offering a fresh approach that gives priority to resource tradeoffs and complexity classifications over the structure of machines and their relationships to languages. This viewpoint reflects a pedagogy motivated by the growing importance of computational models that are more realistic than the abstract ones studied in the 1950s, '60s, and early '70s. Assuming only some background in computer organization, Models of Computation uses circuits to simulate machines with memory, thereby making possible an early discussion of P-complete and NP-complete problems. Circuits are also used to demonstrate that tradeoffs between parameters of computation, such as space and time, regulate all computations by machines with memory. Full coverage of formal languages and automata is included, along with a substantive treatment of computability. Topics such as space-time tradeoffs, memory hierarchies, parallel computation, and circuit complexity are integrated throughout the text with an emphasis on finite problems and concrete computational models. FEATURES: Includes introductory material for a first course on theoretical computer science. Builds on computer organization to provide an early introduction to P-complete and NP-complete problems. Includes a concise, modern presentation of regular, context-free and phrase-structure grammars, parsing, finite automata, pushdown automata, and computability. Includes an extensive, modern coverage of complexity classes. Provides an introduction to the advanced topics of space-time tradeoffs, memory hierarchies, parallel computation, the VLSI model, and circuit complexity, with parallelism integrated throughout. Contains over 200 figures and over 400 exercises along with an extensive bibliography. ** Instructor's materials are available from your sales rep. If you do not know your local sales representative, please call 1-800-552-2499 for assistance, or use the Addison Wesley Longman rep-locator at ...

311 citations

Journal ArticleDOI
TL;DR: It is shown that if coded nanowires are chosen at random from a sufficiently large population, a large fraction of the selected nanowires will have unique addresses, and an O(N²) procedure to discover which addresses are present is given.
Abstract: We describe a technique for addressing individual nanoscale wires with microscale control wires without using lithographic-scale processing to define nanoscale dimensions. Such a scheme is necessary to exploit sublithographic nanoscale storage and computational devices. Our technique uses modulation doping to address individual nanowires and self-assembly to organize them into nanoscale-pitch decoder arrays. We show that if coded nanowires are chosen at random from a sufficiently large population, we can ensure that a large fraction of the selected nanowires have unique addresses. For example, we show that N lines can be uniquely addressed over 99% of the time using no more than ⌈2.2 log₂(N)⌉ + 11 address wires. We further show a hybrid decoder scheme that only needs to address N = O(W_litho-pitch / W_nano-pitch) wires at a time through this stochastic scheme; as a result, the number of unique codes required for the nanowires does not grow with decoder size. We give an O(N²) procedure to discover the addresses which are present. We also demonstrate schemes that tolerate the misalignment of nanowires which can occur during the self-assembly process.
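The bound quoted above (unique addresses over 99% of the time with no more than ⌈2.2 log₂(N)⌉ + 11 address wires) can be explored with a small Monte Carlo experiment. The sketch below is only a simplified illustration, not the paper's analysis: it assumes each nanowire draws its code independently and uniformly from a pool of C codes, calls a wire "uniquely addressed" if no other selected wire shares its code, and uses invented pool sizes and trial counts.

```python
import random

def fraction_unique(n_wires: int, code_pool: int, trials: int = 2000) -> float:
    """Estimate the expected fraction of selected nanowires whose code is not
    shared by any other selected wire, under a simplified model in which codes
    are drawn independently and uniformly from a pool of `code_pool` codes."""
    total = 0.0
    for _ in range(trials):
        codes = [random.randrange(code_pool) for _ in range(n_wires)]
        counts = {}
        for c in codes:
            counts[c] = counts.get(c, 0) + 1
        total += sum(1 for c in codes if counts[c] == 1) / n_wires
    return total / trials

if __name__ == "__main__":
    # Hypothetical numbers: 100 nanowires drawn from a pool of 10,000 codes.
    print(f"average fraction uniquely addressed: {fraction_unique(100, 10_000):.3f}")
```

When the code pool is much larger than the number of selected wires, the expected fraction of duplicated codes is roughly N/C, which is the birthday-style collision effect the paper's address-wire bound is designed to control.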

229 citations

Journal ArticleDOI
John E. Savage
TL;DR: In this paper, measures of the computational work and computational delay required by machines to compute functions are given, and many exchange inequalities involving storage, time, and other important parameters of computation are developed.
Abstract: Measures of the computational work and computational delay required by machines to compute functions are given. Exchange inequalities are developed for random access, tape, and drum machines to show that product inequalities between storage and time, number of drum tracks and time, number of bits in an address and time, etc., must be satisfied to compute finite functions on bounded machines.
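The "product inequalities" mentioned in the abstract have the general shape of a resource tradeoff. The schematic statement below uses notation of my own choosing, not the paper's, and is meant only to indicate the form such exchange inequalities take:

```latex
% Schematic exchange (tradeoff) inequality: for a bounded machine that
% computes a finite function f using storage S and time T, the product of
% the two resources is bounded below by a quantity depending only on f.
\[
  S \cdot T \;\ge\; \kappa(f)
\]
% Analogous product inequalities hold for the other resource pairs listed in
% the abstract, e.g. (number of drum tracks) x T and (address length) x T.
```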

98 citations

Journal ArticleDOI
TL;DR: Two types of self-synchronizing digital data scramblers and descramblers are introduced and examined, and they have application in common carrier systems where short-period data sequences produce high-level tones in the transmission band and, as a consequence, interchannel interference.
Abstract: Two types of self-synchronizing digital data scramblers and descramblers are introduced and examined. The descramblers recover synchronization quickly after the insertion or deletion of channel bits, and they are relatively insensitive to channel errors. The scramblers act to increase the period of periodic data sequences, and the periodic channel sequences produced have approximately half as many transitions in one period as there are bits in a period. These circuits find application in common carrier systems where short-period data sequences produce high-level tones in the transmission band and, as a consequence, interchannel interference. And they have application when receiver clocks derive synchronization from transitions in the channel signal. A number of variations and modifications of the scramblers which affect their cost and size are considered. The scramblers and descramblers are similar in construction and consist of linear sequential filters with either feed-forward or feedback paths, counters, storage elements and peripheral logic. The counters, storage elements and peripheral logic monitor the channel sequence but react infrequently so that the scramblers and descramblers behave principally as linear sequential filters.
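As a minimal sketch of the self-synchronizing structure described above, the scrambler below XORs each data bit with taps of previously transmitted channel bits (a feedback linear sequential filter), and the matching descrambler XORs each received bit with taps of previously received channel bits (feed-forward), so it realigns automatically once its register refills with correct channel bits. The tap positions and register length are arbitrary illustrations, and the counters and monitoring logic described in the abstract are omitted.

```python
def scramble(bits, taps=(3, 5), reg_len=5):
    """Self-synchronizing (multiplicative) scrambler: each channel bit is the
    data bit XORed with selected previously transmitted channel bits."""
    reg = [0] * reg_len                # most recent channel bits, newest first
    out = []
    for b in bits:
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]           # feedback taps on past channel bits
        c = b ^ fb
        out.append(c)
        reg = [c] + reg[:-1]           # shift the new channel bit in
    return out

def descramble(channel_bits, taps=(3, 5), reg_len=5):
    """Feed-forward descrambler: it recomputes the same tap value from the
    received channel bits, so synchronization is recovered automatically."""
    reg = [0] * reg_len
    out = []
    for c in channel_bits:
        ff = 0
        for t in taps:
            ff ^= reg[t - 1]
        out.append(c ^ ff)
        reg = [c] + reg[:-1]
    return out

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
    assert descramble(scramble(data)) == data
```

Because the descrambler's register holds only channel bits, a channel error corrupts the current output bit plus one later output bit per tap, and after a bit insertion or deletion the descrambler recovers once reg_len correct channel bits have shifted in, which is the self-synchronization property the abstract emphasizes.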

97 citations

Book ChapterDOI
John E. Savage
24 Aug 1995
TL;DR: The Memory Hierarchy Game, a multi-level pebble game that simulates data movement in memory hierarchies, is introduced as a framework in which to study space-time tradeoffs.
Abstract: The speed of CPUs is accelerating rapidly, outstripping that of peripheral storage devices and making it increasingly difficult to keep CPUs busy. Consequently, multi-level memory hierarchies, scaled to simulate single-level memories, are increasing in importance. In this paper we introduce the Memory Hierarchy Game, a multi-level pebble game that simulates data movement in memory hierarchies, in terms of which we study space-time tradeoffs.
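As a rough, runnable illustration of the data movement that such a pebble game abstracts, the sketch below counts block transfers between a small fast memory and a larger slow memory for an access trace, using an LRU replacement policy. This is a toy stand-in under invented parameters, not the Memory Hierarchy Game's rules.

```python
from collections import OrderedDict

def count_transfers(trace, fast_capacity):
    """Count block transfers from slow to fast memory for an access trace,
    assuming an LRU-managed fast memory holding `fast_capacity` blocks."""
    fast = OrderedDict()                   # block -> None, ordered by recency
    transfers = 0
    for block in trace:
        if block in fast:
            fast.move_to_end(block)        # hit: refresh recency
        else:
            transfers += 1                 # miss: move the block up the hierarchy
            if len(fast) >= fast_capacity:
                fast.popitem(last=False)   # evict the least recently used block
            fast[block] = None
    return transfers

if __name__ == "__main__":
    # Hypothetical trace: three sweeps over 8 blocks with only 4 fast slots.
    trace = list(range(8)) * 3
    print(count_transfers(trace, fast_capacity=4))
```

Varying fast_capacity against the transfer count gives a concrete feel for the space (fast-memory size) versus time (number of transfers) tradeoff that the pebble game is designed to analyze.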

92 citations


Cited by
Journal ArticleDOI
TL;DR: The upper bound is obtained for a specific probabilistic nonsequential decoding algorithm which is shown to be asymptotically optimum for rates above R_{0} and whose performance bears certain similarities to that of sequential decoding algorithms.
Abstract: The probability of error in decoding an optimal convolutional code transmitted over a memoryless channel is bounded from above and below as a function of the constraint length of the code. For all but pathological channels the bounds are asymptotically (exponentially) tight for rates above R_{0} , the computational cutoff rate of sequential decoding. As a function of constraint length the performance of optimal convolutional codes is shown to be superior to that of block codes of the same length, the relative improvement increasing with rate. The upper bound is obtained for a specific probabilistic nonsequential decoding algorithm which is shown to be asymptotically optimum for rates above R_{0} and whose performance bears certain similarities to that of sequential decoding algorithms.

6,804 citations

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
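To make the idea of systematically and automatically checking a model concrete, here is a minimal explicit-state sketch that verifies an invariant and deadlock freedom by exhaustive breadth-first reachability over a finite transition system. It is an illustrative toy with an invented example model, not an algorithm taken from the book.

```python
from collections import deque

def check(initial, successors, invariant):
    """Explore all reachable states breadth-first, reporting an invariant
    violation or a deadlock (a reachable state with no successors)."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return f"invariant violated in state {state}"
        nexts = successors(state)
        if not nexts:
            return f"deadlock in state {state}"
        for s in nexts:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return f"property holds on all {len(seen)} reachable states"

if __name__ == "__main__":
    # Toy model: a counter modulo 5 that may step by 1 or by 2.
    successors = lambda n: [(n + 1) % 5, (n + 2) % 5]
    print(check(0, successors, invariant=lambda n: 0 <= n < 5))
```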

4,905 citations

Journal ArticleDOI
TL;DR: A new computational model for real-time computing on time-varying input is proposed that provides an alternative to paradigms based on Turing machines or attractor neural networks; it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry.
Abstract: A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead, it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous computational model, the liquid state machine, that, unlike Turing machines, does not require sequential transitions between well-defined discrete internal states. It is supported, as the Turing machine is, by rigorous mathematical results that predict universal computational power under idealized conditions, but for the biologically more realistic scenario of real-time processing of time-varying inputs. Our approach provides new perspectives for the interpretation of neural coding, the design of experiments and data analysis in neurophysiology, and the solution of problems in robotics and neurotechnology.
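The architectural idea, a fixed randomly connected recurrent circuit whose transient state is decoded by trained linear readouts, can be sketched with a rate-based reservoir and a least-squares readout. This is a simplified analogue under invented sizes and scalings, not the spiking liquid state machine model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random recurrent "reservoir" (rate-based stand-in for the liquid),
# scaled so its spectral radius is roughly 0.9.
N = 100
W = rng.normal(scale=0.9 / np.sqrt(N), size=(N, N))
W_in = rng.normal(size=N)

def run_reservoir(inputs):
    """Drive the fixed recurrent network and collect its transient states."""
    x = np.zeros(N)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in * u)
        states.append(x.copy())
    return np.array(states)

# Fading-memory task: the readout should reproduce the input delayed by 3 steps.
T, delay = 500, 3
u = rng.uniform(-1, 1, size=T)
target = np.roll(u, delay)
X = run_reservoir(u)

# Train only the linear readout (least squares); the reservoir stays fixed.
w_out, *_ = np.linalg.lstsq(X[delay:], target[delay:], rcond=None)
pred = X[delay:] @ w_out
print("readout mse:", float(np.mean((pred - target[delay:]) ** 2)))
```

The design point this illustrates is that only the readout weights are trained; the recurrent circuitry itself is generic and task-independent, as the abstract describes.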

3,446 citations

MonographDOI
20 Apr 2009
TL;DR: This beginning graduate textbook describes both recent achievements and classical results of computational complexity theory and can be used as a reference for self-study for anyone interested in complexity.
Abstract: This beginning graduate textbook describes both recent achievements and classical results of computational complexity theory. Requiring essentially no background apart from mathematical maturity, the book can be used as a reference for self-study for anyone interested in complexity, including physicists, mathematicians, and other scientists, as well as a textbook for a variety of courses and seminars. More than 300 exercises are included with a selected hint set.

2,965 citations

Journal ArticleDOI
TL;DR: The results show that the proposed multiuser detectors afford important performance gains over conventional single-user systems, in which the signal constellation carries the entire burden of complexity required to achieve a given performance level.
Abstract: Consider a Gaussian multiple-access channel shared by K users who transmit asynchronously independent data streams by modulating a set of assigned signal waveforms. The uncoded probability of error achievable by optimum multiuser detectors is investigated. It is shown that the K -user maximum-likelihood sequence detector consists of a bank of single-user matched filters followed by a Viterbi algorithm whose complexity per binary decision is O(2^{K}) . The upper bound analysis of this detector follows an approach based on the decomposition of error sequences. The issues of convergence and tightness of the bounds are examined, and it is shown that the minimum multiuser error probability is equivalent in the low-noise region to that of a single-user system with reduced power. These results show that the proposed multiuser detectors afford important performance gains over conventional single-user systems, in which the signal constellation carries the entire burden of complexity required to achieve a given performance level.
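The O(2^K) complexity per decision reflects jointly hypothesizing all K users' bits. For a single synchronous symbol interval this reduces to an exhaustive search over the 2^K bit vectors, as in the sketch below, which uses made-up signature waveforms and a synchronous one-shot channel rather than the asynchronous Viterbi-based detector analyzed in the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

K, L = 3, 16                          # users and samples per symbol (illustrative)
S = rng.choice([-1.0, 1.0], size=(K, L)) / np.sqrt(L)   # signature waveforms
true_bits = np.array([1, -1, 1])
r = true_bits @ S + 0.3 * rng.normal(size=L)             # received signal + noise

# Joint maximum-likelihood detection: pick the bit vector whose superposed
# signal is closest to the received waveform (2^K hypotheses).
best, best_err = None, np.inf
for bits in product([-1, 1], repeat=K):
    b = np.array(bits, dtype=float)
    err = np.sum((r - b @ S) ** 2)
    if err < best_err:
        best, best_err = b, err

print("true:", true_bits, "detected:", best.astype(int))
```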

2,300 citations