Journal ArticleDOI

Lower bounds for the complexity of reliable Boolean circuits with noisy gates

01 Mar 1994 - IEEE Transactions on Information Theory (IEEE) - Vol. 40, Iss. 2, pp. 579-583
TL;DR: It is proved that the reliable computation of any Boolean function with sensitivity s requires Ω(s log s) gates if the gates fail independently with a fixed positive probability.
Abstract: It is proved that the reliable computation of any Boolean function with sensitivity s requires Ω(s log s) gates if the gates fail independently with a fixed positive probability. This theorem was stated by Dobrushin and Ortyukov (1977), but their proof was found by Pippenger, Stamoulis, and Tsitsiklis (1991) to contain some errors.
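For reference, the key definition can be made explicit. In our notation (the paper's own notation may differ), the sensitivity of a Boolean function f : {0,1}^n → {0,1} is

    s(f, x) = \bigl|\{\, i \in [n] : f(x) \neq f(x \oplus e_i) \,\}\bigr|,
    \qquad
    s(f) = \max_{x \in \{0,1\}^n} s(f, x),

that is, the largest number of coordinates whose individual flip changes the value of f at some input. The theorem then reads: any circuit whose gates fail independently with a fixed positive probability, and which computes f with error probability bounded away from 1/2, must contain Ω(s(f) log s(f)) gates.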
Citations
Journal ArticleDOI
TL;DR: Working in a common model of fault-tolerance, it is shown that in the asymptotic limit of large circuits, the ratio of physical qubits to logical qubits can be a constant.
Abstract: What is the minimum number of extra qubits needed to perform a large fault-tolerant quantum circuit? Working in a common model of fault-tolerance, I show that in the asymptotic limit of large circuits, the ratio of physical qubits to logical qubits can be a constant. The construction makes use of quantum low-density parity check codes, and the asymptotic overhead of the protocol is equal to that of the family of quantum error-correcting codes underlying the fault-tolerant protocol.
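In symbols (our condensation of the abstract, not notation from the paper): for a family of quantum LDPC codes with parameters [[n, k, d]] and constant rate k/n = r > 0, the protocol's asymptotic overhead is

    \frac{\#\,\text{physical qubits}}{\#\,\text{logical qubits}} = \frac{n}{k} = \frac{1}{r} = O(1),

i.e., the fault-tolerant construction inherits the constant overhead of the underlying code family.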

154 citations

Journal ArticleDOI
TL;DR: The results show that the optimized three-input majority multiplexing (MAJ-3 MUX) outperforms the latest scheme presented in the literature, known as parallel restitution (PAR-REST), by a factor between two and four, for 48 ≤ R ≤ 100.
Abstract: Motivated by the need for economical fault-tolerant designs for nanoarchitectures, we explore a novel multiplexing-based redundant design scheme at small (≤ 100) and very small (≤ 10) redundancy factors. In particular, we adapt a strategy known as von Neumann multiplexing to circuits of majority gates with three inputs and for the first time exactly analyze the performance of a multiplexing scheme for very small redundancies, using combinatorial arguments. We also develop an extension of von Neumann multiplexing that further improves performance by excluding unnecessary restorative stages in the computation. Our results show that the optimized three-input majority multiplexing (MAJ-3 MUX) outperforms the latest scheme presented in the literature, known as parallel restitution (PAR-REST), by a factor between two and four, for 48 ≤ R ≤ 100. Our scheme performs extremely well at very small redundancies, for which our analysis is the only accurate one. Finally, we determine an upper bound on the maximum tolerable failure probability when any redundancy factor may be used. This bound clearly indicates the advantage of using three-input majority gates in terms of reliable operation.
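To make the multiplexing idea concrete, here is a minimal Monte Carlo sketch of von Neumann-style multiplexing with noisy three-input majority gates. It illustrates the general technique only; the bundle size, failure probabilities, and random-permutation wiring below are our assumptions, not the exact MAJ-3 MUX scheme analyzed in the paper.

    import random

    def noisy_maj3(a, b, c, p_fail):
        # Majority of three bits; the gate's output is flipped with probability p_fail.
        out = 1 if a + b + c >= 2 else 0
        if random.random() < p_fail:
            out ^= 1
        return out

    def restorative_stage(bundle, p_fail):
        # One multiplexed stage: three independently permuted copies of the
        # bundle are fed, wire by wire, into independent noisy MAJ-3 gates.
        r = len(bundle)
        a = random.sample(bundle, r)
        b = random.sample(bundle, r)
        c = random.sample(bundle, r)
        return [noisy_maj3(a[i], b[i], c[i], p_fail) for i in range(r)]

    def fraction_correct(r, p_fail, stages, true_value=1):
        # Send a bundle of r wires, all carrying true_value, through several
        # restorative stages and report how many still agree with it.
        bundle = [true_value] * r
        for _ in range(stages):
            bundle = restorative_stage(bundle, p_fail)
        return sum(1 for v in bundle if v == true_value) / r

    random.seed(0)
    print(fraction_correct(r=100, p_fail=0.005, stages=10))  # stays close to 1
    print(fraction_correct(r=100, p_fail=0.2, stages=10))    # decays toward 1/2

Below the majority gate's noise threshold the bundle remains overwhelmingly correct after each restoration; above it, restoration no longer wins against gate noise and the bundle drifts toward a fair coin.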

130 citations

Journal ArticleDOI
TL;DR: An achievability result for reliable memory systems constructed from unreliable components is provided by investigating the effect of noise on standard iterative decoders for low-density parity-check (LDPC) codes.
Abstract: Departing from traditional communication theory where decoding algorithms are assumed to perform without error, a system where noise perturbs both computational devices and communication channels is considered here. This paper studies limits in processing noisy signals with noisy circuits by investigating the effect of noise on standard iterative decoders for low-density parity-check (LDPC) codes. Concentration of decoding performance around its average is shown to hold when noise is introduced into message-passing and local computation. Density evolution equations for simple faulty iterative decoders are derived. In one model, computing nonlinear estimation thresholds shows that performance degrades smoothly as decoder noise increases, but arbitrarily small probability of error is not achievable. Probability of error may be driven to zero in another system model; the decoding threshold again decreases smoothly with decoder noise. As an application of the methods developed, an achievability result for reliable memory systems constructed from unreliable components is provided.
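The style of analysis can be sketched in a few lines for the simplest decoder. The recursion below is the standard Gallager-A density evolution for a (d_v, d_c)-regular LDPC ensemble on a binary symmetric channel; as a simplifying assumption of ours (in the spirit of, but not identical to, the paper's models), decoder noise is modeled by flipping each outgoing message independently with probability eps.

    def noisy_gallager_a_de(x0, eps, dv=3, dc=6, iters=200):
        # Density evolution: x is the probability that a variable-to-check
        # message is in error; x0 is the BSC crossover probability.
        x = x0
        for _ in range(iters):
            # Probability that a check-to-variable message is in error:
            # an odd number of the dc-1 incoming messages are wrong.
            q = (1.0 - (1.0 - 2.0 * x) ** (dc - 1)) / 2.0
            # Gallager-A variable rule: repeat the channel value unless all
            # dv-1 incoming check messages agree on its complement.
            x_clean = x0 * (1.0 - (1.0 - q) ** (dv - 1)) + (1.0 - x0) * q ** (dv - 1)
            # Decoder noise: pass each message through a BSC(eps).
            x = x_clean * (1.0 - eps) + (1.0 - x_clean) * eps
        return x

    print(noisy_gallager_a_de(x0=0.02, eps=0.0))   # noiseless decoder: error -> 0
    print(noisy_gallager_a_de(x0=0.02, eps=1e-4))  # noisy decoder: small residual error
    print(noisy_gallager_a_de(x0=0.06, eps=1e-4))  # above threshold: error stays large

The middle line mirrors one of the paper's qualitative findings: with faulty message-passing hardware, performance degrades smoothly, but the error probability no longer converges to zero.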

128 citations

Journal ArticleDOI
TL;DR: A novel, highly noise-tolerant computer architecture based on the work of von Neumann is presented that may enable the construction of reliable nanocomputers composed of noisy gates, together with a tentative thermodynamic theory of noisy computation that might set fundamental physical limits on scaling classical computation to the nanoscale.
Abstract: Nanoelectronic devices are anticipated to become exceedingly noisy as they are scaled towards thermodynamic limits. Hence the development of nanoscale classical information systems will require optimal schemes for reliable information processing in the presence of noise. We present a novel, highly noise-tolerant computer architecture based on the work of von Neumann that may enable the construction of reliable nanocomputers composed of noisy gates. The fundamental principles of this technique of parallel restitution are parallel processing by redundant logic gates, parallelism in the interconnects between gate resources, and intermittent signal restitution performed in parallel. The results of our mathematical model, verified by Monte Carlo simulations, show that nanoprocessors consisting of gates incorporating this technique can be made 90% reliable over 10 years of continuous operation with a gate error probability per actuation of and a redundancy of . This compares very favourably with corresponding results utilizing modular redundant architectures of with , and with no noise tolerance. Arbitrary reliability is possible within a noise limit of , with massive redundancy. We show parallel restitution to be a general paradigm applicable to different kinds of information processing, including neural communication. Significantly, we show how our treatment of para-restituted computation as a statistical ensemble coupled to a heat bath allows consideration of the computation entropy of logic gates, and tentatively sketch a thermodynamic theory of noisy computation that might set fundamental physical limits on scaling classical computation to the nanoscale. Our preliminary work indicates that classical computation may be confined to the macroscale by noise, quantum computation possibly being the only information processing possible at the extreme nanoscale.

92 citations


Cites background from "Lower bounds for the complexity of ..."

  • ...Their essentially correct arguments were later re-proved by Gács and Gál [41]....

  • ...One of the shortcomings of von Neumann’s analysis and that of many subsequent investigators has been the assumption of perfect transmission of bits along interconnects [40, 41]....

Journal ArticleDOI
TL;DR: A Boolean function of n variables is exhibited that has sensitivity O(√n) and block sensitivity Ω(n), which demonstrates a quadratic separation of the two measures.
Abstract: Sensitivity and block sensitivity are important measures of complexity of Boolean functions. In this note we exhibit a Boolean function of n variables that has sensitivity O(√n) and block sensitivity Ω(n). This demonstrates a quadratic separation of the two measures.
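The separation is easy to check computationally on small instances. The sketch below uses a Rubinstein-style function (our construction, following the standard presentation: n = K² inputs split into K blocks of K bits, with f = 1 iff some block contains exactly two ones in adjacent positions). Because f is an OR of independent per-block conditions, its sensitivity can be computed from the 2^K block patterns alone, and the all-zeros input exhibits K·⌊K/2⌋ disjoint sensitive blocks.

    from itertools import product

    def block_satisfied(b):
        # A block is 'satisfied' iff it has exactly two ones, in adjacent positions.
        ones = [i for i, v in enumerate(b) if v]
        return len(ones) == 2 and ones[1] == ones[0] + 1

    def separation(K):
        # m0: over unsatisfied patterns, the max number of single-bit flips
        #     that satisfy the block.
        # m1: over satisfied patterns, the max number of flips that unsatisfy it.
        m0 = m1 = 0
        for b in product((0, 1), repeat=K):
            sat = block_satisfied(b)
            flips = sum(
                block_satisfied(b[:i] + (1 - b[i],) + b[i + 1:]) != sat
                for i in range(K)
            )
            if sat:
                m1 = max(m1, flips)
            else:
                m0 = max(m0, flips)
        # If f(x) = 0, each of the K blocks contributes independently: s <= K*m0.
        # If f(x) = 1, a single flip changes f only when exactly one block is
        # satisfied, and only flips inside that block count: s <= m1.
        s = max(K * m0, m1)
        # At the all-zeros input, flipping any of the K*(K//2) disjoint adjacent
        # pairs satisfies one block, so bs(f) >= K*(K//2), about n/2.
        bs_lower = K * (K // 2)
        return s, bs_lower

    for K in (4, 6, 8, 10):
        s, bs = separation(K)
        print(f"n = {K*K:3d}:  s(f) = {s:2d},  bs(f) >= {bs:2d}")

For this family s(f) grows like 2√n while bs(f) grows like n/2, which is exactly the quadratic gap the note establishes.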

85 citations

References
Journal ArticleDOI
TL;DR: A basic part of the general synthesis problem is the design of a two-terminal network with given operating characteristics, and this work shall consider some aspects of this problem.
Abstract: The theory of switching circuits may be divided into two major divisions, analysis and synthesis. The problem of analysis, determining the manner of operation of a given switching circuit, is comparatively simple. The inverse problem of finding a circuit satisfying certain given operating conditions, and in particular the best circuit, is, in general, more difficult and more important from the practical standpoint. A basic part of the general synthesis problem is the design of a two-terminal network with given operating characteristics, and we shall consider some aspects of this problem.

774 citations

Journal ArticleDOI
TL;DR: This paper gives a full characterization of the time needed to compute a Boolean function on a CREW PRAM with an unlimited number of processors.
Abstract: This paper gives a full characterization of the time needed to compute a Boolean function on a CREW PRAM with an unlimited number of processors. The characterization is given in terms of a new complexity measure of Boolean functions: the block sensitivity.

213 citations


"Lower bounds for the complexity of ..." refers background in this paper

  • ...This measure of complexity was introduced by Nisan in [10]....

  • ...It is shown in [10] that for all monotone functions, the sensitivity equals the block sensitivity, but for non-monotone functions the inequality may be strict....

Proceedings ArticleDOI
21 Oct 1985
TL;DR: It is shown that many Boolean functions (including, in a certain sense, "almost all" Boolean functions) have the property that the number of noisy gates needed to compute them differs from the numberof noiseless gates by at most a constant factor.
Abstract: We show that many Boolean functions (including, in a certain sense, "almost all" Boolean functions) have the property that the number of noisy gates needed to compute them differs from the number of noiseless gates by at most a constant factor. This may be contrasted with results of von Neumann, Dobrushin and Ortyukov to the effect that (1) for every Boolean function, the number of noisy gates needed is larger by at most a logarithmic factor, and (2) for some Boolean functions, it is larger by at least a logarithmic factor.
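Writing L(f) for the number of noiseless gates and L_ε(f) for the number of ε-noisy gates needed to compute f (our shorthand for this summary), the results cited here and in the main paper line up as:

    L_\epsilon(f) = O\bigl(L(f) \log L(f)\bigr) \text{ for every } f \quad \text{(von Neumann; Dobrushin and Ortyukov)}
    L_\epsilon(f) = O\bigl(L(f)\bigr) \text{ for almost all } f \quad \text{(Pippenger)}
    L_\epsilon(f) = \Omega\bigl(s(f) \log s(f)\bigr) \text{ for every } f \quad \text{(the paper above)}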

196 citations


"Lower bounds for the complexity of ..." refers background or result in this paper

  • ...It has been argued (Pippenger [11]) that for proving lower bounds this is the best model to consider, as opposed to proving upper bounds, where the assumption that the gates fail independently with probability at most ε ∈ (0, 1/2) is more appropriate....

  • ...Pippenger [11] proved that any function depending on n variables can be computed by O(2n/n) noisy gates....

  • ...Pippenger [11] also exhibited specific functions with constant redundancy....

  • ...Pippenger, Stamoulis and Tsitsiklis [12] pointed out the questionable arguments in the proof, and suggested that part of the strategy seemed hopelessly flawed....

  • ...It is natural to ask whether there exist functions with nonconstant redundancy or whether the O(L log L) upper bound of [9], [3], [11] is tight for some functions....

Proceedings ArticleDOI
01 Feb 1989
TL;DR: The results imply that changes in the instruction set of the processors or in the capacity of the shared memory cells do not change by more than a constant factor the time required by a CREW PRAM to compute any Boolean function.
Abstract: This paper gives a full characterization of the time needed to compute a Boolean function on a CREW PRAM with an unlimited number of processors. The characterization is given in terms of a new complexity measure of Boolean functions: the “block sensitivity”. This measure is a generalization of the well known “critical sensitivity” measure (see [W], [CDR], [Si]). The block sensitivity is also shown to relate to the Boolean decision tree complexity, and the implication is that the decision tree complexity also fully characterizes the CREW PRAM complexity. This solves an open problem of [W]. Our results imply that changes in the instruction set of the processors or in the capacity of the shared memory cells do not change by more than a constant factor the time required by a CREW PRAM to compute any Boolean function. Moreover, we even show that a seemingly weaker version of a CREW PRAM, the CROW PRAM ([DR]), can compute functions as quickly as a general CREW PRAM. This solves an open problem of [DR]. Finally, our results have implications regarding the power of randomization in the Boolean decision tree model. We show that in this model, randomization may only achieve a polynomial speedup over deterministic computation. This was known for Las Vegas randomized computation; we prove it also for 1-sided error computation (a quadratic bound) and 2-sided error (a cubic bound).
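In symbols (our condensed reading of the abstract, writing CREW(f) for CREW PRAM time, bs(f) for block sensitivity, D(f) for deterministic decision tree complexity, and R_1(f), R_2(f) for one- and two-sided-error randomized decision tree complexity):

    \mathrm{CREW}(f) = \Theta\bigl(\log \mathrm{bs}(f)\bigr), \qquad
    D(f) = O\bigl(R_1(f)^2\bigr), \qquad
    D(f) = O\bigl(R_2(f)^3\bigr)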

152 citations