Journal ArticleDOI

On the inherent intractability of certain coding problems (Corresp.)

TL;DR: The general decoding problem for linear codes and the general problem of finding the weights of a linear code are shown to be NP-complete, which strongly suggests, but does not rigorously imply, that no polynomial-time algorithm exists for either problem.
Abstract: The fact that the general decoding problem for linear codes and the general problem of finding the weights of a linear code are both NP-complete is shown. This strongly suggests, but does not rigorously imply, that no algorithm for either of these problems which runs in polynomial time exists.
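
As a concrete illustration of the decision problem whose NP-completeness is shown (often called COSET WEIGHTS: given a parity-check matrix H, a syndrome s, and a weight bound w, decide whether some error vector e with He = s (mod 2) has weight at most w), here is a small brute-force sketch in Python. The function and variable names are illustrative only, and the exhaustive search is exactly the kind of exponential-time behavior the NP-completeness result suggests cannot be avoided in general.

```python
# Hypothetical brute-force sketch of the COSET WEIGHTS decision problem:
# search all error patterns of weight <= w for one matching the syndrome.
from itertools import combinations

import numpy as np


def coset_weight_at_most(H: np.ndarray, s: np.ndarray, w: int) -> bool:
    """Is there a binary e with H e = s (mod 2) and Hamming weight <= w?"""
    n = H.shape[1]
    if not s.any():                       # the all-zero error always matches syndrome 0
        return True
    for weight in range(1, w + 1):
        for support in combinations(range(n), weight):
            e = np.zeros(n, dtype=int)
            e[list(support)] = 1
            if np.array_equal(H @ e % 2, s % 2):
                return True
    return False


# Tiny example: parity-check matrix of the [7,4] Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
syndrome = np.array([1, 1, 0])
print(coset_weight_at_most(H, syndrome, 1))   # True: a single-bit error explains it
```
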
Citations
Book
01 Jan 1996
TL;DR: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access to information and includes more than 200 algorithms and protocols.
Abstract: From the Publisher: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access to information and includes more than 200 algorithms and protocols; more than 200 tables and figures; more than 1,000 numbered definitions, facts, examples, notes, and remarks; and over 1,250 significant references, including brief comments on each paper.

13,597 citations

Book
16 Dec 2008
TL;DR: The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.
Abstract: The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances — including the key problems of computing marginals and modes of probability distributions — are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.
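
For readers unfamiliar with the variational formalism the abstract summarizes, the central object is the conjugate-dual representation of the cumulant (log-partition) function of an exponential family. The following standard identity (stated here in generic notation, not quoted from the book) is the starting point for the sum-product, mean-field, and relaxation methods listed above.

```latex
% Exponential family: p_\theta(x) \propto \exp\{\langle \theta, \phi(x) \rangle\}.
% \mathcal{M} is the set of realizable mean parameters \mu = \mathbb{E}[\phi(X)],
% and A^{*} is the conjugate dual of A (the negative entropy on \mathcal{M}).
A(\theta) \;=\; \sup_{\mu \in \mathcal{M}} \bigl\{ \langle \theta, \mu \rangle - A^{*}(\mu) \bigr\}
```

Mean-field methods restrict the supremum to a tractable inner approximation of the mean-parameter set, while Bethe/sum-product and the conic or linear programming relaxations replace the set and the entropy term with outer approximations and entropy surrogates.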

4,335 citations


Cites background from "On the inherent intractability of certain coding problems"

  • ...The problem of maximum likelihood (ML) decoding is an integer program that is well known to be computationally intractable [16]....


Journal ArticleDOI
29 Jun 1997
TL;DR: It is proved that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit, and experimental results for binary-symmetric channels and Gaussian channels demonstrate that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal (1995)) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good", in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.
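
To make the "practical sum-product algorithm" mentioned above concrete, here is a minimal belief-propagation decoder sketch in Python/numpy for a binary-symmetric channel. It is an assumption-laden illustration (dense matrix operations, a tiny Hamming-code example rather than a sparse Gallager or MN code), not the authors' implementation.

```python
# Minimal sum-product (belief-propagation) decoding sketch for a binary code
# with parity-check matrix H, received over a BSC with crossover probability p.
import numpy as np


def sum_product_decode(H, y, p, max_iters=50):
    """Return a hard-decision codeword estimate from received bits y."""
    m, n = H.shape
    llr = (1 - 2 * y.astype(float)) * np.log((1 - p) / p)   # channel log-likelihood ratios
    msg_vc = np.tile(llr, (m, 1)) * H                        # variable-to-check messages
    x_hat = y.astype(int)
    for _ in range(max_iters):
        # Check-node update: E[c, v] = 2 * atanh( prod_{v' != v} tanh(M[c, v'] / 2) )
        t = np.tanh(np.clip(msg_vc / 2, -20, 20))
        msg_cv = np.zeros_like(msg_vc)
        for c in range(m):
            vs = np.flatnonzero(H[c])
            for v in vs:
                prod_others = np.prod(t[c, vs[vs != v]])
                msg_cv[c, v] = 2 * np.arctanh(np.clip(prod_others, -0.999999, 0.999999))
        # Variable-node update and tentative hard decision
        total = llr + msg_cv.sum(axis=0)
        msg_vc = (total - msg_cv) * H
        x_hat = (total < 0).astype(int)
        if not np.any(H @ x_hat % 2):        # all parity checks satisfied
            return x_hat
    return x_hat


# Tiny example: the [7,4] Hamming code (not sparse, but fine for illustration).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
received = np.zeros(7, dtype=int)
received[2] ^= 1                               # flip one bit of the all-zero codeword
print(sum_product_decode(H, received, p=0.1))  # recovers the all-zero codeword
```
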

3,842 citations

Proceedings ArticleDOI
01 Nov 1999
TL;DR: Because the fuzzy commitment scheme is tolerant of error, it is capable of protecting biometric data just as conventional cryptographic techniques, like hash functions, are used to protect alphanumeric passwords.
Abstract: We combine well-known techniques from the areas of error-correcting codes and cryptography to achieve a new type of cryptographic primitive that we refer to as a fuzzy commitment scheme. Like a conventional cryptographic commitment scheme, our fuzzy commitment scheme is both concealing and binding: it is infeasible for an attacker to learn the committed value, and also for the committer to decommit a value in more than one way. In a conventional scheme, a commitment must be opened using a unique witness, which acts, essentially, as a decryption key. By contrast, our scheme is fuzzy in the sense that it accepts a witness that is close to the original encrypting witness in a suitable metric, but not necessarily identical. This characteristic of our fuzzy commitment scheme makes it useful for applications such as biometric authentication systems, in which data is subject to random noise. Because the scheme is tolerant of error, it is capable of protecting biometric data just as conventional cryptographic techniques, like hash functions, are used to protect alphanumeric passwords. This addresses a major outstanding problem in the theory of biometric authentication. We prove the security characteristics of our fuzzy commitment scheme relative to the properties of an underlying cryptographic hash function.
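
A toy sketch of the commit/open mechanics may help: the committer hides a random codeword c behind the witness and publishes only a hash of c together with the offset between c and the witness; any witness close enough to the original lets the verifier decode back to c and check the hash. The code below uses a 3x repetition code and SHA-256 purely for illustration; all names and parameters are hypothetical and far too weak for real use.

```python
# Illustrative fuzzy-commitment sketch: repetition code + SHA-256, toy parameters.
import hashlib
import secrets

import numpy as np

N = 5                                   # message bits; codewords have 3*N bits


def encode(bits):                       # repetition-code encoder
    return np.repeat(bits, 3)


def decode(bits):                       # majority-vote decoder
    return (bits.reshape(-1, 3).sum(axis=1) >= 2).astype(int)


def commit(witness):
    """Commit to a random codeword c, blinded by the (noisy) witness."""
    msg = np.array([secrets.randbelow(2) for _ in range(N)])
    c = encode(msg)
    offset = c ^ witness                          # published offset: c XOR witness
    digest = hashlib.sha256(c.tobytes()).hexdigest()
    return digest, offset


def open_commitment(digest, offset, witness_prime):
    """Open with a witness that may differ from the original in a few bits."""
    msg_prime = decode(offset ^ witness_prime)    # decode back toward the hidden codeword
    return hashlib.sha256(encode(msg_prime).tobytes()).hexdigest() == digest


# Usage: a noisy re-reading of the witness (one flipped bit) still opens the commitment.
w = np.array([secrets.randbelow(2) for _ in range(3 * N)])
d, off = commit(w)
w_noisy = w.copy()
w_noisy[0] ^= 1
print(open_commitment(d, off, w_noisy))   # True
```
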

1,744 citations

Journal ArticleDOI
TL;DR: Stagewise Orthogonal Matching Pursuit (StOMP) successively transforms the signal into a negligible residual; numerical examples are given showing that StOMP rapidly and reliably finds sparse solutions in compressed sensing, decoding of error-correcting codes, and overcomplete representation.
Abstract: Finding the sparsest solution to underdetermined systems of linear equations y = Φx is NP-hard in general. We show here that for systems with “typical”/“random” Φ, a good approximation to the sparsest solution is obtained by applying a fixed number of standard operations from linear algebra. Our proposal, Stagewise Orthogonal Matching Pursuit (StOMP), successively transforms the signal into a negligible residual. Starting with the initial residual r_0 = y, at the s-th stage it forms the “matched filter” Φ^T r_{s-1}, identifies all coordinates with amplitudes exceeding a specially chosen threshold, solves a least-squares problem using the selected coordinates, and subtracts the least-squares fit, producing a new residual. After a fixed number of stages (e.g., 10), it stops. In contrast to Orthogonal Matching Pursuit (OMP), many coefficients can enter the model at each stage in StOMP while only one enters per stage in OMP; and StOMP takes a fixed number of stages (e.g., 10), while OMP can take many (e.g., n). We give both theoretical and empirical support for the large-system effectiveness of StOMP. We give numerical examples showing that StOMP rapidly and reliably finds sparse solutions in compressed sensing, decoding of error-correcting codes, and overcomplete representation.
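
The stage structure described above is simple enough to sketch directly. The following numpy code follows the matched-filter / threshold / least-squares / subtract loop, but uses a simplified constant-multiple threshold rule in place of the paper's false-alarm and false-discovery calibrated thresholds, so it should be read as an approximation of StOMP rather than the authors' algorithm.

```python
# Simplified StOMP-style loop: matched filter, hard threshold, least squares on
# the accumulated support, subtract the fit, repeat for a fixed number of stages.
import numpy as np


def stomp(Phi, y, stages=10, t=2.5):
    """Return an approximately sparse x with y ~ Phi @ x."""
    n_rows, n_cols = Phi.shape
    residual = y.astype(float).copy()
    support = np.zeros(n_cols, dtype=bool)
    x = np.zeros(n_cols)
    for _ in range(stages):
        if np.linalg.norm(residual) < 1e-10 * np.linalg.norm(y):
            break                                        # residual already negligible
        corr = Phi.T @ residual                          # matched filter
        threshold = t * np.linalg.norm(residual) / np.sqrt(n_rows)
        new = np.abs(corr) > threshold                   # hard thresholding
        if not new.any():
            break
        support |= new
        cols = np.flatnonzero(support)
        coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)  # LS on current support
        x = np.zeros(n_cols)
        x[cols] = coef
        residual = y - Phi @ x                           # subtract the least-squares fit
    return x


# Usage on an easy noiseless instance with a 4-sparse ground truth.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, size=4, replace=False)] = [3.0, -2.0, 1.5, -2.5]
y = Phi @ x_true
x_hat = stomp(Phi, y)
print(np.allclose(x_hat, x_true, atol=1e-6))   # typically True on such an easy instance
```
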

1,416 citations

References
Journal ArticleDOI
01 Jan 1975 - Networks
TL;DR: A large class of classical combinatorial problems, including most of the difficult problems in the literature of network flows and computational graph theory, are shown to be equivalent, in the sense that either all or none of them can be solved in polynomial time.
Abstract: A large class of classical combinatorial problems, including most of the difficult problems in the literature of network flows and computational graph theory, are shown to be equivalent, in the sense that either all or none of them can be solved in polynomial time. Moreover, they are solvable in polynomial time if and only if every problem solvable by a polynomial-depth backtrack search is also solvable by a polynomial-time algorithm. The technique of deriving these results through problem transformations is explained, and some comments are made as to the probable effect of these results on research in the field of combinatorial algorithms.

613 citations