Topic

Binary symmetric channel

About: A binary symmetric channel (BSC) is a channel model that flips each transmitted bit independently with a fixed crossover probability p; its capacity is 1 - H(p) bits per channel use, where H is the binary entropy function. Over the lifetime of the topic, 1,618 publications have been published, receiving 53,414 citations.
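
For readers new to the topic, here is a minimal Python sketch (with an assumed crossover probability of p = 0.1, chosen only for illustration) that passes random bits through a BSC and compares the empirical error rate with p and with the capacity formula 1 - H(p).

import math, random

def bsc(bits, p, rng=random.Random(0)):
    # Flip each bit independently with crossover probability p.
    return [b ^ (rng.random() < p) for b in bits]

def bsc_capacity(p):
    # Shannon capacity of the BSC: C = 1 - H(p) bits per channel use.
    if p in (0.0, 1.0):
        return 1.0
    return 1.0 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

p = 0.1                                              # assumed crossover probability
tx = [random.getrandbits(1) for _ in range(100_000)]
rx = bsc(tx, p)
ber = sum(a != b for a, b in zip(tx, rx)) / len(tx)
print(f"empirical error rate {ber:.3f}, capacity {bsc_capacity(p):.3f} bits/use")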


Papers
Book
01 Jan 1963
TL;DR: A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described and the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length.
Abstract: A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.

11,592 citations
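
The decoder described above works from the channel a posteriori probabilities. As a more modest illustration of decoding a parity-check code over a binary symmetric channel, the sketch below implements a generic hard-decision bit-flipping decoder on a toy parity-check matrix. Both the matrix and the decoding rule are assumptions made purely for illustration: the matrix is far too small to be a genuine low-density code (it does not have column weight j ≥ 3), and the scheme is not the one from the book.

import numpy as np

# Toy parity-check matrix (illustrative only; a genuine low-density code would
# have fixed column weight j >= 3 and a much larger block length).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)

def bit_flip_decode(H, y, max_iters=20):
    # Repeatedly flip the bit that participates in the most unsatisfied checks.
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2
        if not syndrome.any():
            break                                  # all parity checks satisfied
        counts = H.T.dot(syndrome)                 # unsatisfied checks per bit
        x[np.argmax(counts)] ^= 1
    return x

codeword = np.array([1, 1, 0, 0, 1, 1])            # satisfies H @ codeword = 0 (mod 2)
received = codeword.copy()
received[0] ^= 1                                   # one error introduced by the BSC
print(bit_flip_decode(H, received))                # recovers the codeword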

Book
01 Jan 1968
TL;DR: This book covers Coding for Discrete Sources, Techniques for Coding and Decoding, and Source Coding with a Fidelity Criterion.
Abstract: Communication Systems and Information Theory. A Measure of Information. Coding for Discrete Sources. Discrete Memoryless Channels and Capacity. The Noisy-Channel Coding Theorem. Techniques for Coding and Decoding. Memoryless Channels with Discrete Time. Waveform Channels. Source Coding with a Fidelity Criterion. Index.

6,684 citations

Journal ArticleDOI
TL;DR: The results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon.
Abstract: We present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.

3,393 citations
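
As a concrete taste of the kind of threshold computation the paper describes, the sketch below runs density evolution for Gallager's hard-decision algorithm A (a simple binary message-passing rule, much weaker than belief propagation) on a (3,6)-regular ensemble over the binary symmetric channel, and bisects for the largest crossover probability that is still driven to zero error. The (3,6) parameters, iteration counts, and tolerances are assumptions chosen for illustration.

def gallager_a_threshold(dv=3, dc=6, iters=500, tol=1e-9):
    # Density evolution for Gallager's algorithm A on the BSC: track the
    # message error probability and find (by bisection) the largest crossover
    # probability p0 for which it is numerically driven to zero.
    def converges(p0):
        p = p0
        for _ in range(iters):
            q = (1 - 2 * p) ** (dc - 1)
            p = (p0 * (1 - ((1 + q) / 2) ** (dv - 1))
                 + (1 - p0) * ((1 - q) / 2) ** (dv - 1))
            if p < tol:
                return True
        return False

    lo, hi = 0.0, 0.5
    for _ in range(50):                 # bisection on p0
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    return lo

print(gallager_a_threshold(3, 6))       # roughly 0.04 for the (3,6) ensemble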

Journal ArticleDOI
TL;DR: By partitioning the range of the received signal-to-noise ratio into a finite number of intervals, FSMC models can be constructed for Rayleigh fading channels and the validity and accuracy of the model are confirmed by the state equilibrium equations and computer simulation.
Abstract: The authors first study the behavior of a finite-state channel where a binary symmetric channel is associated with each state and Markov transitions between states are assumed. Such a channel is referred to as a finite-state Markov channel (FSMC). By partitioning the range of the received signal-to-noise ratio (SNR) into a finite number of intervals, FSMC models can be constructed for Rayleigh fading channels. A theoretical analysis shows the usefulness of FSMCs compared with two-state Gilbert-Elliott channels. The crossover probabilities of the binary symmetric channels associated with the states are calculated, and the second-order statistics of the received SNR are used to approximate the Markov transition probabilities. The validity and accuracy of the model are confirmed by the state equilibrium equations and computer simulation.

1,742 citations
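
The construction in the abstract can be mimicked, at a much cruder level, by a short simulation: a first-order Markov chain over a handful of states, each state acting as a binary symmetric channel with its own crossover probability. The three-state transition matrix and crossover probabilities below are invented placeholders, not values derived from an SNR partition of a Rayleigh fading channel as in the paper.

import random

# Illustrative 3-state FSMC: the state evolves as a first-order Markov chain,
# and each state is a BSC with its own crossover probability.
P = [[0.90, 0.10, 0.00],
     [0.05, 0.90, 0.05],
     [0.00, 0.10, 0.90]]
crossover = [0.001, 0.01, 0.1]

def fsmc_transmit(bits, P, crossover, rng=random.Random(1)):
    state, out = 0, []
    for b in bits:
        # Advance the Markov chain, then pass the bit through that state's BSC.
        state = rng.choices(range(len(P)), weights=P[state])[0]
        out.append(b ^ (rng.random() < crossover[state]))
    return out

tx = [random.getrandbits(1) for _ in range(10_000)]
rx = fsmc_transmit(tx, P, crossover)
print("overall bit-error rate:", sum(a != b for a, b in zip(tx, rx)) / len(tx))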

Journal ArticleDOI
TL;DR: A simple algorithm for computing channel capacity is suggested that consists of a mapping from the set of channel input probability vectors into itself such that the sequence of probability vectors generated by successive applications of the mapping converges to the vector that achieves the capacity of the given channel.
Abstract: By defining mutual information as a maximum over an appropriate space, channel capacities can be defined as double maxima and rate-distortion functions as double minima. This approach yields valuable new insights regarding the computation of channel capacities and rate-distortion functions. In particular, it suggests a simple algorithm for computing channel capacity that consists of a mapping from the set of channel input probability vectors into itself such that the sequence of probability vectors generated by successive applications of the mapping converges to the vector that achieves the capacity of the given channel. Analogous algorithms then are provided for computing rate-distortion functions and constrained channel capacities. The algorithms apply both to discrete and to continuous alphabet channels or sources. In addition, a formalization of the theory of channel capacity in the presence of constraints is included. Among the examples is the calculation of close upper and lower bounds to the rate-distortion function of a binary symmetric Markov source.

1,472 citations
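
The capacity algorithm sketched in this abstract is what is now commonly called the Blahut-Arimoto algorithm. Below is a hedged Python rendering of that alternating-maximization loop for a discrete memoryless channel, applied to a binary symmetric channel with an assumed crossover probability of 0.11 so the result can be checked against 1 - H(p); the variable names and stopping rule are mine.

import math

def blahut_arimoto(W, tol=1e-9, max_iters=1000):
    # W[x][y] = channel transition probability P(y|x).
    # Alternate between updating the output distribution q and re-weighting the
    # input distribution p; return the capacity estimate in bits per channel use.
    nx, ny = len(W), len(W[0])
    p = [1.0 / nx] * nx
    for _ in range(max_iters):
        q = [sum(p[x] * W[x][y] for x in range(nx)) for y in range(ny)]
        c = [math.exp(sum(W[x][y] * math.log(W[x][y] / q[y])
                          for y in range(ny) if W[x][y] > 0))
             for x in range(nx)]
        z = sum(p[x] * c[x] for x in range(nx))
        lower, upper = math.log(z), math.log(max(c))   # bounds on capacity (nats)
        p = [p[x] * c[x] / z for x in range(nx)]
        if upper - lower < tol:
            break
    return lower / math.log(2)

p_cross = 0.11                       # assumed BSC crossover probability
W = [[1 - p_cross, p_cross], [p_cross, 1 - p_cross]]
h = -p_cross * math.log2(p_cross) - (1 - p_cross) * math.log2(1 - p_cross)
print(blahut_arimoto(W), 1 - h)      # both should be close to 1 - H(0.11)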


Network Information

Related Topics (5)

Fading: 55.4K papers, 1M citations, 86% related
Base station: 85.8K papers, 1M citations, 83% related
Wireless network: 122.5K papers, 2.1M citations, 81% related
Network packet: 159.7K papers, 2.2M citations, 81% related
Upper and lower bounds: 56.9K papers, 1.1M citations, 79% related
Performance Metrics

No. of papers in the topic in previous years:

Year: Papers
2023: 17
2022: 31
2021: 41
2020: 45
2019: 34
2018: 37