Author

Matthew C. Davey

Bio: Matthew C. Davey is an academic researcher from the University of Cambridge. The author has contributed to research in topics: Low-density parity-check code & Raptor code. The author has an h-index of 4 and has co-authored 5 publications receiving 2,181 citations.

Papers
Journal ArticleDOI
TL;DR: A significant improvement over the performance of the binary codes is found, including a rate 1/4 code with bit error probability < 10^-5 at Eb/N0 = 0.2 dB.
Abstract: Gallager's (1962) low-density binary parity check codes have been shown to have near-Shannon limit performance when decoded using a probabilistic decoding algorithm. We report the empirical results of error-correction using the analogous codes over GF(q) for q > 2, with binary symmetric channels and binary Gaussian channels. We find a significant improvement over the performance of the binary codes, including a rate 1/4 code with bit error probability < 10^-5 at Eb/N0 = 0.2 dB.
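The operating point quoted above can be made concrete with the standard conversion from Eb/N0 to the noise level used when simulating a binary code on a Gaussian channel. The snippet below is an illustrative sketch, not the authors' code; the function name and the BPSK (+/-1) signalling assumption are mine.

```python
import math

def awgn_sigma(ebn0_db: float, rate: float) -> float:
    """Noise standard deviation for BPSK (+/-1) signalling on an AWGN channel,
    given Eb/N0 in dB and the code rate R: sigma^2 = 1 / (2 * R * Eb/N0)."""
    ebn0_linear = 10 ** (ebn0_db / 10.0)          # dB -> linear scale
    return math.sqrt(1.0 / (2.0 * rate * ebn0_linear))

# Operating point reported in the abstract: a rate 1/4 code at Eb/N0 = 0.2 dB
print(awgn_sigma(0.2, 0.25))                      # ~1.38, i.e. noise power well above signal power
```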

1,284 citations

Proceedings ArticleDOI
22 Jun 1998
TL;DR: The results of Monte Carlo simulations of the decoding of infinite LDPC codes, which can be used to obtain good constructions for finite codes, are presented, along with empirical results for the Gaussian channel.
Abstract: Binary low-density parity-check (LDPC) codes have been shown to have near-Shannon limit performance when decoded using a probabilistic decoding algorithm. The analogous codes defined over finite fields GF(q) of order q > 2 show significantly improved performance. We present the results of Monte Carlo simulations of the decoding of infinite LDPC codes, which can be used to obtain good constructions for finite codes. We also present empirical results for the Gaussian channel, including a rate 1/4 code with bit error probability of 10^-4 at Eb/N0 = -0.05 dB.
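What is simulated here for "infinite" codes is the message-passing analysis now known as density evolution: instead of decoding a particular finite code, one tracks the population of messages exchanged on a cycle-free graph. The following Monte Carlo sketch illustrates that idea for a regular (dv, dc) binary ensemble on a binary-input Gaussian channel; it is written under my own assumptions (all-zero codeword, regular ensemble, chosen names and parameters) and is not the construction-optimisation procedure described in the paper.

```python
import numpy as np

def mc_density_evolution(ebn0_db, dv, dc, n_samples=100_000, n_iters=50, seed=0):
    """Monte Carlo density evolution for a regular (dv, dc) binary LDPC ensemble
    on a binary-input AWGN channel: track a population of LLR messages on the
    infinite, cycle-free graph and record the message error rate per iteration."""
    rng = np.random.default_rng(seed)
    rate = 1.0 - dv / dc
    sigma = np.sqrt(1.0 / (2.0 * rate * 10 ** (ebn0_db / 10.0)))
    # Channel LLRs assuming the all-zero codeword is transmitted as +1: LLR = 2y / sigma^2
    chan = 2.0 * (1.0 + sigma * rng.standard_normal(n_samples)) / sigma ** 2
    v2c = chan.copy()
    errors = []
    for _ in range(n_iters):
        # Check-node update: combine dc-1 independently drawn incoming messages (tanh rule).
        inc = rng.choice(v2c, size=(n_samples, dc - 1))
        c2v = 2.0 * np.arctanh(np.clip(np.prod(np.tanh(inc / 2.0), axis=1),
                                       -0.999999, 0.999999))
        # Variable-node update: channel LLR plus dv-1 independently drawn check messages.
        inc = rng.choice(c2v, size=(n_samples, dv - 1))
        v2c = chan + inc.sum(axis=1)
        errors.append(float(np.mean(v2c < 0)))    # fraction of wrong-sign messages
    return errors

# Hypothetical operating point: a (3,6)-regular ensemble (rate 1/2) at Eb/N0 = 1.5 dB
print(mc_density_evolution(1.5, 3, 6)[-1])
```

Sweeping the channel parameter and watching whether this error rate tends to zero is what yields the thresholds used to compare candidate constructions.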

502 citations

Book ChapterDOI
01 Jan 2001
TL;DR: This paper first explores the theoretical properties of binary Gallager codes with very high rates and observes that Gallager codes of any rate offer runlength-limiting properties at no additional cost.
Abstract: Gallager codes with large block length and low rate (e.g., N ≃ 10,000–40,000, R ≃ 0.25–0.5) have been shown to have record-breaking performance for low signal-to-noise applications. In this paper we study Gallager codes at the other end of the spectrum. We first explore the theoretical properties of binary Gallager codes with very high rates and observe that Gallager codes of any rate offer runlength-limiting properties at no additional cost.
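For concreteness, the high-rate regime can be pictured with Gallager's original regular construction, in which column weight j and row weight k give a design rate of 1 - j/k, so very high rates simply correspond to k much larger than j. The sketch below is a hypothetical illustration of that construction (names and parameters are mine), not the specific codes analysed in the chapter.

```python
import numpy as np

def regular_ldpc_parity_check(n: int, col_wt: int, row_wt: int, seed: int = 0) -> np.ndarray:
    """Gallager-style regular parity-check matrix: every column has col_wt ones and
    every row has row_wt ones, giving a design rate of 1 - col_wt / row_wt."""
    assert n % row_wt == 0, "n must be divisible by the row weight"
    rng = np.random.default_rng(seed)
    # Base block: n / row_wt rows, each containing row_wt consecutive ones.
    base = np.zeros((n // row_wt, n), dtype=np.uint8)
    for i in range(n // row_wt):
        base[i, i * row_wt:(i + 1) * row_wt] = 1
    # Stack col_wt copies of the base block with independently permuted columns.
    return np.vstack([base[:, rng.permutation(n)] for _ in range(col_wt)])

# Hypothetical high-rate example: column weight 3, row weight 30 -> design rate 0.9
H = regular_ldpc_parity_check(n=300, col_wt=3, row_wt=30)
print(H.shape)                                    # (30, 300): 30 checks on 300 bits
print(H.sum(axis=0).max(), H.sum(axis=1).max())   # 3 30: column and row weights
```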

359 citations

01 Jan 1998
TL;DR: A significant improvement over the performance of the binary codes is reported, including a rate 1/4 code with bit error probability < 10^-5 at Eb/N0 = 0.2 dB, based on the empirical results of error-correction using the analogous codes over GF(q) for q > 2 with binary symmetric channels and binary Gaussian channels.
Abstract: Gallager's low-density binary parity check codes have been shown to have near-Shannon limit performance when decoded using a probabilistic decoding algorithm. We report the empirical results of error-correction using the analogous codes over GF(q) for q > 2 with binary symmetric channels and binary Gaussian channels. We find a significant improvement over the performance of the binary codes, including a rate 1/4 code with bit error probability < 10^-5 at Eb/N0 = 0.2 dB.

122 citations

Book ChapterDOI
01 Jan 2001
TL;DR: A pair of Gallager codes with rate R = 1/3 and transmitted blocklength N = 1920 are presented as candidates for the proposed international standard for cellular telephones.
Abstract: We present a pair of Gallager codes with rate R = 1/3 and transmitted blocklength N = 1920 as candidates for the proposed international standard for cellular telephones.

4 citations


Cited by
Book
01 Jan 2005

9,038 citations

Book
06 Oct 2003
TL;DR: A fun and exciting textbook on the mathematics underpinning the most dynamic areas of modern science and engineering.
Abstract: Fun and exciting textbook on the mathematics underpinning the most dynamic areas of modern science and engineering.

8,091 citations

Proceedings Article
01 Jan 2005
TL;DR: This book aims to provide a chronology of key events and individuals involved in the development of microelectronics technology over the past 50 years; some of the individuals involved have been identified and named.
Abstract: Alhussein Abouzeid Rensselaer Polytechnic Institute Raviraj Adve University of Toronto Dharma Agrawal University of Cincinnati Walid Ahmed Tyco M/A-COM Sonia Aissa University of Quebec, INRSEMT Huseyin Arslan University of South Florida Nallanathan Arumugam National University of Singapore Saewoong Bahk Seoul National University Claus Bauer Dolby Laboratories Brahim Bensaou Hong Kong University of Science and Technology Rick Blum Lehigh University Michael Buehrer Virginia Tech Antonio Capone Politecnico di Milano Javier Gómez Castellanos National University of Mexico Claude Castelluccia INRIA Henry Chan The Hong Kong Polytechnic University Ajit Chaturvedi Indian Institute of Technology Kanpur Jyh-Cheng Chen National Tsing Hua University Yong Huat Chew Institute for Infocomm Research Tricia Chigan Michigan Tech Dong-Ho Cho Korea Advanced Institute of Science and Tech. Jinho Choi University of New South Wales Carlos Cordeiro Philips Research USA Laurie Cuthbert Queen Mary University of London Arek Dadej University of South Australia Sajal Das University of Texas at Arlington Franco Davoli DIST University of Genoa Xiaodai Dong, University of Alberta Hassan El-sallabi Helsinki University of Technology Ozgur Ercetin Sabanci University Elza Erkip Polytechnic University Romano Fantacci University of Florence Frank Fitzek Aalborg University Mario Freire University of Beira Interior Vincent Gaudet University of Alberta Jairo Gutierrez University of Auckland Michael Hadjitheodosiou University of Maryland Zhu Han University of Maryland College Park Christian Hartmann Technische Universitat Munchen Hossam Hassanein Queen's University Soong Boon Hee Nanyang Technological University Paul Ho Simon Fraser University Antonio Iera University "Mediterranea" of Reggio Calabria Markku Juntti University of Oulu Stefan Kaiser DoCoMo Euro-Labs Nei Kato Tohoku University Dongkyun Kim Kyungpook National University Ryuji Kohno Yokohama National University Bhaskar Krishnamachari University of Southern California Giridhar Krishnamurthy Indian Institute of Technology Madras Lutz Lampe University of British Columbia Bjorn Landfeldt The University of Sydney Peter Langendoerfer IHP Microelectronics Technologies Eddie Law Ryerson University in Toronto

7,826 citations

Journal ArticleDOI
29 Jun 1997
TL;DR: It is proved that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit, and experimental results for binary-symmetric channels and Gaussian channels demonstrate that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal, 1995) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good", in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.
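The "practical sum-product algorithm" referred to here is what is now usually called belief propagation on the code's Tanner graph. As a rough, self-contained illustration only (binary code, log-likelihood-ratio messages, my own function names; not the authors' implementation), a minimal decoder can be sketched as follows.

```python
import numpy as np

def sum_product_decode(H: np.ndarray, llr: np.ndarray, max_iter: int = 50) -> np.ndarray:
    """Minimal log-domain sum-product (belief propagation) decoder for a binary code
    with parity-check matrix H, given channel LLRs (positive values favour bit 0)."""
    m, n = H.shape
    rows, cols = np.nonzero(H)                   # one entry per edge of the Tanner graph
    v2c = llr[cols].astype(float)                # variable-to-check messages, one per edge
    hard = (llr < 0).astype(np.uint8)
    for _ in range(max_iter):
        # Check-node update (tanh rule), done check by check for clarity.
        c2v = np.zeros_like(v2c)
        for chk in range(m):
            e = np.where(rows == chk)[0]
            t = np.tanh(v2c[e] / 2.0)
            prod = np.prod(t)
            extr = prod / np.where(t != 0.0, t, 1.0)   # leave-one-out product (0 treated as erasure)
            c2v[e] = 2.0 * np.arctanh(np.clip(extr, -0.999999, 0.999999))
        # Variable-node update and tentative hard decision.
        total = llr.astype(float).copy()
        np.add.at(total, cols, c2v)
        hard = (total < 0).astype(np.uint8)
        if not np.any((H.astype(int) @ hard) % 2):     # all parity checks satisfied
            break
        v2c = total[cols] - c2v                        # extrinsic variable-to-check messages
    return hard
```

Practical decoders vectorise the check-node loop and often use the min-sum approximation, but the message-passing structure is the same.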

3,842 citations

Journal ArticleDOI
TL;DR: The results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon.
Abstract: We present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.
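On the binary erasure channel the density-evolution recursion has a closed form, which makes the threshold-determination idea easy to illustrate: bisect on the erasure probability and test whether the recursion drives the erasure fraction to zero. This is a simplified sketch under my own assumptions (regular (dv, dc) ensemble, BEC rather than a general channel, chosen tolerances), not the general algorithm developed in the paper.

```python
def bec_threshold(dv: int, dc: int, tol: float = 1e-4, iters: int = 10_000) -> float:
    """Bisection search for the belief-propagation threshold of a regular (dv, dc)
    LDPC ensemble on the binary erasure channel, using the density-evolution
    recursion x <- eps * (1 - (1 - x)**(dc - 1))**(dv - 1)."""
    def converges(eps: float) -> bool:
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < 1e-12:              # erasure fraction driven to (numerical) zero
                return True
        return False

    lo, hi = 0.0, 1.0                  # decoding always succeeds at eps=0, never at eps=1
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if converges(mid):
            lo = mid                   # success below the threshold: it lies above mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# A (3,6)-regular ensemble (rate 1/2): the threshold comes out around 0.429,
# versus the Shannon limit of 1 - R = 0.5 for this rate.
print(bec_threshold(3, 6))
```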

3,393 citations