Concatenation

About: Concatenation is a research topic. Over the lifetime, 2830 publications have been published within this topic receiving 53484 citations. The topic is also known as: string concatenation.


Papers
Proceedings ArticleDOI
23 May 1993
TL;DR: In this article, a new class of convolutional codes called turbo-codes, whose performance in terms of bit error rate (BER) is close to the Shannon limit, is discussed.
Abstract: A new class of convolutional codes called turbo-codes, whose performance in terms of bit error rate (BER) is close to the Shannon limit, is discussed. The turbo-code encoder is built using a parallel concatenation of two recursive systematic convolutional codes, and the associated decoder, using a feedback decoding rule, is implemented as P pipelined identical elementary decoders.

5,963 citations
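
The parallel concatenation described in this abstract is easy to sketch: two identical recursive systematic convolutional (RSC) encoders see the same data, the second through an interleaver, and the transmitted stream is the systematic bits plus both parity streams. Below is a minimal illustration, assuming a toy memory-2 RSC(1, 5/7) component code and a random interleaver; rsc_parity and turbo_encode are illustrative names, not code from the paper.

```python
import random

def rsc_parity(bits):
    # Parity stream of a memory-2 recursive systematic convolutional
    # encoder with generator (1, 5/7 octal) -- a common toy choice,
    # not necessarily the constituent code used in the paper.
    s1 = s2 = 0
    parity = []
    for u in bits:
        fb = u ^ s1 ^ s2        # feedback polynomial 1 + D + D^2 (7 octal)
        parity.append(fb ^ s2)  # feedforward polynomial 1 + D^2  (5 octal)
        s1, s2 = fb, s1
    return parity

def turbo_encode(bits, interleaver):
    # Rate-1/3 parallel concatenation: systematic bits plus the parity
    # of encoder 1 (natural order) and encoder 2 (interleaved order).
    p1 = rsc_parity(bits)
    p2 = rsc_parity([bits[i] for i in interleaver])
    return bits, p1, p2

data = [random.randint(0, 1) for _ in range(16)]
pi = random.sample(range(len(data)), len(data))   # toy random interleaver
systematic, parity1, parity2 = turbo_encode(data, pi)
```

The interleaver makes the two parity streams nearly independent views of the same data, which is exactly what the iterative decoder exploits.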

Journal ArticleDOI
TL;DR: A new family of convolutional codes, nicknamed turbo-codes, is built from a particular concatenation of two recursive systematic codes linked together by nonuniform interleaving; its correcting performance appears to be close to the theoretical limit predicted by Shannon.
Abstract: This paper presents a new family of convolutional codes, nicknamed turbo-codes, built from a particular concatenation of two recursive systematic codes, linked together by nonuniform interleaving. Decoding calls on iterative processing in which each component decoder takes advantage of the work of the other at the previous step, with the aid of the original concept of extrinsic information. For sufficiently large interleaving sizes, the correcting performance of turbo-codes, investigated by simulation, appears to be close to the theoretical limit predicted by Shannon.

3,003 citations
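
The iterative processing this abstract describes has a characteristic bookkeeping pattern: each component decoder takes the other's extrinsic information as a priori input, and passes back only the new (extrinsic) part of what it computed. The sketch below shows that exchange with a deliberately trivial stand-in for the soft-in soft-out (SISO) component decoder; a real implementation would run BCJR/MAP on the RSC trellis, and all names here are illustrative.

```python
def toy_siso(llr_parity, llr_apriori):
    # Trivial SISO stand-in: pretends each parity LLR directly observes
    # its data bit, so the extrinsic output is just the parity LLR.
    # A real SISO decoder would also use llr_apriori via BCJR/MAP.
    return list(llr_parity)

def turbo_decode(llr_sys, llr_p1, llr_p2, pi, n_iters=4):
    # Schematic extrinsic-information exchange between two component
    # decoders; llr_* are channel log-likelihood ratios, pi the interleaver.
    n = len(llr_sys)
    inv = [0] * n
    for k, j in enumerate(pi):
        inv[j] = k                                 # inverse interleaver
    ext21 = [0.0] * n                              # extrinsic info, 2 -> 1
    for _ in range(n_iters):
        # Decoder 1 works in natural order, with decoder 2's extrinsic
        # information as its a priori input.
        ext12 = toy_siso(llr_p1, ext21)
        # Decoder 2 works in interleaved order on decoder 1's extrinsic.
        ext2 = toy_siso(llr_p2, [ext12[j] for j in pi])
        ext21 = [ext2[inv[j]] for j in range(n)]   # deinterleave
    # Final decision combines the channel value and both extrinsic streams.
    total = [llr_sys[j] + ext12[j] + ext21[j] for j in range(n)]
    return [1 if L < 0 else 0 for L in total]      # positive LLR -> bit 0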

Journal ArticleDOI
29 Jun 1997
TL;DR: In this article, the authors derive upper bounds on the average maximum-likelihood bit error probability of serially concatenated block and convolutional codes with interleavers, together with design guidelines for the outer and inner encoders that maximize the interleaver gain and the asymptotic slope of the error probability curves.
Abstract: A serially concatenated code with interleaver consists of the cascade of an outer encoder, an interleaver permuting the outer codeword bits, and an inner encoder whose input words are the permuted outer codewords. The construction can be generalized to h cascaded encoders separated by h-1 interleavers. We obtain upper bounds to the average maximum-likelihood bit error probability of serially concatenated block and convolutional coding schemes. Then, we derive design guidelines for the outer and inner encoders that maximize the interleaver gain and the asymptotic slope of the error probability curves. Finally, we propose a new, low-complexity iterative decoding algorithm. Throughout the paper, extensive comparisons with parallel concatenated convolutional codes known as "turbo codes" are performed, showing that the new scheme can offer superior performance.

1,361 citations
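
The encoder cascade described above (outer encoder, interleaver permuting the outer codeword bits, inner encoder) is straightforward to sketch. In the illustration below the toy components are a rate-1/3 repetition outer code and a rate-1 accumulator inner code, which happens to make the cascade a repeat-accumulate code; the paper's analysis covers general block and convolutional constituents, and generalizing to h encoders just folds this cascade h-1 times.

```python
import random

def repeat_encode(bits, r=3):
    # Toy outer code: rate-1/3 repetition.
    return [b for b in bits for _ in range(r)]

def accumulate(bits):
    # Toy inner code: rate-1 accumulator (differential encoder).
    out, s = [], 0
    for b in bits:
        s ^= b
        out.append(s)
    return out

def serial_concat_encode(bits, interleaver):
    # Outer encoder -> interleaver -> inner encoder.
    outer = repeat_encode(bits)
    permuted = [outer[i] for i in interleaver]
    return accumulate(permuted)

data = [random.randint(0, 1) for _ in range(8)]
pi = random.sample(range(3 * len(data)), 3 * len(data))
codeword = serial_concat_encode(data, pi)
```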

Proceedings ArticleDOI
01 Jun 2018
TL;DR: Deep Back-Projection Networks (DBPN), as discussed by the authors, exploit iterative up- and down-sampling layers, providing an error feedback mechanism for projection errors at each stage, and construct mutually-connected up- and down-sampling stages, each of which represents different types of image degradation and high-resolution components.
Abstract: The feed-forward architectures of recently proposed deep super-resolution networks learn representations of low-resolution inputs, and the non-linear mapping from those to high-resolution output. However, this approach does not fully address the mutual dependencies of low- and high-resolution images. We propose Deep Back-Projection Networks (DBPN), which exploit iterative up- and down-sampling layers, providing an error feedback mechanism for projection errors at each stage. We construct mutually-connected up- and down-sampling stages, each of which represents different types of image degradation and high-resolution components. We show that extending this idea to allow concatenation of features across up- and down-sampling stages (Dense DBPN) allows us to further improve super-resolution reconstruction, yielding superior results and, in particular, establishing new state-of-the-art results for large scaling factors such as 8× across multiple data sets.

1,269 citations
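
The concatenation used by the Dense DBPN variant is the standard dense-connectivity pattern: each projection stage takes the channel-wise concatenation of every earlier stage's feature maps as its input. A minimal NumPy sketch of that wiring follows, with a random 1x1 projection standing in for DBPN's convolutional up-/down-projection units; shapes and names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x, out_ch=8):
    # Stand-in for a DBPN up-/down-projection unit: a random 1x1
    # "convolution" mapping the concatenated channels back to out_ch.
    w = rng.standard_normal((out_ch, x.shape[0]))
    return np.tensordot(w, x, axes=1)            # (out_ch, H, W)

def dense_stage(prev_feats):
    # Concatenate the feature maps of all earlier stages along the
    # channel axis before projecting -- the Dense DBPN connection.
    x = np.concatenate(prev_feats, axis=0)       # (sum of channels, H, W)
    return project(x)

feats = [rng.standard_normal((8, 16, 16))]       # initial feature map
for _ in range(3):                               # three dense stages
    feats.append(dense_stage(feats))
```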

Journal ArticleDOI
TL;DR: It is shown that Pearl's algorithm can be used to routinely derive previously known iterative, but suboptimal, decoding algorithms for a number of other error-control systems, including Gallager's low-density parity-check codes, serially concatenated codes, and product codes.
Abstract: We describe the close connection between the now celebrated iterative turbo decoding algorithm of Berrou et al. (1993) and an algorithm that has been well known in the artificial intelligence community for a decade, but which is relatively unknown to information theorists: Pearl's (1982) belief propagation algorithm. We see that if Pearl's algorithm is applied to the "belief network" of a parallel concatenation of two or more codes, the turbo decoding algorithm immediately results. Unfortunately, however, this belief diagram has loops, and Pearl only proved that his algorithm works when there are no loops, so an explanation of the experimental performance of turbo decoding is still lacking. However, we also show that Pearl's algorithm can be used to routinely derive previously known iterative, but suboptimal, decoding algorithms for a number of other error-control systems, including Gallager's (1962) low-density parity-check codes, serially concatenated codes, and product codes. Thus, belief propagation provides a very attractive general methodology for devising low-complexity iterative decoding algorithms for hybrid coded systems.

989 citations
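
The message-passing viewpoint in this paper is concrete enough to sketch on the Tanner graph of a parity-check code, one of the systems it lists. Below is a minimal min-sum decoder (the common low-complexity approximation of sum-product belief propagation) for a small (7,4) Hamming parity-check matrix; it illustrates the belief-propagation pattern rather than reproducing the paper's derivation, and all names are illustrative.

```python
import numpy as np

def min_sum_decode(H, llr_ch, n_iters=20):
    # Min-sum message passing on the Tanner graph of parity-check
    # matrix H (m checks x n bits). Positive LLR means bit 0.
    m, n = H.shape
    msg_vc = H * llr_ch                        # variable -> check messages
    for _ in range(n_iters):
        msg_cv = np.zeros_like(msg_vc, dtype=float)
        for i in range(m):
            idx = np.flatnonzero(H[i])
            for j in idx:
                # Combine the *other* incoming messages at check i:
                # product of signs, minimum of magnitudes.
                others = [msg_vc[i, k] for k in idx if k != j]
                sign = np.prod(np.sign(others))
                msg_cv[i, j] = sign * min(abs(v) for v in others)
        total = llr_ch + msg_cv.sum(axis=0)    # posterior beliefs
        msg_vc = H * (total - msg_cv)          # exclude own message
        hard = (total < 0).astype(int)
        if not (H @ hard % 2).any():           # all checks satisfied
            break
    return hard

# (7,4) Hamming parity-check matrix; one unreliable LLR gets corrected.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
llrs = np.array([2.5, -0.8, 3.1, 2.2, 1.9, 2.7, 3.3])
print(min_sum_decode(H, llrs))                 # expect all zeros
```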


Network Information
Related Topics (5)
Deep learning: 79.8K papers, 2.1M citations, 84% related
Optimization problem: 96.4K papers, 2.1M citations, 83% related
Artificial neural network: 207K papers, 4.5M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 81% related
Node (networking): 158.3K papers, 1.7M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    4
2021    125
2020    115
2019    147
2018    230
2017    173