
Showing papers on "Error detection and correction published in 1998"


Journal ArticleDOI
01 May 1998
TL;DR: In this paper, a review of error control and concealment techniques in video communication is presented; the techniques are described in three categories according to the roles that the encoder and decoder play in the underlying approaches.
Abstract: The problem of error control and concealment in video communication is becoming increasingly important because of the growing interest in video delivery over unreliable channels such as wireless networks and the Internet. This paper reviews the techniques that have been developed for error control and concealment. These techniques are described in three categories according to the roles that the encoder and decoder play in the underlying approaches. Forward error concealment includes methods that add redundancy at the source end to enhance error resilience of the coded bit streams. Error concealment by postprocessing refers to operations at the decoder to recover the damaged areas based on characteristics of image and video signals. Last, interactive error concealment covers techniques that are dependent on a dialogue between the source and destination. Both current research activities and practice in international standards are covered.

1,611 citations


Journal ArticleDOI
TL;DR: A significant improvement over the performance of the binary codes is found, including a rate 1/4 code with bit error probability < 10^-5 at Eb/N0 = 0.2 dB.
Abstract: Gallager's (1962) low-density binary parity check codes have been shown to have near-Shannon-limit performance when decoded using a probabilistic decoding algorithm. We report the empirical results of error-correction using the analogous codes over GF(q) for q>2, with binary symmetric channels and binary Gaussian channels. We find a significant improvement over the performance of the binary codes, including a rate 1/4 code with bit error probability < 10^-5 at Eb/N0 = 0.2 dB.
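The paper's codes are decoded probabilistically over GF(q); as a toy illustration of the underlying parity-check decoding idea, here is a hard-decision bit-flipping decoder (Gallager's simplest variant) on the small (7,4) Hamming code, used as a stand-in for a real low-density matrix:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code -- a toy stand-in for a
# low-density parity-check matrix (real LDPC codes are far larger and sparser).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(r, H, max_iters=10):
    """Hard-decision decoding: repeatedly flip the bit involved in the
    most unsatisfied parity checks until the syndrome is zero."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H @ r % 2
        if not syndrome.any():
            return r                      # all checks satisfied
        # count, per bit, how many unsatisfied checks it participates in
        counts = syndrome @ H
        r[np.argmax(counts)] ^= 1         # flip the most suspicious bit
    return r

codeword = np.zeros(7, dtype=int)         # the all-zero codeword is valid
received = codeword.copy()
received[6] ^= 1                          # single bit error
decoded = bit_flip_decode(received, H)
```

On this code the rule corrects any single-bit error; the probabilistic (sum-product) decoding the paper uses replaces the flip counts with soft message passing.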

1,284 citations


Journal ArticleDOI
16 Aug 1998
TL;DR: A previously unrecognized connection between Golay complementary sequences and second-order Reed-Muller codes over alphabets Z_{2^h} is found to give an efficient decoding algorithm involving multiple fast Hadamard transforms.
Abstract: We present a range of coding schemes for OFDM transmission using binary, quaternary, octary, and higher order modulation that give high code rates for moderate numbers of carriers. These schemes have tightly bounded peak-to-mean envelope power ratio (PMEPR) and simultaneously have good error correction capability. The key theoretical result is a previously unrecognized connection between Golay complementary sequences and second-order Reed-Muller codes over alphabets Z_{2^h}. We obtain additional flexibility in trading off code rate, PMEPR, and error correction capability by partitioning the second-order Reed-Muller code into cosets such that codewords with large values of PMEPR are isolated. For all the proposed schemes we show that encoding is straightforward and give an efficient decoding algorithm involving multiple fast Hadamard transforms. Since the coding schemes are all based on the same formal generator matrix we can deal adaptively with varying channel constraints and evolving system requirements.
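The two defining properties can be checked numerically on a small binary example. The sketch below builds a length-4 Golay complementary pair by the standard doubling construction, verifies that the aperiodic autocorrelations cancel at every nonzero shift, and confirms the classic consequence that the OFDM envelope of either sequence has PMEPR at most 2:

```python
import numpy as np

# Golay complementary pair of length 4 (doubling construction
# a' = a|b, b' = a|-b, starting from a = [1], b = [1]).
a = np.array([1, 1, 1, -1])
b = np.array([1, 1, -1, 1])

def acorr(x, k):
    """Aperiodic autocorrelation of x at shift k."""
    return int(np.sum(x[:len(x) - k] * x[k:]))

# Complementary property: the autocorrelations cancel at every nonzero shift.
sums = [acorr(a, k) + acorr(b, k) for k in range(1, len(a))]

def pmepr(seq, oversample=64):
    """Peak-to-mean envelope power ratio of the OFDM signal whose
    subcarriers are modulated by seq (mean power equals len(seq))."""
    n = len(seq)
    t = np.arange(n * oversample) / (n * oversample)
    s = sum(c * np.exp(2j * np.pi * k * t) for k, c in enumerate(seq))
    return np.max(np.abs(s) ** 2) / n
```

For an arbitrary length-4 ±1 sequence the PMEPR can reach 4 (all carriers adding in phase); membership in a Golay pair caps it at 2, which is the envelope bound the schemes above exploit.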

1,030 citations


Journal ArticleDOI
TL;DR: In this article, the first experimental implementations of quantum error correction are reported, confirming the expected state stabilization: the three-bit code for phase errors was implemented using liquid-state NMR, with a precise analysis of the decay behavior performed in alanine and a full implementation of the error correction procedure realized in trichloroethylene.
Abstract: Quantum error correction is required to compensate for the fragility of the state of a quantum computer. We report the first experimental implementations of quantum error correction and confirm the expected state stabilization. A precise analysis of the decay behavior is performed in alanine and a full implementation of the error correction procedure is realized in trichloroethylene. In NMR computing, however, a net improvement in the signal to noise would require very high polarization. The experiment implemented the three-bit code for phase errors using liquid state NMR.

493 citations


Proceedings ArticleDOI
29 Mar 1998
TL;DR: This work explores dynamic sizing of the MAC layer frame, the atomic unit that is sent through the radio channel, and describes the implementation of the adaptive MAC frame length control mechanism in combination with adaptive hybrid FEC/ARQ error control in a reconfigurable wireless link layer packet processing architecture for a low-power adaptive wireless multimedia node.
Abstract: Wireless network links are characterized by rapidly time varying channel conditions and battery energy limitations at the wireless mobile user nodes. Therefore static link control techniques that make sense in comparatively well behaved wired links do not necessarily apply to wireless links. New adaptive link layer control techniques are needed to provide robust and energy efficient operation even in the presence of orders of magnitude variations in bit error rates and other radio channel conditions. For example, research has advocated adaptive link layer techniques such as adaptive error control, channel state dependent protocols, and variable spreading gain. We explore dynamic sizing of the MAC layer frame, the atomic unit that is sent through the radio channel. A trade-off exists between the desire to reduce the header and physical layer overhead by making frames large, and the need to reduce frame error rates in the noisy channel by using small frame lengths. Clearly the optimum depends on the channel conditions. Through analysis supported by physical measurements with Lucent's WaveLAN radio, we show that adaptive sizing of the MAC layer frame in the presence of varying channel noise indeed has a large impact on the user-seen throughput (goodput). In addition, we show how adaptive frame length control can be exploited to improve the energy efficiency for a desired level of goodput, and to extend the usable radio range with graceful throughput degradation. We describe the implementation of the adaptive MAC frame length control mechanism in combination with adaptive hybrid FEC/ARQ error control in a reconfigurable wireless link layer packet processing architecture for a low-power adaptive wireless multimedia node.
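The frame-sizing trade-off described above has a simple analytic form under the usual independent-bit-error assumption: goodput is ((L-H)/L)(1-p)^L for frame length L, header overhead H, and bit error rate p, and setting the derivative of its logarithm to zero gives a closed-form optimum. A minimal sketch (the header size of 400 bits is an illustrative assumption, not a figure from the paper):

```python
import math

def goodput(L, H, ber):
    """Useful payload fraction delivered per channel bit: frame length L
    bits, header/PHY overhead H bits, independent bit errors."""
    return (L - H) / L * (1.0 - ber) ** L

def optimal_frame_length(H, ber):
    """Closed-form optimum: d(log goodput)/dL = 0 yields
    L* = H/2 + sqrt(H^2/4 + H / (-ln(1 - ber)))."""
    beta = -math.log(1.0 - ber)
    return H / 2 + math.sqrt(H * H / 4 + H / beta)

H = 400                            # assumed header + PHY overhead, in bits
optima = {ber: optimal_frame_length(H, ber) for ber in (1e-5, 1e-4, 1e-3)}
```

As the abstract argues, the optimum shrinks rapidly as the channel degrades, which is why a static frame size leaves goodput on the table under varying channel noise.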

392 citations


Journal ArticleDOI
TL;DR: The sensitivity of decoder performance to misestimation of the SNR is studied, and a simple online scheme is proposed that estimates the unknown SNR from each code block prior to decoding; the scheme is shown to be accurate enough not to appreciably degrade performance.
Abstract: Iterative decoding of turbo codes, as well as other concatenated coding schemes of similar nature, requires knowledge of the signal-to-noise ratio (SNR) of the channel so that proper blending of the a posteriori information of the separate decoders is achieved. We study the sensitivity of decoder performance to misestimation of the SNR, and propose a simple online scheme that estimates the unknown SNR from each code block, prior to decoding. We show that this scheme is sufficiently adequate in accuracy to not appreciably degrade the performance.
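The paper proposes its own block-based estimator; as an illustrative stand-in for the idea of estimating SNR blindly from a received code block, here is the standard moment-based (M2/M4) estimator for BPSK in AWGN, which needs no knowledge of the transmitted bits:

```python
import numpy as np

def estimate_snr(r):
    """Blind moment-based SNR estimate for BPSK (+/-a) in AWGN.
    Solves m2 = a^2 + s2 and m4 = a^4 + 6 a^2 s2 + 3 s2^2
    for the signal power a^2 and noise variance s2."""
    m2 = np.mean(r ** 2)
    m4 = np.mean(r ** 4)
    a2 = np.sqrt(max((3 * m2 ** 2 - m4) / 2, 0.0))   # signal power
    s2 = max(m2 - a2, 1e-12)                          # noise variance
    return a2 / s2

rng = np.random.default_rng(0)
n = 200_000
bits = rng.choice([-1.0, 1.0], size=n)
sigma = 1 / np.sqrt(2)               # true SNR = 1 / 0.5 = 2 (3 dB)
r = bits + sigma * rng.normal(size=n)
snr_hat = estimate_snr(r)
```

In a turbo receiver such an estimate would be computed per code block and fed to the constituent decoders to scale the channel log-likelihood ratios.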

339 citations


Journal ArticleDOI
TL;DR: The fundamental limits of "systematic" communication are analyzed and it is shown that if a nonnegligible bit-error rate is tolerated, systematic encoding is strictly suboptimal.
Abstract: The fundamental limits of "systematic" communication are analyzed. In systematic transmission, the decoder has access to a noisy version of the uncoded raw data (analog or digital). The coded version of the data is used to reduce the average reproduced distortion D below that provided by the uncoded systematic link and/or increase the rate of information transmission. Unlike the case of arbitrarily reliable error correction (D → 0) for symmetric sources/channels, where systematic codes are known to do as well as nonsystematic codes, we demonstrate that the systematic structure may degrade the performance for nonvanishing D. We characterize the achievable average distortion and we find necessary and sufficient conditions under which systematic communication does not incur loss of optimality. The Wyner-Ziv (1976) rate distortion theorem plays a fundamental role in our setting. The general result is applied to several scenarios. For a Gaussian bandlimited source and a Gaussian channel, the invariance of the bandwidth-signal-to-noise ratio (SNR, in decibels) product is established, and the optimality of systematic transmission is demonstrated. Bernoulli sources transmitted over binary-symmetric channels and over certain Gaussian channels are also analyzed. It is shown that if a nonnegligible bit-error rate is tolerated, systematic encoding is strictly suboptimal.

288 citations


Journal ArticleDOI
TL;DR: This paper proposes a low complexity method for decoding the resulting inner code (due to the spreading sequence), which allows iterative (turbo) decoding of the serially-concatenated code pair.
Abstract: A code-division multiple-access system with channel coding may be viewed as a serially-concatenated coded system. In this paper we propose a low complexity method for decoding the resulting inner code (due to the spreading sequence), which allows iterative (turbo) decoding of the serially-concatenated code pair. The per-bit complexity of the proposed decoder increases only linearly with the number of users. Performance within a fraction of a dB of the single user bound for heavily loaded asynchronous CDMA is shown both by simulation and analytically.

275 citations


Journal ArticleDOI
TL;DR: This work uses an error correction coding algorithm for continuous quantum variables to construct a highly efficient 5-wave-packet code which can correct arbitrary single wave-packet errors.
Abstract: We propose an error correction coding algorithm for continuous quantum variables. We use this algorithm to construct a highly efficient 5-wave-packet code which can correct arbitrary single wave-packet errors. We show that this class of continuous variable codes is robust against imprecision in the error syndromes. A potential implementation of the scheme is presented.

188 citations


Patent
01 Sep 1998
TL;DR: In this paper, a method and apparatus for performing error correction on data read from a multistate memory is described, in which each cell in a memory device is read to generate a read voltage determined by the state of the cell, and one of an ordered succession of encoded signals is selected based on the read voltage.
Abstract: A method and apparatus for performing error correction on data read from a multistate memory are disclosed. According to one embodiment of the present invention a cell in a memory device is read to generate a read voltage determined by a state of the cell and one of an ordered succession of encoded signals is selected based on the read voltage, where each encoded signal corresponds to a field of bits and adjacent encoded signals correspond to respective fields of bits that are different only in a single bit. The selected encoded signal is decoded to generate a field of uncorrected bits and a field of syndrome bits indicating any error in the uncorrected bits. The syndrome bits are decoded to generate correction bits to correct the error, and the uncorrected bits are combined with the correction bits to generate a field of corrected bits.
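The requirement that "adjacent encoded signals correspond to respective fields of bits that are different only in a single bit" is exactly the Gray-code property: a read voltage that drifts to a neighboring level then produces only a single-bit error for the downstream ECC to fix. A minimal sketch (the nominal voltage levels are illustrative assumptions):

```python
def gray(i):
    """Binary-reflected Gray code of integer i."""
    return i ^ (i >> 1)

def read_cell(voltage, levels):
    """Select the encoded signal (Gray codeword) whose nominal voltage
    level is nearest the read voltage."""
    idx = min(range(len(levels)), key=lambda k: abs(levels[k] - voltage))
    return gray(idx)

levels = [0.0, 1.0, 2.0, 3.0]        # assumed nominal voltages of a 4-level cell
word = read_cell(1.4, levels)        # a drifted read still snaps to one level
```

A misread by one level thus flips exactly one bit of the selected field, keeping the error within the single-bit correcting power of a simple syndrome decoder.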

187 citations


Patent
Stefan Ott
22 May 1998
TL;DR: In this article, the authors proposed a dynamic error correction system for a bi-directional digital data transmission system, where a receiver receives the signal and decodes the information encoded thereon.
Abstract: A dynamic error correction system for a bi-directional digital data transmission system. The transmission system of the present invention includes a transmitter adapted to encode information into a signal. A receiver receives the signal and decodes the information encoded thereon. The signal is transmitted from the transmitter to the receiver via a communications channel. A signal quality/error rate detector is coupled to the receiver and is adapted to detect a signal quality and/or an error rate in the information transmitted from the transmitter. The receiver is adapted to implement at least a first and second error correction process, depending upon the detected signal quality/error rate. The first error correction process is more robust and more capable than the second error correction process. The receiver coordinates the implemented error correction process with the transmitter via a feedback channel. The receiver dynamically selects the first or second error correction process for implementation in response to the detected signal quality/error rate and coordinates the selection with the transmitter such that error correction employed by the receiver and transmitter is tailored to the condition of the communications channel.

Patent
13 Oct 1998
TL;DR: In this article, a method for applying forward error correction in a transmission network includes the steps of choosing one of a plurality of possible error correction codes, using an appropriate field to encode a complete forward-error-correcting (FEC) code algorithm in each FEC packet to be transmitted.
Abstract: A method for applying forward error correction in a transmission network includes the steps of choosing one of a plurality of possible error correction codes and using an appropriate field to encode a complete forward-error-correcting (FEC) code algorithm in each FEC packet to be transmitted. The packet stream, consisting of media packets and FEC packets, can be sent to both FEC-capable and FEC-incapable receivers. Decoding methods are independent of the forward-error-correcting code transmitted. The sender can adapt the forward-error-correction code algorithm and the degree of error correction provided on a one-time basis or even more dynamically. Decoding and recovery at the receiver require no prior notification from the sender. Applying the FEC code algorithm to decode includes interrogating the bits in an offset bit mask in each FEC packet to yield links with media packets, and applying other fields of the FEC header to obtain instructions to recover lost data in one of the media packets. Based thereon, reiterative decoding of media packets ensures that lost data recoverable with combinations of media packets and FEC packets are recovered.
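The simplest instance of the scheme above is a single parity FEC packet whose bit mask covers a group of media packets: the parity packet is the XOR of the group, and any one lost media packet is recovered by XORing everything that did arrive (a sketch of the recovery step only, not the patent's full bit-mask header format):

```python
def xor_packets(packets):
    """Bytewise XOR of equal-length packets."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            out[i] ^= byte
    return bytes(out)

media = [b"pkt-one!", b"pkt-two!", b"pkt-tri!"]   # media packets (equal length)
fec = xor_packets(media)                          # one parity FEC packet

# Receiver side: media[1] is lost; XOR of all surviving packets recovers it.
recovered = xor_packets([media[0], media[2], fec])
```

An FEC-incapable receiver simply discards `fec` and plays what arrived, which is why the packet stream can be sent to both receiver types.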

Journal ArticleDOI
TL;DR: In this paper, a method for topology error identification based on the use of normalized Lagrange multipliers is proposed, which models circuit breakers as network switching branches whose statuses are treated as operational constraints in the state estimation problem.
Abstract: This paper introduces a method for topology error identification based on the use of normalized Lagrange multipliers. The proposed methodology models circuit breakers as network switching branches whose statuses are treated as operational constraints in the state estimation problem. The corresponding Lagrange multipliers are then normalized and used as a tool for topology error identification, in the same fashion as measurement normalized residuals are conventionally employed for analog bad data processing. Results of tests performed with the proposed algorithm for different types of topology errors are reported.

Journal ArticleDOI
02 Sep 1998
TL;DR: The performance of these codes for spectrum spreading in a CDMA system is evaluated and shown to outperform that of orthogonal and super-orthogonal codes as well as conventionally coded and spread systems.
Abstract: In code division multiple-access (CDMA) systems, maximum total throughput assuming a matched filter receiver can be obtained by spreading with low-rate error control codes. Previously, orthogonal, bi-orthogonal and super-orthogonal codes have been proposed for this purpose. We present in this paper a family of rate-compatible low-rate convolutional codes with maximum free distance. The performance of these codes for spectrum spreading in a CDMA system is evaluated and shown to outperform that of orthogonal and super-orthogonal codes as well as conventionally coded and spread systems. We also show that the proposed low rate codes will give simple encoder and decoder implementations. With these codes, any rate 1/n, n ≤ 512, can be obtained for constraint lengths up to 11, resulting in a more flexible and powerful scheme than those previously proposed.

Patent
15 Dec 1998
TL;DR: In this article, the authors propose a method of data transfer between a transmitter and a receiver over a communications link achieving maximum throughput by dynamically adapting a coding rate and specifically an error correction encoder, as a function of a measured reverse channel signal parameter.
Abstract: A method of data transfer between a transmitter and a receiver over a communications link achieves maximum throughput by dynamically adapting a coding rate, and specifically an error correction encoder, as a function of a measured reverse channel signal parameter. The method comprises the steps of transmitting a signal from the transmitter to the receiver, the receiver receiving and measuring the signal to noise ratio of the transmitted signal. The receiver determines an appropriate code rate and encoding technique as a function of the measured signal to noise ratio and transmits an encoding identifier of the determined encoder to the transmitter. The transmitter encodes its data according to the encoding identifier and transmits the encoded message to the receiver. The receiver receives the encoded message and decodes the message according to the determined code rate and encoding technique.

Patent
29 Dec 1998
TL;DR: In this article, a steganographic method is disclosed to embed an invisible watermark into an image, which can be used for copyright protection, content authentication, or content annotation.
Abstract: A steganographic method is disclosed to embed an invisible watermark into an image. It can be used for copyright protection, content authentication or content annotation. The technique is mainly based on K-L transform. Firstly a block and cluster step (106) and cluster selection step (108) are performed to enhance the optimization of K-L transform (110) for a given image. Then a watermark is embedded (114) into the selected eigen-clusters. ECC (Error Correction Code) can be employed to reduce the embedded code error rate. The proposed method is characterized by robustness despite the degradation or modification on the watermarked content. Furthermore, the method can be extended to video, audio or other multimedia especially for multimedia databases in which the stored multimedia are categorized by their contents or classes.

Proceedings ArticleDOI
01 Oct 1998
TL;DR: Through the results of extensive Internet experiments, the paper shows that layered coding can be very effective when combined with the retransmission-based error control technique for low-bit rate transmission over best effort networks where no network-level mechanism exists for protecting high priority data from packet loss.
Abstract: A new retransmission-based error control technique is presented that does not incur any additional latency in frame playout times, and hence is suitable for interactive applications. It takes advantage of the motion prediction loop employed in most motion compensation-based codecs. By correcting errors in a reference frame caused by earlier packet loss, it prevents error propagation. The technique rearranges the temporal dependency of frames so that a displayed frame is referenced for the decoding of its succeeding dependent frames much later than its display time. Thus, the delay in repairing lost packets can be effectively masked out. The developed technique is combined with layered video coding to maintain consistently good video quality even under heavy packet loss. Through the results of extensive Internet experiments, the paper shows that layered coding can be very effective when combined with the retransmission-based error control technique for low-bit rate transmission over best effort networks where no network-level mechanism exists for protecting high priority data from packet loss.

Proceedings ArticleDOI
29 Mar 1998
TL;DR: This work builds on previous landmark works with a systematic study of FEC for packet audio that characterizes the aggregate performance across all audio sources in the network and finds that optimal signal quality is achieved when sources react to network congestion not by blindly adding FEC, but by adding FEC in a controlled fashion that simultaneously constrains the source-coding rate.
Abstract: Real-time audio over a best-effort network, such as the Internet, frequently suffers from packet loss. To mitigate the impact of such packet loss, several research efforts and implementation studies advocate the use of forward error correction (FEC) coding. Although these prior works have pioneered promising and novel applications of FEC to Internet audio, they do not definitively demonstrate the advantages of FEC because they do not evaluate aggregate performance that results from multiplexing many like flows. We build on previous landmark works with a systematic study of FEC for packet audio that characterizes the aggregate performance across all audio sources in the network. We refine the novel but ad hoc coding techniques proposed by Hardman, Sasse, Handley and Watson (see Proc. INET, 1995) into a formal framework that we call "signal processing-based FEC" (SFEC) and use our framework to more rigorously evaluate the relative merits of this approach. Through extensive simulation, we evaluate the "scalability" of SFEC for packet audio, i.e., the ability of a coding algorithm to improve aggregate performance when used by all sources in the network, and find that optimal signal quality is achieved when sources react to network congestion not by blindly adding FEC, but rather by adding FEC in a controlled fashion that simultaneously constrains the source-coding rate. As a result, packet loss is mitigated without introducing more congestion, thus admitting a more scalable and effective approach than successively adding redundancy to a constant bit-rate source. While this result may seem intuitive, it has not been previously suggested in the context of Internet audio, and until now, has not been systematically studied.

Journal ArticleDOI
01 Jun 1998
TL;DR: A new hierarchical three-dimensional graphic-compression scheme that progressively compresses an arbitrary polygonal mesh into a single bitstream, which can be widely used in robust error control, progressive transmission and display, level-of-detail control, etc.
Abstract: Based on state-of-the-art graphic-simplification techniques and progressive image-coding schemes, we propose a new hierarchical three-dimensional graphic-compression scheme in this research. This scheme progressively compresses an arbitrary polygonal mesh into a single bitstream. Along the encoding process, every output bit contributes to the reduction of coding distortion, and the contribution of bits decreases according to their order of position in the bitstream. At the receiver end, the decoder can stop at any point while giving a reconstruction of the original model with the best rate-distortion tradeoff. A series of models of continuous varying resolution can thus be constructed from the single bitstream. This property, which is referred to as the embedding property since the coding of a coarser model is embedded in the coding of a finer model, can be widely used in robust error control, progressive transmission and display, level-of-detail control, etc. It is demonstrated by experiments that an acceptable quality level can be achieved at a compression ratio of 20 to 1 for several test graphic models.

Proceedings ArticleDOI
13 Oct 1998
TL;DR: It is argued and demonstrated that protocol-independent link-level local error control can achieve high communication efficiency even in a highly variable error environment, that adaptation is important to achieve this efficiency, and that inter-layer coexistence is achievable.
Abstract: Wireless links can exhibit high error rates due to attenuation, fading, or interfering active radiation sources. To make matters worse, error rates can be highly variable due to changes in the wireless environment. Researchers and developers have explored a wide range of solutions to optimize communication in this difficult error environment, including traditional end-to-end solutions, link-layer solutions, and solutions involving layer four processing inside the network. A significant challenge is ensuring that systems with multiple layers of error control avoid compromising performance by duplication of effort. We argue and demonstrate that protocol-independent link-level local error control can achieve high communication efficiency even in a highly variable error environment, that adaptation is important to achieve this efficiency, and that inter-layer coexistence is achievable. The logical link control layer of our WaveLAN-based experimental LAN includes three error control mechanisms: local retransmission, adaptive packet shrinking, and adaptive error coding. Measurements generated on a variety of network topologies and trace-based error environments demonstrate the TCP performance improvements and good coexistence with TCP's end-to-end retransmission strategy.

Proceedings ArticleDOI
07 Jun 1998
TL;DR: This work discusses the applicability of high rate turbo codes for magnetic recording, citing in particular the attractiveness of interleaver gain as opposed to coding gain, and examines the performance of rate 4/5, 8/9, and 16/17 turbo codes on a PR4-equalized magnetic recording channel at a user density of S_u = 2.0.
Abstract: We discuss the applicability of high rate turbo codes for magnetic recording, citing in particular the attractiveness of interleaver gain as opposed to coding gain. We then examine the performance of rate 4/5, 8/9, and 16/17 turbo codes on a PR4-equalized magnetic recording channel at a user density of S_u = 2.0. Simulation results show that a gain of 7.1 dB relative to the uncoded situation is attainable at an error rate of 10^-5 for the rate 4/5 and 8/9 codes, whereas the rate 16/17 code achieves a gain of 6.5 dB.

Journal ArticleDOI
26 Apr 1998
TL;DR: An efficient scheme for concurrent error detection in sequential circuits, with no constraint on the state encoding, is presented; its cost is significantly lower than that of methods based on other codes.
Abstract: This paper presents a procedure for synthesizing multilevel circuits with concurrent error detection based on Bose-Lin codes (1985). Bose-Lin codes are an efficient solution for providing concurrent error detection as they are separable codes and have a fixed number of check bits, independent of the number of information bits. Furthermore, Bose-Lin code checkers have a simple structure as they are based on modulo operations. Procedures are described for synthesizing circuits in a way that their structure ensures that all single-point faults can only cause errors that are detected by a Bose-Lin code. This paper also presents an efficient scheme for concurrent error detection in sequential circuits. Both the state bits and the output bits are encoded with a Bose-Lin code and their checking is combined such that one checker suffices. Results indicate low area overhead. The cost of concurrent error detection is reduced significantly compared to other methods.
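A toy check in the spirit of the Bose-Lin construction (simplifying assumption: the check symbol is just the information weight modulo 4; the actual Bose-Lin codes are more refined, but share the fixed check-bit count and the modulo structure). Any unidirectional error pattern of 1 to 3 bits, i.e., all flips 1-to-0 or all 0-to-1, shifts the weight by 1 to 3 in one direction and therefore always changes the check:

```python
def weight_check(info_bits, r=2):
    """Check symbol: count of 1s in the information bits, modulo 2^r.
    (Simplified stand-in for a Bose-Lin check; r check bits, fixed,
    independent of the number of information bits.)"""
    return sum(info_bits) % (1 << r)

def detects(received_bits, check, r=2):
    """A unidirectional error of 1..2^r - 1 bits shifts the weight by
    1..2^r - 1 in one direction, so it always changes the weight mod 2^r."""
    return weight_check(received_bits, r) != check

info = [1, 0, 1, 1, 0, 1, 0, 0]
c = weight_check(info)                 # 4 ones -> check symbol 0
corrupted = info.copy()
corrupted[0] = 0                       # one 1->0 unidirectional error
```

The separability and fixed check-bit count illustrated here are the properties the synthesis procedure above relies on: the checker only ever sees the information bits and a small modulo counter.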

Patent
15 May 1998
TL;DR: In this paper, a retransmission-based error control technique is proposed for interactive video conferencing, which does not incur any additional latency in frame playout times and is suitable for interactive videos.
Abstract: A new retransmission-based error control technique that does not incur any additional latency in frame playout times and is suitable for interactive video applications. This retransmission technique combined with layered video coding yields good error resilience for interactive video conferencing. The technique exploits the temporal dependency of inter-coded frames and can be easily incorporated into motion-compensation based coding standards such as MPEG and H.261, achieving very good compression efficiency.

Journal ArticleDOI
TL;DR: In this paper, the authors present the research work carried out by their team in this area, which has been applied to more than 10 different kinds of machines, including turning centers and machining centers; small, medium, and large machines; and new products and retrofitted machines.

Journal ArticleDOI
TL;DR: A second-order autoregressive model for error correction is applied to measurements of synchronized tapping by an expert and a nonexpert subject for a wide range of patterns, ranging from simple off-beat tapping to a complex polyrhythm.

Patent
Ephraim Zehavi
29 May 1998
TL;DR: In this paper, a concatenated code consisting of Reed-Solomon coding, CRC block coding, and convolutional coding is used to provide error free file transfer over the air.
Abstract: In a communication system which conforms to the IS-99 standard, a concatenated code is used to provide for error free file transfer over the air. The concatenated code comprises Reed-Solomon coding, CRC block coding, and convolutional coding. The file is partitioned into data frames and Reed-Solomon encoding is performed on the data frames. CRC block encoding is then performed on the Reed-Solomon encoded data. The CRC encoded data is convolutionally encoded. The CRC block encoding and convolutional encoding are performed in accordance with the IS-99 standard. The additional Reed-Solomon encoding step provides improved error correction capability while maintaining compatibility with the IS-99 standard. At the receiver, Reed-Solomon decoding is performed if the number of erasures in a code word is less than or equal to (n−k) or the symbol errors in a code word is less than or equal to (n−k)/2. Otherwise, a request for retransmission is sent.
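The receiver-side decision rule quoted above, decode when the Reed-Solomon redundancy can plausibly handle the corruption and otherwise fall back to retransmission, can be sketched directly (the RS(255, 239) parameters are an illustrative assumption, not taken from the patent):

```python
def rs_decision(n, k, erasures, symbol_errors):
    """Per the rule in the abstract: perform Reed-Solomon decoding if
    erasures <= (n - k) or symbol errors <= (n - k) / 2; otherwise
    request a retransmission (ARQ fallback)."""
    if erasures <= (n - k) or symbol_errors <= (n - k) // 2:
        return "decode"
    return "retransmit"

# Hypothetical RS(255, 239): n - k = 16 redundancy symbols, so up to
# 16 erasures or 8 symbol errors are handled without retransmission.
```

This is what makes the concatenation a hybrid scheme: forward correction absorbs light corruption, and ARQ catches the blocks beyond the code's guaranteed correction radius.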

Journal ArticleDOI
TL;DR: Based on the simulation results obtained in this study, the proposed detection and concealment approach to transmission errors in H.261 images can indeed recover high-quality H.261 images from their corresponding corrupted H.261 images, without increasing the transmission bit rate.
Abstract: The detection and concealment approach to transmission errors in H.261 images is proposed. For entropy-coded H.261 images, a transmission error in a codeword will not only affect the underlying codeword, but also may affect subsequent codewords, resulting in a great degradation of the received images. Here a transmission error may be a single-bit error or a burst error containing N successive error bits. The objective of the proposed approach is to recover high-quality H.261 images from the corresponding corrupted H.261 images, without increasing the transmission bit rate. In the proposed approach, using the constraints imposed on compressed image data, all the groups of blocks (GOBs) within an H.261 picture can be correctly located. After a GOB is located, transmission errors within the GOB are detected by two successive procedures: (1) whether the GOB is corrupted or not is determined by checking a set of error-checking conditions during decoding, and (2) the precise location (block-based) of the first transmission error (i.e., the first corrupted block) within the GOB is located by a block-based backtracking procedure. For a corrupted block, a set of concealed block candidates, SC, is generated, and a proposed fitness function for error concealment is used to select the "best" concealed block candidate among SC as the concealed block of the corrupted block. Based on the simulation results obtained in this study, the proposed approach can indeed recover high-quality H.261 images from their corresponding corrupted H.261 images.

Proceedings ArticleDOI
30 Mar 1998
TL;DR: A technique which utilizes the residual redundancy at the output of the source coder to provide error protection for entropy coded systems at a reduced rate is proposed.
Abstract: When using entropy coding over a noisy channel it is customary to protect the highly vulnerable bitstream with an error correcting code. In this paper we propose a technique which utilizes the residual redundancy at the output of the source coder to provide error protection for entropy coded systems. The proposed approach provides 4-10 dB improvement over the standard approaches at a reduced rate.

Journal ArticleDOI
TL;DR: The detection and correction approach to transmission errors in JPEG images using the sequential discrete cosine transform (DCT)-based mode of operation is proposed and can recover high-quality JPEG images from the corresponding corrupted JPEG images at bit error rates up to approximately 0.4%.
Abstract: The detection and correction approach to transmission errors in JPEG images using the sequential discrete cosine transform (DCT)-based mode of operation is proposed. The objective is to eliminate transmission errors in JPEG images. Here a transmission error may be either a single-bit error or a burst error containing N successive error bits. For an entropy-coded JPEG image, a single transmission error in a codeword will not only affect the underlying codeword, but may also affect subsequent codewords. Consequently, a single error in an entropy-coded system may result in a significant degradation. To cope with the synchronization problem, in the proposed approach the restart capability of JPEG images is enabled, i.e., the eight unique restart markers (synchronization codewords) are periodically inserted into the JPEG compressed image bitstream. Transmission errors in a JPEG image are sequentially detected both when the JPEG image is under decoding and after the JPEG image has been decoded. When a transmission error or equivalently a corrupted restart interval is detected, the proposed error correction approach simply performs a sequence of bit inversions and redecoding operations on the corrupted restart interval and selects the "best" feasible redecoding solution by using a proposed cost function for error correction. The proposed approach can recover high-quality JPEG images from the corresponding corrupted JPEG images at bit error rates (BERs) up to approximately 0.4%. This shows the feasibility of the proposed approach.
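The core of the correction step, try bit inversions on the corrupted interval, redecode, and keep the best feasible candidate, can be sketched in miniature. Here a CRC-32 match stands in for the paper's decoder constraints and cost function (an assumption for illustration; JPEG restart intervals carry no CRC):

```python
import zlib

def repair_single_bit(data, expected_crc):
    """Invert each bit of the corrupted segment in turn and keep the
    first candidate that passes the validity check -- a miniature
    version of the inversion-and-redecoding search."""
    for i in range(len(data) * 8):
        candidate = bytearray(data)
        candidate[i // 8] ^= 1 << (i % 8)
        if zlib.crc32(bytes(candidate)) == expected_crc:
            return bytes(candidate)
    return None          # no single-bit repair found

original = b"restart interval payload"
crc = zlib.crc32(original)
corrupted = bytearray(original)
corrupted[5] ^= 0x10                    # single-bit transmission error
fixed = repair_single_bit(bytes(corrupted), crc)
```

The restart markers in the actual scheme confine the search to one interval at a time, which is what keeps this exhaustive flip-and-redecode strategy affordable.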

Journal ArticleDOI
TL;DR: A ready-to-implement ARQ scheme with packet combining is described; an analytical description of the scheme in a random error channel shows excellent agreement with simulation results, and an upper bound for type-II schemes is defined.
Abstract: In an automatic repeat request (ARQ) scheme, a packet is retransmitted if it gets corrupted due to transmission errors caused by the channel. Here we describe a ready-to-implement ARQ scheme with packet combining. An analytical description of the scheme in a random error channel shows excellent agreement with simulation results. An upper bound for type-II schemes is defined. For smaller packet sizes, the throughput of the proposed scheme remains close to the upper bound up to very high bit error rates.
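One well-known form of packet combining (a sketch of the general idea, not necessarily the exact scheme of this paper) XORs two erroneous copies of the same packet: positions where the copies differ are the candidate error locations, and if the two transmissions were hit in different places, searching flips of those bits recovers the packet without a third transmission. CRC-32 is used here as the integrity check:

```python
import zlib
from itertools import combinations

def combine_and_correct(copy1, copy2, expected_crc):
    """XOR two corrupted copies of a packet; bits where they differ are
    candidate error positions (assumes the two transmissions were hit
    in different places). Search flips of those bits in copy1."""
    diff = [i for i in range(len(copy1) * 8)
            if (copy1[i // 8] ^ copy2[i // 8]) >> (i % 8) & 1]
    for k in range(len(diff) + 1):
        for positions in combinations(diff, k):
            cand = bytearray(copy1)
            for p in positions:
                cand[p // 8] ^= 1 << (p % 8)
            if zlib.crc32(bytes(cand)) == expected_crc:
                return bytes(cand)
    return None                   # errors overlapped: fall back to retransmission

packet = b"data-packet-payload"
crc = zlib.crc32(packet)
c1 = bytearray(packet); c1[2] ^= 0x04    # error hits the first copy here...
c2 = bytearray(packet); c2[9] ^= 0x20    # ...and the second copy elsewhere
fixed = combine_and_correct(c1, c2, crc)
```

Because the search space is only the differing bits rather than the whole packet, the cost stays small for the small packet sizes where the abstract reports near-bound throughput.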