
Showing papers on "Error detection and correction published in 1997"


Journal ArticleDOI
Luigi Rizzo1
01 Apr 1997
TL;DR: A very basic description of erasure codes is provided, an implementation of a simple but very flexible erasure code to be used in network protocols is described, and its performance and possible applications are discussed.
Abstract: Reliable communication protocols require that all the intended recipients of a message receive the message intact. Automatic Repeat reQuest (ARQ) techniques are used in unicast protocols, but they do not scale well to multicast protocols with large groups of receivers, since segment losses tend to become uncorrelated, thus greatly reducing the effectiveness of retransmissions. In such cases, Forward Error Correction (FEC) techniques can be used, consisting of the transmission of redundant packets (based on error correcting codes) that allow the receivers to recover from independent packet losses. Despite the widespread use of error correcting codes in many fields of information processing, and a general consensus on the usefulness of FEC techniques within some of the Internet protocols, very few actual implementations of the latter exist. This probably derives from the different types of applications, and from concerns related to the complexity of implementing such codes in software. To fill this gap, in this paper we provide a very basic description of erasure codes, describe an implementation of a simple but very flexible erasure code to be used in network protocols, and discuss its performance and possible applications. Our code is based on Vandermonde matrices computed over GF(p^r), can be implemented very efficiently on common microprocessors, and is suited to a number of different applications, which are briefly discussed in the paper. An implementation of the erasure code shown in this paper is available from the author, and is able to encode/decode data at speeds up to several MB/s running on a Pentium 133.

1,067 citations
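The construction behind the paper can be illustrated with a toy erasure code. This sketch works over the prime field GF(257) rather than the paper's GF(2^r), a simplifying assumption that makes field arithmetic plain modular arithmetic; like the paper's code, any k of the n encoded symbols suffice to recover the data, because every k-by-k submatrix of a Vandermonde matrix with distinct evaluation points is invertible.

```python
# Toy Vandermonde erasure code over the prime field GF(257).
# Not the paper's implementation: real deployments use GF(2^r) for speed.
P = 257  # prime modulus, so field arithmetic is ordinary modular arithmetic

def encode(data, n):
    """Encode k data symbols into n symbols: y_i = sum_j data[j] * i**j mod P."""
    return [sum(d * pow(i, j, P) for j, d in enumerate(data)) % P
            for i in range(n)]

def decode(received, k):
    """Recover the k data symbols from any k (index, symbol) pairs
    by Gauss-Jordan elimination mod P."""
    pts = received[:k]
    A = [[pow(i, j, P) for j in range(k)] for i, _ in pts]  # Vandermonde rows
    y = [s for _, s in pts]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col])   # find a pivot row
        A[col], A[piv] = A[piv], A[col]
        y[col], y[piv] = y[piv], y[col]
        inv = pow(A[col][col], P - 2, P)                    # Fermat inverse
        A[col] = [a * inv % P for a in A[col]]
        y[col] = y[col] * inv % P
        for r in range(k):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % P for a, b in zip(A[r], A[col])]
                y[r] = (y[r] - f * y[col]) % P
    return y
```

Encoding k symbols to n = k + r symbols tolerates any r losses, which is exactly the FEC-for-multicast use case the abstract describes.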


Journal ArticleDOI
Madhu Sudan1
TL;DR: To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides error recovery capability beyond the error-correction bound of a code for any efficient code.

864 citations


Journal ArticleDOI
TL;DR: A complete specification of Reed-Solomon coding for reliability in RAID-like systems, written for the systems programmer: it assumes no prior knowledge of algebra or coding theory, so the coding can be implemented without consulting any external references.
Abstract: SUMMARY It is well-known that Reed-Solomon codes may be used to provide error correction for multiple failures in RAID-like systems. The coding technique itself, however, is not as well-known. To the coding theorist, this technique is a straightforward extension to a basic coding paradigm and needs no special mention. However, to the systems programmer with no training in coding theory, the technique may be a mystery. Currently, there are no references that describe how to perform this coding that do not assume that the reader is already well-versed in algebra and coding theory. This paper is intended for the systems programmer. It presents a complete specification of the coding algorithm plus details on how it may be implemented. This specification assumes no prior knowledge of algebra or coding theory. The goal of this paper is for a systems programmer to be able to implement Reed-Solomon coding for reliability in RAID-like systems without needing to consult any external references. ©1997 by John Wiley & Sons, Ltd.

664 citations
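As a taste of what such a specification covers, the field arithmetic underlying Reed-Solomon coding can be sketched in a few lines. This is a generic GF(2^8) example using the common primitive polynomial 0x11D, not code from the paper: addition is XOR, and multiplication is carry-less shift-and-add with polynomial reduction.

```python
# Arithmetic in GF(2^8) with the primitive polynomial
# x^8 + x^4 + x^3 + x^2 + 1 (0x11D), a common choice for Reed-Solomon.
def gf256_mul(a, b):
    """Multiply two elements of GF(2^8) (carry-less 'Russian peasant')."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # add (XOR) the current shift of a
        b >>= 1
        a <<= 1
        if a & 0x100:            # degree reached 8: reduce mod field polynomial
            a ^= 0x11D
    return result

def gf256_add(a, b):
    """Addition in GF(2^8) is bitwise XOR (and so is subtraction)."""
    return a ^ b
```

With these two operations in hand, the RAID parity computations reduce to matrix-vector products over GF(2^8); production code replaces `gf256_mul` with log/antilog tables.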


Journal ArticleDOI
TL;DR: This paper models the correlation properties of the fading mobile radio channel as a one-step Markov process whose transition probabilities are a function of the channel characteristics, and presents the throughput performance of the Go-Back-N and selective-repeat automatic repeat request (ARQ) protocols with timer control.
Abstract: In this paper, we study the correlation properties of the fading mobile radio channel. Based on these studies, we model the channel as a one-step Markov process whose transition probabilities are a function of the channel characteristics. Then we present the throughput performance of the Go-Back-N and selective-repeat automatic repeat request (ARQ) protocols with timer control, using the Markov model for both forward and feedback channels. This approximation is found to be very good, as confirmed by simulation results.

302 citations
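The one-step Markov (Gilbert-style) channel model used in the paper is easy to simulate. The transition probabilities below are hypothetical illustrations, not values from the paper; the point is that errors arrive in bursts whose lengths are governed by the bad-to-good transition probability.

```python
# Two-state Markov channel: a "good" state with no errors and a "bad"
# state where errors occur with some probability, so errors are bursty.
import random

def simulate_markov_channel(n_slots, p_gb, p_bg, bad_error_rate, seed=0):
    """Return a list of 0/1 error indicators for n_slots channel uses."""
    rng = random.Random(seed)
    errors, state = [], "good"
    for _ in range(n_slots):
        if state == "good":
            if rng.random() < p_gb:
                state = "bad"       # fade begins
        elif rng.random() < p_bg:
            state = "good"          # fade ends
        if state == "bad" and rng.random() < bad_error_rate:
            errors.append(1)
        else:
            errors.append(0)
    return errors
```

Feeding such an error trace into an ARQ simulator is how Markov-model throughput predictions like the paper's are typically cross-checked.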


Journal ArticleDOI
TL;DR: The second- and third-order statistics of the residual error process when block transmissions are performed over a bursty channel are studied and an effective interleaving strategy is found to require a very large buffer.
Abstract: In the development of encoding algorithms for image, video, and other mixed media transmissions, it is important to note that the channel "seen" by the applications is the physical channel as modified by the error-correcting mechanisms used at the physical level. Therefore, the statistics of the residual error process is relevant to the design of encoding algorithms. In this paper, we study the second- and third-order statistics of the residual error process when block transmissions are performed over a bursty channel. The effect of interleaving is explicitly studied. The conditions under which a Markovian model for the block errors is adequate are identified. Derivations of the parameters of the block error process are then presented in terms of the parameters of the bit/symbol error process. At higher data speeds an effective interleaving strategy is found to require a very large buffer.

230 citations
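The interleaving whose buffering cost the paper analyzes can be sketched as a simple block (row/column) interleaver; the depth-times-width buffer is exactly the memory requirement that becomes problematic at higher data speeds.

```python
# Block interleaver: write symbols row by row into a depth x width buffer,
# read them out column by column. A burst of consecutive channel errors is
# thereby spread across many codewords.
def interleave(symbols, depth, width):
    """Row-wise in, column-wise out. len(symbols) must equal depth * width."""
    assert len(symbols) == depth * width
    rows = [symbols[r * width:(r + 1) * width] for r in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(symbols, depth, width):
    """Inverse permutation: interleaving with the dimensions swapped."""
    return interleave(symbols, width, depth)
```

With depth 3 and width 4, a burst hitting three consecutive interleaved symbols lands in three different rows (codewords) after deinterleaving, so a single-error-correcting code per row can clean it up.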


Journal ArticleDOI
TL;DR: The scheme solves the notorious problem of power control in OFDM systems by maintaining a peak-to-mean envelope power ratio of at most 3 dB while allowing simple encoding and decoding at high code rates for binary, quaternary or higher-phase signalling together with good error correction.
Abstract: A coding scheme for OFDM transmission is proposed, exploiting a previously unrecognised connection between pairs of Golay complementary sequences and second-order Reed-Muller codes. The scheme solves the notorious problem of power control in OFDM systems by maintaining a peak-to-mean envelope power ratio of at most 3 dB while allowing simple encoding and decoding at high code rates for binary, quaternary or higher-phase signalling together with good error correction.

207 citations


Patent
25 Mar 1997
TL;DR: In this paper, a method for adaptive forward error correction in a data communication system (100) provides for dynamically changing Forward Error Correction (FEC) parameters based upon communication channel conditions.
Abstract: An apparatus (101, 110) and method for adaptive forward error correction in a data communication system (100) provides for dynamically changing forward error correction parameters based upon communication channel conditions. Data having a current degree of forward error correction is received (305), and a channel parameter is monitored (310). A threshold level for the channel parameter is determined (315), and the monitored channel parameter is compared to the threshold level (320). When the channel parameter is not within a predetermined or adaptive variance of the threshold level, a revised forward error correction parameter having a greater or lesser degree of forward error correction capability is selected (330, 340, 350, 360), and the revised forward error correction parameter is transmitted (370). The device receiving the revised forward error correction parameter, such as a secondary station (110), then transmits data encoded utilizing the revised error correction parameter (425).

206 citations
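The patent's control loop can be sketched roughly as follows. The function name, the numeric level scale, and the threshold handling here are hypothetical illustrations, not taken from the patent claims: a monitored channel parameter (say, measured bit-error rate) is compared against a threshold with a variance band, and the FEC strength is stepped up or down only when the parameter leaves the band.

```python
# Hypothetical adaptive-FEC parameter selection (levels 0..max_level,
# 0 = weakest code). Names and scales are illustrative, not from the patent.
def select_fec_level(level, measured_ber, threshold, variance, max_level=4):
    """Return the revised FEC level for the next transmission."""
    if measured_ber > threshold + variance and level < max_level:
        return level + 1   # channel worse than expected: stronger FEC
    if measured_ber < threshold - variance and level > 0:
        return level - 1   # channel better than expected: spend less on FEC
    return level           # within the variance band: keep current parameters
```

The variance band provides hysteresis, so the two stations are not constantly renegotiating FEC parameters on small channel fluctuations.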


Journal ArticleDOI
TL;DR: This work considers the problem of communications over a wireless channel in support of data transmissions from the perspective of small portable devices that must rely on limited battery energy, and proposes a simple probing scheme and a modified scheme that yields slightly better performance but requires some additional complexity.
Abstract: We consider the problem of communications over a wireless channel in support of data transmissions from the perspective of small portable devices that must rely on limited battery energy. We model the channel outages as statistically correlated errors. Classic ARQ strategies are found to lead to a considerable waste of energy, due to the large number of transmissions. The use of finite energy sources in the face of dependent channel errors leads to new protocol design criteria. As an example, a simple probing scheme, which slows down the transmission rate when the channel is impaired, is shown to be more energy efficient, with a slight loss in throughput. A modified scheme that yields slightly better performance but requires some additional complexity is also studied. Some references on the modeling of battery cells are discussed to highlight the fact that battery charge capacity is strongly influenced by the available "relaxation time" between current pulses. A formal approach that can track complex models for power sources, including dynamic charge recovery, is also developed.

205 citations


Book ChapterDOI
01 Jan 1997
TL;DR: Quantum error correction can be performed fault-tolerantly, allowing a quantum state to be stored intact, with arbitrarily small error probability, for an arbitrarily long time at a constant decoherence rate.
Abstract: Quantum error correction can be performed fault-tolerantly. This allows a quantum state to be stored intact (with arbitrarily small error probability) for an arbitrarily long time at a constant decoherence rate.

201 citations


Journal ArticleDOI
TL;DR: This survey gives an overview of existing transport-layer error control mechanisms and discusses their suitability for use in IP-based networks, and the impact of IP over ATM on the requirements of error control mechanism is discussed.
Abstract: IP-based audio-visual multicast applications are gaining increasing interest since they can be realized using inexpensive network services that offer no guarantees for loss or delay. When using network services that do not guarantee the quality of service (QoS) required by audio-visual applications, recovery from losses due to congestion in the network is a key problem that must be solved. This survey gives an overview of existing transport-layer error control mechanisms and discusses their suitability for use in IP-based networks. Additionally, the impact of IP over ATM on the requirements of error control mechanisms is discussed. Different network scenarios are used to assess the performance of retransmission-based error correction and forward error correction.

195 citations


Journal ArticleDOI
TL;DR: The problem of robust video transmission in error prone environments is addressed utilizing a feedback channel between transmitter and receiver carrying acknowledgment information and a low complexity algorithm for real-time reconstruction of spatio-temporal error propagation is described in detail.
Abstract: In this paper we address the problem of robust video transmission in error prone environments. The approach is compatible with the ITU-T video coding standard H.263. Fading situations in mobile networks are tolerated and the image quality degradation due to spatio-temporal error propagation is minimized utilizing a feedback channel between transmitter and receiver carrying acknowledgment information. In a first step, corrupted groups of blocks (GOBs) are concealed to avoid annoying artifacts caused by decoding of an erroneous bit stream. The GOB and the corresponding frame number are reported to the transmitter via the back channel. The encoder evaluates the negative acknowledgments and reconstructs the spatial and temporal error propagation. A low complexity algorithm for real-time reconstruction of spatio-temporal error propagation is described in detail. Rapid error recovery is achieved by INTRA refreshing image regions (macroblocks) bearing visible distortion. The feedback channel method does not introduce additional delay and is particularly relevant for real-time conversational services in mobile networks. Experimental results with bursty bit error sequences simulating a Digital European Cordless Telephony (DECT) channel are presented with different combinations of forward error correction (FEC), automatic repeat request (ARQ), and the proposed error compensation technique. Compared to the case where FEC and ARQ are used for error correction, a gain of up to 3 dB peak signal-to-noise ratio (PSNR) is observed if error compensation is employed additionally.

Journal ArticleDOI
01 Dec 1997-System
TL;DR: The study has shown that learners' performance in error correction in writing can provide teachers with valuable information to guide their error correction policy.

Journal ArticleDOI
TL;DR: Several temporal, spatial, and transform-domain error concealment techniques for MPEG coded pictures are discussed, a new scheme based on directional interpolation is proposed, and the performance of these techniques by computer simulation is compared.
Abstract: Compressed bitstreams are, in general, very sensitive to channel errors. For instance, a single bit error in a coded video bitstream may cause severe degradation on picture quality. When bit errors occur during transmission and cannot be corrected by an error correction scheme, error concealment is needed to conceal the corrupted image at the receiver. Error concealment algorithms attempt to repair damaged portions of the picture by exploiting both the spatial and the temporal redundancies in the received and reconstructed video signal. We discuss several temporal, spatial, and transform-domain error concealment techniques for MPEG coded pictures, and propose a new scheme based on directional interpolation. We also compare the performance of these techniques by computer simulation.
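A toy version of spatial error concealment illustrates the idea, using plain vertical interpolation, which is much simpler than the directional interpolation the paper proposes: a strip of lost image lines is filled by interpolating between the nearest correctly received lines above and below.

```python
# Toy spatial error concealment: fill a lost horizontal strip of an image
# (rows stored as lists of pixel values) by vertical linear interpolation
# between the last good line above and the first good line below.
def conceal_strip(image, top, height):
    """Overwrite image rows [top, top + height) in place."""
    above = image[top - 1]            # last correctly received line
    below = image[top + height]       # first correctly received line
    for i in range(height):
        w = (i + 1) / (height + 1)    # interpolation weight, small near `above`
        image[top + i] = [round((1 - w) * a + w * b)
                          for a, b in zip(above, below)]
    return image
```

Real concealment schemes, including the paper's, additionally pick an interpolation direction from edge orientation and can fall back to temporal replacement from the previous frame.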

Journal ArticleDOI
TL;DR: The main results are that even for moderate head velocities, system delay causes more registration error than all other sources combined; eye tracking is probably not necessary; and there are many small error sources that will make submillimeter registration almost impossible in an optical STHMD system without feedback.
Abstract: Augmented reality (AR) systems typically use see-through head-mounted displays (STHMDs) to superimpose images of computer-generated objects onto the user's view of the real environment in order to augment it with additional information. The main failing of current AR systems is that the virtual objects displayed in the STHMD appear in the wrong position relative to the real environment. This registration error has many causes: system delay, tracker error, calibration error, optical distortion, and misalignment of the model, to name only a few. Although some work has been done in the area of system calibration and error correction, very little work has been done on characterizing the nature and sensitivity of the errors that cause misregistration in AR systems. This paper presents the main results of an end-to-end error analysis of an optical STHMD-based tool for surgery planning. The analysis was done with a mathematical model of the system and the main results were checked by taking measurements on a real system under controlled circumstances. The model makes it possible to analyze the sensitivity of the system-registration error to errors in each part of the system. The major results of the analysis are: (1) even for moderate head velocities, system delay causes more registration error than all other sources combined; (2) eye tracking is probably not necessary; (3) tracker error is a significant problem both in head tracking and in system calibration; (4) the World (or reference) coordinate system adds error and should be omitted when possible; (5) computational correction of optical distortion may introduce more delay-induced registration error than the distortion error it corrects; and (6) there are many small error sources that will make submillimeter registration almost impossible in an optical STHMD system without feedback. Although this model was developed for optical STHMDs for surgical planning, many of the results apply to other HMDs as well.

Patent
25 Apr 1997
TL;DR: In this paper, a method and apparatus for increasing the data rate and providing antenna diversity using multiple transmit antennas is described, where a set of bits of a digital signal are used to generate a codeword.
Abstract: A method and apparatus for increasing the data rate and providing antenna diversity using multiple transmit antennas is disclosed. A set of bits of a digital signal are used to generate a codeword. Codewords are provided according to a channel code. Delay elements may be provided in antenna output channels, or with suitable code construction delay may be omitted. The n signals, representing the n symbols of a codeword, are transmitted with n different transmit antennas. At the receiver, MLSE or other decoding is used to decode the noisy received sequence. The parallel transmission and channel coding enable an increase in the data rate over previous techniques, and recovery even under fading conditions. The channel coding may be concatenated with error correction codes under appropriate conditions.

Journal ArticleDOI
TL;DR: This work studies the go-back-N retransmission protocol operating over a wireless channel using a finite energy source with a flat power profile, and characterize the sensitivity of the total number of correctly transmitted packets to the choice of the output power level.
Abstract: When terminals powered by a finite battery source are used for wireless communications, energy constraints are likely to influence the choice of error control protocols. Therefore, we propose the average number of correctly transmitted packets during the lifetime of the battery as a new metric. In particular, we study the go-back-N retransmission protocol operating over a wireless channel using a finite energy source with a flat power profile. We characterize the sensitivity of the total number of correctly transmitted packets to the choice of the output power level. We then generalize our results to arbitrary power profiles through both a recursive technique and Markov analysis. Finally, we compare the performance of go-back-N with an adaptive error control protocol that slows down the transmission rate when the channel is impaired, and document the advantages.
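The paper's metric, packets delivered per battery, can be illustrated with a crude simulation. This sketch assumes independent packet errors and a unit energy cost per transmission, far simpler than the paper's correlated-channel and power-profile analysis; the function name and window size are illustrative assumptions.

```python
# Crude go-back-N model: every transmission (new or repeated) drains one
# energy unit; after an error, the in-flight remainder of the window is
# transmitted but discarded by the receiver, i.e. wasted energy.
import random

def gbn_packets_per_battery(energy, per, window=4, seed=0):
    """Count packets correctly delivered before the battery runs out,
    with i.i.d. packet-error rate `per`."""
    rng = random.Random(seed)
    delivered = 0
    while energy > 0:
        failed = False
        for _ in range(window):
            if energy <= 0:
                break
            energy -= 1                 # every transmission costs energy
            if failed:
                continue                # go back: receiver discards the rest
            if rng.random() < per:
                failed = True           # first error in the window
            else:
                delivered += 1
    return delivered
```

Sweeping `per` (or, in the paper's richer model, the output power level that determines it) exposes the trade-off the authors characterize: more power lowers the error rate per packet but buys fewer transmissions per battery.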

Patent
Ryutaro Yamanaka1
20 Nov 1997
TL;DR: In this article, an error correction device is provided with an internal code decoder which outputs a series of decoded data and reliability information of the decoding data, a CRC (Cyclic Redundancy Check) decoder, a de-interleaver, an erasure position detector, and an external decoder for decoding an external code by soft judgement.
Abstract: The error correction device is provided with an internal code decoder which outputs a series of decoded data and reliability information of the decoded data, a CRC (Cyclic Redundancy Check) decoder, a de-interleaver, an erasure position detector, and an external code decoder for decoding an external code by soft judgment. When the external code is decoded by the soft judgment, not only the series of decoded data of the internal code and their reliability information but also frame error information based on CRC are used as input signals. It is therefore possible to perform error correction with high accuracy and to obtain low BER characteristics.
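The CRC that supplies the frame-error information in such designs can be sketched as a bitwise CRC-8. The polynomial 0x07 (as in the common CRC-8/SMBus variant) is an assumption here, since the patent does not fix one: appending the checksum makes the CRC of the extended frame zero, which is how frame errors are flagged.

```python
# Bitwise CRC-8, polynomial x^8 + x^2 + x + 1 (0x07), init 0, no reflection.
def crc8(data, poly=0x07, crc=0):
    """Compute the CRC-8 of a byte string."""
    for byte in data:
        crc ^= byte                               # fold the byte into the register
        for _ in range(8):
            if crc & 0x80:                        # top bit set: shift and reduce
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc
```

The transmitter appends `crc8(frame)` to each frame; the receiver recomputes the CRC over frame plus checksum and declares a frame error whenever the result is nonzero.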

Proceedings ArticleDOI
26 Sep 1997
TL;DR: A novel error control scheme where the most battery energy efficient hybrid combination of an appropriate FEC code and ARQ protocol is chosen, and adapted over time, for each stream (ATM virtual circuit or IP/RSVP flow).
Abstract: Energy efficiency, which directly affects battery life and portability, is perhaps the single most important design metric in hand-held computing devices capable of mobile networking over wireless radio links. By virtue of their being relatively thin clients, a high fraction of the power consumption in portable wireless computing devices is accounted for by the transport of packet data over the wireless link [Stemm96]. In particular, the error control strategy (e.g. convolutional and block channel coding for forward error correction (FEC), ARQ protocols, hybrids) used for wireless link data transport has a direct impact on battery power consumption. Error control has traditionally been studied by channel coding researchers from the perspective of selecting an error control scheme to achieve a desired level of radio channel performance. We instead study the problem of error control from a perspective more relevant to battery operated devices: the amount of battery energy consumed to transmit bits across a wireless link. This includes both the physical transmission of useful and redundancy data, as well as the computation of the error control redundancy. We first describe a novel error control scheme where the most battery energy efficient hybrid combination of an appropriate FEC code and ARQ protocol is chosen, and adapted over time, for each stream (ATM virtual circuit or IP/RSVP flow). Next, we present analysis and simulation results to guide the selection and adaptation of the most energy efficient error control scheme as a function of quality of service, packet size, and channel state.

Journal ArticleDOI
TL;DR: The synthesis procedure (implemented in Stanford CRC's TOPS synthesis system) fully automates the design process, and reduces the cost of concurrent error detection compared with previous methods.
Abstract: This paper presents a procedure for synthesizing multilevel circuits with concurrent error detection. All errors caused by single stuck-at faults are detected using a parity-check code. The synthesis procedure (implemented in Stanford CRC's TOPS synthesis system) fully automates the design process, and reduces the cost of concurrent error detection compared with previous methods. An algorithm for selecting a good parity-check code for encoding the circuit outputs is described. Once the code has been selected, a new procedure called structure-constrained logic optimization is used to minimize the area of the circuit as much as possible while still using a circuit structure that ensures that single stuck-at faults cannot produce undetected errors. It is proven that the resulting implementation is path fault secure, and when augmented by a checker, forms a self-checking circuit. The actual layout areas required for self-checking implementations of benchmark circuits generated with the techniques described in this paper are compared with implementations using Berger codes, single-bit parity, and duplicate-and-compare. Results indicate that the self-checking multilevel circuits generated with the procedure described here are significantly more economical.
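The parity-check idea can be illustrated in miniature. The "circuit" below is a hypothetical stand-in, not a circuit from the paper: a separately computed parity bit lets a checker flag any single-bit error on the outputs, mirroring how concurrent error detection catches the effect of a single stuck-at fault.

```python
# Miniature concurrent error detection with single-bit (even) parity.
# `circuit` is a hypothetical 3-input, 4-output combinational function.
def circuit(x):
    """Toy combinational logic: 3 input bits -> 4 output bits."""
    return [x[0] ^ x[1], x[1] & x[2], x[0] | x[2], x[1]]

def predicted_parity(x):
    """Parity of the outputs, computed by an independent logic cone
    (in hardware this cone shares no gates with `circuit`)."""
    return (x[0] ^ x[1]) ^ (x[1] & x[2]) ^ (x[0] | x[2]) ^ x[1]

def check(outputs, parity_bit):
    """Checker: True iff the outputs are consistent with the parity bit."""
    p = 0
    for bit in outputs:
        p ^= bit
    return p == parity_bit
```

Because any single-bit output error flips the observed parity, the checker detects it concurrently, during normal operation; the paper's contribution is synthesizing multilevel logic so that single stuck-at faults can only ever produce such detectable (odd-weight) output errors.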

Journal ArticleDOI
TL;DR: The registration problem is considered, which is a prerequisite process of a data fusion system to accurately estimate and correct systematic errors, and an exact maximum likelihood (EML) algorithm for registration is presented.
Abstract: Data fusion is a process dealing with the association, correlation, and combination of data and information from multiple sources to achieve refined position and identity estimates. We consider the registration problem, which is a prerequisite process of a data fusion system to accurately estimate and correct systematic errors. An exact maximum likelihood (EML) algorithm for registration is presented. The algorithm is implemented using a recursive two-step optimization that involves a modified Gauss-Newton procedure to ensure fast convergence. Statistical performance of the algorithm is also investigated, including its consistency and efficiency discussions. In particular, the explicit formulas for both the asymptotic covariance and the Cramer-Rao bound (CRB) are derived. Finally, simulated and real-life multiple radar data are used to evaluate the performance of the proposed algorithm.

Patent
15 May 1997
TL;DR: In this article, a disk storage system is described where user data received from a host system is first encoded according to a first channel code having a high code rate, and then encoded using an ECC code, such as a Reed-Solomon code, wherein the ECC redundancy symbols are encoded using a second channel code with low error propagation.
Abstract: A disk storage system is disclosed wherein user data received from a host system is first encoded according to a first channel code having a high code rate, and then encoded according to an ECC code, such as a Reed-Solomon code, wherein the ECC redundancy symbols are encoded according to a second channel code having low error propagation. In the preferred embodiment, the first channel code is a RLL (d,k) code having a long k constraint which allows for longer block lengths (and higher code rates). During read back, a synchronous read channel samples the analog read signal asynchronously and interpolates the asynchronous sample values to generate sample values substantially synchronized to the baud rate. In contrast to conventional synchronous-sampling timing recovery, interpolated timing recovery can tolerate a longer RLL k constraint because it is less sensitive to noise in the read signal and not affected by process variations in fabrication. Additionally, a trellis sequence detector detects an estimated binary sequence from the synchronous sample values, wherein a state transition diagram of the trellis detector is configured according to the code constraints of the first and second channel codes. The estimated binary sequence output by the sequence detector is buffered in a data buffer to facilitate the error detection and correction process, and to allow for retroactive and split-segment symbol synchronization using multiple sync marks.

Journal ArticleDOI
TL;DR: A novel error concealment technique based on the discrete cosine transform (DCT) coefficients recovery and its application to the MPEG-2 bit stream error, requiring much lower computational load and simpler hardware structure than existing algorithms, while providing adequate performances.
Abstract: This paper presents a novel error concealment technique based on the discrete cosine transform (DCT) coefficients recovery and its application to the MPEG-2 bit stream error. Assuming a smoothness constraint on image intensity, an object function which describes the intersample variations at the boundaries of the lost block and the adjacent blocks is defined, and the corrupted DCT coefficients are recovered by solving a linear equation. Our approach can be regarded as a special case of Wang et al.'s (1991). However, we show that the linear equation in the proposed algorithm can be decomposed into four independent subequations, requiring much lower computational load and simpler hardware structure than existing algorithms, while providing adequate performances. To develop a generic error concealment (EC) system, the blocks corrupted by the random bit errors are identified by a multistage error detection algorithm. Thus, the proposed EC system can be applied to more realistic environments, such as concealment of random bit error in MPEG-2 bit stream. Computer simulation results show that the quality of a recovered image is significantly improved even at a bit error rate as high as 10^-5.

Journal ArticleDOI
Immink Kornelis Antonie1
TL;DR: A new coding technique is proposed that translates user information into a constrained sequence using very long codewords; a storage-effective enumerative encoding scheme translates user data into long dk sequences and vice versa, and estimates are given of the relationship between coding efficiency and encoder/decoder complexity.
Abstract: A new coding technique is proposed that translates user information into a constrained sequence using very long codewords. Huge error propagation resulting from the use of long codewords is avoided by reversing the conventional hierarchy of the error control code and the constrained code. The new technique is exemplified by focusing on (d, k)-constrained codes. A storage-effective enumerative encoding scheme is proposed for translating user data into long dk sequences and vice versa. For dk runlength-limited codes, estimates are given of the relationship between coding efficiency versus encoder and decoder complexity. We show that for most common d, k values, a code rate of less than 0.5% below channel capacity can be obtained by using hardware mainly consisting of a ROM lookup table of size 1 kbyte. For selected values of d and k, the size of the lookup table is much smaller. The paper is concluded by an illustrative numerical example of a rate 256/466, (d=2, k=15) code, which provides a serviceable 10% increase in rate with respect to its traditional rate 1/2, (2, 7) counterpart.
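Enumerative coding of a runlength constraint can be sketched for the simplest case, binary sequences with no two adjacent ones (the d=1, k=infinity constraint, chosen here for brevity rather than one of the paper's (d, k) examples): the number of valid length-n strings is a Fibonacci number, and a user-data integer maps to a sequence by comparing it against subtree counts, the standard enumerative method.

```python
# Enumerative coding for the "no two adjacent 1s" constraint.
def count(n):
    """Number of valid binary strings of length n (a Fibonacci number)."""
    a, b = 1, 2                         # lengths 0 and 1
    for _ in range(n):
        a, b = b, a + b
    return a

def index_to_sequence(i, n):
    """Map an integer 0 <= i < count(n) to the i-th valid sequence
    (lexicographic order, 0 < 1)."""
    bits = []
    while n > 0:
        zeros_first = count(n - 1)      # valid sequences starting with 0
        if i < zeros_first:
            bits.append(0)
            n -= 1
        else:
            i -= zeros_first
            if n > 1:
                bits.extend([1, 0])     # a 1 must be followed by a 0
                n -= 2
            else:
                bits.append(1)
                n -= 1
    return bits

def sequence_to_index(bits):
    """Inverse mapping: valid sequence back to its integer index."""
    i, n, pos = 0, len(bits), 0
    while pos < n:
        if bits[pos] == 1:
            i += count(n - pos - 1)     # skip every sequence with a 0 here
            pos += 2                    # the next bit is forced to be 0
        else:
            pos += 1
    return i
```

The counts play the role of the paper's ROM lookup table; for general (d, k) constraints the same scheme works with a different recurrence for `count`, and the long-codeword efficiency comes from letting n grow large.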

Patent
23 Jun 1997
TL;DR: In this paper, the ECC circuitry corrects the error and rewrites the corrected data back to the external cache, if the error is correctable, the data is not correctable.
Abstract: On-chip delivery of data from an on-chip or off-chip cache is separated into two buses. A fast fill bus provides data to latency critical caches without ECC error detection and correction. A slow fill bus provides the data to latency insensitive caches with ECC error detection and correction. Because the latency critical caches receive the data without error detection, they receive the data at least one clock cycle before the latency insensitive caches, thereby enhancing performance if there is no ECC error. If an ECC error is detected, a software trap is executed which flushes the external cache and the latency sensitive caches that received the data before the trap was generated. If the error is correctable, ECC circuitry corrects the error and rewrites the corrected data back to the external cache. If the error is not correctable, the data is read from main memory to the external cache.

Journal ArticleDOI
TL;DR: A technique to implement error detection as part of the arithmetic coding process is described; heuristic arguments show that a small amount of extra redundancy can be very effective in detecting errors very quickly, and practical tests confirm this prediction.
Abstract: Arithmetic coding for data compression has gained widespread acceptance as the right method for optimum compression when used with a suitable source model. A technique to implement error detection as part of the arithmetic coding process is described. Heuristic arguments are given to show that a small amount of extra redundancy can be very effective in detecting errors very quickly, and practical tests confirm this prediction.

Journal ArticleDOI
TL;DR: A hybrid ARQ error control scheme based on the concatenation of a Reed-Solomon (RS) code and a rate compatible punctured convolutional (RCPC) code for low-bit-rate video transmission over wireless channels is proposed.
Abstract: This paper proposes a hybrid ARQ error control scheme based on the concatenation of a Reed-Solomon (RS) code and a rate compatible punctured convolutional (RCPC) code for low-bit-rate video transmission over wireless channels. The concatenated hybrid ARQ scheme we propose combines the advantages of both type-I and type-II hybrid ARQ schemes. Certain error correction capability is provided in each (re)transmitted packet, and the information can be recovered from each transmission or retransmission alone if the errors are within the error correction capability (similar to type-I hybrid ARQ). The retransmitted packet contains redundancy bits which, when combined with the previous transmission, result in a more powerful RS/convolutional concatenated code to recover information if error correction fails for the individual transmissions (similar to type-II hybrid ARQ). Bit-error rate (BER) or signal-to-noise ratio (SNR) of a radio channel changes over time due to mobile movement and fading. The channel quality at any instant depends on the previous channel conditions. For the accurate analysis of the performance of the hybrid ARQ scheme, we use a multistate Markov chain (MSMC) to model the radio channel at the data packet level. We propose a method to partition the range of the received SNR into a set of states for constructing the model so that the difference between the error rate of the real radio channel and that of the MSMC model is minimized. Based on the model, we analyze the performance of the concatenated hybrid ARQ scheme. The results give valuable insight into the effects of the error protection capability in each packet, the mobile speed, and the number of retransmissions. Finally, the transmission of H.263 coded video over a wireless channel with error protection provided by the concatenated hybrid ARQ scheme is studied by means of simulations.

Journal ArticleDOI
TL;DR: A comparison of expert and non-expert subjects suggests that performance skill is not only based on reduced variance and bias, but also on the construction of richer mental models of error correction.
Abstract: Many interactive human skills are based on real-time error detection and correction. Here we investigate the spectral properties of such skills, focusing on a synchronization task. A simple autoregressive error correction model, based on separate ‘motor’ and ‘cognitive’ sources, provides an excellent fit to experimental spectral data. The model can also apply to recurrent processes not based on error correction, allowing commentary on previous claims of 1/f-type noise in human cognition. A comparison of expert and non-expert subjects suggests that performance skill is not only based on reduced variance and bias, but also on the construction of richer mental models of error correction.
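A minimal simulation of a first-order autoregressive error-correction model with separate 'cognitive' (timekeeper) and 'motor' noise sources might look as follows; the correction gain and noise magnitudes are illustrative assumptions, not the paper's fitted parameters:

```python
import random

# Sketch of a first-order autoregressive error-correction model for a
# synchronization (tapping) task, with separate 'cognitive' (timekeeper)
# and 'motor' noise sources. All parameter values here are assumptions.
def simulate_asynchronies(n, alpha=0.3, sd_cognitive=20.0, sd_motor=10.0,
                          rng=None):
    """Return n simulated tap-vs-metronome asynchronies (in ms)."""
    rng = rng or random.Random(0)
    asynchronies = []
    a = 0.0                           # current asynchrony
    m_prev = rng.gauss(0, sd_motor)   # motor delay of the previous tap
    for _ in range(n):
        c = rng.gauss(0, sd_cognitive)  # central timekeeper noise
        m = rng.gauss(0, sd_motor)      # peripheral motor noise
        # A fraction alpha of the observed asynchrony is corrected on the
        # next tap; motor noise enters as a first difference.
        a = (1 - alpha) * a + c + (m - m_prev)
        m_prev = m
        asynchronies.append(a)
    return asynchronies
```

Spectra estimated from such a series show the two sources separately: the differenced motor noise boosts high frequencies while the autoregressive correction shapes the low-frequency end.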

Book
01 Jun 1997

Journal ArticleDOI
TL;DR: This paper provides performance results through analysis and simulation for key error control problems encountered in using wireless links to transport asynchronous transfer mode (ATM) cells and concludes that it is very important to make the physical layer as SONET-like as possible through the use of powerful FEC, interleaving, and ARQ.
Abstract: This paper provides performance results through analysis and simulation for key error control problems encountered in using wireless links to transport asynchronous transfer mode (ATM) cells. Problems considered include forward-error correction (FEC) and interleaving at the physical layer, the impact of wireless links on the ATM cell header-error control (HEC) and cell delineation (CD) functions, the application of data link automatic repeat-request (ARQ) for traffic requiring reliable transport, and the impact of the choice of end-to-end ARQ protocol for reliable service. We conclude that it is very important to make the physical layer as SONET-like as possible through the use of powerful FEC, interleaving, and ARQ. These additional error control measures are especially necessary for disturbed channels because of the degrading effects of the channel on higher-layer functions. A recommended error control architecture is given with tradeoffs.
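The HEC function discussed above is a CRC-8 over the first four header octets (generator x^8 + x^2 + x + 1, result XORed with the coset pattern 0x55, per ITU-T I.432); a minimal sketch, with an arbitrary example header:

```python
# Sketch of ATM header error control (HEC): a CRC-8 over the four header
# octets with generator x^8 + x^2 + x + 1, then XORed with the coset
# pattern 0x55 (ITU-T I.432). The coset XOR improves cell-delineation
# robustness against bit slips. The example header bytes are arbitrary.
def atm_hec(header4):
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):  # MSB-first bitwise CRC with polynomial 0x07
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

header = bytes([0x00, 0x00, 0x00, 0x01])  # arbitrary 4-byte ATM header
print(hex(atm_hec(header)))
```

A receiver recomputes this byte over each candidate header to hunt for cell boundaries, which is why the paper stresses how channel errors degrade the CD function.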

Patent
Josh N. Hogan1
28 Jan 1997
TL;DR: In this paper, a method of inhibiting copying of digital data is proposed, in which a sequence of symbols is added to original data, the sequence is then encoded by a special encoder that generates special channel bits that don't have a large accumulated digital sum variance.
Abstract: A method of inhibiting copying of digital data. In a first embodiment, a sequence of symbols is added to original data, the sequence of symbols selected to encode into channel bits having a large accumulated digital sum variance. The sequence of symbols is then encoded by a special encoder that generates special channel bits that don't have a large accumulated digital sum variance. The special channel bits may be unambiguously decoded, but the resulting decoded symbol sequence will likely be reencoded into channel bits having a large accumulated digital sum variance. In a second embodiment, a single symbol in the sequence of symbols is replaced after error correction symbols have been added. The sequence of symbols with one substituted symbol is encoded into channel bits that don't have a large accumulated digital sum variance. The resulting channel bits may be unambiguously decoded but the resulting symbol sequence will be error corrected and the error corrected symbol sequence will likely be reencoded into channel bits having a large accumulated digital sum variance. In a third embodiment, additional decryption or descrambling information or other data modification information is encoded into the sign of the digital sum variance of each blocked row of data. The decoded additional information is used to decrypt, descramble or otherwise modify the primary information. In an alternative third embodiment, decryption information is encoded into patterns of run lengths. In general, the embodiments and alternatives are independent and can be combined in complex ways to make reencoding difficult.
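The accumulated digital sum variance that the encoder manipulates can be sketched by tracking the running digital sum of the NRZI waveform implied by a channel-bit sequence; the bit patterns below are arbitrary illustrations, not real codewords:

```python
# Sketch of the 'digital sum variance' (DSV) notion from the abstract:
# treat the NRZI waveform levels as +1/-1, accumulate the running digital
# sum (RDS), and take DSV as the spread of that sum. The channel-bit
# sequences used below are arbitrary, not actual modulation codewords.
def digital_sum_variance(channel_bits):
    level, rds, sums = 1, 0, []
    for bit in channel_bits:
        if bit:            # NRZI: a channel-bit 1 toggles waveform polarity
            level = -level
        rds += level
        sums.append(rds)
    return max(sums) - min(sums)

balanced = [1, 0, 1, 0, 1, 0, 1, 0]    # frequent transitions keep DSV small
unbalanced = [1, 0, 0, 0, 0, 0, 0, 0]  # a long run at one level grows DSV
print(digital_sum_variance(balanced), digital_sum_variance(unbalanced))
```

A sequence whose DSV grows without bound puts low-frequency energy into the recorded signal, which is why an ordinary encoder avoids it and why the patent can use such sequences to frustrate re-encoding.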