
Showing papers on "Error detection and correction" published in 1996


Journal ArticleDOI
TL;DR: A new family of convolutional codes, nicknamed turbo-codes, is built from a particular concatenation of two recursive systematic codes linked together by nonuniform interleaving; its correcting performance appears to be close to the theoretical limit predicted by Shannon.
Abstract: This paper presents a new family of convolutional codes, nicknamed turbo-codes, built from a particular concatenation of two recursive systematic codes, linked together by nonuniform interleaving. Decoding calls on iterative processing in which each component decoder takes advantage of the work of the other at the previous step, with the aid of the original concept of extrinsic information. For sufficiently large interleaving sizes, the correcting performance of turbo-codes, investigated by simulation, appears to be close to the theoretical limit predicted by Shannon.
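The "theoretical limit predicted by Shannon" that turbo-codes approach can be computed for any code rate. A minimal sketch, using the standard unconstrained-input real AWGN capacity formula (not anything from the paper itself): the minimum Eb/N0 satisfies rate = 0.5·log2(1 + 2·rate·Eb/N0), solvable by bisection.

```python
import math

def shannon_limit_ebno_db(rate):
    """Minimum Eb/N0 (dB) for reliable transmission at a given code rate
    over the real AWGN channel with unconstrained (Gaussian) input.

    Solves rate = 0.5*log2(1 + 2*rate*ebno) for ebno by bisection."""
    lo, hi = 1e-9, 1e6
    for _ in range(200):
        mid = (lo + hi) / 2
        cap = 0.5 * math.log2(1 + 2 * rate * mid)
        if cap < rate:
            lo = mid
        else:
            hi = mid
    return 10 * math.log10((lo + hi) / 2)

# For rate 1/2 the well-known limit is 0 dB; lower rates need less Eb/N0.
print(round(shannon_limit_ebno_db(0.5), 2))
```

This is the benchmark against which the reported turbo-code simulation results are measured.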

3,003 citations




Journal ArticleDOI
TL;DR: The results indicate that CD3-OFDM allows one to achieve a very fast adaptation to the channel characteristics in a mobile environment and can be suitable for digital sound and television broadcasting services over selective radio channels, addressed to fixed and vehicular receivers.
Abstract: This paper describes a novel channel estimation scheme identified as coded decision directed demodulation (CD3) for coherent demodulation of orthogonal frequency division multiplex (OFDM) signals making use of any constellation format [e.g., quaternary phase shift keying (QPSK), 16-quadrature amplitude modulation (QAM), 64-QAM]. The structure of the CD3-OFDM demodulator is described, based on a new channel estimation loop exploiting the error correction capability of a forward error correction (FEC) decoder and frequency and time domain filtering to mitigate the effects of noise and residual errors. In contrast to conventional coherent OFDM demodulation schemes, CD3-OFDM does not require the transmission of a comb of pilot tones for channel estimation and equalization, therefore yielding a significant improvement in spectrum efficiency (typically between 5-15%). The performance of the system with QPSK modulation is analyzed by computer simulations, on additive white Gaussian noise (AWGN) and frequency selective channels, under static and mobile reception conditions. For convolutional coding rate 1/2, the results indicate that CD3-OFDM allows one to achieve a very fast adaptation to the channel characteristics in a mobile environment (maximum tolerable Doppler shift of about 80 Hz for an OFDM symbol duration of 1 ms, as with differential demodulation) and an Eb/N0 performance similar to coherent demodulation (e.g., Eb/N0 = 4.3 dB at bit-error rate (BER) = 2×10⁻⁴ on the AWGN channel). Therefore, CD3-OFDM can be suitable for digital sound and television broadcasting services over selective radio channels, addressed to fixed and vehicular receivers.

281 citations


Proceedings ArticleDOI
R.D.J. van Nee1
18 Nov 1996
TL;DR: This paper shows the possibility of using complementary codes both for reducing the peak-to-average power (PAP) ratio and for error correction, and demonstrates the viability of using these codes in multipath fading channels.
Abstract: Orthogonal frequency division multiplexing (OFDM) is a promising way to provide large data rates at reasonable complexity in wireless fading channels. However, a major disadvantage of OFDM is its large peak-to-average power ratio, which significantly decreases the efficiency of the transmitter power amplifier and hence forms a major obstacle to implementing OFDM in portable communication systems. This paper shows the possibility of using complementary codes for both decreasing the peak-to-average power (PAP) ratio and error correction. Set sizes and minimum distance properties of these codes are derived. It is shown that specific subsets of complementary codes have a minimum distance of up to half the code length, while their PAP ratio is only 3 dB. Simulation results demonstrate the viability of using these codes in multipath fading channels. It is currently planned to implement OFDM with complementary codes in the Wireless ATM Network Demonstrator (WAND), a joint European ACTS program.
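The 3 dB PAP bound cited above can be checked numerically. A minimal sketch, not the paper's construction: a known length-4 Golay complementary sequence is used as OFDM subcarrier amplitudes, and the PAPR of the oversampled time-domain symbol is measured; for any Golay sequence it is bounded by 3.01 dB.

```python
import cmath, math

def papr_db(seq, oversample=64):
    """Peak-to-average power ratio (dB) of an OFDM symbol whose
    subcarriers are modulated by `seq` (oversampled time domain).
    By Parseval, the average power equals sum(|c_k|^2)."""
    n = len(seq)
    peak = 0.0
    for t in range(n * oversample):
        s = sum(c * cmath.exp(2j * cmath.pi * k * t / (n * oversample))
                for k, c in enumerate(seq))
        peak = max(peak, abs(s) ** 2)
    avg = sum(abs(c) ** 2 for c in seq)
    return 10 * math.log10(peak / avg)

golay_a = [1, 1, 1, -1]   # one half of a length-4 Golay complementary pair
print(round(papr_db(golay_a), 2))  # bounded by 3.01 dB for Golay sequences
```

An arbitrary BPSK sequence such as [1, 1, 1, 1] has PAPR equal to the code length (6 dB here), which is why restricting to complementary codewords pays off.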

257 citations


Journal ArticleDOI
TL;DR: An introductory review of the linear ion trap is given in this paper, with particular regard to its use for quantum information processing, and the discussion aims to bring together ideas from information theory and experimental ion trapping, to provide a resource to workers unfamiliar with one or the other of these subjects.
Abstract: An introductory review of the linear ion trap is given, with particular regard to its use for quantum information processing. The discussion aims to bring together ideas from information theory and experimental ion trapping, to provide a resource to workers unfamiliar with one or the other of these subjects. It is shown that information theory provides valuable concepts for the experimental use of ion traps, especially error correction, and conversely the ion trap provides a valuable link between information theory and physics, with attendant physical insights. Example parameters are given for the case of calcium ions. Passive stabilisation will allow about 200 computing operations on 10 ions; with error correction this can be greatly extended.

248 citations


Journal ArticleDOI
TL;DR: This work considers the transmission of QCIF resolution (176×144 pixels) video signals over wireless channels at transmission rates of 64 kb/s and below and proposes an automatic repeat request (ARQ) error control technique to retransmit erroneous data-frames.
Abstract: We consider the transmission of QCIF resolution (176×144 pixels) video signals over wireless channels at transmission rates of 64 kb/s and below. The bursty nature of the errors on the wireless channel requires careful control of transmission performance without unduly increasing the overhead for error protection. A dual-rate source coder is presented that adaptively selects a coding rate according to the current channel conditions. An automatic repeat request (ARQ) error control technique is employed to retransmit erroneous data-frames. The source coding rate is selected based on the occupancy level of the ARQ transmission buffer. Error detection followed by retransmission results in less overhead than forward error correction for the same quality. Simulation results are provided for the statistics of the frame-error bursts of the proposed system over code division multiple access (CDMA) channels with average bit error rates of 10⁻³ to 10⁻⁴.
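The buffer-driven rate switch can be sketched as follows. This is a hedged illustration only: the watermark thresholds and the hysteresis band are invented for the example, not taken from the paper.

```python
# Hypothetical occupancy-driven rate switching (thresholds are invented).
HIGH_WATER = 0.75   # switch to the low (robust) source rate above this
LOW_WATER = 0.25    # switch back to the high-quality rate below this

def select_rate(buffer_occupancy, current_rate):
    """Pick the dual source-coder rate from ARQ buffer occupancy (0..1)."""
    if buffer_occupancy > HIGH_WATER:
        return "low"     # channel is bad: retransmissions fill the buffer
    if buffer_occupancy < LOW_WATER:
        return "high"    # channel is good: buffer drains, spend bits on quality
    return current_rate  # hysteresis band: keep the current rate

print(select_rate(0.9, "high"))  # → low
print(select_rate(0.1, "low"))   # → high
print(select_rate(0.5, "low"))   # → low
```

The hysteresis band prevents the coder from oscillating between rates when the occupancy hovers near a single threshold.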

176 citations


Patent
19 Dec 1996
TL;DR: In this article, a thermal asperity (TA) detection circuit detects a saturation condition in the sample values (25) of the analog read signal (62) which indicates the presence of a TA; a pole of an AC coupling capacitor (55) is elevated; timing recovery (28), gain control (51), and DC offset (44) loops in the read channel are held constant; and TA erasure pointers are generated corresponding to the duration of the TA transient.
Abstract: In a magnetic disk drive storage system comprising a sampled amplitude read channel and an on-the-fly error correction coding (ECC) system, a thermal asperity compensation technique wherein: a thermal asperity (TA) detection circuit (45) detects a saturation condition in the sample values (25) of the analog read signal (62) which indicates the presence of a TA; a pole of an AC coupling capacitor (55) is elevated; timing recovery (28), gain control (51), and DC offset (44) loops in the read channel are held constant; TA erasure pointers are generated corresponding to the duration of the TA transient; and an on-the-fly error detection and correction (EDAC) circuit processes the TA erasure pointers to correct errors in the detected digital data caused by the TA. Using TA erasure pointers to compensate for the effect of thermal asperities minimizes the cost, complexity, and redundancy of the ECC. Further, soft errors in the prior art method of adjusting the headroom of the read channel ADC are avoided. Still further, the EDAC circuitry can process the erasure pointers on-the-fly and still correct a sufficient number of soft errors without having to perform any significant number of reread operations. In this manner, the disk drive storage system operates virtually uninterrupted in reading data from the disk and transferring it to a host computer.

154 citations


Patent
Masami Kato1
27 Nov 1996
TL;DR: In this article, an error correcting code consisting of basic data and a BCH-based parity code is divided into smaller packets, and an error detecting code is appended to each of the thus-divided packets, so that transmission basic data is formed.
Abstract: An error correcting code consisting of basic data and a BCH-based parity code appended thereto is divided into smaller packets. An error detecting code is appended to each of the thus-divided packets, so that transmission basic data is formed. When the transmission basic data is received, the basic data and a BCH-based parity code are derived from the transmission basic data. Error correction is carried out with respect to the overall transmission basic data. An error detecting operation is carried out with respect to each packet using the error detecting code. If a packet is found to contain errors, a request for retransmission of that packet will be sent to the sending side.
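The packetization side of this scheme can be sketched briefly. A hedged illustration: the outer BCH encoding is abstracted away (the input is treated as an already-encoded codeword), CRC-32 stands in for the patent's unspecified per-packet error detecting code, and the 64-byte packet size is invented.

```python
import zlib

PKT = 64  # hypothetical packet payload size in bytes

def packetize(codeword: bytes):
    """Split an (already BCH-encoded) codeword into packets, each
    carrying a CRC-32 as its per-packet error-detecting code."""
    pkts = []
    for i in range(0, len(codeword), PKT):
        chunk = codeword[i:i + PKT]
        pkts.append(chunk + zlib.crc32(chunk).to_bytes(4, "big"))
    return pkts

def check(pkts):
    """Return indices of packets whose CRC fails (to be re-requested)."""
    bad = []
    for i, p in enumerate(pkts):
        chunk, crc = p[:-4], int.from_bytes(p[-4:], "big")
        if zlib.crc32(chunk) != crc:
            bad.append(i)
    return bad

pkts = packetize(bytes(range(200)))
pkts[1] = b"\x00" + pkts[1][1:]   # corrupt one packet in transit
print(check(pkts))                # → [1] : request retransmission of packet 1
```

The receiver first attempts correction over the whole codeword, then uses the per-packet check to request only the packets that remain bad.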

151 citations


Proceedings Article
01 Jan 1996
TL;DR: An automatic, context-sensitive, word-error correction system based on statistical language modeling (SLM) as applied to optical character recognition (OCR) postprocessing that can correct non-word errors as well as real-word errors and achieves a 60.2% error reduction rate for real OCR text.
Abstract: This paper describes an automatic, context-sensitive, word-error correction system based on statistical language modeling (SLM) as applied to optical character recognition (OCR) postprocessing. The system exploits information from multiple sources, including letter n-grams, character confusion probabilities, and word-bigram probabilities. Letter n-grams are used to index the words in the lexicon. Given a sentence to be corrected, the system decomposes each string in the sentence into letter n-grams and retrieves word candidates from the lexicon by comparing string n-grams with lexicon-entry n-grams. The retrieved candidates are ranked by the conditional probability of matches with the string, given character confusion probabilities. Finally, the word-bigram model and Viterbi algorithm are used to determine the best scoring word sequence for the sentence. The system can correct non-word errors as well as real-word errors and achieves a 60.2% error reduction rate for real OCR text. In addition, the system can learn the character confusion probabilities for a specific OCR environment and use them in self-calibration to achieve better performance.
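The n-gram candidate-retrieval stage can be sketched as follows. A hedged illustration, not the paper's implementation: Jaccard overlap on letter bigrams with an invented 0.5 threshold stands in for the paper's indexing and confusion-probability ranking, and the tiny lexicon is made up.

```python
def ngrams(word, n=2):
    """Letter n-grams used to index lexicon entries (bigrams here)."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def candidates(string, lexicon, n=2, min_overlap=0.5):
    """Retrieve correction candidates whose n-gram overlap with the
    OCR string exceeds a (hypothetical) threshold, ranked by overlap."""
    sg = ngrams(string, n)
    scored = []
    for w in lexicon:
        wg = ngrams(w, n)
        overlap = len(sg & wg) / max(len(sg | wg), 1)  # Jaccard similarity
        if overlap >= min_overlap:
            scored.append((overlap, w))
    return [w for _, w in sorted(scored, reverse=True)]

lex = ["correction", "connection", "collection", "corruption"]
print(candidates("corrcction", lex))  # OCR misread 'e' as 'c'
# → ['correction', 'corruption']
```

In the full system these candidates would then be re-ranked by character confusion probabilities and a word-bigram Viterbi pass.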

132 citations


Journal ArticleDOI
TL;DR: An automatic-repeat-request (ARQ) Go-Back-N (GBN) protocol with unreliable feedback and time-out mechanism is studied, using renewal theory.
Abstract: An automatic-repeat-request (ARQ) Go-Back-N (GBN) protocol with unreliable feedback and time-out mechanism is studied, using renewal theory. Transmissions on both the forward and the reverse channels are assumed to experience Markovian errors. The exact throughput of the protocol is evaluated, and simulation results that confirm the analysis are presented. A detailed comparison of the proposed method and the commonly used transfer function method reveals that the proposed approach is simple and potentially more powerful.
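As a baseline for the analysis described above, the textbook Go-Back-N throughput with i.i.d. frame errors and error-free feedback (the simple case the paper generalises to Markovian forward and reverse errors) can be verified by simulation. A minimal sketch; the parameters are illustrative.

```python
import random

def gbn_throughput_sim(p, window, frames=200_000, seed=1):
    """Monte Carlo throughput of Go-Back-N with i.i.d. frame-error
    rate p and an error-free feedback channel."""
    random.seed(seed)
    sent = accepted = 0
    while accepted < frames:
        if random.random() < p:   # head-of-window frame lost:
            sent += window        # the whole window is re-sent
        else:
            sent += 1             # frame accepted, window slides
            accepted += 1
    return accepted / sent

p, N = 0.05, 8
analytic = (1 - p) / (1 + (N - 1) * p)   # classic GBN throughput result
print(round(gbn_throughput_sim(p, N), 3), round(analytic, 3))
```

The renewal-theory approach of the paper replaces the i.i.d. assumption with two-state Markovian error processes on both channels while still yielding an exact throughput.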

128 citations


Proceedings ArticleDOI
03 Oct 1996
TL;DR: Results indicated that users adopt strategies of switching input modality and lexical expression when resolving errors, which they use in a linguistically contrastive manner to distinguish a repetition from the original failed input.
Abstract: Recent research indicates clear performance advantages and a strong user preference for interacting multimodally with computers. However, in the problematic area of error resolution, possible advantages of multimodal interface design remain poorly understood. In the present research, a semi-automatic simulation method with a novel error generation capability was used to collect within-subject data before and after recognition errors, and at different spiral depths in terms of the number of repetitions required to resolve an error. Results indicated that users adopt strategies of switching input modality and lexical expression when resolving errors, which they use in a linguistically contrastive manner to distinguish a repetition from the original failed input. Implications of these findings are discussed for the development of user-centered predictive models of linguistic adaptation during human-computer error resolution, and for the development of improved error handling in advanced recognition-based interfaces.

Patent
20 Dec 1996
TL;DR: In this paper, the error correction method stores data in the system's internal state to update probability tables used in developing alternative lists for substitution in misrecognized text, which can be used to correct errors in strings of words.
Abstract: A continuous speech recognition system has the ability to correct errors in strings of words. The error correction method stores data in the system's internal state to update probability tables used in developing alternative lists for substitution in misrecognized text.

Patent
19 Dec 1996
TL;DR: In this article, the error detection and correction device generates a syndrome table which includes a plurality of entries mapped to correctable or uncorrectable errors, in which a detected multiple-bit error in the memory data bits is mapped to an uncorrectable error entry and a detected error in the memory address bits is mapped to an uncorrectable error entry.
Abstract: A computer system includes a processor bus having processor data and processor check bits for performing error detection and correction of the processor data. A CPU is coupled to the processor bus. A memory sub-system is coupled to the processor bus and includes memory check bits, memory address bits, and memory data bits, and an error detection and correction device for detecting an error in the memory address bits using the memory check bits and for detecting an error in the memory data bits using the memory check bits. The CPU can include a processor from the Pentium® Pro family of processors. The error detection and correction device generates a syndrome table which includes a plurality of entries mapped to correctable or uncorrectable errors, in which a detected multiple-bit error in the memory data bits is mapped to an uncorrectable error entry and a detected error in the memory address bits is mapped to an uncorrectable error entry. An error detection device is also coupled to the processor bus for detecting an error in the address bits or data bits using the processor check bits.

Journal ArticleDOI
TL;DR: The numerical results for the analysis and the simulation show that the proposed scheme maintains high throughput even if channels become noisy and the number of receivers is large.
Abstract: In this letter, a type-II hybrid broadcast automatic-repeat-request (ARQ) scheme with adaptive forward error correction (AFEC) using BCH codes is proposed and analyzed. The basic idea in the proposed scheme is to increase the error correcting capability of BCH codes according to each channel state using incremental redundancy. The numerical results for the analysis and the simulation show that the proposed scheme maintains high throughput even if channels become noisy and the number of receivers is large.

Patent
20 Sep 1996
TL;DR: In this article, a method and apparatus for selecting an error correction code rate in a cellular radio system is presented, where three code rates, 2/3, 1/2, and 1/3 are used in the preferred embodiment in dependence on the amount of channel loss.
Abstract: A method and apparatus for selecting an error correction code rate in a cellular radio system. Each base station broadcasts a power product value, PP, which is its transmit power, PBT, multiplied by its desired receive power, PBR. A mobile unit determines its appropriate transmit power, PMT, by measuring its received power, PMR, and calculating PP/PMR (24). If, due to a large channel loss, the calculation result exceeds the maximum transmit power capability of the mobile unit (25), the mobile unit uses a lower code rate and adjusts the transmit power accordingly (27, 31). Three code rates (2/3, 1/2, and 1/3) are used in the preferred embodiment, in dependence on the amount of channel loss. The code rate selection can also depend on the quantity of data to be transmitted (30, 33, 34). A base station can detect the code rate by attempting to decode all code rates and choosing the best result.
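The rate-selection step can be sketched numerically. A hedged illustration: the maximum transmit power value and the rule scaling power by the code-rate ratio are invented for the example; the patent only states that a lower code rate is chosen and the power adjusted when PP/PMR exceeds the mobile's capability.

```python
RATES = [2/3, 1/2, 1/3]   # code rates from least to most robust
P_MAX = 2.0               # hypothetical max mobile transmit power (watts)

def select_rate_and_power(pp, pmr):
    """Choose code rate and transmit power: required power is PP/PMR;
    if it exceeds the mobile's maximum, step down to a lower (more
    redundant) rate, scaling power by the rate ratio (invented rule)."""
    required = pp / pmr
    for rate in RATES:
        power = required * rate / RATES[0]  # lower rate -> less power needed
        if power <= P_MAX:
            return rate, power
    return RATES[-1], P_MAX                 # worst case: lowest rate, clipped

print(select_rate_and_power(pp=1.0, pmr=1.0))   # small loss: highest rate
print(select_rate_and_power(pp=1.0, pmr=0.25))  # large loss: lowest rate
```

The base station needs no signalling of the chosen rate, since it can trial-decode at all three rates and keep the best result.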

Patent
12 Feb 1996
TL;DR: In this paper, a method of detecting and correcting errors in a memory subsystem of a computer is described, which includes beginning a write operation of N data bits to a memory, generating M check bits from the N data bits, writing the N data bits and the M check bits to the memory, reading the N data bits and M check bits from the memory, generating X syndrome bits, and using the X syndrome bits to detect and correct errors.
Abstract: A method of detecting and correcting errors in a memory subsystem of a computer is described. The method includes beginning a write operation of N data bits to a memory, generating M check bits from the N data bits, writing the N data bits and the M check bits to the memory, reading the N data bits and M check bits from the memory, generating X syndrome bits from the N data bits and the M check bits, and using the X syndrome bits to detect and correct errors. Preferably, the M check bits are generated also from A address bits corresponding to the location in memory to which the N data bits and M check bits are to be written.
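The check-bit/syndrome scheme described above can be illustrated at its smallest useful scale. This is a generic Hamming(7,4) sketch (N=4 data bits, M=3 check bits, X=3 syndrome bits), not the patent's particular code, and it omits the address-bit contribution to the check bits.

```python
# Hamming(7,4): bit positions 1..7; positions 1, 2, 4 hold check bits.

def encode(data4):
    """data4: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
    c = [0] * 8                  # index 0 unused
    c[3], c[5], c[6], c[7] = data4
    c[1] = c[3] ^ c[5] ^ c[7]    # parity over positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]    # ... bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]    # ... bit 2 set
    return c[1:]

def correct(word7):
    """Recompute the syndrome; a nonzero syndrome is the error position."""
    c = [0] + list(word7)
    s = (c[1] ^ c[3] ^ c[5] ^ c[7]) \
        | (c[2] ^ c[3] ^ c[6] ^ c[7]) << 1 \
        | (c[4] ^ c[5] ^ c[6] ^ c[7]) << 2
    if s:
        c[s] ^= 1                # flip the erroneous bit
    return c[1:], s

cw = encode([1, 0, 1, 1])
cw[4] ^= 1                       # inject a single-bit error at position 5
fixed, syndrome = correct(cw)
print(syndrome, fixed == encode([1, 0, 1, 1]))  # → 5 True
```

Folding address bits into the check-bit computation, as the patent prefers, makes a read from the wrong address surface as a nonzero syndrome too.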

Journal ArticleDOI
TL;DR: It is shown that average precision and recall is not affected by OCR errors across systems for several collections, and it is further shown that the OCR errors and garbage strings generated from the mistranslation of graphic objects increase the size of the index by a wide margin.
Abstract: We give a comprehensive report on our experiments with retrieval from OCR-generated text using systems based on standard models of retrieval. More specifically, we show that average precision and recall is not affected by OCR errors across systems for several collections. The collections used in these experiments include both actual OCR-generated text and standard information retrieval collections corrupted through the simulation of OCR errors. Both the actual and simulation experiments include full-text and abstract-length documents. We also demonstrate that the ranking and feedback methods associated with these models are generally not robust enough to deal with OCR errors. It is further shown that the OCR errors and garbage strings generated from the mistranslation of graphic objects increase the size of the index by a wide margin. We not only point out problems that can arise from applying OCR text within an information retrieval environment, we also suggest solutions to overcome some of these problems.

Journal ArticleDOI
TL;DR: An analytical approach for analyzing the mean packet delay and mean queue length at the transmitting terminal in wireless packet networks using the selective repeat (SR) automatic repeat request (ARQ) scheme to control the errors introduced by the nonstationary transmission channel is presented.
Abstract: This paper presents an analytical approach for analyzing the mean packet delay and mean queue length at the transmitting terminal in wireless packet networks using the selective repeat (SR) automatic repeat request (ARQ) scheme to control the errors introduced by the nonstationary transmission channel. Each transmitting terminal is modeled as a discrete time queue with an infinite buffer. The nonstationary transmission channel is modeled as a two-state Markov chain. Comparisons of numerical predictions and simulation results are presented to highlight the accuracy of the proposed analytical approach.
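The two-state Markov channel model used above is the classic Gilbert-Elliott construction, and is easy to simulate. A minimal sketch with illustrative transition and error probabilities (not the paper's parameters):

```python
import random

def gilbert_elliott(n, p_gb=0.01, p_bg=0.1, e_good=1e-4, e_bad=0.1, seed=0):
    """Simulate a two-state Markov (Gilbert-Elliott) channel: per-bit
    error flags with state-dependent error rates, so errors cluster in
    bursts rather than arriving independently."""
    random.seed(seed)
    state, errors = "good", []
    for _ in range(n):
        e = e_good if state == "good" else e_bad
        errors.append(random.random() < e)
        flip = p_gb if state == "good" else p_bg
        if random.random() < flip:
            state = "bad" if state == "good" else "good"
    return errors

errs = gilbert_elliott(100_000)
print(f"error rate: {sum(errs) / len(errs):.4f}")  # bursty, not i.i.d.
```

Feeding such a trace into a selective-repeat ARQ queue model is what makes the delay analysis nonstationary: the same average error rate produces very different queue behaviour when errors arrive in bursts.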

Patent
23 Jul 1996
TL;DR: In this paper, an error correction system was proposed to reduce redundancy of a two-dimensional product code array by eliminating an excess redundancy in prior product codes that provides detection of more corrupted rows than can be corrected.
Abstract: An error correction system (20) reduces redundancy of a two-dimensional product code array (130) by eliminating an excess redundancy in prior product codes that provides detection of more corrupted rows than can be corrected. The product code array is formed utilizing a row coding (154), a column coding (160), and an intermediate coding (155, 198), which may each be Reed-Solomon codes. The product code array has a total redundancy of n₁r₂ + r₁r₂ symbols (160, 198), where r₁ is the redundancy of the row coding, r₂ is the redundancy of the column coding, and raw data symbols fill a k₁-by-k₂ and an r₁-by-(k₂-r₂) symbol portion (132, 134) of the product code array. A decoder (38, 39) for the product code array is capable of detecting and correcting up to r₂ corrupted rows (205) of the product code array.

Patent
07 Mar 1996
TL;DR: In this article, an apparatus and method are provided for data communication over noisy media: the data is encoded to provide error correction capabilities, and the encoded signal is further modified by one or more linear mathematical operations in order to further randomize the data signal.
Abstract: A novel apparatus and method are provided for data communication over noisy media. The apparatus includes one or both of a transmitter circuit located at a transmitting location and a receiver circuit located at a receiving location. The data is encoded to provide error correction capabilities. The encoded signal is further modified by performing one or more linear mathematical operations in order to further randomize the data signal. The transmitter circuit thus generates a wideband spread spectrum signal based on the data which is to be transmitted, which spreads the signal and improves its immunity to noise. The coding used to spread the data signal may or may not be a function of the data itself. The present invention provides enhanced noise immunity without any resulting degradation in the operation and efficiency of the error correction coding. A synchronization circuit and method are also provided for quickly achieving fast, accurate synchronization utilizing parallel synchronization and sub-bit correlation. The error correction is used to correct hard and soft errors, and dynamically adjust the combination of hard and soft errors corrected in order to improve the overall data error correction.

Patent
29 Mar 1996
TL;DR: In this paper, data is processed in parallel paths to derive error detection information, such as a cyclic redundancy code (CRC), and to encode the data; at the destination site, the decoded data is subjected to the same error detection operation and the recalculated result is compared with the transmitted error detection information.
Abstract: In order to transport digital data across a reliable digital communication link from a transmit site to a destination site, the data is processed in parallel paths to derive error detection information, such as a cyclic redundancy code, and to encode the data. The outputs of the parallel paths are combined into a composite digital data sequence, which is then transmitted over the reliable digital communication link to the destination site. At the destination site, the encoded digital data component of the composite data sequence is decoded and then subjected to the same error detection operation carried out at the transmit site to derive error detection information associated with the decoded digital data. This recalculated error detection information is compared with the error detection information component contained in the composite digital data sequence. If there is a mismatch, the reliable digital communication link and the encoder at the transmit site and the decoder at the destination site are reset.
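The transmit/receive structure described above can be sketched end to end. A hedged illustration: CRC-32 is one concrete choice of error detection information, a fixed XOR mask stands in for the patent's unspecified encoder, and the "reset" action is reduced to a returned status string.

```python
import zlib

def make_composite(data: bytes) -> bytes:
    """Transmit side: derive a CRC-32 in parallel with (stand-in)
    encoding, then combine both into the composite data sequence."""
    crc = zlib.crc32(data).to_bytes(4, "big")
    encoded = bytes(b ^ 0x5A for b in data)   # placeholder encoder
    return crc + encoded

def receive(composite: bytes):
    """Destination side: decode, recompute the CRC, and compare it with
    the transmitted CRC; a mismatch signals that the link, encoder, and
    decoder must be reset."""
    crc_rx = int.from_bytes(composite[:4], "big")
    data = bytes(b ^ 0x5A for b in composite[4:])
    if zlib.crc32(data) != crc_rx:
        return None, "RESET LINK"
    return data, "OK"

msg = b"composite digital data sequence"
print(receive(make_composite(msg))[1])   # → OK
corrupted = bytearray(make_composite(msg))
corrupted[10] ^= 0xFF                    # corrupt the encoded portion
print(receive(bytes(corrupted))[1])      # → RESET LINK
```

Because the link is assumed reliable, a CRC mismatch is interpreted not as a transmission error to correct but as loss of encoder/decoder synchronization, hence the reset.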

Proceedings ArticleDOI
25 Jun 1996
TL;DR: The results of injecting physical pin-level faults show that these tests can prevent about 40% of the fail-silent model violations that escape the simple hardware-based error detection techniques.
Abstract: An important research topic deals with the investigation of whether a non-duplicated computer can be made fail-silent, since that behaviour is a priori assumed in many algorithms. However, previous research has shown that in systems using a simple behaviour-based error detection mechanism invisible to the programmer (e.g. memory protection), the percentage of fail-silent violations could be higher than 10%. Since the study of these errors has shown that they were mostly caused by pure data errors, we evaluate the effectiveness of software techniques capable of checking the semantics of the data, such as assertions, to detect these remaining errors. The results of injecting physical pin-level faults show that these tests can prevent about 40% of the fail-silent model violations that escape the simple hardware-based error detection techniques. In order to decouple the intrinsic limitations of the tests used from other factors that might affect its error detection capabilities, we evaluated a special class of software checks known for its high theoretical coverage: algorithm based fault tolerance (ABFT). The analysis of the remaining errors showed that most of them remained undetected due to short range control flow errors. When very simple software-based control flow checking was associated to the semantic tests, the target system, without any dedicated error detection hardware, behaved according to the fail-silent model for about 98% of all the faults injected.
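Algorithm-based fault tolerance, the class of checks evaluated above, is easiest to see on its textbook example: checksum-augmented matrix multiplication. A minimal generic sketch (not the authors' target workload):

```python
def matmul_with_abft(a, b):
    """Algorithm-based fault tolerance: append a column-checksum row to
    A and a row-checksum column to B; the product then carries checksums
    that expose data errors introduced during the computation."""
    n = len(a)
    ac = a + [[sum(col) for col in zip(*a)]]   # column-checksum row
    bc = [row + [sum(row)] for row in b]       # row-checksum column
    return [[sum(ac[i][k] * bc[k][j] for k in range(n))
             for j in range(n + 1)] for i in range(n + 1)]

def abft_check(c):
    """Verify both checksum relations of the full checksum matrix."""
    n = len(c) - 1
    rows_ok = all(abs(sum(c[i][:n]) - c[i][n]) < 1e-9 for i in range(n))
    cols_ok = all(abs(sum(c[i][j] for i in range(n)) - c[n][j]) < 1e-9
                  for j in range(n))
    return rows_ok and cols_ok

c = matmul_with_abft([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(abft_check(c))   # → True
c[0][0] += 1           # inject a data error into the result
print(abft_check(c))   # → False
```

The checks are purely semantic (they inspect the data, not the control flow), which is exactly why the paper found that short-range control-flow errors still slipped through until simple control-flow checking was added.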

Proceedings ArticleDOI
16 Sep 1996
TL;DR: An algorithm for increasing error robustness in an H.263 video coding system in the absence of a feedback channel is proposed and demonstrates a significant improvement over non-adaptive intra update strategies.
Abstract: An algorithm for increasing error robustness in an H.263 video coding system in the absence of a feedback channel is proposed. For each 16×16 macroblock, the encoder accumulates a metric representing the vulnerability to channel errors. As each new frame is encoded, the accumulated metric for each block is examined, and those blocks deemed to have an unacceptably high metric are sent using intra as opposed to inter coding. This approach is fully compatible with H.263 and involves a negligible increase in encoder complexity and no change in the decoder complexity. Simulations performed using an H.263 bitstream corrupted by channel errors demonstrate a significant improvement over non-adaptive intra update strategies.
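The accumulate-and-refresh logic can be sketched as follows. A hedged illustration only: the metric update rule, the threshold, and the `motion_cost` stand-in are invented for the example; the paper defines its own vulnerability metric.

```python
# Hypothetical per-macroblock error-vulnerability bookkeeping:
# inter-coded blocks accumulate exposure, intra coding resets it.
THRESHOLD = 5.0   # illustrative; the paper's metric and threshold differ

def encode_frame(metrics, motion_cost):
    """Decide intra vs inter per macroblock. `metrics` accumulates
    vulnerability; `motion_cost[i]` stands in for how much block i
    depends on (possibly corrupted) prior frames."""
    decisions = []
    for i, m in enumerate(metrics):
        metrics[i] = m + motion_cost[i]
        if metrics[i] > THRESHOLD:
            decisions.append("intra")   # refresh: stop error propagation
            metrics[i] = 0.0            # intra coding resets vulnerability
        else:
            decisions.append("inter")   # cheap, but inherits past errors
    return decisions

metrics = [0.0] * 4
for frame in range(3):
    print(encode_frame(metrics, motion_cost=[1.0, 2.0, 3.0, 0.5]))
```

Blocks with high motion activity cross the threshold sooner and are refreshed more often, which is the adaptive behaviour that beats fixed-period intra update.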

Proceedings ArticleDOI
14 Nov 1996
TL;DR: A joint source/channel allocation scheme is described for transmitting images lossily over block erasure channels; subband-level and bitplane-level optimization procedures give rise to an embedded channel coding strategy, and optimal source/channel coding tradeoffs are derived for the block erasure channel to reduce image transmission latency.
Abstract: We describe a joint source/channel allocation scheme for transmitting images lossily over block erasure channels such as the Internet. The goal is to reduce image transmission latency. Our subband-level and bitplane-level optimization procedures give rise to an embedded channel coding strategy. Source and channel coding bits are allocated in order to minimize an expected distortion measure. More perceptually important low frequency channels of images are shielded heavily using channel codes; higher frequencies are shielded lightly. The result is a more efficient use of channel codes that can reduce channel coding overhead. This reduction is most pronounced on bursty channels for which the uniform application of channel codes is expensive. We derive optimal source/channel coding tradeoffs for our block erasure channel. © 1996 SPIE, The International Society for Optical Engineering.

Journal ArticleDOI
TL;DR: In this article, a non-iterative algorithm for controlling the error in non-linear finite element analysis which is caused by the use of finite load steps is presented. But, unlike most of the other approaches, the proposed algorithm is not iterative and treats the governing load-deflection relations as a system of ordinary differential equations.
Abstract: This paper presents an algorithm for controlling the error in non-linear finite element analysis which is caused by the use of finite load steps. In contrast to most recent schemes, the proposed technique is non-iterative and treats the governing load-deflection relations as a system of ordinary differential equations. This permits the governing equations to be integrated adaptively where the step size is controlled by monitoring the local truncation error. The latter is measured by computing the difference between two estimates of the displacement increments for each load step, with the initial estimate being found from the first-order Euler scheme and the improved estimate being found from the second-order modified Euler scheme. If the local truncation error exceeds a specified tolerance, then the load step is abandoned and the integration is repeated with a smaller load step whose size is found by local extrapolation. Local extrapolation is also used to predict the size of the next load step following a successful update. In order to control not only the local load path error, but also the global load path error, the proposed scheme incorporates a correction for the unbalanced forces. Overall, the cost of the automatic error control is modest since it requires only one additional equation solution for each successful load step. Because the solution scheme is non-iterative and founded on successful techniques for integrating systems of ordinary differential equations, it is particularly robust. To illustrate the ability of the scheme to constrain the load path error to lie near a desired tolerance, detailed results are presented for a variety of elastoplastic boundary value problems.
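The Euler / modified-Euler step-control idea above can be sketched on a scalar ODE. A hedged illustration: this is generic adaptive integration of dy/dt = f(t, y) with invented tolerance and safety factors, not the paper's finite-element formulation (which also adds an unbalanced-force correction).

```python
def adaptive_integrate(f, y0, t0, t1, tol=1e-4, h0=0.1, safety=0.9):
    """Adaptive stepping in the spirit of the paper: take a first-order
    Euler step and a second-order modified-Euler step, use their
    difference as the local truncation error, and grow/shrink the step
    by local extrapolation. Rejected steps are retried with smaller h."""
    t, y, h = t0, y0, h0
    while t < t1:
        h = min(h, t1 - t)
        k1 = f(t, y)
        euler = y + h * k1                  # first-order estimate
        k2 = f(t + h, euler)
        modified = y + h * (k1 + k2) / 2    # second-order estimate
        err = abs(modified - euler)         # local truncation error
        if err <= tol:
            t, y = t + h, modified          # accept the step
        # local extrapolation for the next (or retried) step size
        h *= safety * min(2.0, (tol / max(err, 1e-15)) ** 0.5)
    return y

# Test problem dy/dt = y, y(0) = 1: the exact solution at t = 1 is e.
approx = adaptive_integrate(lambda t, y: y, 1.0, 0.0, 1.0)
print(abs(approx - 2.718281828459045) < 1e-2)  # → True
```

As in the paper, the cost of the error control is modest: one extra function evaluation per step buys both the improved estimate and the error measure.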

Journal ArticleDOI
TL;DR: It is shown that a simplified version of the error correction code recently suggested by Shor exhibits manifestation of the quantum Zeno effect and protection of an unknown quantum state is achieved.
Abstract: It is shown that a simplified version of the error correction code recently suggested by Shor [Phys. Rev. A 52, R2493 (1995)] exhibits manifestation of the quantum Zeno effect. Thus, under certain conditions, protection of an unknown quantum state is achieved. Error prevention procedures based on four-particle and two-particle encoding are proposed and it is argued that they have feasible practical implementations.

Patent
Odenwalder Joseph P1
07 Jun 1996
TL;DR: In this article, a set of individually gain adjusted subscriber channels are formed via the use of orthogonal subchannel codes having a small number of PN spreading chips per orthogonal waveform period.
Abstract: A novel and improved method and apparatus for high rate CDMA wireless communication is described. In accordance with one embodiment of the invention, a set of individually gain adjusted subscriber channels are formed via the use of a set of orthogonal subchannel codes having a small number of PN spreading chips per orthogonal waveform period. Data to be transmitted via one of the transmit channels is low code rate error correction encoded and sequence repeated before being modulated with one of the subchannel codes, gain adjusted, and summed with data modulated using the other subchannel codes. The resulting summed data is modulated using a user long code and a pseudorandom spreading code (PN code) and upconverted for transmission. The use of the short orthogonal codes provides interference suppression while still allowing extensive error correction coding and repetition for time diversity to overcome the Rayleigh fading commonly experienced in terrestrial wireless systems. In the exemplary embodiment of the invention provided, the set of sub-channel codes are comprised of four Walsh codes, each orthogonal to the remaining set and four chips in duration. The use of four sub-channels is preferred as it allows shorter orthogonal codes to be used; however, the use of a greater number of channels and therefore longer codes is consistent with the invention. In a preferred exemplary embodiment of the invention, pilot data is transmitted via a first one of the transmit channels and power control data transmitted via a second transmit channel. The remaining two transmit channels are used for transmitting non-specified digital data including user data or signaling data, or both. In an exemplary embodiment, one of the two non-specified transmit channels is configured for BPSK modulation and transmission over the quadrature channel.

Patent
21 May 1996
TL;DR: In this article, a sensing section reads the dot codes by optically scanning a recording medium on which multimedia information has been recorded in the form of optically readable dot codes, and the thus obtained dot codes are processed and restored to the original multimedia information by a scanning conversion section, a data string adjusting section, an error correction section, reproducing section and a controller.
Abstract: A sensing section reads dot codes by optically scanning a recording medium on which multimedia information has been recorded in the form of optically readable dot codes. The dot codes thus obtained are processed and restored to the original multimedia information by a scanning conversion section, a data string adjusting section, an error correction section, a reproducing section, and a controller. An output unit reproduces each piece of information and outputs it. On the basis of the dot codes thus read, the scanning conversion section and controller sense information reproduction parameters, such as the dot size, which are then stored in a parameter memory. According to the parameters stored in the parameter memory, the dot codes are then subjected to a reproducing process.
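The two-stage flow in the abstract (sense reproduction parameters from the raw scan, cache them in a parameter memory, then decode using them) can be sketched as follows. Every function name and the dot-size heuristic here are illustrative assumptions, not the patent's actual method.

```python
# Sketch: parameters (e.g. dot size in pixels) are estimated from the scanned
# line, stored in a parameter memory, and reused when decoding the dot codes.

def sense_parameters(scan: list) -> dict:
    """Estimate reproduction parameters from the raw scan.
    Hypothetical heuristic: dot size = shortest run of dark pixels."""
    runs, count = [], 0
    for px in scan:
        if px:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return {"dot_size": min(runs)}

def decode(scan: list, params: dict) -> list:
    """Downsample the scan to one bit per dot using the sensed dot size."""
    step = params["dot_size"]
    return [scan[i] for i in range(0, len(scan), step)]

parameter_memory = {}                     # parameters persist across reads
scan = [1, 1, 0, 0, 1, 1, 1, 1, 0, 0]     # two pixels per dot: bits 1,0,1,1,0
parameter_memory.update(sense_parameters(scan))
print(decode(scan, parameter_memory))     # [1, 0, 1, 1, 0]
```

Caching the sensed parameters is what allows later reads of the same medium to skip re-estimation, which is the role the parameter memory plays in the patent.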

Patent
01 Apr 1996
TL;DR: In this article, the authors recover data utilizing information from previously received data packets to reconstruct a data packet with an ever increasing probability of reconstructing the actually transmitted data packet, with each new reception.
Abstract: The present invention recovers data by utilizing information from previously received data packets to reconstruct a data packet with an ever-increasing probability of matching the actually transmitted packet. With each new reception, the accumulated information brings the receiver closer to obtaining the actually transmitted data packet. This use of information that might otherwise be discarded permits highly efficient and effective bit-error correction, which is particularly useful in hostile communications environments for transceivers with limited data-processing and/or memory resources.
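A toy stand-in for the idea of accumulating information across retransmissions is a bit-wise majority vote over all copies received so far: the combined estimate can be correct even when no individual copy is. This is an illustrative sketch, not the patent's actual reconstruction algorithm.

```python
# Combine repeated noisy receptions of the same packet: each bit position
# accumulates votes across receptions, so the estimate improves with every
# retransmission instead of discarding earlier (corrupt) copies.

def combine(receptions: list) -> list:
    """Bit-wise majority vote over all receptions seen so far."""
    n = len(receptions)
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*receptions)]

sent = [1, 0, 1, 1, 0, 0, 1, 0]
# Three noisy receptions, each with a different single-bit error.
rx = [
    [1, 0, 1, 0, 0, 0, 1, 0],
    [1, 1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 1, 1, 0],
]
print(combine(rx) == sent)  # True: no single copy is correct, the combination is
```

The memory cost is one counter per bit position, which fits the abstract's emphasis on transceivers with limited processing and memory resources.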

Proceedings ArticleDOI
03 Oct 1996
TL;DR: It is demonstrated that some improvements in word accuracy result from augmenting the channel model with an account of word fertility in the channel, and that a modern continuous speech recognizer can be used in "black-box" fashion for robustly recognizing speech for which the recognizer was not originally trained.
Abstract: The authors have implemented a post-processor called SPEECHPP to correct word-level errors committed by an arbitrary speech recognizer. Applying a noisy-channel model, SPEECHPP uses a Viterbi beam search that employs language and channel models. Previous work demonstrated that a simple word-for-word channel model was sufficient to yield substantial increases in word accuracy. The paper demonstrates that some improvements in word accuracy result from augmenting the channel model with an account of word fertility in the channel. The work further demonstrates that a modern continuous speech recognizer can be used in "black-box" fashion for robustly recognizing speech for which the recognizer was not originally trained. The work also demonstrates that in the case where the recognizer can be tuned to the new task, environment, or speaker, the post-processor can also contribute to performance improvements.
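The word-for-word channel model mentioned in the abstract reduces, for a single word, to choosing the vocabulary word w maximizing P(w) * P(observed | w). The sketch below shows that scoring rule on a toy vocabulary; the probability tables are invented for illustration and are not from the paper, which additionally runs a Viterbi beam search over whole word sequences.

```python
import math

# Toy language model P(w) and word-for-word channel model P(observed | w);
# values are illustrative, not trained.
lm = {"wreck": 0.1, "recognize": 0.6, "nice": 0.2, "beach": 0.1}
channel = {
    ("wreck", "wreck"): 0.7,      # recognizer output matched the intended word
    ("wreck", "recognize"): 0.3,  # "recognize" is often misheard as "wreck"
}

def correct(observed: str, vocab: dict) -> str:
    """Return the vocabulary word maximizing log P(w) + log P(observed | w)."""
    def score(w):
        p = channel.get((observed, w), 0.01)  # small floor for unseen pairs
        return math.log(vocab[w]) + math.log(p)
    return max(vocab, key=score)

print(correct("wreck", lm))  # "recognize": the LM prior overrides the hypothesis
```

In the full system, the same per-word scores feed a beam search so that context from the language model can rescue words the recognizer got wrong, which is why the post-processor works on recognizers it was never trained with.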