
Showing papers on "Error detection and correction published in 1990"


Journal ArticleDOI
Andrew J. Viterbi
TL;DR: With this approach and coordinated processing at a common receiver, it is shown that the aggregate data rate of all simultaneous users can approach the Shannon capacity of the Gaussian noise channel.
Abstract: A spread-spectrum multiple-access (SSMA) communication system is treated for which both spreading and error control are provided by binary PSK modulation with orthogonal convolutional codes. Performance of spread-spectrum multiple access by a large number of users employing this type of coded modulation is determined in the presence of background Gaussian noise. With this approach and coordinated processing at a common receiver, it is shown that the aggregate data rate of all simultaneous users can approach the Shannon capacity of the Gaussian noise channel.

784 citations


Journal ArticleDOI
TL;DR: The authors obtain the optimum transmission ranges to maximize throughput for a direct-sequence spread-spectrum multihop packet radio network and model the network self-interference as a random variable which is equal to the sum of the interference power of all other terminals plus background noise.
Abstract: The authors obtain the optimum transmission ranges to maximize throughput for a direct-sequence spread-spectrum multihop packet radio network. In the analysis, they model the network self-interference as a random variable which is equal to the sum of the interference power of all other terminals plus background noise. The model is applicable to other spread-spectrum schemes where the interference of one user appears as a noise source with constant power spectral density to the other users. The network terminals are modeled as a random Poisson field of interference power emitters. The statistics of the interference power at a receiving terminal are obtained and shown to be the stable distributions of a parameter that is dependent on the propagation power loss law. The optimum transmission range in such a network is of the form CK^alpha, where C is a constant, K is a function of the processing gain, the background noise power spectral density, and the degree of error-correction coding used, and alpha is related to the power loss law. The results obtained can be used in heuristics to determine optimum routing strategies in multihop networks.

489 citations


Journal ArticleDOI
TL;DR: The binary switching algorithm is introduced, based on the objective of minimizing a useful upper bound on the average system distortion, which yields a significant reduction in average distortion, and converges in reasonable running times.
Abstract: A pseudo-Gray code is an assignment of n-bit binary indexes to 2^n points in a Euclidean space so that the Hamming distance between two points corresponds closely to the Euclidean distance. Pseudo-Gray coding provides a redundancy-free error protection scheme for vector quantization (VQ) of analog signals when the binary indexes are used as channel symbols on a discrete memoryless channel and the points are signal codevectors. Binary indexes are assigned to codevectors in a way that reduces the average quantization distortion introduced in the reproduced source vectors when a transmitted index is corrupted by channel noise. A globally optimal solution to this problem is generally intractable due to an inherently large computational complexity. A locally optimal solution, the binary switching algorithm, is introduced, based on the objective of minimizing a useful upper bound on the average system distortion. The algorithm yields a significant reduction in average distortion, and converges in reasonable running times. The use of pseudo-Gray coding is motivated by the increasing need for low-bit-rate VQ-based encoding systems that operate on noisy channels, such as in mobile radio speech communications.
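The core of the binary switching idea can be sketched in a few lines. The toy below is an illustrative assumption throughout (the scalar codebook and function names are invented here, and the paper's algorithm additionally orders codevectors by their distortion contribution before switching): it greedily swaps pairs of binary indexes whenever the swap lowers the expected distortion caused by single-bit index errors.

```python
import itertools

# Toy scalar "codebook": 8 codevectors addressed by 3-bit indexes.
codebook = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
N_BITS = 3

def distortion(assign):
    """Average squared error between the codevector sent and the one
    reproduced when exactly one index bit is flipped by the channel."""
    total = 0.0
    for i in range(len(assign)):
        for b in range(N_BITS):
            j = i ^ (1 << b)  # index received after a single-bit error
            total += (codebook[assign[i]] - codebook[assign[j]]) ** 2
    return total / (len(assign) * N_BITS)

def binary_switching(assign):
    """Greedily swap index pairs until no swap lowers the distortion."""
    improved = True
    while improved:
        improved = False
        best = distortion(assign)
        for a, b in itertools.combinations(range(len(assign)), 2):
            assign[a], assign[b] = assign[b], assign[a]
            d = distortion(assign)
            if d < best:
                best, improved = d, True        # keep the beneficial swap
            else:
                assign[a], assign[b] = assign[b], assign[a]  # undo it
    return assign

bad = [0, 7, 1, 6, 2, 5, 3, 4]  # a deliberately poor index assignment
good = binary_switching(bad[:])
```

A locally optimal assignment is reached after a few passes; on real VQ codebooks this pairwise search is what makes the method tractable where exhaustive index permutation is not.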

411 citations


Journal ArticleDOI
TL;DR: Failures, faults, and errors in digital systems are examined, and measures of dependability, which dictate and evaluate fault-tolerance strategies for different classes of applications, are defined.
Abstract: The basic concepts of fault-tolerant computing are reviewed, focusing on hardware. Failures, faults, and errors in digital systems are examined, and measures of dependability, which dictate and evaluate fault-tolerance strategies for different classes of applications, are defined. The elements of fault-tolerance strategies are identified, and various strategies are reviewed. They are: error detection, masking, and correction; error detection and correction codes; self-checking logic; module replication for error detection and masking; protocol and timing checks; fault containment; reconfiguration and repair; and system recovery.

396 citations


Patent
Dan S. Bloomberg, Robert F. Tow
31 Jul 1990
TL;DR: Weighted and unweighted convolution filtering processes are provided for decoding bitmap image space representations of self-clocking glyph shape codes and for tracking the number and locations of the ambiguities or "errors" that are encountered during the decoding as mentioned in this paper.
Abstract: Weighted and unweighted convolution filtering processes are provided for decoding bitmap image space representations of self-clocking glyph shape codes and for tracking the number and locations of the ambiguities or "errors" that are encountered during the decoding. This error detection may be linked to or compared against the error statistics from an alternative decoding process, such as the binary image processing techniques that are described herein, to increase the reliability of the decoding that is obtained.

286 citations


Journal ArticleDOI
TL;DR: An analysis is presented of the selective-repeat type II hybrid ARQ (automatic-repeat-request) scheme, using convolutional coding and exploiting code combining to show a significant throughput is achievable, even at very high channel error rates.
Abstract: An analysis is presented of the selective-repeat type II hybrid ARQ (automatic-repeat-request) scheme, using convolutional coding and exploiting code combining. With code combining, at successive decoding attempts for a data packet, the decoder for error correction operates on a combination of all received sequences for that packet rather than only on the two most recent received ones as in the conventional type II hybrid ARQ scheme. It is shown by means of analysis and computer simulations that with code combining, a significant throughput is achievable, even at very high channel error rates.

239 citations


Journal ArticleDOI
TL;DR: The throughput increases as the starting coding rate increases, and as the channel degrades, it tends to merge with the throughput of rate 1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
Abstract: A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from the best rate-1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of J. Hagenauer (1988) with the code-combining ARQ strategy of D. Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases, and as the channel degrades, it tends to merge with the throughput of rate 1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.

221 citations


Patent
31 Oct 1990
TL;DR: In this paper, a high bandwidth switch is used to connect a large number of processing elements (e.g. 4096) by means of a Banyan network, which includes one or more general purpose microprocessors, a local memory and a DMA controller that sends and receives messages through the switch without requiring processor intervention.
Abstract: A large number of processing elements (e.g. 4096) are interconnected by means of a high bandwidth switch. Each processing element includes one or more general purpose microprocessors, a local memory and a DMA controller that sends and receives messages through the switch without requiring processor intervention. The switch that connects the processing elements is hierarchical and comprises a network of clusters. Sixty-four processing elements can be combined to form a cluster and sixty-four clusters can be linked by way of a Banyan network. Messages are routed through the switch in the form of packets which include a command field, a sequence number, a destination address, a source address, a data field (which can include subcommands), and an error correction code. Error correction is performed at the processing elements. If a packet is routed to a non-present or non-functional processor, the switch reverses the source and destination field and returns the packet to the sender with an error flag. If the packet is misrouted to a functional processing element, the processing element corrects the error and retransmits the packet through the switch over a different path. In one embodiment, each processing element can be provided with a hardware accelerator for database functions. In this embodiment, the multiprocessor of the present invention can be employed as a coprocessor to a 370 host and used to perform database functions.

214 citations


Journal ArticleDOI
TL;DR: The authors derive reliability functions and mean time to failure of four different memory systems subject to transient errors at exponentially distributed arrival times and derive easy-to-use expressions for MTTF of memories.
Abstract: The authors analyze the problem of transient-error recovery in fault-tolerant memory systems, using a scrubbing technique. This technique is based on single-error-correction and double-error-detection (SEC-DED) codes. When a single error is detected in a memory word, the error is corrected and the word is rewritten in its original location. Two models are discussed: (1) exponentially distributed scrubbing, where a memory word is assumed to be checked in an exponentially distributed time period, and (2) deterministic scrubbing, where a memory word is checked periodically. Reliability and mean-time-to-failure (MTTF) equations are derived and estimated. The results of the scrubbing techniques are compared with those of memory systems without redundancies and with only SEC-DED codes. A major contribution of the analysis is easy-to-use expressions for MTTF of memories. The authors derive reliability functions and mean time to failure of four different memory systems subject to transient errors at exponentially distributed arrival times.
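For the deterministic-scrubbing case, a back-of-the-envelope MTTF estimate can be written directly. The sketch below is an illustrative simplification, not the paper's derivation: it assumes independent words, Poisson error arrivals, and failure exactly when two or more errors land in one word within a single scrub interval.

```python
import math

def mttf_scrubbed(n_words: int, rate: float, scrub_period: float) -> float:
    """Approximate MTTF of a SEC-DED memory with deterministic scrubbing.

    Illustrative model: transient single-bit errors hit each word as a
    Poisson process with `rate` errors/second; every `scrub_period`
    seconds each word is read and a single error is corrected; the
    memory fails if any word collects >= 2 errors within one interval.
    """
    m = rate * scrub_period  # mean errors per word per scrub interval
    # P(>= 2 errors in one word during an interval) = 1 - e^-m (1 + m)
    p2 = -math.expm1(-m) - m * math.exp(-m)
    # P(some word fails during an interval), words assumed independent
    p_fail = -math.expm1(n_words * math.log1p(-p2))
    # Number of intervals to failure is geometric with mean 1/p_fail
    return scrub_period / p_fail

print(mttf_scrubbed(1 << 20, 1e-9, 3600.0))
```

Since the per-interval failure probability grows roughly as (rate x period)^2, scrubbing twice as often roughly doubles the estimated MTTF, which matches the qualitative conclusion that scrubbing stretches memory lifetime well beyond plain SEC-DED.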

202 citations


Journal ArticleDOI
TL;DR: The unequal error protection capabilities of convolutional codes belonging to the family of rate-compatible punctured convolutionAL codes (RCPC codes) are studied and a number of examples are provided to show that it is possible to accommodate widely different error protection levels within short information blocks.
Abstract: The unequal error protection capabilities of convolutional codes belonging to the family of rate-compatible punctured convolutional codes (RCPC codes) are studied. The performance of these codes is analyzed and simulated for fading Rice and Rayleigh channels with differentially coherent four-phase modulation (4-DPSK). To mitigate the effect of fading, interleavers are designed for these unequal error protection codes, with the interleaving performed over one or two blocks of 256 channel bits. These codes are decoded by means of the Viterbi algorithm using both soft symbol decisions and channel state information. For reference, the performance of these codes on a Gaussian channel with coherent binary phase-shift keying (2-CPSK) is presented. A number of examples are provided to show that it is possible to accommodate widely different error protection levels within short information blocks. Unequal error protection codes for a subband speech coder are studied in detail. A detailed study of the effect of the code and channel parameters such as the encoder memory, the code rate, interleaver depth, fading bandwidth, and the contrasting performance of hard and soft decisions on the received symbols is provided.

200 citations


Patent
09 Aug 1990
TL;DR: In this article, a data transmission system and method for processing speech, image and other data disclosed which embodies parallel and serial-generalized Viterbi decoding algorithms (GVA) that produce a rank ordered list of the L best candidates after a trellis search.
Abstract: A data transmission system and method for processing speech, image and other data is disclosed which embodies parallel- and serial-generalized Viterbi decoding algorithms (GVA) that produce a rank ordered list of the L best candidates after a trellis search. Error detection is performed by identifying unreliable sequences through comparison of the likelihood metrics of the two or more most likely sequences. Unreliable sequences are re-estimated using inter-frame redundancy or retransmission.

Journal ArticleDOI
TL;DR: A functional-level concurrent error-detection scheme is presented for such VLSI signal processing architectures as those proposed for the FFT and QR factorization, and it is shown that the error coverage is high with large word sizes.
Abstract: The increasing demands for high-performance signal processing along with the availability of inexpensive high-performance processors have resulted in numerous proposals for special-purpose array processors for signal processing applications. A functional-level concurrent error-detection scheme is presented for such VLSI signal processing architectures as those proposed for the FFT and QR factorization. Some basic properties involved in such computations are used to check the correctness of the computed output values. This fault-detection scheme is shown to be applicable to a class of problems rather than a particular problem, unlike the earlier algorithm-based error-detection techniques. The effects of roundoff/truncation errors due to finite-precision arithmetic are evaluated. It is shown that the error coverage is high with large word sizes.

Journal ArticleDOI
TL;DR: It is proven that linearity is a necessary and sufficient condition for codes used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU decomposition and for every linear code defined over a finite field, there exists a corresponding linear real-number code with similar error detecting capabilities.
Abstract: A generalization of existing real-number codes is proposed. It is proven that linearity is a necessary and sufficient condition for codes used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU decomposition. It is also proven that for every linear code defined over a finite field, there exists a corresponding linear real-number code with similar error detecting capabilities. Encoding schemes are given for some of the example codes which fall under the general set of real-number codes. With the help of experiments, a rule is derived for the selection of a particular code for a given application. The performance overhead of fault tolerance schemes using the generalized encoding schemes is shown to be very low, and this is substantiated through simulation experiments.
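The linearity result covers, as a special case, the familiar checksum codes of algorithm-based fault tolerance. A minimal sketch of that checksum idea for matrix multiplication follows (function names are illustrative; the paper's generalized real-number codes go beyond simple sums): append a column-sum row to A and a row-sum column to B, and the product carries checksums that expose a corrupted entry.

```python
def matmul(A, B):
    """Plain list-of-lists matrix product."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def with_checksums(A, B):
    """Full-checksum product: Af has an extra column-sum row, Bf an
    extra row-sum column, so Cf = Af @ Bf carries checksum row/col."""
    Af = A + [[sum(col) for col in zip(*A)]]
    Bf = [row + [sum(row)] for row in B]
    return matmul(Af, Bf)

def checksums_ok(Cf, tol=1e-9):
    """Recompute and verify the checksum row and column of Cf."""
    body = [row[:-1] for row in Cf[:-1]]
    row_ok = all(abs(Cf[-1][j] - sum(r[j] for r in body)) <= tol
                 for j in range(len(body[0])))
    col_ok = all(abs(Cf[i][-1] - sum(body[i])) <= tol
                 for i in range(len(body)))
    return row_ok and col_ok
```

A single corrupted product entry violates both its row and its column checksum, so the failing pair locates the error and the mismatch magnitude corrects it; this is exactly the kind of linear structure the paper proves necessary.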

Journal ArticleDOI
TL;DR: An 8-b, 200-megasample/s flash converter with 400-MHz analog bandwidth and error correction circuitry is described, and measured frequency and error rate performance are examined.
Abstract: An 8-b, 200-megasample/s flash converter with 400-MHz analog bandwidth and error correction circuitry is described. A cascoded input stage and a dense bipolar process make the wide bandwidth possible. Errors arising from high input slew rate and comparator metastability are reduced by means of the circuitry and the latching stages respectively. The final defense against errors is the second rank error suppression. Measured frequency and error rate performance are examined.

Patent
Christopher P. Zook
06 Nov 1990
TL;DR: An error correction system (1000) included in a utilization device (1002) operates upon a plurality of sectors (S) stored in a data buffer (1100) for performing write-from-host and read-from-device operations, with overlapping and asynchronous operational steps including sector transfer into buffer, sector correction, and sector transfer out of buffer, as discussed by the authors.
Abstract: An error correction system (1000) included in a utilization device (1002) operates upon a plurality of sectors (S) stored in a data buffer (1100) for performing write-from-host and read-from-device operations. Overlapping and asynchronous operational steps are performed with respect to the plurality of sectors, the operational steps including sector transfer into buffer, sector correction, and sector transfer out of buffer. The error correction system (1000) includes a plurality of subsystems which are supervised and sequenced by a correction controller (1020). The subsystems include a CRC generation and checking subsystem (1030); an LBA subsystem (1040); an ECC/Syndrome Generator subsystem (1050); a header (ID) subsystem (1060); a correction subsystem (1070); and a correction checker system (1075).

Journal ArticleDOI
01 Sep 1990
TL;DR: In this article, a decoding algorithm for algebraic-geometric codes arising from arbitrary algebraic curves is presented, which corrects any number of errors up to ⌊(d-g-1)/2⌋, where d is the designed distance of the code and g is the genus of the curve.
Abstract: A decoding algorithm for algebraic-geometric codes arising from arbitrary algebraic curves is presented. This algorithm corrects any number of errors up to ⌊(d-g-1)/2⌋, where d is the designed distance of the code and g is the genus of the curve. The complexity of decoding is O(n^3), where n is the length of the code. Also presented is a modification of this algorithm, which in the case of elliptic and hyperelliptic curves is able to correct ⌊(d-1)/2⌋ errors. It is shown that for some codes based on plane curves the modified decoding algorithm corrects approximately d/2 - g/4 errors. Asymptotically good q-ary codes with a polynomial construction and a polynomial decoding algorithm (for q >= 361, on some segment their parameters are better than the Gilbert-Varshamov bound) are obtained. A family of asymptotically good binary codes with polynomial construction and polynomial decoding is also obtained, whose parameters are better than the Blokh-Zyablov bound on the whole rate interval.

Patent
28 Jun 1990
TL;DR: In this paper, an error detection and correction system is described which encodes data twice, once for error detection by using a cyclic redundancy check (CRC) code with the generator polynomial g(x) [in octal form] = 2413607036565172433223, and a second time for error correction by using a Reed-Solomon error correction code.
Abstract: The invention is an error detection and correction system which encodes data twice, once for error detection by using a cyclic redundancy check (CRC) code with the generator polynomial (in octal form) g(x) = 2413607036565172433223, and a second time for error correction by using a Reed-Solomon error correction code. The system then uses the CRC code to check the data for errors. If errors are found, the system uses the error location information supplied by the CRC code and the Reed-Solomon code to correct the errors.
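The CRC half of such a scheme is ordinary polynomial long division over GF(2). The sketch below uses the well-known CRC-16-CCITT generator 0x1021 as an illustrative stand-in for the patent's much longer octal polynomial; the division procedure is the same either way.

```python
def crc_remainder(data: bytes, poly: int, width: int) -> int:
    """Bit-serial CRC long division: remainder of the message polynomial
    times x^width, divided by the generator. `poly` holds the generator's
    low `width` coefficient bits (the top x^width term is implied)."""
    mask = (1 << width) - 1
    reg = 0
    for byte in data:
        for bit in range(7, -1, -1):      # feed message bits MSB-first
            top = reg >> (width - 1)      # coefficient about to leave
            reg = ((reg << 1) | ((byte >> bit) & 1)) & mask
            if top:
                reg ^= poly               # subtract the generator
    for _ in range(width):                # flush: multiply by x^width
        top = reg >> (width - 1)
        reg = (reg << 1) & mask
        if top:
            reg ^= poly
    return reg

crc = crc_remainder(b"123456789", 0x1021, 16)
```

Appending the remainder to the message makes the whole codeword divide evenly by the generator, which is exactly the receiver-side check such a system performs before invoking Reed-Solomon correction.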

Proceedings ArticleDOI
03 Apr 1990
TL;DR: A backward filtering formulation is given to show that sparse algebraic codes (SACs) offer distinct advantages and it is shown that they reduce the optimal-search computation per codeword.
Abstract: A general framework is introduced which allows both fast search and freedom in designing codebooks with good statistical properties. Several previously proposed schemes are compared from this viewpoint. A backward filtering formulation is given to show that sparse algebraic codes (SACs) (i.e., with few nonzero components) offer distinct advantages. It is shown that they reduce the optimal-search computation per codeword. They also allow control of the statistical properties of the codebook in the time and frequency domains. This control can be dynamic in the sense that it can be made to evolve as a function of the linear predictive coding model A(z). The algebraic-code excited linear prediction (ACELP) technology, which allows full duplex operation on a single TMS320C25 at rates between 4.8 and 16 kb/s and which is based on SAC-driven dynamic codebooks, is described.

Journal ArticleDOI
TL;DR: A scheme for the construction of m-out-of-n codes based on the arithmetic coding technique is described, which allows the construction of optimal or nearly optimal m-out-of-n codes for a wide range of block sizes limited only by the arithmetic precision used.
Abstract: A scheme for the construction of m-out-of-n codes based on the arithmetic coding technique is described. For appropriate values of n, k, and m, the scheme can be used to construct an (n,k) block code in which all the codewords are of weight m. Such codes are useful, for example, in providing perfect error detection capability in asymmetric channels such as optical communication links and laser disks. The encoding and decoding algorithms of the scheme perform simple arithmetic operations recursively, thereby facilitating the construction of codes with relatively long block sizes. The scheme also allows the construction of optimal or nearly optimal m-out-of-n codes for a wide range of block sizes limited only by the arithmetic precision used.
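One plausible way to realize such a recursive arithmetic construction is enumerative (lexicographic) coding of constant-weight words, sketched below; the paper's arithmetic-coding scheme differs in detail, and the function names here are illustrative.

```python
from math import comb

def index_to_word(index: int, n: int, m: int) -> list:
    """Map an integer in [0, C(n,m)) to the lexicographically `index`-th
    n-bit word of Hamming weight m (combinatorial number system)."""
    word, weight = [], m
    for pos in range(n):
        if weight == 0:
            word.append(0)
            continue
        # Count the words that put a 0 at this position:
        zeros_first = comb(n - pos - 1, weight)
        if index < zeros_first:
            word.append(0)
        else:
            index -= zeros_first
            word.append(1)
            weight -= 1
    return word

def word_to_index(word: list) -> int:
    """Inverse mapping: recover the integer index from the codeword."""
    n, index, weight = len(word), 0, sum(word)
    for pos, bit in enumerate(word):
        if bit == 1:
            index += comb(n - pos - 1, weight)
            weight -= 1
    return index
```

Every integer in [0, C(n,m)) maps to a distinct weight-m word, so the code is optimal in size for the chosen n and m; as in the paper's scheme, the block length is limited only by arithmetic precision, which in Python integers is unbounded.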

Journal ArticleDOI
TL;DR: Error correction techniques that overcome several error mechanisms that can affect the accuracy of charge-redistribution analog-to-digital converters (ADCs) are described, and a fully differential charge-redistribution ADC implemented with these techniques is described.
Abstract: Error correction techniques that overcome several error mechanisms that can affect the accuracy of charge-redistribution analog-to-digital converters (ADCs) are described. A correction circuit and a self-calibration algorithm are used to improve the common-mode rejection of the differential ADC. A modified technique is used to self-calibrate the capacitor ratio errors and obtain higher linearity. The residual error of the ADC due to capacitor voltage dependence is minimized using a quadratic voltage coefficient (QVC) self-calibration scheme. A dual-comparator topology with digital error correction circuitry is used to avoid errors due to comparator threshold hysteresis. A fully differential charge-redistribution ADC implemented with these techniques was fabricated in a 5-V 1-µm CMOS process using metal-to-polysilicide capacitors. The successive-approximation converter achieves 16-b accuracy with more than 90 dB of common-mode rejection while converting at a 200-kHz rate.

Proceedings ArticleDOI
26 Jun 1990
TL;DR: A simulation model of the IBM RT PC was developed and injected with 18900 gate-level transient faults, showing several distinct classes of program-level error behavior, including program flow changes, incorrect memory bus traffic, and undetected but corrupted program state.
Abstract: Effects of gate-level faults on program behavior are described and used as a basis for fault models at the program level. A simulation model of the IBM RT PC was developed and injected with 18900 gate-level transient faults. A comparison of the system state of good and faulted runs was made to observe internal propagation of errors, while memory traffic and program flow comparisons detected errors in program behavior. Results show several distinct classes of program-level error behavior, including program flow changes, incorrect memory bus traffic, and undetected but corrupted program state. Additionally, the dependencies of fault location, injection time, and workload on error detection coverage are reported. For the IBM RT PC, the error detection latency was shown to follow a Weibull distribution dependent on the error detection mechanism and the two selected workloads. These results aid in the understanding of the effects of gate-level faults and allow for the generation and validation of new fault models, fault injection methods, and error detection mechanisms.

Patent
13 Feb 1990
TL;DR: In this paper, error detection and correction are performed on the same chip as DRAM memory, where the data and error correction bits need not travel on an external bus, and error detection/correction can be conducted on a larger number of bits than the width of the data bus.
Abstract: Error detection or correction is provided on the same chip as DRAM memory. Because data and error correction bits need not travel on an external bus, error detection/correction can be conducted on a larger number of bits than the width of the data bus. When using memories which provide for access to a row of memory, such as static-column or fast-page mode memories, error correction is conducted on an entire row of memory during one error correction cycle. Following operations of the correction cycle, the data within a row of memory can be accessed independently of the EC circuitry.

Patent
07 Feb 1990
TL;DR: In this paper, a video signal processing circuit is disclosed in which error correction, error concealment and weighted mean processing are performed on input playback video signals from a video tape recorder.
Abstract: A video signal processing circuit is disclosed in which error correction, error concealment and weighted mean processing (in this order) are performed on input playback video signals from a video tape recorder. For performing error concealment on erroneous sample data for which error correction has not been feasible, a plurality of error concealment methods or algorithms are provided. One of the available error concealment methods is selected in dependence upon the state of error flags of sample data in the neighboring and/or temporal direction of the erroneous sample data, to perform a concealment in accordance with the error pattern and to provide satisfactory error concealment for a broad range of error rates.

Journal ArticleDOI
01 May 1990
TL;DR: The algebraic decoding algorithm developed recently by Elia is compared with the shift-search method and both algorithms decode efficiently the 1/2-rate (24,12) Golay code for correcting three errors and detecting four errors.
Abstract: A simplified procedure, called the shift-search method, is developed to decode the three possible errors in a (23,12,7) Golay codeword. The algebraic decoding algorithm developed recently by Elia is compared with this algorithm. A computer simulation shows that both algorithms are modular, regular and naturally suitable for either VLSI or software implementation. Both of these algorithms decode efficiently the 1/2-rate (24,12) Golay code for correcting three errors and detecting four errors. The algebraic technique is a slightly faster algorithm in software than the shift-search procedure.

Patent
18 Oct 1990
TL;DR: In this article, an internally fault-tolerant data error detection and correction integrated circuit device (10) and a method of operating same is presented. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system and provides a relatively short eight bits of data-protecting parity.
Abstract: An internally fault-tolerant data error detection and correction integrated circuit device (10) and a method of operating same. The device functions as a bidirectional data buffer between a 32-bit data processor and the remainder of a data processing system; a 32-bit datum is provided with a relatively short eight bits of data-protecting parity. The 32 bits of data and eight bits of parity are partitioned into eight 4-bit nibbles and two 4-bit nibbles, respectively. For data flowing towards the processor, the data and parity nibbles are checked in parallel and in a single operation employing a dual orthogonal basis technique. The dual orthogonal basis increases the efficiency of the implementation. Any one of the ten (eight data, two parity) nibbles is correctable if erroneous, or two different erroneous nibbles are detectable. For data flowing away from the processor, the appropriate parity nibble values are calculated and transmitted to the system along with the data. The device regenerates parity values for data flowing in either direction and compares regenerated to generated parity with a totally self-checking equality checker. As such, the device is self-validating and enabled to both detect and indicate an occurrence of an internal failure. A generalization of the device that protects 64-bit data with 16-bit parity against byte-wide errors is also presented.

Journal ArticleDOI
TL;DR: In this paper, a large-signal automatic stepped CW waveform measurement system for nonlinear device characterization is presented that combines the high accuracy of a vector network analyzer with the waveform measurement capabilities of a sampling oscilloscope.
Abstract: A large-signal automatic stepped CW waveform measurement system for nonlinear device characterization is presented that combines the high accuracy of a vector network analyzer with the waveform measurement capabilities of a sampling oscilloscope. A large-signal error model and a corresponding coaxial calibration procedure are proposed to describe the systematic errors of the measurement setup. The error parameters and the correction algorithm are independent of the properties of the RF generator. System accuracy is investigated by Schottky diode verification measurements with different offsets from the reference plane. GaAs MESFET reflection and transmission response measurements with error correction extended to the planar device under test (DUT) reference planes are given.

Journal ArticleDOI
TL;DR: In this article, intended for readers with basic knowledge in coding, the codes used in actual systems are surveyed, including bit-error-correcting/detecting codes, byte- Error control codes, and codes to detect single-byte errors as well as correct single-bit errors and detect double- bit errors.
Abstract: In this article, intended for readers with basic knowledge in coding, the codes used in actual systems are surveyed. Error control in high-speed memories is examined, including bit-error-correcting/detecting codes, byte-error-correcting/detecting codes, and codes to detect single-byte errors as well as correct single-bit errors and detect double-bit errors. Tape and disk memory codes for error control in mass memories are discussed. Processor error control and unidirectional error-control codes are covered, including the application of the latter to masking asymmetric line faults.
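The single-bit-correcting, double-bit-detecting behavior surveyed here can be seen concretely in an extended Hamming code. Below is a minimal (8,4) SEC-DED sketch, an illustrative toy with bit lists for clarity; real memory codes use wider words and parallel check hardware.

```python
def encode(d):
    """Extended Hamming (8,4): 4 data bits -> 7-bit Hamming codeword
    plus one overall parity bit (even parity over all 8 bits)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    word = [p1, p2, d[0], p3, d[1], d[2], d[3]]  # Hamming(7,4) layout
    return word + [sum(word) % 2]                # overall parity bit

def decode(w):
    """Return (data, status): 'ok', 'corrected' (single error fixed),
    or 'double' (detected but uncorrectable)."""
    s = ((w[0] ^ w[2] ^ w[4] ^ w[6])
         | (w[1] ^ w[2] ^ w[5] ^ w[6]) << 1
         | (w[3] ^ w[4] ^ w[5] ^ w[6]) << 2)     # 1-based error position
    overall = sum(w) % 2
    if s == 0 and overall == 0:
        return [w[2], w[4], w[5], w[6]], "ok"
    if overall == 1:                 # odd number of flips: correct one
        w = w[:]
        if s:
            w[s - 1] ^= 1            # flip the bit the syndrome names
        else:
            w[7] ^= 1                # the overall parity bit itself
        return [w[2], w[4], w[5], w[6]], "corrected"
    return None, "double"            # even flips with nonzero syndrome
```

The extra overall parity bit is what separates the two cases: a single error leaves overall parity odd (correct via the syndrome), while a double error leaves it even with a nonzero syndrome (detect only), which is precisely the SEC-DED distinction.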

Patent
24 Sep 1990
TL;DR: In this paper, the authors proposed a channel decoding scheme for soft decision decoding in a communications network having time-dispersed signals. But this scheme requires the receiver to receive a time dispersed signal, at least partly equalizing those time dis-persal effects, recovering information contained in the signal, multiplying with that recovered information the absolute value of that atleast partly equalized signal (scaled by a number derived from channel conditions over a time during which at least part of the information to be recovered is distributed).
Abstract: In a communications network having time-dispersed signals, there is provided a mechanism for soft decision decoding. It comprises: radio reception of a time-dispersed signal, at least partly equalizing those time-dispersal effects, recovering information contained in the signal, multiplying with that recovered information the absolute value of that at-least-partly-equalized signal (scaled by a number derived from channel conditions over a time during which at least part of the information to be recovered is distributed), and error-correcting the multiplied information by a Viterbi algorithm channel decoding scheme of error correction. Accordingly, soft decision information is generated from within the equalization process itself.

Journal ArticleDOI
01 Jun 1990
TL;DR: In this article, a concatenated coded modulation scheme is presented for error control in data communications, which is achieved by concatenating a Reed-Solomon outer code and a bandwidth efficient block inner code for M-ary phase-shift keying modulation.
Abstract: A concatenated coded modulation scheme is presented for error control in data communications. The scheme is achieved by concatenating a Reed-Solomon outer code and a bandwidth-efficient block inner code for M-ary phase-shift keying (PSK) modulation. Error performance of the scheme is analyzed for an additive white Gaussian noise (AWGN) channel. It is shown that extremely high reliability can be attained by using a simple M-ary PSK modulation inner code and a relatively powerful Reed-Solomon outer code. Furthermore, if an inner code of high effective rate is used, the bandwidth expansion required by the scheme due to coding will be greatly reduced. The scheme is particularly effective for high-speed satellite communications for large file transfer where high reliability is required. A simple method is also presented for constructing block codes for M-ary PSK modulation. Some short M-ary PSK codes with good minimum squared Euclidean distance are constructed. These codes have trellis structure and hence can be decoded with a soft-decision Viterbi decoding algorithm. Furthermore, some of these codes are phase invariant under multiples of 45 degrees rotation.

Patent
31 Aug 1990
TL;DR: In this paper, a secure transmission of data between the transmitting and receiving ends of a cordless telephone by encapsulating the desired command in a message code is described, which includes synchronization, security, and error detection codes as well as the encapsulated command.
Abstract: A novel method is disclosed that facilitates secured transmission of data between the transmitting and receiving ends of a cordless telephone by encapsulating the desired command in a message code. This message code includes synchronization, security, and error detection codes as well as the encapsulated command. Both the process that generates the security code and the process that enables recovery from errors during data transmission efficiently utilize the limited memory and processing capabilities available on the cordless telephones.