
Showing papers on "Error detection and correction" published in 1999


Journal ArticleDOI
Carl James1
TL;DR: This book discusses human error, successive paradigms, interlanguage and the veto on comparison, learners and native speakers, the heyday of Error Analysis, mounting criticism of Error Analysis, and data collection for Error Analysis.
Abstract: General editor's preface. Author's preface. Abbreviations. 1. Definition and Delimitation: human error; successive paradigms; interlanguage and the veto on comparison; learners and native speakers; the heyday of Error Analysis; mounting criticism of Error Analysis; data collection for Error Analysis. 2. The Scope of Error Analysis: good English for the English; good English for the L2 learner; the native speaker and the power dimension; the Incompleteness hypothesis. 3. Defining 'Error': ignorance; measures of deviance; other dimensions of error: error and mistake; error, mistake and acquisition/learning - an equation?; lapsology. 4. The Description of Errors: error detection; describing errors; error classification; error taxonomies; counting errors; profiling and Error Analysis; computerized corpora of errors (ICLE, COALA). 5. Levels of Error: medium errors; text errors; lexical errors; classifying lexical errors; grammar errors; discourse errors. 6. Diagnosing Errors: description and diagnosis; ignorance and avoidance; mother tongue influence (interlingual errors); target language causes (intralingual errors); learning-strategy based errors; communication-strategy based errors; induced errors; compound and ambiguous errors. 7. Error Gravity and Error Evaluation: evaluation; criteria for error gravity. 8. Error Correction: what is correction?; whether to correct: pros and cons; how to do error correction: some options and principles; noticing error. 9. A Case Study: elicitation and registration; error identification; categorizing the errors; status: error or mistake?; diagnosis. Bibliography. Index.

1,058 citations


Book ChapterDOI
01 Feb 1999
TL;DR: This book can be used as a textbook for graduate-level electrical engineering students and will be of key interest to researchers and engineers of wireless and mobile communication, satellite communication, and data communication.
Abstract: From the Publisher: Convolutional codes, among the main error control codes, are routinely used in applications for mobile telephony, satellite communications, and voice-band modems. Written by two leading authorities in coding and information theory, this book brings you a clear and comprehensive discussion of the basic principles underlying convolutional coding. This book can be used as a textbook for graduate-level electrical engineering students. It will be of key interest to researchers and engineers of wireless and mobile communication, satellite communication, and data communication.

753 citations


Journal ArticleDOI
TL;DR: The method the authors propose applies an iterative expectation-maximization (EM) strategy that interleaves pixel classification with estimation of class distribution and bias field parameters, improving the likelihood of the model parameters at each iteration.
Abstract: The authors propose a model-based method for fully automated bias field correction of MR brain images. The MR signal is modeled as a realization of a random process with a parametric probability distribution that is corrupted by a smooth polynomial inhomogeneity or bias field. The method the authors propose applies an iterative expectation-maximization (EM) strategy that interleaves pixel classification with estimation of class distribution and bias field parameters, improving the likelihood of the model parameters at each iteration. The algorithm, which can handle multichannel data and slice-by-slice constant intensity offsets, is initialized with information from a digital brain atlas about the a priori expected location of tissue classes. This allows full automation of the method without need for user interaction, yielding more objective and reproducible results. The authors have validated the bias correction algorithm on simulated data and they illustrate its performance on various MR images with important field inhomogeneities. They also relate the proposed algorithm to other bias correction algorithms.
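
A much-simplified 1-D Python sketch of this interleaved EM structure follows: it alternates soft Gaussian classification of samples with a polynomial least-squares fit of the bias field. The atlas-based initialization, multichannel data, and slice offsets of the actual method are omitted, and the crude quantile initialization is an assumption of this sketch.

```python
# Simplified 1-D sketch of EM bias correction: interleave (i) soft
# classification into Gaussian classes and (ii) a polynomial fit of the
# residual as the bias field. Not the paper's full method.
import numpy as np

def em_bias_correction(y, n_classes=3, degree=4, iters=25):
    x = np.linspace(-1.0, 1.0, y.size)
    basis = np.vander(x, degree + 1)          # polynomial bias-field basis
    bias = np.zeros_like(y)
    mu = np.quantile(y, np.linspace(0.15, 0.85, n_classes))  # crude init
    sigma = np.full(n_classes, y.std())
    prior = np.full(n_classes, 1.0 / n_classes)
    for _ in range(iters):
        r = y - bias
        # E-step: posterior class probabilities given the current bias field
        post = np.stack([
            prior[k] / sigma[k] * np.exp(-0.5 * ((r - mu[k]) / sigma[k]) ** 2)
            for k in range(n_classes)])
        post /= post.sum(axis=0, keepdims=True) + 1e-300
        # M-step (classes): update means, variances, priors
        for k in range(n_classes):
            nk = post[k].sum()
            mu[k] = (post[k] * r).sum() / nk
            sigma[k] = np.sqrt((post[k] * (r - mu[k]) ** 2).sum() / nk) + 1e-6
            prior[k] = nk / y.size
        # M-step (bias): least-squares polynomial fit of the residual
        predicted = (post * mu[:, None]).sum(axis=0)
        coef, *_ = np.linalg.lstsq(basis, y - predicted, rcond=None)
        bias = basis @ coef
    return y - bias, bias

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 400)
y = np.where(x < 0.5, 1.0, 3.0) + 1.5 * x**2 + rng.normal(0, 0.1, x.size)
corrected, bias = em_bias_correction(y, n_classes=2)
```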

643 citations


Journal ArticleDOI
TL;DR: A new class of MDS (maximum distance separable) array codes of size n×n (n a prime number) called X-code, which has a simple geometrical construction achieving encoding/update-optimal complexity.
Abstract: We present a new class of MDS (maximum distance separable) array codes of size n×n (n a prime number) called X-code. The X-codes have minimum column distance 3; namely, they can correct either one column error or two column erasures. The key novelty of X-code is its simple geometrical construction, which achieves encoding/update-optimal complexity, i.e., a change of any single information bit affects exactly two parity bits. The key idea in our construction is that all parity symbols are placed in rows rather than columns.
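
To make the update-optimal property concrete, the following Python sketch builds an X-code-style n×n array with two diagonal parity rows and verifies that flipping one information bit changes exactly two parity bits. The diagonal indexing used here is a plausible reconstruction, not necessarily the paper's exact formulas.

```python
# Illustrative X-code-style layout (n prime): rows 0..n-3 hold information,
# rows n-2 and n-1 hold parities along diagonals of slope +1 and -1.
import numpy as np

def encode(info, n):
    """info: (n-2) x n 0/1 array -> n x n codeword with two parity rows."""
    code = np.zeros((n, n), dtype=int)
    code[:n - 2] = info
    for i in range(n):
        code[n - 2, i] = sum(info[k, (i + k + 2) % n] for k in range(n - 2)) % 2
        code[n - 1, i] = sum(info[k, (i - k - 2) % n] for k in range(n - 2)) % 2
    return code

n = 5
rng = np.random.default_rng(0)
info = rng.integers(0, 2, size=(n - 2, n))
before = encode(info, n)
info[1, 3] ^= 1                    # flip a single information bit
after = encode(info, n)
print(np.argwhere(before != after))  # the flipped bit plus exactly 2 parity bits
```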

401 citations


Proceedings ArticleDOI
21 Mar 1999
TL;DR: A simple algorithm is obtained that optimizes a subjective rather than an objective measure of quality, incorporates the constraints of rate control and playout delay adjustment schemes, and adapts to varying loss conditions in the network.
Abstract: Excessive packet loss rates can dramatically decrease the audio quality perceived by users of Internet telephony applications. Previous results suggest that error control schemes using forward error correction (FEC) are good candidates for decreasing the impact of packet loss on audio quality. However, the FEC scheme must be coupled to a rate control scheme. Furthermore, the amount of redundant information used at any given point in time should also depend on the characteristics of the loss process at that time (it would make no sense to send much redundant information when the channel is loss free), on the end-to-end delay constraints (destinations typically have to wait longer to decode the FEC as more FEC information is used), on the quality of the redundant information, etc. However, it is not clear, given all these constraints, how to choose the "best" possible redundant information. We address this issue, and illustrate the approach using an FEC scheme for packet audio standardized in the IETF. We show that the problem of finding the best redundant information can be expressed mathematically as a constrained optimization problem for which we give explicit solutions. We obtain from these solutions a simple algorithm with very interesting features, namely (i) the algorithm optimizes a subjective measure (such as the audio quality perceived at a destination) as opposed to an objective measure of quality (such as the packet loss rate at a destination), (ii) it incorporates the constraints of rate control and playout delay adjustment schemes, and (iii) it adapts to varying loss conditions in the network (estimated online with RTCP feedback). We have been using the algorithm, together with a TCP-friendly rate control scheme, and we have found it to provide very good audio quality even over paths with high and varying loss rates. We present simulation and experimental results to illustrate its performance.
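
A toy Python sketch of the optimization idea for an IETF-style redundant-audio scheme: pick the redundant encoding that maximizes expected received quality under a rate budget and an estimated loss rate. The codec names, rates, and quality scores below are hypothetical placeholders, not values from the paper.

```python
# Toy redundancy selection: packet n carries the primary encoding of frame n
# plus a lower-rate redundant encoding of frame n-1 (RFC 2198 style).
# All numbers are made-up illustrative values.

PRIMARY = ("PCM", 64.0, 4.1)          # (name, rate in kb/s, quality score)
REDUNDANT_OPTIONS = [
    ("none", 0.0, 0.0),
    ("GSM", 13.0, 3.5),
    ("LPC",  2.4, 2.2),
]

def best_redundancy(loss_rate, rate_budget):
    """Pick the redundant coder maximizing expected quality under the budget."""
    _, rate_p, q_p = PRIMARY
    best = None
    for name_r, rate_r, q_r in REDUNDANT_OPTIONS:
        if rate_p + rate_r > rate_budget:
            continue
        # Frame decoded from its primary copy with prob (1 - p); otherwise
        # from the redundant copy in the next packet with prob p * (1 - p).
        expected_q = (1 - loss_rate) * q_p + loss_rate * (1 - loss_rate) * q_r
        if best is None or expected_q > best[1]:
            best = (name_r, expected_q)
    return best

print(best_redundancy(loss_rate=0.2, rate_budget=80.0))   # -> ('GSM', ...)
```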

377 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: An efficient multiple description (MD) source coding scheme achieves robust communication over unreliable channels by using channel coding principles to correlate the descriptions and then exploiting this correlation to combat channel impairments.
Abstract: We present an efficient multiple description (MD) source coding scheme to achieve robust communication over unreliable channels. In contrast to the popular signal processing based methods, we propose channel coding principles to correlate the descriptions, and then use this correlation for combating channel impairments. We propose a fast, nearly optimal algorithm that aims to maximize the expected quality at the receiver given the channel state and the side channel rates. Our scheme can be used in conjunction with any source coder that is scalable, and is most easily matched to coders outputting a progressive bitstream. It has applications to the transmission of audio, images, as well as delay-constrained video signals, and can also be used to achieve reliable multicast transmission over the existing Internet with the use of simple protocols. Comparisons of our scheme on standard test images to some of the existing state-of-the-art signal processing based MD methods suggest that our simple scheme outperforms them by significant margins.

325 citations


Book
01 Dec 1999
TL;DR: This book covers measurement errors and error reduction techniques, steady-state and dynamic data reconciliation, and gross error detection, with appendices on linear algebra, graph theory, and statistical hypothesis testing.
Abstract: Introduction. Measurement Errors and Error Reduction Techniques. Steady State Data Reconciliation for Bilinear Systems. Nonlinear Steady State Data Reconciliation. Data Reconciliation in Dynamic Systems. Introduction to Gross Error Detection. Multiple Gross Error Identification Strategies for Steady State Processes. Gross Error Detection in Dynamic Processes. Design of Sensor Networks. Industrial Applications of Data Reconciliation and Gross Error Detection Technologies. Appendix A: Basic concepts of linear algebra. Appendix B: Basic concepts of Graph Theory. Appendix C: Statistical Hypotheses Testing.
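
As a minimal illustration of the reconciliation-plus-detection workflow the book covers, the sketch below reconciles measured flows against a linear balance constraint and applies the global chi-square test for gross errors. The flow network, measurements, and covariance are invented; the book treats the bilinear, nonlinear, and dynamic generalizations.

```python
# Linear steady-state data reconciliation with a global chi-square test.
# Model: measurements y = x + e, e ~ N(0, V), constraints A x = 0.
import numpy as np
from scipy import stats

A = np.array([[1.0, -1.0, -1.0]])     # splitter balance: x1 = x2 + x3
y = np.array([10.2, 5.4, 4.9])        # raw measurements (invented)
V = np.diag([0.1, 0.05, 0.05]) ** 2   # measurement error covariance

# Reconciled estimate: project measurements onto the constraint space
r = A @ y                              # balance residuals
S = A @ V @ A.T
x_hat = y - V @ A.T @ np.linalg.solve(S, r)   # satisfies A @ x_hat = 0

# Global gross-error test: r' S^-1 r ~ chi^2 with rank(A) degrees of freedom
gamma = r @ np.linalg.solve(S, r)
threshold = stats.chi2.ppf(0.95, df=A.shape[0])
print(x_hat, gamma, gamma > threshold)         # small residual: no gross error

y_bad = y + np.array([1.5, 0.0, 0.0])          # simulate a gross error
r_bad = A @ y_bad
print(r_bad @ np.linalg.solve(S, r_bad) > threshold)   # True: flagged
```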

294 citations


Journal ArticleDOI
TL;DR: A modified bounding technique is presented that relies on limiting the conditional union bound before averaging over the fading process, which provides tight and hence useful numerical results.
Abstract: This correspondence considers union upper bound techniques for error control codes with limited interleaving over block fading Rician channels. A modified bounding technique is presented that relies on limiting the conditional union bound before averaging over the fading process. This technique, although analytically not very attractive, provides tight and hence useful numerical results.

253 citations


Journal ArticleDOI
TL;DR: Simulation results demonstrate that both the video distortion at the decoder and packet loss rate can be significantly reduced when incorporating the channel information provided by the feedback channel and the a priori model into the rate control algorithm.
Abstract: We study the problem of rate control for transmission of video over burst-error wireless channels, i.e., channels such that errors tend to occur in clusters during fading periods. In particular we consider a scenario consisting of packet based transmission with automatic repeat request (ARQ) error control and a back channel. We start by showing how the delay constraints in real time video transmission can be translated into rate constraints at the encoder, where the applicable rate constraints at a given time depend on future channel rates. With the acknowledgments received through the back channel we have an estimate of the current channel state. This information, combined with an a priori model of the channel, allows us to statistically model the future channel rates. Thus the rate constraints at the encoder can be expressed in terms of the expected channel behavior. We can then formalize a rate distortion optimization problem, namely, that of assigning quantizers to each of the video blocks stored in the encoder buffer such that the quality of the received video is maximized. This requires that the rate constraints be included in the optimization, since violating a rate constraint is equivalent to violating a delay constraint and thus results in losing a video block. We formalize two possible approaches. The first one seeks to minimize the distortion for the expected rate constraints given the channel model and current observation. The second approach seeks to allocate bits so as to minimize the expected distortion for the given model. We use both dynamic programming and Lagrangian optimization approaches to solve these problems. Our simulation results demonstrate that both the video distortion at the decoder and packet loss rate can be significantly reduced when incorporating the channel information provided by the feedback channel and the a priori model into the rate control algorithm.

242 citations


Proceedings ArticleDOI
06 Jun 1999
TL;DR: It is proved by an ensemble performance argument that these codes are asymptotically good in the sense of the minimum distance criterion and the flexibility in selecting the parameters makes them suitable for small and large block length forward error correcting schemes.
Abstract: We build a class of pseudo-random error correcting codes, called generalized low density codes (GLD), from the intersection of two interleaved block codes. GLD code performance approaches the channel capacity limit and the GLD decoder is based on simple and fast SISO (soft input-soft output) decoders of smaller block codes. GLD codes are a special case of Tanner codes and a generalization of Gallager's LDPC codes. It is also proved by an ensemble performance argument that these codes are asymptotically good in the sense of the minimum distance criterion. The flexibility in selecting the parameters of GLD codes makes them suitable for small and large block length forward error correcting schemes.

220 citations


Proceedings ArticleDOI
17 Aug 1999
TL;DR: A framework for low-energy digital signal processing (DSP) is proposed in which the supply voltage is scaled beyond the critical voltage required to match the critical path delay to the throughput, together with a prediction-based error-control scheme that enhances the performance of the filtering algorithm in the presence of errors due to soft computations.
Abstract: In this paper, we propose a framework for low-energy digital signal processing (DSP) where the supply voltage is scaled beyond the critical voltage required to match the critical path delay to the throughput. This deliberate introduction of input-dependent errors leads to degradation in the algorithmic performance, which is compensated for via algorithmic noise-tolerance (ANT) schemes. The resulting setup, comprised of the DSP architecture operating at sub-critical voltage and the error control scheme, is referred to as soft DSP. It is shown that technology scaling renders the proposed scheme more effective, as the delay penalty suffered due to voltage scaling reduces due to short channel effects. The effectiveness of the proposed scheme is also enhanced when arithmetic units with a higher "delay imbalance" are employed. A prediction-based error-control scheme is proposed to enhance the performance of the filtering algorithm in the presence of errors due to soft computations. For a frequency-selective filter, it is shown that the proposed scheme provides a 60%-81% reduction in energy dissipation for filter bandwidths up to 0.5π (where 2π corresponds to the sampling frequency f_s) over that achieved via conventional voltage scaling, with a maximum of 0.5 dB degradation in the output signal-to-noise ratio (SNR_o). It is also shown that the proposed algorithmic noise-tolerance schemes can be used to improve the performance of DSP algorithms in the presence of bit-error rates of up to 10^-3 due to deep submicron (DSM) noise.
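
A toy sketch of prediction-based error control in the ANT spirit: each output sample is compared against a prediction from past (corrected) outputs, and samples that deviate too much are replaced by the prediction. The predictor order and threshold below are illustrative choices, not the paper's.

```python
# Prediction-based error control (ANT flavor): replace outputs that deviate
# too far from a linear prediction of past corrected outputs.
import numpy as np

def ant_filter(noisy_outputs, order=3, threshold=5.0):
    corrected = list(noisy_outputs[:order])
    for y in noisy_outputs[order:]:
        past = corrected[-order:]
        # simple linear extrapolation from the last `order` corrected samples
        prediction = past[-1] + (past[-1] - past[0]) / (order - 1)
        corrected.append(prediction if abs(y - prediction) > threshold else y)
    return np.array(corrected)

t = np.linspace(0, 1, 50)
clean = 10 * np.sin(2 * np.pi * 2 * t)
soft = clean.copy()
soft[20] += 40                        # a large error from sub-critical operation
print(np.max(np.abs(ant_filter(soft) - clean)))   # spike largely suppressed
```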

Book
12 May 1999
TL;DR: The fundamentals of inductive magnetic heads and media and the fundamental limitations of magnetic recording are explained.
Abstract: Preface. Introduction. Fundamentals of Inductive Magnetic Head and Medium. Read Process in Magnetic Recording. Write Process in Magnetic Recording. Inductive Magnetic Process. Magnetoresistive Heads. Magnetic Recording Media. Channel Coding and Error Correction. Noises. Nonlinear Distortions. Peak Detection Channel. PRML Channel. Decision Feedback Channel. Off-Track Performance. Head-Disk Assembly Servo. Fundamental Limitations of Magnetic Recording. Alternative Information Storage Technologies.

Journal Article
TL;DR: This work considers the broadcast exclusion problem: how to transmit a message over a broadcast channel shared by N = 2^n users so that all but some specified coalition of k excluded users can understand the contents of the message.
Abstract: We consider the broadcast exclusion problem: how to transmit a message over a broadcast channel shared by N = 2^n users so that all but some specified coalition of k excluded users can understand the contents of the message. Using error-correcting codes, and avoiding any computational assumptions in our constructions, we construct natural schemes that completely avoid any dependence on n in the transmission overhead. Specifically, we construct: (i) (for illustrative purposes,) a randomized scheme where the server's storage is exponential (in n), but the transmission overhead is O(k), and each user's storage is O(kn); (ii) a scheme based on polynomials where the transmission overhead is O(kn) and each user's storage is O(kn); and (iii) a scheme using algebraic-geometric codes where the transmission overhead is O(k^2) and each user is required to store O(kn) keys. In the process of proving these results, we show how to construct very good cover-free set systems and combinatorial designs based on algebraic-geometric codes, which may be of independent interest and application. Our approach also naturally extends to solve the problem in the case where the broadcast channel may introduce errors or lose information.

Book ChapterDOI
TL;DR: A new class of erasure codes built from irregular bipartite graphs is introduced; these codes have linear-time encoding and decoding algorithms and can transmit over an erasure channel at rates arbitrarily close to the channel capacity.
Abstract: We will introduce a new class of erasure codes built from irregular bipartite graphs that have linear time encoding and decoding algorithms and can transmit over an erasure channel at rates arbitrarily close to the channel capacity. We also show that these codes are close to optimal with respect to the trade-off between the proximity to the channel capacity and the running time of the recovery algorithm.
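
The linear-time decoding behind such graph-based erasure codes is the "peeling" process: repeatedly find a parity check with exactly one erased neighbor and recover that symbol by XOR. The sketch below shows the loop on a made-up toy graph; the paper's contribution is the irregular degree-distribution design, not this generic loop.

```python
# Peeling decoder for a graph-based erasure code (toy graph, invented data).
def peel_decode(symbols, checks, check_values):
    """symbols: dict index -> bit or None (erased);
    checks: list of index tuples; check_values: XOR over each check's symbols."""
    progress = True
    while progress:
        progress = False
        for members, value in zip(checks, check_values):
            erased = [i for i in members if symbols[i] is None]
            if len(erased) == 1:
                known = 0
                for i in members:
                    if symbols[i] is not None:
                        known ^= symbols[i]
                symbols[erased[0]] = known ^ value   # recover by XOR
                progress = True
    return symbols

# Four data bits 1,0,1,1 protected by three XOR checks; bits 1 and 2 erased.
symbols = {0: 1, 1: None, 2: None, 3: 1}
checks = [(0, 1), (1, 2, 3), (2, 3)]
check_values = [1 ^ 0, 0 ^ 1 ^ 1, 1 ^ 1]
print(peel_decode(symbols, checks, check_values))   # recovers bits 1 and 2
```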

Journal ArticleDOI
TL;DR: Several very large scale integration (VLSI) architectures suitable for turbo decoder implementation are proposed and compared in terms of complexity and performance; the impact on the VLSI complexity of system parameters like the state number, number of iterations, and code rate are evaluated for the different solutions.
Abstract: In recent years, great interest has been attracted by a new error-correcting code technique, known as "turbo coding", which has been proven to offer performance closer to Shannon's limit than traditional concatenated codes. In this paper, several very large scale integration (VLSI) architectures suitable for turbo decoder implementation are proposed and compared in terms of complexity and performance; the impact on VLSI complexity of system parameters such as the number of states, number of iterations, and code rate is evaluated for the different solutions. The results of this architectural study have then been exploited for the design of a specific decoder implementing a serial concatenation scheme with rate-2/3 and rate-3/4 codes; the designed circuit occupies 35 mm^2, supports a 2 Mb/s data rate, and, for a bit error probability of 10^-6, yields a coding gain larger than 7 dB with ten iterations.

Journal ArticleDOI
TL;DR: New maximum free-distance convolutional codes with rates 1/2, 1/3, and 1/4 are presented, offering improved coding gains for coherent BPSK over a Rayleigh fading channel across a wide range of signal-to-noise ratios.
Abstract: New convolutional codes with rates 1/2, 1/3, and 1/4 are presented for constraint lengths ranging from 3 to 15. These new codes are maximum free-distance codes. Furthermore, the codes have optimized information error weights, resulting in a low bit-error rate for binary communication on both additive white Gaussian noise (AWGN) and Rayleigh fading channels. Improved coding gains of as much as 0.6 dB compared to previously published codes have been observed for coherent BPSK over a Rayleigh fading channel and a wide range of signal-to-noise ratios.
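
For reference, a rate-1/2 feed-forward convolutional encoder can be sketched as below. The classic constraint-length-3 generators (5, 7) in octal (a known maximum free-distance code with d_free = 5) are used purely for illustration; the paper's new generators for constraint lengths 3 to 15 are not reproduced here.

```python
# Rate-1/2 feed-forward convolutional encoder. State holds the most recent
# `constraint_len` input bits (newest in the LSB); each generator polynomial
# selects the shift-register taps to XOR for one output bit.
def conv_encode(bits, generators=(0o5, 0o7), constraint_len=3):
    state = 0
    out = []
    for b in bits + [0] * (constraint_len - 1):        # flush with zeros
        state = ((state << 1) | b) & ((1 << constraint_len) - 1)
        for g in generators:
            out.append(bin(state & g).count("1") % 2)  # parity of tapped bits
    return out

print(conv_encode([1, 0, 1, 1]))
```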

Proceedings ArticleDOI
01 Nov 1999
TL;DR: A systematic approach for automatically introducing data and code redundancy into an existing program written in a high-level language; the transformations can be applied as a pre-compilation phase, freeing the programmer from the cost and responsibility of introducing suitable EDMs into the code.
Abstract: The paper describes a systematic approach for automatically introducing data and code redundancy into an existing program written using a high-level language. The transformations aim at making the program able to detect most of the soft errors affecting data and code, independently of the Error Detection Mechanisms (EDMs) possibly implemented by the hardware. Since the transformations can be automatically applied as a pre-compilation phase, the programmer is freed from the cost and responsibility of introducing suitable EDMs in their code. Preliminary experimental results are reported, showing the fault coverage obtained by the method, as well as some figures concerning the slow-down and code size increase it causes.
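
The flavor of such transformations is sketched below in Python, hand-written for illustration (the paper applies them automatically, at the source level, before compilation): duplicate every datum, repeat every operation on the copy, and insert consistency checks so a soft error flipping one copy is detected.

```python
# Hand-written illustration of duplicated data + consistency checks, the
# kind of redundancy the paper's pre-compilation pass introduces automatically.
class SoftErrorDetected(Exception):
    pass

def checked(a, a_dup):
    """Consistency check inserted before every use of a variable."""
    if a != a_dup:
        raise SoftErrorDetected("mismatch between variable copies")
    return a

def sum_list(xs):
    total, total_dup = 0, 0
    for x in xs:
        x_dup = x                           # duplicated datum
        total = total + checked(x, x_dup)   # operation on copy 1
        total_dup = total_dup + x_dup       # same operation on copy 2
        checked(total, total_dup)           # check after each operation
    return checked(total, total_dup)

print(sum_list([1, 2, 3]))                  # 6; a flipped copy raises instead
```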

Journal ArticleDOI
TL;DR: The main purpose of this paper is to provide a detailed review and discussion of retrospective motion correction techniques that have been described in the literature, to summarize the conclusions that can be drawn from these studies, and to provide suggestions for future research.
Abstract: Digital subtraction angiography (DSA) is a well-established modality for the visualization of blood vessels in the human body. A serious disadvantage of this technique, inherent to the subtraction operation, is its sensitivity to patient motion. The resulting artifacts frequently reduce the diagnostic value of the images. Over the past two decades, many solutions to this problem have been put forward. In this paper, the authors give an overview of the possible types of motion artifacts and the techniques that have been proposed to avoid them. The main purpose of this paper is to provide a detailed review and discussion of retrospective motion correction techniques that have been described in the literature, to summarize the conclusions that can be drawn from these studies, and to provide suggestions for future research.

Patent
Peter Mcginn1
30 Apr 1999
TL;DR: A parity controller generates parity values that correlate to the contents of the memory array (200); if errors such as leakage, soft-error events, or electrical shorts are detected, corrective measures may be taken to extend the reliable life of the product.
Abstract: A microcontroller (100) has a CPU (102) and memory (104). Memory (104) contains a memory array (200). A large portion of the array (200) is used to contain functional data for the CPU (102), but the array (200) also contains one or a few rows of memory content parity information. Once the array (200) is written with lasting data and/or software, a parity controller (208) will generate initial parity values which correlate to the contents of the memory array (200). This parity information is stored within the parity portion of the array (200). After generating the initial parity data, the parity controller (208) occasionally, upon some parity checking event, generates current parity from the data stored within the array (200). This current parity is compared against the parity portion of the array (200) using the parity logic (210). If errors are detected, it is clear that the software/data that was intended to be static and non-changing has experienced a leakage error, soft error event, electrical short, etc. Once these errors are detected, corrective measures may be taken to extend the reliable life of the product.
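
A minimal sketch of the checking cycle the patent describes, with invented contents: store per-row parity after the static image is written, then recompute on each checking event and flag mismatching rows.

```python
# Per-row parity over a "static" memory image, rechecked on a checking event.
def row_parity(row):
    p = 0
    for byte in row:
        p ^= byte
    return p

memory = [[0x12, 0x34], [0xAB, 0xCD]]          # static array contents (invented)
parity = [row_parity(r) for r in memory]        # initial parity generation

def parity_check_event(memory, parity):
    """Return indices of rows whose current parity no longer matches."""
    return [i for i, r in enumerate(memory) if row_parity(r) != parity[i]]

memory[1][0] ^= 0x01                            # simulate a soft-error bit flip
print(parity_check_event(memory, parity))       # -> [1]: row needs correction
```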

Journal ArticleDOI
TL;DR: It is shown that the hybrid ARQ with selective combining yields better performance than the generalized type-II ARQ scheme for fading channels, and simulation results of real-time video time division multiple access (TDMA) transmission system are given.
Abstract: We propose and analyze a hybrid automatic repeat request (ARQ) with a selective combining scheme using rate-compatible punctured convolutional (RCPC) codes for fading channels. A finite-state Markov channel model is used to represent the Rayleigh fading channels. We show that the hybrid ARQ with selective combining yields better performance than the generalized type-II ARQ scheme for fading channels. Furthermore, simulation results of real-time video time division multiple access (TDMA) transmission system are given. Better video quality can be obtained by our proposed scheme, with a bounded delay. Analytical results of throughput and packet error rate (PER) are compared to the simulated results. Our analysis based on a finite-state Markov channel model, is shown to give good agreement with simulations.

Patent
19 Jan 1999
TL;DR: A rotary encoder error compensation system for use in a marking device with a rotary encoder includes an error correction logic circuit and an error correction table linked to the logic circuit.
Abstract: A rotary encoder error compensation system for use in a marking device with a rotary encoder includes an error correction logic circuit and an error correction table linked to the error correction logic circuit. The rotary encoder is configured to generate an encoder signal indicating a detected position of a rotating element and has a known rotary encoder error. The error correction logic circuit receives the encoder signal from the rotary encoder. The error correction table includes normalized error correction values. The error correction logic circuit accesses the error correction table based on the detected position of the rotating element and the known rotary encoder error and obtains one of the normalized error correction values corresponding to the detected position of the rotating element and the known rotary encoder error.
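
The lookup step the patent describes can be sketched as follows; the table contents (a unit-amplitude once-per-revolution error shape) and the error amplitude are hypothetical.

```python
# Table-based encoder correction: index normalized correction values by the
# detected position, scale by the device's known error amplitude (invented).
import math

N_TICKS = 256
KNOWN_ERROR_AMPLITUDE = 0.8   # device-specific, in ticks (hypothetical)

# Normalized (unit-amplitude) correction for a once-per-revolution error
correction_table = [math.sin(2 * math.pi * i / N_TICKS) for i in range(N_TICKS)]

def corrected_position(detected_tick):
    return detected_tick + KNOWN_ERROR_AMPLITUDE * correction_table[detected_tick % N_TICKS]

print(corrected_position(64))   # 64 + 0.8 * sin(pi/2) = 64.8
```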

Patent
13 Dec 1999
TL;DR: A message is wirelessly broadcast to wireless terminals by error correction coding it into a message block that is divided into frames, each of which is error correction coded again; at the terminals the frames are decoded, recombined into the message block, and decoded once more to recover the message.
Abstract: A message is wirelessly broadcast to wireless terminals by error correction coding the message to produce an error correction coded message block, dividing the error correction coded message block into frames and error correction coding the frames to produce error correction coded frames. The error correction coded frames are wirelessly broadcast to the wireless terminals. At the wireless terminals, the frames are received and the frames are error correction decoded to produce error correction decoded frames. The error correction decoded frames are combined into a message block, and the message block is error correction decoded to produce the message. By error correction coding the entire message in addition to error correction coding the frames of the message, long messages may be reliably broadcast and received notwithstanding fading and other transmission impairments. Accordingly, a broadcast channel that is designed for short message usage also may be used to reliably transmit long messages. The invention may, for example, be applicable to a TDMA system that includes a Digital Control CHannel (DCCH) having a short message Service Broadcast Control CHannel (S-BCCH) logical channel. The error correction coded frames are placed in the S-BCCH logical channel. The S-BCCH logical channel is then wirelessly broadcast to the radiotelephones in a plurality of TDMA time slots.
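
A toy two-level scheme in the spirit of the patent is sketched below: an inner per-frame parity bit flags a corrupted frame, and an outer XOR parity frame over the whole message then repairs it as an erasure. Real systems would use proper block codes at both levels; this only shows why coding the whole message in addition to the frames helps.

```python
# Toy nested coding: outer XOR parity frame over the message + inner
# per-frame parity bit. Inner code detects a bad frame; outer code repairs it.
from functools import reduce

def bitwise_xor(frames):
    return [reduce(lambda a, b: a ^ b, col) for col in zip(*frames)]

def encode(frames):
    with_outer = frames + [bitwise_xor(frames)]       # outer parity frame
    return [f + [sum(f) % 2] for f in with_outer]     # inner parity bit

def decode(received):
    bodies = [f[:-1] for f in received]
    bad = [i for i, f in enumerate(received) if sum(f[:-1]) % 2 != f[-1]]
    if len(bad) == 1 and bad[0] < len(bodies) - 1:
        # repair the flagged frame as an erasure using the outer parity frame
        others = [b for i, b in enumerate(bodies) if i != bad[0]]
        bodies[bad[0]] = bitwise_xor(others)
    return bodies[:-1]                                # drop outer parity frame

frames = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
tx = encode(frames)
tx[1][2] ^= 1                                         # corrupt one bit in transit
print(decode(tx) == frames)                           # True: frame 1 repaired
```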

Journal ArticleDOI
TL;DR: Using the Gilbert-Elliott model to study the performance of block-coded transmission over the land mobile channel, a new analytical expression illustrating the effect of various parameters, e.g., mobile speed, delay constraint, and parameters of the error-correcting code, is found.
Abstract: By using the Gilbert-Elliott (1960, 1963) model to study the performance of block-coded transmission over the land mobile channel, a new analytical expression illustrating the effect of various parameters, e,g., mobile speed, delay constraint, and parameters for the error correcting code, is found. Comparisons between the results obtained by this analytical expression and results obtained by computer simulations show that the analytical results are accurate for a broad range of channel parameters. The Gilbert-Elliott model is then used to compare the performance of different binary BCH codes when the delay constraint does not allow the assumption of infinite interleaving. In contrast to the memoryless case, where the performance typically is improved with increased block length, short codes are found to be as good, or even superior, due to the fact that the interleaver works better for shorter codes.
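
A small simulation of the two-state Gilbert-Elliott channel underlying the analysis is sketched below; the transition probabilities and per-state bit-error rates are hypothetical, chosen only to produce visible error bursts.

```python
# Two-state Markov (Gilbert-Elliott) channel: a "good" and a "bad" state with
# different bit-error rates; transitions set the burstiness that limited
# interleaving must break up. All parameter values are illustrative.
import random

def gilbert_elliott(n_bits, p_gb=0.01, p_bg=0.1, ber_good=1e-4, ber_bad=0.1):
    state, errors = "good", []
    for _ in range(n_bits):
        errors.append(random.random() < (ber_good if state == "good" else ber_bad))
        if state == "good" and random.random() < p_gb:
            state = "bad"
        elif state == "bad" and random.random() < p_bg:
            state = "good"
    return errors

errs = gilbert_elliott(100_000)
print(sum(errs) / len(errs))    # average BER dominated by the bursty bad state
```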

Journal ArticleDOI
01 Oct 1999
TL;DR: This paper advocates the use of rate-compatible punctured systematic recursive convolutional (RCPRSC) codes, which are shown to lead to a straightforward and versatile unequal error protection (UEP) design.
Abstract: Multimedia transmission has to handle a variety of compressed and uncompressed source signals such as data, text, image, audio, and video. On wireless channels the error rates are high, and joint source/channel coding and decoding methods are advantageous. Also, the system architecture has to adapt to the bad channel conditions. Several examples of a joint design are given. We especially advocate the use of rate-compatible punctured systematic recursive convolutional (RCPRSC) codes, which are shown to lead to a straightforward and versatile unequal error protection (UEP) design. In addition, the high-end receiver could use soft outputs and source-controlled channel decoding for even better performance.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the existence of a long-run aggregate merchandise import demand function for Bangladesh during the period 1974 - 94 and applied cointegration and error correction modeling approaches.
Abstract: This paper investigates the existence of a long-run aggregate merchandise import demand function for Bangladesh during the period 1974 - 94. The cointegration and error correction modelling approaches have been applied. Empirical results suggest that there exists a unique long-run or equilibrium relationship among real quantities of imports, real import prices, real GDP and real foreign exchange reserves. The dynamic behaviour of import demand has been investigated by estimating two types of error correction models, in which the error correction terms have been found significant. In model I, real import prices and real GDP (lagged one year) and in model II, real import prices, real GDP (lagged one year), real imports (lagged one quarter) and a dummy variable capturing the effects of import liberalization policies have all emerged as important determinants of import demand function. The error correction models have also been found to be robust as they satisfy almost all relevant diagnostic tests.
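
On synthetic data, a two-step (Engle-Granger style) error correction model looks roughly like the sketch below: a long-run levels regression produces the error correction term, which then enters a regression of first differences. The data-generating process and coefficients are invented for illustration, not the paper's estimates.

```python
# Two-step error correction model on synthetic cointegrated series.
import numpy as np

rng = np.random.default_rng(1)
T = 200
gdp = np.cumsum(rng.normal(0.1, 1.0, T))          # I(1) regressor
imports = 0.8 * gdp + rng.normal(0.0, 0.5, T)     # cointegrated with gdp

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Step 1: long-run levels relation; residual = deviation from equilibrium
X1 = np.column_stack([np.ones(T), gdp])
ect = imports - X1 @ ols(X1, imports)             # error correction term

# Step 2: short-run dynamics with the lagged error correction term
dy, dx, ect_lag = np.diff(imports), np.diff(gdp), ect[:-1]
X2 = np.column_stack([np.ones(T - 1), dx, ect_lag])
beta = ols(X2, dy)
print(beta)   # beta[2] < 0: disequilibrium is corrected over time
```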

Proceedings ArticleDOI
05 Dec 1999
TL;DR: A solution is proposed which uses a modified concatenation scheme, in which the positions of the modulation and error-correcting codes are reversed, and improved performance is obtained by iterating with this soft constraint decoder.
Abstract: Soft iterative decoding of turbo codes and low-density parity check codes has been shown to offer significant improvements in performance. To apply soft iterative decoding to digital recorders, where binary modulation constraints are often used, modifications must be made to allow reliability information to be accessible by the decoder. A solution is proposed which uses a modified concatenation scheme, in which the positions of the modulation and error-correcting codes are reversed. In addition, a soft decoder based on the BCJR algorithm is introduced for the modulation constraint, and improved performance is obtained by iterating with this soft constraint decoder.

Journal ArticleDOI
TL;DR: A test vector simulation-based approach for multiple design error diagnosis and correction in digital VLSI circuits that is applicable to circuits with no global binary decision diagram representation.
Abstract: With the increase in the complexity of digital VLSI circuit design, logic design errors can occur during synthesis. In this paper, we present a test vector simulation-based approach for multiple design error diagnosis and correction. Diagnosis is performed through an implicit enumeration of the erroneous lines in an effort to avoid the exponential explosion of the error space as the number of errors increases. Resynthesis during correction is as little as possible so that most of the engineering effort invested in the design is preserved. Since both steps are based on test vector simulation, the proposed approach is applicable to circuits with no global binary decision diagram representation. Experiments on ISCAS'85 benchmark circuits exhibit the robustness and error resolution of the proposed methodology. Experiments also indicate that test vector simulation is indeed an attractive technique for multiple design error diagnosis and correction in digital VLSI circuits.

Journal ArticleDOI
01 Jan 1999-EPL
TL;DR: This work investigates the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index, and examines the finite-temperature case.
Abstract: We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.

Journal ArticleDOI
TL;DR: An adaptive radio is being designed that adapts the frame length, error control, processing gain, and equalization to different channel conditions, while minimizing battery energy consumption.
Abstract: The quality of wireless links suffers from time-varying channel degradations such as interference, flat-fading, and frequency-selective fading. Current radios are limited in their ability to adapt to these channel variations because they are designed with fixed values for most system parameters such as frame length, error control, and processing gain. The values for these parameters are usually a compromise between the requirements for worst-case channel conditions and the need for low implementation cost. Therefore, in benign channel conditions these commercial radios can consume more battery energy than needed to maintain a desired link quality, while in a severely degraded channel they can consume energy without providing any quality-of-service (QoS). While techniques for adapting radio parameters to channel variations have been studied to improve link performance, in this paper they are applied to minimize battery energy. Specifically, an adaptive radio is being designed that adapts the frame length, error control, processing gain, and equalization to different channel conditions, while minimizing battery energy consumption. Experimental measurements and simulation results are presented to illustrate the adaptive radio's energy savings.

Journal ArticleDOI
TL;DR: It is shown that a threshold result still exists in three, two, or one spatial dimensions when next-to-nearest-neighbour gates are available, and explicit constructions are presented.
Abstract: I discuss how to perform fault-tolerant quantum computation with concatenated codes using local gates in small numbers of dimensions. I show that a threshold result still exists in three, two, or one dimensions when next-to-nearest-neighbor gates are available, and present explicit constructions. In two or three dimensions, I also show how nearest-neighbor gates can give a threshold result. In all cases, I simply demonstrate that a threshold exists, and do not attempt to optimize the error correction circuit or determine the exact value of the threshold. The additional overhead due to the fault-tolerance in both space and time is polylogarithmic in the error rate per logical gate.