
Showing papers on "Error detection and correction" published in 2002


Journal ArticleDOI
TL;DR: A belief-propagation (BP)-based decoding algorithm which utilizes normalization to improve the accuracy of the soft values delivered by a previously proposed simplified BP-based algorithm is proposed.
Abstract: In this paper, we propose a belief-propagation (BP)-based decoding algorithm which utilizes normalization to improve the accuracy of the soft values delivered by a previously proposed simplified BP-based algorithm. The normalization factors can be obtained not only by simulation, but also, importantly, theoretically. This new BP-based algorithm is much simpler to implement than BP decoding as it requires only additions of the normalized received values and is universal, i.e., the decoding is independent of the channel characteristics. Some simulation results are given, which show this new decoding approach can achieve an error performance very close to that of BP on the additive white Gaussian noise channel, especially for low-density parity check (LDPC) codes whose check sums have large weights. The principle of normalization can also be used to improve the performance of the max-log-MAP algorithm in turbo decoding, and some coding gain can be achieved if the code length is long enough.
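The check-node update behind this scheme can be sketched in a few lines. Below is a minimal illustration of a normalized min-sum update, assuming a single scalar normalization factor alpha (the value 0.8 is illustrative; the paper obtains factors by simulation or theoretically):

    import numpy as np

    def check_node_message(in_llrs, alpha=0.8):
        # Min-sum approximation of the BP check-node update: sign product
        # and minimum magnitude of the incoming (extrinsic) LLRs.
        sign = np.prod(np.sign(in_llrs))
        mag = np.min(np.abs(in_llrs))
        # The min-sum magnitude is over-optimistic; normalization scales it
        # back toward the exact BP value.
        return alpha * sign * mag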

660 citations


Journal ArticleDOI
TL;DR: EDDI can provide over 98% fault-coverage without any extra hardware for error detection, which is especially useful when designers cannot change the hardware, but they need dependability in the computer system.
Abstract: This paper proposes a pure software technique "error detection by duplicated instructions" (EDDI), for detecting errors during usual system operation. Compared to other error-detection techniques that use hardware redundancy, EDDI does not require any hardware modifications to add error detection capability to the original system. EDDI duplicates instructions during compilation and uses different registers and variables for the new instructions. Especially for the fault in the code segment of memory, formulas are derived to estimate the error-detection coverage of EDDI using probabilistic methods. These formulas use statistics of the program, which are collected during compilation. EDDI was applied to eight benchmark programs and the error-detection coverage was estimated. Then, the estimates were verified by simulation, in which a fault injector forced a bit-flip in the code segment of executable machine codes. The simulation results validated the estimated fault coverage and show that approximately 1.5% of injected faults produced incorrect results in eight benchmark programs with EDDI, while on average, 20% of injected faults produced undetected incorrect results in the programs without EDDI. Based on the theoretical estimates and actual fault-injection experiments, EDDI can provide over 98% fault-coverage without any extra hardware for error detection. This pure software technique is especially useful when designers cannot change the hardware, but they need dependability in the computer system. To reduce the performance overhead, EDDI schedules the instructions that are added for detecting errors such that "instruction-level parallelism" (ILP) is maximized. Performance overhead can be reduced by increasing ILP within a single super-scalar processor. The execution time overhead in a 4-way super-scalar processor is less than the execution time overhead in the processors that can issue two instructions in one cycle.
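EDDI operates at compile time on machine instructions and registers, but the idea admits a short source-level analogue. The sketch below is purely illustrative, not the paper's implementation: a master and a shadow computation use separate variables, and an inserted comparison flags any divergence caused by a transient fault:

    def eddi_add(a, b):
        r1 = a + b          # master instruction
        r2 = a + b          # duplicated instruction (conceptually different registers)
        if r1 != r2:        # comparison inserted for error detection
            raise RuntimeError("EDDI: mismatch, transient error detected")
        return r1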

581 citations


Journal ArticleDOI
TL;DR: In this paper, attention is limited to two-dimensional inviscid flows using a standard finite volume discretization, although the procedure may be readily applied to other types of multidimensional problems and discretizations.

380 citations


Proceedings ArticleDOI
Robert Baumann1
08 Dec 2002
TL;DR: Memory and logic scaling trends are discussed along with a method for determining the soft error rate (SER) of logic, which may limit future product reliability in advanced CMOS devices.
Abstract: The soft error rate (SER) of advanced CMOS devices is higher than all other reliability mechanisms combined. Memories can be protected with error correction circuitry but SER in logic may limit future product reliability. Memory and logic scaling trends are discussed along with a method for determining logic SER.

336 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide densities and finite sample critical values for the single-equation error correction statistic for testing cointegration, using Monte Carlo simulations and a computer program to calculate both critical values and p-values.
Abstract: This paper provides densities and finite sample critical values for the single-equation error correction statistic for testing cointegration. Graphs and response surfaces summarize extensive Monte Carlo simulations and highlight simple dependencies of the statistic's quantiles on the number of variables in the error correction model, the choice of deterministic components, and the sample size. The response surfaces provide a convenient way for calculating finite sample critical values at standard levels; and a computer program, freely available over the Internet, can be used to calculate both critical values and p-values. Two empirical applications illustrate these tools.
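The statistic being tabulated is the t-ratio on the error-correction term in a single-equation ECM. A minimal sketch of how that statistic is computed (one lagged regressor, no lagged differences — a simplification; the paper's tables supply the non-standard critical values to compare against):

    import numpy as np

    def ecm_t_stat(y, x):
        # Regress  dy_t = a + b*y_{t-1} + c*x_{t-1} + e_t  by OLS.
        dy = np.diff(y)
        X = np.column_stack([np.ones(len(dy)), y[:-1], x[:-1]])
        beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
        resid = dy - X @ beta
        s2 = resid @ resid / (len(dy) - X.shape[1])
        se_b = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        return beta[1] / se_b   # compare against the tabulated critical values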

300 citations


Journal ArticleDOI
U. Mittal1, N. Phamdo
TL;DR: It is demonstrated that robust codes exist whenever the source and channel bandwidths are equal; for the unequal-bandwidth case, a matched tandem code whose channel encoder's output is partially/fully matched to its input is proposed, and the existence of an asymptotically optimal matched tandem code is shown.
Abstract: We consider the problem of transmitting a band-limited Gaussian source on an additive band-limited Gaussian noise channel. The well-known "threshold effect" dictates that the more powerful a code is, the more sensitive it is to the exact knowledge of the channel noise. A code is said to be robust if it is asymptotically optimal for a wide range of channel noise. Thus, robust codes have a "graceful degradation" characteristic and are free of the threshold effect. It is demonstrated that robust codes exist whenever the source and channel bandwidths are equal. In the unequal-bandwidth case, a collection of nearly robust joint source-channel codes is constructed using a hybrid digital-analog (HDA) coding technique. For designing nearly robust codes, a matched tandem code whose channel encoder's output is partially/fully matched to its input is proposed and the existence of an asymptotically optimal matched tandem code is shown. The nearly robust codes achieve the Shannon limit (theoretically optimum distortion) and have a less severe threshold effect. Finally, for the case of two different noise conditions, the distortion regions of these codes are determined.
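For reference, in the equal-bandwidth case the Shannon limit the abstract refers to follows from equating the Gaussian rate-distortion function with the AWGN channel capacity: with source variance \sigma_S^2, signal power P, and noise power N,

    D_{\min} = \frac{\sigma_S^2}{1 + P/N},

and a robust code is one that approaches this distortion simultaneously over a range of actual P/N values rather than only at the design point.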

256 citations


Journal ArticleDOI
TL;DR: Redundant precoders with cyclic prefix and superimposed training sequences are designed for optimal channel estimation and guaranteed symbol detectability, regardless of the underlying frequency-selective FIR channels.
Abstract: The adoption of orthogonal frequency-division multiplexing by wireless local area networks and audio/video broadcasting standards testifies to the importance of recovering block precoded transmissions propagating through frequency-selective finite-impulse response (FIR) channels. Existing block transmission standards invoke bandwidth-consuming error control codes to mitigate channel fades, and training sequences to identify the FIR channels. To enable block-by-block receiver processing, we design redundant precoders with cyclic prefix and superimposed training sequences for optimal channel estimation and guaranteed symbol detectability, regardless of the underlying frequency-selective FIR channels. Numerical results are presented to assess the performance of the designed training and precoding schemes.
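The role of the cyclic prefix can be checked numerically: with a prefix at least as long as the FIR channel memory, linear convolution becomes circular, so each DFT bin sees a single complex gain — which is what makes block-by-block estimation and detection possible. A small self-contained sketch (sizes and random channel are illustrative):

    import numpy as np

    N, L = 8, 3                                        # block length, channel memory
    rng = np.random.default_rng(0)
    h = rng.normal(size=L) + 1j * rng.normal(size=L)   # FIR channel
    s = rng.normal(size=N) + 1j * rng.normal(size=N)   # one precoded block
    tx = np.concatenate([s[-L:], s])                   # prepend cyclic prefix
    rx = np.convolve(tx, h)[L:L + N]                   # channel, then drop the prefix
    # After CP removal the channel is diagonal in the DFT domain:
    assert np.allclose(np.fft.fft(rx), np.fft.fft(h, N) * np.fft.fft(s))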

255 citations


Journal ArticleDOI
TL;DR: An iterative block decision feedback equaliser (IB-DFE) for single carrier modulation is proposed which operates on blocks of the receive signal, thus allowing the use of error correction codes on the feedback data signal.
Abstract: An iterative block decision feedback equaliser (IB-DFE) for single carrier modulation is proposed. Filtering operations are implemented by discrete Fourier transforms (DFTs) which yield a reduced computational complexity, for both filter design and signal processing, when compared to existing DFEs. Moreover, the new IB-DFE operates on blocks of the receive signal, thus allowing the use of error correction codes on the feedback data signal.

253 citations


Patent
15 Jan 2002
TL;DR: An improved optical code (101, 102, 103) reading system and method that enhances the ability of a reader to locate a symbol within a field of view and enhances the error-correcting properties of the encoding scheme commonly used in 2D bar codes was proposed in this paper.
Abstract: An improved optical code (101, 102, 103) reading system and method that enhances the ability of a reader to locate a symbol within a field of view and enhances the error-correcting properties of the encoding scheme commonly used in 2D bar codes (101, 102, 103). The reader offsets the effects of damaged finder patterns and missing symbol perimeters and, thereafter, detects high-level symbol information such as the code type, symbol size, and the number of rows and columns in the symbol. The reader then identifies those missing portions of a damaged symbol and marks each missing data bit location with a predetermined indicator. A decoding algorithm then interprets the missing bit indicator as an error of known location (e.g., an 'erasure'), thereby nearly doubling the error correcting strength of all bar codes employing the Reed-Solomon error correction scheme.
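The "nearly doubling" claim follows from Reed-Solomon decoding theory: a code with r parity symbols corrects e unknown-location errors and s known-location erasures whenever 2e + s <= r, so flagging damaged cells as erasures is worth twice as much as leaving the decoder to locate the errors itself. A one-line check of that rule:

    def rs_decodable(parity_symbols, errors, erasures):
        # Reed-Solomon: minimum distance = parity_symbols + 1; decoding
        # succeeds when 2*errors + erasures <= parity_symbols.
        return 2 * errors + erasures <= parity_symbols

    assert rs_decodable(10, errors=5, erasures=0)    # 5 errors, locations unknown
    assert rs_decodable(10, errors=0, erasures=10)   # 10 erasures, locations known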

251 citations


Journal ArticleDOI
TL;DR: A new approach for multiple description video coding employs a second-order predictor for motion compensation, which predicts a current frame from two previously coded frames; the predictor and mismatch quantizer can be varied to achieve a wide range of tradeoffs between coding efficiency and error resilience.
Abstract: A new approach for multiple description video coding is proposed. It employs a second-order predictor for motion-compensation, which predicts a current frame from two previously coded frames. The coder generates two descriptions, containing the coded even and odd frames, respectively. When only a single description (say, that containing even frames) is received, the decoder can only use previous even frames for prediction. The mismatch between the predicted frames at the encoder and decoder is explicitly coded to avoid error propagation in the ideal MD channels, where a description is either received intact or lost completely. By using the second-order predictor and coding the mismatch signal, one can also suppress error propagation in packet lossy networks where packets in either description can be lost. The predictor and the mismatch signal quantizer can be varied to achieve a wide range of tradeoffs between coding efficiency and error resilience.
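The core of the scheme is the second-order temporal predictor. A hedged sketch, with illustrative weights a1 and a2 (the paper varies these to trade coding efficiency against error resilience):

    import numpy as np

    def predict_frame(prev1, prev2, a1=0.6, a2=0.4):
        # Predict the current frame from the two previously coded frames;
        # the encoder additionally codes the encoder/decoder mismatch so a
        # single received description does not propagate drift.
        return a1 * np.asarray(prev1) + a2 * np.asarray(prev2)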

239 citations


Proceedings Article
01 Jan 2002
TL;DR: A hybrid ARQ/FEC system using RC-LDPC codes is evaluated and can achieve an information rate about 1 dB away from the theoretical limit, which is comparable to turbo ARQ systems, yet with lower decoding complexity.

Abstract: Strong rate-compatible codes are important for achieving high throughput in hybrid automatic repeat request with forward error correction (ARQ/FEC) systems for packet data transmission. This paper focuses on the construction of efficient rate-compatible low-density parity-check (RC-LDPC) codes over a wide rate range. The conventional approach of puncturing is first studied. Analysis of the code ensemble and the asymptotic performance shows that it works well only at high rates and only when the amount of puncturing is small. To extend the dynamic rate range, a special approach of extending is proposed and is shown to produce good RC-LDPC codes at low rates. Combining both approaches, efficient RC-LDPC codes are constructed and a hybrid ARQ/FEC system using RC-LDPC codes is evaluated. The proposed LDPC-ARQ system can achieve an information rate about 1 dB away from the theoretical limit, which is comparable to turbo ARQ systems [1][2], yet with lower decoding complexity.
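The two rate-adaptation moves are easy to state: puncturing withholds p parity bits of an (n, k) mother code, raising the rate, while extending appends e extra parity checks, lowering it. The trivial sketch below only computes the resulting rates, which is where the paper's observation lives — puncturing degrades once p grows large, so extending is used to reach the low-rate end:

    def punctured_rate(n, k, p):
        return k / (n - p)      # higher rate; works well only for small p

    def extended_rate(n, k, e):
        return k / (n + e)      # lower rate via extra parity checks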

Proceedings ArticleDOI
23 Jun 2002
TL;DR: A new class of polynomials is identified that provides HD=6 up to nearly 16K bit and HD=4 up to 114K bit message lengths, providing the best achievable design point that maximizes error detection for both legacy and new applications, including potentially iSCSI and application-implemented error checks.
Abstract: Standardized 32-bit cyclic redundancy codes provide fewer bits of guaranteed error detection than they could, achieving a Hamming Distance (HD) of only 4 for maximum-length Ethernet messages, whereas HD=6 is possible. Although research has revealed improved codes, exploring the entire design space has previously been computationally intractable, even for special-purpose hardware. Moreover, no CRC polynomial has yet been found that satisfies an emerging need to attain both HD=6 for 12K bit messages and HD=4 for message lengths beyond 64 Kbits. This paper presents results from the first exhaustive search of the 32-bit CRC design space. Results from previous research are validated and extended to include identifying all polynomials achieving a better HD than the IEEE 802.3 CRC-32 polynomial. A new class of polynomials is identified that provides HD=6 up to nearly 16K bit and HD=4 up to 114K bit message lengths, providing the best achievable design point that maximizes error detection for both legacy and new applications, including potentially iSCSI and application-implemented error checks.
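For readers unfamiliar with what the search space evaluates: a CRC is the remainder of the message (times x^32) divided by a degree-32 generator polynomial over GF(2). A bare, unreflected sketch using the IEEE 802.3 polynomial — no initial value or final XOR, unlike the full Ethernet convention:

    def crc32_remainder(message_bits, poly=0x104C11DB7):
        reg = 0
        for bit in message_bits:            # shift message in, MSB first
            reg = (reg << 1) | bit
            if reg & (1 << 32):
                reg ^= poly                 # subtract generator (XOR in GF(2))
        for _ in range(32):                 # flush 32 zero bits (multiply by x^32)
            reg <<= 1
            if reg & (1 << 32):
                reg ^= poly
        return reg & 0xFFFFFFFF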

Proceedings ArticleDOI
04 Mar 2002
TL;DR: It is shown that retransmission strategies are more effective than correction ones from an energy viewpoint, owing both to their larger detection capability and to their lower decoding complexity.

Abstract: As technology scales toward deep submicron, on-chip interconnects are becoming more and more sensitive to noise sources such as power supply noise, crosstalk, radiation induced effects, etc. Transient delay and logic faults are likely to reduce the reliability of data transfers across data-path bus lines. This paper investigates how to deal with these errors in an energy efficient way. We could opt for error correction, which exhibits larger decoding overhead, or for the retransmission of the incorrectly received data word. Provided the timing penalty associated with this latter technique can be tolerated, we show that retransmission strategies are more effective than correction ones from an energy viewpoint, owing both to their larger detection capability and to their lower decoding complexity. The analysis was performed by implementing several variants of a Hamming code in the VHDL model of a processor based on the Sparc V8 architecture, and exploiting the characteristics of AMBA bus slave response cycles to carry out retransmissions in a way fully compliant with this standard on-chip bus specification.

Proceedings ArticleDOI
20 Oct 2002
TL;DR: In this paper, the authors introduce network error-correcting codes for error correction when a source message is transmitted to a set of receiving nodes on a network, and derive the network generalizations of the Hamming bound and the Gilbert-Varshamov bound.
Abstract: We introduce network error-correcting codes for error correction when a source message is transmitted to a set of receiving nodes on a network. The usual approach in existing networks, namely link-by-link error correction, is a special case of network error correction. The network generalizations of the Hamming bound and the Gilbert-Varshamov bound are derived.
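For reference, the classical point-to-point bound being generalized: a q-ary code with M codewords of length n that corrects t errors must satisfy the Hamming (sphere-packing) bound

    M \sum_{i=0}^{t} \binom{n}{i} (q-1)^i \le q^n,

and the Gilbert-Varshamov bound gives the matching existence guarantee. The paper derives network analogues of both.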

Journal ArticleDOI
TL;DR: This paper proposes to use a practical low-rate error correcting code in the ultra-wide bandwidth time-hopping spread-spectrum code division multiple-access system and indicates that the proposed coded scheme outperforms the uncoded scheme significantly and, at a given bit error rate, supports substantially more users.
Abstract: An ultra-wide bandwidth time-hopping spread-spectrum code division multiple-access system employing a binary PPM signaling has been introduced by Scholtz (1993), and its performance was obtained based on a Gaussian distribution assumption for the multiple-access interference. In this paper, we begin first by proposing to use a practical low-rate error correcting code in the system without any further required bandwidth expansion. We then present a more precise performance analysis of the system for both coded and uncoded schemes. Our analysis shows that the Gaussian assumption is not accurate for predicting bit error rates at high data transmission rates for the uncoded scheme. Furthermore, it indicates that the proposed coded scheme outperforms the uncoded scheme significantly, or more importantly, at a given bit error rate, the coding scheme increases the number of users by a factor which is logarithmic in the number of pulses used in time-hopping spread-spectrum systems.

Patent
Dean A. Klein1
09 Apr 2002
TL;DR: In this article, a memory scrubbing controller used with a DRAM stores data in an error correcting code format, and the system then uses a memory control state machine and associated timer to periodically cause the DRAM to read the error correcting codes.
Abstract: A scrubbing controller used with a DRAM stores data in an error correcting code format. The system then uses a memory control state machine and associated timer to periodically cause the DRAM to read the error correcting codes. An ECC generator/checker in the scrubbing controller then detects any errors in the read error correcting codes, and generates corrected error correcting codes that are written to the DRAM. This scrubbing procedure, by reading error correcting codes from the DRAM, inherently refreshes memory cells in the DRAM. The error correcting codes are read at a rate that may allow data errors to be generated, but these errors are corrected in the memory scrubbing procedure. However, the low rate at which the error correcting codes are read results in a substantial power saving compared to refreshing the memory cells at a higher rate needed to ensure that no data errors are generated.
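Conceptually, the scrubbing pass is a slow sweep that reads every ECC word (refreshing the row as a side effect), corrects what it finds, and writes back. A hypothetical sketch — the function names and period are illustrative, not the patent's interfaces:

    import time

    def scrub_sweep(read_ecc_word, write_word, n_words, period_s):
        while True:
            for addr in range(n_words):
                data, had_error = read_ecc_word(addr)   # ECC check/correct on read
                if had_error:
                    write_word(addr, data)              # write corrected word back
            time.sleep(period_s)   # the low sweep rate is the power saving
                                   # relative to conventional fast refresh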

Patent
18 Mar 2002
TL;DR: In this paper, an N-level cell memory controlled by the memory controller of the invention has an internal configuration in which the plurality of data input/output terminals connected to the second data bus are separated into first through Mth data input/output terminal groups, such that there is no redundancy in the n bits of data associated with one N-level cell.
Abstract: An N-level cell memory controlled by the memory controller of the invention has an internal configuration in which the plurality of data input/output terminals connected to the second data bus are separated into first through Mth data input/output terminal groups, such that there is no redundancy in the n bits of data associated with one N-level cell. Together with this, the memory controller separates the plurality of data bits on the first data bus into first through Mth data groups, the ECC circuits generate error-correction codes for each of these data groups, and the first through Mth data groups and first through Mth error correction codes are input to the first through Mth data input/output terminals of the N-level cell memory, via the second data bus.

Patent
25 Mar 2002
TL;DR: In this article, a stream of data is encoded using concatenated error correcting codes and the encoded data is decoded using the codes and three levels of decoding, and a method and apparatus for performing error correction is described.
Abstract: A method and apparatus for performing error correction is described. A stream of data is encoded using concatenated error correcting codes. The encoded data is communicated over a transmission system. The encoded data is decoded using the codes and three levels of decoding.

Patent
10 Dec 2002
TL;DR: In this paper, a data communications system is provided to dynamically change the error processing between an ARQ function and an FEC function in accordance with the network status, thus enabling high-quality data playback.
Abstract: A data communications system is provided to dynamically change the error processing between an ARQ function and an FEC function in accordance with the network status, thus enabling high-quality data playback. In packet transmission, error correction control is performed on the basis of the network status monitored by a network monitoring unit. The error control mode is switched between FEC-based error control and ARQ-based error control (retransmission request processing) in accordance with packet loss or error occurrence on the network, and packet transmission is performed. If the RTT is short, error correction based on ARQ is selected. If the RTT is long, error correction not based on ARQ but on FEC is selected. Such dynamic error correction control is achieved.
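The switching rule reduces to a latency test: a retransmitted packet is only useful if it can still arrive before its playback deadline. A minimal sketch, with hypothetical names and an illustrative threshold:

    def choose_error_control(rtt_ms, playout_budget_ms=150):
        # Short round-trip time: a retransmission can beat the deadline.
        if rtt_ms < playout_budget_ms:
            return "ARQ"
        # Long round-trip time: rely on forward error correction instead.
        return "FEC"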

Journal ArticleDOI
10 Dec 2002
TL;DR: A 3GPP-compliant 4.1 Mb/s channel decoder supports data and voice calls in a unified turbo/Viterbi architecture with hardware interleaver memory and address computation.
Abstract: A channel decoder chip compliant with the 3GPP mobile wireless standard is described. It supports both data and voice calls simultaneously in a unified turbo/Viterbi decoder architecture. For voice services, the decoder can process over 128 voice channels encoded with rate 1/2 or 1/3, constraint length 9 convolutional codes. For data services, the turbo decoder is capable of processing any mix of rate 1/3, constraint length 4 turbo encoded data streams with an aggregate data rate of up to 2.5 Mb/s with 10 iterations per block (or 4.1 Mb/s with six iterations). The turbo decoder uses the log-MAP algorithm with a programmable log-sum correction table. It features an interleaver address processor that computes the 3GPP interleaver addresses for all block sizes, enabling it to quickly switch context to support different data services for several users. The decoder also contains the 3GPP first channel de-interleaving function and a post-decoder bit error rate estimation unit. The chip is fabricated in a 0.18-μm six-layer metal CMOS technology, has an active area of 9 mm², and has a peak clock frequency of 110.8 MHz at 1.8 V (nominal). The power consumption is 306 mW when turbo decoding a 2-Mb/s data stream with ten iterations per block and eight voice calls simultaneously.
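The programmable correction table implements the Jacobian logarithm at the heart of log-MAP: max*(a, b) = max(a, b) + ln(1 + e^{-|a-b|}). A sketch with an illustrative 8-entry table quantized in steps of 0.5 (the chip's table size and quantization are not specified here):

    import math

    # Correction term ln(1 + e^-x) sampled at x = 0, 0.5, ..., 3.5
    TABLE = [math.log1p(math.exp(-0.5 * i)) for i in range(8)]

    def max_star(a, b):
        idx = min(int(abs(a - b) * 2), len(TABLE) - 1)   # quantize |a - b|
        return max(a, b) + TABLE[idx]                    # max-log-MAP + correction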

Patent
28 Jun 2002
TL;DR: In this paper, a memory controller comprises a check bit encoder circuit and a check/correct circuit, which are coupled to decode the encoded data block and perform at least the detection of (i) and (ii) on the data block.
Abstract: A memory controller comprises a check bit encoder circuit and a check/correct circuit. The check bit encoder circuit is coupled to receive a data block to be written to a memory comprising a plurality of memory devices, and is configured to encode the data block with a plurality of check bits to generate an encoded data block. The plurality of check bits are defined to provide at least: (i) detection and correction of a failure of one of the plurality of memory devices; and (ii) detection and correction of a single bit error in the encoded data block following detection of the failure of one of the plurality of memory devices. The check/correct circuit is coupled to receive the encoded data block from the memory and is configured to decode the encoded data block and perform at least the detection of (i) and (ii) on the encoded data block.

Journal ArticleDOI
30 Jun 2002
TL;DR: With a proper choice of the initial p, the proposed improved bit-flipping (BF) algorithm achieves gain not only in performance, but also in average decoding time for signal-to-noise ratio (SNR) values of interest with respect to p = 1.
Abstract: In this correspondence, a new method for improving hard-decision bit-flipping decoding of low-density parity-check (LDPC) codes is presented. Bits with a number of unsatisfied check sums larger than a predetermined threshold are flipped with a probability p ≤ 1 which is independent of the code considered. The probability p is incremented during decoding according to some rule. With a proper choice of the initial p, the proposed improved bit-flipping (BF) algorithm achieves gain not only in performance, but also in average decoding time for signal-to-noise ratio (SNR) values of interest with respect to p = 1.
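A hedged sketch of the decoding rule described above; the initial probability, increment, and threshold are illustrative placeholders for the values the correspondence derives:

    import numpy as np

    def prob_bit_flip_decode(H, bits, threshold, p0=0.3, dp=0.05, max_iters=50):
        rng = np.random.default_rng()
        bits, p = bits.copy(), p0
        for _ in range(max_iters):
            syndrome = (H @ bits) % 2
            if not syndrome.any():
                break                              # all parity checks satisfied
            unsat = H.T @ syndrome                 # unsatisfied checks per bit
            flip = (unsat > threshold) & (rng.random(bits.size) < p)
            bits = (bits + flip) % 2               # flip candidates with prob. p
            p = min(1.0, p + dp)                   # increment p during decoding
        return bits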

Patent
16 Jul 2002
TL;DR: A method of accessing control information of a non-volatile solid state memory storage, including reading control information, reading error correction code data shared by said control information and by other associated data, is presented in this article.
Abstract: A method of accessing control information of a non-volatile solid state memory storage, including reading said control information; reading error correction code data shared by said control information and by other associated data; analyzing said error correction code data to determine if said control information is erroneous; and if erroneous, correcting said control information based on said error correction code data.

Journal ArticleDOI
TL;DR: An analysis of drift errors is provided to identify the sources of quality degradation when transcoding to a lower spatial resolution and it is found that the intra-refresh architecture offers the best tradeoff between quality and complexity and is also the most flexible.
Abstract: This paper discusses the problem of reduced-resolution transcoding of compressed video bitstreams. An analysis of drift errors is provided to identify the sources of quality degradation when transcoding to a lower spatial resolution. Two types of drift error are considered: a reference picture error, which has been identified in previous works, and error due to the noncommutative property of motion compensation and down-sampling, which is unique to this work. To overcome these sources of error, four novel architectures are presented. One architecture attempts to compensate for the reference picture error in the reduced resolution, while another architecture attempts to do the same in the original resolution. We present a third architecture that attempts to eliminate the second type of drift error and a final architecture that relies on an intrablock refresh method to compensate for all types of errors. In all of these architectures, a variety of macroblock level conversions are required, such as motion vector mapping and texture down-sampling. These conversions are discussed in detail. Another important issue for the transcoder is rate control. This is especially important for the intra-refresh architecture since it must find a balance between number of intrablocks used to compensate for errors and the associated rate-distortion characteristics of the low-resolution signal. The complexity and quality of the architectures are compared. Based on the results, we find that the intra-refresh architecture offers the best tradeoff between quality and complexity and is also the most flexible.

Journal ArticleDOI
TL;DR: In this article, the thermal spindle error and thermal feed axis error are considered, a measurement/compensation system for thermal error is introduced, and several modeling techniques for thermal error prediction are implemented, e.g., multiple linear regression, neural networks, and system identification methods.
Abstract: Thermally induced errors have been significant factors affecting machine tool accuracy. In this paper, the thermal spindle error and thermal feed axis error have been considered, and a measurement/compensation system for thermal error is introduced. Several modelling techniques for thermal errors are also implemented for thermal error prediction, e.g., multiple linear regression, neural networks, and system identification methods. The performances of the thermal error modelling techniques are evaluated and compared, showing that the system identification method is the optimum model, having the least deviation. The thermal error model for the feed axis is composed of geometric terms and thermal terms. The volumetric errors are calculated by combining the spindle thermal error and feed axis thermal error. In order to compensate for the thermal error in real time, the coordinates of the CNC controller are modified in the PMC program. After real-time compensation, the machine tool accuracy improved by a factor of about 4-5.
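Of the modelling techniques compared, multiple linear regression is the simplest to write down: regress the measured thermal displacement on the temperature sensor readings and feed the fitted coefficients to the real-time compensator. A minimal sketch (array shapes and names are assumptions):

    import numpy as np

    def fit_thermal_model(temps, error):
        # temps: (n_samples, n_sensors) temperatures; error: (n_samples,) displacement
        X = np.column_stack([np.ones(len(temps)), temps])
        coef, *_ = np.linalg.lstsq(X, error, rcond=None)
        return coef                        # intercept + per-sensor sensitivities

    def predict_error(coef, temps_now):
        return coef[0] + np.dot(coef[1:], temps_now)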

Patent
28 Jun 2002
TL;DR: In this paper, a memory controller includes a check/correct circuit and a data remap circuit, coupled to receive an encoded data block from a memory comprising a plurality of memory devices.
Abstract: A memory controller includes a check/correct circuit and a data remap circuit. The check/correct circuit is coupled to receive an encoded data block from a memory comprising a plurality of memory devices. The encoded data block includes a plurality of check bits, and the check/correct circuit is configured to decode the encoded data block and to detect a failure of one of the plurality of memory devices responsive to decoding the encoded data block. The data remap control circuit is configured to cause a remap of each of a plurality of encoded data blocks to avoid storing bits in the failing memory device. A memory controller may also be configured to detect and correct a first failed memory device and a second failed memory device of the plurality of memory devices.

Patent
30 Dec 2002
TL;DR: In this paper, an orthogonal frequency division multiplexing (OFDM) transmitter method, consistent with certain embodiments of the present invention, arranges OFDM data symbols representing data bits for transmission in a packet.
Abstract: An orthogonal frequency division multiplexing (OFDM) transmitter method, consistent with certain embodiments of the present invention arranges OFDM data symbols representing data bits for transmission in a packet. A prescribed pattern of OFDM data symbols are removed (212) and replaced (216) with pilot symbols. The packet is then transmitted (220) to an OFDM receiver that receives the packet (224) and determines a channel correction factor from the pilot pattern. The receiver then estimates a plurality of channel correction factors, one for each of the plurality of OFDM symbols representing data (228) and uses these correction factors to correct the OFDM symbols representing data (232). Arbitrary data are then inserted in place of the pilot symbols (236). The OFDM symbols representing data along with the arbitrary data are then decoded using an error correction decoder that corrects the errors induced by substitution of the pilot symbols for data symbols (240).
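The receiver-side correction can be sketched as pilot-based least-squares channel estimation followed by per-subcarrier division; the interpolation across data positions stands in for the patent's per-symbol correction factors, and all names are illustrative:

    import numpy as np

    def correct_ofdm_block(rx, pilot_idx, tx_pilots):
        # One-tap channel estimate at the (increasing) pilot positions ...
        H_pilots = rx[pilot_idx] / tx_pilots
        # ... interpolated to every subcarrier (np.interp accepts complex fp)
        H = np.interp(np.arange(rx.size), pilot_idx, H_pilots)
        return rx / H          # zero-forcing correction of the data symbols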

Patent
22 Jul 2002
TL;DR: In this paper, a multiplexing unit on the transmitting side estimates information amounts supplied from respective signal processing units, determines multiplex codes on the basis of respective information amounts, derives a parity of the first determined multiplex code to form a second multiplex codeword, adds a CRC to each of the multiplex coded codewords to generate two headers H1 and H2, takes out information data of respective media according to the two headers, incorporates the information data into a packet together with the two header H 1 and H 2, and outputs the packet.
Abstract: A multiplexing unit on the transmitting side estimates information amounts supplied from respective signal processing units, determines a multiplex code on the basis of respective information amounts, derives a parity of the first determined multiplex code to form a second multiplex code, adds a CRC to each of the multiplex codes to generate two headers H1 and H2, takes out information data of respective media according to the multiplex codes, incorporates the information data into a packet together with the two headers H1 and H2, and outputs the packet. If error correction of H1 is impossible on the receiving side, error correction decoding is conducted by using the header H2. If error correction of H2 is also impossible, error correction decoding is conducted collectively for H1 and H2.

Journal ArticleDOI
TL;DR: An automatic time stepping scheme with embedded error control is developed and applied to the moisture-based Richards equation, and uses a numerical estimate of the local truncation error and an efficient time step selector to control the temporal accuracy of the integration.
Abstract: An automatic time stepping scheme with embedded error control is developed and applied to the moisture-based Richards equation. The algorithm is based on the first-order backward Euler scheme, and uses a numerical estimate of the local truncation error and an efficient time step selector to control the temporal accuracy of the integration. Local extrapolation, equivalent to the use of an unconditionally stable Thomas–Gladwell algorithm, achieves second-order temporal accuracy at minimal additional costs. The time stepping algorithm also provides accurate initial estimates for the iterative non-linear solver. Numerical tests confirm the ability of the scheme to automatically optimize the time step size to match a user prescribed temporal error tolerance. An important merit of the proposed method is its conceptual and computational simplicity. It can be directly incorporated into existing or new software based on the backward Euler scheme (currently prevalent in subsurface hydrologic modelling), and markedly improves their performance compared with simple fixed or heuristic time step selection. The generality of the approach also makes possible its use for solving PDEs in other engineering applications, where strong non-linearity, stability or implementation considerations favour a simple and robust low-order method, or where there is a legacy of backward Euler codes in current use.
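Time step selectors in such schemes typically follow the standard controller form: compare the local truncation error estimate to the tolerance and rescale the step with the exponent matching the method order (first order for backward Euler). A sketch with illustrative safety and clamping constants, not the paper's exact selector:

    def next_step(dt, err_est, tol, safety=0.9, order=1):
        if err_est == 0.0:
            return 2.0 * dt                       # error vanished; grow freely
        factor = safety * (tol / err_est) ** (1.0 / (order + 1))
        return dt * min(2.0, max(0.5, factor))    # clamp growth and shrinkage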

Journal ArticleDOI
TL;DR: In this article, the authors proposed a thermal error model based on a correlation grouping and a successive linear regression analysis, where the residual mean square is minimized using a judgement function, which, although simple, is effective in the selection of variables in the error model.
Abstract: The objective of a thermal error compensation system for CNC machine tools is improved machining accuracy through real time error compensation. The compensation capability depends on the accuracy of the thermal error model. A thermal error model can be obtained using an appropriate combination of temperature variables. In this study, the thermal error modeling is based on a correlation grouping and a successive linear regression analysis. During the successive regression analysis, the residual mean square is minimized using a judgement function, which, although simple, is effective in the selection of variables in the error model. When evaluating the proposed thermal error model, the multi-collinearity problem and computational time are both improved through the correlation grouping, and the linear model is more robust against measurement noise than the engineering judgement model, which includes variables with higher order terms. The modeling method used in this study can be effectively and practically applied to real-time error compensation because it includes the advantages of simple application, reduced computational time, sufficient model accuracy, and model robustness.