
Showing papers on "Error detection and correction published in 2001"


Journal ArticleDOI
TL;DR: The ‘dual-weighted-residual method’ is introduced initially within an abstract functional analytic setting, and is then developed in detail for several model situations featuring the characteristic properties of elliptic, parabolic and hyperbolic problems.
Abstract: This article surveys a general approach to error control and adaptive mesh design in Galerkin finite element methods that is based on duality principles as used in optimal control. Most of the existing work on a posteriori error analysis deals with error estimation in global norms like the ‘energy norm’ or the L2 norm, involving usually unknown ‘stability constants’. However, in most applications, the error in a global norm does not provide useful bounds for the errors in the quantities of real physical interest. Further, their sensitivity to local error sources is not properly represented by global stability constants. These deficiencies are overcome by employing duality techniques, as is common in a priori error analysis of finite element methods, and replacing the global stability constants by computationally obtained local sensitivity factors. Combining this with Galerkin orthogonality, a posteriori estimates can be derived directly for the error in the target quantity. In these estimates local residuals of the computed solution are multiplied by weights which measure the dependence of the error on the local residuals. Those, in turn, can be controlled by locally refining or coarsening the computational mesh. The weights are obtained by approximately solving a linear adjoint problem. The resulting a posteriori error estimates provide the basis of a feedback process for successively constructing economical meshes and corresponding error bounds tailored to the particular goal of the computation. This approach, called the ‘dual-weighted-residual method’, is introduced initially within an abstract functional analytic setting, and is then developed in detail for several model situations featuring the characteristic properties of elliptic, parabolic and hyperbolic problems. 
After having discussed the basic properties of duality-based adaptivity, we demonstrate the potential of this approach by presenting a selection of results obtained for practical test cases. These include problems from viscous fluid flow, chemically reactive flow, elasto-plasticity, radiative transfer, and optimal control. Throughout the paper, open theoretical and practical problems are stated together with references to the relevant literature.
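The abstract above describes multiplying local residuals by adjoint-derived weights. As a hedged sketch (the notation below is assumed for illustration, not copied from the paper), the generic dual-weighted-residual estimate for a target functional J takes the form:

```latex
% Sketch of the generic dual-weighted-residual (DWR) estimate.
% u is the exact solution, u_h the Galerkin finite element solution,
% J the target quantity of interest, and T_h the computational mesh.
\[
  |J(u) - J(u_h)| \;\lesssim\; \eta
  \;:=\; \sum_{K \in \mathcal{T}_h} \rho_K(u_h)\,\omega_K(z_h),
\]
% rho_K: local residual of the computed solution on cell K;
% omega_K: weight obtained from an approximate solution z_h of the
% linear adjoint (dual) problem associated with J, measuring how
% strongly the error in J depends on the residual on cell K.
```

Refining the mesh where the products rho_K * omega_K are largest gives the goal-oriented feedback process the abstract describes.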

1,274 citations


Journal ArticleDOI
TL;DR: The basic goal in digital communications is to transport bits of information without losing too much information along the way, but the level of information loss that is tolerable/acceptable varies for different applications.
Abstract: The basic goal in digital communications is to transport bits of information without losing too much information along the way. The level of information loss that is tolerable/acceptable varies for different applications. The loss is measured in terms of the bit error rate, or BER. An interesting application that employs error control coding is a system with a storage medium such as a hard disk drive or a compact disc (CD). We can think of the channel as a block that causes errors to occur when a signal passes through it. Regardless of the error source, we can describe the problem as follows: when the transmitted signal arrives at the receiver after passing through the channel, the received data will have some bits that are in error. The system designer would like to incorporate ways to detect and correct these errors. The field that covers such digital processing techniques is known as error control coding.
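As a minimal illustration of the detect-and-correct idea described above (not from the paper itself), a single even-parity bit detects any one flipped bit, and a 3-bit repetition code corrects one:

```python
def add_parity(bits):
    """Append an even-parity bit so a single flipped bit is detectable."""
    return bits + [sum(bits) % 2]

def check_parity(word):
    """Return True if the received word passes the even-parity check."""
    return sum(word) % 2 == 0

def repeat3_encode(bit):
    """Encode one bit as three copies."""
    return [bit] * 3

def repeat3_decode(word):
    """Majority vote corrects any single bit error in a 3-bit codeword."""
    return 1 if sum(word) >= 2 else 0

# A single flipped bit is detected by the parity check ...
word = add_parity([1, 0, 1, 1])
word[2] ^= 1                   # the channel flips one bit
assert not check_parity(word)  # error detected

# ... and corrected by the repetition code.
assert repeat3_decode([1, 0, 1]) == 1
```

Real systems use far stronger codes, but the trade-off is the same: added redundancy buys a lower bit error rate after decoding.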

904 citations


Journal ArticleDOI
TL;DR: A linear-correction least-squares estimation procedure is proposed for the source localization problem under an additive measurement error model and yields an efficient source location estimator without assuming a priori knowledge of noise distribution.
Abstract: A linear-correction least-squares estimation procedure is proposed for the source localization problem under an additive measurement error model. The method, which can be easily implemented in a real-time system with moderate computational complexity, yields an efficient source location estimator without assuming a priori knowledge of noise distribution. Alternative existing estimators, including likelihood-based, spherical intersection, spherical interpolation, and quadratic-correction least-squares estimators, are reviewed and comparisons of their complexity, estimation consistency and efficiency against the Cramer-Rao lower bound are made. Numerical studies demonstrate that the proposed estimator performs better under many practical situations.

461 citations


Patent
02 May 2001
TL;DR: In this paper, new techniques and systems may be implemented to improve error correction in speech recognition systems, which may be used in a standard desktop environment, in a mobile environment, or in any other type of environment that can receive and/or present recognized speech.
Abstract: New techniques and systems may be implemented to improve error correction in speech recognition. Systems applying these techniques to correct recognition errors may be used in a standard desktop environment, in a mobile environment, or in any other type of environment that can receive and/or present recognized speech.

423 citations


Journal ArticleDOI
TL;DR: A new approach in a posteriori error estimation is studied, in which the numerical error of finite element approximations is estimated in terms of quantities of interest rather than the classical energy norm.
Abstract: In this paper, we study a new approach in a posteriori error estimation, in which the numerical error of finite element approximations is estimated in terms of quantities of interest rather than the classical energy norm. These so-called quantities of interest are characterized by linear functionals on the space of functions to which the solution belongs. We present here the theory with respect to a class of elliptic boundary-value problems, and in particular, show how to obtain accurate estimates as well as upper and lower bounds on the error. We also study the new concept of goal-oriented adaptivity, which embodies mesh adaptation procedures designed to control error in specific quantities. Numerical experiments confirm that such procedures greatly accelerate the attainment of local features of the solution to preset accuracies as compared to traditional adaptive schemes based on energy norm error estimates.

370 citations


Journal ArticleDOI
TL;DR: A new block code is introduced which is capable of correcting multiple insertion, deletion, and substitution errors, and consists of nonlinear inner codes, called "watermark" codes, concatenated with low-density parity-check codes over nonbinary fields.
Abstract: A new block code is introduced which is capable of correcting multiple insertion, deletion, and substitution errors. The code consists of nonlinear inner codes, which we call "watermark" codes, concatenated with low-density parity-check codes over nonbinary fields. The inner code allows probabilistic resynchronization and provides soft outputs for the outer decoder, which then completes decoding. We present codes of rate 0.7 and transmitted length 5000 bits that can correct 30 insertion/deletion errors per block. We also present codes of rate 3/14 and length 4600 bits that can correct 450 insertion/deletion errors per block.

328 citations


Journal ArticleDOI
TL;DR: This paper presents a technique for blindly estimating the amount of gamma correction in the absence of any calibration information or knowledge of the imaging device by exploiting the fact that gamma correction introduces specific higher-order correlations in the frequency domain.
Abstract: The luminance nonlinearity introduced by many imaging devices can often be described by a simple point-wise operation (gamma correction). This paper presents a technique for blindly estimating the amount of gamma correction in the absence of any calibration information or knowledge of the imaging device. The basic approach exploits the fact that gamma correction introduces specific higher-order correlations in the frequency domain. These correlations can be detected using tools from polyspectral analysis. The amount of gamma correction is then estimated by minimizing these correlations.
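The paper's blind estimator relies on polyspectral analysis, which is beyond a short sketch; the point-wise gamma model it inverts, however, is simple. In this hedged illustration (values and function names are assumed, not from the paper), distorting a linear ramp with gamma 2.2 and re-applying the reciprocal exponent recovers the original intensities:

```python
import numpy as np

def gamma_correct(x, g):
    """Point-wise gamma correction of intensities x in [0, 1]."""
    return np.clip(x, 0.0, 1.0) ** g

# A linear ramp, distorted with gamma 2.2 ...
x = np.linspace(0.0, 1.0, 11)
y = gamma_correct(x, 2.2)

# ... is restored by applying the reciprocal exponent.
x_rec = gamma_correct(y, 1.0 / 2.2)
assert np.allclose(x, x_rec)
```

Blind estimation amounts to searching for the inverse exponent without access to x, which is where the higher-order frequency-domain correlations come in.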

320 citations


Proceedings ArticleDOI
04 Dec 2001
TL;DR: A full state feedback control law is developed using a cascaded approach, and proved to globally asymptotically stabilize the heading and the cross-track error of the ship.
Abstract: The paper considers way-point tracking control of ships using yaw torque control. A full state feedback control law is developed using a cascaded approach, and proved to globally asymptotically stabilize the heading and the cross-track error of the ship. Simulation results are presented.

232 citations


Proceedings ArticleDOI
21 Jul 2001
TL;DR: It is demonstrated that time-varying effects on wireless channels result in wireless traces which exhibit non-stationary behavior over small window sizes, and an algorithm is presented that divides traces into stationary components in order to provide analytical channel models that more accurately represent characteristics such as burstiness, statistical distribution of errors, and packet loss processes.
Abstract: Techniques for modeling and simulating channel conditions play an essential role in understanding network protocol and application behavior. In [11], we demonstrated that inaccurate modeling using a traditional analytical model yielded significant errors in error control protocol parameter choices. In this paper, we demonstrate that time-varying effects on wireless channels result in wireless traces which exhibit non-stationary behavior over small window sizes. We then present an algorithm that divides traces into stationary components in order to provide analytical channel models that, relative to traditional approaches, more accurately represent characteristics such as burstiness, statistical distribution of errors, and packet loss processes. Our algorithm also generates artificial traces with the same statistical characteristics as actual collected network traces. For validation, we develop a channel model for the circuit-switched data service in GSM and show that it: (1) more closely approximates GSM channel characteristics than a traditional Gilbert model and (2) generates artificial traces that closely match collected traces' statistics. Using these traces in a simulator environment enables future protocol and application testing under different controlled and repeatable conditions.
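The traditional Gilbert model the paper compares against can be sketched in a few lines: a two-state Markov chain whose "bad" state produces bursty errors. This is an illustrative sketch with assumed parameter values, not the paper's trace-driven model:

```python
import random

def gilbert_trace(n, p_gb, p_bg, e_good=0.0, e_bad=0.5, seed=0):
    """Simulate n channel uses of a two-state Gilbert model.

    p_gb: P(good -> bad) transition; p_bg: P(bad -> good) transition;
    e_good / e_bad: per-state error probabilities.
    Returns a 0/1 error trace (1 = error).
    """
    rng = random.Random(seed)
    state_bad = False
    trace = []
    for _ in range(n):
        err_p = e_bad if state_bad else e_good
        trace.append(1 if rng.random() < err_p else 0)
        flip = p_bg if state_bad else p_gb
        if rng.random() < flip:
            state_bad = not state_bad
    return trace

# Errors cluster into bursts while the chain dwells in the bad state.
trace = gilbert_trace(10000, p_gb=0.01, p_bg=0.2)
```

The paper's point is that real wireless traces are non-stationary, so a single such parameterization fits poorly over long windows; their algorithm instead segments traces into stationary pieces and models each.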

224 citations


Patent
Kenneth K. Smith1
16 Aug 2001
TL;DR: In this article, a divider segregating the payload and redundancy portions may be dynamically relocated, thereby altering the size of the redundancy to allow for use of an error correcting code selected to provide the data integrity required in response to changing conditions.
Abstract: Data storage media, such as silicon-based non-volatile memory, are configured according to a data structure containing a payload portion and a redundancy portion. A divider segregating the payload and redundancy portions may be dynamically relocated, thereby altering the size of the redundancy to allow for use of an error correcting code selected to provide the data integrity required in response to changing conditions.

215 citations


Journal ArticleDOI
TL;DR: A performance model of (recognition-based) multimodal interaction that predicts input speed including time needed for error correction is introduced, which suggests that recognition accuracy determines user choice between modalities: while users initially prefer speech, they learn to avoid ineffective correction modalities with experience.
Abstract: Although commercial dictation systems and speech-enabled telephone voice user interfaces have become readily available, speech recognition errors remain a serious problem in the design and implementation of speech user interfaces. Previous work hypothesized that switching modality could speed up interactive correction of recognition errors. This article presents multimodal error correction methods that allow the user to correct recognition errors efficiently without keyboard input. Correction accuracy is maximized by novel recognition algorithms that use context information for recognizing correction input. Multimodal error correction is evaluated in the context of a prototype multimodal dictation system. The study shows that unimodal repair is less accurate than multimodal error correction. On a dictation task, multimodal correction is faster than unimodal correction by respeaking. The study also provides empirical evidence that system-initiated error correction (based on confidence measures) may not expedite error correction. Furthermore, the study suggests that recognition accuracy determines user choice between modalities: while users initially prefer speech, they learn to avoid ineffective correction modalities with experience. To extrapolate results from this user study, the article introduces a performance model of (recognition-based) multimodal interaction that predicts input speed including time needed for error correction. Applied to interactive error correction, the model predicts the impact of improvements in recognition technology on correction speeds, and the influence of recognition accuracy and correction method on the productivity of dictation systems. This model is a first step toward formalizing multimodal interaction.

Journal ArticleDOI
TL;DR: An effective framework for increasing the error-resilience of low bit-rate video communications over an error-prone packet-switched network is presented and a rate-distortion optimized mode-selection algorithm is introduced for this prioritized layered framework.
Abstract: We present an effective framework for increasing the error-resilience of low bit-rate video communications over an error-prone packet-switched network. Our framework is based on the principle of layered coding with transport prioritization. We introduce a rate-distortion optimized mode-selection algorithm for our prioritized layered framework. This algorithm is based on a joint source/channel-coding approach and trades off source coding efficiency for increased bitstream error-resilience to optimize the video coding mode selection within and across layers. The algorithm considers the channel conditions, as well as the error recovery and concealment capabilities, of the channel codec and source decoder, respectively. Important framework parameters including the packetization scheme, decoder error concealment method, and channel codec error-protection strength are considered. The effects of mismatch between the parameters employed by the encoder and the actual channel conditions are considered. Results are presented for a wide range of packet loss rates in order to illustrate the benefits of the proposed framework.

Patent
28 Jun 2001
TL;DR: In this paper, a receiving device can dynamically control and/or influence a sending device's decision regarding the level of error correction that is applied to streamed media, by generating a request for streamed media that specifies an initial requested error correction level.
Abstract: Methods and apparatuses are provided which allow a receiving device to dynamically control and/or otherwise influence a sending device's decision regarding the level of error correction that is applied to streamed media. One method includes having the receiving device generate a request for streamed media that specifies an initial requested error correction level. In this manner, the receiving device is allowed to initially negotiate an error correction level with the sending device that will be providing the streamed media. The receiving device may also dynamically modify the requested level of error correction applied to the streaming media. The sending and receiving devices may also initially and/or dynamically negotiate different error correction encoding schemes. Different error encoding scheme(s) and/or error correction levels can also be selectively applied to different types of streamed media data.

Patent
28 Dec 2001
TL;DR: In this article, an adaptive quality control loop for link rate adaptation that selectively adjusts channel condition thresholds based on delay sensitivity of data packets being transmitted is proposed. But it is not suitable for wireless communication systems incorporating error correction scheme using re-transmissions.
Abstract: An adaptive quality control loop for link rate adaptation that selectively adjusts channel condition thresholds based on delay sensitivity of data packets being transmitted. For wireless communication systems incorporating an error correction scheme using re-transmissions, the quality control loop adaptively adjusts channel condition thresholds more frequently for delay sensitive data packets, such as video, and less frequently for delay insensitive data packets, such as text messaging. Channel condition thresholds may be adjusted using fixed or variable steps based on error detection results.

Patent
23 Aug 2001
TL;DR: In this paper, a system for transmitting information as frames in digital format between users on a network using a single frequency and TDMA and/or CDMA communications is described, and a synchronization segment or pre-amble for a frame may include identification of the source and the intended recipient.
Abstract: A system for transmitting information as frames in digital format between users on a network using a single frequency and TDMA and/or CDMA communications. Broadcasting, multicasting and unicasting are provided, and communications may be part of a downlink and/or an uplink signaling scheme that allows user-user and user-central station communication. A synchronization segment or pre-amble for a frame may include identification of the source and/or the intended recipient. Walsh, Haar, Rademacher coding of selected frame components, among others, can be incorporated. Reed Solomon encoding, signal interleaving and intra-leaving, trellis encoding and turbo encoding are used for error detection and correction. The system provides two-way communication with the Internet and/or with a cellular network and/or for smaller networks of users.

Patent
Dave M. Brown1
23 Aug 2001
TL;DR: An error correction arrangement for a flash EEPROM array is presented in this paper, including a plurality of redundant array circuits, apparatus for sensing when a hardware error has occurred in a block of the array, and a circuit for replacing an array circuit with a redundant array circuit in response to detection of a hardware error.
Abstract: An error correction arrangement for a flash EEPROM array including a plurality of redundant array circuits, apparatus for sensing when a hardware error has occurred in a block of the flash EEPROM array, and a circuit for replacing an array circuit with a redundant array circuit in response to detection of a hardware error.

Journal ArticleDOI
TL;DR: The performance of packet-level media-independent forward error correction (FEC) schemes are computed in terms of both packet loss ratio and average burst length of multimedia data after error recovery.
Abstract: The performance of packet-level media-independent forward error correction (FEC) schemes is computed in terms of both the packet loss ratio and the average burst length of multimedia data after error recovery. The set of equations leading to the analytical formulation of both parameters is first given for a renewal error process. Finally, the FEC performance parameters are computed for a Gilbert (1960) model loss process and compared to various experimental data.
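For intuition about the packet loss ratio after FEC, here is a hedged sketch under a much simpler assumption than the paper's renewal/Gilbert processes: independent packet loss with probability p, and an (n, k) erasure code that recovers a block of n packets whenever at most n - k of them are lost. The function name and parameters are illustrative, not from the paper:

```python
from math import comb

def residual_loss_ratio(n, k, p):
    """Packet loss ratio after (n, k) erasure FEC under independent loss p.

    A block of n packets (k media, n - k parity) is fully recovered when
    at most n - k packets are lost; otherwise the lost packets stay lost,
    and j losses leave a fraction j/n of the block unrecovered.
    """
    return sum((j / n) * comb(n, j) * p**j * (1 - p) ** (n - j)
               for j in range(n - k + 1, n + 1))

# With no parity (k = n) the ratio is just p; adding 2 parity packets
# per 10 media packets reduces the residual loss sharply.
assert residual_loss_ratio(12, 10, 0.05) < residual_loss_ratio(12, 12, 0.05)
```

Correlated (bursty) loss, the case the paper analyzes, degrades this picture: bursts concentrate losses within blocks, which is exactly why the average burst length after recovery is a second performance parameter worth computing.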

Journal ArticleDOI
TL;DR: Although feedback is normally problematic in broadcast situations, ARQ can be simulated by having the receivers subscribe and unsubscribe to the delayed parity layers to receive missing information and this pseudo-ARQ scheme avoids an implosion of repeat requests at the sender.
Abstract: We consider the problem of error control for receiver-driven layered multicast of audio and video over the Internet. The sender injects into the network multiple source layers and multiple channel coding (parity) layers, some of which are delayed relative to the source. Each receiver subscribes to the number of source layers and the number of parity layers that optimizes the receiver's quality for its available bandwidth and packet loss probability. We augment this layered FEC system with layered pseudo-ARQ. Although feedback is normally problematic in broadcast situations, ARQ can be simulated by having the receivers subscribe and unsubscribe to the delayed parity layers to receive missing information. This pseudo-ARQ scheme avoids an implosion of repeat requests at the sender and is scalable to an unlimited number of receivers. We show gains of 4-18 dB on channels with 20% loss over systems without error control and additional gains of 1-13 dB when FEC is augmented by pseudo-ARQ in a hybrid system. Optimal error control in the hybrid system is achieved by an optimal policy for a Markov decision process.

Journal ArticleDOI
TL;DR: The decoding scheme proposed can be viewed as a turbo algorithm using alternately the intersymbol correlation due to the Markov source and the redundancy introduced by the channel code, which is used as a translator of soft information from the bit clock to the symbol clock.
Abstract: We analyze the dependencies between the variables involved in the source and channel coding chain. This analysis is carried out in the framework of Bayesian networks, which provide both an intuitive representation for the global model of the coding chain and a way of deriving joint (soft) decoding algorithms. Three sources of dependencies are involved in the chain: (1) the source model, a Markov chain of symbols; (2) the source coder model, based on a variable length code (VLC), for example a Huffman code; and (3) the channel coder, based on a convolutional error correcting code. Joint decoding relying on the hidden Markov model (HMM) of the global coding chain is intractable, except in trivial cases. We advocate instead an iterative procedure inspired from serial turbo codes, in which the three models of the coding chain are used alternately. This idea of using separately each factor of a big product model inside an iterative procedure usually requires the presence of an interleaver between successive components. We show that only one interleaver is necessary here, placed between the source coder and the channel coder. The decoding scheme we propose can be viewed as a turbo algorithm using alternately the intersymbol correlation due to the Markov source and the redundancy introduced by the channel code. The intermediary element, the source coder model, is used as a translator of soft information from the bit clock to the symbol clock.

Journal ArticleDOI
TL;DR: An associated decision problem is formalized and proved to be NP-complete, and algorithms based on dual codeword recognition are suggested, which will be efficient for codes of length up to 512 if the codewords contain no more than 1.5 errors.

Patent
27 Nov 2001
TL;DR: In this article, the erasure of data due to a media defect is detected inside the iterative decoder, and the second erasure flag is inputted into the inner decoder to perform erasure compensation.
Abstract: A recording and reproducing apparatus having an ECC-less error correction function includes an erasure detector generating an erasure flag indicating disappearance of the read signal, and an iterative decoder having two soft-in/soft-out (SISO) decoders, an inner decoder and an outer decoder, which corrects the erasure by inputting the erasure flag e_k into the inner decoder and performing erasure compensation there. As the erasure compensation in the inner decoder, channel information is masked while the erasure flag is on. The erasure of data due to a media defect is detected inside the iterative decoder, and the second erasure flag is inputted into the inner decoder to perform erasure compensation in the inner decoder.

Proceedings ArticleDOI
24 Oct 2001
TL;DR: These approaches exploit the inverse relationship that exists between Rijndael encryption and decryption at various levels and develop CED architectures that explore the trade-off between area overhead, performance penalty and error detection latency.
Abstract: Fault-based side channel cryptanalysis is very effective against symmetric and asymmetric encryption algorithms. Although straightforward hardware and time redundancy based Concurrent Error Detection (CED) architectures can be used to thwart such attacks, they entail significant overhead (either area or performance). In this paper we investigate systematic approaches to low-cost, low-latency CED for Rijndael symmetric encryption algorithm. These approaches exploit the inverse relationship that exists between Rijndael encryption and decryption at various levels and develop CED architectures that explore the trade-off between area overhead, performance penalty and error detection latency. The proposed techniques have been validated on FPGA implementations.

Patent
07 Sep 2001
TL;DR: In this article, a hybrid ARQ scheme with incremental data packet combining employs three feedback signaling commands: ACK, NACK, and LOST, which provides both robustness and good performance.
Abstract: The invention relates to a hybrid ARQ scheme with incremental data packet combining. In an example embodiment, the hybrid ARQ scheme with incremental data packet combining employs three feedback signaling commands: ACK, NACK, and LOST. Using these three feedback commands, the hybrid ARQ scheme with incremental data packet combining provides both robustness and good performance. The invention is particularly advantageous in communication systems with unreliable communication channels, e.g., a fading radio channel, where forward error correction (FEC) codes are used, some of the code symbols being more important than other code symbols. The benefits of the invention are increased throughput and decreased delay of the packet data communication.

Proceedings ArticleDOI
01 Jan 2001
TL;DR: This work designs redundant precoders with cyclic prefix (CP) and superimposed training sequences for optimal channel estimation and guaranteed symbol recovery regardless of the underlying FIR frequency-selective channels.
Abstract: The adoption of orthogonal frequency-division multiplexing (OFDM) by wireless local area networks and audio/video broadcasting standards testifies to the importance of recovering block precoded transmissions propagating through frequency-selective FIR channels. Existing block transmission standards invoke bandwidth-consuming error control codes to mitigate channel fades and training sequences to identify the FIR channels. To enable low-complexity block-by-block receiver processing, we design redundant precoders with cyclic prefix (CP) and superimposed training sequences for optimal channel estimation and guaranteed symbol recovery regardless of the underlying FIR frequency-selective channels. Numerical results are presented to assess the performance of the designed training and precoding schemes.
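The cyclic prefix mentioned above is what makes block-by-block processing cheap: prepending the tail of each block turns the linear convolution with the FIR channel into a circular one, which the FFT diagonalizes. A minimal sketch (assumed toy parameters and a fixed illustrative channel, not the paper's precoder design):

```python
import numpy as np

N, cp = 64, 8                             # subcarriers, cyclic-prefix length
h = np.array([1.0, 0.5, 0.25, 0.125])    # FIR channel, length <= cp + 1

rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)  # QPSK-like

x = np.fft.ifft(X)                        # time-domain OFDM block
tx = np.concatenate([x[-cp:], x])         # prepend the cyclic prefix

rx = np.convolve(tx, h)[: cp + N]         # linear convolution with the channel

# Removing the CP makes the convolution circular, so the FFT diagonalizes
# the channel: each subcarrier needs only a one-tap equalizer.
Y = np.fft.fft(rx[cp:])
H = np.fft.fft(h, N)
X_hat = Y / H
assert np.allclose(X_hat, X)
```

The one-tap division works here because the channel is known and well-conditioned; the paper's contribution is estimating H from superimposed training while guaranteeing symbol recovery for arbitrary FIR channels.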

Journal ArticleDOI
TL;DR: By tracing the flow of computations in the iterative decoders for low-density parity-check codes, this work forms a signal-space view for a finite number of iterations in a finite-length code and shows that some pseudocodewords cause decoding errors, but that there are also pseudocODewords that frequently correct the deleterious effects of other pseudocods.
Abstract: By tracing the flow of computations in the iterative decoders for low-density parity-check codes, we formulate a signal-space view for a finite number of iterations in a finite-length code. On a Gaussian channel, maximum a posteriori (MAP) codeword decoding (or "maximum-likelihood decoding") decodes to the codeword signal that is closest to the channel output in Euclidean distance. In contrast, we show that iterative decoding decodes to the "pseudosignal" that has highest correlation with the channel output. The set of pseudosignals corresponds to "pseudocodewords", only a vanishingly small number of which correspond to codewords. We show that some pseudocodewords cause decoding errors, but that there are also pseudocodewords that frequently correct the deleterious effects of other pseudocodewords.

Journal ArticleDOI
TL;DR: Two methods are developed to decode up to the true minimum distance of the (47,24,11) QR code, and these algorithms can be utilized to decode effectively the rate-1/2 (48,24,12) QR code for correcting five errors and detecting six errors.
Abstract: The techniques needed to decode the (47,24,11) quadratic residue (QR) code differ from the schemes developed for cyclic codes. By finding certain nonlinear relations between the known and unknown syndromes for this special code, two methods are developed to decode up to the true minimum distance of the (47,24,11) QR code. These algorithms can be utilized to decode effectively the rate-1/2 (48,24,12) QR code for correcting five errors and detecting six errors.

Proceedings ArticleDOI
01 Jul 2001
TL;DR: The results show that the developed framework is very useful for analysing error propagation and software vulnerability and for deciding where to place EDMs and ERMs.
Abstract: We present a novel approach for analysing the propagation of data errors in software. The concept of error permeability is introduced as a basic measure upon which we define a set of related measures. These measures guide us in the process of analysing the vulnerability of software to find the modules that are most likely exposed to propagating errors. Based on the analysis performed with error permeability and its related measures, we describe how to select suitable locations for error detection mechanisms (EDMs) and error recovery mechanisms (ERMs). A method for experimental estimation of error permeability, based on fault injection, is described and the software of a real embedded control system analysed to show the type of results obtainable by the analysis framework. The results show that the developed framework is very useful for analysing error propagation and software vulnerability and for deciding where to place EDMs and ERMs.

Patent
18 Dec 2001
TL;DR: In this article, a semiconductor storage device determines the cause of an error at the time of the error correction of data read out from a nonvolatile semiconductor memory, on the basis of a previously recorded error correction count, and selects a data refresh processing or a substitute processing to perform.
Abstract: A semiconductor storage device that determines the cause of an error at the time of the error correction of data read out from a non-volatile semiconductor memory, on the basis of a previously recorded error correction count, and selects a data refresh processing or a substitute processing to perform. When the error is detected, the corrected data is rewritten back for preventing reoccurrence of error due to accidental cause. If it is determined that the reoccurrence frequency of the error is high and the error is due to degradation of the storage medium, based on the error correction count, the substitute processing is performed.

Patent
20 Aug 2001
TL;DR: Linear MMSE equalization with parallel interference cancellation was proposed in this paper for symbol determination in the forward link of a CDMA communication system that has a plurality of code channels in use.
Abstract: The present invention provides linear MMSE equalization with parallel interference cancellation for symbol determination in a forward link of a CDMA communication system which has a plurality of code channels in use. Use of the linear MMSE equalization with parallel interference cancellation of the present invention provides significantly increased performance. The preferred method linearly filters a received signal to form a first filtered signal (410), despreads and demodulates the first filtered signal (415, 420) and provides a plurality of symbol estimates for all corresponding code channels (430). An estimated transmitted signal is generated from the plurality of symbol estimates (435), and with a channel estimate (405), an estimated received signal is generated (440). A residual signal is determined as a difference between the received signal and the estimated received signal, is linearly filtered (445), and then combined with the estimated transmitted signal to form a next, enhanced estimated transmitted signal (450). This next estimated transmitted signal is despread (455, 460) and utilized to provide a next plurality of symbol estimates, for a selected code channel of the plurality of channels, for subsequent use in error correction and decoding, and further use by a subscriber (465, 475).

Proceedings ArticleDOI
25 Nov 2001
TL;DR: A complete set of optimal data allocation algorithms for practical multicarrier systems is presented which solves three related problems: initial bit allocation, adaptive allocation updates, and limited power adjustments.
Abstract: A complete set of optimal data allocation algorithms for practical multicarrier systems is presented which solves three related problems: initial bit allocation, adaptive allocation updates, and limited power adjustments. Designed for standard compliant discrete multitone asymmetric digital subscriber line systems, the algorithms meet error correction overhead and spectral mask requirements. Unlike linear domain methods that are based on the gap approximation, these efficient algorithms are performed in the logarithmic domain and yield optimal results.