
Showing papers in "IEEE Transactions on Signal Processing in 1992"


Journal ArticleDOI
TL;DR: It is shown that the commonly used Lagrange à trous filters are in one-to-one correspondence with the convolutional squares of the Daubechies filters for orthonormal wavelets of compact support.
Abstract: Two separately motivated implementations of the wavelet transform are brought together. It is observed that these algorithms are both special cases of a single filter bank structure, the discrete wavelet transform, the behavior of which is governed by the choice of filters. In fact, the à trous algorithm is more properly viewed as a nonorthonormal multiresolution algorithm for which the discrete wavelet transform is exact. Moreover, it is shown that the commonly used Lagrange à trous filters are in one-to-one correspondence with the convolutional squares of the Daubechies filters for orthonormal wavelets of compact support. A systematic framework for the discrete wavelet transform is provided, and conditions are derived under which it computes the continuous wavelet transform exactly. Suitable filter constraints for finite energy and boundedness of the discrete transform are also derived. Relevant signal processing parameters are examined, and it is observed that orthonormality is balanced by restrictions on resolution.
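
The correspondence can be checked numerically. A minimal sketch (not from the paper): the order-2 Lagrange à trous filter equals the convolutional square, i.e. the autocorrelation h * h~, of the Daubechies D4 filter.

import numpy as np

# Daubechies D4 low-pass filter, normalized so the coefficients sum to sqrt(2)
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))

autocorr = np.convolve(h, h[::-1])                      # "convolutional square" h * h~
lagrange = np.array([-1, 0, 9, 16, 9, 0, -1]) / 16.0    # order-2 Lagrange à trous filter
print(np.allclose(autocorr, lagrange))                  # True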

1,856 citations


Journal ArticleDOI
TL;DR: The perfect reconstruction condition is posed as a Bezout identity, and it is shown how it is possible to find all higher-degree complementary filters based on an analogy with the theory of Diophantine equations.
Abstract: The wavelet transform is compared with the more classical short-time Fourier transform approach to signal analysis. Then the relations between wavelets, filter banks, and multiresolution signal processing are explored. A brief review is given of perfect reconstruction filter banks, which can be used both for computing the discrete wavelet transform, and for deriving continuous wavelet bases, provided that the filters meet a constraint known as regularity. Given a low-pass filter, necessary and sufficient conditions for the existence of a complementary high-pass filter that will permit perfect reconstruction are derived. The perfect reconstruction condition is posed as a Bezout identity, and it is shown how it is possible to find all higher-degree complementary filters based on an analogy with the theory of Diophantine equations. An alternative approach based on the theory of continued fractions is also given. These results are used to design highly regular filter banks, which generate biorthogonal continuous wavelet bases with symmetries.
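
The Bezout condition lends itself to a small numerical illustration. A hedged sketch (illustrative filters, not the paper's designs): perfect reconstruction pins down every other coefficient of the product P(z) = H0(z)G0(z) around a unit center tap, which is a linear system in the complementary filter; its null space is the Diophantine-like family of higher-degree complements.

import numpy as np

h0 = np.array([1.0, 2.0, 1.0]) / 2.0          # given low-pass: (1 + z^-1)^2 / 2
Lg = 5                                        # chosen length of the complementary filter
Lp = len(h0) + Lg - 1
C = np.zeros((Lp, Lg))
for j in range(Lg):
    C[j:j + len(h0), j] = h0                  # p = conv(h0, g0) = C @ g0

center = (Lp - 1) // 2                        # overall delay l = 3 (odd)
rows = np.arange(center % 2, Lp, 2)           # coefficients pinned by the Bezout identity
target = (rows == center).astype(float)       # half-band condition: p[l] = 1, others 0

g0, *_ = np.linalg.lstsq(C[rows], target, rcond=None)   # one (minimum-norm) solution
p = np.convolve(h0, g0)
print(np.allclose(p[rows], target))           # True: this g0 permits perfect reconstruction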

1,804 citations


Journal ArticleDOI
TL;DR: A least-mean-square adaptive filter with a variable step size is introduced, allowing it to track changes in the system while producing a small steady-state error.
Abstract: A least-mean-square (LMS) adaptive filter with a variable step size is introduced. The step size increases or decreases as the mean-square error increases or decreases, allowing the adaptive filter to track changes in the system as well as produce a small steady state error. The convergence and steady-state behavior of the algorithm are analyzed. The results reduce to well-known results when specialized to the constant-step-size case. Simulation results are presented to support the analysis and to compare the performance of the algorithm with the usual LMS algorithm and another variable-step-size algorithm. They show that its performance compares favorably with these existing algorithms.
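
A minimal sketch of a variable-step-size LMS update of this kind (the constants below are illustrative, not taken from the paper): the step size is inflated by the instantaneous squared error, otherwise decays geometrically, and is clipped to a stability range.

import numpy as np

def vss_lms(x, d, order=8, mu_min=1e-4, mu_max=0.1, alpha=0.97, gamma=5e-4):
    w = np.zeros(order)
    mu = mu_max
    e_hist = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                 # regressor, most recent sample first
        e = d[n] - w @ u                         # a priori error
        w = w + mu * e * u                       # LMS update with the current step size
        mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max)
        e_hist[n] = e
    return w, e_hist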

966 citations


Journal ArticleDOI
TL;DR: A directionally oriented 2-D filter bank with the property that the individual channels may be critically sampled without loss of information is introduced and it is shown that these filter bank outputs may be maximally decimated to achieve a minimum sample representation in a way that permits the original signal to be exactly reconstructed.
Abstract: The authors introduce a directionally oriented 2-D filter bank with the property that the individual channels may be critically sampled without loss of information. The passband regions of the component filters are wedge-shaped and thus provide directional information. It is shown that these filter bank outputs may be maximally decimated to achieve a minimum sample representation in a way that permits the original signal to be exactly reconstructed. The authors discuss the theory for directional decomposition and reconstruction. In addition, implementation issues are addressed where realizations based on both recursive and nonrecursive filters are considered.

911 citations


Journal ArticleDOI
TL;DR: A fundamental technique for designing a classifier that approaches the objective of minimum classification error in a more direct manner than traditional methods is given and is contrasted with several traditional classifier designs in typical experiments to demonstrate the superiority of the new learning formulation.
Abstract: A formulation is proposed for minimum-error classification, in which the misclassification probability is to be minimized based on a given set of training samples. A fundamental technique for designing a classifier that approaches the objective of minimum classification error in a more direct manner than traditional methods is given. The method is contrasted with several traditional classifier designs in typical experiments to demonstrate the superiority of the new learning formulation. The method can be applied to other classifier structures as well. Experimental results pertaining to a speech recognition task are provided to show the effectiveness of the technique.

759 citations


Journal ArticleDOI
Thrasyvoulos N. Pappas
TL;DR: The algorithm presented is a generalization of the K-means clustering algorithm that includes spatial constraints and accounts for local intensity variations in the image; the resulting segmentations preserve the most significant features of the originals while removing unimportant details.
Abstract: The problem of segmenting images of objects with smooth surfaces is considered. The algorithm that is presented is a generalization of the K-means clustering algorithm to include spatial constraints and to account for local intensity variations in the image. Spatial constraints are included by the use of a Gibbs random field model. Local intensity variations are accounted for in an iterative procedure involving averaging over a sliding window whose size decreases as the algorithm progresses. Results with an 8-neighbor Gibbs random field model applied to pictures of industrial objects, buildings, aerial photographs, optical characters, and faces show that the algorithm performs better than the K-means algorithm and its nonadaptive extensions that incorporate spatial constraints by the use of Gibbs random fields. A hierarchical implementation is also presented that results in better performance and faster speed of execution. The segmented images are caricatures of the originals which preserve the most significant features, while removing unimportant details. They can be used in image recognition and as crude representations of the image.
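
A hedged sketch of the core idea (illustrative only; the paper additionally adapts the class means over a sliding window whose size shrinks as the iteration proceeds): each pixel takes the label minimizing a squared intensity distance plus a Gibbs penalty proportional to the number of disagreeing 8-neighbors.

import numpy as np

def gibbs_kmeans(img, K=3, beta=0.5, iters=10):
    H, W = img.shape
    mu = np.linspace(img.min(), img.max(), K)        # initial class means
    lab = np.abs(img[..., None] - mu).argmin(-1)
    for _ in range(iters):
        pad = np.pad(lab, 1, mode='edge')
        cost = (img[..., None] - mu) ** 2            # data term, shape H x W x K
        for k in range(K):
            disagree = np.zeros((H, W))
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di or dj:
                        disagree += pad[1 + di:H + 1 + di, 1 + dj:W + 1 + dj] != k
            cost[..., k] += beta * disagree          # 8-neighbor Gibbs clique penalty
        lab = cost.argmin(-1)
        mu = np.array([img[lab == k].mean() if (lab == k).any() else mu[k]
                       for k in range(K)])           # (global) mean update
    return lab, mu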

575 citations


Journal ArticleDOI
TL;DR: In this article, an exact analysis of the critically subsampled two-band modelization scheme is given, and it is demonstrated that adaptive cross-filters between the subbands are necessary for modelization with small output errors.
Abstract: An exact analysis of the critically subsampled two-band modelization scheme is given, and it is demonstrated that adaptive cross-filters between the subbands are necessary for modelization with small output errors. It is shown that perfect reconstruction filter banks can yield exact modelization. These results are extended to the critically subsampled multiband schemes, and important computational savings are seen to be achieved by using good quality filter banks. The problem of adaptive identification in critically subsampled subbands is considered and an appropriate adaptation algorithm is derived. The authors give a detailed analysis of the computational complexity of all the discussed schemes, and experimentally verify the theoretical results that are obtained. The adaptive behavior of the subband schemes that were tested is discussed.

552 citations


Journal ArticleDOI
TL;DR: It is shown that the MEMP method can be faster than a 2-D FFT method if the number of 2-D sinusoids is much smaller than the size of the data set, and its accuracy can be very close to the Cramer-Rao lower bound.
Abstract: A new method, called the matrix enhancement and matrix pencil (MEMP) method, is presented for estimating two-dimensional (2-D) frequencies. In the MEMP method, an enhanced matrix is constructed from the data samples, and then the matrix pencil approach is used to extract the 2-D sinusoids from the principal eigenvectors of the enhanced matrix. The MEMP method yields the estimates of the 2-D frequencies efficiently, without solving the roots of a 2-D polynomial or searching in a 2-D space. It is shown that the MEMP method can be faster than a 2-D FFT method if the number of 2-D sinusoids is much smaller than the size of the data set. Simulation results are provided to show that the accuracy of the MEMP method can be very close to the Cramer-Rao lower bound.
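
The 1-D matrix pencil step at the heart of MEMP is easy to sketch (hedged; the paper's enhanced matrix extends this to 2-D): the dominant eigenvalues of the pencil built from two shifted Hankel data matrices are e^(j*omega_i).

import numpy as np

def matrix_pencil_freqs(y, p, L=None):
    y = np.asarray(y)
    N = len(y)
    L = L or N // 2                                        # pencil parameter
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])   # Hankel data matrix
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    z = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    z = z[np.argsort(-np.abs(z))[:p]]                      # keep the p dominant eigenvalues
    return np.sort(np.angle(z))

n = np.arange(64)
y = np.exp(1j * 0.3 * n) + np.exp(1j * 1.1 * n)
print(matrix_pencil_freqs(y, 2))                           # approx. [0.3, 1.1]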

479 citations


Journal ArticleDOI
TL;DR: Theoretical expressions for the error in the MUSIC DOA estimates are derived and compared with simulations performed for several representative cases, and an optimally weighted version of MUSIC is proposed for a particular class of array errors.
Abstract: Application of subspace-based algorithms to narrowband direction-of-arrival (DOA) estimation requires that both the array response in all directions of interest and the spatial covariance of the noise must be known. In practice, however, neither of these quantities is known precisely. Depending on the degree to which they deviate from their nominal values, serious performance degradation can result. The performance of the MUSIC algorithm is examined for situations in which the noise covariance and array response are perturbed from their assumed values. Theoretical expressions for the error in the MUSIC DOA estimates are derived and compared with simulations performed for several representative cases, and with the appropriate Cramer-Rao bound. An optimally weighted version of MUSIC is proposed for a particular class of array errors.
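
For reference, a minimal sketch of the nominal (unweighted) MUSIC pseudospectrum that the paper's analysis perturbs, assuming a uniform linear array with half-wavelength spacing; modeling errors enter precisely through the assumed steering vectors a(theta) and the assumed noise covariance.

import numpy as np

def music_spectrum(R, p, thetas):
    m = R.shape[0]
    w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = V[:, :m - p]                         # noise subspace
    k = np.arange(m)
    P = np.empty(len(thetas))
    for i, th in enumerate(thetas):
        a = np.exp(1j * np.pi * k * np.sin(th))          # nominal steering vector
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return P                                  # peaks near the true DOAs (thetas in radians)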

460 citations


Journal ArticleDOI
Ephraim Feig, S. Winograd
TL;DR: Algorithms for computing scaled DCTs and their inverses have applications in compression of continuous tone image data, where the DCT is generally followed by scaling and quantization.
Abstract: Several fast algorithms for computing discrete cosine transforms (DCTs) and their inverses on multidimensional inputs of sizes which are powers of 2 are introduced. Because the 1-D 8-point DCT and the 2-D 8×8-point DCT are so widely used, they are discussed in detail. Algorithms for computing scaled DCTs and their inverses are also presented. These have applications in compression of continuous tone image data, where the DCT is generally followed by scaling and quantization.
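
The "scaled DCT" idea admits a one-line algebraic check (hedged; SciPy's reference DCT stands in for the paper's fast algorithms): any fixed separable scaling of the transform output can be folded into the quantization table, so the coder only needs the cheaper scaled transform.

import numpy as np
from scipy.fft import dctn

x = np.random.rand(8, 8)
X = dctn(x, type=2, norm='ortho')               # reference 2-D 8x8 DCT-II
c = np.cos(np.arange(8) * np.pi / 16) + 1.5     # illustrative (nonzero) scale factors
s = np.outer(c, c)                              # separable diagonal scaling
Q = np.full((8, 8), 16.0)                       # illustrative quantization table
print(np.allclose(X / Q, (X / s) / (Q / s)))    # True: scaling folds into the table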

436 citations


Journal ArticleDOI
TL;DR: The authors present a class of time-frequency signal representations called the reduced interference distribution (RID), and a systematic procedure to create RID kernels (or, equivalently, compute RIDs) is proposed.
Abstract: The authors present a class of time-frequency signal representations (TFRs) called the reduced interference distribution (RID). An overview of commonly used TFRs is given, and desirable distribution properties are introduced. Particular attention is paid to the interpretation of Cohen's class of TFRs in the ambiguity, temporal correlation, spectral correlation, and time-frequency domains. Based on the desirable kernel requirements, the RID is discussed and further defined. A systematic procedure to create RID kernels (or, equivalently, compute RIDs) is proposed. Some aspects and properties of the RID are discussed. The authors examine design considerations for RIDs and compare various selections of the primitive window. Some experimental results demonstrating the performance of the RID are presented.

Journal ArticleDOI
TL;DR: Robust, computationally efficient, and consistent iterative parameter estimation algorithms are derived based on the method of maximum likelihood, and Cramer-Rao bounds are obtained; included among these algorithms are optimal fractal dimension estimators for noisy data.
Abstract: The role of the wavelet transformation as a whitening filter for 1/f processes is exploited to address problems of parameter and signal estimation for 1/f processes embedded in white background noise. Robust, computationally efficient, and consistent iterative parameter estimation algorithms are derived based on the method of maximum likelihood, and Cramer-Rao bounds are obtained. Included among these algorithms are optimal fractal dimension estimators for noisy data. Algorithms for obtaining Bayesian minimum-mean-square signal estimates are also derived, together with an explicit formula for the resulting error. These smoothing algorithms find application in signal enhancement and restoration. The parameter estimation algorithms, in addition to solving the spectrum estimation problem and providing parameters for the smoothing process, are useful in problems of signal detection and classification. Results from simulations are presented to demonstrate the viability of the algorithms.
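
The whitening property invites a simple estimator, sketched here (this is the crude regression variant, not the paper's iterative ML estimator): for a 1/f-type process the detail-coefficient variance grows like 2^(gamma*j) with scale j, so a line fit to log2-variance versus scale recovers the spectral exponent.

import numpy as np
import pywt

def spectral_exponent(x, wavelet='db4', levels=6):
    coeffs = pywt.wavedec(x, wavelet, level=levels)
    details = coeffs[1:]                       # coarsest detail band first
    j = np.arange(levels, 0, -1)               # scale index for each band
    logv = [np.log2(np.mean(d ** 2)) for d in details]
    return np.polyfit(j, logv, 1)[0]           # slope estimates gamma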

Journal ArticleDOI
TL;DR: A novel design procedure is presented based on the two-channel lossless lattice that enables the design of a large class of FIR (finite impulse response)-PR filter banks, and includes the N=2M case.
Abstract: The authors obtain a necessary and sufficient condition on the 2M (M=number of channels) polyphase components of a linear-phase prototype filter of length N = 2mM (where m is an arbitrary positive integer), such that the polyphase component matrix of the modulated filter is lossless. The losslessness of the polyphase component matrix, in turn, is sufficient to ensure that the analysis/synthesis system satisfies perfect reconstruction (PR). Using this result, a novel design procedure is presented based on the two-channel lossless lattice. This enables the design of a large class of FIR (finite impulse response)-PR filter banks, and includes the N = 2M case. It is shown that this approach requires fewer parameters to be optimized than in the pseudo-QMF (quadrature mirror filter) designs and in the lossless lattice based PR-QMF designs (for equal length filters in the three designs). This advantage becomes significant when designing long filters for large M. The design procedure and its other advantages are described in detail. Design examples and comparisons are included.

Journal ArticleDOI
TL;DR: A recursive algorithm for updating the coefficients of a neural network structure for complex signals is presented and the method yields the complex form of the conventional backpropagation algorithm.
Abstract: A recursive algorithm for updating the coefficients of a neural network structure for complex signals is presented. Various complex activation functions are considered and a practical definition is proposed. The method, associated with a mean-square-error criterion, yields the complex form of the conventional backpropagation algorithm.
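
A hedged single-neuron sketch of the resulting complex delta rule (a multilayer network chains the same rule backward through the layers), using the common "split" activation that applies a real sigmoid separately to the real and imaginary parts:

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def train_complex_neuron(X, d, lr=0.5, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1]) + 1j * rng.standard_normal(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, d):
            s = x @ w                                      # complex pre-activation
            y = sigmoid(s.real) + 1j * sigmoid(s.imag)     # split activation
            e = t - y                                      # complex error
            gr = sigmoid(s.real) * (1 - sigmoid(s.real))   # real-part slope
            gi = sigmoid(s.imag) * (1 - sigmoid(s.imag))   # imaginary-part slope
            delta = e.real * gr + 1j * e.imag * gi
            w = w + lr * delta * np.conj(x)                # complex delta rule
    return w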

Journal ArticleDOI
TL;DR: It is shown that a different decomposition, called the URV decomposition, is equally effective in exhibiting the null space and can be updated in O(p²) time.
Abstract: In certain signal processing applications it is required to compute the null space of a matrix whose rows are samples of a signal with p components. The usual tool for doing this is the singular value decomposition. However, the singular value decomposition has the drawback that it requires O(p³) operations to recompute when a new sample arrives. It is shown that a different decomposition, called the URV decomposition, is equally effective in exhibiting the null space and can be updated in O(p²) time. The updating technique can be run on a linear array of p processors in O(p) time.

Journal ArticleDOI
TL;DR: The theory of a new general class of signal energy representations depending on time and scale is developed, and specific choices allow recovery of known definitions, and provide a continuous transition from Wigner-Ville to either spectrograms or scalograms (squared modulus of the WT).
Abstract: The theory of a new general class of signal energy representations depending on time and scale is developed. Time-scale analysis has been introduced recently as a powerful tool through linear representations called (continuous) wavelet transforms (WTs), a concept for which an exhaustive bilinear generalization is given. Although time scale is presented as an alternative method to time frequency, strong links relating the two are emphasized, thus combining both descriptions into a unified perspective. The authors provide a full characterization of the new class: the result is expressed as an affine smoothing of the Wigner-Ville distribution, on which interesting properties may be further imposed through proper choices of the smoothing function parameters. Not only do specific choices allow recovery of known definitions, but they also provide, via separable smoothing, a continuous transition from Wigner-Ville to either spectrograms or scalograms (squared modulus of the WT). This property makes time-scale representations a very flexible tool for nonstationary signal analysis.

Journal ArticleDOI
TL;DR: The efficacy of the mean field theory approach is demonstrated on parameter estimation for one-dimensional mixture data and on two-dimensional unsupervised stochastic model-based image segmentation, yielding good parameter estimates and segmentations for both synthetic and real-world images.
Abstract: In many signal processing and pattern recognition applications, the hidden data are modeled as Markov processes, and the main difficulty of using the expectation-maximization (EM) algorithm for these applications is the calculation of the conditional expectations of the hidden Markov processes. It is shown how the mean field theory from statistical mechanics can be used to calculate the conditional expectations for these problems efficiently. The efficacy of the mean field theory approach is demonstrated on parameter estimation for one-dimensional mixture data and two-dimensional unsupervised stochastic model-based image segmentation. Experimental results indicate that in the 1-D case, the mean field theory approach provides results comparable to those obtained by Baum's (1987) algorithm, which is known to be optimal. In the 2-D case, where Baum's algorithm can no longer be used, the mean field theory provides good parameter estimates and image segmentation for both synthetic and real-world images.

Journal ArticleDOI
TL;DR: A novel iterative algorithm is presented for deriving the least squares frequency response weighting function that produces a quasi-equiripple design; it typically yields a design only about 1 dB away from the minimax optimum in two iterations and converges to within 0.1 dB in six iterations.
Abstract: It has been demonstrated by several authors that if a suitable frequency response weighting function is used in the design of a finite impulse response (FIR) filter, the weighted least squares solution is equiripple. The crux of the problem lies in the determination of the necessary least squares frequency response weighting function. A novel iterative algorithm for deriving the least squares frequency response weighting function which will produce a quasi-equiripple design is presented. The algorithm converges very rapidly. It typically produces a design which is only about 1 dB away from the minimax optimum solution in two iterations and converges to within 0.1 dB in six iterations. Convergence speed is independent of the order of the filter. It can be used to design filters with arbitrarily prescribed phase and amplitude response.

Journal ArticleDOI
TL;DR: A subspace-fitting formulation of the ESPRIT problem is presented that provides a framework for extending the algorithm to exploit arrays with multiple invariances and the asymptotic distribution of the estimates is obtained.
Abstract: A subspace-fitting formulation of the ESPRIT problem is presented that provides a framework for extending the algorithm to exploit arrays with multiple invariances. In particular, a multiple invariance (MI) ESPRIT algorithm is developed and the asymptotic distribution of the estimates is obtained. Simulations are conducted to verify the analysis and to compare the performance of MI ESPRIT with that of several other approaches. The excellent quality of the MI ESPRIT estimates is explained by recent results which state that, under certain conditions, subspace-fitting methods of this type are asymptotically efficient.
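
For orientation, here is the single-invariance ESPRIT step that MI ESPRIT generalizes, sketched for a uniform linear array with half-wavelength spacing (a plain least-squares solve rather than the paper's subspace-fitting formulation, so illustrative only):

import numpy as np

def esprit_doa(R, p):
    w, V = np.linalg.eigh(R)
    Es = V[:, -p:]                             # signal subspace (largest eigenvalues)
    phi = np.linalg.pinv(Es[:-1]) @ Es[1:]     # LS fit of the subarray rotation
    return np.arcsin(np.angle(np.linalg.eigvals(phi)) / np.pi)   # DOAs in radians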

Journal ArticleDOI
TL;DR: High-quality variable-rate image compression is achieved by segmenting an image into regions of different sizes, classifying each region into one of several perceptually distinct categories, and using a distinct coding procedure for each category.
Abstract: High-quality variable-rate image compression is achieved by segmenting an image into regions of different sizes, classifying each region into one of several perceptually distinct categories, and using a distinct coding procedure for each category. Segmentation is performed with a quadtree data structure by isolating the perceptually more important areas of the image into small regions and separately identifying larger random texture blocks. Since the important regions have been isolated, the remaining parts of the image can be coded at a lower rate than would be otherwise possible. High-quality coding results are achieved at rates between 0.35 and 0.7 b/p depending on the nature of the original image, and satisfactory results have been obtained at 0.25 b/p.

Journal ArticleDOI
TL;DR: A time-scale modification system that preserves shape invariance during voicing is developed using a version of the sinusoidal analysis-synthesis system that models and independently modifies the phase contributions of the vocal tract and vocal cord excitation; the approach also allows shape-invariant joint time-scale and pitch modification.
Abstract: The simplified linear model of speech production predicts that when the rate of articulation is changed, the resulting waveform takes on the appearance of the original, except for a change in the time scale. A time-scale modification system that preserves this shape-invariance property during voicing is developed. This is done using a version of the sinusoidal analysis-synthesis system that models and independently modifies the phase contributions of the vocal tract and vocal cord excitation. An important property of the system is its ability to perform time-varying rates of change. Extensions of the method are applied to fixed and time-varying pitch modification of speech. The sine-wave analysis-synthesis system also allows for shape-invariant joint time-scale and pitch modification, and allows for the adjustment of the time scale and pitch according to speech characteristics such as the degree of voicing.

Journal ArticleDOI
TL;DR: It is shown that every function in the sequence is nonnegative and the sequence converges monotonically to a global minimum.
Abstract: Csiszar's I-divergence is used as a discrepancy measure for deblurring subject to the constraint that all functions involved are nonnegative. An iterative algorithm is proposed for minimizing this measure. It is shown that every function in the sequence is nonnegative and the sequence converges monotonically to a global minimum. Other properties of the algorithm are shown, including lower bounds on the improvement in the I-divergence at each step of the algorithm and on the difference between the I-divergence at step k and at the limit point. A method for regularizing the solution is proposed.
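
Iterations of this type have a compact multiplicative (Richardson-Lucy/EM-like) form; a hedged sketch assuming a known nonnegative blur matrix H and nonnegative data y, which makes the abstract's nonnegativity claim visible: each iterate stays nonnegative by construction.

import numpy as np

def i_divergence_deblur(y, H, iters=100, eps=1e-12):
    x = np.ones(H.shape[1])                    # nonnegative initial guess
    Ht1 = H.T @ np.ones(H.shape[0])            # column sums of H
    for _ in range(iters):
        x = x * (H.T @ (y / (H @ x + eps))) / (Ht1 + eps)   # multiplicative update
    return x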

Journal ArticleDOI
TL;DR: A fast new algorithm is presented for training multilayer perceptrons as an alternative to the backpropagation algorithm; it reduces the required training time considerably and overcomes many of the shortcomings of conventional backpropagation.
Abstract: A fast algorithm is presented for training multilayer perceptrons as an alternative to the backpropagation algorithm. The number of iterations required by the new algorithm to converge is less than 20% of what is required by the backpropagation algorithm. Also, it is less affected by the choice of initial weights and setup parameters. The algorithm uses a modified form of the backpropagation algorithm to minimize the mean-squared error between the desired and actual outputs with respect to the inputs to the nonlinearities. This is in contrast to the standard algorithm which minimizes the mean-squared error with respect to the weights. Error signals, generated by the modified backpropagation algorithm, are used to estimate the inputs to the nonlinearities, which along with the input vectors to the respective nodes, are used to produce an updated set of weights through a system of linear equations at each node. These systems of linear equations are solved using a Kalman filter at each layer.

Journal ArticleDOI
TL;DR: In this paper, a real-time learning algorithm for a multilayered neural network is derived from the extended Kalman filter (EKF), which approximately gives the minimum variance estimate of the link weights.
Abstract: A novel real-time learning algorithm for a multilayered neural network is derived from the extended Kalman filter (EKF). Since this EKF-based learning algorithm approximately gives the minimum variance estimate of the link weights, the convergence performance is improved in comparison with the backward error propagation algorithm using steepest descent techniques. Furthermore, tuning parameters which crucially govern the convergence properties are not included, which makes its application easier. Simulation results for the XOR and parity problems are provided.
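
A hedged sketch of one EKF weight update for a scalar-output network; net and jac are hypothetical user-supplied callables returning the network output and its gradient with respect to the weights (the linearization the EKF needs), and P is typically initialized to a large multiple of the identity.

import numpy as np

def ekf_train_step(w, P, x, d, net, jac, R=1e-2):
    h = jac(w, x)                              # 1 x n linearization at the current weights
    S = h @ P @ h + R                          # innovation variance (scalar output)
    K = (P @ h) / S                            # Kalman gain
    w = w + K * (d - net(w, x))                # approximate minimum-variance update
    P = P - np.outer(K, h @ P)                 # covariance update
    return w, P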

Journal ArticleDOI
Yariv Ephraim
TL;DR: A Bayesian estimation approach for enhancing speech signals which have been degraded by statistically independent additive noise is motivated and developed, and minimum mean square error (MMSE) and maximum a posteriori (MAP) signal estimators are developed using hidden Markov models for the clean signal and the noise process.
Abstract: A Bayesian estimation approach for enhancing speech signals which have been degraded by statistically independent additive noise is motivated and developed. In particular, minimum mean square error (MMSE) and maximum a posteriori (MAP) signal estimators are developed using hidden Markov models (HMMs) for the clean signal and the noise process. It is shown that the MMSE estimator comprises a weighted sum of conditional mean estimators for the composite states of the noisy signal, where the weights equal the posterior probabilities of the composite states given the noisy signal. The estimation of several spectral functionals of the clean signal such as the sample spectrum and the complex exponential of the phase is also considered. A gain-adapted MAP estimator is developed using the expectation-maximization algorithm. The theoretical performance of the MMSE estimator is discussed, and convergence of the MAP estimator is proved. Both the MMSE and MAP estimators are tested in enhancing speech signals degraded by white Gaussian noise at input signal-to-noise ratios from 5 to 20 dB.
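
In symbols (a notational sketch consistent with the abstract, with s ranging over the composite states of the noisy signal y and x the clean signal):

\hat{x}_{\mathrm{MMSE}} = E[x \mid y] = \sum_{s} \Pr(s \mid y) \, E[x \mid y, s]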

Journal ArticleDOI
TL;DR: By incorporating the principles of the stochastic approach into the KLA, a deterministic VQ design algorithm, the soft competition scheme (SCS), is introduced and experimental results are presented where the SCS consistently provided better codebooks than the generalized Lloyd algorithm (GLA), even when the same computation time was used for both algorithms.
Abstract: The authors provide a convergence analysis for the Kohonen learning algorithm (KLA) with respect to vector quantizer (VQ) optimality criteria and introduce a stochastic relaxation technique which produces the global minimum but is computationally expensive. By incorporating the principles of the stochastic approach into the KLA, a deterministic VQ design algorithm, the soft competition scheme (SCS), is introduced. Experimental results are presented where the SCS consistently provided better codebooks than the generalized Lloyd algorithm (GLA), even when the same computation time was used for both algorithms. The SCS may therefore prove to be a valuable alternative to the GLA for VQ design.
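
A hedged sketch of one soft-competition codebook update (illustrative; the paper's SCS uses a particular annealing schedule): every codevector moves toward the input, weighted by a softmax posterior whose temperature T is lowered toward zero, recovering hard GLA/K-means-style updates in the limit.

import numpy as np

def soft_competition_step(codebook, x, T, lr=0.05):
    d2 = np.sum((codebook - x) ** 2, axis=1)   # squared distances to the input x
    p = np.exp(-(d2 - d2.min()) / T)           # stabilized softmax weights
    p /= p.sum()
    return codebook + lr * p[:, None] * (x - codebook)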

Journal ArticleDOI
TL;DR: It is shown that the multidimensional signal subspace method, termed weighted subspace fitting (WSF), is asymptotically efficient, which results in a novel, compact matrix expression for the Cramer-Rao bound (CRB) on the estimation error variance.
Abstract: It is shown that the multidimensional signal subspace method, termed weighted subspace fitting (WSF), is asymptotically efficient. This results in a novel, compact matrix expression for the Cramer-Rao bound (CRB) on the estimation error variance. The asymptotic analysis of the maximum likelihood (ML) and WSF methods is extended to deterministic emitter signals. The asymptotic properties of the estimates for this case are shown to be identical to the Gaussian emitter signal case, i.e. independent of the actual signal waveforms. Conclusions concerning the modeling aspect of the sensor array problem are drawn.

Journal ArticleDOI
TL;DR: A detailed analysis of the quantization error encountered in the CORDIC (coordinate rotation digital computer) algorithm is presented.
Abstract: A detailed analysis of the quantization error encountered in the CORDIC (coordinate rotation digital computer) algorithm is presented. Two types of quantization error are examined: an approximation error due to the quantized representation of rotation angles, and a rounding error due to the finite precision representation in both fixed-point and floating-point arithmetic. Tight error bounds for these two types of error are derived. The rounding error due to a scaling (normalization) operation in the CORDIC algorithm is also discussed. An expression for overall quantization error is derived, and several simulation examples are presented.
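
A floating-point sketch of the CORDIC rotation being analyzed (n microrotations; finite n produces the angle-approximation error, and a fixed-point implementation would add the rounding error the paper bounds):

import numpy as np

def cordic_rotate(x, y, theta, n=16):
    angles = np.arctan(2.0 ** -np.arange(n))
    K = np.prod(1.0 / np.sqrt(1.0 + 2.0 ** (-2.0 * np.arange(n))))  # scale correction
    z = theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0            # steer the residual angle toward zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return K * x, K * y                        # rotated vector

print(cordic_rotate(1.0, 0.0, 0.5))            # approx. (cos 0.5, sin 0.5)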

Journal ArticleDOI
TL;DR: In this article, the extended lapped transform (ELT) is introduced as a generalization of the previously reported modulated lapped transform (MLT); the ELT is a promising substitute for traditional block transforms in transform coding systems.
Abstract: The family of lapped orthogonal transforms is extended to include basis functions of arbitrary length. Within this new family, the extended lapped transform (ELT) is introduced, as a generalization of the previously reported modulated lapped transform (MLT). Design techniques and fast algorithms for the ELT are presented, as well as examples that demonstrate the good performance of the ELT in signal coding applications. Therefore, the ELT is a promising substitute for traditional block transforms in transform coding systems, and also a good substitute for less efficient filter banks in subband coding systems.

Journal ArticleDOI
N.K. Jablon
TL;DR: Two existing blind equalization tap update recursions for 64-point and greater QAM signal constellations are studied, along with existing and novel carrier and timing recovery techniques, and it is determined that the superior tap update recursion is the one known as the constant modulus algorithm.
Abstract: Two existing blind equalization tap update recursions for 64-point and greater QAM (quadrature amplitude modulation) signal constellations are studied, along with existing and novel carrier and timing recovery techniques. It is determined that the superior tap update recursion is the one known as the constant modulus algorithm. Carrier recovery requires a modified second-order decision-directed digital phase-locked loop. An all-digital implementation of band-edge timing recovery is used. With 14.4 kb/s outbound transmission using CCITT V.33 trellis-coded 128-QAM signals having 12.5% excess bandwidth, a prototype blind retrain procedure is developed to demonstrate the feasibility of the techniques for high-speed multipoint modems. A WE DSP32-based real-time digital signal processor was employed to test the retrain over a set of severely impaired channels. For each channel in the set, the retrain succeeded at least 90% of the time.
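
The constant modulus tap update itself is compact; a hedged sketch (baud-spaced, with illustrative parameters; the paper pairs this with carrier and timing recovery): the taps descend the gradient of E[(|y|^2 - R2)^2], where R2 = E|a|^4 / E|a|^2 depends only on the constellation, so no training symbols are required.

import numpy as np

def cma_equalize(x, ntaps=11, mu=1e-3, R2=1.0):
    w = np.zeros(ntaps, dtype=complex)
    w[ntaps // 2] = 1.0                        # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for n in range(ntaps, len(x)):
        u = x[n - ntaps:n][::-1]               # equalizer input vector
        y[n] = np.vdot(w, u)                   # w^H u
        e = np.abs(y[n]) ** 2 - R2             # constant modulus error
        w = w - mu * e * u * np.conj(y[n])     # stochastic gradient update
    return w, y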