
Showing papers in "IEEE Transactions on Acoustics, Speech, and Signal Processing in 1981"


Journal ArticleDOI
TL;DR: It can be shown that the order of accuracy of the cubic convolution method is between that of linear interpolation and that of cubic splines.
Abstract: Cubic convolution interpolation is a new technique for resampling discrete data. It has a number of desirable features which make it useful for image processing. The technique can be performed efficiently on a digital computer. The cubic convolution interpolation function converges uniformly to the function being interpolated as the sampling increment approaches zero. With the appropriate boundary conditions and constraints on the interpolation kernel, it can be shown that the order of accuracy of the cubic convolution method is between that of linear interpolation and that of cubic splines. A one-dimensional interpolation function is derived in this paper. A separable extension of this algorithm to two dimensions is applied to image data.
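A minimal sketch of the one-dimensional method described above, using the standard two-piece cubic kernel (the a = -1/2 member, which gives the third-order accuracy the abstract refers to); the function names are illustrative, not from the paper:

```python
def cubic_kernel(s, a=-0.5):
    """Cubic convolution interpolation kernel; a = -1/2 yields third-order accuracy."""
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def interpolate(samples, x):
    """Resample uniformly spaced data at fractional position x.
    Needs one sample of margin on the left and two on the right."""
    k = int(x)
    return sum(samples[k + m] * cubic_kernel(x - (k + m)) for m in range(-1, 3))
```

With a = -1/2 the kernel reproduces quadratics exactly, e.g. interpolating samples of n^2 at x = 1.5 returns 2.25; linear interpolation would give 2.5.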

3,280 citations


Journal ArticleDOI
E. Hogenauer1
TL;DR: A class of digital linear phase finite impulse response (FIR) filters for decimation and interpolation is presented; they require no multipliers and use limited storage, making them an economical alternative to conventional implementations for certain applications.
Abstract: A class of digital linear phase finite impulse response (FIR) filters for decimation (sampling rate decrease) and interpolation (sampling rate increase) are presented. They require no multipliers and use limited storage making them an economical alternative to conventional implementations for certain applications. A digital filter in this class consists of cascaded ideal integrator stages operating at a high sampling rate and an equal number of comb stages operating at a low sampling rate. Together, a single integrator-comb pair produces a uniform FIR. The number of cascaded integrator-comb pairs is chosen to meet design requirements for aliasing or imaging error. Design procedures and examples are given for both decimation and interpolation filters with the emphasis on frequency response and register width.
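The integrator-comb structure can be sketched as follows. This is a toy floating-point model of a cascaded integrator-comb (CIC) decimator with differential delay one; it ignores the register-width and overflow-wraparound arithmetic that real fixed-point CIC implementations rely on, and the function name is illustrative:

```python
def cic_decimate(x, R, N):
    """N integrator stages at the high rate, decimate by R,
    then N comb stages (differential delay 1) at the low rate."""
    for _ in range(N):                 # integrators: running sums
        acc, y = 0, []
        for v in x:
            acc += v
            y.append(acc)
        x = y
    x = x[::R]                         # sampling-rate decrease
    for _ in range(N):                 # combs: first differences at the low rate
        prev, y = 0, []
        for v in x:
            y.append(v - prev)
            prev = v
        x = y
    return x
```

A single integrator-comb pair with rate change R is equivalent to a length-R boxcar (uniform) FIR followed by decimation: for the ramp 1..8 with R = 4 the outputs are the block sums 1 and 2+3+4+5 = 14.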

1,372 citations


Journal ArticleDOI
TL;DR: Cepstrum coefficients are extracted by means of LPC analysis successively throughout a fixed, sentence-long utterance to form time functions, and frequency response distortions introduced by transmission systems are removed.
Abstract: This paper describes new techniques for automatic speaker verification using telephone speech. The operation of the system is based on a set of functions of time obtained from acoustic analysis of a fixed, sentence-long utterance. Cepstrum coefficients are extracted by means of LPC analysis successively throughout an utterance to form time functions, and frequency response distortions introduced by transmission systems are removed. The time functions are expanded by orthogonal polynomial representations and, after a feature selection procedure, brought into time registration with stored reference functions to calculate the overall distance. This is accomplished by a new time warping method using a dynamic programming technique. A decision is made to accept or reject an identity claim, based on the overall distance. Reference functions and decision thresholds are updated for each customer. Several sets of experimental utterances were used for the evaluation of the system, which include male and female utterances recorded over a conventional telephone connection. Male utterances processed by ADPCM and LPC coding systems were used together with unprocessed utterances. Results of the experiment indicate that verification error rate of one percent or less can be obtained even if the reference and test utterances are subjected to different transmission conditions.

1,187 citations


Journal ArticleDOI
TL;DR: Correct plots of Harris' windows are presented and additional windows with very good sidelobes and optimal behavior under several different constraints are derived.
Abstract: Some of the windows presented by Harris [1] are not correct in terms of their reported peak sidelobes and optimal behavior. We present corrected plots of Harris' windows and also derive additional windows with very good sidelobes and optimal behavior under several different constraints. The temporal weightings are characterized as a sum of weighted cosines over a finite duration. The plots enable the reader to select a window to suit his requirements, in terms of bias due to nearby sidelobes and bias due to distant sidelobes.

1,024 citations


Journal ArticleDOI
TL;DR: This well-respected, market-leading text discusses the use of digital computers in the real-time control of dynamic systems, with an emphasis on the design of digital controls that achieve good dynamic response and small errors while using signals that are sampled in time and quantized in amplitude.
Abstract: This well-respected, market-leading text discusses the use of digital computers in the real-time control of dynamic systems. The emphasis is on the design of digital controls that achieve good dynamic response and small errors while using signals that are sampled in time and quantized in amplitude. MATLAB statements and problems are thoroughly and carefully integrated throughout the book to offer readers a complete design picture.

902 citations


Journal ArticleDOI
TL;DR: In this article, the authors derived necessary and sufficient conditions for a signal to be invariant under a specific form of median filtering and proved that successive median filtering of a signal (i.e., the filtered output is itself again filtered) eventually reduces the original signal to an invariant signal called a root signal.
Abstract: Necessary and sufficient conditions for a signal to be invariant under a specific form of median filtering are derived. These conditions state that a signal must be locally monotone to pass through a median filter unchanged. It is proven that the form of successive median filtering of a signal (i.e., the filtered output is itself again filtered) eventually reduces the original signal to an invariant signal called a root signal. For a signal of length L samples, a maximum of \frac{1}{2}(L - 2) repeated filterings produces a root signal.
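The root-signal property is easy to demonstrate. Below is a window-3 median filter (endpoints copied unchanged, one common convention; the helper names are illustrative) applied repeatedly until the output is invariant:

```python
def median3(x):
    """One pass of a window-3 median filter; endpoints are copied unchanged."""
    return [x[0]] + [sorted(x[i - 1:i + 2])[1] for i in range(1, len(x) - 1)] + [x[-1]]

def root_signal(x):
    """Filter repeatedly until the signal is invariant (the root signal)."""
    passes = 0
    while True:
        y = median3(x)
        if y == x:
            return x, passes
        x, passes = y, passes + 1
```

For the length-6 signal [1, 3, 2, 4, 3, 5] one pass already yields the locally monotone root [1, 2, 3, 3, 4, 5], comfortably within the (L - 2)/2 = 2 passes the theorem guarantees.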

793 citations


Journal ArticleDOI
TL;DR: The nature of the ambiguity processing is interpreted, and an algorithm approach is shown that minimizes the processing burden over a broad category of applications without affecting performance.
Abstract: Calculation of the complex ambiguity function is viewed as the basis for joint estimation of the differential delay and differential frequency offset between two waveforms that contain a common component plus additive noise. In many applications, the required accuracy leads to a need for integration over long data sets that can become a challenge for near real-time digital processing. The nature of the ambiguity processing is interpreted, and an algorithm approach is shown that minimizes the processing burden over a broad category of applications without affecting performance.
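A small numerical sketch of the joint delay/frequency-offset estimate: for each candidate lag, one FFT evaluates the ambiguity function over all frequency offsets at once. This is the brute-force surface, not the paper's reduced-burden algorithm, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
n = np.arange(N)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
tau0, k0 = 5, 12                       # true differential delay (samples) and offset (bins)
y = np.roll(x, tau0) * np.exp(2j * np.pi * k0 * n / N)   # common component, shifted

def caf(x, y, max_lag):
    """|A(tau, k)|: delay-and-multiply, then an FFT scans every frequency offset."""
    surface = np.empty((max_lag, len(x)))
    for tau in range(max_lag):
        surface[tau] = np.abs(np.fft.fft(y * np.conj(np.roll(x, tau))))
    return surface

S = caf(x, y, max_lag=10)
tau_hat, k_hat = np.unravel_index(np.argmax(S), S.shape)   # peak at (5, 12)
```

The surface peaks where both the delay and the frequency offset are simultaneously matched, which is why the two parameters are estimated jointly rather than one at a time.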

530 citations


Journal ArticleDOI
TL;DR: An overview of applied research in passive sonar signal processing estimation techniques for naval systems is presented, including a discussion of the motivating problem of estimating the position and velocity of a moving acoustic source.
Abstract: An overview of applied research in passive sonar signal processing estimation techniques for naval systems is presented. The naval problem that motivates time delay estimation is the source state estimation problem. A discussion of this problem in terms of estimating the position and velocity of a moving acoustic source is presented. Optimum bearing and range estimators are presented for the planar problem and related to the optimum time delay vector estimator. Suboptimum realizations are considered together with the effects of source motion and receiver positional uncertainty.

497 citations


Journal ArticleDOI
Abstract: We present a new direct method of estimating the three-dimensional motion parameters of a rigid planar patch from two time-sequential perspective views (image frames). First, a set of eight pure parameters are defined. These parameters can be determined uniquely from the two given image frames by solving a set of linear equations. Then, the actual motion parameters are determined from these pure parameters by a method which requires the solution of a sixth-order polynomial of one variable only, and there exists a certain efficient algorithm for solving a sixth-order polynomial. Aside from a scale factor for the translation parameters, the number of real solutions never exceeds two. In the special case of three-dimensional translation, the motion parameters can be expressed directly as some simple functions of the eight pure parameters. Thus, only a few arithmetic operations are needed.

391 citations


Journal ArticleDOI
TL;DR: The analysis shows that in the case of low SNR, when signal and noise autospectra are constant over the band or fall off at the same rate, the minimum standard deviation of the time delay estimate varies inversely with the SNR, with the square root of the product of observation time and bandwidth, and with the center frequency.
Abstract: Sonar and radar systems not only detect targets but also localize them. The process of localization involves bearing and range estimation. These objectives of bearing and range estimation can be accomplished actively or passively, depending on the situation. In active sonar or radar systems, a pulsed signal is transmitted to the target and the echo is received at the receiver. The range of the target is determined from the time delay obtained from the echo. In passive sonar systems, the target is detected from acoustic signals emitted by the target, and it is localized using time delays obtained from received signals at spatially separated points. Several authors have calculated the variance of the time delay estimate in the neighborhood of true time delays and have presented their results in terms of coherence function and signal and noise autospectra. Here we analyze these derivations and show that they are the same for the case of low signal-to-noise ratio (SNR). We also address a practical problem with a target-generated wide-band signal and present the Cramer-Rao lower bound on the variance of the time delay estimate as a function of commonly understood terms such as SNR, bandwidth, observation time, and center frequency of the band. The analysis shows that in the case of low SNR and when signal and noise autospectra are constants over the band or signal and noise autospectra fall off at the same rate, the minimum standard deviation of the time delay estimate varies inversely to the SNR, to the square root of the product of observation time and bandwidth, and to the center frequency (provided W^{2}/(12 f_{0}^{2}) \ll 1 , where W = bandwidth and f_{0} = center frequency of the band). The only difference in the case of a high SNR is that the standard deviation varies inversely to the square root of the SNR, and all other parameter relationships are the same.
We also address the effects of different signal and noise autospectral slopes on the variance of the time delay estimate in passive localization.
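In symbols, the low-SNR scaling quoted above reads (scaling only; the constants and the exact bound are in the paper):

```latex
\sigma_{\hat{\tau}} \;\propto\; \frac{1}{\mathrm{SNR}\,\sqrt{TW}\,f_{0}},
\qquad \text{valid when } \frac{W^{2}}{12 f_{0}^{2}} \ll 1,
```

where T is the observation time, W the bandwidth, and f_{0} the center frequency; at high SNR the factor \mathrm{SNR} is replaced by \sqrt{\mathrm{SNR}}.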

376 citations


Journal ArticleDOI
TL;DR: A Hilbert space approach to the derivations of magnitude normalized signal and gain recursions is presented and normalized forms are expected to have even better numerical properties than the unnormalized versions.
Abstract: Recursive least squares ladder estimation algorithms have attracted much attention recently because of their excellent convergence behavior and fast parameter tracking capability, compared to gradient based algorithms. We present some recently developed square root normalized exact least squares ladder form algorithms that have fewer storage requirements, and lower computational requirements than the unnormalized ones. A Hilbert space approach to the derivations of magnitude normalized signal and gain recursions is presented. The normalized forms are expected to have even better numerical properties than the unnormalized versions. Other normalized forms, such as joint process estimators (e.g., "adaptive line enhancer") and ARMA (pole-zero) models, will also be presented. Applications of these algorithms to fast (or "zero") startup equalizers, adaptive noise- and echo cancellers, non-Gaussian event detectors, and inverse models for control problems are also mentioned.

Journal ArticleDOI
TL;DR: A hybrid end-point detector is proposed which gives a rejection rate of less than 0.5 percent, while providing recognition accuracy close to that obtained from hand-edited endpoints.
Abstract: Accurate location of the endpoints of an isolated word is important for reliable and robust word recognition. The endpoint detection problem is nontrivial for nonstationary backgrounds where artifacts (i.e., nonspeech events) may be introduced by the speaker, the recording environment, and the transmission system. Several techniques for the detection of the endpoints of isolated words recorded over a dialed-up telephone line were studied. The techniques were broadly classified as either explicit, implicit, or hybrid in concept. The explicit techniques for endpoint detection locate the endpoints prior to and independent of the recognition and decision stages of the system. For the implicit methods, the endpoints are determined solely by the recognition and decision stages of the system, i.e., there is no separate stage for endpoint detection. The hybrid techniques incorporate aspects from both the explicit and implicit methods. Investigations showed that the hybrid techniques consistently provided the best estimates for both of the word endpoints and, correspondingly, the highest recognition accuracy of the three classes studied. A hybrid end-point detector is proposed which gives a rejection rate of less than 0.5 percent, while providing recognition accuracy close to that obtained from hand-edited endpoints.
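An explicit detector in its simplest form can be sketched as a frame-energy threshold. This toy (function name, frame length, and threshold are all illustrative choices, not the paper's) would fail on exactly the nonstationary backgrounds the abstract warns about, which is the motivation for the hybrid approach:

```python
import math

def detect_endpoints(x, frame=80, thresh=0.1):
    """Explicit endpoint detection: per-frame average energy against a fixed threshold.
    Returns (start_sample, end_sample) of the active region, or None if silent."""
    n_frames = len(x) // frame
    energy = [sum(v * v for v in x[i * frame:(i + 1) * frame]) / frame
              for i in range(n_frames)]
    active = [i for i, e in enumerate(energy) if e > thresh]
    if not active:
        return None
    return active[0] * frame, (active[-1] + 1) * frame

# silence + 1600 samples of a tone + silence
x = [0.0] * 800 + [math.sin(0.2 * n) for n in range(1600)] + [0.0] * 800
```

On this synthetic signal the detector returns (800, 2400), the exact word boundaries; real recordings add breath noise, clicks, and channel artifacts that defeat a single fixed threshold.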

Journal ArticleDOI
TL;DR: The resulting algorithm is shown to be significantly more efficient than the one recently proposed by Sakoe for connected word recognition, while maintaining the same accuracy in estimating the best possible matching string.
Abstract: Dynamic time warping has been shown to be an effective method of handling variations in the time scale of polysyllabic words spoken in isolation. This class of techniques has recently been applied to connected word recognition with high degrees of success. In this paper a level building technique is proposed for optimally time aligning a sequence of connected words with a sequence of isolated word reference patterns. The resulting algorithm, which has been found to be a special case of an algorithm previously described by Bahl and Jelinek, is shown to be significantly more efficient than the one recently proposed by Sakoe for connected word recognition, while maintaining the same accuracy in estimating the best possible matching string. An analysis of the level building method shows that it can be obtained as a modification to the Sakoe method by reversing the order of minimizations in the two-pass technique with some subsequent processing. This level building algorithm has a number of implementation parameters that can be used to control the efficiency of the method, as well as its accuracy. The nature of these parameters is discussed in this paper. In a companion paper we discuss the application of this level building time warping method to a connected digit recognition problem.
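The dynamic time warping recursion that level building extends to word sequences can be sketched for two single templates (a basic symmetric recursion with unit local distances; this is the underlying idea, not the level building algorithm itself):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance via the standard dynamic-programming recursion."""
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: insertion, deletion, diagonal match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[len(a)][len(b)]
```

For example, dtw_distance([1, 2, 3], [1, 2, 2, 3]) is 0: the repeated 2 is absorbed by the warp, which is exactly the time-scale variation the technique is designed to handle.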

Journal ArticleDOI
TL;DR: It is shown that many of the existing extrapolation algorithms for noiseless observations are unified under the criterion of minimum norm least squares (MNLS) extrapolation, and some new algorithms useful for extrapolation and spectral estimation of band-limited sequences in one and two dimensions are presented.
Abstract: In this paper we present some new algorithms useful for extrapolation and spectral estimation of band-limited sequences in one and two dimensions. First we show that many of the existing extrapolation algorithms for noiseless observations are unified under the criterion of minimum norm least squares (MNLS) extrapolation. For example, the iterative algorithms proposed in [2] and [8]-[10] are shown to be special cases of a one-step gradient algorithm which has linear convergence. Convergence and other numerical properties are improved by going to a conjugate gradient algorithm. For noisy observations, these algorithms could be extended by considering a mean-square extrapolation criterion which gives rise to a mean-square extrapolation filter and also to a recursive extrapolation filter. Examples and application of these methods are given. Extension of these algorithms is made for problems where the signal is known to be periodic. A new set of functions called the periodic-discrete prolate spheroidal sequences (P-DPSS), analogous to DPSS [21], [22], are introduced and their properties are studied. Finally, several of these algorithms are generalized to two dimensions and the relevant equations are given.
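The one-step gradient iteration mentioned above (the Papoulis-Gerchberg scheme for noiseless observations) alternates two projections: band-limit the current estimate, then re-impose the observed samples. A minimal sketch, with illustrative names and an arbitrary toy signal:

```python
import numpy as np

def pg_extrapolate(obs, mask, K, n_iter=500):
    """One-step (Papoulis-Gerchberg) band-limited extrapolation:
    alternately project onto the band-limited subspace and restore known samples."""
    x = obs.copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[K:-K] = 0.0                 # keep only the low-frequency band
        x = np.fft.ifft(X).real
        x[mask] = obs[mask]           # observed samples are re-imposed each pass
    return x

N = 64
n = np.arange(N)
x_true = np.cos(2 * np.pi * n / N) + 0.5 * np.sin(2 * np.pi * 2 * n / N)
mask = np.ones(N, dtype=bool)
mask[:8] = False                      # the first 8 samples were never observed
obs = np.where(mask, x_true, 0.0)
x_hat = pg_extrapolate(obs, mask, K=3)
```

Convergence is linear, which is the abstract's point: a conjugate gradient formulation reaches the same minimum norm least squares extrapolation in far fewer iterations.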

Journal ArticleDOI
A. Piersol1
TL;DR: In this paper, the estimation of time delays between two received signals using phase measurements is discussed and the accuracy of such estimates is detailed; for the ideal case of statistically independent noise and no scattering at the receiver locations, it is shown that phase-data regression lines yield time delay estimates with the same accuracy as other optimal time delay estimation procedures.
Abstract: The estimation of time delays between two received signals using phase measurements is discussed and the accuracy of such estimates is detailed. For the ideal case of statistically independent noise and no scattering at the receiver locations, it is shown that phase data regression lines yield time delay estimates with the same accuracy as other optimal time delay estimation procedures. For less ideal situations, the potential advantages of time delay estimation using phase data are discussed and illustrated. It is shown that regression analysis of phase estimates at properly selected frequencies can sometimes be employed to reduce bias errors in time delay estimates due to correlated receiver noise. It is also shown that the estimation errors due to scattering at the receiver location can often be assessed in nonparametric terms to provide time delay estimates with a realistic error bound.
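The regression idea rests on the identity that a pure delay tau contributes a cross-spectral phase of -2*pi*f*tau, linear in frequency, so a regression line through phase estimates has slope proportional to the delay. A noise-free numerical sketch (bin choices are illustrative; low bins are used so the phase never wraps):

```python
import numpy as np

rng = np.random.default_rng(0)
N, tau = 128, 3
x = rng.standard_normal(N)
y = np.roll(x, tau)                      # y is x delayed by tau samples

X, Y = np.fft.fft(x), np.fft.fft(y)
k = np.arange(1, 11)                     # properly selected (low) frequencies: no wrapping
phase = np.angle(Y[k] * np.conj(X[k]))   # cross-spectral phase = -2*pi*k*tau/N
slope = np.polyfit(k, phase, 1)[0]       # regression line through the phase data
tau_hat = -slope * N / (2 * np.pi)       # tau_hat ≈ 3.0
```

With noise, the phase estimates scatter about the line and the regression averages them; choosing the frequencies, as the abstract notes, is also how bias from correlated receiver noise can be reduced.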

Journal ArticleDOI
TL;DR: A new application of the LMS adaptive filter, that of determining the time delay in a signal between two split-array outputs, is described; in a split-array sonar this time delay can be converted to the bearing of the target radiating the signal.
Abstract: A new application of the LMS adaptive filter, that of determining the time delay in a signal between two split-array outputs, is described. In a split array sonar, this time delay can be converted to the bearing of the target radiating the signal. The performance of such a tracker is analyzed for stationary broad-band targets. It is shown that a continuous adaptive tracker performs within 0.5 dB of the Cramer-Rao lower bound. Further, performance predictions are developed for a discrete adaptive tracker which demonstrates excellent agreement with simulations. It is shown that the adaptive tracker can have significantly less sensitivity to changing input spectra than a conventional tracker using a fixed input filter.
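The core mechanism can be sketched as follows: an LMS filter adapted between the two outputs converges toward the cross-channel impulse response, and its peak tap marks the relative delay. This is a toy broadband illustration with illustrative parameters, not the paper's continuous tracker:

```python
import numpy as np

rng = np.random.default_rng(1)
n_taps, delay = 11, 5
x = rng.standard_normal(5000)                  # first split-array output
y = np.roll(x, delay) + 0.1 * rng.standard_normal(5000)   # delayed, noisy second output

w = np.zeros(n_taps)
mu = 0.01
for n in range(n_taps - 1, len(x)):
    u = x[n - n_taps + 1:n + 1][::-1]          # most recent input sample first
    e = y[n] - w @ u                           # prediction error
    w += mu * e * u                            # LMS update

delay_hat = int(np.argmax(np.abs(w)))          # peak tap index gives the delay
```

In a split-array sonar this integer (or interpolated fractional) delay is then converted to a bearing through the array geometry.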

Journal ArticleDOI
TL;DR: This paper develops the theoretical basis for time-scale modification of speech based on short-time Fourier analysis and uses it to design a high-quality system for changing the apparent rate of articulation of recorded speech while preserving such qualities as naturalness, intelligibility, and speaker-dependent features.
Abstract: This paper develops the theoretical basis for time-scale modification of speech based on short-time Fourier analysis. The goal is the development of a high-quality system for changing the apparent rate of articulation of recorded speech, while at the same time preserving such qualities as naturalness, intelligibility, and speaker-dependent features. The results of the theoretical study were used as the framework for the design of a high-quality speech rate-change system that was simulated on a general-purpose minicomputer.

Journal ArticleDOI
TL;DR: A block adaptive filtering procedure is presented in which the filter coefficients are adjusted once per output block in accordance with a generalized least mean-square (LMS) algorithm; analysis shows that it permits fast implementations while maintaining performance equivalent to that of the widely used LMS adaptive filter.
Abstract: Block digital filtering involves the calculation of a block or finite set of filter outputs from a block of input values. This paper presents a block adaptive filtering procedure in which the filter coefficients are adjusted once per output block in accordance with a generalized least mean-square (LMS) algorithm. Analyses of convergence properties and computational complexity show that the block adaptive filter permits fast implementations while maintaining performance equivalent to that of the widely used LMS adaptive filter.
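The once-per-block update can be sketched directly: the instantaneous LMS gradients are accumulated over a block of B samples and the coefficients are adjusted once. A small system-identification example with illustrative parameters (the fast FFT-based realization the abstract alludes to is not shown here):

```python
import numpy as np

def block_lms(x, d, L, B, mu):
    """Block LMS: accumulate the gradient over B samples, update once per block."""
    w = np.zeros(L)
    for start in range(L - 1, len(x) - B, B):
        grad = np.zeros(L)
        for n in range(start, start + B):
            u = x[n - L + 1:n + 1][::-1]      # most recent input first
            grad += (d[n] - w @ u) * u        # instantaneous gradient, accumulated
        w += (mu / B) * grad                  # one coefficient update per block
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h = np.array([1.0, -0.5, 0.25])               # unknown system to identify
d = np.convolve(x, h)[:len(x)]
w = block_lms(x, d, L=3, B=8, mu=0.05)        # w converges to h
```

Averaging the gradient over the block is what preserves LMS-equivalent convergence while allowing the block of outputs, and hence the update, to be computed with fast (FFT-based) block convolution.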

Journal ArticleDOI
TL;DR: An exact interpolation scheme is proposed which, in practice, can be approached with arbitrary accuracy using well-conditioned algorithms; the experimental results demonstrate the feasibility of direct FT reconstruction of CT data.
Abstract: Direct Fourier transform (FT) reconstruction of images in computerized tomography (CT) is not widely used because of the difficulty of precisely interpolating from polar to Cartesian samples. In this paper, an exact interpolation scheme is proposed which, in practice, can be approached with arbitrary accuracy using well-conditioned algorithms. Several features of the direct FT method are discussed. A method that allows angular band limiting of the data before processing -to avoid angular aliasing artifacts in the reconstructed image-is discussed and experimentally verified. The experimental results demonstrate the feasibility of direct FT reconstruction of CT data.

Journal ArticleDOI
TL;DR: The two-dimensional reduced update Kalman filter is extended to the deconvolution problem of image restoration and a more thorough treatment of the uniquely two- dimensional boundary condition problems is provided.
Abstract: The two-dimensional reduced update Kalman filter was recently introduced. The corresponding scalar filtering equations were derived for the case of estimating a Gaussian signal in white Gaussian noise and were shown to constitute a general nonsymmetric half-plane recursive filter. This paper extends the method to the deconvolution problem of image restoration. This paper also provides a more thorough treatment of the uniquely two-dimensional boundary condition problems. Numerical and subjective examples are presented.

Journal ArticleDOI
TL;DR: A Fortran program that calculates the discrete Fourier transform using a prime factor algorithm is presented; a comparison shows it to be faster than both the Cooley-Tukey algorithm and the Winograd nested algorithm.
Abstract: This paper presents a Fortran program that calculates the discrete Fourier transform using a prime factor algorithm. A very simple indexing scheme is employed that results in a flexible, modular algorithm that efficiently calculates the DFT in-place. A modification of this algorithm gives the output both in-place and in-order at a slight cost in flexibility. A comparison shows it to be faster than both the Cooley-Tukey algorithm and the Winograd nested algorithm.

Journal ArticleDOI
TL;DR: In this paper, a comprehensive analysis of the mean-squared error (MSE) of adaptation for real least mean square (LMS) algorithms is presented, based on the method developed in the 1968 dissertation by K. D. Senne.
Abstract: In narrow-band adaptive-array applications, the mean-square convergence of the discrete-time real least mean-square (LMS) algorithm is slowed by image-frequency noises generated in the LMS loops. The complex LMS algorithm proposed by Widrow et al. is shown to eliminate these noises, yielding convergence of the mean-squared error (MSE) at slightly over twice the rate. This paper includes a comprehensive analysis of the MSE of adaptation for LMS. The analysis is based upon the method developed in the 1968 dissertation by K. D. Senne, and it represents the most complete treatment of the subject published to date.

Journal ArticleDOI
TL;DR: The adaptive delay algorithm uses a gradient technique to find the value of the adaptive delay that minimizes the mean-squared (MS) error function; the convergence parameter that effects rapid convergence is a function of the power of the input signal.
Abstract: An adaptive technique is developed which iteratively determines the time delay between two sampled signals that are highly correlated. Although the procedure does not require a priori information on the input signals, it does require that the signals have a unimodal or periodically unimodal cross-correlation function. The adaptive delay algorithm uses a gradient technique to find the value of the adaptive delay that minimizes the mean-squared (MS) error function. This iterative algorithm is similar to the adaptive filter coefficient algorithm developed by Widrow. However, the MS error function for the adaptive delay is not quadratic, as it is in the adaptive filter. A statistical analysis determines the value of the convergence parameter which effects rapid convergence of the adaptive delay. This convergence parameter is a function of the power of the input signal. Computer simulations are presented which verify that the adaptive delay correctly estimates the time delay difference between two sinusoids, including those in noisy environments. The adaptive delay is also shown to perform correctly in a time delay tracking application.
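The gradient-on-delay idea can be sketched for the sinusoidal case the abstract mentions. Here the fractional delay is evaluated analytically because the signal is a known sinusoid; the paper's algorithm works on sampled data via interpolation, and all constants below are illustrative:

```python
import math

omega, d_true = 0.3, 2.5      # radian frequency and true delay (samples)
D, mu = 0.0, 0.02             # adaptive delay estimate and step size

for n in range(20000):
    # error between the reference delayed by D and the received delayed signal
    e = math.sin(omega * (n - D)) - math.sin(omega * (n - d_true))
    # stochastic gradient of e^2 with respect to D (note: not quadratic in D)
    D += 2 * mu * e * omega * math.cos(omega * (n - D))
```

The average update is proportional to sin(omega * (d_true - D)), so the estimate converges to the true delay only from within one period, which is the unimodal cross-correlation requirement stated in the abstract.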

Journal ArticleDOI
TL;DR: This new method differs from previous methods in its explicit inclusion of a prior estimate of the power spectrum, and it reduces to maximum entropy spectral analysis as a special case.
Abstract: The principle of minimum cross-entropy (minimum directed divergence, minimum discrimination information, minimum relative entropy) is summarized, discussed, and applied to the classical problem of estimating power spectra given values of the autocorrelation function. This new method differs from previous methods in its explicit inclusion of a prior estimate of the power spectrum, and it reduces to maximum entropy spectral analysis as a special case. The prior estimate can be viewed as a means of shaping the spectral estimator. Cross-entropy minimization yields a family of shaped spectral estimators consistent with known autocorrelations. Results are derived in two equivalent ways: once by minimizing the cross-entropy of underlying probability densities, and once by arguments concerning the cross-entropy between the input and output of linear filters. Several example minimum cross-entropy spectra are included.

Journal ArticleDOI
TL;DR: In this article, the Radon transform is viewed as a bivariate function and two-dimensional sampling theory is used to address sampling and information content issues; it is shown that the band region of the Radon transform of a function with a finite space-bandwidth product is a "finite-length bowtie," so that "Nyquist sampling" of the Radon transform is on a hexagonal grid.
Abstract: The Radon transform of a bivariate function, which has application in tomographic imaging, has traditionally been viewed as a parametrized univariate function. In this paper, the Radon transform is instead viewed as a bivariate function and two-dimensional sampling theory is used to address sampling and information content issues. It is shown that the band region of the Radon transform of a function with a finite space-bandwidth product is a "finite-length bowtie." Because of the special shape of this band region, "Nyquist sampling" of the Radon transform is on a hexagonal grid. This sampling grid requires approximately one-half as many samples as the rectangular grid obtained from the traditional viewpoint. It is also shown that for a nonbandlimited function of finite spatial support, the band region of the Radon transform is an "infinite-length bowtie." Consequently, it follows that approximately 2M^2/π independent pieces of information about the function can be extracted from M "projections." These results and others follow very naturally from the two-dimensional viewpoint presented.

Journal ArticleDOI
TL;DR: In this article, the authors developed iterative algorithms for reconstructing a minimum phase sequence from the phase or magnitude of its Fourier transform, which involves repeatedly imposing a causality constraint in the time domain and incorporating the known phase or magnitude function in the frequency domain.
Abstract: In this paper, we develop iterative algorithms for reconstructing a minimum phase sequence from the phase or magnitude of its Fourier transform. These iterative solutions involve repeatedly imposing a causality constraint in the time domain and incorporating the known phase or magnitude function in the frequency domain. This approach is the basis of a new means of computing the Hilbert transform of the log-magnitude or phase of the Fourier transform of a minimum phase sequence which does not require phase unwrapping. Finally, we discuss the potential use of this iterative computation in determining samples of the unwrapped phase of a mixed phase sequence.
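The abstract above notes that the iterative approach yields a new way of computing the Hilbert transform of the log-magnitude of a minimum-phase sequence's Fourier transform without phase unwrapping. For comparison, here is a minimal numpy sketch of the classical real-cepstrum route to that same Hilbert-transform relation (not the paper's iterative algorithm); the three-tap minimum-phase sequence is a hypothetical test case.

```python
import numpy as np

# Hypothetical minimum-phase sequence: zeros of 1 + 0.5 z^-1 + 0.25 z^-2
# lie at |z| = 0.5, inside the unit circle.
h = np.array([1.0, 0.5, 0.25])
N = 512                                  # zero-padded FFT length

log_mag = np.log(np.abs(np.fft.fft(h, N)))
c = np.fft.ifft(log_mag).real            # real cepstrum (even in n)

# Fold the cepstrum onto n >= 0: this imposes causality on the complex
# cepstrum, which is exactly the minimum-phase (Hilbert transform) condition.
fold = np.zeros(N)
fold[0] = c[0]
fold[1:N // 2] = 2 * c[1:N // 2]
fold[N // 2] = c[N // 2]

# Exponentiating the FFT of the folded cepstrum restores magnitude AND
# the minimum-phase, recovering the sequence from magnitude alone.
h_rec = np.fft.ifft(np.exp(np.fft.fft(fold))).real
```

Because `h` is minimum phase, `h_rec` matches `h` to numerical precision; for a mixed-phase input the same procedure would instead return its minimum-phase counterpart.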

Journal ArticleDOI
John Makhoul1
TL;DR: In this paper, the eigenvectors of a symmetric Toeplitz matrix and the location of the zeros of the filters (eigenfilters) whose coefficients are the elements of the eigenvectors are discussed.
Abstract: This paper presents a number of results concerning the eigenvectors of a symmetric Toeplitz matrix and the location of the zeros of the filters (eigenfilters) whose coefficients are the elements of the eigenvectors. One of the results is that the eigenfilters corresponding to the maximum and minimum eigenvalues, if distinct, have their zeros on the unit circle, while the other eigenfilters may or may not have their zeros on the unit circle. Even if the zeros of the eigenfilters of a matrix are all on the unit circle, the matrix need not be Toeplitz. Examples are given to illustrate the different properties.
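The extreme-eigenvalue property stated in the abstract is easy to check numerically. The sketch below builds a hypothetical 4×4 symmetric Toeplitz matrix, takes the eigenvectors belonging to its smallest and largest (distinct) eigenvalues, and verifies that the zeros of the corresponding eigenfilters lie on the unit circle.

```python
import numpy as np

# Hypothetical first row/column of a symmetric Toeplitz matrix.
r = np.array([3.0, 2.0, 1.0, 0.5])
idx = np.arange(len(r))
T = r[np.abs(idx[:, None] - idx[None, :])]   # T[i, j] = r[|i - j|]

# eigh returns eigenvalues in ascending order for a symmetric matrix.
eigvals, eigvecs = np.linalg.eigh(T)

# Eigenfilter = polynomial whose coefficients are the eigenvector entries;
# take the filters for the minimum and maximum eigenvalues.
z_min = np.roots(eigvecs[:, 0])
z_max = np.roots(eigvecs[:, -1])
```

For this matrix the four eigenvalues are distinct, so per the abstract both extreme eigenfilters have all zeros with |z| = 1; the intermediate eigenfilters carry no such guarantee.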

Journal ArticleDOI
TL;DR: In this paper, it was shown that with uniform sampling, a sample of the sum of M sinusoids at time nT can be uniquely expressed as a linear combination of the 2M samples at times (n − 1)T,..., (n - 2M)T.
Abstract: It is shown that with uniform sampling, a sample of the sum of M sinusoids at time nT can be uniquely expressed as a linear combination of the 2M samples at times (n - 1)T,..., (n - 2M)T. The 2M coefficients of the linear combination are also the coefficients of a 2M-order polynomial whose roots are e^{\pm j\omega_{i}}, \omega_{i} being the frequencies of the sinusoids. Given the samples of sinusoids plus noise, a consistent estimate of the 2M coefficients is obtained by the instrumental variable method of parameter estimation. It is also possible to track time-varying frequencies by a recursive algorithm that exponentially weights out the past data. Simulation examples of one and two sinusoids are given.
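The linear-combination property above can be demonstrated directly in the noiseless case (the abstract's instrumental variable method is only needed once noise is present). The sketch below generates two hypothetical sinusoids, solves for the 2M prediction coefficients by least squares, and recovers the frequencies as the angles of the polynomial roots e^{±jω_i}.

```python
import numpy as np

# Hypothetical frequencies (rad/sample) and a sum of M = 2 real sinusoids.
w = np.array([0.7, 1.9])
n = np.arange(200)
x = np.cos(w[0] * n) + 0.8 * np.cos(w[1] * n + 0.3)

# x[n] = a1 x[n-1] + ... + a_{2M} x[n-2M]: set up and solve the
# linear prediction equations (exact here, since there is no noise).
p = 2 * len(w)
A = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)

# Roots of z^{2M} - a1 z^{2M-1} - ... - a_{2M} are e^{+/- j w_i}.
roots = np.roots(np.concatenate(([1.0], -a)))
est = np.sort(np.abs(np.angle(roots)))   # each frequency appears twice
```

Each true frequency appears twice in `est` (once for each of the conjugate roots e^{+jω} and e^{-jω}).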

Journal ArticleDOI
TL;DR: In this article, a method to model a time delay by a finite impulse response filter is presented, which is useful in simulation work that involves time delays and transforms the time delay estimation problem into one of parameter estimation.
Abstract: A method to model a time delay by a finite impulse response filter is presented. It is useful in simulation work that involves time delays and transforms the time delay estimation problem into one of parameter estimation. The benefits of this approach are the elimination of spectral estimation, a choice of many parameter estimation algorithms, and the capability to track time-varying delays. Two examples of estimating nonstationary time delays are also given.
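The paper's specific FIR parametrization is not reproduced here, but the core idea of representing a (possibly fractional) delay as an FIR filter can be sketched with a standard windowed-sinc design; the delay value, filter length, and test signal below are all hypothetical.

```python
import numpy as np

def frac_delay_fir(D, L=21):
    """FIR approximation of a delay of D samples: a Hamming-windowed,
    truncated sinc sampled at integer offsets, normalized to unity DC gain.
    (A generic design, not the method of the paper.)"""
    k = np.arange(L)
    h = np.sinc(k - D) * np.hamming(L)
    return h / h.sum()

D = 10.5                      # hypothetical non-integer delay in samples
h = frac_delay_fir(D)

# Delay a low-frequency sinusoid and compare against the ideal shift.
n = np.arange(400)
x = np.cos(0.15 * n)
y = np.convolve(x, h)[:len(n)]
err = np.max(np.abs(y[50:350] - np.cos(0.15 * (n[50:350] - D))))
```

Because the delay now lives in the filter coefficients, estimating the delay becomes a parameter estimation problem, which is the reframing the abstract describes.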

Journal ArticleDOI
TL;DR: The results of the experiments show that there is only a slight difference between the recognition accuracies for statistical features and dynamic features over the long term, and it is more efficient to use statistical features than dynamic features.
Abstract: This paper describes results of speaker recognition experiments using statistical features and dynamic features of speech spectra extracted from fixed Japanese word utterances. The speech wave is transformed into a set of time functions of log area ratios and a fundamental frequency. In the case of statistical features, a mean value and a standard deviation for each time function and a correlation matrix between these functions are calculated in the voiced portion of each word, and after a feature selection procedure, they are compared with reference features. In the case of dynamic features, the time functions are brought into time registration with reference functions. The results of the experiments show that there is only a slight difference between the recognition accuracies for statistical features and dynamic features over the long term. Since the amount of calculation necessary for recognition using statistical features is only about one-tenth of that for recognition using dynamic features, it is more efficient to use statistical features than dynamic features. When training utterances are recorded over ten months for each customer and spectral equalization is applied, 99.5 percent and 96.3 percent verification accuracies can be obtained for input utterances ten months and five years later, respectively, using statistical features extracted from two words. Combination of dynamic features with statistical features can reduce the error rate to half that obtained with either one alone.