
Showing papers on "Prime-factor FFT algorithm" published in 1970


Journal ArticleDOI
TL;DR: This paper derives explicit expressions for the mean-square error in the FFT when floating-point arithmetic is used, and gives upper and lower bounds for the total relative mean-square error.
Abstract: The fast Fourier transform (FFT) is an algorithm to compute the discrete Fourier coefficients with a substantial time saving over conventional methods. The finite word length used in the computer causes an error in computing the Fourier coefficients. This paper derives explicit expressions for the mean-square error in the FFT when floating-point arithmetic is used. Upper and lower bounds for the total relative mean-square error are given. The theoretical results are in good agreement with the actual error observed by taking the FFT of data sequences.
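As a rough empirical check of this kind of analysis, one can compare a single-precision FFT against a double-precision reference and measure the total relative mean-square error directly. The sketch below (Python; the sizes and data are illustrative assumptions, not the paper's experiment) uses scipy.fft, which carries out the transform in the precision of its input:

    import numpy as np
    from scipy.fft import fft   # scipy.fft preserves single precision

    rng = np.random.default_rng(0)
    for N in (256, 1024, 4096):
        x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
        X_ref = fft(x)                        # computed in double precision
        X_32 = fft(x.astype(np.complex64))    # computed in single precision
        # total relative mean-square error, in the spirit of the paper's
        # measure (also includes the error of rounding the input itself)
        rel_mse = np.sum(np.abs(X_32 - X_ref) ** 2) / np.sum(np.abs(X_ref) ** 2)
        print(f"N = {N:5d}: relative mean-square error = {rel_mse:.3e}")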

89 citations


Journal ArticleDOI
TL;DR: A procedure for factoring the N×N matrix representing the discrete Fourier transform is presented which does not produce shuffled data, and is shown to be most efficient for N a power of two.
Abstract: A procedure for factoring the N×N matrix representing the discrete Fourier transform is presented which does not produce shuffled data. Exactly one factor is produced for each factor of N, resulting in a fast Fourier transform valid for any N. The factoring algorithm enables the fast Fourier transform to be implemented in general with four nested loops, and with three loops if N is a power of two. No special logical organization, such as binary indexing, is required to unshuffle data. Included are two sample programs, one which writes the equations of the matrix factors employing the four key loops, and one which implements the algorithm in a fast Fourier transform for N a power of two. The algorithm is shown to be most efficient for N a power of two.
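The paper's factorization delivers output in natural order for arbitrary N. As a loose illustration of the same property in the power-of-two case, the sketch below is a self-sorting (Stockham-style) radix-2 FFT: the reordering is folded into each pass, so no bit-reversal step is ever needed. It is a generic textbook construction, not the paper's four-loop algorithm.

    import numpy as np

    def natural_order_fft(a):
        # Self-sorting radix-2 FFT: each pass writes its results in an
        # order that leaves the final output unshuffled (natural order).
        x = np.asarray(a, dtype=complex).copy()
        N = len(x)
        assert N > 0 and (N & (N - 1)) == 0, "length must be a power of two"
        y = np.empty_like(x)
        n, s = N, 1                     # n: sub-transform length, s: stride
        while n > 1:
            m = n // 2
            for p in range(m):
                w = np.exp(-2j * np.pi * p / n)          # twiddle factor
                for q in range(s):
                    u, v = x[q + s * p], x[q + s * (p + m)]
                    y[q + s * 2 * p] = u + v
                    y[q + s * (2 * p + 1)] = (u - v) * w
            x, y = y, x
            n //= 2
            s *= 2
        return x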

66 citations


Journal ArticleDOI
TL;DR: Alternative methods for the estimation of spectra are described and compared, and general questions of statistical variability, the use of regression methods to smooth the periodogram, and the use of time sectioning of the data either to smooth the estimates or to investigate non-stationarities in the data are discussed; a sketch of the time-sectioning idea follows.
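Of the techniques mentioned, time sectioning is the simplest to sketch: divide the record into K sections, window each, and average the section periodograms, trading frequency resolution for variance reduction. The Python fragment below is illustrative; the section count, length, and window are assumptions, not the paper's choices.

    import numpy as np

    # Time-sectioning sketch: average the periodograms of K windowed
    # sections (a Bartlett/Welch-style estimate; parameters illustrative).
    rng = np.random.default_rng(1)
    x = rng.standard_normal(4096)
    K, L = 16, 256                          # K sections, each of length L
    sections = x[:K * L].reshape(K, L)
    w = np.hanning(L)
    U = np.sum(w ** 2)                      # window power, for normalization
    psd = np.mean(np.abs(np.fft.rfft(sections * w, axis=1)) ** 2, axis=0) / U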

51 citations


Journal ArticleDOI
TL;DR: It is concluded that the Blackman-Tukey technique is more effective than the FFT approach in computing power spectra of short historic time series, but for long records the fast Fourier transform is the only feasible approach.
Abstract: Since controversy has arisen as to whether the Blackman-Tukey or the fast Fourier transform (FFT) technique should be used to compute power spectra, single and cross spectra have been computed by each approach for artificial data and real data to provide an empirical means for determining which technique should be used. The spectra were computed for five time series, two sets of which were actual field data. The results show that in general the two approaches give similar estimates. For a spectrum with a large slope, the FFT approach allowed more window leakage than the Blackman-Tukey approach. On the other hand, the Blackman-Tukey approach demonstrated a better window closing capability. From these empirical results it is concluded that the Blackman-Tukey technique is more effective than the FFT approach in computing power spectra of short historic time series, but for long records the fast Fourier transform is the only feasible approach.
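The two estimators are easy to place side by side. In a minimal sketch (with an assumed lag window and test signal rather than the paper's five series), the direct approach squares the FFT of the data, while the Blackman-Tukey approach transforms a lag-windowed sample autocorrelation:

    import numpy as np

    rng = np.random.default_rng(2)
    N = 1024
    t = np.arange(N)
    x = np.sin(2 * np.pi * 0.1 * t) + rng.standard_normal(N)  # tone in noise

    # Direct FFT approach: the raw periodogram.
    periodogram = np.abs(np.fft.rfft(x)) ** 2 / N

    # Blackman-Tukey approach: FFT of the lag-windowed autocorrelation.
    M = 128                                       # maximum lag (sets smoothing)
    r = np.correlate(x, x, mode="full")[N - 1:N + M] / N          # lags 0..M
    lag_window = 0.5 * (1 + np.cos(np.pi * np.arange(M + 1) / M)) # Hann window
    rw = r * lag_window
    r_even = np.concatenate([rw, rw[-2:0:-1]])    # circularly even extension
    bt_spectrum = np.fft.rfft(r_even).real        # real since r_even is even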

19 citations


Journal ArticleDOI
TL;DR: New and simple derivations for the two basic FFT algorithms are presented that provide an intuitive basis for the manipulations involved and reduce the operation to the calculation of a large number of simple two-data-point transforms.
Abstract: The fast Fourier transform (FFT) provides an effective tool for the calculation of Fourier transforms involving a large number of data points. The paper presents new and simple derivations for the two basic FFT algorithms that provide an intuitive basis for the manipulations involved. The derivation for the "decimation in time" algorithm begins with a crude analysis for the zero frequency and fundamental components using only two data samples, one at the beginning and the second at the midpoint of the period of interest. Successive interpolations of data points midway between those previously used result in a refinement of the amplitudes already determined and a first value for the next higher order coefficients. The derivation of the "decimation in frequency" algorithm begins by resolving the original data set into two new data sets, one whose transform includes only even harmonic terms and a second whose transform includes only odd harmonic terms. Since the first of the two new data sets repeats after the midpoint, it can be transformed using only the first half of the data points. The second of the new data sets is multiplied by the negative fundamental function, thereby reducing its order by one and converting it into a data set that transforms into even harmonics only; in this form it can also be transformed using only the first half of the data set. Successive applications of this procedure result finally in reducing the operation to the calculation of a large number of simple two-data-point transforms.
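A compact way to see the decimation-in-time reduction is a recursive sketch: split the samples into even- and odd-indexed halves, transform each, and combine with twiddle factors until only trivial transforms remain. This Python fragment is a generic illustration of the idea, not the paper's derivation.

    import numpy as np

    def dit_fft(x):
        # Decimation in time: transform even- and odd-indexed samples
        # separately, then combine; the recursion bottoms out at one point,
        # so the work reduces to many simple two-point combine steps.
        x = np.asarray(x, dtype=complex)
        N = len(x)
        if N == 1:
            return x
        even = dit_fft(x[0::2])
        odd = dit_fft(x[1::2])
        w = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors
        return np.concatenate([even + w * odd, even - w * odd])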

12 citations


Journal ArticleDOI
TL;DR: A simple fast Fourier transform (FFT) algorithm has been adapted specifically to calculate the experimental radial distribution function; its greatest advantage is its internal consistency, the ability to transform exactly back to the original domain.
Abstract: A simple fast Fourier transform (FFT) algorithm has been adapted specifically to calculate the experimental radial distribution function. The number of equispaced data points must be a power of two [N = 2^n for integer n] and must satisfy the Nyquist criterion [N > 2(r_max)(s_max)/2π]. When properly defined, the data set is expanded as an odd function. The greatest advantage of the FFT algorithm is its internal consistency: the ability to transform exactly back to the original domain.
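The odd-function expansion can be sketched directly: embed the samples in an odd sequence of twice the length, take the FFT (whose imaginary part is then a pure sine transform), and apply the same construction again to recover the data. The normalization below is an illustrative assumption, not the paper's program; the round trip agrees to machine precision.

    import numpy as np

    N = 2 ** 8                               # number of points, a power of two
    f = np.random.default_rng(3).standard_normal(N - 1)   # interior samples

    # Odd extension [0, f, 0, -reversed(f)] has a purely imaginary DFT.
    odd = np.concatenate([[0.0], f, [0.0], -f[::-1]])
    S = -np.fft.fft(odd).imag[1:N] / 2       # discrete sine transform of f

    # Internal consistency: the same construction, scaled by 2/N, inverts it.
    odd2 = np.concatenate([[0.0], S, [0.0], -S[::-1]])
    f_back = -np.fft.fft(odd2).imag[1:N] / 2 * (2 / N)
    print(np.max(np.abs(f_back - f)))        # ~1e-14: transforms back exactly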

6 citations


22 Jun 1970
TL;DR: This report serves as documentation for a collection of basic time-series analysis programs written for the CDC 3200 digital computer; the programs are constructed around the fast Fourier transform (FFT) algorithm.
Abstract: The report serves as documentation for a collection of basic time-series analysis programs written for the CDC 3200 digital computer. These programs are predominantly written in FORTRAN and can be easily adapted to other digital computers. They are constructed around the fast Fourier transform (FFT) algorithm. Rather than restate the theory of the FFT algorithm, which is adequately described in the existing literature, this report deals with the practical aspects of using the FFT. The problems that can, and in many instances do, occur in computing spectral estimates are also addressed on the basis of an extensive literature review.

6 citations


Journal ArticleDOI
TL;DR: In this article, a subband Hilbert transform based on subband decomposition is proposed for analytic signal processing in single-sideband amplitude modulation and demodulating frequency-modulated signals.
Abstract: A new and fast approximate Hilbert transform based on subband decomposition is presented. This new algorithm is called the subband (SB)-Hilbert transform. The reduction in complexity is obtained for narrow-band signal applications by considering only the band of most energy. Different properties of the SB-Hilbert transform are discussed with simulation examples. The new algorithm is compared with the full band Hilbert transform in terms of complexity and accuracy. The aliasing errors taking place in the algorithm are found by applying the Hilbert transform to the inverse FFT (time signal) of the aliasing errors of the SB-FFT of the input signal. Different examples are given to find the analytic signal using SB-Hilbert transform with a varying number of subbands. Applications of the new algorithm are given in single-sideband amplitude modulation and in demodulating frequency-modulated signals in communication systems. Key Words : Fast Algorithms, Hilbert Transform, Analytic Signal Processing.
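For reference, the full-band FFT method that the SB-Hilbert transform is compared against can be written in a few lines: zero the negative-frequency bins, double the positive ones, and inverse-transform. This is the standard construction (as in scipy.signal.hilbert), shown here as an assumed baseline rather than the paper's code.

    import numpy as np

    def analytic_signal(x):
        # Full-band FFT Hilbert method: the real part of the result is x,
        # the imaginary part is the Hilbert transform of x.
        x = np.asarray(x, dtype=float)
        N = len(x)
        X = np.fft.fft(x)
        h = np.zeros(N)
        h[0] = 1.0                        # keep DC once
        if N % 2 == 0:
            h[N // 2] = 1.0               # keep the Nyquist bin once
            h[1:N // 2] = 2.0             # double the positive frequencies
        else:
            h[1:(N + 1) // 2] = 2.0
        return np.fft.ifft(X * h)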

4 citations


17 Aug 1970
TL;DR: In this article, the authors describe a Z-transform algorithm for the spectral analysis of signals that allows one to get closer to the poles of a signal, effectively reducing the signal's bandwidth and sharpening its peak.
Abstract: A Z-transform algorithm, developed for the spectral analysis of signals, allows one to get closer to the poles of a signal, effectively reducing the signal's bandwidth and sharpening its peak. It can give a high-resolution, narrow-band frequency analysis with frequency spacing Δf ≤ 1/T, where T is the total length of the analysis interval. The algorithm also enhances signal poles that lie on circular or spiral contours beginning at almost any point in the Z-plane, and the angular spacing of points is an arbitrary constant. Since this algorithm takes advantage of high-speed convolution, it is almost as fast as, and more flexible than, the fast Fourier transform (FFT).
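The algorithm described is the chirp z-transform, and its use of high-speed convolution can be sketched compactly: pre-multiply the data by a chirp, convolve with the inverse chirp via FFTs, and post-multiply. The Python fragment below is a generic Bluestein-style implementation with assumed parameters, not the report's program.

    import numpy as np

    def czt(x, M, W, A=1.0 + 0.0j):
        # Evaluate the z-transform of x at the M points z_k = A * W**(-k),
        # k = 0..M-1, using FFT-based (high-speed) convolution.
        x = np.asarray(x, dtype=complex)
        N = len(x)
        n, k = np.arange(N), np.arange(M)
        y = x * A ** (-n) * W ** (n * n / 2.0)        # pre-multiply by a chirp
        L = 1
        while L < N + M - 1:                          # convolution length
            L *= 2
        v = np.zeros(L, dtype=complex)                # inverse-chirp kernel
        v[:M] = W ** (-(k * k) / 2.0)
        v[L - N + 1:] = W ** (-(n[:0:-1] ** 2) / 2.0) # negative lags, wrapped
        g = np.fft.ifft(np.fft.fft(y, L) * np.fft.fft(v))
        return g[:M] * W ** (k * k / 2.0)             # post-multiply by a chirp

    # Zoomed narrow-band analysis: 64 bins at very fine spacing (illustrative).
    x = np.random.default_rng(4).standard_normal(128)
    X = czt(x, 64, W=np.exp(-2j * np.pi * 0.0005), A=np.exp(2j * np.pi * 0.1))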