
Showing papers on "Fast Fourier transform published in 1981"


Book
01 Jan 1981
TL;DR: This book develops fast convolution and fast Fourier transform algorithms, building on elements of number theory and polynomial algebra, and covers topics ranging from the classical FFT and its quantization effects to polynomial transforms and number theoretic transforms.
Abstract: 1 Introduction.- 1.1 Introductory Remarks.- 1.2 Notations.- 1.3 The Structure of the Book.- 2 Elements of Number Theory and Polynomial Algebra.- 2.1 Elementary Number Theory.- 2.1.1 Divisibility of Integers.- 2.1.2 Congruences and Residues.- 2.1.3 Primitive Roots.- 2.1.4 Quadratic Residues.- 2.1.5 Mersenne and Fermat Numbers.- 2.2 Polynomial Algebra.- 2.2.1 Groups.- 2.2.2 Rings and Fields.- 2.2.3 Residue Polynomials.- 2.2.4 Convolution and Polynomial Product Algorithms in Polynomial Algebra.- 3 Fast Convolution Algorithms.- 3.1 Digital Filtering Using Cyclic Convolutions.- 3.1.1 Overlap-Add Algorithm.- 3.1.2 Overlap-Save Algorithm.- 3.2 Computation of Short Convolutions and Polynomial Products.- 3.2.1 Computation of Short Convolutions by the Chinese Remainder Theorem.- 3.2.2 Multiplications Modulo Cyclotomic Polynomials.- 3.2.3 Matrix Exchange Algorithm.- 3.3 Computation of Large Convolutions by Nesting of Small Convolutions.- 3.3.1 The Agarwal-Cooley Algorithm.- 3.3.2 The Split Nesting Algorithm.- 3.3.3 Complex Convolutions.- 3.3.4 Optimum Block Length for Digital Filters.- 3.4 Digital Filtering by Multidimensional Techniques.- 3.5 Computation of Convolutions by Recursive Nesting of Polynomials.- 3.6 Distributed Arithmetic.- 3.7 Short Convolution and Polynomial Product Algorithms.- 3.7.1 Short Circular Convolution Algorithms.- 3.7.2 Short Polynomial Product Algorithms.- 3.7.3 Short Aperiodic Convolution Algorithms.- 4 The Fast Fourier Transform.- 4.1 The Discrete Fourier Transform.- 4.1.1 Properties of the DFT.- 4.1.2 DFTs of Real Sequences.- 4.1.3 DFTs of Odd and Even Sequences.- 4.2 The Fast Fourier Transform Algorithm.- 4.2.1 The Radix-2 FFT Algorithm.- 4.2.2 The Radix-4 FFT Algorithm.- 4.2.3 Implementation of FFT Algorithms.- 4.2.4 Quantization Effects in the FFT.- 4.3 The Rader-Brenner FFT.- 4.4 Multidimensional FFTs.- 4.5 The Bruun Algorithm.- 4.6 FFT Computation of Convolutions.- 5 Linear Filtering Computation of Discrete Fourier Transforms.- 5.1 The Chirp z-Transform Algorithm.- 5.1.1 Real Time Computation of Convolutions and DFTs Using the Chirp z-Transform.- 5.1.2 Recursive Computation of the Chirp z-Transform.- 5.1.3 Factorizations in the Chirp Filter.- 5.2 Rader's Algorithm.- 5.2.1 Composite Algorithms.- 5.2.2 Polynomial Formulation of Rader's Algorithm.- 5.2.3 Short DFT Algorithms.- 5.3 The Prime Factor FFT.- 5.3.1 Multidimensional Mapping of One-Dimensional DFTs.- 5.3.2 The Prime Factor Algorithm.- 5.3.3 The Split Prime Factor Algorithm.- 5.4 The Winograd Fourier Transform Algorithm (WFTA).- 5.4.1 Derivation of the Algorithm.- 5.4.2 Hybrid Algorithms.- 5.4.3 Split Nesting Algorithms.- 5.4.4 Multidimensional DFTs.- 5.4.5 Programming and Quantization Noise Issues.- 5.5 Short DFT Algorithms.- 5.5.1 2-Point DFT.- 5.5.2 3-Point DFT.- 5.5.3 4-Point DFT.- 5.5.4 5-Point DFT.- 5.5.5 7-Point DFT.- 5.5.6 8-Point DFT.- 5.5.7 9-Point DFT.- 5.5.8 16-Point DFT.- 6 Polynomial Transforms.- 6.1 Introduction to Polynomial Transforms.- 6.2 General Definition of Polynomial Transforms.- 6.2.1 Polynomial Transforms with Roots in a Field of Polynomials.- 6.2.2 Polynomial Transforms with Composite Roots.- 6.3 Computation of Polynomial Transforms and Reductions.- 6.4 Two-Dimensional Filtering Using Polynomial Transforms.- 6.4.1 Two-Dimensional Convolutions Evaluated by Polynomial Transforms and Polynomial Product Algorithms.- 6.4.2 Example of a Two-Dimensional Convolution Computed by Polynomial Transforms.- 6.4.3 Nesting Algorithms.- 6.4.4 Comparison with Conventional Convolution 
Algorithms.- 6.5 Polynomial Transforms Defined in Modified Rings.- 6.6 Complex Convolutions.- 6.7 Multidimensional Polynomial Transforms.- 7 Computation of Discrete Fourier Transforms by Polynomial Transforms.- 7.1 Computation of Multidimensional DFTs by Polynomial Transforms.- 7.1.1 The Reduced DFT Algorithm.- 7.1.2 General Definition of the Algorithm.- 7.1.3 Multidimensional DFTs.- 7.1.4 Nesting and Prime Factor Algorithms.- 7.1.5 DFT Computation Using Polynomial Transforms Defined in Modified Rings of Polynomials.- 7.2 DFTs Evaluated by Multidimensional Correlations and Polynomial Transforms.- 7.2.1 Derivation of the Algorithm.- 7.2.2 Combination of the Two Polynomial Transform Methods.- 7.3 Comparison with the Conventional FFT.- 7.4 Odd DFT Algorithms.- 7.4.1 Reduced DFT Algorithm. N = 4.- 7.4.2 Reduced DFT Algorithm. N = 8.- 7.4.3 Reduced DFT Algorithm. N = 9.- 7.4.4 Reduced DFT Algorithm. N = 16.- 8 Number Theoretic Transforms.- 8.1 Definition of the Number Theoretic Transforms.- 8.1.1 General Properties of NTTs.- 8.2 Mersenne Transforms.- 8.2.1 Definition of Mersenne Transforms.- 8.2.2 Arithmetic Modulo Mersenne Numbers.- 8.2.3 Illustrative Example.- 8.3 Fermat Number Transforms.- 8.3.1 Definition of Fermat Number Transforms.- 8.3.2 Arithmetic Modulo Fermat Numbers.- 8.3.3 Computation of Complex Convolutions by FNTs.- 8.4 Word Length and Transform Length Limitations.- 8.5 Pseudo Transforms.- 8.5.1 Pseudo Mersenne Transforms.- 8.5.2 Pseudo Fermat Number Transforms.- 8.6 Complex NTTs.- 8.7 Comparison with the FFT.- Appendix A Relationship Between DFT and Convolution Polynomial Transform Algorithms.- A.1 Computation of Multidimensional DFT's by the Inverse Polynomial Transform Algorithm.- A.1.1 The Inverse Polynomial Transform Algorithm.- A.1.2 Complex Polynomial Transform Algorithms.- A.1.3 Round-off Error Analysis.- A.2 Computation of Multidimensional Convolutions by a Combination of the Direct and Inverse Polynomial Transform Methods.- A.2.1 Computation of Convolutions by DFT Polynomial Transform Algorithms.- A.2.2 Convolution Algorithms Based on Polynomial Transforms and Permutations.- A.3 Computation of Multidimensional Discrete Cosine Transforms by Polynomial Transforms.- A.3.1 Computation of Direct Multidimensional DCT's.- A.3.2 Computation of Inverse Multidimensional DCT's.- Appendix B Short Polynomial Product Algorithms.- Problems.- References.

867 citations


Journal ArticleDOI
TL;DR: A Fortran program that calculates the discrete Fourier transform using a prime factor algorithm is presented; it is faster than both the Cooley-Tukey algorithm and the Winograd nested algorithm.
Abstract: This paper presents a Fortran program that calculates the discrete Fourier transform using a prime factor algorithm. A very simple indexing scheme is employed that results in a flexible, modular algorithm that efficiently calculates the DFT in-place. A modification of this algorithm gives the output both in-place and in-order at a slight cost in flexibility. A comparison shows it to be faster than both the Cooley-Tukey algorithm and the Winograd nested algorithm.
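
As a rough, non-authoritative illustration of the index mapping that prime factor algorithms rest on (this is not the paper's Fortran program, and the function name and structure are assumptions), a short Python/numpy sketch of the Good-Thomas mapping for N = N1*N2 with gcd(N1, N2) = 1:

    import numpy as np

    def good_thomas_dft(x, N1, N2):
        # Illustrative sketch: a length-N DFT (N = N1*N2, gcd(N1, N2) = 1) computed
        # as an N1 x N2 two-dimensional DFT with no twiddle factors between stages.
        N = N1 * N2
        assert len(x) == N and np.gcd(N1, N2) == 1
        M1 = pow(N1, -1, N2)                        # N1^{-1} mod N2
        M2 = pow(N2, -1, N1)                        # N2^{-1} mod N1
        n1, n2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
        x2 = x[(n1 * N2 + n2 * N1) % N]             # input (Ruritanian) index map
        X2 = np.fft.fft2(x2)                        # short DFTs along both axes
        X = np.empty(N, dtype=complex)
        X[(n1 * N2 * M2 + n2 * N1 * M1) % N] = X2   # output (CRT) index map
        return X

    x = np.random.randn(15) + 1j * np.random.randn(15)
    print(np.allclose(good_thomas_dft(x, 3, 5), np.fft.fft(x)))   # True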

183 citations


Journal ArticleDOI
TL;DR: The merits of three alternative methods for estimating spectral features, all based on autoregressive (AR) modeling, are compared to the fast Fourier transform (FFT), and it is demonstrated that a fifth-order filter is sufficient to estimate EEG characteristics in 90 percent of the cases.
Abstract: The hypothesis that an electroencephalogram (EEG) can be analyzed by computer using a series of basic descriptive elements of short duration (1-5 s) has prompted the development of methods to extract the best possible features from very short (1 s) time intervals. In this paper, the merits of three alternative methods for estimating spectral features are compared to the fast Fourier transform (FFT). These procedures, based on autoregressive (AR) modeling, are: 1) Kalman filtering, 2) the Burg algorithm, and 3) the Yule-Walker (YW) approach. The methods are reportedly able to provide high resolution spectral estimates from short EEG intervals, even in cases where intervals contain less than a full period of a cyclic waveform. The first method is adaptive, the other two are not. Using Akaike's final prediction error (FPE) criterion, it was demonstrated that a fifth-order filter is sufficient to estimate EEG characteristics in 90 percent of the cases. However, visual inspection of the resulting spectra revealed that the order indicated by the FPE criterion is generally too low and better spectra can be obtained using a tenth-order AR model. The Yule-Walker method resulted in many unstable models and should not be used. Of the two remaining methods, i.e., Burg and Kalman, the first provides spectra with peaks having a smaller bandwidth than the Kalman-filter method. Additional experiments with the Burg method revealed that, on the average, the same results were obtained using the FFT.
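
For orientation only, a minimal Python/numpy sketch (assumed names, not the authors' programs) of the Yule-Walker route compared above: solve the autocorrelation normal equations for the AR coefficients, then evaluate the model spectrum with one FFT:

    import numpy as np

    def yule_walker_spectrum(x, order, nfft=256):
        # Estimate an AR(order) model from the sample autocorrelation and
        # evaluate its power spectrum on nfft/2 + 1 frequency bins.
        x = np.asarray(x, float) - np.mean(x)
        N = len(x)
        r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(order + 1)])
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        a = np.linalg.solve(R, r[1:])            # AR (prediction) coefficients
        sigma2 = r[0] - np.dot(a, r[1:])         # driving-noise variance
        A = np.fft.rfft(np.concatenate(([1.0], -a)), nfft)
        return sigma2 / np.abs(A) ** 2           # AR spectral estimate

    # Example: a 1-s "EEG-like" segment of a noisy 10 Hz rhythm sampled at 128 Hz
    fs = 128
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(fs)
    P = yule_walker_spectrum(x, order=10)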

172 citations


Journal ArticleDOI
TL;DR: In this article, the forward and backward propagation of harmonic acoustic fields using Fourier transform methods was studied for planar vibrators operating above and below coincidence, and numerical results illustrate the acoustic nearfield as a function of distance from the vibrator.
Abstract: The forward and backward propagation of harmonic acoustic fields using Fourier transform methods is presented. In particular, the forward propagation of a velocity distribution to obtain a pressure field and the backward propagation of a pressure field to obtain a velocity distribution are addressed. Numerical examples are presented to illustrate the nearfield behavior of the pressure field from complex planar vibrators, e.g., an ultrasonic transducer or plate, with nonuniform velocity distributions. The numerical results, which were obtained via the use of FFT algorithms, are presented for vibrators which are operating above and below coincidence. These results illustrate the acoustic nearfield as a function of distance from the vibrator. Numerical results are also presented to illustrate the backward projection method. The pressure field of a 3×3 focused array is back projected to obtain the velocity distribution for several cases of interest. These results illustrate the utility of the transform method and the effect of spatial windows or filters in its implementation using FFT algorithms.
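
A minimal angular-spectrum sketch of the Fourier propagation idea (plane-to-plane propagation of a scalar field on an N x N grid only; the paper's velocity-to-pressure step is not reproduced, and the names are assumptions):

    import numpy as np

    def angular_spectrum_propagate(p0, dx, k, z):
        # Propagate a monochromatic field p0 (grid spacing dx, wavenumber k) a
        # distance z by filtering its angular spectrum of plane waves.
        N = p0.shape[0]
        kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)
        KX, KY = np.meshgrid(kx, kx, indexing="ij")
        kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # imaginary kz: evanescent waves
        P0 = np.fft.fft2(p0)
        return np.fft.ifft2(P0 * np.exp(1j * kz * z))

Backward propagation corresponds to z < 0, where the evanescent components grow; this is why the abstract's spatial windows or filters matter in practice.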

136 citations


Journal ArticleDOI
TL;DR: The results of computer simulations show clearly how the process of forcing the image to conform to a priori object data reduces artifacts arising from limited data available in the Fourier domain.
Abstract: An iterative technique is proposed for improving the quality of reconstructions from projections when the number of projections is small or the angular range of projections is limited. The technique consists of transforming repeatedly between image and transform spaces and applying a priori object information at each iteration. The approach is a generalization of the Gerchberg-Papoulis algorithm, a technique for extrapolating in the Fourier domain by imposing a space-limiting constraint on the object in the spatial domain. A priori object data that may be applied, in addition to truncating the image beyond the known boundaries of the object, include limiting the maximum range of variation of the physical parameter being imaged. The results of computer simulations show clearly how the process of forcing the image to conform to a priori object data reduces artifacts arising from limited data available in the Fourier domain.
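
A minimal sketch of the alternating-constraint iteration described above, under assumed variable names and a simple zero initialization (not the authors' implementation):

    import numpy as np

    def constrained_recon(F_meas, known, support, vmin, vmax, n_iter=50):
        # F_meas: measured Fourier samples; known: boolean mask of measured bins;
        # support: object mask; [vmin, vmax]: bounds on the imaged parameter.
        img = np.zeros_like(F_meas, dtype=float)
        for _ in range(n_iter):
            F = np.fft.fft2(img)
            F[known] = F_meas[known]          # keep the measured Fourier data
            img = np.fft.ifft2(F).real
            img[~support] = 0.0               # truncate outside the known object boundary
            img = np.clip(img, vmin, vmax)    # limit the physical parameter's range
        return img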

116 citations


Journal ArticleDOI
TL;DR: An optimally regularized (filtered) Fourier series can be used most effectively for estimating higher-order derivatives of noisy data sequences, such as occur in biomechanical investigations.

112 citations


Journal ArticleDOI
TL;DR: A new iterative algorithm for maximum entropy power spectrum estimation is presented, which utilizes the computational efficiency of the fast Fourier transform (FFT) algorithm and has been empirically observed to solve the maximum entropy power spectrum estimation problem.
Abstract: A new iterative algorithm for the maximum entropy power spectrum estimation is presented in this paper. The algorithm, which is applicable to two-dimensional signals as well as one-dimensional signals, utilizes the computational efficiency of the fast Fourier transform (FFT) algorithm and has been empirically observed to solve the maximum entropy power spectrum estimation problem. Examples are shown to illustrate the performance of the new algorithm.

101 citations


01 Nov 1981
TL;DR: A new technique for representing digital pictures that greatly simplifies the problem of finding the correspondence between components in the description of two pictures, based on a new class of reversible transforms (the Difference of Low Pass or DOLP transform).
Abstract: This dissertation presents a new technique for representing digital pictures. The principal benefit of this representation is that it greatly simplifies the problem of finding the correspondence between components in the description of two pictures. This representation technique is based on a new class of reversible transforms (the Difference of Low Pass or DOLP transform). A fast algorithm for computing the DOLP transform is then presented. This algorithm, called cascade convolution with expansion, is based on the auto-convolution scaling property of Gaussian functions. Techniques are then described for constructing a structural description of an image from its Sampled DOLP transform. The symbols in this description are detected by detecting local peaks and ridges in each band-pass image, and among all of the band-pass images. This description has the form of a tree of peaks, with the peaks interconnected by chains of symbols from the ridges. The tree of peaks has a structure which can be matched despite changes in size, orientation, or position of the gray scale shape that is described.
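
A heavily simplified Python sketch of the Difference-of-Low-Pass idea (using scipy Gaussian filtering, without the resampling "expansion" step; the parameter choices are illustrative, not the dissertation's):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dolp(image, levels=5, sigma0=1.0, ratio=np.sqrt(2)):
        # Band-pass images are differences of successively broader Gaussian
        # low-pass copies; the original is recovered by summing all bands plus
        # the final low-pass image (a telescoping sum), so the transform is reversible.
        lowpass = [image.astype(float)]
        for k in range(levels):
            lowpass.append(gaussian_filter(lowpass[-1], sigma0 * ratio**k))
        bands = [lowpass[k] - lowpass[k + 1] for k in range(levels)]
        return bands, lowpass[-1]     # sum(bands) + lowpass[-1] == image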

96 citations


Journal ArticleDOI
TL;DR: Using experimental data subject to noise and drift, it is found that the structure function can be computed to higher accuracy than the correlation function while requiring one to two orders of magnitude fewer data points than correlation functions of comparable information content.
Abstract: Using experimental data subject to noise and drift, we find the structure function can be computed to higher accuracy, yet using less data, than the correlation function. While this tendency is in line with theoretical reasoning, we seem to be the first to report on quantitative aspects. Taking wall pressure data from a transonic wind tunnel, our structure functions are obtained with one to two orders of magnitude fewer data points than correlation functions of comparable information content. These advantages apply to auto- and cross-structure functions alike when compared to auto- and cross-correlation functions, respectively. Some comments are added on the possibility of designing digital “structurators” similar to existing digital correlators, either as software products using the FFT and recursive algorithms, or as hardware products in the form of fast special purpose parallel processors.
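
For concreteness, a tiny Python sketch of the second-order structure function referred to above (assumed names; not the authors' processing chain):

    import numpy as np

    def structure_function(x, max_lag):
        # D(tau) = <(x(t + tau) - x(t))^2>; differencing suppresses slow drift.
        # For drift-free stationary data, D(tau) = 2 * (R(0) - R(tau)).
        x = np.asarray(x, float)
        return np.array([np.mean((x[k:] - x[:len(x) - k]) ** 2)
                         for k in range(1, max_lag + 1)])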

89 citations


Journal ArticleDOI
TL;DR: In this paper, a general method for convolving discrete distributions using Fast Fourier Transforms is described, which can be used in evaluating reliability of any system involving discrete or discretised convolution and has been used in power system studies to deduce capacity-outage probability tables and to solve probabilistic load flows.
Abstract: This paper describes a general method for convolving discrete distributions using Fast Fourier Transforms. It can be used in evaluating reliability of any system involving discrete or discretised convolution. It has been used in power system studies to deduce capacity-outage probability tables and to solve probabilistic load flows. These studies have shown it to be much less time-consuming and more efficient than the conventional direct methods. The method is used in the paper to evaluate the loss of load probability of a generating system in order to demonstrate the method's application and inherent merits.
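
A rough Python sketch of the core step (the two-unit example data are invented for illustration; this is not the paper's program): the capacity-outage distribution of independent units is the convolution of their individual outage probability mass functions, carried out here with FFTs on a common MW grid:

    import numpy as np

    def convolve_pmfs_fft(pmfs):
        # Convolve a list of discrete distributions (all on the same grid spacing)
        # by multiplying their FFTs; zero-pad to avoid circular wrap-around.
        total = sum(len(p) for p in pmfs) - len(pmfs) + 1
        n = int(2 ** np.ceil(np.log2(total)))
        acc = np.fft.rfft(pmfs[0], n)
        for p in pmfs[1:]:
            acc = acc * np.fft.rfft(p, n)
        return np.clip(np.fft.irfft(acc, n)[:total], 0.0, None)

    # Example: two 50 MW units with forced outage rate 0.05, on a 10 MW grid
    unit = np.zeros(6); unit[0], unit[5] = 0.95, 0.05
    outage_table = convolve_pmfs_fft([unit, unit])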

83 citations


Journal ArticleDOI
TL;DR: The Cooley-Tukey fast Fourier transform (FFT) algorithm is generalized to the multidimensional case in a natural way which allows for the evaluation of discrete Fourier transforms of rectangularly or hexagonally sampled signals or of signals which are sampled on an arbitrary periodic grid in either the spatial or Fourier domain.
Abstract: In this paper the Cooley-Tukey fast Fourier transform (FFT) algorithm is generalized to the multidimensional case in a natural way which allows for the evaluation of discrete Fourier transforms of rectangularly or hexagonally sampled signals or of signals which are sampled on an arbitrary periodic grid in either the spatial or Fourier domain. This general algorithm incorporates both the traditional rectangular row-column and vector-radix algorithms as special cases. This FFT algorithm is shown to result from the factorization of an integer matrix; for each factorization of that matrix, a different algorithm can be developed. This paper presents the general algorithm, discusses its computational efficiency, and relates it to existing multi-dimensional FFT algorithms.

Patent
26 Oct 1981
TL;DR: In this article, a synthetic aperture radar (SAR) system having a range correlator (10) is provided with a hybrid azimuth correlator (12) for correlation utilizing a block-pipelined Fast Fourier Transform (12a) with delay elements (Z) that delay the SAR range correlated data so as to embed a corner-turning function in the Fourier transform operation as the range correlated SAR data is converted from the time domain to the frequency domain.
Abstract: A synthetic aperture radar system (SAR) having a range correlator (10) is provided with a hybrid azimuth correlator (12) for correlation utilizing a block-pipelined Fast Fourier Transform (12a) having a predetermined FFT transform size, with delay elements (Z) that delay the SAR range correlated data so as to embed a corner-turning function in the Fourier transform operation as the range correlated SAR data is converted from the time domain to the frequency domain. A transversal filter (12b) is connected to receive the SAR data in the frequency domain and, from a generator (14b), a range migration compensation function, D, applied to a programmable shift register (30) for accurate range migration compensation; weights, W_i, applied to multipliers (35-38) for interpolation; and an azimuth reference function, φ_j, in the frequency domain applied to a multiplier (42) for correlation of the SAR data. Following the transversal filter is a block-pipelined inverse FFT (12c) used to restore the azimuth correlated data in the frequency domain to the time domain for imaging. The FFT transform size is selected to accommodate the different SAR azimuth aperture lengths, number of looks and prefiltering requirements.

Book ChapterDOI
David S. Wise1
01 Jan 1981
TL;DR: A two-layer pattern is presented for the crossover pattern that appears as the FFT signal flow graph and in many switching networks like the banyan, delta, or omega nets, providing uniform propagation delay and capacitance, and ameliorating design problems for VLSI implementation.
Abstract: A two-layer pattern is presented for the crossover pattern that appears as the FFT signal flow graph and in many switching networks like the banyan, delta, or omega nets. It is characterized by constant path length and a regular pattern, providing uniform propagation delay and capacitance, and ameliorating design problems for VLSI implementation. These are important issues since path length grows linearly with the number of inputs to such networks, even though switching delay seems to grow only logarithmically.

Journal ArticleDOI
TL;DR: The approximation problem for high-order minimum phase FIR filters is solved without requiring any polynomial factorization, using a modified Parks-McClellan program and the FFT algorithm.

Journal ArticleDOI
TL;DR: Application of a simple end correction to the quasi-fast Hankel-transform algorithm to a Gaussian beam shows that for a given accuracy, the use of this end correction permits a reduction by a factor of 8 in storage as well as a factor of 8 in running time.
Abstract: An explicit evaluation is made of a simple end correction to the quasi-fast Hankel-transform algorithm. Application to a Gaussian beam shows that for a given accuracy, the use of this end correction permits a reduction of a factor of 8 in storage as well as a factor of 8 in running time.

Journal ArticleDOI
TL;DR: In this article, a new window design technique was proposed to improve the detectability of a small tone without degrading resolvability, and several examples of detecting a three-tone signal have demonstrated the superiority of the new window.
Abstract: This paper presents a new window design technique and discusses its effect on the detection of harmonic signals in the presence of nearby strong harmonic interference. Four design parameters are introduced to independently control the pattern falloff rate, the overall sidelobe level, the near-sidelobe level and the depth of a steerable wide dip. Since the deep dip can be steered to any frequency, the new window effectively improves the detectability of a small tone without degrading resolvability. Contrary to the conventional approach of initially specifying the continuous weighting, the new design technique starts with constructing the spectral window which meets the specifications and then employs the fast Fourier transform to compute the discrete weighting. No iterative sampling or perturbation procedure is required. Numerous examples are given to demonstrate the flexibility of the new window. Several examples of detecting a three-tone signal have demonstrated the superiority of the new window in its detectability, resolvability, and accuracy of measuring the tone frequencies and amplitudes.
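
A toy Python sketch of the design order described above, specifying DFT-bin samples of the spectral window first and transforming back to the discrete weighting (the paper's four-parameter window family is not reproduced; the names and the Hann-like example are assumptions):

    import numpy as np

    def window_from_spectrum(W_bins):
        # W_bins: desired DFT-bin samples of the spectral window.
        w = np.fft.ifft(W_bins).real
        return np.fft.fftshift(w)          # center the symmetric time-domain weighting

    # Example: a crude raised-cosine specification on 64 bins
    N = 64
    W = np.zeros(N); W[0], W[1], W[-1] = 1.0, 0.5, 0.5
    w = window_from_spectrum(W)            # a Hann-shaped weighting, up to scale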

Journal ArticleDOI
TL;DR: Using a 1-D analysis of a double-exposed specklegram, the influence of the diffraction halo removal in the numerical processing of data when it is done via discrete Fourier transform is explored.
Abstract: Using a 1-D analysis of a double-exposed specklegram, we explore the influence of the diffraction halo removal in the numerical processing of data when it is done via discrete Fourier transform. Relative errors in displacements appear if the removal is not done, and they increase as fringe visibility and fringe density decrease. These errors are <0.5% for fringe densities larger than six fringes within the diffraction halo.

Patent
15 Apr 1981
TL;DR: In this article, the authors proposed a pulse compression system for use with step approximation to linear FM and rank coded signals to eliminate sampling errors and range time grating lobes while providing large pulse compression ratios, comprising: a receiving circuit for receiving echo signals, a converting circuit for converting echo signals from the receiver to I and Q baseband signals without clock sampling, a sliding window discrete Fourier transform or fast Fourier transform (FFT) circuit including a tapped delay line and a plurality of resistor-type phase weighting networks and adders.
Abstract: A pulse compression system for use with step approximation to linear FM and rank coded signals to eliminate sampling errors and range time grating lobes while providing large pulse compression ratios, comprising: a receiving circuit for receiving echo signals, a converting circuit for converting echo signals from the receiver to I and Q baseband signals without clock sampling, a sliding window discrete Fourier transform or fast Fourier transform (FFT) circuit including a tapped delay line and a plurality of resistor-type phase weighting networks and adders for generating a plurality of output signals representing the different frequency steps in the signals, a delay circuit for differentially delaying the output frequency steps from the sliding window DFT circuit so that the output steps occur simultaneously, and a summer for adding the differentially delayed outputs to yield a short pulse with a peak amplitude when a coded echo pulse is correctly indexed within the delay line of the DFT circuit.
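
As a software stand-in for the patent's tapped-delay-line arrangement (not the patented circuit; the names are assumptions), a short Python sketch of a sliding-window DFT in which each bin is updated recursively as new samples arrive:

    import numpy as np

    def sliding_dft(x, M):
        # After each new sample, bins[k] equals the length-M DFT of the most
        # recent M samples (zeros are assumed before the record starts).
        bins = np.zeros(M, dtype=complex)
        window = np.zeros(M)
        twiddle = np.exp(2j * np.pi * np.arange(M) / M)
        outputs = []
        for sample in x:
            oldest = window[0]
            window = np.append(window[1:], sample)
            bins = (bins + sample - oldest) * twiddle   # recursive bin update
            outputs.append(bins.copy())
        return np.array(outputs)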

Journal ArticleDOI
TL;DR: Both 1-D and 2-D methods are developed which overcome both of the above limitations and can unite techniques used by Hadamard spectroscopy and coded aperture imaging with uniformly redundant arrays into the same mathematical basis.
Abstract: In many fields (e.g., spectroscopy, imaging spectroscopy, photoacoustic imaging, coded aperture imaging) binary bit patterns known as m sequences are used to encode (by multiplexing) a series of measurements in order to obtain a larger throughput. The observed measurements must be decoded to obtain the desired spectrum (or image in the case of coded aperture imaging). Decoding in the past has used a technique called the fast Hadamard transform (FHT), whose chief advantage is that it can reduce the computational effort from N^2 multiplies to N log2 N additions or subtractions. However, the FHT has the disadvantage that it does not readily allow one to sample more finely than the number of bits used in the m sequence. This can limit the obtainable resolution and cause confusion near the sample boundaries (phasing errors). We have developed both 1-D and 2-D methods (called fast delta Hadamard transforms, FDHT) which overcome both of the above limitations. Applications of the FDHT are discussed in the context of Hadamard spectroscopy and coded aperture imaging with uniformly redundant arrays. Special emphasis has been placed on how the FDHT can unite techniques used by both of these fields into the same mathematical basis.
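
For reference, a small Python sketch of the plain fast (Walsh-)Hadamard transform, the N log2 N add/subtract decoding step mentioned above (this is not the authors' FDHT; the input length must be a power of two):

    import numpy as np

    def fwht(a):
        # In-place butterflies using only additions and subtractions.
        a = np.array(a, float)
        h = 1
        while h < len(a):
            for i in range(0, len(a), 2 * h):
                x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
                a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
            h *= 2
        return a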

Journal ArticleDOI
TL;DR: Some applications to geoscience are given, along with examples where an FFT subroutine based on the Cooley-Tukey algorithm is undesirable.

Journal ArticleDOI
M.M Tropper1
TL;DR: A rigorous theoretical analysis of the echo-planar imaging technique is presented, as a result of which a general reconstruction algorithm, applicable to any form of periodic gradient modulation, is derived.


Journal ArticleDOI
TL;DR: Using complex numbers of the form a + b μ (where μ is a complex cube root of unity), a radix-6 FFT algorithm in which the component six-point DFT's do not require any multiplication is developed.
Abstract: Using complex numbers of the form a + bμ (where μ is a complex cube root of unity), a radix-6 FFT algorithm in which the component six-point DFT's do not require any multiplication is developed. This number system was used by Dubois and Venetsanopoulos to implement a radix-3 FFT. The number of arithmetic operations for the new algorithm is compared with those of standard radix-6, radix-2, and radix-4 FFT algorithms.
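
A tiny sketch of arithmetic in the a + bμ representation used above, where μ is a complex cube root of unity so that μ² = -1 - μ (the function names are assumptions). All six sixth roots of unity have coordinates in {-1, 0, 1} in this representation, which is why the six-point DFTs need no real multiplications:

    # (a + b*mu)(c + d*mu) = (a*c - b*d) + (a*d + b*c - b*d)*mu, using mu^2 = -1 - mu
    def mul(p, q):
        a, b = p
        c, d = q
        return (a * c - b * d, a * d + b * c - b * d)

    def to_complex(p):
        mu = complex(-0.5, 3 ** 0.5 / 2)   # mu = exp(2j*pi/3)
        a, b = p
        return a + b * mu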

Journal ArticleDOI
TL;DR: This paper evaluates the error performance of radix-4 FFT algorithms (the input quantization error and the coefficient inaccuracy are not considered), assuming fixed-point two's complement arithmetic.

Journal ArticleDOI
TL;DR: Conditions are given under which the two implementations are essentially equivalent for white noise inputs so that the frequency-domain algorithm can be used to predict the mean, variance, time response, and MSE of the time-domain algorithm.
Abstract: Adaptive cancelling can be performed in the frequency domain with significant computational savings over time-domain implementations. This paper considers the statistical behavior of a frequency-domain adaptive canceller with white noise inputs, and develops expressions for the mean and variance of the adaptive filter weights, and for the mean-square error (MSE). These are compared to the behavior of a time-domain canceller with the same inputs through a combination of analysis and simulation. It is shown that the performance of the two algorithms can differ significantly due to the effects of block processing in the FFT. However, conditions are given under which the two implementations are essentially equivalent for white noise inputs so that the frequency-domain algorithm can be used to predict the mean, variance, time response, and MSE of the time-domain algorithm.
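
A heavily simplified Python sketch of a frequency-domain adaptive canceller (per-bin normalized LMS on non-overlapping FFT blocks; this is a generic illustration under assumed names, not the paper's exact structure or analysis):

    import numpy as np

    def fd_canceller(x, d, M, mu=0.05):
        # x: reference input, d: primary input, M: FFT block length.
        W = np.zeros(M, dtype=complex)          # adaptive weights, one per bin
        errors = []
        for i in range(0, len(x) - M + 1, M):
            X = np.fft.fft(x[i:i + M])
            D = np.fft.fft(d[i:i + M])
            E = D - W * X                       # per-bin error
            W = W + mu * np.conj(X) * E / (np.abs(X) ** 2 + 1e-12)
            errors.append(np.fft.ifft(E).real)
        return np.concatenate(errors), W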

Journal ArticleDOI
TL;DR: In this article, a very simple computational requirement for the FFT butterfly was developed, based upon the distributed arithmetic of Peled and Liu and upon an algebraic substitution of Büttner and Schüssler.
Abstract: A very simple computational requirement is developed for the FFT butterfly. Based upon the distributed arithmetic of Peled and Liu, and upon an algebraic substitution of Büttner and Schüssler, an architecture develops which uses as weighting coefficients not sin(nθ) and cos(nθ) but cos(nθ) ± sin(nθ). The resulting serial arithmetic structure is very simple to implement.
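
A small Python check of the kind of substitution referred to above (illustrative only): the twiddle product b·e^{-jθ} is formed with the weighting coefficients cos θ - sin θ and cos θ + sin θ, i.e. three real multiplications instead of four, and then used in a standard decimation-in-time butterfly:

    import numpy as np

    def butterfly_3mult(a, b, theta):
        c, s = np.cos(theta), np.sin(theta)
        t = c * (b.real + b.imag)
        bw = complex(t - b.imag * (c - s),      # Re{b * exp(-j*theta)}
                     t - b.real * (c + s))      # Im{b * exp(-j*theta)}
        return a + bw, a - bw

    a, b, th = 1 + 2j, 3 - 1j, 0.7
    ref = (a + b * np.exp(-1j * th), a - b * np.exp(-1j * th))
    print(np.allclose(butterfly_3mult(a, b, th), ref))   # True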

Journal ArticleDOI
TL;DR: This new algorithm requires fewer multiplications and about the same number of additions as the conventional FFT method for computing the two-dimensional convolution, but has the advantage that the operation of transposing the matrix of data can be avoided.
Abstract: In this paper, a fast algorithm is developed to compute two-dimensional convolutions of an array of d1 × d2 complex number points, where d2 = 2^m and d1 = 2^(m-r+1) for some 1 ≤ r ≤ m. This new algorithm requires fewer multiplications and about the same number of additions as the conventional FFT method for computing the two-dimensional convolution. It also has the advantage that the operation of transposing the matrix of data can be avoided.
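
For comparison purposes only, the conventional FFT baseline mentioned above can be sketched in a few lines of Python (a generic zero-padded 2-D fast convolution, not the paper's transposition-free algorithm):

    import numpy as np

    def conv2d_fft(x, h):
        # Linear 2-D convolution via zero-padded 2-D DFTs.
        shape = (x.shape[0] + h.shape[0] - 1, x.shape[1] + h.shape[1] - 1)
        return np.fft.ifft2(np.fft.fft2(x, s=shape) * np.fft.fft2(h, s=shape))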

Journal ArticleDOI
Henri J. Nussbaumer1
TL;DR: In this paper, a new method for the computation of multidimensional DFT's by polynomial transforms is introduced and shown to be significantly more efficient than the conventional row-column method in terms of both arithmetic operations and quantization noise.
Abstract: This paper introduces a new method for the computation of multidimensional DFT's by polynomial transforms. The method, which maps multidimensional DFT's into one-dimensional odd-time DFT's by use of inverse polynomial transforms, is shown to be significantly more efficient than the conventional row-column method from the standpoint of the number of arithmetic operations and quantization noise. The relationship between DFT and convolution algorithms using polynomial transforms is clarified and new convolution algorithms with reduced computational complexities are proposed.

Journal ArticleDOI
TL;DR: The design scheme, theory of operation, and practical techniques of a new spinner magnetometer are developed and tested, and it is found that this new magnetometer is easily adapted to digital control and automatic measurement using a microcomputer.
Abstract: The design scheme, theory of operation, and practical techniques of a new spinner magnetometer are developed and tested. In this apparatus, a pair of bevel gears is used to rotate the sample simultaneously around two orthogonal axes, and magnetic signals are picked up by a fluxgate sensor, amplified by a programmable gain amplifier, and sampled by an analogue-to-digital (A/D) converter at timing signals from a rotary encoder. Magnetic signals contain many frequency components corresponding to dipole terms, quadrupole terms, and so on, and calculation of magnetization can be done by the fast Fourier transform (FFT) technique. As there is no need to replace the sample during a measurement, this new magnetometer is easily adapted to digital control and automatic measurement using a microcomputer. These possibilities are briefly discussed.
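
A toy Python sketch of the harmonic separation mentioned above (assumed names, not the instrument's software): with samples taken at equal rotation angles over one revolution, the FFT bin at the rotation frequency isolates the dipole term from quadrupole and higher harmonics:

    import numpy as np

    def dipole_component(samples):
        # samples: fluxgate readings at equally spaced angles over one revolution.
        S = np.fft.rfft(samples)
        amplitude = 2.0 * np.abs(S[1]) / len(samples)   # first harmonic (dipole)
        phase = np.angle(S[1])
        return amplitude, phase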

Journal ArticleDOI
TL;DR: In response to an article by D. G. Korn and J. J. Lambiotte, Jr. on FFT implementations for the CDC STAR-100 vector computer, it is shown that a different algorithm combined with trigonometric tables yields execution times more than three times faster for large transforms.
Abstract: A recent article in this journal by D. G. Korn and J. J. Lambiotte, Jr. discusses implementations of the FFT algorithm on the CDC STAR-100 vector computer. The Pease algorithm is recommended in cases where only a few transforms can be performed simultaneously. We show how the use of a different algorithm and of trigonometric tables leads to more than three times faster execution times. The times for large transforms increase only about 39% if the tables are eliminated in order to save storage.
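
A minimal Python sketch of the trigonometric-table idea discussed above (a plain radix-2 recursion with precomputed twiddles; this is not the authors' STAR-100 vector code):

    import numpy as np

    def fft_with_table(x):
        N = len(x)                                            # N must be a power of two
        table = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # all twiddle factors, computed once

        def rec(a, step):
            n = len(a)
            if n == 1:
                return a
            even, odd = rec(a[::2], 2 * step), rec(a[1::2], 2 * step)
            tw = table[::step][: n // 2]                      # strided reads of the shared table
            return np.concatenate([even + tw * odd, even - tw * odd])

        return rec(np.asarray(x, complex), 1)

    x = np.random.randn(16)
    print(np.allclose(fft_with_table(x), np.fft.fft(x)))   # True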