
Showing papers on "Fast Fourier transform published in 1993"


Journal ArticleDOI
TL;DR: The FMM provides an efficient mechanism for the numerical convolution of the Green's function for the Helmholtz equation with a source distribution and can be used to radically accelerate the iterative solution of boundary-integral equations.
Abstract: A practical and complete, but not rigorous, exposition of the fast multipole method (FMM) is provided. The FMM provides an efficient mechanism for the numerical convolution of the Green's function for the Helmholtz equation with a source distribution and can be used to radically accelerate the iterative solution of boundary-integral equations. In the simple single-stage form presented here, it reduces the computational complexity of the convolution from $O(N^2)$ to $O(N^{3/2})$, where N is the dimensionality of the problem's discretization.

1,491 citations
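
For orientation, here is the brute-force computation the FMM accelerates: a direct $O(N^2)$ summation of the 3-D Helmholtz Green's function over a source distribution. This is a minimal numpy sketch; the source points, charges, and wavenumber are illustrative, not from the paper.

```python
import numpy as np

def direct_helmholtz_convolution(points, charges, k):
    """Brute-force O(N^2) convolution with the Helmholtz Green's function
    G(r) = exp(i*k*r) / (4*pi*r), the operation the single-stage FMM
    reduces to O(N^{3/2})."""
    n = len(points)
    phi = np.zeros(n, dtype=complex)
    for i in range(n):
        mask = np.arange(n) != i                  # skip the self-interaction
        r = np.linalg.norm(points[mask] - points[i], axis=1)
        phi[i] = np.sum(charges[mask] * np.exp(1j * k * r) / (4 * np.pi * r))
    return phi

rng = np.random.default_rng(0)
pts = rng.random((200, 3))                        # 200 sources in the unit cube
q = rng.standard_normal(200)
phi = direct_helmholtz_convolution(pts, q, k=2 * np.pi)
```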


Journal ArticleDOI
TL;DR: In this paper, a group of algorithms is presented generalizing the fast Fourier transform to the case of noninteger frequencies and nonequispaced nodes on the interval $[-\pi, \pi]$.
Abstract: A group of algorithms is presented generalizing the fast Fourier transform to the case of noninteger frequencies and nonequispaced nodes on the interval $[ - \pi ,\pi ]$. The schemes of this paper are based on a combination of certain analytical considerations with the classical fast Fourier transform and generalize both the forward and backward FFTs. Each of the algorithms requires $O(N\cdot \log N + N\cdot \log (1/\varepsilon ))$ arithmetic operations, where $\varepsilon $ is the precision of computations and N is the number of nodes. The efficiency of the approach is illustrated by several numerical examples.

848 citations
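
The quantity these algorithms evaluate can be written down directly. The sketch below computes the nonequispaced discrete Fourier sum naively in $O(NM)$ time, which is the baseline the $O(N\cdot \log N + N\cdot \log (1/\varepsilon ))$ schemes approximate; the node count and the 0.3 frequency offset are arbitrary illustration choices.

```python
import numpy as np

def ndft(nodes, coeffs, freqs):
    """Naive O(N*M) evaluation of f(x_j) = sum_k c_k * exp(i * w_k * x_j)
    for nonequispaced nodes x_j in [-pi, pi] and possibly noninteger
    frequencies w_k: the sum the fast generalized FFTs approximate."""
    return np.exp(1j * np.outer(nodes, freqs)) @ coeffs

rng = np.random.default_rng(1)
x = rng.uniform(-np.pi, np.pi, 256)          # nonequispaced nodes
w = np.arange(-64, 64) + 0.3                 # noninteger frequencies
c = rng.standard_normal(128) + 1j * rng.standard_normal(128)
f = ndft(x, c, w)
```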


Proceedings ArticleDOI
27 Apr 1993
TL;DR: The resulting WSOLA (waveform-similarity-based synchronized overlap-add) algorithm produces high-quality speech output, is algorithmically and computationally efficient and robust, and allows for online processing with arbitrary time-scaling factors.
Abstract: A concept of waveform similarity for tackling the problem of time-scale modification of speech is proposed. It is worked out in the context of short-time Fourier transform representations. The resulting WSOLA (waveform-similarity-based synchronized overlap-add) algorithm produces high-quality speech output, is algorithmically and computationally efficient and robust, and allows for online processing with arbitrary time-scaling factors that may be specified in a time-varying fashion and can be chosen over a wide continuous range of values.

454 citations
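
A minimal WSOLA-style sketch in numpy, not the authors' exact implementation: the window length, the search tolerance, and the plain dot-product similarity measure are assumptions. Each synthesis segment is taken from near its nominal analysis position, shifted within a tolerance to best match the natural continuation of the previously copied segment, then overlap-added.

```python
import numpy as np

def wsola(x, rate, win_len=512, tol=128):
    """Time-scale modification by waveform-similarity overlap-add.
    rate < 1 stretches the signal, rate > 1 compresses it."""
    hop = win_len // 2
    win = np.hanning(win_len)
    n_out = int(len(x) / rate)
    y = np.zeros(n_out + win_len)
    norm = np.zeros_like(y)
    prev = 0                                      # analysis position actually used
    for out_pos in range(0, n_out - win_len, hop):
        if prev + hop + win_len > len(x):
            break
        ref = x[prev + hop : prev + hop + win_len]   # natural continuation
        target = int(out_pos * rate)                 # nominal analysis position
        best, best_score = target, -np.inf
        for d in range(-tol, tol + 1):
            s = target + d
            if 0 <= s and s + win_len <= len(x):
                score = np.dot(x[s : s + win_len], ref)  # similarity measure
                if score > best_score:
                    best, best_score = s, score
        if best + win_len > len(x):
            break
        y[out_pos : out_pos + win_len] += win * x[best : best + win_len]
        norm[out_pos : out_pos + win_len] += win
        prev = best
    return y[:n_out] / np.maximum(norm[:n_out], 1e-8)

sr = 16000
t = np.arange(2 * sr) / sr
x = np.sin(2 * np.pi * (220 + 60 * t) * t)    # test chirp
y = wsola(x, rate=0.8)                        # ~25% longer, pitch preserved
```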


Journal ArticleDOI
TL;DR: In this article, the authors provided the first optimal algorithms in terms of the number of input/outputs (I/Os) required between internal memory and multiple secondary storage devices for sorting, FFT, matrix transposition, standard matrix multiplication, and related problems.
Abstract: We provide the first optimal algorithms in terms of the number of input/outputs (I/Os) required between internal memory and multiple secondary storage devices for the problems of sorting, FFT, matrix transposition, standard matrix multiplication, and related problems. Our two-level memory model is new and gives a realistic treatment of parallel block transfer, in which during a single I/O each of the $P$ secondary storage devices can simultaneously transfer a contiguous block of $B$ records. The model pertains to a large-scale uniprocessor system or parallel multiprocessor system with $P$ disks. In addition, the sorting, FFT, permutation network, and standard matrix multiplication algorithms are typically optimal in terms of the amount of internal processing time. The difficulty in developing optimal algorithms is to cope with the partitioning of memory into $P$ separate physical devices. Our algorithms' performance can be significantly better than those obtained by the well-known but nonoptimal technique of disk striping. Our optimal sorting algorithm is randomized, but practical; the probability of using more than $\ell$ times the optimal number of I/Os is exponentially small in $\ell (\log \ell) \log (M/B)$, where $M$ is the internal memory size.

353 citations


Posted Content
TL;DR: The wavelet transform as mentioned in this paper maps each $f(x)$ to its coefficients with respect to an orthogonal basis of piecewise constant functions, constructed by dilation and translation.
Abstract: This note is a very basic introduction to wavelets. It starts with an orthogonal basis of piecewise constant functions, constructed by dilation and translation. The ``wavelet transform'' maps each $f(x)$ to its coefficients with respect to this basis. The mathematics is simple and the transform is fast (faster than the Fast Fourier Transform, which we briefly explain), but approximation by piecewise constants is poor. To improve this first wavelet, we are led to dilation equations and their unusual solutions. Higher-order wavelets are constructed, and it is surprisingly quick to compute with them --- always indirectly and recursively. We comment informally on the contest between these transforms in signal processing, especially for video and image compression (including high-definition television). So far the Fourier Transform --- or its 8 by 8 windowed version, the Discrete Cosine Transform --- is often chosen. But wavelets are already competitive, and they are ahead for fingerprints. We present a sample of this developing theory.

311 citations
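
The piecewise-constant (Haar) case the note starts from fits in a few lines. Here is a sketch of the recursive averages-and-differences transform, which costs O(N) total work; the sample values are arbitrary.

```python
import numpy as np

def haar_transform(f):
    """Fast Haar wavelet transform of a length-2^k signal, computed
    recursively from averages (coarser approximation) and differences
    (detail coefficients)."""
    f = np.asarray(f, dtype=float)
    coeffs = []
    while len(f) > 1:
        avg = (f[0::2] + f[1::2]) / 2.0    # coarser approximation
        det = (f[0::2] - f[1::2]) / 2.0    # detail (wavelet) coefficients
        coeffs.append(det)
        f = avg
    coeffs.append(f)                        # overall average
    return coeffs[::-1]

x = np.array([4., 6., 10., 12., 8., 6., 5., 5.])
for level in haar_transform(x):
    print(level)
```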


Journal ArticleDOI
TL;DR: An approach to obtaining high-resolution image reconstruction from low-resolution, blurred, and noisy multiple-input frames is presented and a recursive-least-squares approach with iterative regularization is developed in the discrete Fourier transform (DFT) domain.
Abstract: An approach to obtaining high-resolution image reconstruction from low-resolution, blurred, and noisy multiple-input frames is presented. A recursive-least-squares approach with iterative regularization is developed in the discrete Fourier transform (DFT) domain. When the input frames are processed recursively, the reconstruction does not converge in general due to the measurement noise and ill-conditioned nature of the deblurring. Through the iterative update of the regularization function and the proper choice of the regularization parameter, good high-resolution reconstructions of low-resolution, blurred, and noisy input frames are obtained. The proposed algorithm minimizes the computational requirements and provides a parallel computation structure since the reconstruction is done independently for each DFT element. Computer simulations demonstrate the performance of the algorithm.

270 citations
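
A simplified, non-recursive stand-in for the paper's scheme (the frame-by-frame recursion and the iterative regularization update are omitted): a Tikhonov-regularized least-squares combination of several blurred frames, computed independently for each DFT element. The Gaussian blurs, noise level, and fixed regularization parameter are illustrative assumptions.

```python
import numpy as np

def gaussian_otf(shape, sigma):
    """Optical transfer function of a Gaussian blur of spatial std sigma."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))

def dft_domain_restore(frames, otfs, alpha):
    """Regularized LS reconstruction done independently per DFT element:
    X(k) = sum_i H_i*(k) G_i(k) / (sum_i |H_i(k)|^2 + alpha)."""
    num = np.zeros(frames[0].shape, dtype=complex)
    den = np.full(frames[0].shape, alpha, dtype=complex)
    for g, h in zip(frames, otfs):
        num += np.conj(h) * np.fft.fft2(g)     # accumulate H* G per element
        den += np.abs(h) ** 2                  # accumulate |H|^2 per element
    return np.real(np.fft.ifft2(num / den))

rng = np.random.default_rng(2)
x = rng.random((64, 64))                       # toy "high-resolution" image
otfs = [gaussian_otf(x.shape, s) for s in (1.0, 2.0)]
frames = [np.real(np.fft.ifft2(np.fft.fft2(x) * h))
          + 0.01 * rng.standard_normal(x.shape) for h in otfs]
xhat = dft_domain_restore(frames, otfs, alpha=1e-2)
```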


Journal ArticleDOI
TL;DR: In this paper, a transform decomposition algorithm was proposed to reduce the number of operations required to compute the discrete Fourier transform (DFT) when the input and output data points differ.
Abstract: Ways of efficiently computing the discrete Fourier transform (DFT) when the number of input and output data points differ are discussed. The two problems of determining whether the length of the input sequence or the length of the output sequence is reduced can be found to be duals of each other, and the same methods can, to a large extent, be used to solve both. The algorithms utilize the redundancy in the input or output to reduce the number of operations below those of the fast Fourier transform (FFT) algorithms. The usual pruning method is discussed, and an efficient algorithm, called transform decomposition, is introduced. It is based on a mixture of a standard FFT algorithm and the Horner polynomial evaluation scheme equivalent to the one in Goertzel's algorithm. It requires fewer operations and is more flexible than pruning. The algorithm works for power-of-two and prime-factor algorithms, as well as for real-input data.

256 citations
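
Goertzel's algorithm itself, the Horner-style recursion the transform-decomposition method builds on, is short enough to show. A numpy sketch that evaluates a single DFT bin in O(N) operations:

```python
import numpy as np

def goertzel(x, k):
    """Evaluate the k-th DFT bin of x with a second-order real recursion
    (one real multiply per sample), equivalent to Horner evaluation of
    the DFT polynomial."""
    n = len(x)
    w = 2.0 * np.pi * k / n
    coeff = 2.0 * np.cos(w)
    s1 = s2 = 0.0
    for sample in x:
        s1, s2 = sample + coeff * s1 - s2, s1
    return np.exp(1j * w) * s1 - s2

x = np.random.default_rng(3).standard_normal(64)
print(goertzel(x, 5))     # matches the FFT bin below
print(np.fft.fft(x)[5])
```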


Journal ArticleDOI
TL;DR: An algorithm based on a two-dimensional discrete cross correlation between subimages from different images is presented, and the reliability and accuracy is analyzed by using computer-generated speckle patterns.
Abstract: Replacing photographic recording by electronic processing has some obvious advantages. An algorithm used for electronic speckle pattern photography is presented, and the reliability and accuracy is analyzed by using computer-generated speckle patterns. The algorithm is based on a two-dimensional discrete cross correlation between subimages from different images. Subpixel accuracy is obtained by a Fourier series expansion of the discrete correlation surface. The accuracy of the algorithm was found to vary in proportion to $\sigma / [n(1 - \delta)^2]$, where $\sigma$ is the speckle size, $n$ is the subimage size, and $\delta$ is the amount of decorrelation, with negligible systematic errors. For typical values the uncertainty in the displacement is approximately 0.05 pixels. The uncertainty is found to increase with increased displacement gradients.

227 citations
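
The core of such an algorithm is easy to sketch: FFT-based cross correlation of two subimages followed by subpixel refinement of the peak. The sketch below substitutes a simple three-point parabolic fit for the paper's Fourier-series expansion of the correlation surface, and uses an integer test shift for illustration.

```python
import numpy as np

def subpixel_shift(a, b):
    """Displacement of subimage a relative to b from the peak of their
    2-D discrete cross correlation, computed with FFTs; a 3-point
    parabolic fit refines the peak location to subpixel precision."""
    c = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    ny, nx = c.shape

    def vertex(cm, c0, cp):                 # vertex of parabola through 3 samples
        d = cm - 2 * c0 + cp
        return 0.0 if d == 0 else 0.5 * (cm - cp) / d

    dy = vertex(c[(iy - 1) % ny, ix], c[iy, ix], c[(iy + 1) % ny, ix])
    dx = vertex(c[iy, (ix - 1) % nx], c[iy, ix], c[iy, (ix + 1) % nx])
    sy = iy + dy if iy <= ny // 2 else iy + dy - ny   # wrap to signed shifts
    sx = ix + dx if ix <= nx // 2 else ix + dx - nx
    return sy, sx

rng = np.random.default_rng(4)
b = rng.random((64, 64))                    # reference speckle-like subimage
a = np.roll(b, (3, -5), axis=(0, 1))        # displaced copy
print(subpixel_shift(a, b))                 # ~(3.0, -5.0)
```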


Journal ArticleDOI
TL;DR: Five limited-data computed tomography algorithms are compared and the multiplicative algebraic reconstruction technique algorithm gave the best results overall; the algebraic reconstruction technique gave the best results for very smooth objects or very noisy data.
Abstract: Five limited-data computed tomography algorithms are compared. The algorithms used are adapted versions of the algebraic reconstruction technique, the multiplicative algebraic reconstruction technique, the Gerchberg–Papoulis algorithm, a spectral extrapolation algorithm descended from that of Harris [J. Opt. Soc. Am. 54, 931–936 (1964)], and an algorithm based on the singular value decomposition technique. These algorithms were used to reconstruct phantom data with realistic levels of noise from a number of different imaging geometries. The phantoms, the imaging geometries, and the noise were chosen to simulate the conditions encountered in typical computed tomography applications in the physical sciences, and the implementations of the algorithms were optimized for these applications. The multiplicative algebraic reconstruction technique algorithm gave the best results overall; the algebraic reconstruction technique gave the best results for very smooth objects or very noisy (20-dB signal-to-noise ratio) data. My implementations of both of these algorithms incorporate a priori knowledge of the sign of the object, its extent, and its smoothness. The smoothness of the reconstruction is enforced through the use of an appropriate object model (by use of cubic B-spline basis functions and a number of object coefficients appropriate to the object being reconstructed). The average reconstruction error was 1.7% of the maximum phantom value with the multiplicative algebraic reconstruction technique of a phantom with moderate-to-steep gradients by use of data from five viewing angles with a 30-dB signal-to-noise ratio.

213 citations


Journal ArticleDOI
TL;DR: The discrete Wigner distribution was implemented for the time/frequency mapping of variations of R-R interval, blood pressure and respiratory signals and it was shown that the DWD follows well the instantaneous changes of spectral content of cardiovascular and respiratory signals which characterise the dynamics of autonomic nervous system responses.
Abstract: The discrete Wigner distribution (DWD) was implemented for the time/frequency mapping of variations of R-R interval, blood pressure and respiratory signals. The smoothed cross-DWD was defined and the modified algorithm for the smoothed auto- and cross-DWD was proposed. Spurious cross-terms were suppressed using a smoothing data window and a Gauss frequency window. The DWD is easy to implement using the FFT algorithm. Examples show that the DWD follows well the instantaneous changes of spectral content of cardiovascular and respiratory signals which characterise the dynamics of autonomic nervous system responses.

176 citations
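
A sketch of a windowed (pseudo-)Wigner distribution computed column-by-column with the FFT, in numpy. The Gaussian lag window that suppresses cross-terms, its width, and the test chirp are illustrative choices, not the paper's exact smoothing.

```python
import numpy as np

def pseudo_wigner(x, half=64):
    """Discrete pseudo-Wigner distribution of an analytic signal x: for
    each time t, FFT the windowed lag product x(t+m) * conj(x(t-m)).
    Row k corresponds to normalized frequency k / (4 * half)."""
    n = len(x)
    m = np.arange(-half, half)
    h = np.exp(-0.5 * (m / (half / 3.0)) ** 2)     # Gaussian lag window
    W = np.zeros((2 * half, n))
    for t in range(n):
        kern = np.zeros(2 * half, dtype=complex)
        ok = (t + m >= 0) & (t + m < n) & (t - m >= 0) & (t - m < n)
        kern[ok] = h[ok] * x[t + m[ok]] * np.conj(x[t - m[ok]])
        W[:, t] = np.real(np.fft.fft(np.fft.ifftshift(kern)))
    return W

t = np.arange(512)
f_inst = 0.05 + 0.2 * t / len(t)                   # rising instantaneous frequency
x = np.exp(2j * np.pi * np.cumsum(f_inst))         # analytic test chirp
W = pseudo_wigner(x)                               # ridge follows f_inst over time
```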


Proceedings ArticleDOI
02 Oct 1993
TL;DR: In this paper, a variable hysteresis band current controller is described, which achieves constant switching frequency without requiring a precise knowledge of the motor parameters and can be readily implemented in hardware.
Abstract: A novel method for implementing a variable hysteresis band current controller is described which achieves constant switching frequency without requiring a precise knowledge of the motor parameters. The controller works by using feedforward and feedback variables to create a variable hysteresis band envelope, and then compensating for the interaction between phase back-EMFs that occurs when the neutral of a three-phase motor is left floating. The controller has good dynamic and steady-state response, and its performance is substantially immune to variations in the inverter DC supply voltage and motor parameters. It can be readily implemented in hardware, and only requires a few additional components compared to a conventional hysteresis current controller. Analytical, hardware implementation, simulation, FFT (fast Fourier transform) analysis, and experimental results are presented.

Journal ArticleDOI
TL;DR: An autoregressive (AR) spectral estimation method is compared with a conventional fast Fourier transform (FFT)-based approach for this task and offers promise for enhanced spatial resolution and accuracy in ultrasonic tissue characterization and nondestructive evaluation of materials.
Abstract: The problem of estimation of mean scatterer spacing in an object containing regularly spaced structures is addressed. An autoregressive (AR) spectral estimation method is compared with a conventional fast Fourier transform (FFT)-based approach for this task. Regularly spaced structures produce a periodicity in the power spectrum of ultrasonic backscatter. This periodicity is manifested as a peak in the cepstrum. A phantom was constructed for comparison of the two methods. It contained regularly spaced nylon filaments. It also contained randomly positioned glass spheres that produced incoherent backscatter. In an experiment in which this target was interrogated using broadband ultrasound, the AR spectral estimate offered considerable improvement over the FFT when the analysis gate length was on the order of the structural dimension. Advantages included improved resolution, reduction in bias and variance of scatterer spacing estimates, and greater resistance to ringing artifacts. Data were also acquired from human liver in vivo. AR spectral estimates on human data exhibited a decreased dependence on gate length. These results offer promise for enhanced spatial resolution and accuracy in ultrasonic tissue characterization and nondestructive evaluation of materials.
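
To make the comparison concrete, here is a minimal numpy contrast of the two estimators on a synthetic record: a Yule-Walker AR spectrum versus the FFT periodogram for a short analysis gate. The model order, gate length, and two-tone test signal are assumptions for illustration, not the paper's phantom data.

```python
import numpy as np

def ar_psd(x, order, nfft=1024):
    """Autoregressive (Yule-Walker) power spectral density estimate:
    solve the normal equations from the sample autocorrelation, then
    evaluate sigma^2 / |1 - sum_k a_k e^{-j w k}|^2."""
    x = x - np.mean(x)
    r = np.correlate(x, x, mode='full')[len(x) - 1:] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])        # AR coefficients
    sigma2 = r[0] - np.dot(a, r[1:order + 1])     # prediction error power
    E = np.exp(-2j * np.pi * np.outer(np.arange(nfft // 2),
                                      np.arange(1, order + 1)) / nfft)
    return sigma2 / np.abs(1 - E @ a) ** 2

rng = np.random.default_rng(5)
n = 128                                            # short analysis gate
t = np.arange(n)
x = np.sin(0.6 * t) + np.sin(0.75 * t) + 0.5 * rng.standard_normal(n)
psd_ar = ar_psd(x, order=16)
psd_fft = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
# The AR spectrum typically resolves the two tones near 0.6 and 0.75
# rad/sample, which the short-gate periodogram merges into one lobe.
```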

Journal ArticleDOI
TL;DR: High-frequency acoustic energy between 300 and 800 Hz is associated with coronary stenosis, confirming that such high-frequency sounds are indicative of diseased arteries.
Abstract: Previous studies have indicated that, during diastole, the sounds associated with turbulent blood flow through partially occluded coronary arteries should be detectable. To detect such sounds, recordings of diastolic heart sound segments were analyzed using four signal processing techniques: the fast Fourier transform (FFT), autoregressive (AR), autoregressive moving-average (ARMA), and minimum-norm (eigenvector) methods. To further enhance the diastolic heart sounds and reduce background noise, an adaptive filter was used as a preprocessor. The power ratios of the FFT method and the poles of the AR, ARMA, and eigenvector methods were used to diagnose patients as having diseased or normal arteries using a blind protocol without prior knowledge of the actual disease states of the patients to guard against human bias. Of 80 cases, results showed that normal and abnormal records were correctly distinguished in 56 using the fast Fourier transform (FFT), in 63 using the AR, in 62 using the ARMA method, and in 67 using the eigenvector method. These results confirm that high-frequency acoustic energy between 300 and 800 Hz is associated with coronary stenosis.

Journal ArticleDOI
TL;DR: The authors present the scalability analysis of a parallel fast Fourier transform (FFT) algorithm on mesh and hypercube connected multicomputers using the isoefficiency metric and show that it is more cost-effective to implement the FFT algorithm on a hypercube rather than a mesh.
Abstract: The authors present the scalability analysis of a parallel fast Fourier transform (FFT) algorithm on mesh and hypercube connected multicomputers using the isoefficiency metric. The isoefficiency function of an algorithm architecture combination is defined as the rate at which the problem size should grow with the number of processors to maintain a fixed efficiency. It is shown that it is more cost-effective to implement the FFT algorithm on a hypercube rather than a mesh despite the fact that large scale meshes are cheaper to construct than large hypercubes. Although the scope of this work is limited to the Cooley-Tukey FFT algorithm on a few classes of architectures, the methodology can be used to study the performance of various FFT algorithms on a variety of architectures such as SIMD hypercube and mesh architectures and shared memory architecture.

01 Jan 1993
TL;DR: By using plug-in DAQ boards, you can build a lower cost measurement system as well as avoid the communication overhead of working with a stand-alone instrument and have the flexibility of configuring your measurement processing to meet your needs.
Abstract: The Fast Fourier Transform (FFT) and the power spectrum in LabVIEW® and LabWindows® are powerful tools for analyzing and measuring signals from plug-in data acquisition (DAQ) boards. For example, you can effectively acquire time-domain signals, measure the frequency content, and convert the results to real-world units and displays as shown on traditional bench-top spectrum and network analyzers. By using plug-in DAQ boards, you can build a lower cost measurement system as well as avoid the communication overhead of working with a stand-alone instrument. Plus, you have the flexibility of configuring your measurement processing to meet your needs.
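
The same processing chain is easy to reproduce outside LabVIEW. Below is a numpy sketch of a windowed, single-sided amplitude spectrum scaled to real-world units (volts), the kind of display a bench-top analyzer produces; the sampling rate, test tone, and noise level are made up for the example.

```python
import numpy as np

def amplitude_spectrum(v, fs, window=np.hanning):
    """Single-sided amplitude spectrum of a voltage record, corrected
    for the window's coherent gain so a 1 V sine reads ~1 V at its bin."""
    n = len(v)
    w = window(n)
    cg = np.sum(w) / n                          # coherent gain of the window
    spec = np.fft.rfft(v * w) / (n * cg)        # scale to peak amplitude
    spec[1:-1] *= 2                             # fold in negative frequencies
    return np.fft.rfftfreq(n, 1.0 / fs), np.abs(spec)

fs = 10000.0
t = np.arange(4000) / fs                        # 1 kHz lands on an exact bin
v = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.random.default_rng(6).standard_normal(t.size)
f, a = amplitude_spectrum(v, fs)
print(f[np.argmax(a)], a.max())                 # ~1000.0 Hz, ~1.0 V
```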

Journal ArticleDOI
02 May 1993
TL;DR: This paper presents a new method for computing the configuration-space map of obstacles that is used in motion-planning algorithms, and is particularly promising for workspaces with many and/or complicated obstacles, or when the shape of the robot is not simple.
Abstract: This paper presents a new method for computing the configuration-space map of obstacles that is used in motion-planning algorithms. The method derives from the observation that, when the robot is a rigid object that can only translate, the configuration space is a convolution of the workspace and the robot. This convolution is computed with the use of the fast Fourier transform (FFT) algorithm. The method is particularly promising for workspaces with many and/or complicated obstacles, or when the shape of the robot is not simple. It is an inherently parallel method that can significantly benefit from existing experience and hardware on the FFT.
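
The observation translates directly into code. A numpy sketch computing a binary configuration-space map as the FFT-based (circular) convolution of the obstacle map with the reflected robot footprint; the grid size, obstacle, and 3x3 robot are toy choices, and the arrays should be padded if wraparound at the borders matters.

```python
import numpy as np

def cspace_map(obstacles, robot):
    """Configuration-space obstacles for a translating rigid robot: the
    convolution of the binary obstacle map with the reflected robot
    footprint (a Minkowski sum), evaluated with 2-D FFTs. Nonzero
    response marks reference-point placements that collide."""
    ny, nx = obstacles.shape
    R = np.fft.fft2(robot[::-1, ::-1], s=(ny, nx))  # reflected kernel
    c = np.real(np.fft.ifft2(np.fft.fft2(obstacles) * R))
    return c > 0.5                                   # threshold FFT round-off

ws = np.zeros((32, 32))
ws[10:14, 12:20] = 1                                 # one rectangular obstacle
robot = np.ones((3, 3))                              # square robot footprint
blocked = cspace_map(ws, robot)                      # obstacle grown by the robot
print(int(ws.sum()), int(blocked.sum()))             # 32 -> 60 blocked cells
```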

Patent
06 Sep 1993
TL;DR: In this paper, a first inverse Fast Fourier Transform circuit (30) transforms the information in the frequency domain into the time domain; instead of being transmitted directly, the resultant signals are then amplitude limited (42) and applied to an FFT (44) to reconvert them to the frequency domain.
Abstract: A frequency-division-multiplex transmitter for carrying an OFDM signal on a large number of closely-spaced carriers receives the digital data at an input (12), applies coding such as convolutional coding (14), time interleaves (16) the signals, and assembles them frame-by-frame in a matrix (18). The data is converted to parallel form by a shift register (18), frequency interleaved, (24) differentially encoded (26), and quadrature phase modulated (28) on several hundred or more closely-spaced carriers. A first inverse Fast Fourier Transform circuit (30) transforms this information in the frequency domain into the time domain. Instead of being transmitted directly, the resultant signals are then amplitude limited (42) and applied to an FFT (44) to reconvert them to the frequency domain. The phases of the wanted signals are then re-set, and the amplitudes of the signals near the edges of the band reduced, in correction circuits (46). The resultant signals are applied to a second inverse FFT (48) for transmission. The output of the second inverse FFT contains smaller power peaks than the output of the first inverse FFT (30). The output is converted to serial form by a shift register (32), a guard interval added (34), and the resultant converted (36) to analogue form.
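
The transmitter chain in this patent is essentially a clip-and-correct loop around FFT blocks. The numpy sketch below reproduces its skeleton; the QPSK mapping, the 2x-RMS clipping level, and full phase re-setting are illustrative assumptions, and the band-edge amplitude reduction step is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 512                                        # carriers / FFT size
sym = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n))   # QPSK on each carrier

x = np.fft.ifft(sym)                           # first inverse FFT (30)
clip = 2.0 * np.sqrt(np.mean(np.abs(x) ** 2))  # amplitude limit (42)
x_lim = np.minimum(np.abs(x), clip) * np.exp(1j * np.angle(x))

S = np.fft.fft(x_lim)                          # forward FFT (44)
S = np.abs(S) * np.exp(1j * np.angle(sym))     # re-set wanted phases (46)
y = np.fft.ifft(S)                             # second inverse FFT (48)

papr = lambda s: np.max(np.abs(s)) / np.sqrt(np.mean(np.abs(s) ** 2))
print(papr(x), papr(y))                        # peak-to-RMS typically drops
```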

Patent
22 Oct 1993
TL;DR: In this article, a method and apparatus for detecting and enabling the clearance of high impedance faults (HIFs) in an electrical transmission or distribution system is presented, where current in at least one phase is monitored in real time by sensors.
Abstract: The present invention features a method and apparatus for detecting and enabling the clearance of high impedance faults (HIFs) in an electrical transmission or distribution system. Current in at least one phase in a distribution system is monitored in real time by sensors. Analog current signature information is then digitized for processing by a digital computer. Zero crossings are identified and current maxima and minima located. The first derivatives of the maxima and minima are computed and a modified Fast Fourier Transform (FFT) is then performed to convert time domain to frequency domain information. The transformed data is formatted and normalized and then applied to a trained neural network, which provides an output trigger signal when an HIF condition is probable. The trigger signal is made available to either a network administrator for manual intervention, or directly to switchgear to deactivate an affected portion of the network. The inventive method may be practiced using either conventional computer hardware and software or dedicated custom hardware such as a VLSI chip.


Journal ArticleDOI
TL;DR: Two-block preconditioners, related to those proposed by T. Chan and J. Olkin for square nonsingular Toeplitz-block systems, are derived and analyzed, and it is shown that, for important classes of T, the singular values of the preconditioned matrix are clustered around one.
Abstract: Discretized two-dimensional deconvolution problems arising, e.g., in image restoration and seismic tomography, can be formulated as least squares computations, $\min \| {b - Tx} \|_2 $, where T is often a large-scale rectangular Toeplitz-block matrix. The authors consider solving such block least squares problems by the preconditioned conjugate gradient algorithm using square nonsingular circulant-block and related preconditioners, constructed from the blocks of the rectangular matrix T. Preconditioning with such matrices allows efficient implementation using the one-dimensional or two-dimensional fast Fourier transform (FFT). Two-block preconditioners, related to those proposed by T. Chan and J. Olkin for square nonsingular Toeplitz-block systems, are derived and analyzed. It is shown that, for important classes of T, the singular values of the preconditioned matrix are clustered around one. This extends the authors' earlier work on preconditioners for Toeplitz least squares iterations for one-dimensional problems.
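
A runnable miniature of the idea for the square symmetric positive definite case (a sketch, not the authors' rectangular block construction): conjugate gradients on a Toeplitz system in which both the matrix-vector product (via circulant embedding) and T. Chan's optimal circulant preconditioner are applied with FFTs. The test column $t_k = 1/(1+k^2)$ is an arbitrary positive definite choice.

```python
import numpy as np

def chan_circulant(col):
    """Eigenvalues of T. Chan's optimal circulant approximation to a
    symmetric Toeplitz matrix with first column col; applying the
    preconditioner then costs one FFT/IFFT pair."""
    n = len(col)
    k = np.arange(n)
    first = ((n - k) * col + k * np.r_[col[0], col[:0:-1]]) / n
    return np.fft.fft(first)

def pcg_toeplitz(col, b, tol=1e-10, maxit=200):
    """Preconditioned CG for an SPD Toeplitz system T x = b; the Toeplitz
    product uses a 2n circulant embedding, so every step is O(n log n)."""
    n = len(b)
    Femb = np.fft.fft(np.r_[col, 0.0, col[:0:-1]])   # 2n circulant embedding
    Tmul = lambda v: np.real(np.fft.ifft(Femb * np.fft.fft(v, 2 * n)))[:n]
    lam = chan_circulant(col)
    Minv = lambda v: np.real(np.fft.ifft(np.fft.fft(v) / lam))
    x = np.zeros(n)
    r = b - Tmul(x)
    z = Minv(r)
    p = z.copy()
    for _ in range(maxit):
        Tp = Tmul(p)
        alpha = (r @ z) / (p @ Tp)
        x += alpha * p
        r_new = r - alpha * Tp
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            break
        z_new = Minv(r_new)
        p = z_new + ((r_new @ z_new) / (r @ z)) * p
        r, z = r_new, z_new
    return x

n = 256
col = 1.0 / (1.0 + np.arange(n) ** 2)   # SPD Toeplitz test matrix
x = pcg_toeplitz(col, np.ones(n))
```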

Patent
02 Mar 1993
TL;DR: In this paper, a multidimensional ECG processing and display system was proposed for an electrocadiographic (ECG) monitoring system, where a two-dimensional matrix is decomposed using singular value decomposition (SVD) to obtain its corresponding singular values and singular vectors, a compressed form of the matrix.
Abstract: The multidimensional ECG processing and display system (60) of the present invention may be used with an electrocardiographic (ECG) monitoring system. Input ECG data (61) from multiple, sequential time intervals is collected and formatted into a two-dimensional matrix using the processing function (62). The two-dimensional matrix is decomposed using singular value decomposition (SVD) to obtain its corresponding singular values and singular vectors, a compressed form of the matrix. The singular vectors are analyzed and filtered to identify and enhance signal components of interest using the subspace processing function (63) and the signal processing function (64). Selected singular vectors are transformed into their frequency domain representations by the Fast Fourier Transform (FFT), or related techniques.

Journal ArticleDOI
TL;DR: Brown and Puckette as discussed by the authors used a modified version of the constant Q transform to track the fundamental frequency of extremely rapid musical passages, where the frequency changes are rapid and continuous.
Abstract: The constant Q transform described recently [J. C. Brown and M. S. Puckette, ‘‘An efficient algorithm for the calculation of a constant Q transform,’’ J. Acoust. Soc. Am. 92, 2698–2701 (1992)] has been adapted so that it is suitable for tracking the fundamental frequency of extremely rapid musical passages. For this purpose the calculation described previously has been modified so that it is constant frequency resolution rather than constant Q for lower frequency bins. This modified calculation serves as the input for a fundamental frequency tracker similar to that described by Brown [J. C. Brown, ‘‘Musical fundamental frequency tracking using a pattern recognition method,’’ J. Acoust. Soc. Am. 92, 1394–1402 (1992)]. Once the fast Fourier transform (FFT) bin corresponding to the fundamental frequency is chosen by the frequency tracker, an approximation is used for the phase change in the FFT for a time advance of one sample to obtain an extremely precise value for this frequency. Graphical examples are given for musical passages by a violin executing vibrato and glissando where the fundamental frequency changes are rapid and continuous.
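
The final refinement step is a small phase-vocoder-style calculation that can be shown directly: once a bin is chosen, compare the phases of two DFTs taken one sample apart; a single tone advances in phase by $2\pi f/f_s$ per sample, which pins the frequency far more precisely than the bin spacing. A numpy sketch, with window, FFT size, and test tone as illustrative choices:

```python
import numpy as np

def precise_frequency(x, fs, nfft=4096):
    """Refine a coarse FFT-bin frequency estimate using the phase change
    of the windowed DFT for a one-sample time advance (unambiguous for
    tones below the Nyquist frequency)."""
    w = np.hanning(nfft)
    X0 = np.fft.rfft(x[:nfft] * w)
    X1 = np.fft.rfft(x[1:nfft + 1] * w)      # same window, one sample later
    k = np.argmax(np.abs(X0))                # coarse estimate: strongest bin
    dphi = np.angle(X1[k] * np.conj(X0[k]))  # phase advance per sample
    return dphi * fs / (2 * np.pi)

fs = 44100.0
t = np.arange(8192) / fs
x = np.sin(2 * np.pi * 441.3 * t)
print(precise_frequency(x, fs))              # ~441.3 Hz; bin width is ~10.8 Hz
```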

Journal ArticleDOI
TL;DR: A numerical simulation technique is presented that combines the advantages of the discrete Fourier transform (DFT) algorithm and a digital filtering scheme to generate continuous long-duration multivariate random processes.
Abstract: A numerical simulation technique is presented that combines the advantages of the discrete Fourier transform (DFT) algorithm and a digital filtering scheme to generate continuous long-duration multivariate random processes. This approach offers the simple convenience of conventional fast Fourier transform (FFT) based simulation schemes; however, it does not suffer from the drawback of the large computer memory requirement that in the past has precluded the generation of long-duration time series utilizing FFT-based approaches. Central to this technique is a simulation of a large number of time series segments by utilizing the FFT algorithm, which are subsequently synthesized by means of a digital filter to provide the desired duration of simulated processes. This approach offers computational efficiency, convenience, and robustness. The computer code based on the present methodology does not require users to have experience in determining optimal model parameters, unlike the procedures based on parametric models. The effectiveness of this methodology is demonstrated by means of examples concerning the simulation of a multivariate random wind field and the spatial variation of wave kinematics in a random sea with prescribed spectral descriptions. The simulated data showed excellent agreement with the target spectral characteristics. The proposed technique has immediate applications to the simulation of real-time processes.
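
The FFT building block of such schemes is compact enough to show: a spectral-representation simulation that assigns each frequency line an amplitude from the target one-sided PSD and an independent uniform random phase, then inverse-FFTs one segment. The toy low-pass spectrum and segment length are assumptions, and the paper's digital filter for splicing segments into long records is not shown.

```python
import numpy as np

def simulate_from_psd(psd, n, dt, rng):
    """Generate one segment of a zero-mean random process with one-sided
    target PSD psd(f): line amplitude sqrt(2*S(f)*df) and an independent
    random phase per frequency, then an inverse real FFT."""
    f = np.fft.rfftfreq(n, dt)
    df = f[1]
    a = np.sqrt(2.0 * psd(f) * df)
    a[0] = a[-1] = 0.0                        # zero mean; drop the Nyquist line
    X = (n / 2.0) * a * np.exp(1j * rng.uniform(0, 2 * np.pi, f.size))
    return np.fft.irfft(X, n)                 # one simulated segment

rng = np.random.default_rng(8)
S = lambda f: 1.0 / (1.0 + (f / 5.0) ** 4)    # toy low-pass target PSD
x = simulate_from_psd(S, n=4096, dt=0.01, rng=rng)
print(np.var(x))                              # ~ integral of S(f) df (about 5.6)
```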

Journal ArticleDOI
18 May 1993
TL;DR: In this article, the authors present criteria for choosing the DFT window and derive a constraint for the window coefficients to ensure that quantization error does not influence the estimate of the amplitude of a sine wave from the main lobe of its DFT.
Abstract: The discrete Fourier transform (DFT) can be used to compute the signal-to-noise ratio (SNR) and harmonic distortion of a waveform recorder. When the data record contains a non-integer number of cycles of the sine wave, energy leaks from the sine wave and its harmonics to adjacent frequencies. A.L. Benetazzo et al. (1992) describe a windowed DFT method for computing the RMS value of a sine wave from the magnitude of the main lobe of its DFT and recommend the use of minimum energy windows. We present criteria for choosing the DFT window. A constraint for the window coefficients is derived to ensure that quantization error does not influence the estimate of the amplitude of a sine wave from the main lobe of its DFT.

Journal ArticleDOI
TL;DR: The experiments show that wavelet encoding by selective excitation of wavelet-shaped profiles is feasible, and there is no discernible degradation in image quality due to the wavelet encoding.
Abstract: Reconstructions of images from wavelet-encoded data are shown. The method of MR wavelet encoding in one dimension was proposed previously by Weaver and Healy. The technique relies on selective excitation with wavelet-shaped profiles generated by special radio-frequency waveforms. The result of the imaging sequence is a set of inner products of the image with orthogonal functions of the wavelet basis. Inversion of the wavelet data is accomplished with an efficient algorithm with processing times comparable with those of a fast Fourier transform. The experiments show that wavelet encoding by selective excitation of wavelet-shaped profiles is feasible. Wavelet-encoded images are compared with phase-encoded images that have a similar signal-to-noise ratio, and there is no discernible degradation in image quality due to the wavelet encoding. Potential benefits of wavelet encoding are briefly discussed.

Journal ArticleDOI
TL;DR: In this paper, double Fourier series are used for the horizontal discretization in a limited-area model (LAM), extending the spectral technique commonly used in global atmospheric models.
Abstract: The spectral technique is frequently used for the horizontal discretization in global atmospheric models. This paper presents a method where double Fourier series are used in a limited-area model (LAM). The method uses fast Fourier transforms (FFT) in both horizontal directions and takes into account time-dependent boundary conditions. The basic idea is to extend the time-dependent boundary fields into a zone outside the integration area in such a way that periodic fields are obtained. These fields in the extension zone and the forecasted fields inside the integration area are connected by use of a narrow relaxation zone along the boundaries of the limited area. The extension technique is applied to the shallow-water equations. A simple explicit (leapfrog) integration is shown to give results that are almost identical to the hemispherical forecast used as boundary fields. A nonlinear normal-mode initialization scheme developed in the framework of the spectral formulation is shown to work satisfactorily.

Journal ArticleDOI
TL;DR: A new method for coherent wide-band direction finding of far-field sources impinging on a two-dimensional array with a known arbitrary geometry, based on linear interpolation of the array manifold at a given frequency, f.
Abstract: We present a new method for coherent wide-band direction finding of far-field sources impinging on a two-dimensional array with a known arbitrary geometry. This method, termed array manifold interpolation (AMI), is based on obtaining the array manifold at a desired frequency f0 by linear interpolation of the array manifold at a given frequency f. We use a separable representation of the array manifold vector, which separates the array geometry and the frequency from the direction θ, in order to derive the required array manifold interpolation matrix. The AMI method is practical, computationally efficient, and robust. For the special case of a uniform circular array, we present a fast implementation of the AMI method, which utilizes the FFT algorithm.


Proceedings ArticleDOI
27 Apr 1993
TL;DR: An image registration algorithm which achieves subpixel accuracy using a frequency-domain technique is proposed, which is efficient compared with the conventional approaches based on interpolation or correlations in the spatial/frequency domain.
Abstract: An image registration algorithm which achieves subpixel accuracy using a frequency-domain technique is proposed. This approach is efficient compared with the conventional approaches based on interpolation or correlations in the spatial/frequency domain. This approach can achieve subpixel accuracy registration even when images contain aliasing errors due to undersampling. The FFTs (fast Fourier transforms) of the images have computational complexities smaller than the interpolation or convolution computations by orders of magnitude. The accuracy of the proposed approach is demonstrated through computer simulations for different types of images.
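
The heart of such frequency-domain registration is phase correlation, which is only a few lines. A sketch that recovers an integer test shift; the paper's subpixel extension, which exploits the phase plane and handles aliasing, is not reproduced here.

```python
import numpy as np

def phase_correlation(a, b, eps=1e-12):
    """Frequency-domain registration: the normalized cross-power spectrum
    of two translated images is a pure phase ramp, so its inverse FFT is
    (ideally) a delta function at the translation."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + eps                   # keep only the phase
    c = np.real(np.fft.ifft2(F))
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    ny, nx = c.shape
    return (iy if iy <= ny // 2 else iy - ny,   # wrap to signed shifts
            ix if ix <= nx // 2 else ix - nx)

rng = np.random.default_rng(9)
b = rng.random((128, 128))
a = np.roll(b, (7, -11), axis=(0, 1))
print(phase_correlation(a, b))             # (7, -11)
```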

Journal ArticleDOI
TL;DR: The most efficient of the reviewed methods, which uses the Zak transform as an operational calculus, performs Gabor analysis and synthesis transforms with complexity of the same order as a fast Fourier transform (FFT).
Abstract: Equations for the continuous-parameter Gabor transform are presented and converted to finite discrete form suitable for digital computation. A comparative assessment of the computational complexity of several algorithms that execute the finite discrete equations is given, with results in the range O ( P 2 ) to O ( P log, P), where P is the number of input data points being transformed. The most efficient of the reviewed methods, which uses the Zak transform as an operational calculus, performs Gabor analysis and synthesis transforms with complexity of the same order as a fast Fourier transform (FFT).