
Showing papers on "Fast Fourier transform published in 2003"


Journal ArticleDOI
TL;DR: This paper presents an interpolation method for the nonuniform FT that is optimal in the min-max sense of minimizing the worst-case approximation error over all signals of unit norm and indicates that the proposed method easily generalizes to multidimensional signals.
Abstract: The fast Fourier transform (FFT) is used widely in signal processing for efficient computation of the FT of finite-length signals over a set of uniformly spaced frequency locations. However, in many applications, one requires nonuniform sampling in the frequency domain, i.e., a nonuniform FT. Several papers have described fast approximations for the nonuniform FT based on interpolating an oversampled FFT. This paper presents an interpolation method for the nonuniform FT that is optimal in the min-max sense of minimizing the worst-case approximation error over all signals of unit norm. The proposed method easily generalizes to multidimensional signals. Numerical results show that the min-max approach provides substantially lower approximation errors than conventional interpolation methods. The min-max criterion is also useful for optimizing the parameters of interpolation kernels such as the Kaiser-Bessel function.

1,251 citations
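
The core idea behind such fast approximations, an oversampled FFT followed by interpolation at the desired nonuniform frequencies, can be illustrated with a short NumPy sketch. This version uses plain linear interpolation rather than the paper's min-max-optimal (or Kaiser-Bessel) kernel, and the signal, frequency locations, and oversampling factor are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64            # signal length
K = 4             # oversampling factor
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Nonuniform frequency locations (radians/sample) at which X(w) is wanted.
omega = np.sort(rng.uniform(0, 2 * np.pi, 50))

# Exact nonuniform DFT, X(w) = sum_n x[n] exp(-i w n), for reference.
n = np.arange(N)
X_exact = np.exp(-1j * np.outer(omega, n)) @ x

# Approximation: K-times oversampled FFT followed by linear interpolation.
X_over = np.fft.fft(x, K * N)                        # samples at w_k = 2*pi*k/(K*N)
w_grid = 2 * np.pi * np.arange(K * N + 1) / (K * N)  # append 2*pi for wrap-around
X_grid = np.append(X_over, X_over[0])
X_approx = (np.interp(omega, w_grid, X_grid.real)
            + 1j * np.interp(omega, w_grid, X_grid.imag))

err = np.linalg.norm(X_approx - X_exact) / np.linalg.norm(X_exact)
print(f"relative error, linear interpolation, K={K}: {err:.2e}")
```

Better interpolation kernels, such as the Kaiser-Bessel function or the min-max interpolator derived in the paper, reduce this error by orders of magnitude at the same oversampling factor.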


Journal ArticleDOI
TL;DR: The sliding DFT process for spectrum analysis was presented and shown to be more efficient than the popular Goertzel (1958) algorithm for sample-by-sample DFT bin computations, and a modified sliding DFT structure is proposed that provides improved computational efficiency.
Abstract: The sliding DFT process for spectrum analysis was presented and shown to be more efficient than the popular Goertzel (1958) algorithm for sample-by-sample DFT bin computations. The sliding DFT provides computational advantages over the traditional DFT or FFT for many applications requiring successive output calculations, especially when only a subset of the DFT output bins are required. Methods for output stabilization as well as time-domain data windowing by means of frequency-domain convolution were also discussed. A modified sliding DFT algorithm, called the sliding Goertzel DFT, was proposed to further reduce the computational workload. We start our sliding DFT discussion by providing a review of the Goertzel algorithm and use its behavior as a yardstick to evaluate the performance of the sliding DFT technique. We examine stability issues regarding the sliding DFT implementation as well as review the process of frequency-domain convolution to accomplish time-domain windowing. Finally, a modified sliding DFT structure is proposed that provides improved computational efficiency.

630 citations
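
The sliding-DFT recurrence itself is compact: each new input sample updates a DFT bin with one complex multiply and two additions. Below is a minimal NumPy sketch for a single bin, without the stabilizing damping factor the article discusses for finite-precision implementations.

```python
import numpy as np

def sliding_dft_bin(x, N, k):
    """Track DFT bin k of the most recent N samples of x, one output per sample.

    Uses the recurrence X_k[n] = (X_k[n-1] + x[n] - x[n-N]) * exp(j*2*pi*k/N),
    i.e., one complex multiply per update instead of a full N-point transform.
    Samples before the start of x are treated as zero.
    """
    twiddle = np.exp(2j * np.pi * k / N)
    Xk = 0.0 + 0.0j
    out = []
    for n, xn in enumerate(x):
        x_old = x[n - N] if n >= N else 0.0   # sample leaving the window
        Xk = (Xk + xn - x_old) * twiddle
        out.append(Xk)
    return np.array(out)

# Check the final output against a direct FFT of the last N samples.
rng = np.random.default_rng(1)
N, k = 32, 5
x = rng.standard_normal(4 * N)
sdft = sliding_dft_bin(x, N, k)
print(np.allclose(sdft[-1], np.fft.fft(x[-N:])[k]))   # True
```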


Journal ArticleDOI
TL;DR: A new technique based on a random shifting, or jigsaw, algorithm is proposed, which does not require the use of phase keys for decrypting data and shows comparable or superior robustness to blind decryption.
Abstract: A number of methods have recently been proposed in the literature for the encryption of two-dimensional information by use of optical systems based on the fractional Fourier transform. Typically, these methods require random phase screen keys for decrypting the data, which must be stored at the receiver and must be carefully aligned with the received encrypted data. A new technique based on a random shifting, or jigsaw, algorithm is proposed. This method does not require the use of phase keys. The image is encrypted by juxtaposition of sections of the image in fractional Fourier domains. The new method has been compared with existing methods and shows comparable or superior robustness to blind decryption. Optical implementation is discussed, and the sensitivity of the various encryption keys to blind decryption is examined.

434 citations


Journal ArticleDOI
TL;DR: A tool for accelerating iterative reconstruction of field-corrected MR images: a novel time-segmented approximation to the MR signal equation that uses a min-max formulation to derive the temporal interpolator.
Abstract: In magnetic resonance imaging, magnetic field inhomogeneities cause distortions in images that are reconstructed by conventional fast Fourier transform (FFT) methods. Several noniterative image reconstruction methods are used currently to compensate for field inhomogeneities, but these methods assume that the field map that characterizes the off-resonance frequencies is spatially smooth. Recently, iterative methods have been proposed that can circumvent this assumption and provide improved compensation for off-resonance effects. However, straightforward implementations of such iterative methods suffer from inconveniently long computation times. This paper describes a tool for accelerating iterative reconstruction of field-corrected MR images: a novel time-segmented approximation to the MR signal equation. We use a min-max formulation to derive the temporal interpolator. Speedups of around 60 were achieved by combining this temporal interpolator with a nonuniform fast Fourier transform, with normalized root mean squared approximation errors of 0.07%. The proposed method provides fast, accurate, field-corrected image reconstruction even when the field map is not smooth.

402 citations
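
The time-segmentation idea can be sketched independently of the MR specifics: the off-resonance factor exp(-i*omega*t) is approximated by interpolating between a small number of fixed segment times, so the expensive part of the signal model collapses to a handful of FFT-type operations. The sketch below fits the temporal interpolation coefficients by ordinary least squares over a set of assumed field-map values; the paper derives a min-max-optimal interpolator instead, and all sizes and values here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 512                                          # readout time samples
t = np.linspace(0.0, 0.025, T)                   # acquisition times (s), assumed
omega = 2 * np.pi * rng.uniform(-80, 80, 400)    # off-resonance (rad/s) per voxel, assumed

L = 6                                            # number of time segments
tau = np.linspace(t[0], t[-1], L)                # segment break points

# Exact off-resonance factors E[i, j] = exp(-1j * omega_j * t_i).
E = np.exp(-1j * np.outer(t, omega))
# Basis evaluated at the segment times: G[l, j] = exp(-1j * omega_j * tau_l).
G = np.exp(-1j * np.outer(tau, omega))

# Least-squares temporal interpolator A (T x L): minimize ||A @ G - E||_F.
A = np.linalg.lstsq(G.conj().T, E.conj().T, rcond=None)[0].conj().T

err = np.linalg.norm(A @ G - E) / np.linalg.norm(E)
print(f"relative approximation error with L={L} segments: {err:.2e}")
```

In the full reconstruction, each column of the interpolator multiplies one (NU)FFT of the image weighted by exp(-i*omega*tau_l), which is where the reported speedup comes from.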


Journal ArticleDOI
TL;DR: An algorithm that solves the phase unwrapping problem using a combination of Fourier techniques is presented; its execution time is equivalent to that of eight fast Fourier transforms, and it is stable against noise and residues present in the wrapped phase.
Abstract: A wide range of interferometric techniques recover phase information that is mathematically wrapped on the interval (-π,π]. Obtaining the true unwrapped phase is a longstanding problem. We present an algorithm that solves the phase unwrapping problem using a combination of Fourier techniques. The execution time for our algorithm is equivalent to the computation time required for performing eight fast Fourier transforms, and the algorithm is stable against noise and residues present in the wrapped phase. We have extended the algorithm to handle data of arbitrary size. We expect the state of the art of existing interferometric applications, including the possibility for real-time phase recovery, to benefit from our algorithm.

389 citations
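
The paper's specific eight-FFT algorithm is not reproduced here, but the flavor of Fourier-domain unwrapping can be seen in the classical unweighted least-squares approach: build a discrete Poisson equation from the wrapped phase gradients and solve it with one FFT pair. A rough NumPy sketch under a periodic-boundary assumption (practical implementations use DCTs or mirror-extended data for proper boundary handling):

```python
import numpy as np

def wrap(p):
    """Wrap phase values into (-pi, pi]."""
    return np.angle(np.exp(1j * p))

def unwrap_ls_periodic(psi):
    """Unweighted least-squares phase unwrapping via an FFT Poisson solve.

    psi is the wrapped phase; periodic boundaries are assumed. The result
    matches the wrapped gradients of psi in the least-squares sense, up to
    an additive constant.
    """
    M, N = psi.shape
    # Wrapped forward differences (estimates of the true phase gradient).
    dx = wrap(np.roll(psi, -1, axis=0) - psi)
    dy = wrap(np.roll(psi, -1, axis=1) - psi)
    # Divergence of the wrapped gradient field (discrete Laplacian right-hand side).
    rho = (dx - np.roll(dx, 1, axis=0)) + (dy - np.roll(dy, 1, axis=1))
    # Solve the periodic discrete Poisson equation in the Fourier domain.
    kx = np.fft.fftfreq(M)[:, None]
    ky = np.fft.fftfreq(N)[None, :]
    denom = 2 * np.cos(2 * np.pi * kx) + 2 * np.cos(2 * np.pi * ky) - 4
    denom[0, 0] = 1.0                     # avoid dividing the DC term by zero
    phi_hat = np.fft.fft2(rho) / denom
    phi_hat[0, 0] = 0.0                   # pin down the free additive constant
    return np.real(np.fft.ifft2(phi_hat))

# Smooth synthetic phase: wrap it, then unwrap it again.
y, x = np.mgrid[0:128, 0:128] / 128.0
true_phase = 12 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)
est = unwrap_ls_periodic(wrap(true_phase))
print(np.std(est - true_phase))           # small for smooth, nearly periodic data
```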


Journal ArticleDOI
TL;DR: To preserve the important property of iterative subspace methods of regularizing the solution by the number of iterations, the model weights are incorporated into the operators; this choice can be understood in terms of the singular vectors of the weighted transform.
Abstract: The Radon transform (RT) suffers from the typical problems of loss of resolution and aliasing that arise as a consequence of incomplete information, including limited aperture and discretization. Sparseness in the Radon domain is a valid and useful criterion for supplying this missing information, equivalent somehow to assuming smooth amplitude variation in the transition between known and unknown (missing) data. Applying this constraint while honoring the data can become a serious challenge for routine seismic processing because of the very limited processing time available, in general, per common midpoint. To develop methods that are robust, easy to use and flexible to adapt to different problems we have to pay attention to a variety of algorithms, operator design, and estimation of the hyperparameters that are responsible for the regularization of the solution. In this paper, we discuss fast implementations for several varieties of RT in the time and frequency domains. An iterative conjugate gradient algorithm with fast Fourier transform multiplication is used in all cases. To preserve the important property of iterative subspace methods of regularizing the solution by the number of iterations, the model weights are incorporated into the operators. This turns out to be of particular importance, and it can be understood in terms of the singular vectors of the weighted transform. The iterative algorithm is stopped according to a general cross validation criterion for subspaces. We apply this idea to several known implementations and compare results in order to better understand differences between, and merits of, these algorithms.

351 citations
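
The computational kernel behind "conjugate gradient with fast Fourier transform multiplication" is that applying a Toeplitz operator to a vector is a convolution, which circulant embedding turns into an O(N log N) FFT product. A generic sketch of that building block (not the specific time- or frequency-domain Radon operators discussed in the paper):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec_fft(c, r, x):
    """Compute T @ x in O(N log N), where T is the Toeplitz matrix with first
    column c and first row r, by embedding T in a 2N x 2N circulant matrix."""
    n = len(c)
    circ = np.concatenate([c, [0.0], r[1:][::-1]])   # first column of the circulant
    xz = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(xz))
    return y[:n]

rng = np.random.default_rng(0)
n = 256
c = rng.standard_normal(n)                                 # first column
r = np.concatenate([[c[0]], rng.standard_normal(n - 1)])   # first row
x = rng.standard_normal(n)

print(np.allclose(toeplitz_matvec_fft(c, r, x), toeplitz(c, r) @ x))   # True
```

Inside a conjugate gradient solver, the forward operator and its adjoint are applied in this way at every iteration, which is what keeps the cost per common midpoint acceptable.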


Journal ArticleDOI
TL;DR: A complete analysis is given for a seven-level converter (three dc sources), where it is shown that for a range of the modulation index m_I, the switching angles can be chosen to produce the desired fundamental V_1 = m_I(s4V_dc/π) while making the fifth and seventh harmonics identically zero.
Abstract: In this work, a method is given to compute the switching angles in a multilevel converter to produce the required fundamental voltage while at the same time cancel out specified higher order harmonics. Specifically, a complete analysis is given for a seven-level converter (three dc sources), where it is shown that for a range of the modulation index m_I, the switching angles can be chosen to produce the desired fundamental V_1 = m_I(s4V_dc/π) while making the fifth and seventh harmonics identically zero.

324 citations
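
The switching-angle conditions form a small transcendental system: the sum of cosines of the three angles sets the fundamental, and the corresponding sums at five and seven times the angles are driven to zero. The sketch below solves it with a generic numerical root finder rather than the complete analytical characterization given in the paper; the modulation index and initial guess are assumptions, and solutions only exist for part of the m_I range.

```python
import numpy as np
from scipy.optimize import fsolve

def she_equations(theta, m_i, s=3):
    """Selective-harmonic-elimination equations for a seven-level converter
    (s = 3 dc sources):
        cos(t1) + cos(t2) + cos(t3)       = s*m_i   (fundamental V_1 = m_i*(s*4*V_dc/pi))
        cos(5*t1) + cos(5*t2) + cos(5*t3) = 0       (5th harmonic nulled)
        cos(7*t1) + cos(7*t2) + cos(7*t3) = 0       (7th harmonic nulled)
    """
    t = np.asarray(theta)
    return [np.cos(t).sum() - s * m_i,
            np.cos(5 * t).sum(),
            np.cos(7 * t).sum()]

m_i = 0.8                                   # modulation index (assumed feasible)
theta = fsolve(she_equations, x0=np.radians([10.0, 30.0, 60.0]), args=(m_i,))

# A physically valid solution needs 0 < t1 < t2 < t3 < pi/2; check before use.
ok = 0 < theta[0] < theta[1] < theta[2] < np.pi / 2
print(np.degrees(theta), "valid ordering:", ok)
```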


Proceedings ArticleDOI
26 Jul 2003
TL;DR: A system that can synthesize an image by conventional means, perform the FFT, filter the image, and finally apply the inverse FFT in well under 1 second for a 512 by 512 image is demonstrated.
Abstract: The Fourier transform is a well known and widely used tool in many scientific and engineering fields. The Fourier transform is essential for many image processing techniques, including filtering, manipulation, correction, and compression. As such, the computer graphics community could benefit greatly from such a tool if it were part of the graphics pipeline. As of late, computer graphics hardware has become amazingly cheap, powerful, and flexible. This paper describes how to utilize the current generation of cards to perform the fast Fourier transform (FFT) directly on the cards. We demonstrate a system that can synthesize an image by conventional means, perform the FFT, filter the image, and finally apply the inverse FFT in well under 1 second for a 512 by 512 image. This work paves the way for performing complicated, real-time image processing as part of the rendering pipeline.

322 citations
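
The filtering pipeline itself (forward FFT, pointwise multiplication by a filter, inverse FFT) is conventional; the paper's contribution is performing these stages directly on graphics hardware. A CPU-side NumPy sketch of the same three stages, here with an assumed Gaussian low-pass filter on a synthetic 512 by 512 image:

```python
import numpy as np

# Synthetic 512 x 512 "rendered" image (random texture as a stand-in).
rng = np.random.default_rng(0)
img = rng.random((512, 512))

# 1) Forward 2-D FFT.
F = np.fft.fft2(img)

# 2) Gaussian low-pass filter built directly in the frequency domain.
fy = np.fft.fftfreq(img.shape[0])[:, None]
fx = np.fft.fftfreq(img.shape[1])[None, :]
sigma = 0.05                              # cutoff (cycles/pixel), arbitrary
H = np.exp(-(fx ** 2 + fy ** 2) / (2 * sigma ** 2))

# 3) Pointwise multiplication and inverse FFT back to the spatial domain.
filtered = np.real(np.fft.ifft2(F * H))
print(filtered.shape, float(filtered.mean()))
```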


Journal ArticleDOI
TL;DR: In this work the method is applied to study the rheology of concentrated colloidal suspensions, and results are compared with conventional SD; a faster approximate method is also presented and its accuracy discussed.
Abstract: A new Stokesian dynamics (SD) algorithm for Brownian suspensions is presented. The implementation is based on the recently developed accelerated Stokesian dynamics (ASD) simulation method [Sierou and Brady, J. Fluid Mech. 448, 115 (2001)] for non-Brownian particles. As in ASD, the many-body long-range hydrodynamic interactions are computed using fast Fourier transforms, and the resistance matrix is inverted iteratively, in order to keep the computational cost O(N log N). A fast method for computing the Brownian forces acting on the particles is applied by splitting them into near- and far-field contributions to avoid the O(N^3) computation of the square root of the full resistance matrix. For the near-field part, representing the forces as a sum of pairwise contributions reduces the cost to O(N); and for the far-field part, a Chebyshev polynomial approximation for the inverse of the square root of the mobility matrix results in an O(N^1.25 log N) computational cost. The overall scaling of the method is thus roughly of O(N^1.25 log N) and makes possible the simulation of large systems, which are necessary for studying long-time dynamical properties and/or polydispersity effects in colloidal dispersions. In this work the method is applied to study the rheology of concentrated colloidal suspensions, and results are compared with conventional SD. Also, a faster approximate method is presented and its accuracy discussed.

243 citations
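
The Chebyshev device used for the far-field Brownian forces is generic: to apply the inverse square root of a symmetric positive-definite matrix to a vector without factoring the matrix, expand 1/sqrt(x) in Chebyshev polynomials on an interval containing the spectrum and evaluate the expansion with matrix-vector products only. A small dense-matrix sketch (the eigenvalue bounds are taken as known; in the actual method the matrix-vector product would itself be the FFT-accelerated mobility multiply):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def inv_sqrt_apply(M, v, lam_min, lam_max, degree=40):
    """Approximate M^{-1/2} @ v via a Chebyshev expansion of 1/sqrt(x) on
    [lam_min, lam_max]; only matrix-vector products with M are used."""
    a, b = lam_min, lam_max
    coeffs = C.chebinterpolate(lambda s: 1.0 / np.sqrt(0.5 * (b - a) * (s + 1) + a),
                               degree)
    # Evaluate sum_k c_k T_k(A) v with A = (2M - (a+b)I)/(b-a) mapped onto [-1, 1].
    A_mv = lambda z: (2.0 * (M @ z) - (a + b) * z) / (b - a)
    T_prev, T_curr = v, A_mv(v)                      # T_0 v and T_1 v
    result = coeffs[0] * T_prev + coeffs[1] * T_curr
    for k in range(2, degree + 1):
        T_next = 2.0 * A_mv(T_curr) - T_prev         # three-term recurrence
        result = result + coeffs[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return result

# Test on a random, well-conditioned SPD matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
M = B @ B.T + 200 * np.eye(200)
lam = np.linalg.eigvalsh(M)
v = rng.standard_normal(200)

w, Q = np.linalg.eigh(M)
exact = Q @ ((Q.T @ v) / np.sqrt(w))
approx = inv_sqrt_apply(M, v, lam[0], lam[-1])
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))   # small
```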


Journal ArticleDOI
Yiteng Huang1, Jacob Benesty1
TL;DR: Simulations show that the frequency-domain adaptive approaches perform as well as or better than their time-domain counterparts and the cross-relation (CR) batch method in most practical cases.
Abstract: We extend our previous studies on adaptive blind channel identification from the time domain into the frequency domain. A class of frequency-domain adaptive approaches, including the multichannel frequency-domain LMS (MCFLMS) and constrained/unconstrained normalized multichannel frequency-domain LMS (NMCFLMS) algorithms, are proposed. By utilizing the fast Fourier transform (FFT) and overlap-save techniques, the convolution and correlation operations that are computationally intensive when performed by the time-domain multichannel LMS (MCLMS) or multichannel Newton (MCN) methods are efficiently implemented in the frequency domain, and the MCFLMS is rigorously derived. In order to achieve independent and uniform convergence for each filter coefficient and, therefore, accelerate the overall convergence, the coefficient updates are properly normalized at each iteration, and the NMCFLMS algorithms are developed. Simulations show that the frequency-domain adaptive approaches perform as well as or better than their time-domain counterparts and the cross-relation (CR) batch method in most practical cases. It is remarkable that for a three-channel acoustic system with long impulse responses (256 taps in each channel) excited by a male speech signal, only the proposed NMCFLMS algorithm succeeds in determining a reasonably accurate channel estimate, which is good enough for applications such as time delay estimation.

207 citations
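
The computational core of these frequency-domain adaptive filters is FFT-based block convolution using the overlap-save technique. On its own, for a single channel with a fixed filter, it looks like the sketch below; the adaptive algorithms add constrained coefficient updates per block on top of this machinery.

```python
import numpy as np
from scipy.signal import lfilter

def overlap_save_filter(h, x, block=256):
    """Filter x with the FIR filter h using overlap-save FFT convolution."""
    L = len(h)
    N = block + L - 1                      # FFT size per block
    H = np.fft.rfft(h, N)
    y = np.zeros(len(x))
    xpad = np.concatenate([np.zeros(L - 1), x, np.zeros(block)])
    for start in range(0, len(x), block):
        seg = xpad[start:start + N]                        # overlapping input segment
        yseg = np.fft.irfft(np.fft.rfft(seg, N) * H, N)
        keep = min(block, len(x) - start)
        y[start:start + keep] = yseg[L - 1:L - 1 + keep]   # drop circularly wrapped samples
    return y

rng = np.random.default_rng(0)
h = rng.standard_normal(64)                # stand-in for a room impulse response
x = rng.standard_normal(5000)
print(np.allclose(overlap_save_filter(h, x), lfilter(h, [1.0], x)))   # True
```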


Journal ArticleDOI
TL;DR: A new bound is introduced for the peak of the continuous envelope of an OFDM signal, based on the maximum of its corresponding oversampled sequence, to derive a closed-form probability upper bound for the complementary cumulative distribution function of the peak-to-mean envelope power ratio of uncoded OFDM signals for sufficiently large numbers of subcarriers.
Abstract: Orthogonal frequency-division multiplexing (OFDM) introduces large amplitude variations in time, which can result in significant signal distortion in the presence of nonlinear amplifiers. We introduce a new bound for the peak of the continuous envelope of an OFDM signal, based on the maximum of its corresponding oversampled sequence; it is shown to be very tight as the oversampling rate increases. The bound is then used to derive a closed-form probability upper bound for the complementary cumulative distribution function of the peak-to-mean envelope power ratio of uncoded OFDM signals for sufficiently large numbers of subcarriers. As another application of the bound for oversampled sequences, we propose tight relative error bounds for computation of the peak power using two main methods: the oversampled inverse fast Fourier transform and the method introduced for coded systems based on minimum distance decoding of the code.
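
The oversampled-IFFT peak estimate that these bounds are built on is easy to reproduce: zero-pad the subcarrier vector at the Nyquist edge and take a longer IFFT so the time samples approach the continuous envelope. A short sketch with QPSK subcarriers (the subcarrier count and oversampling factor are arbitrary):

```python
import numpy as np

def papr_db(X, oversample=4):
    """Peak-to-average power ratio (dB) of one OFDM symbol with subcarrier
    vector X, estimated from an `oversample`-times oversampled IFFT."""
    N = len(X)
    # Zero-pad in the middle of the spectrum (at the Nyquist edge) so the
    # extra time samples interpolate the continuous-time envelope.
    Xpad = np.concatenate([X[:N // 2],
                           np.zeros((oversample - 1) * N, complex),
                           X[N // 2:]])
    p = np.abs(np.fft.ifft(Xpad)) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N = 256                                    # number of subcarriers
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

print(f"Nyquist-rate PAPR : {papr_db(qpsk, oversample=1):.2f} dB")
print(f"4x oversampled    : {papr_db(qpsk, oversample=4):.2f} dB")
```

The gap between the two printed values is exactly what the bound quantifies: Nyquist-rate samples can miss the true envelope peak, and the discrepancy shrinks as the oversampling rate grows.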

Journal ArticleDOI
Necati Gülünay1
TL;DR: In this article, a data adaptive interpolation method is designed and applied in the Fourier transform domain (f-k or f-kx-ky) for spatially aliased data.
Abstract: A data adaptive interpolation method is designed and applied in the Fourier transform domain (f-k or f-kx-ky) for spatially aliased data. The method makes use of fast Fourier transforms and their cyclic properties, thereby offering a significant cost advantage over other techniques that interpolate aliased data. The algorithm designs and applies interpolation operators in the f-k (or f-kx-ky) domain to fill zero traces inserted in the data in the t-x (or t-x-y) domain at locations where interpolated traces are needed. The interpolation operator is designed by manipulating the lower frequency components of the stretched transforms of the original data. This operator is derived assuming that it is the same operator that fills periodically zeroed traces of the original data but at the lower frequencies, and corresponds to the f-k (or f-kx-ky) domain version of the well-known f-x (or f-x-y) domain trace interpolators. The method is applicable to 2D and 3D data recorded sparsely in a horizontal plane. The most comm...

Journal ArticleDOI
TL;DR: In this article, a non-equispaced fast Fourier transform (FFT) is proposed for computerized tomography reconstruction, which is similar to the algorithms of Dutt and Rokhlin and Beylkin.
Abstract: In this article we describe a non-equispaced fast Fourier transform. It is similar to the algorithms of Dutt and Rokhlin and Beylkin but is based on an exact Fourier series representation. This results in a greatly simplified analysis and increased flexibility. The latter can be used to achieve more efficiency. Accuracy and efficiency of the resulting algorithm are illustrated by numerical examples. In the second part of the article the non-equispaced FFT is applied to the reconstruction problem in Computerized Tomography. This results in a different view of the gridding method of O’Sullivan and in a new ultra fast reconstruction algorithm. The new reconstruction algorithm outperforms the filtered backprojection by a speedup factor of up to 100 on standard hardware while still producing excellent reconstruction quality.

Journal ArticleDOI
TL;DR: This paper presents a novel methodology for inferring the queuing delay distributions across internal links in the network based solely on unicast, end-to-end measurements and develops a new estimation methodology based on a recently proposed nonparametric, wavelet-based density estimation method.
Abstract: The substantial overhead of performing internal network monitoring motivates techniques for inferring spatially localized information about performance using only end-to-end measurements. In this paper, we present a novel methodology for inferring the queuing delay distributions across internal links in the network based solely on unicast, end-to-end measurements. The major contributions are: 1) we formulate a measurement procedure for estimation and localization of delay distribution based on end-to-end packet pairs; 2) we develop a simple way to compute maximum likelihood estimates (MLEs) using the expectation-maximization (EM) algorithm; 3) we develop a new estimation methodology based on recently proposed nonparametric, wavelet-based density estimation method; and 4) we optimize the computational complexity of the EM algorithm by developing a new fast Fourier transform implementation. Realistic network simulations are carried out using network-level simulator ns-2 to demonstrate the accuracy of the estimation procedure.
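
The FFT typically enters such estimators through convolutions of discretized delay distributions, since the end-to-end delay of a path is the sum of its per-link delays. A minimal sketch of that convolution kernel alone, with hypothetical per-link delay PMFs (this is only the building block, not the EM estimator itself):

```python
import numpy as np

rng = np.random.default_rng(0)
bins = 64                                  # discretized delay bins

# Two assumed per-link delay PMFs on a common discrete grid.
p1 = rng.random(bins); p1 /= p1.sum()
p2 = rng.random(bins); p2 /= p2.sum()

# End-to-end delay PMF = convolution of the per-link PMFs.
direct = np.convolve(p1, p2)                               # O(bins^2)
via_fft = np.fft.irfft(np.fft.rfft(p1, 2 * bins) * np.fft.rfft(p2, 2 * bins),
                       2 * bins)[:2 * bins - 1]            # O(bins log bins)
print(np.allclose(direct, via_fft))                        # True
```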

Journal ArticleDOI
TL;DR: A novel approach to harmonic and interharmonic analysis, based on "subspace" methods, is proposed; the min-norm harmonic retrieval method is an example of such high-resolution eigenstructure-based methods.
Abstract: Modern frequency power converters generate a wide spectrum of harmonic components. Large converter systems can also generate noncharacteristic harmonics and interharmonics. Standard tools of harmonic analysis based on the Fourier transform assume that only harmonics are present and the periodicity intervals are fixed, while periodicity intervals in the presence of interharmonics are variable and very long. A novel approach to harmonic and interharmonic analysis, based on the "subspace" methods, is proposed. The min-norm harmonic retrieval method is an example of such high-resolution eigenstructure-based methods. The Prony method as applied for signal analysis was also tested for this purpose. Neither high-resolution method shows the disadvantages of the traditional tools, and both allow exact estimation of the interharmonic frequencies. To investigate the methods, several experiments were performed using simulated signals, current waveforms at the output of a simulated frequency converter, and current waveforms at the output of an industrial frequency converter. For comparison, similar experiments were repeated using the fast Fourier transform (FFT). The comparison proved the superiority of the new methods. However, their computation is much more complex than the FFT.
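
A compact way to see what an eigenstructure ("subspace") method buys over the plain FFT is the MUSIC pseudospectrum, a close relative of the min-norm estimator: eigendecompose a sample autocorrelation matrix, keep the noise subspace, and search for frequencies whose steering vectors are nearly orthogonal to it. A hedged NumPy sketch with two closely spaced tones, standing in for a harmonic and a nearby interharmonic (this is generic MUSIC, not the exact min-norm or Prony formulations tested in the paper):

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
fs = 1000.0
n = np.arange(2000)
# Two close spectral lines plus noise (frequencies and levels are assumptions).
x = (np.sin(2 * np.pi * 250.0 * n / fs)
     + 0.7 * np.sin(2 * np.pi * 257.0 * n / fs)
     + 0.1 * rng.standard_normal(n.size))

# Sample autocorrelation matrix from overlapping length-m snapshots.
m, p = 100, 4              # subspace dimension m; p = 2 real tones -> 4 exponentials
snaps = np.lib.stride_tricks.sliding_window_view(x, m)
R = snaps.T @ snaps / snaps.shape[0]

# Noise subspace: eigenvectors belonging to the smallest eigenvalues.
w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
En = V[:, :m - p]

# MUSIC pseudospectrum: large where the steering vector is orthogonal to En.
freqs = np.linspace(200, 300, 2001)
a = np.exp(-2j * np.pi * np.outer(freqs / fs, np.arange(m)))   # steering vectors
pseudo = 1.0 / np.sum(np.abs(a.conj() @ En) ** 2, axis=1)

pk, _ = find_peaks(pseudo)
top2 = pk[np.argsort(pseudo[pk])[-2:]]
print(np.sort(freqs[top2]))               # expected near 250 Hz and 257 Hz
```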

Journal ArticleDOI
TL;DR: A novel split-radix fast Fourier transform pipeline architecture design is presented; the pipeline is repartitioned to balance the latency between complex multiplication and the butterfly operation by using carry-save addition, and the number of complex multipliers is minimized via a bit-inverse and bit-reverse data scheduling scheme.
Abstract: This paper presents a novel split-radix fast Fourier transform (SRFFT) pipeline architecture design. A mapping methodology has been developed to obtain a regular and modular pipeline for the split-radix algorithm. The pipeline is repartitioned to balance the latency between complex multiplication and the butterfly operation by using carry-save addition. The number of complex multipliers is minimized via a bit-inverse and bit-reverse data scheduling scheme. The same design methodology can also be applied to obtain regular and modular pipelines for other Cooley-Tukey-based algorithms. For an N (= 2^n)-point FFT, the requirements are log_4 N - 1 multipliers, 4 log_4 N complex adders, and memory of size N - 1 complex words for data reordering. The initial latency is N + 2 log_2 N clock cycles. On average, it completes an N-point FFT in N clock cycles. From post-layout simulations, the maximum clock rate is 150 MHz (75 MHz) at 3.3 V (2.7 V) and 25°C (100°C) using a 0.35-μm cell library from Avant!. A 64-point SRFFT pipeline design has been implemented and consumes 507 mW at 100 MHz, 3.3 V, and 25°C. Compared with a radix-2^2 FFT implementation, the power consumption is reduced by 15% and the speed is improved by 14.5%.
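
The split-radix recursion that the pipeline implements in hardware can be written in a few lines of software: the even-indexed samples pass through one half-size DFT, the samples at indices 4n+1 and 4n+3 through two quarter-size DFTs, and the results are combined with two twiddle factors per output group. A plain recursive NumPy sketch for reference; it makes no attempt to model the pipeline stages, carry-save adders, or memory scheduling described above.

```python
import numpy as np

def srfft(x):
    """Recursive split-radix FFT of a length-2^n complex array (reference code)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    if N == 2:
        return np.array([x[0] + x[1], x[0] - x[1]])
    U = srfft(x[0::2])                   # N/2-point DFT of even-indexed samples
    Z = srfft(x[1::4])                   # N/4-point DFT of x[4n+1]
    Zp = srfft(x[3::4])                  # N/4-point DFT of x[4n+3]
    k = np.arange(N // 4)
    w1 = np.exp(-2j * np.pi * k / N) * Z         # W_N^k  * Z_k
    w3 = np.exp(-2j * np.pi * 3 * k / N) * Zp    # W_N^3k * Z'_k
    X = np.empty(N, dtype=complex)
    X[k] = U[k] + (w1 + w3)
    X[k + N // 2] = U[k] - (w1 + w3)
    X[k + N // 4] = U[k + N // 4] - 1j * (w1 - w3)
    X[k + 3 * N // 4] = U[k + N // 4] + 1j * (w1 - w3)
    return X

rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
print(np.allclose(srfft(x), np.fft.fft(x)))   # True
```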

Journal ArticleDOI
TL;DR: A Bayesian Fast Fourier Transform approach (BFFTA) for modal updating is presented which uses the statistical properties of the Fast Fourier transform to obtain not only the optimal values of the updated modal parameters but also their associated uncertainties, calculated from their joint probability distribution.
Abstract: The problem of identification of the modal parameters of a structural model using measured ambient response time histories is addressed. A Bayesian Fast Fourier Transform approach (BFFTA) for modal updating is presented which uses the statistical properties of the Fast Fourier transform (FFT) to obtain not only the optimal values of the updated modal parameters but also their associated uncertainties, calculated from their joint probability distribution. Calculation of the uncertainties of the identified modal parameters is very important when one plans to proceed with the updating of a theoretical finite element model based on modal estimates. The proposed approach requires only one set of response data in contrast to many of the existing frequency-based approaches which require averaging. It is found that the updated PDF can be well approximated by a Gaussian distribution centred at the optimal parameters at which the posterior PDF is maximized. Examples using simulated data are presented to illustrate ...

Journal ArticleDOI
TL;DR: In this article, the authors proposed antireflective boundary conditions (BCs) for deblurring and detecting the regularization parameters in the presence of noise, which can be related to the algebra of the matrices that can be simultaneously diagonalized by the (fast) sine transform DST I.
Abstract: In a recent work Ng, Chan, and Tang introduced reflecting (Neumann) boundary conditions (BCs) for blurring models and proved that the resulting choice leads to fast algorithms for both deblurring and detecting the regularization parameters in the presence of noise. The key point is that Neumann BC matrices can be simultaneously diagonalized by the (fast) cosine transform DCT III. Here we propose antireflective BCs that can be related to $\tau$ structures, i.e., to the algebra of the matrices that can be simultaneously diagonalized by the (fast) sine transform DST I. We show that, in the generic case, this is a more natural modeling whose features are (a) a reduced analytical error since the zero (Dirichlet) BCs lead to discontinuity at the boundaries, the reflecting (Neumann) BCs lead to C^0 continuity at the boundaries, while our proposal leads to C^1 continuity at the boundaries; (b) fast numerical algorithms in real arithmetic for both deblurring and estimating regularization parameters. Finally, simple yet significant 1D and 2D numerical evidence is presented and discussed.

Journal ArticleDOI
TL;DR: This work improves well-known fast algorithms for the discrete spherical Fourier transform with a computational complexity of O(N^2 log^2 N), and presents, for the first time, a fast algorithm for scattered data on the sphere.

Journal ArticleDOI
TL;DR: A new robust magnetotelluric data processing algorithm is described, involving Siegel estimation on the basis of a repeated median (RM) algorithm for maximum protection against the influence of outliers and large errors.
Abstract: SUMMARY A new robust magnetotelluric (MT) data processing algorithm is described, involving Siegel estimation on the basis of a repeated median (RM) algorithm for maximum protection against the influence of outliers and large errors. The spectral transformation is performed by means of a fast Fourier transformation followed by segment coherence sorting. To remove outliers and gaps in the time domain, an algorithm of forward autoregression prediction is applied. The processing technique is tested using two 7 day long synthetic MT time-series prepared within the framework of the COMDAT processing software comparison project. The first test contains pure MT signals, whereas in the second test the same signal is superimposed on different types of noise. To show the efficiency of the algorithm some examples of real MT data processing are also presented.

Journal ArticleDOI
TL;DR: This paper presents an algebraic characterization of the important class of discrete cosine and sine transforms as decomposition matrices of certain regular modules associated with four series of Chebyshev polynomials.
Abstract: It is known that the discrete Fourier transform (DFT) used in digital signal processing can be characterized in the framework of the representation theory of algebras, namely, as the decomposition matrix for the regular module ${\mathbb{C}}[Z_n] = {\mathbb{C}}[x]/(x^n - 1)$. This characterization provides deep insight into the DFT and can be used to derive and understand the structure of its fast algorithms. In this paper we present an algebraic characterization of the important class of discrete cosine and sine transforms as decomposition matrices of certain regular modules associated with four series of Chebyshev polynomials. Then we derive most of their known algorithms by pure algebraic means. We identify the mathematical principle behind each algorithm and give insight into its structure. Our results show that the connection between algebra and digital signal processing is stronger than previously understood.
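
One concrete manifestation of the DFT-DCT relationship the paper formalizes is the classical trick of computing a DCT-II through an FFT of a symmetrically extended sequence. A short numerical check against SciPy's reference DCT (this is a well-known identity, not one of the paper's algebraically derived algorithms):

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
N = 128
x = rng.standard_normal(N)

# DCT-II via a length-2N FFT of the even (mirrored) extension [x, reversed(x)].
k = np.arange(N)
Y = np.fft.fft(np.concatenate([x, x[::-1]]))
dct_via_fft = np.real(np.exp(-1j * np.pi * k / (2 * N)) * Y[:N])

print(np.allclose(dct_via_fft, dct(x, type=2)))   # True
```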

Proceedings ArticleDOI
01 Dec 2003
TL;DR: A new noise variance and SNR estimation algorithm for a 2 × 2 MIMO wireless OFDM system as defined in the IST-STINGRAY project is presented and shows good results as long as the delay spread of the channel is small enough compared to the OFDM symbol period.
Abstract: A new noise variance and SNR estimation algorithm for a 2 × 2 MIMO wireless OFDM system as defined in the IST-STINGRAY project is presented. The SNR information is used to adapt parameters or reconfigure parts of the transmitter. The noise variance estimation algorithm uses only 2 OFDM training symbols from each transmitting antenna and the FFT output signals at the receiver. It does not require knowledge of the channel coefficients. Then, using the channel coefficient estimates given by a channel estimator and the estimate of the noise variance, the SNR is computed. The algorithm's performance is measured through Monte-Carlo simulations on a variety of channel models and compared to those of an MMSE algorithm using perfect channel estimates. The normalized MSE of the obtained noise variance estimate shows good results as long as the delay spread of the channel is small enough compared to the OFDM symbol period.
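
The underlying estimator can be sketched for a single antenna pair: if the same training symbol is transmitted twice over a channel that stays fixed across the two symbols, the difference of the two FFT outputs contains only noise, so half of its average power estimates the per-subcarrier noise variance with no channel knowledge. A simplified single-stream NumPy sketch (the actual algorithm handles the 2 × 2 antenna structure and the STINGRAY frame format; all parameter values here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                     # subcarriers
noise_var = 0.05                           # true per-subcarrier noise variance

# Known training symbol (QPSK) and an arbitrary frequency-selective channel.
T = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
H = np.fft.fft(rng.standard_normal(8) * np.exp(-0.5 * np.arange(8)), N)

def receive(symbol):
    """FFT output for one received training symbol (idealized flat model)."""
    noise = np.sqrt(noise_var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    return H * symbol + noise

Y1, Y2 = receive(T), receive(T)            # the same training symbol sent twice

# The channel term cancels in the difference; each sample has variance 2*sigma^2.
sigma2_hat = np.mean(np.abs(Y1 - Y2) ** 2) / 2
print(f"estimated noise variance: {sigma2_hat:.4f} (true {noise_var})")

# SNR then follows once a channel estimate is available (here from the same symbols).
H_hat = 0.5 * (Y1 + Y2) / T
print(f"estimated average SNR: {10 * np.log10(np.mean(np.abs(H_hat) ** 2) / sigma2_hat):.1f} dB")
```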

Journal ArticleDOI
TL;DR: This work presents a numerical algorithm for computationally correcting the effect of material dispersion on OCT reflectance data for homogeneous and stratified media with broad spectral bandwidths and highly dispersive media or thick objects.
Abstract: The resolution of optical coherence tomography (OCT) often suffers from blurring caused by material dispersion. We present a numerical algorithm for computationally correcting the effect of material dispersion on OCT reflectance data for homogeneous and stratified media. This is experimentally demonstrated by correcting the image of a polydimethylsiloxane microfluidic structure and of glass slides. The algorithm can be implemented using the fast Fourier transform. With broad spectral bandwidths and highly dispersive media or thick objects, dispersion correction becomes increasingly important.

Journal ArticleDOI
TL;DR: Experimental results show that root mean square errors of the normal velocity reconstruction for a point-driven vibrator over 200-2700 Hz average less than 20% for two small, concentric patch surfaces 0.4 cm apart.
Abstract: Nearfield acoustical holography (NAH) requires the measurement of the pressure field over a complete surface in order to recover the normal velocity on a nearby concentric surface, the latter generally coincident with a vibrator. Patch NAH provides a major simplification by eliminating the need for complete surface pressure scans: only a small area needs to be scanned to determine the normal velocity on the corresponding (small area) concentric patch on the vibrator. The theory of patch NAH is based on (1) an analytic continuation of the patch pressure which provides a spatially tapered aperture extension of the field and (2) a decomposition of the transfer function (pressure to velocity and/or pressure to pressure) between the two surfaces using the singular value decomposition (SVD) for general shapes and the fast Fourier transform (FFT) for planar surfaces. Inversion of the transfer function is stabilized using Tikhonov regularization and the Morozov discrepancy principle. Experimental results show that root mean square errors of the normal velocity reconstruction for a point-driven vibrator over 200-2700 Hz average less than 20% for two small, concentric patch surfaces 0.4 cm apart. Reconstruction of the active normal acoustic intensity was also successful, with less than 30% error over the frequency band.

Journal ArticleDOI
TL;DR: A new algorithm for the fast computation of discrete sums $f(y_j) := \sum_{k=1}^N \alpha_k K(y_j-x_k)$ based on the recently developed fast Fourier transform (FFT) at nonequispaced knots is developed.
Abstract: We develop a new algorithm for the fast computation of discrete sums $f(y_j) := \sum_{k=1}^N \alpha_k K(y_j-x_k)$ (j =1, . . ., M) based on the recently developed fast Fourier transform (FFT) at nonequispaced knots. Our algorithm, in particular our regularization procedure, is simply structured and can be easily adapted to different kernels K. Our method utilizes the widely known FFT and can consequently incorporate advanced FFT implementations. In summary, it requires ${\cal O} (N \log N +M)$ arithmetic operations. We prove error estimates to obtain clues about the choice of the involved parameters and present numerical examples in one and two dimensions.

Journal ArticleDOI
TL;DR: In this paper, the stabilized biconjugate gradient fast Fourier transform (BCGS-FFT) method is applied to simulate electromagnetic scattering from large inhomogeneous objects embedded in a planarly layered medium.
Abstract: A newly developed iterative method, the stabilized biconjugate gradient fast Fourier transform (BCGS-FFT) method, is applied to simulate electromagnetic scattering from large inhomogeneous objects embedded in a planarly layered medium. In this fast solver, the weak-form formulation is applied to obtain a less singular discretization of the volume electric field integral equation. Several techniques are utilized to speed up the dyadic Green's function evaluation. To accelerate the operation of the dyadic Green's function on an induced current (i.e., the "Green's operation"), the Green's function is split into convolutional and correlational components so that the FFT can be applied. The CPU time and memory cost of this BCGS-FFT method are O(N log N) and O(N), respectively, where N is the number of unknowns, significantly more efficient than the method of moments (MoM). As a result, this method is capable of solving large-scale electromagnetic scattering problems in a planarly layered background. A large-scale scattering problem in a layered medium with more than three million unknowns has been solved on a Sun Ultra 60 workstation with 1.2 GBytes memory.

Journal ArticleDOI
TL;DR: An FFT-based algorithm has been successfully implemented in Interactive Data Language (IDL) and added as two user functions to the ENVI image processing software package; results show that the accuracy of the resulting registration is quite good compared to current manual methods.
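
Although the paper's exact implementation is not shown here, FFT-based registration of translated images is commonly done by phase correlation: the normalized cross-power spectrum of the two images has an inverse FFT that peaks at the translation offset. A minimal NumPy sketch, assuming a pure circular shift between the images:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the (row, col) translation mapping image b onto image a from
    the phase of the cross-power spectrum (integer-pixel accuracy)."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12             # keep only the phase
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret peaks past the midpoint as negative shifts.
    return tuple(int(p) - s if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.random((256, 256))
shifted = np.roll(img, shift=(17, -23), axis=(0, 1))
print(phase_correlation_shift(shifted, img))   # (17, -23)
```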

Proceedings ArticleDOI
15 Dec 2003
TL;DR: Results show that the parallel implementation of 2-D FFT achieves virtually linear speed-up and real-time performance for large matrix sizes, and an FPGA-based parametrisable environment based on the developed parallel 2-D FFT architecture is presented as a solution for a frequency-domain image filtering application.
Abstract: Applications based on Fast Fourier Transform (FFT) such as signal and image processing require high computational power, plus the ability to experiment with algorithms. Reconfigurable hardware devices in the form of Field Programmable Gate Arrays (FPGAs) have been proposed as a way of obtaining high performance at an economical price. At present, however, users must program FPGAs at a very low level and have a detailed knowledge of the architecture of the device being used. To try to reconcile the dual requirements of high performance and ease of development, this paper reports on the design and realisation of a High Level framework for the implementation of 1-D and 2-D FFTs for real-time applications. Results show that the parallel implementation of 2-D FFT achieves virtually linear speed-up and real-time performance for large matrix sizes. Finally, an FPGA-based parametrisable environment based on the developed parallel 2-D FFT architecture is presented as a solution for frequency-domain image filtering application.

Journal ArticleDOI
TL;DR: The effectiveness of vibration-based methods in damage detection of a typical highway structure is investigated, and two types of full-scale concrete structures subjected to fatigue loads are studied: Portland cement concrete pavements on grade, and a simply supported prestressed concrete beam.
Abstract: The effectiveness of vibration-based methods in damage detection of a typical highway structure is investigated. Two types of full-scale concrete structures subjected to fatigue loads are studied: (1) Portland cement concrete pavements on grade; and (2) a simply supported prestressed concrete beam. Fast Fourier transform (FFT) and continuous wavelet transform (CWT) are used in the analysis of the structures' dynamic response to impact, and results from both techniques are compared. Both FFT and CWT can identify which frequency components exist in a signal. However, only the wavelet transform can show when a particular frequency occurs. Results of this research are such that FFT can detect the progression of damage in the beam but not in the slab. In contrast, the CWT analysis yielded a clear difference between the initial and damaged states for both structures. These findings confirm the conclusions of previous studies conducted on small-scale specimens that wavelet analysis has a great potential in the damage detection of concrete. The study also demonstrates that the approach is applicable to full-scale components of sizes similar or close to actual in-service structures.

Journal ArticleDOI
TL;DR: It is demonstrated that the complete LU decomposition of the matrix system from a single array element can be used as a highly effective block-diagonal preconditioner on the larger array matrix system.
Abstract: Presented in this paper is a fast method to accurately model finite arrays of arbitrary three-dimensional elements. The proposed technique, referred to as the array decomposition method (ADM), exploits the repeating features of finite arrays and the free-space Green's function to assemble a nonsymmetric block-Toeplitz matrix system. The Toeplitz property is used to significantly reduce storage requirements and allows the fast Fourier transform (FFT) to be applied in accelerating the matrix-vector product operations of the iterative solution process. Each element of the array is modeled using the finite element-boundary integral (FE-BI) technique for rigorous analysis. Consequently, we demonstrate that the complete LU decomposition of the matrix system from a single array element can be used as a highly effective block-diagonal preconditioner on the larger array matrix system. This rigorous method is compared to the standard FE-BI technique for several tapered-slot antenna (TSA) arrays and is demonstrated to generate the same accuracy with a fraction of the storage and solution time.