
Showing papers on "Fast Fourier transform published in 2011"


Journal ArticleDOI
TL;DR: In this paper, an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver is used to generate displacements and velocities following first- or second-order Lagrangian perturbation theory (2LPT).
Abstract: We discuss a new algorithm to generate multi-scale initial conditions with multiple levels of refinements for cosmological 'zoom-in' simulations. The method uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). The new algorithm achieves rms relative errors of the order of 10^-4 for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing. An optional hybrid multi-grid and Fast Fourier Transform (FFT) based scheme is introduced which has identical Fourier-space behaviour to traditional approaches. Using a suite of re-simulations of a galaxy cluster halo our real-space-based approach is found to reproduce correlation functions, density profiles, key halo properties and subhalo abundances with per cent level accuracy. Finally, we generalize our approach for two-component baryon and dark-matter simulations and demonstrate that the power spectrum evolution is in excellent agreement with linear perturbation theory. For initial baryon density fields, it is suggested to use the local Lagrangian approximation in order to generate a density field for mesh-based codes that is consistent with the Lagrangian perturbation theory instead of the current practice of using the Eulerian linearly scaled densities.

564 citations
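
For background, the single-grid baseline that this multi-scale method refines amounts to filtering white noise with the square root of a power spectrum in Fourier space. The sketch below (Python/NumPy) illustrates that baseline only; the grid size, the toy power spectrum, and the function name are illustrative assumptions, and the paper's adaptive multi-grid refinement and 2LPT displacements are not modelled.

```python
import numpy as np

def gaussian_field_from_white_noise(n, power_spectrum, seed=0):
    """Realize a periodic Gaussian random field on an n^3 grid by filtering
    white noise with sqrt(P(k)) in Fourier space -- the single-grid baseline
    that the multi-scale method above refines."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n, n))
    k = 2.0 * np.pi * np.fft.fftfreq(n)               # wavenumbers (box units)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    transfer = np.sqrt(power_spectrum(kmag))           # filter amplitude
    return np.real(np.fft.ifftn(np.fft.fftn(noise) * transfer))

# Toy power spectrum P(k) ~ k^-2, with the k = 0 (mean) mode removed.
toy_pk = lambda k: np.where(k > 0, 1.0, 0.0) / np.maximum(k, 1e-12) ** 2
delta = gaussian_field_from_white_noise(64, toy_pk)
print(delta.shape, delta.std())
```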


Journal ArticleDOI
TL;DR: Principal component analysis (PCA) is used to show how a significant reduction in data complexity was achieved, improving the ability to highlight chemical differences amongst the samples.

203 citations


Journal ArticleDOI
Sanjit K. Debnath, YongKeun Park
TL;DR: This Letter reports on the use of a spatial phase-shifting algorithm in a fast, straightforward method of real-time quantitative phase imaging, effective and sufficiently general for application to the dynamic phenomena of biological samples.
Abstract: This Letter reports on the use of a spatial phase-shifting algorithm in a fast, straightforward method of real-time quantitative phase imaging. Phase extraction is five times faster than with a Fourier transform and twice as fast as with a Hilbert transform. The fact that the phase extraction from an interferogram of 512 × 512 pixels takes less than 8.93 ms with a typical desktop computer suggests that the proposed method can be readily applied to high-speed dynamic quantitative phase imaging. The proposed method of quantitative phase imaging is effective and sufficiently general for application to the dynamic phenomena of biological samples.

195 citations
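
For context on the speed comparison, the Fourier-transform baseline mentioned in the abstract is the classical off-axis sideband-filtering method sketched below. The carrier offset, filter radius, and function name are assumptions for illustration; this is not the authors' spatial phase-shifting algorithm.

```python
import numpy as np

def fourier_phase_extraction(interferogram, carrier, radius):
    """Classical Fourier-transform phase retrieval from an off-axis
    interferogram: isolate the +1-order sideband around the (assumed known)
    carrier, shift it to the origin, inverse-transform, take the argument."""
    n0, n1 = interferogram.shape
    F = np.fft.fftshift(np.fft.fft2(interferogram))
    c0, c1 = n0 // 2 + carrier[0], n1 // 2 + carrier[1]
    y, x = np.ogrid[:n0, :n1]
    mask = (y - c0) ** 2 + (x - c1) ** 2 < radius ** 2
    sideband = np.roll(np.where(mask, F, 0.0),
                       (-carrier[0], -carrier[1]), axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.angle(field)            # wrapped phase; unwrap in a later step

# Toy interferogram: quadratic phase object plus a tilt carrier of 40 fringes.
n = 256
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
phi = 2e-4 * ((x - n / 2) ** 2 + (y - n / 2) ** 2)
igram = 1.0 + np.cos(2 * np.pi * 40 * x / n + phi)
wrapped = fourier_phase_extraction(igram, carrier=(0, 40), radius=20)
```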


Journal ArticleDOI
TL;DR: The distribution is demonstrated to be a CFCR representation computed without any searching operation; it also generates a new TF representation, called the inverse LVD (ILVD), and a new ambiguity function, called Lv's ambiguity function (LVAF), both of which may break through the tradeoff between resolution and cross terms.
Abstract: This paper proposes a novel representation, known as Lv's distribution (LVD), of linear frequency modulated (LFM) signals. It has been well known that a monocomponent LFM signal can be uniquely determined by two important physical quantities, centroid frequency and chirp rate (CFCR). The basic reason for expressing an LFM signal in the CFCR domain is that these two quantities may not be apparent in the time or time-frequency (TF) domain. The goal of the LVD is to naturally and accurately represent a mono- or multicomponent LFM in the CFCR domain. The proposed LVD is simple and only requires a two-dimensional (2-D) Fourier transform of a parametric scaled symmetric instantaneous autocorrelation function. It can be easily implemented by using complex multiplications and fast Fourier transforms (FFTs) based on the scaling principle. The computational complexity, properties, detection performance and representation errors are analyzed for this new distribution. Comparisons with three other popular methods, the Radon-Wigner transform (RWT), the Radon-ambiguity transform (RAT), and the fractional Fourier transform (FRFT), are performed. With several numerical examples, our distribution is demonstrated to be a CFCR representation that is computed without using any searching operation. The main significance of the LVD is to convert a 1-D LFM into a 2-D single-frequency signal. One of the most important applications of the LVD is to generate a new TF representation, called the inverse LVD (ILVD), and a new ambiguity function, called Lv's ambiguity function (LVAF), both of which may break through the tradeoff between resolution and cross terms.

191 citations
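
A heavily simplified illustration of why CFCR parameters become accessible through an instantaneous autocorrelation followed by Fourier transforms: at a fixed lag, the autocorrelation of an LFM is a single tone whose frequency is proportional to the chirp rate. The sketch below conveys only this intuition; the LVD's parametric scaling, symmetric autocorrelation, and 2-D transform are not reproduced, and all names and parameters are illustrative.

```python
import numpy as np

def lfm_cfcr_estimate(x, fs, lag):
    """Estimate (centre frequency, chirp rate) of a monocomponent LFM from
    a fixed-lag instantaneous autocorrelation and two FFT peak searches."""
    N = len(x)
    m = int(round(lag * fs))
    r = x[m:] * np.conj(x[:-m])                 # x(t) x*(t - lag): tone at rate*lag
    nfft = 8 * len(r)                           # zero-pad for a finer peak location
    freqs = np.fft.fftfreq(nfft, 1.0 / fs)
    chirp_rate = freqs[np.argmax(np.abs(np.fft.fft(r, nfft)))] / lag
    tc = np.arange(N) / fs - 0.5 * (N - 1) / fs # time about the window centre
    d = x * np.exp(-1j * np.pi * chirp_rate * tc**2)   # dechirp
    nfft2 = 8 * N
    freqs2 = np.fft.fftfreq(nfft2, 1.0 / fs)
    centre_freq = freqs2[np.argmax(np.abs(np.fft.fft(d, nfft2)))]
    return centre_freq, chirp_rate

# Toy check: a chirp sweeping 50 -> 250 Hz over 1 s (centre 150 Hz, rate 200 Hz/s).
fs = 2000.0
t = np.arange(int(fs)) / fs
x = np.exp(2j * np.pi * (50.0 * t + 0.5 * 200.0 * t**2))
print(lfm_cfcr_estimate(x, fs, lag=0.05))        # approximately (150.0, 200.0)
```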


Journal ArticleDOI
TL;DR: This work develops a novel sampling theorem on the sphere and corresponding fast algorithms by associating the sphere with the torus through a periodic extension and highlights the advantages of the sampling theorem in the context of potential applications, notably in the field of compressive sampling.
Abstract: We develop a novel sampling theorem on the sphere and corresponding fast algorithms by associating the sphere with the torus through a periodic extension. The fundamental property of any sampling theorem is the number of samples required to represent a band-limited signal. To represent exactly a signal on the sphere band-limited at L, all sampling theorems on the sphere require O(L^2) samples. However, our sampling theorem requires less than half the number of samples of other equiangular sampling theorems on the sphere and an asymptotically identical, but smaller, number of samples than the Gauss-Legendre sampling theorem. The complexity of our algorithms scales as O(L^3); however, the continual use of fast Fourier transforms reduces the constant prefactor associated with the asymptotic scaling considerably, resulting in algorithms that are fast. Furthermore, we do not require any precomputation and our algorithms apply to both scalar and spin functions on the sphere without any change in computational complexity or computation time. We make our implementation of these algorithms available publicly and perform numerical experiments demonstrating their speed and accuracy up to very high band-limits. Finally, we highlight the advantages of our sampling theorem in the context of potential applications, notably in the field of compressive sampling.

188 citations


Journal ArticleDOI
TL;DR: The mathematical structure of the modal identification problem is analyzed and efficient computational methods are developed, focusing on well-separated modes; the analysis reveals a scientific definition of signal-to-noise ratio that governs the behavior of the solution in a characteristic manner.
Abstract: Previously a Bayesian theory for modal identification using the fast Fourier transform (FFT) of ambient data was formulated. That method provides a rigorous way for obtaining modal properties as well as their uncertainties by operating in the frequency domain. This allows a natural partition of information according to frequencies so that well-separated modes can be identified independently. Determining the posterior most probable modal parameters and their covariance matrix, however, requires solving a numerical optimization problem. The dimension of this problem grows with the number of measured channels; and its objective function involves the inverse of an ill-conditioned matrix, which makes the approach impractical for realistic applications. This paper analyzes the mathematical structure of the problem and develops efficient methods for computations, focusing on well-separated modes. A method is developed that allows fast computation of the posterior most probable values and covariance matrix. The analysis reveals a scientific definition of signal-to-noise ratio that governs the behavior of the solution in a characteristic manner. Asymptotic behavior of the modal identification problem is investigated for high signal-to-noise ratios. The proposed method is applied to modal identification of two field buildings. Using the proposed algorithm, Bayesian modal identification can now be performed in a few seconds even for a moderate to large number of measurement channels.

182 citations


Journal ArticleDOI
TL;DR: The augmented Lagrangian method is extended to total variation (TV) restoration models with non-quadratic fidelities, and it is shown that the third sub-problem also has a closed-form solution and thus can be efficiently solved.
Abstract: Recently, the augmented Lagrangian method has been successfully applied to image restoration. We extend the method to total variation (TV) restoration models with non-quadratic fidelities. We will first introduce the method and present an iterative algorithm for TV restoration with a quite general fidelity. In each iteration, three sub-problems need to be solved, two of which can be very efficiently solved via a Fast Fourier Transform (FFT) implementation or a closed-form solution. In general, the third sub-problem needs iterative solvers. We then apply our method to TV restoration with $L^1$ and Kullback-Leibler (KL) fidelities, two common and important data terms for deblurring images corrupted by impulsive noise and Poisson noise, respectively. For these typical fidelities, we show that the third sub-problem also has a closed-form solution and thus can be efficiently solved. In addition, convergence analysis of these algorithms is given. Numerical experiments demonstrate the efficiency of our method.

167 citations
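
The FFT-solvable sub-problem mentioned above is typically a quadratic one whose optimality condition, under periodic boundary conditions, is diagonalized by the DFT. The sketch below solves a system of that type by pointwise division in the Fourier domain; the specific operator (identity plus Laplacian), the parameters, and the names are illustrative assumptions rather than the paper's exact sub-problem.

```python
import numpy as np

def solve_tv_quadratic_subproblem(rhs, mu, lam):
    """Solve (mu*I - lam*Laplacian) u = rhs with periodic boundary
    conditions.  The periodic discrete Laplacian is diagonalized by the
    2-D DFT, so the solve reduces to a pointwise division in Fourier space."""
    m, n = rhs.shape
    wx = 2.0 * np.cos(2.0 * np.pi * np.arange(m) / m) - 2.0
    wy = 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n) - 2.0
    lap_eig = wx[:, None] + wy[None, :]      # Laplacian eigenvalues (<= 0)
    denom = mu - lam * lap_eig               # strictly positive for mu, lam > 0
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))

# Toy usage: one smoothing step applied to a noisy image.
rng = np.random.default_rng(0)
noisy = rng.standard_normal((128, 128))
u = solve_tv_quadratic_subproblem(noisy, mu=1.0, lam=10.0)
print(noisy.std(), u.std())                  # the solve damps high frequencies
```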


Journal ArticleDOI
TL;DR: Various tests on synthetic and experimental images, including a dataset of the 2nd PIV challenge, show that the accuracy of folki is found to be comparable to that of state-of-the-art FFT-based commercial software, while being 50 times faster.
Abstract: Our contribution deals with fast computation of dense two-component (2C) PIV vector fields using Graphics Processing Units (GPUs). We show that iterative gradient-based cross-correlation optimization is an accurate and efficient alternative to multi-pass processing with FFT-based cross-correlation. Density is meant here from the sampling point of view (we obtain one vector per pixel), since the presented algorithm, folki, naturally performs fast correlation optimization over interrogation windows with maximal overlap. The processing of 5 image pairs (1,376 × 1,040 each) is achieved in less than a second on an NVIDIA Tesla C1060 GPU. Various tests on synthetic and experimental images, including a dataset of the 2nd PIV challenge, show that the accuracy of folki is found to be comparable to that of state-of-the-art FFT-based commercial software, while being 50 times faster.

159 citations
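
For reference, the FFT-based cross-correlation kernel that folki is compared against can be summarized in a few lines: correlate two interrogation windows via the FFT and read the displacement off the correlation peak. The sketch below is that baseline only; sub-pixel peak fitting and multi-pass window deformation are omitted, and the names and toy windows are assumptions.

```python
import numpy as np

def fft_window_displacement(ref_win, moved_win):
    """Integer-pixel displacement of `moved_win` relative to `ref_win`
    from the peak of their FFT-based circular cross-correlation."""
    a = ref_win - ref_win.mean()
    b = moved_win - moved_win.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))))
    corr = np.fft.fftshift(corr)
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    return peak - np.array(corr.shape) // 2      # (dy, dx) in pixels

# Toy check: the second window is the first shifted by (3, -2) pixels.
rng = np.random.default_rng(1)
win_a = rng.random((32, 32))
win_b = np.roll(win_a, shift=(3, -2), axis=(0, 1))
print(fft_window_displacement(win_a, win_b))     # [ 3 -2]
```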


Journal ArticleDOI
TL;DR: A new dynamic harmonic estimator is presented as an extension of the fast Fourier transform (FFT), which assumes a fluctuating complex envelope at each harmonic, and is able to estimate harmonics that are time varying inside the observation window.
Abstract: A new dynamic harmonic estimator is presented as an extension of the fast Fourier transform (FFT), which assumes a fluctuating complex envelope at each harmonic. This estimator is able to estimate harmonics that are time varying inside the observation window. The extension receives the name “Taylor-Fourier transform (TFT)” since it is based on the Maclaurin series expansion of each complex envelope. Better estimates of the dynamic harmonics are obtained due to the fact that the Fourier subspace is contained in the subspace generated by the Taylor-Fourier basis. The coefficients of the TFT have a physical meaning: they represent instantaneous samples of the first derivatives of the complex envelope, with all of them calculated at once through a linear transform. The Taylor-Fourier estimator can be seen as a bank of maximally flat finite-impulse-response filters, with the frequency response of ideal differentiators about each harmonic frequency. In addition to cleaner harmonic phasor estimates under dynamic conditions, among the new estimates are the instantaneous frequency and first derivatives of each harmonic. Two examples are presented to evaluate the performance of the proposed estimator.

139 citations
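
The core idea can be sketched as a least-squares fit on a basis of harmonics multiplied by powers of time, which returns each harmonic's complex envelope together with its derivatives. The paper's filter-bank formulation and exact basis are not reproduced; the function below, its arguments, and the first-order Taylor truncation in the toy check are illustrative assumptions.

```python
import numpy as np

def taylor_fourier_fit(x, f0, fs, n_harmonics=3, K=2):
    """Least-squares fit of a real signal on the basis t**k * exp(j*2*pi*h*f0*t),
    h = -n_harmonics..n_harmonics (h != 0), k = 0..K.  The k = 0 coefficient of
    harmonic h approximates its dynamic phasor; k >= 1 approximate derivatives."""
    N = len(x)
    t = (np.arange(N) - (N - 1) / 2) / fs        # time about the window centre
    hs = [h for h in range(-n_harmonics, n_harmonics + 1) if h != 0]
    keys = [(h, k) for h in hs for k in range(K + 1)]
    cols = [(t ** k) * np.exp(2j * np.pi * h * f0 * t) for h, k in keys]
    B = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(B, x.astype(complex), rcond=None)
    return dict(zip(keys, coeffs))

# Toy check: a 50 Hz fundamental whose amplitude ramps inside the window.
fs, f0, N = 3200.0, 50.0, 640
t = np.arange(N) / fs
x = (1.0 + 0.5 * t) * np.cos(2 * np.pi * f0 * t)
est = taylor_fourier_fit(x, f0, fs, n_harmonics=2, K=1)
print(2 * np.abs(est[(1, 0)]))                   # amplitude at window centre, ~1.05
```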


Proceedings ArticleDOI
01 Sep 2011
TL;DR: This work proposes an abstract theory of denoising with atomic norms which is specialized to provide a convex optimization problem for estimating the frequencies and phases of a mixture of complex exponentials with guaranteed bounds on the mean-squared-error.
Abstract: The sub-Nyquist estimation of line spectra is a classical problem in signal processing, but currently popular subspace-based techniques have few guarantees in the presence of noise and rely on a priori knowledge about system model order. Motivated by recent work on atomic norms in inverse problems, we propose a new approach to line spectrum estimation that provides theoretical guarantees for the mean-square-error performance in the presence of noise and without advance knowledge of the model order. We propose an abstract theory of denoising with atomic norms which is specialized to provide a convex optimization problem for estimating the frequencies and phases of a mixture of complex exponentials with guaranteed bounds on the mean-squared-error. In general, our proposed optimization problem has no known polynomial time solution, but we provide an efficient algorithm, called DAST, based on the Fast Fourier Transform that achieves nearly the same error rate. We compare DAST with Cadzow's canonical alternating projection algorithm, which performs marginally better under high signal-to-noise ratios when the model order is known exactly, and demonstrate experimentally that DAST outperforms other denoising techniques, including Cadzow's, over a wide range of signal-to-noise ratios.

139 citations


Journal ArticleDOI
TL;DR: In this paper, the double Fourier integral analysis is used as an analytical approach to determine the phase-leg switched voltage spectrum under the condition of natural sampling, and the performance of two proposed switching sequences is subsequently evaluated with the harmonic copper losses chosen as the performance criterion.
Abstract: This paper presents a comprehensive analytical investigation of the four-switch three-phase (B4) voltage source inverter. The double Fourier integral analysis is used as an analytical approach to determine the phase-leg switched voltage spectrum under the condition of natural sampling. The performance of two proposed switching sequences is subsequently evaluated with the harmonic copper losses chosen as the performance criterion. For a clear identification of the dc-link voltage, the spectrum of the dc-link current is characterized by the convolution of the switching-function spectrum and the corresponding phase-current spectrum, in which a fundamental component appears, causing offset and fluctuation in the two dc-link capacitor voltages. The offset can be suppressed by certain switching states, whereas the effect of the fluctuation must be neutralized through analytical compensation of the modulating waveforms. Otherwise, the symmetry of the three-phase output currents and reliable operation of the system will not be retained. Finally, the analytically calculated spectra of the output voltage and dc-link current are validated by comparison with those obtained from fast Fourier transform (FFT) analysis of simulated waveforms in MATLAB/SIMULINK. The capacitor-voltage offset suppression and fluctuation-effect neutralization are verified by simulations and experiments, the results of which confirm the validity of the analytical investigation.


Journal ArticleDOI
Gang Xu, Mengdao Xing, Lei Zhang, Yabo Liu, Yachao Li
TL;DR: A novel algorithm of inverse synthetic aperture radar (ISAR) imaging based on Bayesian estimation is proposed, wherein ISAR imaging, jointly with phase adjustment, is mathematically transferred into signal reconstruction via maximum a posteriori estimation.
Abstract: In this letter, a novel algorithm of inverse synthetic aperture radar (ISAR) imaging based on Bayesian estimation is proposed, wherein ISAR imaging, jointly with phase adjustment, is mathematically transferred into signal reconstruction via maximum a posteriori estimation. In the scheme, phase errors are treated as model errors and are overcome in the sparsity-driven optimization regardless of their formats, while data-driven estimation of the statistical parameters for both noise and target is developed, which guarantees the high precision of image generation. Meanwhile, the fast Fourier transform is utilized to implement the solution to image formation, substantially improving its efficiency. Due to the high denoising capability of the proposed algorithm, a high-quality image can be achieved even under strong noise. The experimental results using simulated and measured data confirm its validity.

Journal ArticleDOI
TL;DR: In this article, the authors used several measured profiles of real surfaces having vastly different roughness characteristics to predict contact areas and forces from various elastic contact models and contrast them to a deterministic fast Fourier transform (FFT)-based contact model.
Abstract: The contact force and the real contact area between rough surfaces are important in the prediction of friction, wear, adhesion, and electrical and thermal contact resistance. Over the last four decades various mathematical models have been developed. Although these models are built on very different assumptions and underlying mathematical frameworks, their agreement or effectiveness has never been thoroughly investigated. This work uses several measured profiles of real surfaces having vastly different roughness characteristics to predict contact areas and forces from various elastic contact models and contrast them to a deterministic fast Fourier transform (FFT)-based contact model. The latter is considered “exact” because surfaces are analyzed as they are measured, accounting for all peaks and valleys without compromise. Though measurement uncertainties and resolution issues prevail, the same surfaces are kept constant (i.e., are identical) for all models considered. Nonetheless, the effect of the data resolution of measured sur...

Journal ArticleDOI
TL;DR: A functional framework is presented for the design of tight steerable wavelet frames in any number of dimensions, together with a principal-component-based method for signal-adapted wavelet design, which consistently performs best.
Abstract: We present a functional framework for the design of tight steerable wavelet frames in any number of dimensions. The 2-D version of the method can be viewed as a generalization of Simoncelli's steerable pyramid that gives access to a larger palette of steerable wavelets via a suitable parametrization. The backbone of our construction is a primal isotropic wavelet frame that provides the multiresolution decomposition of the signal. The steerable wavelets are obtained by applying a one-to-many mapping (Nth-order generalized Riesz transform) to the primal ones. The shaping of the steerable wavelets is controlled by an M × M unitary matrix (where M is the number of wavelet channels) that can be selected arbitrarily; this allows for a much wider range of solutions than the traditional equiangular configuration (steerable pyramid). We give a complete functional description of these generalized wavelet transforms and derive their steering equations. We describe some concrete examples of transforms, including some built around a Mallat-type multiresolution analysis of L_2(R^d), and provide a fast Fourier transform-based decomposition algorithm. We also propose a principal-component-based method for signal-adapted wavelet design. Finally, we present some illustrative examples together with a comparison of the denoising performance of various brands of steerable transforms. The results are in favor of an optimized wavelet design (equalized principal component analysis), which consistently performs best.

Journal ArticleDOI
TL;DR: From the hardware resource usage numbers it can be concluded that FTN signaling can be used to achieve higher bandwidth efficiency with acceptable complexity overhead.
Abstract: This paper evaluates the hardware aspects of multicarrier faster-than-Nyquist (FTN) signaling transceivers. The choice of time-frequency spacing of the symbols in an FTN system for improved bandwidth efficiency is targeted towards efficient hardware implementation. This work proposes a hardware architecture for the realization of iterative decoding of FTN multicarrier modulated signals. Compatibility with existing systems has been considered for smooth switching between the faster-than-Nyquist and orthogonal signaling schemes; one example is the use of fast Fourier transforms (FFTs) for multicarrier modulation. The performance of the fixed-point model is very close to that of the floating-point representation. The impact of system parameters such as the number of projection points, time-frequency spacing, and finite wordlengths, together with their design tradeoffs for reduced-complexity iterative decoders in FTN systems, has been investigated. The FTN decoder has been designed and synthesized in both 65 nm CMOS and FPGA. From the hardware resource usage numbers it can be concluded that FTN signaling can be used to achieve higher bandwidth efficiency with acceptable complexity overhead.

Journal ArticleDOI
TL;DR: This paper develops a fast alternating-direction implicit finite difference method for space-fractional diffusion equations in two space dimensions that reduces the CPU time from more than 2 months and 1 week, as consumed by a traditional finite difference method, to 1.5 h, using less than one thousandth of the memory the standard method requires.

Gino Angelo Velasco, Nicki Holighaus, Monika Dörfler, Thomas Grill
01 Jan 2011
TL;DR: An efficient and perfectly invertible signal transform featuring a constant-Q frequency resolution is presented, based on the idea of the recently introduced nonstationary Gabor frames.
Abstract: An efficient and perfectly invertible signal transform featuring a constant-Q frequency resolution is presented. The proposed approach is based on the idea of the recently introduced nonstationary Gabor frames. Exploiting the properties of the operator corresponding to a family of analysis atoms, this approach overcomes the problems of the classical implementations of constant-Q transforms, in particular, computational intensity and lack of invertibility. Perfect reconstruction is guaranteed by using an easy-to-calculate dual system in the synthesis step and computation time is kept low by applying FFT-based processing. The proposed method is applied to real-life signals and evaluated in comparison to a related approach, recently introduced specifically for audio signals.

Journal ArticleDOI
13 Mar 2011-JOM
TL;DR: In this paper, a numerical formulation based on fast Fourier transforms was developed over the last 15 years, which can use the voxelized microstructural images of heterogeneous materials as input to predict their micromechanical and effective response.
Abstract: Emerging characterization methods in experimental mechanics pose a challenge to modelers to devise efficient formulations that permit interpretation and exploitation of the massive amount of data generated by these novel methods. In this overview we report on a numerical formulation based on fast Fourier transforms, developed over the last 15 years, which can use the voxelized microstructural images of heterogeneous materials as input to predict their micromechanical and effective response. The focus of this presentation is on applications of the method to plastically-deforming polycrystalline materials.
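
A scalar (conductivity) analogue of the basic fixed-point scheme used in this family of FFT methods is sketched below: the voxel image of material properties enters directly, and each iteration costs one pair of FFTs plus a pointwise Green-operator multiplication. The crystal-plasticity constitutive treatment discussed in the paper is not modelled; the grid size, phase contrast, iteration count, and names are illustrative assumptions.

```python
import numpy as np

def fft_effective_conductivity(kmap, E=(1.0, 0.0), n_iter=200, k0=None):
    """Basic Lippmann-Schwinger fixed-point scheme (scalar conductivity):
    iterate e <- E - Gamma0 * ((k - k0) e), with Gamma0 applied in Fourier
    space, then return the volume-averaged flux for the imposed gradient E."""
    n = kmap.shape[0]
    if k0 is None:
        k0 = 0.5 * (kmap.min() + kmap.max())       # reference medium
    xi = np.fft.fftfreq(n)
    xi1, xi2 = np.meshgrid(xi, xi, indexing="ij")
    xi_sq = xi1 ** 2 + xi2 ** 2
    xi_sq[0, 0] = 1.0                              # avoid 0/0; zero mode handled below
    e = np.stack([np.full((n, n), E[0]), np.full((n, n), E[1])])
    for _ in range(n_iter):
        tau_hat = np.fft.fft2((kmap - k0) * e, axes=(-2, -1))
        dot = xi1 * tau_hat[0] + xi2 * tau_hat[1]  # xi . tau_hat
        g0 = xi1 * dot / (k0 * xi_sq)
        g1 = xi2 * dot / (k0 * xi_sq)
        g0[0, 0] = 0.0
        g1[0, 0] = 0.0                             # keep the prescribed mean gradient
        e = np.stack([E[0] - np.real(np.fft.ifft2(g0)),
                      E[1] - np.real(np.fft.ifft2(g1))])
    return (kmap * e).mean(axis=(1, 2))            # effective flux <k e>

# Voxelized toy microstructure: a circular inclusion (k = 10) in a matrix (k = 1).
n = 64
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
kmap = np.where((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 5) ** 2, 10.0, 1.0)
print(fft_effective_conductivity(kmap))            # ~ (k_eff, 0) with k_eff > 1
```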

Journal ArticleDOI
Bo Zeng, Zhaosheng Teng, Yulian Cai, Siyu Guo, Baiyuan Qing
TL;DR: An approach for power system harmonic phasor analysis under asynchronous sampling is proposed, based on smoothing sampled data by windowing the signal with the four-term fifth derivative Nuttall (FFDN) window, and then calculating harmonicphasors in the frequency domain with an improved fast Fourier transform (IFFT) algorithm.
Abstract: An approach for power system harmonic phasor analysis under asynchronous sampling is proposed in this paper. It is based on smoothing the sampled data by windowing the signal with the four-term fifth derivative Nuttall (FFDN) window, and then calculating harmonic phasors in the frequency domain with an improved fast Fourier transform (IFFT) algorithm. The applicable rectification formulas of the IFFT are obtained by using polynomial curve fitting, dramatically reducing the computation load. The FFDN window effectively inhibits spectral leakage, the picket-fence effect is corrected by the IFFT algorithm under asynchronous sampling, and the overall algorithm can easily be implemented in embedded systems. The effectiveness of the proposed method was analyzed by means of simulations and practical experiments on multifrequency signals with fluctuation of the fundamental frequency and in the presence of white noise and interharmonics.
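
The general window-plus-interpolation strategy can be illustrated with a standard Hann window and its well-known two-bin interpolation formula, as sketched below. The four-term fifth-derivative Nuttall window and the fitted rectification formulas from the paper are not reproduced; the function name and toy parameters are assumptions.

```python
import numpy as np

def windowed_interpolated_harmonic(x, fs, approx_freq):
    """Estimate (frequency, amplitude) of one harmonic under asynchronous
    sampling: windowed FFT, coarse peak bin, then a two-bin interpolation
    (Hann-window formula delta = (2*beta - 1)/(beta + 1))."""
    N = len(x)
    w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)   # periodic Hann
    X = np.fft.rfft(w * x)
    k = int(round(approx_freq * N / fs))
    k = k - 2 + int(np.argmax(np.abs(X[k - 2:k + 3])))     # snap to the local peak
    if np.abs(X[k + 1]) >= np.abs(X[k - 1]):
        beta = np.abs(X[k + 1]) / np.abs(X[k])
        delta = (2 * beta - 1) / (beta + 1)
    else:
        beta = np.abs(X[k - 1]) / np.abs(X[k])
        delta = -(2 * beta - 1) / (beta + 1)
    freq = (k + delta) * fs / N
    # evaluate the window spectrum at the fractional offset for the amplitude
    Wd = np.abs(np.sum(w * np.exp(-2j * np.pi * delta * np.arange(N) / N)))
    amp = 2 * np.abs(X[k]) / Wd
    return freq, amp

# Toy check: 1.5 * cos at 50.3 Hz, sampled asynchronously at 1600 Hz.
fs, N = 1600.0, 1024
t = np.arange(N) / fs
x = 1.5 * np.cos(2 * np.pi * 50.3 * t + 0.4)
print(windowed_interpolated_harmonic(x, fs, approx_freq=50.0))   # ~ (50.3, 1.5)
```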

Journal ArticleDOI
TL;DR: In the absence of visual information, the control group increases the energy at low frequencies, while the group with Down syndrome decreases it, and the lower amount of energy observed in this band under the 'eyes closed' condition may serve to identify abnormalities in the functioning of the vestibular apparatus of individuals with Down syndrome.

Proceedings ArticleDOI
12 Feb 2011
TL;DR: On a range of NVIDIA GPUs and input sizes, the auto-tuned FFTs outperform the NVIDIA CUFFT 3.0 library by up to 38x and deliver up to 3x higher performance compared to a manually-tuned FFT.
Abstract: We present an auto-tuning framework for FFTs on graphics processors (GPUs). Due to the complex design of the memory and compute subsystems on GPUs, the performance of FFT kernels over the range of possible input parameters can vary widely. We generate several variants for each component of the FFT kernel that, for different cases, are likely to perform well. Our auto-tuner composes variants to generate kernels and selects the best ones. We present heuristics to prune the search space and profile only a small fraction of all possible kernels. We compose optimized kernels to improve the performance of larger FFT computations. We implement the system using the NVIDIA CUDA API and compare its performance to the state-of-the-art FFT libraries. On a range of NVIDIA GPUs and input sizes, our auto-tuned FFTs outperform the NVIDIA CUFFT 3.0 library by up to 38x and deliver up to 3x higher performance compared to a manually-tuned FFT.
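
At its core, auto-tuning is a measure-and-select loop over functionally equivalent variants. The sketch below shows that loop on the CPU with a few interchangeable NumPy FFT formulations; the GPU kernel generation, CUDA specifics, and search-space pruning heuristics of the paper are not modelled, and all names are illustrative.

```python
import time
import numpy as np

def autotune(variants, make_input, n_trials=5):
    """Time each functionally-equivalent variant on representative input
    (after a warm-up run) and return the fastest one."""
    x = make_input()
    best_name, best_time = None, float("inf")
    for name, fn in variants.items():
        fn(x)                                        # warm-up
        t0 = time.perf_counter()
        for _ in range(n_trials):
            fn(x)
        elapsed = (time.perf_counter() - t0) / n_trials
        if elapsed < best_time:
            best_name, best_time = name, elapsed
    return best_name, best_time

# Three interchangeable ways to compute a batch of 2-D FFTs.
variants = {
    "fft2": lambda a: np.fft.fft2(a),
    "rows_then_cols": lambda a: np.fft.fft(np.fft.fft(a, axis=-1), axis=-2),
    "fftn": lambda a: np.fft.fftn(a, axes=(-2, -1)),
}
make_input = lambda: np.random.default_rng(3).standard_normal((32, 256, 256))
print(autotune(variants, make_input))
```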

Journal ArticleDOI
TL;DR: A generalized conflict-free memory addressing scheme for memory-based fast Fourier transform (FFT) processors with parallel arithmetic processing units made up of radix-2^q multi-path delay commutators (MDCs) is presented.
Abstract: This paper presents a generalized conflict-free memory addressing scheme for memory-based fast Fourier transform (FFT) processors with parallel arithmetic processing units made up of radix-2^q multi-path delay commutators (MDCs). The proposed addressing scheme considers continuous-flow operation with minimum shared-memory requirements. To improve throughput, parallel high-radix processing units are employed. We prove that a solution to conflict-free memory access satisfying the constraints of continuous-flow, variable-size, higher-radix, and parallel-processing operation indeed exists. In addition, a rescheduling technique for twiddle-factor multiplication is developed to reduce hardware complexity and to enhance hardware efficiency. From the results, we can see that the proposed processor has high utilization and efficiency and supports flexible configurability for various FFT sizes with fewer computation cycles than conventional radix-2/radix-4 memory-based FFT processors.
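
A concrete instance of conflict-free addressing for memory-based FFTs is the classic digit-sum rule: store the datum at address i in bank (radix-r digit sum of i) mod r, so the r operands of any butterfly, which differ in exactly one digit, always fall in distinct banks. The paper's generalized scheme for parallel radix-2^q MDC units is more elaborate; the check below only verifies this classic rule and uses illustrative names.

```python
def bank_of(index, radix, n_digits):
    """Classic conflict-free bank assignment: radix-r digit sum mod r."""
    s = 0
    for _ in range(n_digits):
        s += index % radix
        index //= radix
    return s % radix

def verify_conflict_free(radix=4, n_digits=4):
    """Check that the radix operands of every butterfly, at every stage,
    fall into distinct banks and can therefore be accessed in parallel."""
    n = radix ** n_digits
    for stage in range(n_digits):
        stride = radix ** stage
        for base in range(n):
            if (base // stride) % radix != 0:        # one representative per butterfly
                continue
            group = [base + j * stride for j in range(radix)]
            banks = {bank_of(i, radix, n_digits) for i in group}
            assert len(banks) == radix, (stage, group, banks)
    return True

print(verify_conflict_free(radix=4, n_digits=4))     # True for a 256-point radix-4 FFT
```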

Journal ArticleDOI
TL;DR: By using the new current subspace approximation, the proposed FFT-TSOM inherits the merits of the TSOM (better stability during the inversion and better robustness against noise compared to the SOM), and meanwhile has lower computational complexity than the TSOM.
Abstract: A fast Fourier transform (FFT) twofold subspace-based optimization method (TSOM) is proposed to solve electromagnetic inverse scattering problems. As mentioned in the original TSOM (Y. Zhong et al., Inverse Probl., vol. 25, p. 085003, 2009), one is able to efficiently obtain a meaningful coarse result by constraining the induced current to a lower-dimensional subspace during the optimization, and use this result as the initial guess of the optimization with a higher-dimensional current subspace. Instead of using the singular vectors to construct the current subspace as in the original TSOM, in this paper we use discrete Fourier bases to construct a current subspace that is a good approximation to the original current subspace spanned by singular vectors. Such an approximation avoids the computationally burdensome singular value decomposition and uses the FFT to accomplish the construction of the induced current, which reduces the computational complexity and memory demand of the algorithm compared to the original TSOM. By using the new current subspace approximation, the proposed FFT-TSOM inherits the merits of the TSOM (better stability during the inversion and better robustness against noise compared to the SOM), and meanwhile has lower computational complexity than the TSOM. Numerical tests in the two-dimensional TM case and the three-dimensional one validate the algorithm.

Journal ArticleDOI
TL;DR: Simulation results demonstrate that the RCTSL has a lower signal-to-noise ratio (SNR) threshold compared with other known estimators, and a theoretical bound is obtained, which shows that the performance of the RCTSL approaches the Cramér-Rao bound (CRB).
Abstract: A noniterative frequency estimator is proposed in this paper. The maximum bin of the fast Fourier transform (FFT) is searched as a coarse estimate, and then the rational combination of three spectrum lines (RCTSL) is used as the fine estimate. Based on least-squares approximation in the frequency domain, the combinational weights of the RCTSL are found to be constants depending only on the data length; therefore, the RCTSL is very computationally efficient. A theoretical bound is also obtained, which shows that the performance of the RCTSL approaches the Cramér-Rao bound (CRB). Simulation results demonstrate that the RCTSL has a lower signal-to-noise ratio (SNR) threshold compared with other known estimators.
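
The coarse/fine structure can be illustrated with an FFT maximum-bin search followed by a standard three-bin parabolic interpolation as the fine step. The RCTSL's least-squares combinational weights are not reproduced here; the parabolic formula is a stand-in, and the function name and toy signal are assumptions.

```python
import numpy as np

def coarse_fine_frequency(x, fs):
    """Coarse estimate: maximum bin of the (Hann-windowed) FFT magnitude.
    Fine estimate: parabolic interpolation of the three spectral magnitudes
    around that bin -- a standard stand-in for the RCTSL weights."""
    N = len(x)
    X = np.abs(np.fft.rfft(x * np.hanning(N)))
    k = int(np.argmax(X[1:-1])) + 1
    a, b, c = X[k - 1], X[k], X[k + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)       # vertex of the fitted parabola
    return (k + delta) * fs / N

# Toy check: a 123.4 Hz tone in white noise.
rng = np.random.default_rng(2)
fs, N = 1000.0, 4096
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 123.4 * t) + 0.1 * rng.standard_normal(N)
print(coarse_fine_frequency(x, fs))               # ~123.4
```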

Journal ArticleDOI
TL;DR: It is shown, for the proposed method, that the error due to frequency-domain truncation can be separated from the approximation error added by the fast method, which means that the truncation of the underlying Ewald sum prescribes the size of the grid used in the FFT-based fast method, and that this grid is clearly the minimal one.

Journal ArticleDOI
01 Feb 2011
TL;DR: To eliminate the read-only memories used to store the twiddle factors, the proposed architecture applies a reconfigurable complex multiplier and bit-parallel multipliers to achieve a ROM-less FFT/IFFT processor, thus consuming less power than existing designs.
Abstract: 4G and other wireless systems are currently hot topics of research and development in the communication field. Broadband wireless systems based on orthogonal frequency division multiplexing (OFDM) often require an inverse fast Fourier transform (IFFT) to produce multiple subcarriers. In this paper, we present an efficient implementation of a pipeline FFT/IFFT processor for OFDM applications. Our design adopts a single-path delay feedback style as the proposed hardware architecture. To eliminate the read-only memories (ROMs) used to store the twiddle factors, the proposed architecture applies a reconfigurable complex multiplier and bit-parallel multipliers to achieve a ROM-less FFT/IFFT processor, thus consuming less power than existing designs. The design requires about 33.6K gates, and its power consumption is about 9.8 mW at 20 MHz.

Journal ArticleDOI
TL;DR: It is pointed out that the wavelength selection conditions of AWGs when used as wavelength MUX/DEMUX also enable them to perform FFT/IFFT functions, and previous research on AWGs can now be applied to optical FFT/IFFT circuit design.
Abstract: Arrayed waveguide gratings (AWGs) are widely used as wavelength division multiplexers (MUX) and demultiplexers (DEMUX) in optical networks. Here we propose and demonstrate that conventional AWGs can also be used as integrated spectral filters to realize a fast Fourier transform (FFT) and its inverse form (IFFT). More specifically, we point out that the wavelength selection conditions of AWGs when used as wavelength MUX/DEMUX also enable them to perform FFT/IFFT functions. Therefore, previous research on AWGs can now be applied to optical FFT/IFFT circuit design. Compared with other FFT/IFFT optical circuits, AWGs have less structural complexity, especially for a large number of inputs and outputs. As an important application, AWGs can be used in optical OFDM systems. We propose an all-optical OFDM system with AWGs and demonstrate the simulation results. Overall, the AWG provides a feasible solution for all-optical OFDM systems, especially with a large number of optical subcarriers.

Journal ArticleDOI
TL;DR: It is demonstrated that graphics processing units can calculate fast Fourier transforms much more efficiently than traditional central processing units.

Journal ArticleDOI
TL;DR: This paper proposes a maximum likelihood (ML) approach to covariance estimation that employs a novel non-linear sparsity constraint: the covariance is constrained to have an eigen decomposition which can be represented as a sparse matrix transform (SMT).
Abstract: Covariance estimation for high dimensional signals is a classically difficult problem in statistical signal analysis and machine learning. In this paper, we propose a maximum likelihood (ML) approach to covariance estimation, which employs a novel non-linear sparsity constraint. More specifically, the covariance is constrained to have an eigen decomposition which can be represented as a sparse matrix transform (SMT). The SMT is formed by a product of pairwise coordinate rotations known as Givens rotations. Using this framework, the covariance can be efficiently estimated using greedy optimization of the log-likelihood function, and the number of Givens rotations can be efficiently computed using a cross-validation procedure. The resulting estimator is generally positive definite and well-conditioned, even when the sample size is limited. Experiments on a combination of simulated data, standard hyperspectral data, and face image sets show that the SMT-based covariance estimates are consistently more accurate than both traditional shrinkage estimates and recently proposed graphical lasso estimates for a variety of different classes and sample sizes. An important property of the new covariance estimate is that it naturally yields a fast implementation of the estimated eigen-transformation using the SMT representation. In fact, the SMT can be viewed as a generalization of the classical fast Fourier transform (FFT) in that it uses “butterflies” to represent an orthonormal transform. However, unlike the FFT, the SMT can be used for fast eigen-signal analysis of general non-stationary signals.
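
The butterfly analogy at the end of the abstract can be made concrete: a sparse matrix transform is a product of Givens rotations, each touching only one coordinate pair, much as an FFT butterfly touches one pair of values. The sketch below merely applies such a product to a vector; how the estimator chooses the rotation pairs and angles (greedy maximum-likelihood) is not reproduced, and the triple format is an illustrative assumption.

```python
import numpy as np

def apply_smt(x, rotations):
    """Apply a product of Givens rotations to x.  Each rotation (i, j, theta)
    mixes only coordinates i and j, so the full transform costs O(#rotations)
    operations -- the 'butterfly' structure alluded to in the abstract."""
    y = np.array(x, dtype=float, copy=True)
    for i, j, theta in rotations:
        c, s = np.cos(theta), np.sin(theta)
        yi, yj = y[i], y[j]
        y[i] = c * yi - s * yj
        y[j] = s * yi + c * yj
    return y

# Toy usage: three rotations acting on a 4-dimensional vector.
x = np.array([1.0, 2.0, 3.0, 4.0])
rots = [(0, 1, 0.3), (2, 3, -0.7), (1, 2, 1.1)]
y = apply_smt(x, rots)
print(y, np.linalg.norm(x), np.linalg.norm(y))     # rotations preserve the norm
```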