
Showing papers on "Fast Fourier transform" published in 2009


Journal ArticleDOI
TL;DR: A simple and efficient algorithm for multichannel image deblurring and denoising is constructed, applicable to both within-channel and cross-channel blurs in the presence of additive Gaussian noise.
Abstract: Variational models with $\ell_1$-norm based regularization, in particular total variation (TV) and its variants, have long been known to offer superior image restoration quality, but processing speed remained a bottleneck, preventing their widespread use in the practice of color image processing. In this paper, by extending the grayscale image deblurring algorithm proposed in [Y. Wang, J. Yang, W. Yin, and Y. Zhang, SIAM J. Imaging Sci., 1 (2008), pp. 248-272], we construct a simple and efficient algorithm for multichannel image deblurring and denoising, applicable to both within-channel and cross-channel blurs in the presence of additive Gaussian noise. The algorithm restores an image by minimizing an energy function consisting of an $\ell_2$-norm fidelity term and a regularization term that can be either TV, weighted TV, or regularization functions based on higher-order derivatives. Specifically, we use a multichannel extension of the classic TV regularizer (MTV) and derive our algorithm from an extended half-quadratic transform of Geman and Yang [IEEE Trans. Image Process., 4 (1995), pp. 932-946]. For three-channel color images, the per-iteration computation of this algorithm is dominated by six fast Fourier transforms. The convergence results in [Y. Wang, J. Yang, W. Yin, and Y. Zhang, SIAM J. Imaging Sci., 1 (2008), pp. 248-272] for single-channel images, including global convergence with a strong $q$-linear rate and finite convergence for some quantities, are extended to this algorithm. We present numerical results, including images recovered from various types of blurs, comparisons between our results and those obtained from the deblurring functions in MATLAB's Image Processing Toolbox, as well as images recovered by our algorithm using weighted MTV and higher-order regularization. Our numerical results indicate that the processing speed, as attained by the proposed algorithm, of variational models with TV-like regularization can be made comparable to that of less sophisticated but widely used methods for color image restoration.
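
To make the mechanics concrete, here is a minimal single-channel Python/NumPy sketch of this kind of half-quadratic splitting scheme, assuming periodic boundary conditions; parameter names and values are illustrative, not the paper's, and the paper's multichannel version additionally couples the color channels:

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad the blur kernel to the image size and circularly shift its
    center to index (0, 0), so that pointwise multiplication in the Fourier
    domain implements periodic convolution."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for ax, s in enumerate(psf.shape):
        pad = np.roll(pad, -(s // 2), axis=ax)
    return np.fft.fft2(pad)

def tv_deblur(f, psf, mu=1e3, beta=32.0, iters=50):
    """Half-quadratic TV deblurring sketch: alternate a closed-form
    shrinkage on the auxiliary gradient variable with a quadratic
    u-subproblem that is diagonal in the Fourier domain, so each
    iteration costs a handful of FFTs."""
    K = psf2otf(psf, f.shape)
    Dx = psf2otf(np.array([[1.0, -1.0]]), f.shape)    # forward difference, x
    Dy = psf2otf(np.array([[1.0], [-1.0]]), f.shape)  # forward difference, y
    denom = np.abs(Dx)**2 + np.abs(Dy)**2 + (mu / beta) * np.abs(K)**2
    KtF = np.conj(K) * np.fft.fft2(f)
    u = f.copy()
    for _ in range(iters):
        U = np.fft.fft2(u)
        ux = np.real(np.fft.ifft2(Dx * U))
        uy = np.real(np.fft.ifft2(Dy * U))
        mag = np.maximum(np.sqrt(ux**2 + uy**2), 1e-12)
        shrink = np.maximum(mag - 1.0 / beta, 0.0) / mag   # isotropic shrinkage
        rhs = (np.conj(Dx) * np.fft.fft2(shrink * ux)
               + np.conj(Dy) * np.fft.fft2(shrink * uy)) + (mu / beta) * KtF
        u = np.real(np.fft.ifft2(rhs / denom))             # FFT-diagonal solve
    return u
```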

483 citations


Journal ArticleDOI
TL;DR: This article provides a survey on the mathematical concepts behind the NFFT and its variants, as well as a general guideline for using the library.
Abstract: NFFT 3 is a software library that implements the nonequispaced fast Fourier transform (NFFT) and a number of related algorithms, for example, nonequispaced fast Fourier transforms on the sphere and iterative schemes for inversion. This article provides a survey on the mathematical concepts behind the NFFT and its variants, as well as a general guideline for using the library. Numerical examples for a number of applications are given.
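
As a point of reference for what the NFFT computes, the underlying nonequispaced discrete Fourier transform can be evaluated directly in O(NM) operations; the sketch below uses plain NumPy rather than the NFFT 3 API, and the NFFT approximates the same sum in O(N log N + M):

```python
import numpy as np

def ndft(fhat, x):
    """Direct evaluation of f(x_j) = sum_{k=-N/2}^{N/2-1} fhat_k
    * exp(-2*pi*1j*k*x_j) at nonequispaced nodes x_j in [-1/2, 1/2).
    Useful as a brute-force reference when checking a fast NFFT."""
    N = len(fhat)
    k = np.arange(-(N // 2), N // 2)
    return np.exp(-2j * np.pi * np.outer(x, k)) @ fhat

# example: 16 Fourier coefficients evaluated at 5 random nodes
f = ndft(np.random.randn(16), np.random.rand(5) - 0.5)
```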

376 citations


Journal ArticleDOI
TL;DR: The alternating minimization algorithm is extended to the case of recovering blurry multichannel (color) images corrupted by impulsive rather than Gaussian noise, and is shown to have attractive convergence properties, including finite convergence for some variables and a $q$-linear convergence rate.
Abstract: We extend the alternating minimization algorithm recently proposed in [Y. Wang, J. Yang, W. Yin, and Y. Zhang, SIAM J. Imag. Sci., 1 (2008), pp. 248-272]; [J. Yang, W. Yin, Y. Zhang, and Y. Wang, SIAM J. Imag. Sci., 2 (2009), pp. 569-592] to the case of recovering blurry multichannel (color) images corrupted by impulsive rather than Gaussian noise. The algorithm minimizes the sum of a multichannel extension of total variation and a data fidelity term measured in the $\ell_1$-norm, and is applicable to both salt-and-pepper and random-valued impulsive noise. We derive the algorithm by applying the well-known quadratic penalty function technique and prove attractive convergence properties, including finite convergence for some variables and $q$-linear convergence rate. Under periodic boundary conditions, the main computational requirements of the algorithm are fast Fourier transforms and a low-complexity Gaussian elimination procedure. Numerical results on images with different blurs and impulsive noise are presented to demonstrate the efficiency of the algorithm. In addition, it is numerically compared to the least absolute deviation method [H. Y. Fu, M. K. Ng, M. Nikolova, and J. L. Barlow, SIAM J. Sci. Comput., 27 (2006), pp. 1881-1902] and the two-phase method [J. F. Cai, R. Chan, and M. Nikolova, AIMS J. Inverse Problems and Imaging, 2 (2008), pp. 187-204] for recovering grayscale images. We also present results of recovering multichannel images.

335 citations


Journal ArticleDOI
TL;DR: Advanced signal-and-data-processing algorithms consist of a proper sample selection algorithm, a Hilbert transformation of the sampled stator current, and spectral analysis via FFT of the modulus of the resultant time-dependent vector, achieving MCSA efficiently.
Abstract: This paper proposes an online/offline induction motor current signature analysis (MCSA) with advanced signal-and-data-processing algorithms, based on the Hilbert transform. MCSA is a method for motor diagnosis using stator-current signals. Although it is one of the most powerful online methods for diagnosing motor faults, it has some drawbacks that can degrade the performance and accuracy of a motor-diagnosis system. In particular, it is very difficult to detect broken rotor bars when the motor is operating at low slip or under no load, due to fast Fourier transform (FFT) frequency leakage and the small amplitude of the current components related to the fault. Therefore, advanced signal-and-data-processing algorithms are proposed. They consist of a proper sample selection algorithm, a Hilbert transformation of the sampled stator current, and spectral analysis via FFT of the modulus of the resultant time-dependent vector, achieving MCSA efficiently. Experimental results obtained on a 1.1 kW three-phase squirrel-cage induction motor are discussed.
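
A rough sketch of the envelope-spectrum idea (assumed details: Hann windowing and DC removal are generic signal-processing choices, not necessarily the paper's exact chain):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(i_stator, fs):
    """Hilbert-transform MCSA sketch: the modulus of the analytic signal of
    the stator current is its envelope; broken-bar sidebands at 2*s*f1
    around the supply frequency f1 show up as low-frequency lines in the
    envelope spectrum, away from the dominant supply component."""
    env = np.abs(hilbert(i_stator))       # time-dependent vector modulus
    env -= env.mean()                     # remove the DC term of the envelope
    win = np.hanning(len(env))            # taper to limit spectral leakage
    spec = np.abs(np.fft.rfft(env * win))
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec
```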

258 citations


Journal ArticleDOI
Daiyin Zhu, Ling Wang, Yusheng Yu, Qingnian Tao, Zhaoda Zhu
TL;DR: A novel global approach to range alignment for inverse synthetic aperture radar (ISAR) image formation is presented, based on the minimization of the entropy of the average range profile (ARP), and the processing chain is capable of exploiting the efficiency of the fast Fourier transform.
Abstract: In this letter, a novel global approach to range alignment for inverse synthetic aperture radar (ISAR) image formation is presented. The algorithm is based on the minimization of the entropy of the average range profile (ARP), and the processing chain is capable of exploiting the efficiency of the fast Fourier transform. With respect to the existing global methods, the new one requires no exhaustive search operation and eliminates the necessity of the parametric model for the relative offset among the range profiles. The derivation of the algorithm indicates that the presented methodology is essentially an iterative solution to a set of simultaneous equations, and its robustness is also ensured by the iterative structure. Some alternative criteria, such as the maximum contrast of the ARP, can be introduced into the algorithm with a minor change in the entropy-based method. The convergence and robustness of the presented algorithm have been validated by experimental ISAR data.
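
The structure of such a global alignment loop can be sketched as follows; note that this simplified variant aligns each profile to the average range profile by FFT-based cross-correlation and only tracks the ARP entropy, whereas the paper derives a closed-form entropy-minimizing update:

```python
import numpy as np

def arp_entropy(P):
    """Entropy of the normalized average range profile (ARP), the
    quantity the paper's algorithm minimizes."""
    arp = P.mean(axis=0)
    p = arp / arp.sum()
    return -np.sum(p * np.log(p + 1e-12))

def align_profiles(profiles, iters=10):
    """Rows of `profiles` are range profiles of successive pulses.
    Each pass circularly shifts every profile to best match the current
    ARP, with the shift found by FFT-based circular cross-correlation."""
    P = np.abs(profiles).astype(float)
    for _ in range(iters):
        Farp = np.conj(np.fft.fft(P.mean(axis=0)))
        for m in range(P.shape[0]):
            corr = np.real(np.fft.ifft(np.fft.fft(P[m]) * Farp))
            P[m] = np.roll(P[m], -int(np.argmax(corr)))
    return P
```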

236 citations


Journal ArticleDOI
Daniel Trad
TL;DR: In this article, a sparseness constraint on the 4D spatial spectrum obtained from frequency slices of five-dimensional windows is proposed to improve the convergence of the inversion algorithm.
Abstract: Although 3D seismic data are being acquired in larger volumes than ever before, the spatial sampling of these volumes is not always adequate for certain seismic processes. This is especially true of marine and land wide-azimuth acquisitions, leading to the development of multidimensional data interpolation techniques. Simultaneous interpolation in all five seismic data dimensions (inline, crossline, offset, azimuth, and frequency) has great utility in predicting missing data with correct amplitude and phase variations. Although there are many techniques that can be implemented in five dimensions, this study focused on sparse Fourier reconstruction. The success of Fourier interpolation methods depends largely on two factors: (1) having efficient Fourier transform operators that permit the use of large multidimensional data windows and (2) constraining the spatial spectrum along dimensions where seismic amplitudes change slowly so that the sparseness and band limitation assumptions remain valid. Fourier reconstruction can be performed by enforcing a sparseness constraint on the 4D spatial spectrum obtained from frequency slices of five-dimensional windows. Binning spatial positions into a fine 4D grid facilitates the use of the FFT, which aids the convergence of the inversion algorithm and improves both the results and the computational efficiency. The 5D interpolation can successfully interpolate sparse data, improve AVO analysis, and reduce migration artifacts. Target geometries for optimal interpolation and regularization of land data can be classified in terms of whether they preserve the original data and whether they are designed to achieve surface or subsurface consistency.
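
The flavor of sparsity-driven Fourier reconstruction can be shown with a 1-D toy; this POCS-style iteration (threshold the spectrum, reinsert the known traces) only shares the FFT-threshold structure with the paper's 4-D sparse inversion and is not its algorithm:

```python
import numpy as np

def fourier_interp_1d(data, mask, iters=100):
    """Reconstruct missing samples by alternately enforcing spectral
    sparseness (hard threshold, relaxed over iterations) and honoring
    the observed samples. `mask` is 1 where a trace was recorded."""
    x = data * mask
    for it in range(iters):
        X = np.fft.fft(x)
        thresh = np.abs(X).max() * (1.0 - (it + 1) / iters)
        X[np.abs(X) < thresh] = 0.0           # keep only strong wavenumbers
        x = np.real(np.fft.ifft(X))
        x = data * mask + x * (1 - mask)      # reinsert known samples
    return x
```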

221 citations


Journal ArticleDOI
TL;DR: The analysis of results from CAPRI, the first community-wide experiment devoted to protein docking, shows that all successful methods consist of multiple stages and that combining computational steps from different methods can improve the reliability and accuracy of results.

203 citations


Journal ArticleDOI
TL;DR: This paper investigates the implementation of the discrete Fourier transform (DFT) in the encrypted domain by using the homomorphic properties of the underlying cryptosystem, and shows that the radix-4 fast Fourier transform is best suited for an encrypted domain implementation in the proposed scenarios.
Abstract: Signal-processing modules working directly on encrypted data provide an elegant solution to application scenarios where valuable signals must be protected from a malicious processing device. In this paper, we investigate the implementation of the discrete Fourier transform (DFT) in the encrypted domain by using the homomorphic properties of the underlying cryptosystem. Several important issues are considered for the direct DFT and for the radix-2 and radix-4 fast Fourier algorithms, including the error analysis and the maximum size of the sequence that can be transformed. We also provide computational complexity analyses and comparisons. The results show that the radix-4 fast Fourier transform is best suited for an encrypted domain implementation in the proposed scenarios.

186 citations


Journal ArticleDOI
TL;DR: This work develops a novel algorithm with split look-up tables (S-LUT) and implements it on a graphics processing unit (GPU) to solve the speed problem of the coherent ray trace (CRT) algorithm and the memory problem of the look-up table (LUT) algorithm without sacrificing reconstructed object quality.
Abstract: In the computation of full-parallax computer-generated holograms (CGHs), the balance between speed and memory usage is always the core of algorithm development. To solve the speed problem of the coherent ray trace (CRT) algorithm and the memory problem of the look-up table (LUT) algorithm without sacrificing reconstructed object quality, we develop a novel algorithm with split look-up tables (S-LUT) and implement it on a graphics processing unit (GPU). Our results show that S-LUT on GPU has the fastest speed among all the algorithms investigated in this paper, while still maintaining low memory usage. We also demonstrate high-quality objects reconstructed from CGHs computed with S-LUT on GPU. The GPU implementation of our new algorithm may enable real-time and interactive holographic 3D display in the future.

173 citations


Posted Content
TL;DR: In this paper, the connection between Orthogonal Optical Codes (OOC) and binary compressed sensing matrices is established; since the RIP is established by means of coherence, simple greedy algorithms such as Matching Pursuit are able to recover the sparse solution from noiseless samples.
Abstract: In this paper we establish the connection between the Orthogonal Optical Codes (OOC) and binary compressed sensing matrices. We also introduce deterministic bipolar $m\times n$ RIP fulfilling $\pm 1$ matrices of order $k$ such that $m\leq\mathcal{O}\big(k (\log_2 n)^{\frac{\log_2 k}{\ln \log_2 k}}\big)$. The columns of these matrices are binary BCH code vectors where the zeros are replaced by -1. Since the RIP is established by means of coherence, the simple greedy algorithms such as Matching Pursuit are able to recover the sparse solution from the noiseless samples. Due to the cyclic property of the BCH codes, we show that the FFT algorithm can be employed in the reconstruction methods to considerably reduce the computational complexity. In addition, we combine the binary and bipolar matrices to form ternary sensing matrices ($\{0,1,-1\}$ elements) that satisfy the RIP condition.
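
The cyclic-shift structure mentioned at the end is what lets the FFT accelerate recovery: if the dictionary columns are circular shifts of one codeword, every correlation step of Matching Pursuit is a single circular cross-correlation. A sketch under that assumption (illustrative, not the paper's code):

```python
import numpy as np

def matching_pursuit_circulant(y, c, k):
    """Matching Pursuit for a dictionary whose n columns are circular
    shifts of the bipolar codeword c: all n inner products <r, shift_j(c)>
    are computed at once via FFTs in O(n log n) per iteration."""
    Fc = np.conj(np.fft.fft(c))
    cn = np.linalg.norm(c)
    r = y.astype(float).copy()
    support, coeffs = [], []
    for _ in range(k):
        corr = np.real(np.fft.ifft(np.fft.fft(r) * Fc))  # all correlations
        j = int(np.argmax(np.abs(corr)))
        atom = np.roll(c, j) / cn
        a = atom @ r
        r -= a * atom                                    # deflate residual
        support.append(j)
        coeffs.append(a)
    return support, coeffs, r
```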

156 citations


Journal ArticleDOI
TL;DR: The background and some of the striking early development of OFDM are described, with an explanation of the motivations for using it.
Abstract: Orthogonal frequency-division multiplexing (OFDM) is one of those ideas that had been building for a very long time, and became a practical reality when the appearance of mass market applications coincided with the availability of efficient software and electronic technologies. This article describes the background and some of the striking early development of OFDM, with an explanation of the motivations for using it. The author presumes a broad definition of OFDM as frequency-division multiplexing (FDM) in which subchannels overlap without interfering. It does not necessarily require the discrete Fourier transform (DFT) or its fast Fourier transform (FFT) computational method.

Proceedings ArticleDOI
14 Nov 2009
TL;DR: The resulting autotuner is fast and yields performance that essentially beats all 3-D FFT implementations on a single processor to date; moreover, it exhibits stable performance irrespective of problem size or the underlying GPU hardware.
Abstract: Existing implementations of FFTs on GPUs are optimized for specific transform sizes like powers of two, and exhibit unstable and peaky performance, i.e., they do not perform well for other sizes that appear in practice. Our new auto-tuning 3-D FFT on CUDA generates high-performance CUDA kernels for FFTs of varying transform sizes, alleviating this problem. Although auto-tuning has been implemented on GPUs for dense kernels such as DGEMM and stencils, this is the first instance in which it has been applied comprehensively to bandwidth-intensive and complex kernels such as 3-D FFTs. Bandwidth-intensive optimizations such as selecting the number of threads and inserting padding to avoid bank conflicts on shared memory are systematically applied. The resulting autotuner is fast and yields performance that essentially beats all 3-D FFT implementations on a single processor to date; moreover, it exhibits stable performance irrespective of problem size or the underlying GPU hardware.

Journal ArticleDOI
TL;DR: In this paper, the authors presented an investigation of the diesel engine combustion-related fault-detection capability of crankshaft torsional vibration using the instantaneous angular speed (IAS) waveform.

Journal ArticleDOI
TL;DR: In this article, an all-digital telescope for 21 cm tomography was proposed, which combines key advantages of both single dishes and interferometers, translating into dramatically better sensitivity for large-area surveys.
Abstract: We propose an all-digital telescope for 21 cm tomography, which combines key advantages of both single dishes and interferometers. The electric field is digitized by antennas on a rectangular grid, after which a series of fast Fourier transforms recovers simultaneous multifrequency images of up to half the sky. Thanks to Moore's law, the bandwidth up to which this is feasible has now reached about 1 GHz, and will likely continue doubling every couple of years. The main advantages over a single dish telescope are cost and orders of magnitude larger field-of-view, translating into dramatically better sensitivity for large-area surveys. The key advantages over traditional interferometers are cost (the correlator computational cost for an $N$-element array scales as $N\log_2 N$ rather than $N^2$) and a compact synthesized beam. We argue that 21 cm tomography could be an ideal first application of a very large fast Fourier transform telescope, which would provide both massive sensitivity improvements per dollar and mitigate the off-beam point source foreground problem with its clean beam. Another potentially interesting application is cosmic microwave background polarization.
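
A toy snapshot imager illustrates the scaling claim (a deliberately simplified, narrowband, flat-sky sketch, not the proposed instrument's pipeline):

```python
import numpy as np

def fft_telescope_snapshot(E, wavelength, spacing):
    """E holds one complex narrowband sample of the digitized electric
    field per antenna on an Nx-by-Ny grid. A single 2-D FFT forms all
    beams at once, so beamforming costs O(N log N) per snapshot instead
    of the O(N^2) pairwise correlations of a conventional correlator;
    averaging |beams|^2 over snapshots builds the sky image."""
    beams = np.fft.fftshift(np.fft.fft2(E))
    image = np.abs(beams) ** 2
    # direction cosines sampled by the grid (physical beams need |l| <= 1)
    l = np.fft.fftshift(np.fft.fftfreq(E.shape[0], d=spacing)) * wavelength
    return l, image
```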

Journal ArticleDOI
TL;DR: In this article, the authors compare finite element and fast Fourier transform approaches for the prediction of the micromechanical behavior of polycrystals, using the same visco-plastic single crystal constitutive law.
Abstract: In this work, we compare finite element and fast Fourier transform approaches for the prediction of the micromechanical behavior of polycrystals. Both approaches are full-field approaches and use the same visco-plastic single crystal constitutive law. We investigate the texture and the heterogeneity of the inter- and intragranular stress and strain fields obtained from the two models. Additionally, we also look into their computational performance. Two cases—rolling of aluminum and wire drawing of tungsten—are used to evaluate the predictions of the two models. Results from both the models are similar, when large grain distortions do not occur in the polycrystal. The finite element simulations were found to be highly computationally intensive, in comparison with the fast Fourier transform simulations.

Journal ArticleDOI
TL;DR: The proposed architecture takes advantage of the reduced number of operations of the RFFT with respect to the complex fast Fourier transform (CFFT), and requires less area while achieving higher throughput and lower latency.
Abstract: This paper presents a new pipelined hardware architecture for the computation of the real-valued fast Fourier transform (RFFT). The proposed architecture takes advantage of the reduced number of operations of the RFFT with respect to the complex fast Fourier transform (CFFT), and requires less area while achieving higher throughput and lower latency. The architecture is based on a novel algorithm for the computation of the RFFT, which, contrary to previous approaches, presents a regular geometry suitable for the implementation of hardware structures. Moreover, the algorithm can be used for both the decimation in time (DIT) and decimation in frequency (DIF) decompositions of the RFFT and requires the lowest number of operations reported for radix 2. Finally, as in previous works, when calculating the RFFT the output samples are obtained in a scrambled order. The problem of reordering these samples is solved in this paper and a pipelined circuit that performs this reordering is proposed.

Journal ArticleDOI
TL;DR: In this article, a Taylor expansion of the trigonometric functions is used to estimate the Fourier power spectrum of a periodic point distribution that is a local Poisson realization of an underlying stationary field, and an analytic expression for the spectrum is derived to quantify the biases induced by discreteness and truncation of the Taylor expansion, and to bound the unknown effects of aliasing of the power spectrum.
Abstract: A method to rapidly estimate the Fourier power spectrum of a point distribution is presented. This method relies on a Taylor expansion of the trigonometric functions. It yields the Fourier modes from a number of fast Fourier transforms (FFTs), which is controlled by the order N of the expansion and by the dimension D of the system. In three dimensions, for the practical value N = 3, the number of FFTs required is 20. We apply the method to the measurement of the power spectrum of a periodic point distribution that is a local Poisson realization of an underlying stationary field. We derive an explicit analytic expression for the spectrum, which allows us to quantify – and correct for – the biases induced by discreteness and by the truncation of the Taylor expansion, and to bound the unknown effects of aliasing of the power spectrum. We show that these aliasing effects decrease rapidly with the order N. For N = 3, they are expected to be, respectively, smaller than $\sim 10^{-4}$ and 0.02 at half the Nyquist frequency and at the Nyquist frequency of the grid used to perform the FFTs. The only remaining significant source of errors is reduced to the unavoidable cosmic/sample variance due to the finite size of the sample. The analytical calculations are successfully checked against a cosmological N-body experiment. We also consider the initial conditions of this simulation, which correspond to a perturbed grid. This allows us to test a case where the local Poisson assumption is incorrect. Even in that extreme situation, the third-order Fourier–Taylor estimator behaves well, with aliasing effects restrained to at most the per cent level at half the Nyquist frequency. We also show how to reach arbitrarily large dynamic range in Fourier space (i.e. high wavenumber), while keeping statistical errors in control, by appropriately ‘folding’ the particle distribution.
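
For orientation, the lowest-order limit of such an estimator is just nearest-grid-point mass assignment followed by one FFT; the sketch below shows that baseline (normalization conventions vary, and the paper's higher orders add more FFTs precisely to suppress the aliasing this simple version suffers from):

```python
import numpy as np

def power_spectrum_ngp(positions, box, ngrid):
    """Grid a 3-D point set with nearest-grid-point assignment, FFT the
    density contrast, and return the raw 3-D power. Shot noise (box^3/n
    in this convention, n = number of points) should be subtracted, and
    P(k) is then obtained by averaging over |k| shells."""
    idx = np.floor(positions / box * ngrid).astype(int) % ngrid
    delta = np.zeros((ngrid,) * 3)
    np.add.at(delta, tuple(idx.T), 1.0)       # NGP mass assignment
    delta = delta / delta.mean() - 1.0        # density contrast
    dk = np.fft.rfftn(delta) * (box / ngrid) ** 3
    return np.abs(dk) ** 2 / box ** 3
```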

Journal ArticleDOI
TL;DR: This work describes three sampling regimes for FFT-based propagation approaches (ideally sampled, oversampled, and undersampled) and shows the form of the sampled chirp functions and their discrete transforms.
Abstract: Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
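
The transfer-function (angular spectrum) method itself is compact; a minimal monochromatic sketch under the Fresnel approximation (grid and wavelength parameters are illustrative, and the paper's sampling analysis governs how dx, N, wavelength, and z may be chosen together):

```python
import numpy as np

def fresnel_transfer_function(u1, wavelength, dx, z):
    """Propagate field u1 (N x N, sample spacing dx) a distance z:
    FFT, multiply by the Fresnel transfer function
    H = exp(i*k*z) * exp(-i*pi*lambda*z*(fx^2 + fy^2)), inverse FFT.
    The quadratic-phase chirp in H is critically sampled only near
    dx = sqrt(lambda * z / N), the regime the paper calls ideal."""
    N = u1.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    u2 = np.fft.ifft2(np.fft.fft2(u1) * H)
    return u2 * np.exp(2j * np.pi * z / wavelength)  # constant propagation phase
```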

Journal ArticleDOI
TL;DR: As an approach using global features, the FFT system possesses many advantages; it shows promise both as a stand-alone system and especially in combination with approaches that are based on local features.
Abstract: We present a novel online signature verification system based on the Fast Fourier Transform. The advantage of using the Fourier domain is the ability to compactly represent an online signature using a fixed number of coefficients. The fixed-length representation leads to fast matching algorithms and is essential in certain applications. The challenge on the other hand is to find the right preprocessing steps and matching algorithm for this representation. We report on the effectiveness of the proposed method, along with the effects of individual preprocessing and normalization steps, based on comprehensive tests over two public signature databases. We also propose to use the pen-up duration information in identifying forgeries. The best results obtained on the SUSIG-Visual subcorpus and the MCYT-100 database are 6.2% and 12.1% error rate on skilled forgeries, respectively. The fusion of the proposed system with our state-of-the-art Dynamic Time Warping (DTW) system lowers the error rate of the DTW system by up to about 25%. While the current error rates are higher than state-of-the-art results for these databases, as an approach using global features, the system possesses many advantages. Considering also the suggested improvements, the FFT system shows promise both as a stand-alone system and especially in combination with approaches that are based on local features.

Journal ArticleDOI
TL;DR: A novel adaptable accurate way for calculating polar FFT and log-polar FFT is developed in this paper, named multilayer fractional Fourier transform (MLFFT), which provides a mechanism to increase the accuracy by increasing the user-defined computing level.
Abstract: A novel adaptable accurate way for calculating the polar FFT and log-polar FFT is developed in this paper, named the multilayer fractional Fourier transform (MLFFT). MLFFT is a necessary addition to the pseudo-polar FFT for the following reasons: it has lower interpolation errors in both polar and log-polar Fourier transforms; it reaches better accuracy with nearly the same computing complexity as the pseudo-polar FFT; and it provides a mechanism to increase the accuracy by increasing the user-defined computing level. This paper demonstrates both MLFFT itself and its advantages theoretically and experimentally. By emphasizing applications of MLFFT in image registration with rotation and scaling, our experiments suggest two major advantages of MLFFT: 1) scalings up to 5 with arbitrary rotation angles, or scalings up to 10 without rotation, can be recovered by MLFFT, whereas the maximum scaling recovered by the current state-of-the-art algorithms is 4; 2) no iteration is needed by MLFFT to obtain large rotation and scaling values of images, hence it is more efficient than the pseudo-polar-based FFT methods for image registration.
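
For context, the final matching step in Fourier-based registration is a cross-power-spectrum peak search; the sketch below is the standard translation-only phase correlation (MLFFT's contribution is the accurate polar/log-polar resampling that first converts rotation and scale into such shifts):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation between images a and b from the
    peak of the inverse FFT of the normalized cross-power spectrum."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = np.abs(np.fft.ifft2(F / (np.abs(F) + 1e-12)))
    dy, dx = np.unravel_index(int(np.argmax(r)), r.shape)
    if dy > a.shape[0] // 2:   # map wrapped indices to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```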

Journal ArticleDOI
TL;DR: This article gives an overview on the techniques needed to implement the discrete Fourier transform (DFT) efficiently on current multicore systems and shows and analyzes DFT benchmarks of the fastest libraries available for the considered platforms.
Abstract: This article gives an overview on the techniques needed to implement the discrete Fourier transform (DFT) efficiently on current multicore systems. The focus is on Intel-compatible multicores, but we also discuss the IBM Cell and, briefly, graphics processing units (GPUs). The performance optimization is broken down into three key challenges: parallelization, vectorization, and memory hierarchy optimization. In each case, we use the Kronecker product formalism to formally derive the necessary algorithmic transformations based on a few hardware parameters. Further code-level optimizations are discussed. The rigorous nature of this framework enables the complete automation of the implementation task as shown by the program generator Spiral. Finally, we show and analyze DFT benchmarks of the fastest libraries available for the considered platforms.

Journal ArticleDOI
TL;DR: In this article, a new approach for the synthesis of thinned periodic planar arrays featuring a minimum sidelobe level is presented, which is based on the iterative Fourier technique to derive the array element excitations from the prescribed array factor using successive forward and backward Fourier transforms.
Abstract: A new approach for the synthesis of thinned periodic planar arrays featuring a minimum sidelobe level is presented. The method is based on the iterative Fourier technique to derive the array element excitations from the prescribed array factor using successive forward and backward Fourier transforms. Array thinning is accomplished by setting the amplitudes of a predetermined number of largest element excitations to unity and the others to zero during each iteration cycle. Basically it is the same method successfully applied earlier to the thinning of periodic linear arrays. The effectiveness of the iterative Fourier technique for thinning periodic planar arrays will be demonstrated for a number of large arrays (> 1500 element positions) with a circular aperture using various degrees of thinning.
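
A 1-D rendition of the iteration conveys the idea (the paper works on planar arrays with 2-D FFTs; the main-beam width and sidelobe handling below are simplified assumptions):

```python
import numpy as np

def thin_linear_array(n_elem, n_on, sll_db=-20.0, iters=50, nfft=4096):
    """Iterative Fourier thinning sketch: FFT the excitations to get the
    array factor, clip sidelobe samples that exceed the prescribed level,
    inverse FFT back, then keep the n_on largest excitations as 1 and
    set the rest to 0."""
    w = np.ones(n_elem)
    guard = int(np.ceil(2.0 * nfft / n_elem))    # crude main-beam half-width
    for _ in range(iters):
        af = np.fft.fft(w, nfft)
        mag, phase = np.abs(af), np.angle(af)
        limit = mag[0] * 10.0 ** (sll_db / 20.0)
        side = np.ones(nfft, dtype=bool)
        side[:guard] = False                     # protect the main beam
        side[-guard:] = False
        mag[side & (mag > limit)] = limit        # clip offending sidelobes
        w = np.real(np.fft.ifft(mag * np.exp(1j * phase)))[:n_elem]
        on = np.argsort(np.abs(w))[-n_on:]       # largest excitations -> 1
        w = np.zeros(n_elem)
        w[on] = 1.0
    return w
```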

Journal ArticleDOI
TL;DR: In this article, a fast method for solving the problem of 3D arbitrarily shaped inclusions in an isotropic half space is presented, which utilizes the closed-form solution for a cuboidal inclusion in an infinite space by breaking up the arbitrarily-shaped inclusions into multiple cuboids.

Journal ArticleDOI
Paul Embrechts, Marco Frei
TL;DR: A survey of recursive methods as well as transform-based techniques used for the numerical evaluation of compound distributions in insurance mathematics and quantitative risk management is given.
Abstract: Numerical evaluation of compound distributions is an important task in insurance mathematics and quantitative risk management. In practice, both recursive methods as well as transform based techniques are widely used. We give a survey of these tools, point out the respective merits and provide some numerical examples.
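
One of the surveyed transform techniques is easy to state end to end: for a compound Poisson sum, the DFT of the aggregate-loss distribution is the Poisson probability generating function applied to the DFT of the severity distribution. A minimal sketch (the grid length is the user's aliasing/accuracy trade-off):

```python
import numpy as np

def compound_poisson_fft(severity_pmf, lam, n=2**16):
    """Aggregate loss S = X_1 + ... + X_N with N ~ Poisson(lam) and i.i.d.
    lattice severities: transform the severity pmf, apply the Poisson PGF
    pointwise, transform back. The grid of length n must extend past
    where the aggregate tail has decayed, or wrap-around error appears."""
    p = np.zeros(n)
    p[: len(severity_pmf)] = severity_pmf
    s = np.real(np.fft.ifft(np.exp(lam * (np.fft.fft(p) - 1.0))))
    return np.clip(s, 0.0, None)

# example: Poisson(3) claim count, severity uniform on {1, ..., 5}
agg = compound_poisson_fft(np.r_[0.0, np.full(5, 0.2)], lam=3.0)
```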

Book ChapterDOI
01 Jan 2009
TL;DR: Orthogonal frequency division multiplexing is a multicarrier transport technology for high-data-rate communication systems, based on spreading the high-speed data to be transmitted over a large number of low-rate carriers.
Abstract: Orthogonal frequency division multiplexing (OFDM) is a multicarrier transport technology for high-data-rate communication systems. The OFDM concept is based on spreading the high-speed data to be transmitted over a large number of low-rate carriers. The carriers are orthogonal to each other, and the frequency spacing between them is created by using the fast Fourier transform (FFT).
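
The FFT's role can be shown in a few lines; this is a generic textbook modulator sketch (subcarrier count and cyclic-prefix length are illustrative, not from the chapter):

```python
import numpy as np

def ofdm_modulate(symbols, cp_len=16):
    """Place one complex (e.g., QAM) symbol on each orthogonal subcarrier
    and synthesize the time-domain block with an inverse FFT; the cyclic
    prefix absorbs channel delay spread."""
    n = len(symbols)
    block = np.fft.ifft(symbols) * np.sqrt(n)
    return np.concatenate([block[-cp_len:], block])

def ofdm_demodulate(rx, n_fft=64, cp_len=16):
    """Drop the prefix and FFT back to per-subcarrier symbols."""
    return np.fft.fft(rx[cp_len:cp_len + n_fft]) / np.sqrt(n_fft)
```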

Journal ArticleDOI
TL;DR: In this paper, a wavelet packet transform (WPT) is used to represent EPS waveforms in a time-frequency domain; phase and overall crest factors are redefined in the time-frequency domain using the WPT, and a new crest factor is introduced.
Abstract: Three-phase power-quality indices (PQIs) can be used to quantify and hence evaluate the quality of electric power system (EPS) waveforms. The recommended PQIs are defined based on the fast Fourier transform (FFT), which can only provide accurate results for stationary waveforms; for nonstationary waveforms, even under sinusoidal operating conditions, the FFT produces large errors due to the spectral leakage phenomenon. Moreover, the FFT can provide only an amplitude-frequency spectrum and is incapable of providing any time-related information, which is required when dealing with time-evolving waveforms. Since the wavelet packet transform (WPT), a generalization of the wavelet transform, can represent EPS waveforms in a time-frequency domain, it is used in this study to define and formulate three-phase PQIs. In order to handle the unbalanced three-phase case, the concept of equivalent voltage and current is used to calculate those indices. The results of four numerical examples, considering stationary and nonstationary, balanced and unbalanced three-phase systems in sinusoidal and nonsinusoidal situations, indicate that the new WPT-based PQIs are closer to the true values. In addition, phase and overall crest factors are redefined in the time-frequency domain using WPT, and a new crest factor is introduced in this paper. The redefined crest factors and the new crest factor help identify and quantify the waveform impact based on the time-frequency information obtained from the WPT. The new crest factor can only be determined via the WPT, which demonstrates the power of this method and its suitability for defining three-phase PQIs under nonstationary operating conditions.

Journal ArticleDOI
TL;DR: An efficient method for computing the discrete orthonormal Stockwell transform (DOST) is presented; the computational complexity of the algorithm is $\mathcal{O}(N\log N)$, putting it in the same category as the FFT.
Abstract: We present an efficient method for computing the discrete orthonormal Stockwell transform (DOST). The Stockwell transform (ST) is a time-frequency decomposition transform that is showing great promise in various applications, but is limited because its computation is infeasible for most applications. The DOST is a nonredundant version of the ST, solving many of the memory and computational issues. However, computing the DOST of a signal of length $N$ using basis vectors is still $\mathcal{O}(N^2)$. The computational complexity of our method is $\mathcal{O}(N\log N)$, putting it in the same category as the FFT. The algorithm is based on a simple decomposition of the DOST matrix. We also explore how to obtain conjugate symmetry for the DOST and propose a variation of the parameters that exhibits symmetry, akin to the conjugate symmetry of the FFT of a real-valued signal. Our fast method works equally well on this symmetric DOST. In this paper, we provide a mathematical proof of our results and show that the computational complexity of our algorithm is $\mathcal{O}(N\log N)$. Timing tests also confirm that the new method is orders of magnitude faster than the brute-force DOST, and they demonstrate that our fast DOST is indeed $\mathcal{O}(N\log N)$ in complexity.
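
The flavor of an FFT-based fast DOST can be sketched as follows; this simplified version handles only the positive-frequency dyadic bands with standard scalings, whereas the paper's matrix decomposition treats the full transform and its symmetric variant:

```python
import numpy as np

def fast_dost_sketch(x):
    """For a length-2^K signal: FFT once, split the spectrum into dyadic
    bands [2^(p-1), 2^p), and inverse-FFT each band at its own length
    with net 1/sqrt(bandwidth) scaling. Each band costs O(B log B), so
    the total is O(N log N) rather than the O(N^2) of explicit basis
    vectors."""
    N = len(x)
    X = np.fft.fft(x) / np.sqrt(N)
    coeffs = [X[0:1], X[1:2]]        # DC and the first band pass through
    lo = 2
    while lo <= N // 4:
        band = X[lo:2 * lo]
        coeffs.append(np.fft.ifft(band) * np.sqrt(lo))  # unitary band IDFT
        lo *= 2
    return coeffs
```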

Journal ArticleDOI
TL;DR: In this paper, nonlinear structure formation in brane-induced gravity was studied using $N$-body simulations, and it was shown that the Vainshtein mechanism does operate as anticipated, with the density power spectrum approaching that of standard gravity within a modified background evolution in the nonlinear regime.
Abstract: We use $N$-body simulations to study the nonlinear structure formation in brane-induced gravity, developing a new method that requires alternate use of fast Fourier transforms and relaxation. This enables us to compute the nonlinear matter power spectrum and bispectrum, the halo mass function, and the halo bias. From the simulation results, we confirm the expectations based on analytic arguments that the Vainshtein mechanism does operate as anticipated, with the density power spectrum approaching that of standard gravity within a modified background evolution in the nonlinear regime. The transition is very broad and there is no well defined Vainshtein scale, but roughly this corresponds to $k_* \simeq 2\,h\,\mathrm{Mpc}^{-1}$ at redshift $z=1$ and $k_* \simeq 1\,h\,\mathrm{Mpc}^{-1}$ at $z=0$. We checked that while extrinsic curvature fluctuations go nonlinear, and the dynamics of the brane-bending mode $C$ receives important nonlinear corrections, this mode does get suppressed compared to density perturbations, effectively decoupling from the standard gravity sector. At the same time, there is no violation of the weak field limit for metric perturbations associated with $C$. We find good agreement between our measurements and the predictions for the nonlinear power spectrum presented in paper I, that rely on a renormalization of the linear spectrum due to nonlinearities in the modified gravity sector. A similar prediction for the mass function shows the right trends. Our simulations also confirm the induced change in the bispectrum configuration dependence predicted in paper I.

Journal ArticleDOI
01 Aug 2009
TL;DR: Experimental results show that the SGW-based edge-detection algorithm can achieve a similar performance level to that using GWs, while the runtime required for feature extraction using SGWs is shorter than that using GWs with the fast Fourier transform.
Abstract: Gabor wavelets (GWs) have been commonly used for extracting local features for various applications, such as recognition, tracking, and edge detection. However, extracting the Gabor features is computationally intensive, so the features may be impractical for real-time applications. In this paper, we propose a simplified version of GWs (SGWs) and an efficient algorithm for extracting the features for edge detection. Experimental results show that our SGW-based edge-detection algorithm can achieve a similar performance level to that using GWs, while the runtime required for feature extraction using SGWs is shorter than that using GWs with the fast Fourier transform. When compared to the Canny and other conventional edge-detection methods, our proposed method can achieve a better performance in terms of detection accuracy and computational complexity.
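
As a baseline for the comparison, FFT-based filtering with a full Gabor wavelet looks like this (a single orientation/frequency channel; the SGWs replace the kernel below with cheaper simplified masks):

```python
import numpy as np

def gabor_filter_fft(img, freq=0.2, theta=0.0, sigma=4.0):
    """Convolve an image with one complex Gabor wavelet via the FFT:
    O(N log N) per channel regardless of kernel size. Edge maps are
    typically derived from the magnitude of the complex response."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    y, x = y - h // 2, x - w // 2
    xr = x * np.cos(theta) + y * np.sin(theta)
    kernel = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.exp(2j * np.pi * freq * xr)
    K = np.fft.fft2(np.fft.ifftshift(kernel))     # center kernel at (0, 0)
    return np.fft.ifft2(np.fft.fft2(img) * K)
```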

Journal ArticleDOI
TL;DR: A real-time display of processed OCT images is demonstrated using a linear-in-wave-number (linear-k) spectrometer, which avoids the resampling step, and a graphics processing unit (GPU).
Abstract: Fourier domain optical coherence tomography (FD-OCT) requires resampling of spectrally resolved depth information from wavelength to wave number, and the subsequent application of the inverse Fourier transform. The display rates of OCT images are much slower than the image acquisition rates due to processing speed limitations on most computers. We demonstrate a real-time display of processed OCT images using a linear-in-wave-number (linear-k) spectrometer and a graphics processing unit (GPU). We use a linear-k spectrometer that combines a diffraction grating with 1200 lines/mm and an F2 equilateral prism in the 840-nm spectral region, which avoids computing the resampling step. The calculations of the fast Fourier transform (FFT) are accelerated by the GPU with many stream processors, which realizes highly parallel processing. A display rate of 27.9 frames/sec for processed images (2048 FFT size × 1000 lateral A-scans) is achieved in our OCT system using a line scan CCD camera operated at 27.9 kHz.
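
The per-frame computation being offloaded to the GPU is essentially a batched inverse FFT; a CPU-side NumPy sketch of the reconstruction (background handling and log scaling are generic choices, not the paper's exact pipeline):

```python
import numpy as np

def fdoct_bscan(spectra, n_fft=2048):
    """`spectra` has one linear-in-k spectrum per column (lateral A-scan
    position). With a linear-k spectrometer no wavelength-to-wavenumber
    resampling is needed: subtract the background, inverse FFT along the
    spectral axis, keep positive depths, and display in dB."""
    fringes = spectra - spectra.mean(axis=1, keepdims=True)
    ascans = np.fft.ifft(fringes, n=n_fft, axis=0)
    return 20.0 * np.log10(np.abs(ascans[: n_fft // 2]) + 1e-12)
```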