
Showing papers on "Fast Fourier transform published in 1990"


Journal ArticleDOI
TL;DR: Several recently proposed algorithms for improving the voice quality of text-to-speech synthesis based on the concatenation of acoustic units are reviewed in a common framework; all are based on the pitch-synchronous overlap-add approach.

1,438 citations


Journal ArticleDOI
TL;DR: Note: V. Madisetti, D. B. Williams, Eds.

862 citations


Book ChapterDOI
01 Jan 1990
TL;DR: This chapter reviews fast algorithms for efficient implementation of the discrete cosine transform (DCT); while the original DCT algorithm is based on the FFT, the real-arithmetic, recursive algorithm developed by Chen, Smith, and Fralick in 1977 was the major breakthrough.
Abstract: Publisher Summary This chapter presents the discrete cosine transform. The development of fast algorithms for efficient implementation of the discrete Fourier transform (DFT) by Cooley and Tukey in 1965 has led to phenomenal growth in its applications in digital signal processing (DSP). The discovery of the discrete cosine transform (DCT) in 1974 has provided a significant impact in the DSP field. While the original DCT algorithm is based on the FFT, a real arithmetic and recursive algorithm, developed by Chen, Smith, and Fralick in 1977, was the major breakthrough in the efficient implementation of the DCT. A less well-known but equally efficient algorithm was developed by Corrington. Subsequently, other algorithms, such as the decimation-in-time (DIT), decimation-in-frequency (DIF), split radix, DCT via other discrete transforms such as the discrete Hartley transform (DHT) or the Walsh-Hadamard transform (WHT), prime factor algorithm (PFA), a fast recursive algorithm, and planar rotations, which concentrate on reducing the computational complexity and/or improving the structural simplicity, have been developed. The dramatic development of DCT-based DSP is by no means an accident.
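The FFT-based route to the DCT that the chapter mentions can be sketched numerically: the N-point DCT-II (even N) is obtained from a single N-point FFT of a reordered sequence. This is a minimal illustration using Makhoul's reordering; the function names are illustrative, not from the chapter.

```python
import numpy as np

def dct2_direct(x):
    """DCT-II by its defining cosine sum, O(N^2)."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(N)])

def dct2_via_fft(x):
    """Same DCT-II from one N-point FFT of a reordered sequence (even N)."""
    N = len(x)
    v = np.empty(N)
    v[:N // 2] = x[0::2]      # even-indexed samples, in order
    v[N // 2:] = x[::-2]      # odd-indexed samples, reversed
    k = np.arange(N)
    return np.real(np.exp(-1j * np.pi * k / (2 * N)) * np.fft.fft(v))
```

The reordering folds the half-sample shift of the DCT basis into a single complex twiddle per output bin, which is why one real FFT suffices.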

382 citations


Journal ArticleDOI
TL;DR: The fast Fourier transform (FFT) technique is a very powerful tool for the efficient evaluation of gravity field convolution integrals; it can handle heterogeneous and noisy data, and thus presents a very attractive alternative to the classical, time-consuming approaches, provided gridded data are available.
Abstract: SUMMARY The fast Fourier transform (FFT) technique is a very powerful tool for the efficient evaluation of gravity field convolution integrals. It can handle heterogeneous and noisy data, and thus presents a very attractive alternative to the classical, time consuming approaches, provided gridded data are available. This paper reviews the mathematics of the FFT methods as well as their practical problems, and presents examples from physical geodesy where the application of these methods is especially advantageous. The spectral evaluation of Stokes’, Vening Meinesz’ and Molodensky’s integrals, least-squares collocation in the frequency domain, integrals for terrain reductions and for airborne gravity gradiometry, and the computation of covariance and power spectral density functions are treated in detail. Numerical examples illustrate the efficiency and accuracy of the FFT methods. Key words: FFT, physical geodesy, spectral methods. 1 INTRODUCTION Physical geodesy is the branch of geodesy which uses measured gradients of the anomalous gravity potential T to determine a unique and coherent representation of the terrestrial gravity field at the Earth’s surface and in outer space. The anomalous potential T is the difference between the actual gravity potential of the Earth and the reference potential of an ellipsoid with the same mass, flattening, and angular rotation rate as the Earth. An approximation of T is needed to model geodetic measurements, to predict perturbations of satellite orbits, to determine global ocean circulation patterns, to assist global geophysics, and to support oil and mineral exploration. In recent years, the amount of data available for the solution of this problem has increased dramatically, both in quantity and in type. This has made the data processing problems more severe and has created a demand for efficient numerical solutions. Since much of the data is available in gridded form, the use of fast spectral techniques was clearly appropriate. Progress in the application of these methods to geodetic problems has been rapid during the last three years and it is almost certain that, because of their efficiency and accuracy, they will become standard procedures for a number of applications. However, it has also become clear that geodetic and, more generally, geophysical data often present specific problems not usually encountered in typical electrical engineering applications. The problems are with the heterogeneity of the data, the complicated surface on which they are given, the uneven spatial distribution, and the non-uniformity of the data noise. This paper will discuss the use of
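The core trick behind such spectral methods, replacing a convolution integral by pointwise products of transforms, can be sketched in one dimension. This is a minimal generic illustration, not the geodetic formulas themselves; zero-padding avoids the circular wrap-around of the discrete transform.

```python
import numpy as np

def fft_convolve(f, g):
    """Linear convolution of two sequences via the FFT.

    Zero-padding to length len(f)+len(g)-1 prevents circular
    wrap-around, so the result matches direct linear convolution."""
    n = len(f) + len(g) - 1
    F = np.fft.rfft(f, n)
    G = np.fft.rfft(g, n)
    return np.fft.irfft(F * G, n)
```

For gridded data of size N the cost drops from O(N^2) for direct evaluation to O(N log N), which is the efficiency gain the paper exploits.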

300 citations


Journal ArticleDOI
TL;DR: Advanced techniques for computing an ordered FFT on a computer with external or hierarchical memory that require as few as two passes through the external data set, employ strictly unit stride, long vector transfers between main memory and external storage, and are well suited for vector and parallel computation are described.
Abstract: Conventional algorithms for computing large one-dimensional fast Fourier transforms (FFTs), even those algorithms recently developed for vector and parallel computers, are largely unsuitable for systems with external or hierarchical memory. The principal reason for this is the fact that most FFT algorithms require at least m complete passes through the data set to compute a 2 m -point FFT. This paper describes some advanced techniques for computing an ordered FFT on a computer with external or hierarchical memory. These algorithms (1) require as few as two passes through the external data set, (2) employ strictly unit stride, long vector transfers between main memory and external storage, (3) require only a modest amount of scratch space in main memory, and (4) are well suited for vector and parallel computation. Performance figures are included for implementations of some of these algorithms on Cray supercomputers. Of interest is the fact that a main memory version outperforms the current Cray library FFT routines on the CRAY-2, the CRAY X-MP, and the CRAY Y-MP systems. Using all eight processors on the CRAY Y-MP, this main memory routine runs at nearly two gigaflops.
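The two-pass idea behind such out-of-core algorithms can be sketched with the classic "four-step" factorisation: an N = N1*N2-point FFT becomes small FFTs along each axis of an N2-by-N1 array, a twiddle multiplication, and a transpose. This is a minimal in-memory sketch of the access pattern, not the authors' Cray implementation.

```python
import numpy as np

def four_step_fft(x, n1, n2):
    """FFT of length n1*n2 as two passes of short FFTs.

    Pass 1 transforms columns, pass 2 transforms rows; between
    the passes each element is scaled by a twiddle factor."""
    a = x.reshape(n2, n1)                      # a[m2, m1] = x[m2*n1 + m1]
    b = np.fft.fft(a, axis=0)                  # pass 1: n1 FFTs of length n2
    k2 = np.arange(n2)[:, None]
    m1 = np.arange(n1)[None, :]
    b = b * np.exp(-2j * np.pi * k2 * m1 / (n1 * n2))   # twiddle factors
    c = np.fft.fft(b, axis=1)                  # pass 2: n2 FFTs of length n1
    return c.T.reshape(n1 * n2)                # transpose gives natural order
```

On an external-memory machine each pass streams the data once with unit stride, which is exactly the property the paper's algorithms are designed around.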

247 citations


Journal ArticleDOI
TL;DR: The discrete wavelet transform can be implemented in VLSI more efficiently than the FFT, and a single chip implementation is described.
Abstract: The wavelet transform is a very effective signal analysis tool for many problems for which Fourier based methods have been inapplicable, expensive for real-time applications, or can only be applied with difficulty. The discrete wavelet transform can be implemented in VLSI more efficiently than the FFT. A single chip implementation is described.
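The efficiency claim is easy to see in code: one level of the discrete wavelet transform needs only O(N) operations (one butterfly per sample pair), versus O(N log N) for the FFT. A minimal sketch using the Haar wavelet; the paper does not specify this wavelet, it is chosen here purely for illustration.

```python
import numpy as np

def haar_level(x):
    """One DWT level: normalized averages (approximation) and
    differences (detail); O(N) work for N samples."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_level_inv(a, d):
    """Perfect reconstruction of the signal from one DWT level."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x
```

The strictly local, feed-forward data flow (each output depends on two adjacent inputs) is what makes the DWT attractive for a single-chip VLSI datapath.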

198 citations


Journal ArticleDOI
TL;DR: A functional-level concurrent error-detection scheme is presented for such VLSI signal processing architectures as those proposed for the FFT and QR factorization, and it is shown that the error coverage is high with large word sizes.
Abstract: The increasing demands for high-performance signal processing along with the availability of inexpensive high-performance processors have resulted in numerous proposals for special-purpose array processors for signal processing applications. A functional-level concurrent error-detection scheme is presented for such VLSI signal processing architectures as those proposed for the FFT and QR factorization. Some basic properties involved in such computations are used to check the correctness of the computed output values. This fault-detection scheme is shown to be applicable to a class of problems rather than a particular problem, unlike the earlier algorithm-based error-detection techniques. The effects of roundoff/truncation errors due to finite-precision arithmetic are evaluated. It is shown that the error coverage is high with large word sizes.
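The flavour of such functional-level checks can be illustrated with a Parseval-style invariant of the FFT: input and output energies must agree up to roundoff, so a cheap comparison flags a corrupted output. This is an illustrative check in the same spirit as the paper, not the authors' exact scheme, and the tolerance below is an arbitrary placeholder.

```python
import numpy as np

def parseval_ok(x, X, tol=1e-10):
    """Concurrent check: time- and frequency-domain energies must
    agree (Parseval), up to a roundoff-dependent tolerance."""
    e_time = np.sum(np.abs(x) ** 2)
    e_freq = np.sum(np.abs(X) ** 2) / len(x)
    return abs(e_time - e_freq) <= tol * max(e_time, 1.0)

def fft_with_check(x):
    """Compute an FFT and flag outputs that violate the invariant."""
    X = np.fft.fft(x)
    return X, parseval_ok(x, X)
```

Note the limitation, which mirrors the coverage discussion in the paper: an energy check cannot catch an error that happens to preserve total energy, and the tolerance must be chosen against the expected roundoff for the given word size.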

179 citations


Journal ArticleDOI
TL;DR: An autoregressive model is fitted to the signal; low-pass filtering is then performed in the frequency domain by a linear-phase FIR filter, and differentiation is also carried out in the frequency domain, avoiding high-frequency noise magnification.
Abstract: Smoothing and differentiation of noisy signals are common problems whenever it is difficult or impossible to obtain derivatives by direct measurement. In biomechanics body displacements are frequently assessed and these measurements are affected by noise. To avoid high-frequency noise magnification, data filtering before differentiation is needed. In the approach reported here an autoregressive model is fitted to the signal. This allows the evaluation of the filter bandwidth and the extrapolation of the data. The extrapolation also reduces edge effects. Low-pass filtering is performed in the frequency domain by a linear phase FIR filter and differentiation is performed in the frequency domain. The reported results illustrate the accuracy of the algorithm and its speed (mainly due to the use of the FFT algorithm). Automatic bandwidth selection also guarantees the homogeneity of the results.
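Differentiation in the frequency domain, as used here, amounts to multiplying the spectrum by j*omega before the inverse FFT. A minimal sketch on a periodic signal; the AR-model extrapolation and FIR low-pass filtering steps of the paper are omitted.

```python
import numpy as np

def fft_derivative(x, dt):
    """Differentiate a periodic sampled signal by multiplying its
    spectrum by j*omega and inverse-transforming."""
    N = len(x)
    omega = 2j * np.pi * np.fft.fftfreq(N, d=dt)
    if N % 2 == 0:
        omega[N // 2] = 0.0   # zero the Nyquist bin so the result is real
    return np.real(np.fft.ifft(omega * np.fft.fft(x)))
```

Because j*omega amplifies high frequencies linearly, this step is exactly why the paper filters the data first: any high-frequency measurement noise would otherwise be magnified in the derivative.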

166 citations


Journal ArticleDOI
TL;DR: Frequency cells comprising a subset, or gate, of the spectral bins from fast Fourier transform (FFT) processing are identified with the states of the hidden Markov chain and analyzed in terms of physically meaningful quantities.
Abstract: Frequency cells comprising a subset, or gate, of the spectral bins from fast Fourier transform (FFT) processing are identified with the states of the hidden Markov chain. An additional zero state is included to allow for the possibility of track initiation and termination. Analytic expressions for the basic parameters of the hidden Markov model (HMM) are obtained in terms of physically meaningful quantities, and optimization of the HMM tracker is discussed. A measurement sequence based on a simple threshold detector forms the input to the tracker. The outputs of the HMM tracker are a discrete Viterbi track, a gate occupancy probability function, and a continuous mean cell occupancy track. The latter provides an estimate of the mean signal frequency as a function of time. The performance of the HMM tracker is evaluated for two sets of simulated data. The HMM tracker is compared to earlier, related trackers, and possible extensions are discussed. >
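The discrete Viterbi track over FFT bins can be sketched as a dynamic program: each frame's bin scores are combined with a transition rule restricting how far the line may move between frames. This is a toy illustration with hypothetical scores; the paper's zero state for track initiation/termination and its HMM parameterisation are not modelled.

```python
import numpy as np

def viterbi_track(scores, max_jump=1):
    """Best bin sequence through a (frames x bins) score array,
    allowing at most `max_jump` bins of movement per frame."""
    T, B = scores.shape
    cost = scores[0].copy()
    back = np.zeros((T, B), dtype=int)
    for t in range(1, T):
        new = np.full(B, -np.inf)
        for b in range(B):
            lo, hi = max(0, b - max_jump), min(B, b + max_jump + 1)
            prev = lo + int(np.argmax(cost[lo:hi]))   # best predecessor bin
            new[b] = cost[prev] + scores[t, b]
            back[t, b] = prev
        cost = new
    path = [int(np.argmax(cost))]
    for t in range(T - 1, 0, -1):                     # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

In the paper the per-bin scores come from a threshold detector on the FFT output and the transition structure from the HMM; here they are just a score matrix.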

164 citations


Journal ArticleDOI
TL;DR: In this paper, a theoretical study and a model for the numerical simulation of the nonlinear electrical response, including the harmonic generation rate calculation, of a p-i-n InGaAs photodiode under high-illumination conditions are discussed.
Abstract: A theoretical study and a model for the numerical simulation of the nonlinear electrical response, including the harmonic-generation rate calculation, of a p-i-n InGaAs photodiode under high-illumination conditions are discussed. The device structure is described. An algorithm, which is based on a finite-difference calculation, is used to calculate the temporal electrical response of the device to a microwave optical input signal. The different harmonics in the power spectrum are obtained using the fast Fourier transform (FFT) calculation. This model is a tool for designing the p-i-n photodiode and determining the conditions for its utilization in order to avoid the electrical response nonlinearity. >

158 citations


Journal ArticleDOI
TL;DR: Fourier transform algorithms are described using tensor (Kronecker) products and an associated class of permutations to derive variants of the Cooley-Tukey fast Fourier transform algorithm.
Abstract: Fourier transform algorithms are described using tensor (Kronecker) products and an associated class of permutations. Algebraic properties of tensor products and the related permutations are used to derive variants of the Cooley-Tukey fast Fourier transform algorithm. These algorithms can be implemented by translating tensor products and permutations to programming constructs. An implementation can be matched to a specific computer architecture by selecting the appropriate variant. This methodology is carried out for the Cray X-MP and the AT&T DSP32.
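The flavour of these tensor-product identities can be checked numerically for N = 4: one radix-2 Cooley-Tukey step writes the DFT matrix as (F2 kron I2) * T * (I2 kron F2) * P, with T the twiddle diagonal and P the even-odd permutation. A small sketch of one identity, not the paper's full framework.

```python
import numpy as np

N = 4
F4 = np.fft.fft(np.eye(N))              # 4-point DFT matrix (symmetric)
F2 = np.array([[1, 1], [1, -1]])        # 2-point DFT matrix
I2 = np.eye(2)
P = np.eye(N)[[0, 2, 1, 3]]             # even-odd (stride) permutation
T = np.diag([1, 1, 1, -1j])             # twiddles diag(1, 1, w^0, w^1), w = exp(-2j*pi/4)
A = np.kron(F2, I2) @ T @ np.kron(I2, F2) @ P
assert np.allclose(A, F4)               # one radix-2 Cooley-Tukey step
```

As the paper observes, each factor maps directly to a programming construct (a loop of small FFTs, a pointwise scaling, a data permutation), so rearranging the algebra rearranges the memory access pattern.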

Journal ArticleDOI
TL;DR: A spread-spectrum code acquisition technique for a direct-sequence (DS) system in the presence of Doppler effect and data modulation is investigated and the use of theoretical results to estimate the hardware complexity of an actual system is illustrated step by step, showing that implementation is feasible with existing technology.
Abstract: A spread-spectrum code acquisition technique for a direct-sequence (DS) system in the presence of Doppler effect and data modulation is investigated. Both the carrier-frequency offset and code-frequency offset due to severe Doppler effect are considered. The code-chip slipping during the correlation process caused by code-frequency offset can degrade the acquisition performance significantly. However, this issue can be alleviated by compensating code-frequency offset in an appropriate manner. Results are presented for the cases with and without data modulation. Coherent detection is considered when there is no data modulation. If data modulation is present, the authors partition the correlation time into subintervals and the integration results in these subintervals are square-law noncoherently combined for detection. The implementation of this code acquisition technique using the fast Fourier transform (FFT) algorithm is described. The use of theoretical results to estimate the hardware complexity of an actual system is illustrated step by step, showing that implementation is feasible with existing technology. The tradeoff between hardware complexity and acquisition performance is discussed. >
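The FFT-based correlation at the heart of such acquisition schemes can be sketched as follows: circular correlation of the received signal with a local code replica is one forward FFT, a conjugate product, and one inverse FFT, and the peak lag gives the code phase. A baseband toy example, ignoring Doppler, noise, and data modulation.

```python
import numpy as np

def code_phase(received, code):
    """Estimate the code phase by circular correlation computed
    with FFTs; the argmax of the correlation magnitude is the lag."""
    corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(code)))
    return int(np.argmax(np.abs(corr)))
```

This evaluates all N candidate phases in O(N log N) instead of O(N^2), which is what makes a serial-search acquisition feasible in hardware.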

Journal ArticleDOI
TL;DR: A method based on numerical inversion of the Laplace transform is proposed for the analysis of lossy coupled transmission lines with arbitrary linear terminal and interconnecting networks; the inversion technique is equivalent to high-order, numerically stable integration methods.
Abstract: A novel method based on numerical inversion of the Laplace transform is presented for the analysis of lossy coupled transmission lines with arbitrary linear terminal and interconnecting networks. The formulation of the network equations is based on a Laplace-domain admittance stamp for the transmission line. The transmission line stamp can be used to formulate equations representing arbitrarily complex networks of transmission lines and interconnects. These equations can be solved to get the frequency-domain response of the network. Numerical inversion of the Laplace transform allows the time-domain response to be calculated directly from Laplace-domain equations. This method is an alternative to calculating the frequency-domain response and using the fast Fourier transform to obtain the time-domain response. The inversion technique is equivalent to high-order, numerically stable integration methods. Numerical examples showing the general application of the method are presented. It is shown that the inverse Laplace technique is able to calculate the step response of a network. The time-domain independence of the solution is exploited by an efficient calculation of the propagation delay of the network. >
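One classical recipe for numerical Laplace inversion is the Gaver-Stehfest algorithm, which needs F(s) only at real points. It is shown here as a generic illustration of the idea of going from the Laplace domain directly to the time domain; the paper's own inversion technique is different, being equivalent to high-order integration methods.

```python
import numpy as np
from math import factorial, log

def stehfest_coeffs(n):
    """Gaver-Stehfest weights V_k for even n."""
    V = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            s += (j ** (n // 2) * factorial(2 * j)
                  / (factorial(n // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + n // 2) * s)
    return V

def invert_laplace(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s),
    sampling F at the real points k*ln(2)/t only."""
    V = stehfest_coeffs(n)
    return log(2) / t * sum(V[k - 1] * F(k * log(2) / t)
                            for k in range(1, n + 1))
```

The alternating weights grow rapidly with n, so in double precision n around 12-16 is the practical limit; this numerical-cancellation issue is one reason more stable inversion schemes, like the one in the paper, are attractive.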

Journal ArticleDOI
TL;DR: The authors propose the detection and location of faulty processors concurrently with the actual execution of parallel applications on the hypercube using a novel scheme of algorithm-based error detection, which allows the authors to isolate and replace faulty processors with spare processors.
Abstract: The design of fault-tolerant hypercube multiprocessor architecture is discussed. The authors propose the detection and location of faulty processors concurrently with the actual execution of parallel applications on the hypercube using a novel scheme of algorithm-based error detection. System-level error detection mechanisms have been implemented for three parallel applications on a 16-processor Intel iPSC hypercube multiprocessor: matrix multiplication, Gaussian elimination, and fast Fourier transform. Schemes for other applications are under development. Extensive studies have been done of error coverage of the system-level error detection schemes in the presence of finite-precision arithmetic, which affects the system-level encodings. Two reconfiguration schemes are proposed that allow the authors to isolate and replace faulty processors with spare processors. >

Journal ArticleDOI
TL;DR: In this article, a general package for harmonic-domain computation is described, consisting of a set of routines which can be used by developers of programs for power system harmonic applications, and the most basic routines have been listed.
Abstract: A general package for harmonic-domain computation is described. It consists of a set of routines which can be used by developers of programs for power system harmonic applications. The most basic routines have been listed. The package represents nonlinear characteristics by fitting the characteristic with a polynomial, for which special harmonic domain processing via convolutions has been developed, or by directly applying a fast Fourier transform. A model in the form of a differential equation is derived for the electric arc. It is based on simple energy balance considerations and therefore is expected to be generally valid. The computational results compare well with existing measurements. The arc model can be used for discharge lamps or for arc furnaces. >
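The link between a polynomial nonlinearity and harmonic-domain processing can be seen in a tiny example: a cubic characteristic applied to a pure cosine produces exactly the harmonics of cos^3(theta) = (3*cos(theta) + cos(3*theta))/4, which an FFT of the sampled waveform recovers. A toy illustration of the principle, not the package's routines.

```python
import numpy as np

N = 32
theta = 2 * np.pi * np.arange(N) / N
v = np.cos(theta)            # fundamental-only input waveform
i = v ** 3                   # cubic nonlinear characteristic
H = 2 * np.fft.rfft(i) / N   # harmonic amplitudes at bins 0..N/2
# expected: H[1] = 0.75 and H[3] = 0.25, all other harmonics zero
```

Equivalently, squaring or cubing a waveform convolves its harmonic coefficient sequence with itself, which is why the package implements polynomial characteristics via convolutions in the harmonic domain.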

Journal ArticleDOI
TL;DR: An algorithm is proposed for detecting moving targets by imaging sensors and estimating their trajectories; it is based on directional filtering in the frequency domain, using a bank of filters for all possible target directions, which effectively integrates the target signal and improves the signal-to-noise ratio.
Abstract: An algorithm for detecting moving targets by imaging sensors and estimating their trajectories is proposed. The algorithm is based on directional filtering in the frequency domain, using a bank of filters for all possible target directions. The directional filtering effectively integrates the target signal, resulting in an improved signal-to-noise ratio. Working in the frequency domain facilitates a considerable reduction in computational requirements compared to time-domain algorithms. The algorithm is described in detail, and its false alarm and detection probabilities are analyzed.

Patent
24 Jan 1990
TL;DR: In this article, a low-probability-of-intercept communication system (CCSK) is proposed, in which information signals are modulated onto an inverse fast Fourier transformation of a large number of simultaneous frequencies that have been determined to be reasonably quiet within a given system bandwidth, so as to produce a time-domain pulse waveform.
Abstract: A low probability of intercept communication system (CCSK) modulates information signals onto an inverse fast Fourier transformation of a large number of simultaneous frequencies that have been determined to be reasonably `quiet` within a given system bandwidth, so as to produce a time domain pulse waveform. The amplitude of each transmitted frequency is weighted. Within the receiver equipment of each participant in the system, the incoming pulse waveform produced by the inverse fast Fourier transformation mechanism at the source is coupled to a fast Fourier transform operator, so as to separate the time domain signal into a plurality of frequency components that contain the modulated data. These components are then convolved with a replica of the plurality of quiet channels to derive a time domain output waveform from which the data modulation can be identified and recovered. Even if a jamming threat is injected into one or more of the `quiet` channels that has been selected as a participating carrier, by virtue of the signal analysis and recovery process employed by each unit for incoming signals, jamming spikes are effectively excised.
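The transmit/receive chain described here (data on selected "quiet" bins, an inverse FFT to form the time-domain pulse, a forward FFT in the receiver) can be sketched in a few lines. The bin choices and weights below are hypothetical placeholders, and the jamming-excision step is omitted.

```python
import numpy as np

N = 64
quiet_bins = np.array([3, 7, 12, 19, 26])       # hypothetical "quiet" channels
weights = np.array([1.0, 0.8, 1.2, 0.9, 1.1])   # per-carrier amplitude weighting
data = np.array([1, -1, 1, 1, -1], dtype=float) # binary information symbols

# Transmitter: place weighted symbols on the chosen bins, inverse FFT
spectrum = np.zeros(N, dtype=complex)
spectrum[quiet_bins] = weights * data
pulse = np.fft.ifft(spectrum)                   # time-domain pulse waveform

# Receiver: forward FFT separates the carriers again
recovered = np.fft.fft(pulse)[quiet_bins].real / weights
```

A real transmit waveform would additionally require conjugate-symmetric bin loading; the complex pulse here is kept only to make the roundtrip minimal.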

Journal ArticleDOI
TL;DR: The method treats fast Fourier transforms of multichannel EEGs so that they can be used for intracerebral source localizations, and finds the least-squares deviation sum between the entry positions and their orthogonal projections onto the straight line.

Journal ArticleDOI
TL;DR: A fast technique for automatic 3-D shape measurement that can automatically and accurately obtain the phase map or the height information of a measured object at every pixel point without assigning fringe orders and interpreting data in the regions between the fringe orders is proposed and verified by experiments.
Abstract: A fast technique for automatic 3-D shape measurement is proposed and verified by experiments. The technique, based on the principle of phase measurement of the deformed grating pattern which carries the 3-D information of the measured object, can automatically and accurately obtain the phase map or the height information of a measured object at every pixel point without assigning fringe orders and interpreting data in the regions between the fringe orders. Only one image pattern is sufficient for obtaining the phase map. In contrast to the fast Fourier transform based technique, the technique processes a fringe pattern in the real-signal domain instead of the frequency domain by using demodulation and convolution techniques, can process an arbitrary number of pixel points, and is much faster. Theoretical analysis, simulation results, and experimental results are presented.

Journal ArticleDOI
TL;DR: A synthetic aperture radar (SAR) processor approach based on two-dimensional fast Fourier transform (FFT) codes coupled with an asymptotic evaluation of the unit response function is presented, enabling an effective reference filter to be designed.
Abstract: A synthetic aperture radar (SAR) processor approach based on two-dimensional fast Fourier transform (FFT) codes coupled with an asymptotic evaluation of the unit response function is presented. For the latter, no approximation is made to the distance function, so that the full range of geometric aberrations is analytically considered, enabling an effective reference filter to be designed. The two-dimensional FFTs were designed as to run on computers of very limited memory: the required FFT is computed by means of FFTs of lower order. Two FFT codes were considered: one is faster and allows full or reduced (quick look or multilook) resolution performance to be obtained easily; the second is slower but allows the use of a space-varying filter and/or investigations on limited portions (zoom) of the image. Both codes are suited to parallel processing, e.g. by a transputer net. A full discussion on computer memory and time requirements is presented as well as first examples of image processing results. >

Journal ArticleDOI
TL;DR: The Fourier transform technique developed for the design of variable refractive-index coatings such as rugate filters is improved to achieve an accurate correspondence between optical properties and the refractive index profile.
Abstract: The Fourier transform technique developed for the design of variable refractive index coatings such as rugate filters is improved to achieve an accurate correspondence between optical properties and the refractive index profile. An application to the design of narrowband reflectors is presented.

Journal ArticleDOI
TL;DR: By additionally using a low-resolution intensity image from a telescope with a small aperture, a fine-resolution image of a general object can be reconstructed in a two-step approach; the second step uses a modified algorithm that employs an expanding weighting function on the Fourier modulus.
Abstract: It is difficult to reconstruct an image of a complex-valued object from the modulus of its Fourier transform (i.e., retrieve the Fourier phase) except in some special cases. By using additionally a low-resolution intensity image from a telescope with a small aperture, a fine-resolution image of a general object can be reconstructed in a two-step approach. First the Fourier phase over the small aperture is retrieved, using the Gerchberg–Saxton algorithm. Then that phase is used, in conjunction with the Fourier modulus data over a large aperture together with a support constraint on the object, to reconstruct a fine-resolution image (retrieve the phase over the large aperture) by the iterative Fourier-transform algorithm. The second step requires a modified algorithm that employs an expanding weighting function on the Fourier modulus.
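The first step's Gerchberg-Saxton iteration alternates between the two domains, keeping the computed phase but replacing each modulus with its measured value. A minimal sketch on random test data; the paper's second step, the modified iterative transform algorithm with an expanding weighting function, is not shown.

```python
import numpy as np

def gerchberg_saxton(mag_obj, mag_fourier, n_iter=100, seed=0):
    """Retrieve phase from object-domain and Fourier-domain moduli
    by alternating projections between the two magnitude constraints."""
    rng = np.random.default_rng(seed)
    g = mag_obj * np.exp(2j * np.pi * rng.random(mag_obj.shape))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = mag_fourier * np.exp(1j * np.angle(G))   # impose Fourier modulus
        g = np.fft.ifft2(G)
        g = mag_obj * np.exp(1j * np.angle(g))       # impose object modulus
    return g
```

The Fourier-modulus error of this iteration is non-increasing, but for complex-valued objects it can stagnate far from the solution, which is the difficulty motivating the paper's two-step, two-aperture approach.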

Journal ArticleDOI
TL;DR: Two algorithms are presented for computing the discrete cosine transform (DCT) on existing VLSI structures, and a new prime factor DCT algorithm is presented for the class of DCTs of length N = N1*N2, where N1 and N2 are relatively prime and odd numbers.
Abstract: Two algorithms are presented for computing the discrete cosine transform (DCT) on existing VLSI structures. First, it is shown that the N-point DCT can be implemented on the existing systolic architecture for the N-point discrete Fourier transform (DFT) by introducing some modifications. Second, a new prime factor DCT algorithm is presented for the class of DCTs of length N = N1*N2, where N1 and N2 are relatively prime and odd numbers. It is shown that the proposed algorithm can be implemented on the already existing VLSI structures for prime factor DFT. The number of multipliers required is comparable to that required for the other fast DCT algorithms. It is shown that the discrete sine transform (DST) can be computed by the same structure.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a technique for extracting the singularity of the Green's function that appears within the integrands of the matrix diagonal, further enhancing the usefulness of the FFT.
Abstract: The enhancement of the computational efficiency of the body of revolution scattering problem is discussed with a view of making it practical for solving large body problems. The problem of the electromagnetic scattering by a perfectly conducting body is considered, although the methods provided can be extended to multilayered dielectric bodies as well. Typically, the generation of the elements of the moment method matrix consumes a major portion of the computational time. It is shown how this time can be significantly reduced by manipulating the expression for the matrix elements in a manner that allows one to compute them efficiently by using the fast Fourier transform (FFT). A technique for extracting the singularity of the Green's function that appears within the integrands of the matrix diagonal is also presented, further enhancing the usefulness of the FFT. It is shown that, with the use of the method discussed here, the computational time can be improved by at least an order of magnitude for large bodies in comparison to that for previous algorithms. >

Journal ArticleDOI
TL;DR: A fast recursive algorithm for the discrete sine transform (DST) is developed that can be considered as a generalization of the Cooley-Tukey FFT (fast Fourier transform) algorithm.
Abstract: A fast recursive algorithm for the discrete sine transform (DST) is developed. An N-point DST can be generated from two identical N/2-point DSTs. Besides being recursive, this algorithm requires fewer multipliers and adders than other DST algorithms. It can be considered as a generalization of the Cooley-Tukey FFT (fast Fourier transform) algorithm. The structure of the algorithm is suitable for VLSI implementation.

Journal ArticleDOI
01 Dec 1990
TL;DR: By means of the Kronecker matrix product representation, the 1-D algorithms introduced in the paper can readily be generalised to compute transforms of higher dimensions and are more stable than and have fewer arithmetic operations than similar algorithms proposed by Yip and Rao.
Abstract: According to Wang, there are four different types of DCT (discrete cosine transform) and DST (discrete sine transform) and the computation of these sinusoidal transforms can be reduced to the computation of the type-IV DCT. As the algorithms involve different sizes of transforms at different stages they are not so regular in structure. Lee has developed a fast cosine transform (FCT) algorithm for DCT-III similar to the decimation-in-time (DIT) Cooley–Tukey fast Fourier transform (FFT) with a regular structure. A disadvantage of this algorithm is that it involves the division of the trigonometric coefficients and may be numerically unstable. Recently, Hou has developed an algorithm for DCT-II which is similar to a decimation-in-frequency (DIF) algorithm and is numerically stable. However, an index mapping is needed to transform the DCT to a phase-modulated discrete Fourier transform (DFT), which may not be performed in-place. In the paper, a variant of Hou's algorithm is presented which is both in-place and numerically stable. The method is then generalised to compute the entire class of discrete sinusoidal transforms. By making use of the DIT and DIF concepts and the orthogonal properties of the DCTs, it is shown that simple algebraic formulations of these algorithms can readily be obtained. The resulting algorithms are regular in structure and are more stable than and have fewer arithmetic operations than similar algorithms proposed by Yip and Rao. By means of the Kronecker matrix product representation, the 1-D algorithms introduced in the paper can readily be generalised to compute transforms of higher dimensions. These algorithms, which can be viewed as the vector-radix generalisation of the present algorithms, share the in-place and regular structure of their 1-D counterparts.

Journal ArticleDOI
TL;DR: Two-dimensional systolic array implementations for computing the discrete Hartley transform and the discrete cosine transform when the transform size N is decomposable into mutually prime factors are proposed.
Abstract: Two-dimensional systolic array implementations for computing the discrete Hartley transform (DHT) and the discrete cosine transform (DCT) when the transform size N is decomposable into mutually prime factors are proposed. The existing two-dimensional formulations for DHT and DCT are modified, and the corresponding algorithms are mapped into two-dimensional systolic arrays. The resulting architecture is fully pipelined with no control units. The hardware design is based on bit serial left to right MSB (most significant bit) to LSB (least significant bit) binary arithmetic. >

Journal ArticleDOI
01 Apr 1990
TL;DR: In this article, a general solution technique that relies on converting the system to an equivalent 1-D two-point boundary-value descriptor system (TPBVDS) of large dimension is proposed, for which a recursive and stable solution technique is developed.
Abstract: The solution and linear estimation of 2-D nearest-neighbor models (NNMs) are considered. The class of problems that can be described by NNMs is quite large, as models of this type arise whenever partial differential equations are discretized with finite-difference methods. A general solution technique that relies on converting the system to an equivalent 1-D two-point boundary-value descriptor system (TPBVDS) of large dimension, for which a recursive and stable solution technique is developed, is proposed. Under slightly restrictive assumptions, an even faster procedure can be obtained by using the fast Fourier transform (FFT) with respect to one of the space dimensions to convert the 1-D TPBVDS into a set of decoupled TPBVDS of low order, which can be solved in parallel. The smoothing problem for 2-D random fields described by stochastic NNMs is also examined. The smoother is expressed as a Hamiltonian system of twice the dimension of the original system and is also in NNM form. NNM solution techniques are therefore directly applicable to this solution. The results are illustrated by two examples corresponding to the discretized Poisson and heat equations, respectively. >
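The FFT decoupling step can be illustrated on the discretized Poisson equation with periodic boundary conditions: the Fourier transform diagonalises the second-difference operator, so each frequency is solved independently. This is a 1-D sketch of the idea only; the paper works with 2-D boundary-value descriptor systems.

```python
import numpy as np

def poisson_periodic_fft(f):
    """Solve u'' = f on a periodic unit-spacing grid via the FFT.

    The second-difference operator has eigenvalues
    2*cos(2*pi*k/N) - 2 in the Fourier basis, so each mode is
    solved by a single division; the zero mode is fixed to 0,
    giving the zero-mean solution."""
    N = len(f)
    k = np.arange(N)
    lam = 2.0 * np.cos(2 * np.pi * k / N) - 2.0
    F = np.fft.fft(f)
    U = np.zeros(N, dtype=complex)
    U[1:] = F[1:] / lam[1:]
    return np.real(np.fft.ifft(U))
```

This is the same structural gain the paper exploits: after the transform, a large coupled system falls apart into many small independent systems that can be solved in parallel.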

Journal ArticleDOI
TL;DR: It is shown that the complex phase of Q (sigma) is a key parameter which can be exploited to reduce significantly the thickness of the synthesized films and to control the shape of the refractive index profiles without affecting the spectral performance.
Abstract: Several errors inherent to the Fourier transform method for optical thin film synthesis, including the inaccuracy of the spectral functions Q (sigma) used in the Fourier transforms, are compensated numerically by using successive approximations. We show that the complex phase of Q (sigma) is a key parameter which can be exploited to reduce significantly the thickness of the synthesized films and to control the shape of the refractive index profiles without affecting the spectral performance. This method is compared to other well established thin film design techniques.

Proceedings ArticleDOI
26 Jun 1990
TL;DR: A novel algorithm-based fault tolerance scheme is proposed for fast Fourier transform (FFT) networks and it is shown that the proposed scheme achieves 100% fault coverage theoretically.
Abstract: A novel algorithm-based fault tolerance scheme is proposed for fast Fourier transform (FFT) networks. It is shown that the proposed scheme achieves 100% fault coverage theoretically. An accurate measure of the fault coverage for FFT networks is provided by taking the roundoff error into account. It is shown that the proposed scheme maintains the low hardware overhead and high throughput of J.Y. Jou and J.A. Abraham's scheme and, at the same time, increases the fault coverage significantly.