
Showing papers on "Time–frequency analysis" published in 1991


Proceedings ArticleDOI
01 Dec 1991
TL;DR: Renyi information of the third order is used to provide an information measure on time-frequency distributions and the results suggest that even though this new measure tracks time-bandwidth results for two Gabor log-ons separated in time and/or frequency, the information measure is more general and provides a quantitative assessment of the number of resolvable components in a time-frequency representation.
Abstract: The well-known uncertainty principle is often invoked in signal processing. It is also often considered to have the same implications in signal analysis as does the uncertainty principle in quantum mechanics. The uncertainty principle is often incorrectly interpreted to mean that one cannot locate the time-frequency coordinates of a signal with arbitrarily good precision, since, in quantum mechanics, one cannot determine the position and momentum of a particle with arbitrarily good precision. Renyi information of the third order is used to provide an information measure on time-frequency distributions. The results suggest that even though this new measure tracks time-bandwidth results for two Gabor log-ons separated in time and/or frequency, the information measure is more general and provides a quantitative assessment of the number of resolvable components in a time-frequency representation. As such, the information measure may be useful as a tool in the design and evaluation of time-frequency distributions.
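A minimal sketch of the measure described above (the function name and normalization convention are assumptions, not taken from the paper): the order-3 Rényi information of a discrete, unit-sum time-frequency distribution.

```python
import numpy as np

def renyi_information(tfd, alpha=3):
    """Order-alpha Renyi information of a discrete time-frequency distribution.

    Sketch only: the distribution is normalized to unit sum, and negative
    values (present in Wigner-type distributions) are passed through as-is,
    so the result is meaningful only when that normalization is well behaved.
    """
    P = np.asarray(tfd, dtype=float)
    P = P / P.sum()
    return np.log2(np.sum(P ** alpha)) / (1.0 - alpha)

# Two well-separated Gabor log-ons give roughly one bit more than a single
# log-on, i.e. 2**(difference) ~ 2 resolvable components.
```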

141 citations


Journal ArticleDOI
TL;DR: Algorithms to compute coefficients of the finite double sum expansion of time-varying nonstationary signals and to synthesize them from a finite number of expansion coefficients are presented.
Abstract: Algorithms to compute coefficients of the finite double sum expansion of time-varying nonstationary signals and to synthesize them from a finite number of expansion coefficients are presented. The algorithms are based on the computation of the discrete Zak transform (DZT). Fast algorithms to compute the DZT are presented. Modifications to the algorithms which increase robustness are given.
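For concreteness, one common form of the discrete Zak transform and its FFT-based computation is sketched below; the folding and normalization conventions are assumptions and may differ from the paper's.

```python
import numpy as np

def discrete_zak_transform(x, N):
    """Discrete Zak transform sketch (one common convention).

    x is assumed to have length L = M*N.  The signal is folded into an
    (M, N) array and an FFT is taken over the fold index, i.e.
    Z[k, n] = sum_m x[n + m*N] * exp(-2j*pi*m*k/M).
    Index and normalization conventions vary between papers.
    """
    x = np.asarray(x)
    L = len(x)
    assert L % N == 0, "signal length must be a multiple of N"
    M = L // N
    folded = x.reshape(M, N)           # folded[m, n] = x[n + m*N]
    return np.fft.fft(folded, axis=0)  # N FFTs of length M, i.e. the fast route
```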

119 citations


Journal ArticleDOI
TL;DR: Well-known block transforms and perfect reconstruction orthonormal filter banks are evaluated based on their frequency behavior and energy compaction and it is shown that the filter banks outperform the block transforms for the signal sources considered.
Abstract: Well-known block transforms and perfect-reconstruction orthonormal filter banks are evaluated based on their frequency behavior and energy compaction. The filter banks outperform the block transforms for the signal sources considered. Although block transforms are simpler to implement and are already the choice of existing video coding standards, filter banks with simple algorithms may well become the signal decomposition technique for next-generation video codecs, which require a multiresolution signal representation.

83 citations


Journal ArticleDOI
TL;DR: In this article, elementary cells assuming oblique forms (chirplets) are proposed, together with an adaptive method for selecting their aspect ratio and obliquity to suit the data.
Abstract: Dynamic spectra, which exhibit the spectral content of a signal as time elapses, are based on subdivision of the time–frequency plane into minimum-area rectangular cells. The cell dimensions in time and frequency are usually held constant throughout. A more general spectral analysis would allow the cells to change aspect ratio with time. Elementary cells assuming oblique forms (chirplets) are proposed, together with an adaptive method for selecting their aspect ratio and obliquity to suit the data.
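A minimal illustration of such an oblique elementary cell; the Gaussian-envelope form and parameter names are assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_chirplet(t, t0=0.0, f0=0.0, chirp_rate=0.0, duration=1.0):
    """Gaussian-envelope chirplet: an oblique cell in the time-frequency plane.

    t0, f0     -- centre of the cell in time and frequency
    duration   -- sets the aspect ratio (time spread versus frequency spread)
    chirp_rate -- obliquity: how fast the instantaneous frequency sweeps (Hz/s)
    """
    tau = t - t0
    envelope = np.exp(-0.5 * (tau / duration) ** 2)
    phase = 2.0 * np.pi * (f0 * tau + 0.5 * chirp_rate * tau ** 2)
    return envelope * np.exp(1j * phase)
```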

78 citations


Proceedings ArticleDOI
14 Apr 1991
TL;DR: An optimization formulation for designing signal-dependent kernels that are based on radially Gaussian functions is presented, and it is shown that the optimal-kernel TFD offers excellent performance for a larger class of signals than any current fixed-kernel representation.
Abstract: An optimization formulation for designing signal-dependent kernels that are based on radially Gaussian functions is presented. The method is based on optimality criteria and is not ad hoc. The procedure is automatic. The optimization criteria are formulated so that the resulting time-frequency distribution (TFD) is insensitive to the time scale and orientation of the signal in time-frequency. Examples demonstrate that the optimal-kernel TFD offers excellent performance for a larger class of signals than any current fixed-kernel representation. The technique performs well in the presence of substantial additive noise, which suggests that it may prove useful for automatic detection of unknown signals in noise. The cost of this technique is only a few times greater than that of the fixed-kernel methods and the 1/0 optimal kernel method.
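The radially Gaussian kernel itself (though not the signal-dependent optimization) can be written down directly; a sketch, with an angle-dependent spread function standing in for whatever the optimization would produce:

```python
import numpy as np

def radially_gaussian_kernel(theta, tau, sigma_of_angle):
    """Radially Gaussian kernel Phi(r, psi) = exp(-r**2 / (2*sigma(psi)**2)).

    theta, tau     -- Doppler and lag coordinates of the ambiguity plane
    sigma_of_angle -- callable giving the radial spread at angle psi; in the
                      signal-dependent method this is what gets optimized,
                      which is not reproduced here.
    """
    r = np.hypot(theta, tau)
    psi = np.arctan2(tau, theta)
    return np.exp(-(r ** 2) / (2.0 * sigma_of_angle(psi) ** 2))
```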

68 citations


Proceedings ArticleDOI
14 Apr 1991
TL;DR: Two properties are presented that constrain the cross-terms of Cohen-class time-frequency representations (TFRs) to appear only at signal frequencies, and only when the signal is nonzero, i.e. the TFR is zero everywhere the signal or its spectrum is zero.
Abstract: Two properties are presented that constrain the cross-terms of Cohen-class time-frequency representations (TFRs) to appear only at signal frequencies, and only when the signal is nonzero. These properties thus guarantee strong finite support, i.e. the TFR is zero everywhere the signal or its spectrum is zero. When combined with cross-term attenuation, one can obtain TFRs with spectrogram-like interference suppression, but without the inherent time-frequency resolution tradeoff of the spectrogram.

16 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors propose a new distance metric for a radial basis functions (RBF) neural network, which is based on the well-known expectation maximization (EM) algorithm.
Abstract: We propose a new distance metric for a radial basis functions (RBF) neural network. We consider a two-dimensional space of time and frequency. In the usual context of RBF, a two-dimensional space would imply a two-dimensional feature vector. In our paradigm, however, the input feature vector may be of any length, and is typically a time series (say, 512 samples). We also propose a rule for positioning the centers in time-frequency (TF) space, which is based on the well-known expectation maximization (EM) algorithm. Our algorithm, for which we have coined the term `log-on expectation maximization' (LEM), adapts a number of centers in TF space in such a way as to fit the input distribution. We propose two variants, LEM1, which works in one dimension at a time, and LEM2, which works in both dimensions simultaneously. We allow these `circles' (somewhat circular TF contours) to move around in the two-dimensional space, but we also allow them to dilate into ellipses of arbitrary aspect ratio. We then have a generalization which embodies both the Weyl-Heisenberg (e.g., sliding window FFT) and affine (e.g., wavelet) spaces as special cases. Later we allow the `ellipses' to adaptively `tilt.' (In other words, we allow the time series associated with each center to chirp, hence the name `chirplet transform.') It is possible to view the process in a different space, for which we have coined the term `bow-tie' space. In that space, the adaptivity appears as a number of bow-tie shaped centers which also move about to fit the input distribution in this new space. We use our chirplet transform for time-frequency analysis of Doppler radar signals. The chirplet essentially embodies a constant acceleration physical model. This model almost perfectly matches the physics of constant-force, constant-mass objects (such as cars with fixed throttle starting off at a stoplight). Our transform resolves general targets (those undergoing nonconstant acceleration) better than the classical Fourier Doppler periodogram. Since it embodies the constant-velocity model (the Doppler periodogram) as a special case, its extra degrees of freedom better capture the physics of moving objects than does classical Fourier processing. By making the transform adaptive, we may better represent the signal with fewer transform coefficients.

16 citations


Proceedings ArticleDOI
14 Apr 1991
TL;DR: A novel time-frequency representation, the principal short-time Fourier transform (PSTFT), is introduced as a reduced-data characterization of the nonstationary spectral content of a sequence, which allows for exact reconstruction of the original sequence from its PSTFT representation.
Abstract: The authors introduce a novel time-frequency representation, the principal short-time Fourier transform (PSTFT), as a reduced-data characterization of the nonstationary spectral content of a sequence. To form the PSTFT, the authors apply principal components analysis to STFT frequency slices and retain only the first principal component information. This information alone allows for exact reconstruction of the original sequence from its PSTFT representation. The PSTFT retains explicit information about the nonstationary spectral content of a sequence, as well as implicit information necessary for reconstruction of the sequence. The results obtained are extended to other time-frequency distributions, which include the Wigner-Ville distribution, the complex energy density, and the radar ambiguity function.
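A rough illustration of keeping only the first principal component of the STFT, using an SVD; the PSTFT's exact construction and its reconstruction conditions are not reproduced here, and the helper below is hypothetical.

```python
import numpy as np
from scipy.signal import stft

def first_principal_component(x, fs=1.0, nperseg=256):
    """Keep only the first principal component of the STFT frequency slices.

    Illustrative sketch: the complex STFT matrix (frequency x time) is
    factored with an SVD and the rank-one term is returned as a weight,
    a frequency shape, and a time course.
    """
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)        # Z: frequency x time
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    return s[0], U[:, 0], Vh[0, :]
```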

14 citations


Proceedings ArticleDOI
01 Dec 1991
TL;DR: In this paper, the Wigner-Ville distribution is smoothed into a signal-dependent spectrogram using an iterative algorithm, and a generalized uncertainty principle is used to remove signal uncertainty in the time-frequency plane.
Abstract: This paper presents a review of some concepts associated with time-frequency distributions (the instantaneous frequency, group delay, instantaneous bandwidth, and marginal properties) and generalizes them in time-frequency via rotation of coordinates. This work emphasizes the need to examine time-frequency distributions in the general time-frequency plane, rather than restricting oneself to a time and/or frequency framework. This analysis leads to a generalized uncertainty principle, which has previously been introduced in radar theory. This uncertainty principle is invariant under rotation in the time-frequency plane, and should be used instead of the traditional definition of Gabor. It is desired to smooth a time-frequency distribution that is an energy density function into one that is an energy function. Most distributions are combinations of density and energy functions, but the Wigner-Ville distribution is purely a density function. By using a local version of the generalized uncertainty principle, the Wigner-Ville distribution is smoothed into a signal-dependent spectrogram using an iterative algorithm. It is believed that this procedure may represent, in some way, an optimal removal of signal uncertainty in the time-frequency plane.

11 citations


Proceedings ArticleDOI
04 Nov 1991
TL;DR: The application of this approach to a vowel classification task using the spectrogram is presented and is shown to compare favorably with conventional methods.
Abstract: The multicomponent nature of many naturally occurring signals, such as speech, is exploited to provide a new means of detection and classification. A component of a multicomponent signal is defined in terms of the local bandwidth about the instantaneous frequency in the time-frequency distribution. The components are isolated by an adaptive partitioning algorithm which is constrained to overcome the interference terms often present in such distributions. The redundancy which may be present in the individual components of the signal is discussed, along with the means to detect and classify the signal based upon this redundancy. The application of this approach to a vowel classification task using the spectrogram is presented and is shown to compare favorably with conventional methods.

11 citations


Journal ArticleDOI
TL;DR: In this article, the Wigner transform, the Gabor coefficients, Weyl-Heisenberg wavelet transform, and ambiguity function are investigated and the effects of more general time-frequency domain processing are examined.
Abstract: Many scientific and engineering applications require tools to perform analysis on a signal in the time-frequency domain. This paper establishes connections between several different time-frequency transforms: the Wigner transform, the Gabor coefficients, Weyl–Heisenberg wavelet transforms, and the ambiguity function. Both continuous and discrete versions of the transforms are investigated and the effects of more general time-frequency domain processing are examined. While this paper is primarily expository in nature, it also presents the resolution differences between various time-frequency transform techniques in a unified framework and suggests some improvements. Some examples from acoustics are provided.

Journal ArticleDOI
TL;DR: The conditional moments of frequency of a time-frequency distribution are discussed, and it is shown that they may be related to the derivative of the log of the corresponding signal, a complex function whose imaginary part is the instantaneous frequency.
Abstract: The conditional moments of frequency of a time-frequency distribution are discussed, and it is shown that they may be related to the derivative of the log of the corresponding signal, a complex function whose imaginary part is the instantaneous frequency. For the complex energy spectrum, a general form for the moments is obtained. For a signal which is the sum of several other signals, the instantaneous frequency will always demonstrate erratic behavior.
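A small sketch of the relation above, i.e. that the instantaneous frequency is the imaginary part of the derivative of the log of the (analytic) signal; the finite-difference implementation is an assumption for illustration only.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    """Instantaneous frequency as Im d/dt ln s(t), returned in Hz (sketch).

    With s(t) = A(t) exp(j*phi(t)), d/dt ln s(t) = A'(t)/A(t) + j*phi'(t),
    so the imaginary part of the log-derivative is phi'(t).  Here the phase
    is unwrapped and differenced, a crude discrete approximation.
    """
    s = hilbert(x)                    # analytic signal
    phase = np.unwrap(np.angle(s))    # phi(t), unwrapped
    return np.diff(phase) * fs / (2.0 * np.pi)
```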

Proceedings ArticleDOI
04 Nov 1991
TL;DR: In this paper, the authors demonstrate the application of the Wigner-Ville time-frequency distribution, the bispectrum, the time-varying bispectrum, and Gerr's third-order Wigner distribution to some underwater acoustic data, and show the merit of including higher-order spectral information when signaturing underwater acoustic sources.
Abstract: The authors demonstrate the application of the Wigner-Ville time-frequency distribution, the bispectrum, the time-varying bispectrum, and Gerr's third-order Wigner distribution to some underwater acoustic data. They also demonstrate the merit of including higher-order spectral information when signaturing underwater acoustic sources. It is pointed out that conventional signal analysis procedures do not utilize all the information available in many practical signal analysis problems. It is shown that the use of higher-order spectral analysis improves time, frequency, and time-frequency analysis methods and provides the analyst with important additional information. Time-varying higher-order spectra are not well developed and are difficult to interpret. Phase coupling cannot be detected without some form of ensemble averaging. This important feature of the stationary bispectrum, where for single-realization problems ensemble averages are often replaced by temporal averages, does not carry through to time-varying higher-order spectra.
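For reference, the stationary bispectrum mentioned above is typically estimated by segment averaging, B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)]; a rough sketch (windowing, detrending, and careful frequency bookkeeping omitted, all names mine):

```python
import numpy as np

def bispectrum_estimate(x, nfft=256, nseg=16):
    """Segment-averaged (direct) bispectrum estimate, a sketch only."""
    x = np.asarray(x, dtype=float)
    segs = np.array_split(x[: nseg * (len(x) // nseg)], nseg)
    idx = (np.arange(nfft)[:, None] + np.arange(nfft)[None, :]) % nfft
    B = np.zeros((nfft, nfft), dtype=complex)
    for seg in segs:
        X = np.fft.fft(seg, nfft)
        B += np.outer(X, X) * np.conj(X[idx])   # X(f1) X(f2) X*(f1+f2)
    return B / nseg
```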

Journal ArticleDOI
TL;DR: It is demonstrated through the software simulations that the predicted noise-to-signal ratio is a good closed-form estimate of the 'true' roundoff error.
Abstract: The issue of roundoff noise effects in the implementation of the discrete Wigner distribution using fixed-point arithmetic is addressed. The sign-magnitude number representation is assumed throughout the analysis. The measure of roundoff noise effects in an algorithm is the output noise-to-signal ratio. Using a statistical model, an analytical expression of the noise-to-signal ratio is derived as a function of the wordlength b and the transform length N. The noise-to-signal ratio is obtained by evaluating the signal and noise powers at different points in the algorithm, then reflecting both signal and noise powers to the output. Based on the derived noise-to-signal ratio, it is noted that if the transform length is doubled, then one additional bit is required in the wordlength to maintain a constant noise-to-signal ratio. It is demonstrated through software simulations that the predicted noise-to-signal ratio is a good closed-form estimate of the 'true' roundoff error. It is also found from the simulation that the wordlength b and the transform length N = 2^v must satisfy the condition b - v >= 4.
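A tiny worked illustration of the stated rule (the helper function and the power-of-two assumption are mine):

```python
def min_wordlength(N):
    """Minimum wordlength b from the reported condition b - v >= 4, N = 2**v."""
    v = N.bit_length() - 1   # v = log2(N) for power-of-two N
    return v + 4

# min_wordlength(1024) -> 14 bits; doubling N to 2048 gives 15 bits,
# i.e. one extra bit per doubling of the transform length.
```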

Proceedings ArticleDOI
04 Nov 1991
TL;DR: The authors study the performance of detection algorithms based on linear time-frequency transforms, for transient signals characterized by linear and nonlinear parametric signal models, and by a 'model mismatch' representing the difference between the model and the actual signal.
Abstract: The authors study the performance of detection algorithms based on linear time-frequency (or time-scale) transforms, for transient signals characterized by linear and nonlinear parametric signal models, and by a 'model mismatch' representing the difference between the model and the actual signal. The transients are assumed to undergo a noninvertible linear transformation prior to the application of the detection algorithm. Examples of such transforms include the short-time Fourier transform and the wavelet transform. Closed-form expressions are derived for the worst-case detection performance for all possible mismatch signals of a given energy. These expressions make it possible to evaluate and compare the performance of various transient detection algorithms, for both single- and multichannel data. Both linear and nonlinear signal models are considered.

Proceedings ArticleDOI
04 Nov 1991
TL;DR: It is concluded that D.L. Johnson's (1982) approach to bearing estimation as a form of spectral estimation readily generalizes whenever a statistically consistent estimator of a signal attribute can be found.
Abstract: It is noted that power spectral estimation provides one approach to direction-finding. This approach readily generalizes to produce a collection of newer direction-finding algorithms. Estimation of the bispectrum yields a bispectral direction finder. Estimates of time-frequency distributions produce Wigner-Ville and Gabor direction-finders. Some types of nonstationary time series admit spectral estimators which can be used to localize a source. Chaotic signals can also be localized using recently developed parameter estimators. It is concluded that D.L. Johnson's (1982) approach to bearing estimation as a form of spectral estimation readily generalizes whenever a statistically consistent estimator of a signal attribute can be found.

Proceedings ArticleDOI
04 Nov 1991
TL;DR: In this article, the use of the cone kernel time-frequency representation (CK-TFR) for shift-keyed signal identification is introduced, which can distinguish between phase shifts of 180, +90, and -90 degrees.
Abstract: The use of the cone kernel time-frequency representation (CK-TFR) for shift-keyed signal identification is introduced. In electronic warfare surveillance systems, the identification of phase shifts in a signal is an important topic due to the prevalent use of phase-shift keyed signalling schemes. Examples of TFR patterns for shift-keyed signals are presented. It is observed that, in addition to providing baud rate information about the signal, the CK-TFR clearly distinguishes between phase shifts of 180, +90, and -90 degrees. A 'spectral peak splitting' effect is observed in the CK-TFR coincident with phase shifts in a signal. The mathematical foundation for the spectral peak splitting is explored.

01 May 1991
TL;DR: In this article, a new set of reduced-data, time-frequency representations, called principal representations, are introduced that capture the coherent amplitude and frequency modulations of component sinusoids of an underlying signal, while retaining enough implicit information for reconstruction of the underlying sequence.
Abstract: The problem addressed in this thesis is that of finding time-frequency representations that capture the coherent amplitude and frequency modulations in the component sinusoids of an underlying signal, while keeping the data requirement of the representation minimal, and ensuring reconstructibility of the underlying sequence from the representation. Classical time-frequency representations such as the short-time Fourier transform, Wigner-Ville distribution, complex energy density, and radar ambiguity function capture the amplitude and frequency modulations of component sinusoids of an underlying signal and retain enough information for invertibility, but at the cost of great data expansion. Reduced-data representations derived from feature extraction algorithms often represent the relevant amplitude and frequency modulations of component sinusoids, but at the cost of noninvertibility. This thesis introduces a new set of reduced-data, time-frequency representations, called principal representations, that capture the coherent amplitude and frequency modulations of component sinusoids of an underlying signal, while retaining enough implicit information for reconstruction of the underlying sequence. Principal representations are formed by applying principal components analysis to the frequency slices of classical time-frequency representations and retaining only the first component information. Certain mild conditions that vary with each principal representation are shown to be sufficient for unique sequence representation with a principal representation. Furthermore, various algorithms are presented for stable reconstruction and signal estimation from modified principal representations of the STFT. The utility of those algorithms is illustrated through their application to problems such as time-scale modification of speech and musical sound synthesis.

Proceedings ArticleDOI
TL;DR: This work fixes a window and provides tools for measuring how well various subspaces of signals can be analyzed relative to that window, and asks whether the Fourier coefficients of the Zak-transform quotient can be used to provide a (discrete) time-frequency representation of the signal relative to the fixed window, independent of their role in a Weyl-Heisenberg expansion.
Abstract: Several recent works [1,2,3] have discussed the role of the Gabor transform as a tool for signal detection and feature extraction in the presence of noise. Gabor, or more generally Weyl-Heisenberg, expansions of signals provide a greater degree of sensitivity to local phenomena such as local frequency changes as compared to classical Fourier methods. To be an effective tool, such expansions should highlight relatively few components and not spread signal energy throughout an unreasonably large number of components. An important first step in applying the Gabor transform to detect a signal is to maximally exploit a priori signal knowledge to design an appropriate weighting function as a window on input data. The effect of such a window in practice is to reduce the dimension of the signal search space. If complete signal information is known at the outset, then the optimal signal processing strategy is the matched filter, which reduces the search space to one dimension. In this work, we fix a window and provide tools for measuring how well various subspaces of signals can be analyzed relative to the window. In a Fourier signal processing strategy, signal decay rate (along with that of its Fourier transform) and signal smoothness usually serve as the essential a priori information necessary to set sampling rates and establish error estimates. This information will not be sufficient for an effective application of the Gabor transform. The Zak transform plays a key role in several works [4,5,6,7] dealing with the Gabor transform. The function formed from the ratio of the Zak transform of a signal to the Zak transform of the window will provide essential information about the window's effectiveness in compactly representing signal information. Various subspaces of signals will be defined by functional properties of the quotient. In general, the quotient is doubly periodic but is not necessarily continuous. However, under rather general conditions, Fourier coefficients can be defined which in some cases determine the coefficients of a Weyl-Heisenberg expansion. It is natural to ask whether these coefficients can be used to provide a (discrete) time-frequency representation of the signal relative to the fixed window independent of their role in a Weyl-Heisenberg expansion.
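For concreteness, the critically sampled relationship alluded to above can be written in one common convention (the cited works may use different normalizations and lattice conventions):

```latex
% Zak transform (one common convention) and the Gabor expansion
\[
  (Zx)(t,f) \;=\; \sum_{k\in\mathbb{Z}} x(t+k)\, e^{-2\pi i k f},
  \qquad
  x(t) \;=\; \sum_{m,n} c_{mn}\, e^{2\pi i m t}\, g(t-n).
\]
% Applying Z to the expansion gives (Zx)/(Zg) = \sum_{m,n} c_{mn} e^{2\pi i (mt - nf)},
% so, wherever Zg does not vanish, the expansion coefficients are the 2-D Fourier
% coefficients of the quotient over the unit square:
\[
  c_{mn} \;=\; \int_0^1\!\!\int_0^1 \frac{(Zx)(t,f)}{(Zg)(t,f)}\,
               e^{-2\pi i (m t - n f)}\, dt\, df .
\]
```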

Proceedings ArticleDOI
14 Apr 1991
TL;DR: A fast algorithm for computing the pseudo-Wigner distribution using the fast Hartley transform (FHT) is presented, and the computation complexity is reduced from three complex FFTs to three real FHTs.
Abstract: A fast algorithm for computing the pseudo-Wigner distribution using the fast Hartley transform (FHT) is presented. Unlike the conventional fast Fourier transform (FFT) approach, this algorithm performs entirely in the real domain. Many efficient FHT algorithms can be applied, and the computation complexity is reduced from three complex FFTs to three real FHTs.
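For reference, a sketch of the quantity being computed, via the conventional complex-FFT route that the paper replaces with real Hartley transforms; conventions for normalization and frequency scaling are assumptions:

```python
import numpy as np

def pseudo_wigner(x, window):
    """Pseudo-Wigner distribution via the conventional complex-FFT route.

    Sketch only: x is assumed analytic, window a real symmetric array of odd
    length 2*h + 1.  Each time index forms a windowed local autocorrelation
    r(tau) = w(tau) x(n+tau) x*(n-tau), which is then Fourier transformed.
    """
    x = np.asarray(x, dtype=complex)
    w = np.asarray(window, dtype=float)
    N, h = len(x), (len(w) - 1) // 2
    taus = np.arange(-h, h + 1)
    W = np.zeros((N, len(w)))
    for n in range(N):
        ok = (n + taus >= 0) & (n + taus < N) & (n - taus >= 0) & (n - taus < N)
        r = np.zeros(len(w), dtype=complex)
        r[ok] = w[ok] * x[n + taus[ok]] * np.conj(x[n - taus[ok]])
        W[n] = np.fft.fft(np.fft.ifftshift(r)).real   # lag 0 moved to index 0
    return W
```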

Proceedings ArticleDOI
23 Sep 1991
TL;DR: Doppler processing is more suited to objects, such as icebergs, which remain within the same range cell for an extended period of time, while spatiotemporal processing is more suitable for objects, such as ships, which move through multiple range cells.
Abstract: In our problem of identifying iceberg fragments in marine radar, we have previously applied Gabor’s expansion of a signal onto a set of Gaussian windowed sinusoids (Gabor functions). A somewhat sinusoidal signature in time-frequency space characterized the near circular movement of any floating object under the influence of ocean waves. Methods based on an adaptive version of this time-frequency processing have been shown [1] to track this sinusoidal nonstationarity and thus were very effective for detecting wave driven objects. An even better means of performing the detection, using “chirplets”, was later developed [2]. The dynamics of the motion were modeled, first by a constant acceleration (expansion on windowed linear FM basis functions), then by an expansion onto a set of sinusoidal chirplets. (The sound of a police siren is a member of this set of bases.) The sinusoidal chirplet model embodies the linear chirplet as a special case. Ordinary range-based processing assumes a piecewise stationary underlying model. The sliding-window Doppler Fourier processing assumes a more general underlying model, namely that of piecewise constant velocity. The linear chirplet generalizes further to constant acceleration. Finally, the sinusoidal chirplet matches the physics of floating objects very closely and provides the best performance. Each one embodies the previous ones as special cases. We compare our new methods of Doppler processing with spatiotemporal processing [3]. Doppler processing is more suited to objects, such as icebergs, which remain within the same range cell for an extended period of time, while spatiotemporal processing is more suitable for objects, such as ships, which move through multiple range cells.