
Showing papers on "Time–frequency analysis published in 1982"


Journal ArticleDOI
TL;DR: A signal processor has been designed using bit-slice microprocessor techniques; the analysis currently performed is a fast Fourier transform (FFT) of ultrasonic blood-velocity signals, with graphic display of the results.
Abstract: A signal processor has been designed using bit-slice microprocessor techniques. The processor has two data buses for input/output, a central processing unit and a memory. The microprogram in the processor can be changed to suit individual needs. The analysis currently performed is a fast Fourier transform (FFT) of ultrasonic blood-velocity signals, with graphic display of the results. The FFT operates on 256 sampled points, thus giving 128 frequency components of the signal. Each transform is calculated in less than 4.5 ms. The FFT algorithm is radix-2, and the real-valued signal is transformed into a complex sequence to simplify the program. The microprocessor has been interfaced to an ultrasonic Doppler blood-velocity meter. Results from measurements on arteries are shown.
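The abstract's arithmetic (256 real samples in, 128 frequency components out) follows from the conjugate symmetry of the FFT of a real signal. A minimal sketch, using a textbook recursive radix-2 Cooley-Tukey FFT on a synthetic test tone (not the paper's microcoded implementation or its real-to-complex packing trick):

```python
import cmath
import math

def fft_radix2(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return [complex(x[0])]
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# 256 real samples of a 10-cycle cosine (hypothetical test signal)
N = 256
signal = [math.cos(2 * math.pi * 10 * n / N) for n in range(N)]
spectrum = fft_radix2(signal)

# For a real input, bin N-k is the complex conjugate of bin k,
# so only the first N/2 = 128 components are independent.
peak_bin = max(range(N // 2), key=lambda k: abs(spectrum[k]))
```

Here `peak_bin` lands on bin 10, and checking `spectrum[N - 10]` against `spectrum[10].conjugate()` confirms the redundancy of the upper half of the spectrum.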

13 citations


Proceedings ArticleDOI
01 May 1982
TL;DR: This mechanism is shown to involve principal component (or Loeve-Karhunen) analysis as an intermediate step in the complete canonical coordinate determination process, and can substantially reduce the computational complexity entailed in handling a class of non-euclidean error criteria.
Abstract: This paper describes the use of a canonical signal compression and modelling technique that permits the minimization of certain non-euclidean types of signal resynthesis error criteria. The technique is based on the construction of a non-orthogonal transformation from the original sampled signal representation, in either the time, frequency or spatial domain, to a special canonical coordinate domain. The parameters characterizing this transformation are then chosen to minimize the specified error criterion, for each level of truncation of the canonical coordinate based signal representation. A mechanism for factoring this canonical coordinate transformation into an eigenvector-eigenvalue based correlation simplification process and an error metric simplification process is described. This mechanism is shown to involve principal component (or Loeve-Karhunen) analysis as an intermediate step in the complete canonical coordinate determination process, and can substantially reduce the computational complexity entailed in handling a class of non-euclidean error criteria.

2 citations