
Showing papers on "Multidimensional signal processing" published in 1984


Journal ArticleDOI
TL;DR: A new algorithm is introduced for the 2^m-point discrete cosine transform that reduces the number of multiplications to about half of those required by the existing efficient algorithms, and it makes the system simpler.
Abstract: A new algorithm is introduced for the 2^m-point discrete cosine transform. This algorithm reduces the number of multiplications to about half of those required by the existing efficient algorithms, and it makes the system simpler.

661 citations
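
The abstract does not reproduce the fast recursion itself. For orientation only, here is a minimal Python sketch of the N-point DCT-II that such fast 2^m-point algorithms compute, written as the direct O(N^2) sum rather than the reduced-multiplication form; the function name and test data are invented for the example.

```python
import numpy as np

def dct_ii(x):
    """Direct O(N^2) DCT-II: C[k] = sum_n x[n] * cos(pi*(2n+1)*k / (2N)).
    Fast 2^m-point algorithms compute the same transform with far fewer
    multiplications; this is only the reference definition."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (2 * n + 1) * k / (2 * N)))
                     for k in range(N)])

x = np.random.default_rng(0).standard_normal(8)   # N = 2^3
print(dct_ii(x))
```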


Proceedings ArticleDOI
19 Mar 1984
TL;DR: The paper presents a revised functional description of Volder's Coordinate Rotation Digital Computer (CORDIC) algorithm, along with allied VLSI-implementable processor architectures; the approach benefits execution speed in array configurations because it allows pipelining at the bit level.
Abstract: The paper presents a revised functional description of Volder's Coordinate Rotation Digital Computer algorithm (CORDIC), as well as allied VLSI implementable processor architectures. Both pipelined and sequential structures are considered. In the general purpose or multi-function case, pipeline length (number of cycles), function evaluation time and accuracy are all independent of the various executable functions. High regularity and minimality of data-paths, simplicity of control circuits and enhancement of function evaluation speed are ensured, partly by mapping a unified set of micro-operations, and partly by invoking a natural encoding of the angle parameters. The approach benefits the execution speed in array configurations, since it will allow pipelining at the bit level, thereby providing fast VLSI implementations of certain algorithms exhibiting substantial structural pipelining or parallelism.

124 citations
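
For readers unfamiliar with CORDIC, the sketch below shows the basic circular-rotation iteration (shift-and-add rotations through the angles atan(2^-i), with the aggregate gain folded in up front). It is a floating-point illustration of the classic algorithm, not the paper's bit-level pipelined architecture, and all names are invented for the example.

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Circular-mode CORDIC: rotate (K, 0) toward angle theta using only
    shift-and-add style updates; valid for |theta| within the CORDIC
    convergence range (roughly +/- 1.74 rad)."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))   # aggregate scale factor
    x, y, z = K, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                   # rotate toward residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y, x                                       # (sin theta, cos theta)

print(cordic_sin_cos(0.5))          # ~ (0.4794, 0.8776)
print(math.sin(0.5), math.cos(0.5))
```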


Patent
20 Jul 1984
TL;DR: In this paper, a modified Walsh-Hadamard transform is used to remove noise and preserve image structure in a sampled image, where image signals representative of the light value of elements of the image are grouped into signal arrays corresponding to blocks of image elements.
Abstract: An improved image processing method uses a modified Walsh-Hadamard transform to remove noise and preserve image structure in a sampled image. Image signals representative of the light value of elements of the image are grouped into signal arrays corresponding to blocks of image elements. These signals are mapped into larger signal arrays such that one or more image signals appear two or more times in each larger array. The larger arrays are transformed by Walsh-Hadamard combinations characteristic of the larger array into sets of coefficient signals. Noise is reduced by modifying--i.e., coring or clipping--selected coefficient signals and inverting the transform so as to recover processed signals--less noise--representative of each smaller signal array. The results exhibit acceptable rendition of low contrast detail while at the same time reducing certain processing artifacts characteristic of the unimproved Walsh-Hadamard block transform.

58 citations
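
The patent's modification maps blocks into larger, partially duplicated arrays; the sketch below illustrates only the underlying idea of coring Walsh-Hadamard coefficients on a plain, unmodified block, using SciPy's Hadamard matrix. The function name, threshold, and test block are invented for the example.

```python
import numpy as np
from scipy.linalg import hadamard

def wh_core_block(block, threshold):
    """Plain Walsh-Hadamard coring of one square image block: transform,
    zero AC coefficients whose magnitude falls below the threshold,
    then inverse transform."""
    n = block.shape[0]
    H = hadamard(n) / np.sqrt(n)          # orthonormal Hadamard matrix
    coeff = H @ block @ H.T               # 2-D Walsh-Hadamard transform
    mask = np.abs(coeff) >= threshold
    mask[0, 0] = True                     # always keep the mean (DC) term
    return H @ (coeff * mask) @ H.T       # inverse transform of cored coefficients

rng = np.random.default_rng(1)
block = 100 + 2 * rng.standard_normal((8, 8))   # flat patch plus noise
print(np.round(wh_core_block(block, threshold=3.0), 1))
```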


Journal ArticleDOI
01 Jul 1984
TL;DR: Algorithms that can be implemented optically using outer-product concepts include matrix multiplication, convolution/correlation, binary arithmetic operations for higher accuracy, matrix decompositions, and similarity transformations of images.
Abstract: A row vector when left-multiplied by a column vector produces a two-dimensional rank-one matrix in an operation commonly called an outer product between the two vectors. The outer product operation can form the basis for a large variety of higher order algorithms in linear algebra, signal processing, and image processing. This operation can be best implemented in a processor having two-dimensional (2-D) parallelism and a global interaction among the elements of the input vectors. Since optics is endowed with exactly these features, an optical processor can perform the outer product operation in a natural fashion using orthogonally oriented one-dimensional (1-D) input devices such as acoustooptic cells. Algorithms that can be implemented optically using outer-product concepts include matrix multiplication, convolution/correlation, binary arithmetic operations for higher accuracy, matrix decompositions, and similarity transformations of images. Implementation is shown to be frequently tied to time-integrating detection techniques. These and other hardware issues in the implementation of some of these algorithms are discussed.

56 citations
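
The decomposition the optical processor exploits, matrix multiplication as an accumulated sum of rank-one outer products, can be stated in a few lines. The sketch below is a plain numerical illustration of that identity (one rank-one update per "cycle", as a time-integrating detector would accumulate them), not a model of the optical hardware; all names are invented for the example.

```python
import numpy as np

def matmul_outer(A, B):
    """Matrix product built up as a sum of rank-one outer products:
    A @ B = sum_k outer(A[:, k], B[k, :])."""
    m, n = A.shape[0], B.shape[1]
    C = np.zeros((m, n))
    for k in range(A.shape[1]):
        C += np.outer(A[:, k], B[k, :])   # one rank-one update per step
    return C

rng = np.random.default_rng(2)
A, B = rng.standard_normal((3, 4)), rng.standard_normal((4, 5))
print(np.allclose(matmul_outer(A, B), A @ B))   # True
```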


DOI
01 Feb 1984
TL;DR: A cost function is developed for making quantitative comparisons between digital algorithm implementations including control and overheads and the results show the advantage gained by minimising the true costs including, particularly, control overheads, instead of just the number of arithmetic operations.
Abstract: The advent of digital VLSI technology demands a reappraisal of the most suitable algorithms for a given processing function. A cost function is developed for making quantitative comparisons between digital algorithm implementations including control and overheads. This provides a tool allowing different implementations of the same algorithm, and also different algorithms for the same function, to be compared. The cost function is chosen to characterise the algorithm and associated logic design independently of the circuit technology that might be used. In this way the technology options can be introduced separately in designing a practical system. Also, configurations which achieve a higher throughput by using a proportionally larger quantity of hardware or higher logic speed are assessed as having the same effectiveness or cost. As an illustration, the costing is applied to FFT and Winograd Fourier-transform algorithms (WFTA). The results show the advantage gained by minimising the true costs including, particularly, control overheads, instead of just the number of arithmetic operations. Despite its fewer arithmetic operations, the WFTA is shown to be less efficient than the FFT except in the most fully parallel case.

45 citations


Patent
21 Feb 1984
TL;DR: In this paper, an X-ray imaging system is used to reduce extraneous signals or artifacts in a multiple measurement noise reducing system by processing a plurality of measurements to obtain a first image signal of an object representing a desired parameter such as a blood vessel.
Abstract: Extraneous signals or artifacts are reduced in a multiple measurement noise reducing system such as an X-ray imaging system by processing a plurality of measurements to obtain a first image signal of an object representing a desired parameter such as a blood vessel, processing the plurality of measurements to provide a second image signal having an increased signal-to-noise ratio, low pass filtering the first image signal, high pass filtering the second image signal, and then combining the two filtered signals. The filter frequencies are varied in response to the presence of artifacts to minimize effects of the artifact on the combined signal.

41 citations
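
A hedged sketch of the band-splitting idea described in the claim: take the low spatial frequencies from the desired-parameter image and the high spatial frequencies from the higher-SNR image, then sum the two bands. A Gaussian filter stands in for the patent's unspecified filters, the artifact-adaptive cutoffs are not modeled, and all names and values are invented for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def combine_bands(desired, high_snr, sigma=2.0):
    """Low-pass the desired-parameter image, high-pass the higher-SNR image,
    and add the two bands back together."""
    low = gaussian_filter(desired, sigma)                   # low-frequency band
    high = high_snr - gaussian_filter(high_snr, sigma)      # high-frequency band
    return low + high

rng = np.random.default_rng(3)
desired = rng.standard_normal((64, 64))      # stand-in for the noisy subtraction image
high_snr = rng.standard_normal((64, 64))     # stand-in for the higher-SNR image
print(combine_bands(desired, high_snr).shape)
```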


Proceedings ArticleDOI
27 Feb 1984
TL;DR: In this article, two types of local frequency spectra are presented: the Wigner distribution function and the sliding-window spectrum, the latter having the form of a cross-ambiguity function.
Abstract: The description of a signal by means of a local frequency spectrum resembles such things as the score in music, the phase space in mechanics, and the ray concept in geometrical optics. Two types of local frequency spectra are presented: the Wigner distribution function and the sliding-window spectrum, the latter having the form of a cross-ambiguity function. The Wigner distribution function in particular can provide a link between Fourier optics and geometrical optics; many properties of the Wigner distribution function, and the way in which it propagates through linear systems, can be interpreted in geometric-optical terms. The Wigner distribution function is linearly related to other signal representations like Woodward's ambiguity function, Rihaczek's complex energy density function, and Mark's physical spectrum. An advantage of the Wigner distribution function and its related signal representations is that they can be applied not only to deterministic signals but to stochastic signals as well, leading to such things as Walther's generalized radiance and Sudarshan's Wolf tensor. On the other hand, the sliding-window spectrum has the advantage that a sampling theorem can be formulated for it: the sliding-window spectrum is completely determined by its values at the points of a certain space-frequency lattice, which is exactly the lattice suggested by Gabor in 1946. The sliding-window spectrum thus leads naturally to Gabor's expansion of a signal into a discrete set of properly shifted and modulated versions of an elementary signal, which is again another space-frequency signal representation, and which is related to the degrees of freedom of the signal.

35 citations
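
Of the two representations, the sliding-window spectrum is the easier to sketch numerically: slide a window along the signal and take a DFT of each segment, giving a time-frequency (or space-frequency) map. The toy implementation below illustrates only that general construction, not the paper's optical formulation; window, hop, and test signal are invented for the example.

```python
import numpy as np

def sliding_window_spectrum(x, window, hop):
    """Sliding-window spectrum: DFT of each windowed segment of the signal.
    Returns an array of shape (number of windows, window length)."""
    L = len(window)
    frames = [np.fft.fft(x[start:start + L] * window)
              for start in range(0, len(x) - L + 1, hop)]
    return np.array(frames)

t = np.arange(1024) / 1024.0
chirp = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))   # rising-frequency test signal
S = sliding_window_spectrum(chirp, np.hanning(128), hop=32)
print(S.shape)                                         # (29, 128)
```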


Journal ArticleDOI
01 Oct 1984
TL;DR: Multidimensional filtering has been applied to seismic reflection data since the earliest days of analog recording; as acquisition has expanded to dense spatial sampling over many channels, more sophisticated multichannel filters have been developed, including simple "mixes" (spatial convolution with small operators), two-dimensional Fourier transforms with limits in spatial and temporal frequencies, and geometrically and geophysically meaningful Radon transform techniques.
Abstract: Multidimensional filtering has been applied during recording and processing of seismic reflection data since the earliest days of analog recording on paper records. As the state of the art has evolved to digital recording and processing, and acquisition has expanded to include dense spatial sampling over a large number of channels, more sophisticated multichannel filters have been developed. These include simple "mixes" (spatial convolution with small operators), two-dimensional Fourier transforms with appropriate limits in spatial and temporal frequencies, and more geometrically, as well as geophysically, meaningful Radon transform techniques. All multidimensional filtering limits the data in some fashion, be it temporal frequency bandwidth, spatial frequency bandwidth, limits in apparent horizontal phase velocity across a recording array (antenna), or limits in apparent wave-propagation velocity. These limits generally are defined to pass regions of high signal level and reject regions of high noise levels. As more recent techniques have emerged, such as Tau-p transforms (special cases of the Radon transform), filter limits may be described in terms of geophysical knowledge as well as signal characteristics. Thus additional information, derived from regional geophysical knowledge, may be added to the data processing sequence. Many new considerations and potential problems have arisen as new multidimensional filtering techniques have been developed, including spatial sampling, aliasing with different transforms, maintenance of dynamic range, and effects of multidimensional filtering at different points in the processing sequence.

31 citations
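
One of the filters mentioned, the two-dimensional Fourier transform with a limit on apparent horizontal velocity, can be illustrated compactly. The sketch below is a bare-bones f-k fan filter with a hard velocity cut (real implementations taper the boundary to avoid ringing); the function name, sampling intervals, and cutoff velocity are invented for the example.

```python
import numpy as np

def fk_velocity_filter(section, dt, dx, v_min):
    """Frequency-wavenumber (f-k) fan filter: 2-D FFT of a seismic section
    (time x trace), reject energy whose apparent horizontal velocity
    |f / k| falls below v_min, inverse transform."""
    nt, nx = section.shape
    F = np.fft.fft2(section)
    f = np.fft.fftfreq(nt, dt)[:, None]      # temporal frequencies (Hz)
    k = np.fft.fftfreq(nx, dx)[None, :]      # spatial frequencies (1/m)
    with np.errstate(divide="ignore", invalid="ignore"):
        v_app = np.abs(f) / np.abs(k)        # apparent velocity of each (f, k) cell
        keep = (np.abs(k) == 0) | (v_app >= v_min)
    return np.real(np.fft.ifft2(F * keep))

rng = np.random.default_rng(4)
section = rng.standard_normal((512, 48))     # 512 time samples x 48 traces
filtered = fk_velocity_filter(section, dt=0.004, dx=25.0, v_min=1500.0)
print(filtered.shape)
```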


Proceedings ArticleDOI
01 Mar 1984
TL;DR: This paper presents a non-linear system modelling technique based on the 3-section block model which may be reconfigured to represent the non- linearities present in many practical situations.
Abstract: The existing theory relating to the analysis and modelling of non-linear systems relies on the Wiener model which is unnecessarily complex in many practical situations. This paper presents a non-linear system modelling technique based on the 3-section block model which may be reconfigured to represent the non-linearities present in many practical situations.

29 citations
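
The abstract does not define the 3-section block model in detail. The sketch below shows a generic three-block cascade (linear filter, memoryless nonlinearity, linear filter), which is one common interpretation of such a structure; it is offered only as an illustration under that assumption, not as the paper's model, and all names and coefficients are invented for the example.

```python
import numpy as np

def three_block_model(x, h_in, nonlinearity, h_out):
    """Generic linear / static-nonlinearity / linear cascade:
    y = h_out * f(h_in * x), with '*' denoting convolution."""
    u = np.convolve(x, h_in, mode="full")[: len(x)]      # first linear section
    v = nonlinearity(u)                                  # memoryless nonlinear section
    return np.convolve(v, h_out, mode="full")[: len(x)]  # second linear section

x = np.random.default_rng(5).standard_normal(256)
y = three_block_model(x, h_in=[1.0, 0.5], nonlinearity=np.tanh, h_out=[1.0, -0.3])
print(y.shape)
```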


Journal ArticleDOI
TL;DR: To design sampling patterns that reduce aliasing, the sequence of sampling points is mapped into several shorter subsequences via the Chinese remainder theorem, and a pairwise exchange algorithm then finds the best ordering of each subsequence.
Abstract: The aliasing that results from time-sequential sampling of spatiotemporal signals is strongly dependent on the order in which the spatial points are sampled. To design sampling patterns that reduce aliasing, the sequence of sampling points is mapped into several shorter subsequences via the Chinese remainder theorem. A pairwise exchange algorithm then finds the best ordering of each subsequence. The patterns obtained with this procedure perform substantially better than those known previously, and perform as well as the optimal patterns that can be expressed in closed form when the signal is temporally undersampled by less than a factor of 2.

28 citations
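
The key index mapping, splitting a length N1*N2 scan order into residues modulo coprime N1 and N2 via the Chinese remainder theorem, is easy to state. The sketch below shows only that bijection, not the pairwise exchange optimization; names and the moduli are invented for the example.

```python
import math

def crt_index_map(N1, N2):
    """Chinese-remainder-theorem index map: for coprime N1, N2, each scan
    position n in [0, N1*N2) corresponds uniquely to the residue pair
    (n mod N1, n mod N2), so a length-N ordering splits into shorter
    subsequences that can be optimized separately."""
    assert math.gcd(N1, N2) == 1, "N1 and N2 must be coprime"
    return {n: (n % N1, n % N2) for n in range(N1 * N2)}

mapping = crt_index_map(3, 5)
print(mapping[7])                                  # (1, 2)
print(len(set(mapping.values())) == 15)            # True: the map is one-to-one
```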


Proceedings ArticleDOI
01 Mar 1984
TL;DR: The Integrated Signal Processing System is a Lisp machine-based workstation which provides a unified environment for signal data processing and the development of signal processing algorithms.
Abstract: The Integrated Signal Processing System (ISP) is a Lisp machine-based workstation which provides a unified environment for signal data processing and the development of signal processing algorithms. ISP is based on a model of signal processing computation in which the fundamental activities are creating and manipulating abstract signal objects. ISP consists of three main subsystems. The signal representation language (SRL) formalizes the semantic foundation of ISP and provides a set of facilities for defining signal classes and creating instances of signal objects. The ISP environment provides a signal stack, signal pictures and signal display layouts which are used to create and view selected signals from the universe defined by SRL. Finally, the user interface consists of a number of interactive mechanisms for manipulating components of the environment.

Proceedings ArticleDOI
01 Mar 1984
TL;DR: A system for speech analysis and enhancement which combines signal processing and symbolic processing in a closely coupled manner and attempts to reconstruct the original speech waveform using symbolic processing to help model the signal and to guide reconstruction.
Abstract: This paper describes a system for speech analysis and enhancement which combines signal processing and symbolic processing in a closely coupled manner. The system takes as input both a noisy speech signal and a symbolic description of the speech signal. The system attempts to reconstruct the original speech waveform using symbolic processing to help model the signal and to guide reconstruction. The system uses various signal processing algorithms for parameter estimation and reconstruction.

Proceedings ArticleDOI
01 Mar 1984
TL;DR: A unified overview of fast sequential algorithms for LS FIR filters, implemented using a direct form realization, in the case of prewindowed multichannel signals is offered.
Abstract: Sequential Least-Squares (LS) methods play a prominent role in many digital signal processing applications. The conventional implementation of these schemes requires an amount of operations proportional to the square of the number of estimated parameters. In contrast, a variety of existing fast algorithms offer a computational complexity proportional to the number of estimated parameters. Such schemes exist for both direct form and lattice-ladder filter structures. This paper offers a unified overview of fast sequential algorithms for LS FIR filters, implemented using a direct form realization, in the case of prewindowed multichannel signals. Although all these algorithms are theoretically equivalent, in practice they exhibit different performance due to round-off noise, incorrect initialization, etc. The performance evaluation of all these schemes is still an area to be explored.

01 Oct 1984
TL;DR: A LISP-based signal processing package for integrated numeric and symbolic manipulation of discrete-time signals is described, based on the concept of 'signal abstraction' in which a signal is defined by its non-zero domain and by a method for computing its samples.
Abstract: A LISP-based signal processing package for integrated numeric and symbolic manipulation of discrete-time signals is described. The package is based on the concept of 'signal abstraction' in which a signal is defined by its non-zero domain and by a method for computing its samples. Most common signal processing operations are defined in the package and the package provides simple methods for the definition of new operators. The package provides facilities for the manipulation of infinite duration signals and periodic signals, for the efficient computation of signals over intervals, and for the caching of signal values. The package is currently being expanded to provide for manipulation of continuous-time signals and symbolic signal transformations, such as the Fourier transform, to form the basis of knowledge-based signal processing systems.
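
The package itself is Lisp-based; the Python sketch below merely illustrates the 'signal abstraction' idea described (a signal defined by its non-zero domain and a sample-computing rule, with values cached on demand, and operators returning new lazily evaluated signals). Every name in it is invented for the illustration.

```python
class Signal:
    """Toy signal abstraction: non-zero domain plus a rule for computing samples."""
    def __init__(self, domain, sample_fn):
        self.domain = domain              # (first, last) indices of support
        self.sample_fn = sample_fn
        self._cache = {}                  # caching of computed signal values

    def __getitem__(self, n):
        first, last = self.domain
        if n < first or n > last:
            return 0.0                    # zero outside the support
        if n not in self._cache:
            self._cache[n] = self.sample_fn(n)
        return self._cache[n]

def add(a, b):
    """Pointwise sum of two signals, itself a lazily evaluated Signal."""
    lo = min(a.domain[0], b.domain[0])
    hi = max(a.domain[1], b.domain[1])
    return Signal((lo, hi), lambda n: a[n] + b[n])

step = Signal((0, 99), lambda n: 1.0)
ramp = Signal((0, 99), lambda n: float(n))
print([add(step, ramp)[n] for n in (-1, 0, 3)])   # [0.0, 1.0, 4.0]
```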

Journal ArticleDOI
TL;DR: The reconstruction of bar codes from the imperfect CCD signal by digital signal processing techniques is discussed, along with schemes for encoding digital data in the form of bar codes.
Abstract: Charge coupled device (CCD) image sensors are currently used in many image processing applications such as facsimile and bar code reading. Because the photodetectors in the device have finite areas, the output signal from the CCD image sensor is only an approximation to the sampled value of the image. In this paper we discuss the reconstruction of bar codes from the imperfect CCD signal by digital signal processing techniques. Schemes for encoding digital data in the form of bar codes are also discussed.

Journal ArticleDOI
TL;DR: The main computational burden of checking stability of a multidimensional system is to check whether a multivariable polynomial has zeros on the distinguished boundary of a certain region of analyticity.
Abstract: The main computational burden of checking stability of a multidimensional system is to check whether a multivariable polynomial has zeros on the distinguished boundary of a certain region of analyticity. A transformation is performed yielding an expression which has zeros on the distinguished boundary if and only if the original polynomial has such zeros. In some cases, especially where a linear dependence of certain auxiliary variables defined in the text can be presented in terms of nonnegative integers, the transformed expression is considerably simpler, and its test is easier to perform than the original one. Examples are provided.

Proceedings ArticleDOI
01 Mar 1984
TL;DR: The signal processing capabilities of CUSP, its simple programmability, and I/O interface are discussed, and simulation results which quantify the signal-to-noise ratio advantages of the bit-serial architecture are presented.
Abstract: The Cornell University Signal Processor (CUSP) is a high performance CMOS processor which has been custom designed in VLSI to efficiently compute digital signal processing algorithms based on the Cooley-Tukey Radix-4 Fast Fourier Transform. One CUSP chip can be used as a stand-alone peripheral in a microprocessor system, or CUSP units can be combined into arrays in order to process signals with sampling rates of several MHz. It can attain a high numerical accuracy while maintaining a throughput superior to most available systems. In this paper we discuss the signal processing capabilities of CUSP, its simple programmability, and I/O interface. We also present simulation results which quantify the signal-to-noise ratio advantages of the bit-serial architecture.


Journal ArticleDOI
TL;DR: An extended fast Fourier transform algorithm which entirely eliminates or greatly reduces such operations is introduced and the derived algorithm has been applied to ARMA spectral estimation and its effectiveness compared to other methods.
Abstract: The conventional FFT algorithm can be used for the computation of ARMA spectral estimates, but a large number of operations would involve zeros. An extended fast Fourier transform algorithm which entirely eliminates or greatly reduces such operations is introduced in this paper. Subsequently, the derived algorithm has been applied to ARMA spectral estimation and its effectiveness compared to other methods.

Journal ArticleDOI
TL;DR: A simple algorithm for the bilinear transformation of multivariable polynomials on the lines of the method given by Davies is proposed.
Abstract: A simple algorithm for the bilinear transformation of multivariable polynomials on the lines of the method given by Davies is proposed.

Book ChapterDOI
01 Jan 1984
TL;DR: It is shown that rational functions expressed in the form of Schur type continued fractions have poles which contain the desired information in the input signals.
Abstract: Lattice digital filters are used as models in machine analysis and synthesis of signals such as speech. It is shown that rational functions expressed in the form of Schur type continued fractions have poles which contain the desired information in the input signals. Results are given to locate these poles in various regions (e.g., disks, annuli, or complements of disks) without having to compute the poles.

Proceedings ArticleDOI
27 Feb 1984
TL;DR: Conditions for decomposition are described, and a variety of architectures, including those for the discrete Fourier transform, chirp-z transform, beam forming, and cross-ambiguity function calculation, are discussed.
Abstract: Many signal processing architectures that exploit characteristics of current device technology can be devised by decomposing linear transform kernels and by employing chirp implementations of the Fourier transform. These methods allow complex algorithms to be implemented by devices with relatively fewer degrees of freedom. Dimensionality-changing transformations play an especially important role. Conditions for decomposition are described, and a variety of architectures, including those for the discrete Fourier transform, chirp-z transform, beam forming, and cross-ambiguity function calculation, are discussed.
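
The chirp implementation of the Fourier transform referred to here rests on Bluestein's identity nk = (n^2 + k^2 - (k - n)^2) / 2, which turns a DFT into chirp multiplications around a chirp convolution. The sketch below verifies that identity numerically, using a direct convolution for clarity rather than speed; it is not a model of the analog or device-level architectures discussed, and the names are invented for the example.

```python
import numpy as np

def chirp_dft(x):
    """DFT via the chirp (Bluestein) identity: pre-multiply by a chirp,
    convolve with a chirp, post-multiply by a chirp."""
    N = len(x)
    n = np.arange(N)
    a = x * np.exp(-1j * np.pi * n ** 2 / N)              # pre-chirp
    m = np.arange(-(N - 1), N)
    b = np.exp(1j * np.pi * m ** 2 / N)                   # convolution chirp, b[m + N - 1]
    conv = np.array([np.sum(a * b[k - n + N - 1]) for k in range(N)])
    return np.exp(-1j * np.pi * np.arange(N) ** 2 / N) * conv   # post-chirp

x = np.random.default_rng(6).standard_normal(7) + 0j
print(np.allclose(chirp_dft(x), np.fft.fft(x)))           # True
```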

Journal ArticleDOI
N. Aoshima
TL;DR: A new method of measuring the impulse and frequency response of a linear system has been developed that makes it possible to use large signal power compared with the ordinary pulse method, while retaining the advantages of a pulse signal.

Proceedings ArticleDOI
01 Mar 1984
TL;DR: This paper discusses two custom integrated circuits designed to perform the functions of signal correlation and lattice filtering (MA or AR).
Abstract: This paper discusses two custom integrated circuits designed to perform the functions of signal correlation and lattice filtering (MA or AR). Each circuit is decomposed into P operators, each being a direct implementation of the equations. To allow concurrent use of an arbitrary number of operators and to simplify inter-module connections (both within and between chips), a bit-serial architecture was adopted. These chips can be cascaded; computation speed is independent of model order in both types of calculations. These chips have been designed to operate at a sample frequency between 0 and 300 kHz for the correlator, 0 and 150 kHz for the lattice filter.

Proceedings ArticleDOI
19 Mar 1984
TL;DR: This work is an attempt to quantify the important parameters in VLSI implementations for DSP problems and apply the theory to the task of digital filtering.
Abstract: The development of VLSI has changed the important parameters in signal processing algorithms and structures. Technology developments are dictating new and different criteria for "efficient" realizations. In VLSI the number of computations performed per output sample is not enough to measure the "goodness" of a realization/implementation. This work is an attempt to quantify the important parameters in VLSI implementations for DSP problems and apply the theory to the task of digital filtering.

Proceedings ArticleDOI
01 Mar 1984
TL;DR: This paper presents an architecture for computing the Fast Fourier Transform (FFT) using a systolic processor which incorporates an elevator concept to circumvent the requirements for global communication inherent in conventional FFT implementations.
Abstract: Systolic architectures for signal processing are of great interest as they offer a considerable speed improvement over traditional von Neumann computing architectures, and are particularly suitable for VLSI implementation due to the ensuing simple and regular communication structures [1]. This paper presents an architecture for computing the Fast Fourier Transform (FFT) using a systolic processor which incorporates an elevator concept to circumvent the requirements for global communication inherent in conventional FFT implementations. The proposed algorithm is shown to be highly efficient in terms of both hardware and computation time. Architectures are further suggested for the real-time computation of 2D functions such as the Wigner Distribution and the Complex Ambiguity function by using the systolic FFT processor coupled with an input characterising array.


Proceedings ArticleDOI
S. Smith
01 Mar 1984
TL;DR: An approach to musical signal processing, based on additive synthesis, is presented and it is shown that these techniques are becoming economically attractive for analysis and synthesis of sound.
Abstract: Digital hardware may be used to generate waveforms as well as to process already-existing waveforms. Several digital synthesis techniques are in use today, most of whose advantages lie in their computational efficiency rather than their utility. Only the methods of synthesis known as additive and subtractive, however, are accompanied by a suitable analysis technique, allowing the accurate extraction of parameters from real waveforms for subsequent use in synthesis. With the advent of VLSI, these techniques are becoming economically attractive for analysis and synthesis of sound. An approach to musical signal processing, based on additive synthesis, is presented.
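
As a minimal illustration of additive synthesis (the synthesis side only, not the analysis step or the paper's system), the sketch below sums a handful of sinusoidal partials under a crude shared envelope; the frequencies, amplitudes, envelope shape, and function name are all invented for the example.

```python
import numpy as np

def additive_synth(freqs, amps, duration, sample_rate=44100):
    """Additive synthesis: sum sinusoidal partials, each with its own
    frequency and amplitude, under a simple linearly decaying envelope.
    Analysis would instead estimate freqs/amps from a real recording."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    envelope = 1.0 - t / duration                 # crude decaying envelope
    y = np.zeros_like(t)
    for f, a in zip(freqs, amps):
        y += a * envelope * np.sin(2 * np.pi * f * t)
    return y / max(np.max(np.abs(y)), 1e-12)      # normalize to +/- 1

tone = additive_synth(freqs=[220, 440, 660, 880],
                      amps=[1.0, 0.5, 0.3, 0.2], duration=1.0)
print(tone.shape)                                 # (44100,)
```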

Proceedings ArticleDOI
27 Feb 1984
TL;DR: This work shows how folded spectrum analysis can be extended to more than two dimensions using either optical or other processing technologies, and promises a rich variety of computing architectures for the immediate future.
Abstract: A folded spectrum is a one-dimensional spectrum that has been reformatted into two dimensions by cutting it into segments of equal length and arranging the segments in sequential order into a two-dimensional array. Folded spectrum analysis in optical processing allows both spatial dimensions of the optical processor to be used effectively and therefore allows a greater number of spectrum elements to be displayed in parallel. The folded spectrum in optical processing is remarkably similar to the fast Fourier transform (FFT) algorithm in digital processing. The similarities unify many processing concepts and give physical and intuitive insights into the FFT. More importantly, they show how folded spectrum analysis can be extended to more than two dimensions using either optical or other processing technologies. That result promises a rich variety of computing architectures for the immediate future.
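
Using the abstract's own definition, a folded spectrum is easy to compute digitally: fold a one-dimensional magnitude spectrum into a two-dimensional array of equal-length segments. The sketch below illustrates only that reformatting, not the optical processing architectures discussed; the segment length and test signal are invented for the example.

```python
import numpy as np

def folded_spectrum(x, segment_len):
    """Fold a 1-D magnitude spectrum into 2-D: cut it into equal-length
    segments and stack them as rows, so coarse frequency runs down the
    rows and fine frequency runs along the columns."""
    spectrum = np.abs(np.fft.fft(x))
    n_seg = len(spectrum) // segment_len
    return spectrum[: n_seg * segment_len].reshape(n_seg, segment_len)

t = np.arange(2 ** 14)
x = np.sin(2 * np.pi * 0.123 * t)                 # a single tone
folded = folded_spectrum(x, segment_len=128)
print(folded.shape)                               # (128, 128)
```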

Proceedings ArticleDOI
01 Mar 1984
TL;DR: A multiband filter bank, based on complementary filters, polyphase processing, and two separate FFT's, is presented, along with the details of its design and its relevance to psychoacoustic parameters.
Abstract: Digital signal processing can be applied to the task of subjectively improving the quality of existing recordings which are degraded by noise or distortion. An efficient method is based on a multiband filter bank which ensures flat overall response. Such a filter bank, based on complementary filters, polyphase processing, and two separate FFT's, is presented, along with the details of its design and its relevance to psychoacoustic parameters. The filter bank allows the use of multi-band expansion techniques which effectively suppress noise components. The achieved signal enhancement is described and illustrated by audio signals. Although complex in design, the filter bank can be implemented efficiently. The design path from theory to high-level-language simulation to architecture and microcode definition, to microcode justification and to hardware is briefly described.