
Showing papers on "Upsampling" published in 1994


PatentDOI
TL;DR: This patent addresses the peak-level increase (PLI) caused by signal processing such as perceptual coding: the peak amplitude of the signal reproduced by the perceptual decoder may exceed the capabilities of the broadcast transmitter, even though the peak amplitude of the signal input to the perceptual encoder is properly limited.
Abstract: The invention relates to limiting the peak amplitude of an audio signal in one or more frequency subbands while preserving the apparent loudness. Applications such as a Studio-Transmitter Link (STL) for broadcasting sometimes use perceptual coding to deliver an audio signal originating from a studio to a broadcast transmitter. The peak amplitude of the audio signal will have been limited by means of a limiter or otherwise, and a perceptual encoder reduces the informational capacity requirements of the audio signal for transmission across a link to a broadcast transmitter. A perceptual decoder receives the coded signal from the link and reproduces the audio signal for the broadcast transmitter. The peak amplitude of the signal reproduced by the perceptual decoder may sometimes exceed the capabilities of the broadcast transmitter even though the peak amplitude of the audio signal input to the perceptual encoder is properly limited. This increase in peak level is referred to as "peak-level increase" or PLI. Transmitter overload resulting from PLI can create audible distortion and/or impermissible broadcast conditions such as excessive FM deviation. Various embodiments of apparatus and methods are described that estimate the PLI caused by signal processing such as perceptual coding and apply corrective gain to portions of the audio signal bandwidth so as to limit peak amplitude while preserving apparent loudness.
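
A minimal sketch of the core idea, assuming hypothetical `encode`/`decode` callables standing in for a real perceptual codec; the patent itself works per subband to preserve apparent loudness, which this wideband simplification omits:

```python
import numpy as np

def estimate_pli_db(x, encode, decode):
    """Estimate peak-level increase: peak after a coding round trip
    relative to the (already limited) input peak, in dB."""
    y = decode(encode(x))
    return 20 * np.log10(np.max(np.abs(y)) / np.max(np.abs(x)))

def limit_peaks(x, encode, decode, peak_limit=1.0):
    """Apply corrective gain so the decoded peak stays within the
    transmitter's capability (wideband simplification of the
    per-subband gains described in the patent)."""
    peak = max(np.max(np.abs(decode(encode(x)))), 1e-12)
    return x * min(1.0, peak_limit / peak)
```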

77 citations


Journal ArticleDOI
TL;DR: A complete theory for the analysis of arbitrary combinations of upsamplers, downsamplers, and filters in multiple dimensions is developed, along with a number of new results in the theory of integer matrices that are relevant to the filter bank problem.
Abstract: Solutions to the problem of designing rational sampling rate filter banks in one dimension have previously been proposed. The ability to interchange the operations of upsampling, downsampling, and filtering plays an important role in these solutions. The present paper develops a complete theory for the analysis of arbitrary combinations of upsamplers, downsamplers, and filters in multiple dimensions. Although some of the simpler results are well known, the more difficult results concerning swapping upsamplers and downsamplers, and variations thereof, are new. As an application of this theory, the authors obtain algebraic reductions of the general multidimensional rational sampling rate problem to a multidimensional uniform filter bank problem. However, issues concerning the design of the filters themselves are not addressed. In multiple dimensions, upsampling and downsampling operators are determined by integer matrices (as opposed to scalars in one dimension), and the noncommutativity of matrices makes the problem considerably more difficult. Cascades of upsamplers and downsamplers in one dimension are easy to analyze. The new results for the analysis of multidimensional upsampling and downsampling operators are derived using the Aryabhatta/Bezout identity over integer matrices as a fundamental tool. A number of new results in the theory of integer matrices that are relevant to the filter bank problem are also developed. Special cases of some of the results pertaining to the commutativity of upsamplers/downsamplers have been obtained in parallel by several authors.
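
The one-dimensional version of the commutativity result is easy to check numerically: an expander by L and a decimator by M commute exactly when gcd(L, M) = 1. A quick sketch (not the paper's integer-matrix formulation):

```python
import numpy as np

def upsample(x, L):
    """Expander: insert L-1 zeros between samples."""
    y = np.zeros(len(x) * L)
    y[::L] = x
    return y

def downsample(x, M):
    """Decimator: keep every M-th sample."""
    return x[::M]

rng = np.random.default_rng(0)
x = rng.standard_normal(24)

L, M = 3, 4                        # gcd(3, 4) == 1, so the operators commute
a = downsample(upsample(x, L), M)
b = upsample(downsample(x, M), L)
print(np.allclose(a, b))           # True; fails for, e.g., L = 2, M = 4
```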

62 citations


Journal ArticleDOI
TL;DR: Two approaches for reducing the computation time of discrete-time TFDs are introduced: spectrogram-based approximations to real-valued DTFDs whose cost is further reduced by frequency downsampling, and DTFDs that admit fast evaluations over sparse sets of time-frequency samples.
Abstract: Cohen's class of time-frequency distributions (TFDs) has significant potential for the analysis of complex signals. In order to evaluate the TFD of a signal using its samples, discrete-time TFDs (DTFDs) have been defined as the Fourier transform of a smoothed discrete autocorrelation. Existing algorithms evaluate real-valued DTFDs using FFTs of the conjugate-symmetric autocorrelation. Although the computation required to smooth the autocorrelation is often greater than that for the FFT, there are no widely applicable fast algorithms for this part of the processing. Since the FFT is relatively inexpensive, downsampling is ineffective for reducing computation. If the DTFD needs to be evaluated at only a few frequencies for each time instant, the cost per time-frequency sample can be extremely high. The authors introduce two approaches for reducing the computation time of DTFDs. First, they define approximations to real-valued DTFDs, using spectrograms, that admit fast, space-saving evaluations; frequency downsampling reduces the computation time of these approximations. Next, they define DTFDs that admit fast evaluations over sparse sets of time-frequency samples. A single short-time Fourier transform is calculated, after which DTFD time-frequency samples can be evaluated at an additional, fixed cost per sample.
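
As a concrete (if unoptimized) baseline for what such algorithms accelerate, a discrete pseudo-Wigner-Ville distribution, one member of Cohen's class, can be evaluated as the FFT of a windowed, conjugate-symmetric instantaneous autocorrelation at each time instant. A sketch; the window length and normalization are assumptions:

```python
import numpy as np

def pseudo_wigner(x, win_len=33):
    """Discrete pseudo-Wigner-Ville sketch: at each time n, form the
    instantaneous autocorrelation r(tau) = x[n+tau] * conj(x[n-tau])
    over a lag window, then FFT it.  Because r is conjugate-symmetric,
    the resulting distribution is real-valued."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    half = win_len // 2
    W = np.zeros((N, win_len))
    for n in range(N):
        taumax = min(half, n, N - 1 - n)      # lags limited by edges
        tau = np.arange(-taumax, taumax + 1)
        r = x[n + tau] * np.conj(x[n - tau])  # instantaneous autocorrelation
        row = np.zeros(win_len, dtype=complex)
        row[tau + half] = r
        W[n] = np.real(np.fft.fft(np.fft.ifftshift(row)))
    return W

# Example: a linear chirp shows the expected diagonal ridge.
t = np.arange(256)
W = pseudo_wigner(np.exp(1j * np.pi * 0.4 * t**2 / 256))
```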

47 citations


Journal ArticleDOI
TL;DR: Intra-block filtering techniques are reviewed to highlight the limitations implied by small block dimensions, and hybrid techniques, which apply variable-length FIR filters after the discard of low-order DCT coefficients, are introduced to increase computational efficiency.
Abstract: The extensive use of discrete cosine transform (DCT) techniques in image coding motivates the investigation of filtering and downsampling methods that act directly in the DCT domain. As DCT image transforms usually operate on blocks, it is useful for DCT filtering techniques to preserve the block dimension. In this context, the present paper first reviews intra-block filtering techniques to highlight the limitations implied by small block dimensions. To overcome the artefacts introduced by this method and to satisfy filtering design constraints that are usually defined in the Fourier domain, inter-block techniques are developed starting from the implementation of FIR filtering. Inter-block schemes do not exhibit such limitations, but their computational cost has to be taken into account. In addition, hybrid techniques, which use variable-length FIR filters after the discard of low-order DCT coefficients, are introduced to increase computational efficiency; in this case, the introduced aliasing has to be kept at tolerable values. The amount of tolerable aliasing strictly depends on the subsequent operations applied to the filtered and downsampled image. The numerical examples reported could form a basis for error estimation and for evaluating the trade-off between performance and computational complexity.
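
For illustration, one common DCT-domain decimation (not necessarily the paper's hybrid scheme): downsample an 8x8 block by two by retaining only the 4x4 low-frequency quadrant of its DCT coefficients, which simultaneously acts as the low-pass filter, then inverse-transforming at the smaller size. A sketch; the factor 1/2 follows from the orthonormal 2-D DCT normalization:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_downsample_2x(block):
    """Downsample an 8x8 block to 4x4 entirely in the DCT domain:
    keep the low-frequency quadrant of the coefficients and invert."""
    C = dctn(block, norm='ortho')      # 8x8 DCT-II coefficients
    C_low = C[:4, :4] / 2.0            # discard high frequencies; rescale
    return idctn(C_low, norm='ortho')  # 4x4 spatial block

block = np.arange(64, dtype=float).reshape(8, 8)
print(dct_downsample_2x(block).round(2))
```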

22 citations


Proceedings ArticleDOI
09 Jun 1994
TL;DR: The Frequency domain Replication and Downsampling (FReD) algorithm is discussed, which enables the acquisition of data at normal spotlight-mode rates and which does not require the computation of FFTs any larger than those required for normal spotlight-mode processing.
Abstract: Migration processing exactly accounts for the wavefront curvature over the imaged scene. Migration processing is therefore capable of forming high-resolution SAR images when the data is acquired over a large synthetic aperture collection angle. Because migration processing requires phase compensation to a line corresponding to the nominal SAR flight path, the phase history is chirped over a very large bandwidth, requiring a very high sample rate to prevent aliasing in the frequency spectrum. The sample rate is determined by the size of the synthetic aperture collection angle. When migration processing is applied to a spotlight-mode SAR, this sampling rate can be much higher than that required for normal spotlight-mode processing. In the latter case, the phase history is motion compensated to scene center and the sample rate is determined by the spot size. Higher sampling rates result in large FFTs and may cause range ambiguity problems. ERIM has pursued the development of a variation on migration processing, called the Frequency domain Replication and Downsampling (FReD) algorithm, which enables the acquisition of data at normal spotlight-mode rates and does not require the computation of FFTs any larger than those required for normal spotlight-mode processing. The FReD algorithm is based on the fact that when a discrete, aliased spectrum is replicated a sufficient number of times, the resultant spectrum will contain the desired signal spectrum. The algorithm is discussed in two articles by Prati et al. Subsequent processing steps extract the desired portion of the spectrum to form an image. This paper reviews migration processing, discusses the FReD algorithm, and presents expressions for the number of operations required for its implementation. Migration-processed, spotlight-mode SAR imagery derived from airborne-collected data and demonstrating the utility of FReD is presented.
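
The replication property that FReD exploits has a simple discrete-signal analogue: zero-insertion upsampling of a sampled sequence replicates its DFT end to end, so the periodic extension of an aliased spectrum contains the wider-band spectrum one is after. A toy demonstration, not the SAR processing chain itself; the chirp parameters are arbitrary:

```python
import numpy as np

N, K = 128, 4
x = np.exp(1j * 0.3 * np.arange(N)**2 / N)   # toy chirp
y = np.zeros(N * K, dtype=complex)
y[::K] = x                                    # upsample by K with zeros

X = np.fft.fft(x)
Y = np.fft.fft(y)
# The spectrum of the zero-inserted sequence is K end-to-end copies
# of the original (aliased) spectrum.
print(np.allclose(Y, np.tile(X, K)))          # True
```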

13 citations


Journal ArticleDOI
Engel Roza
TL;DR: It is shown that recursive bit-stream conversion is a generalization of a digital sigma-delta modulator or noise shaper, and that it can perform the major part of the required signal processing at the lower frequency, rather than at the higher frequency as in conventional schemes.
Abstract: Further results on the recursive bit-stream conversion technique are presented. In particular, the sample-rate conversion problem is studied: converting a low-frequency bit-parallel sequence with high word accuracy (such as a PCM signal) into a high-frequency sequence with low word accuracy (ultimately the 1-bit bit-stream format). It is shown that recursive bit-stream conversion is a generalization of a digital sigma-delta modulator or noise shaper. Two important advantages of recursive bit-stream conversion are emphasized. One is that upsampling and noise shaping are performed simultaneously, so that in theory separate upsampling filters are superfluous; it is shown that a modest performance penalty has to be paid for this property. The other is its capability to perform the major part of the required signal processing at the lower frequency, rather than at the higher frequency as in conventional schemes. The developed theory has been verified with simulation examples.
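
For reference, a minimal first-order digital sigma-delta modulator of the kind the paper generalizes: zero-order-hold upsampling by the oversampling ratio followed by 1-bit quantization with error feedback. A sketch of the conventional structure; recursive bit-stream conversion proper merges the upsampling into the recursion, which this version does not:

```python
import numpy as np

def sigma_delta_1bit(pcm, osr=64):
    """Conventional scheme: upsample (zero-order hold), then noise-shape
    with a first-order loop quantizing to +/-1."""
    x = np.repeat(np.asarray(pcm, dtype=float), osr)
    y = np.empty_like(x)
    acc = 0.0
    fb = 0.0                       # previous 1-bit output (feedback)
    for n, v in enumerate(x):
        acc += v - fb              # integrate the quantization error
        y[n] = 1.0 if acc >= 0.0 else -1.0
        fb = y[n]
    return y

bits = sigma_delta_1bit(np.sin(2 * np.pi * np.arange(100) / 100), osr=32)
```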

9 citations