Book

Discrete-Time Signal Processing

TL;DR: In this book, the authors provide a thorough treatment of the fundamental theorems and properties of discrete-time linear systems, filtering, sampling, and discrete-time Fourier analysis.
Abstract: For senior/graduate-level courses in Discrete-Time Signal Processing. THE definitive, authoritative text on DSP -- ideal for those with an introductory-level knowledge of signals and systems. Written by prominent DSP pioneers, it provides a thorough treatment of the fundamental theorems and properties of discrete-time linear systems, filtering, sampling, and discrete-time Fourier analysis. By focusing on the general and universal concepts in discrete-time signal processing, it remains vital and relevant to the new challenges arising in the field -- without limiting itself to specific technologies with relatively short life spans.
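
A minimal sketch of the kind of discrete-time filtering and frequency-domain analysis the book treats; the one-pole filter, sampling rate, and test signal below are illustrative choices, not examples from the text:

```python
import numpy as np
from scipy import signal

# One-pole IIR lowpass: y[n] = x[n] + 0.9 * y[n-1], written as the
# difference-equation coefficient arrays b (feedforward) and a (feedback).
b, a = [1.0], [1.0, -0.9]

# Filter a noisy 50 Hz sinusoid sampled at fs = 1 kHz.
fs = 1000.0
n = np.arange(500)
x = np.sin(2 * np.pi * 50.0 * n / fs) + 0.5 * np.random.randn(n.size)
y = signal.lfilter(b, a, x)

# Sample the DTFT of the impulse response on [0, pi) to inspect the
# filter's magnitude response.
w, h = signal.freqz(b, a)
magnitude_db = 20.0 * np.log10(np.abs(h))
```
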
Citations
Journal Article • DOI
01 Aug 2008
TL;DR: This paper advocates the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction.
Abstract: Many recent computational photography techniques decompose an image into a piecewise smooth base layer, containing large-scale variations in intensity, and a residual detail layer capturing the smaller-scale details in the image. In many of these applications, it is important to control the spatial scale of the extracted details, and it is often desirable to manipulate details at multiple scales, while avoiding visual artifacts. In this paper we introduce a new way to construct edge-preserving multi-scale image decompositions. We show that current base-detail decomposition techniques, based on the bilateral filter, are limited in their ability to extract detail at arbitrary scales. Instead, we advocate the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction. After describing this operator, we show how to use it to construct edge-preserving multi-scale decompositions, and compare it to the bilateral filter, as well as to other schemes. Finally, we demonstrate the effectiveness of our edge-preserving decompositions in the context of LDR and HDR tone mapping, detail enhancement, and other applications.

1,381 citations
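
The smoothing operator advocated above solves a sparse linear system: a data term keeps the output close to the input, while a gradient penalty, weighted to relax across strong edges, does the coarsening. Below is a minimal sketch of that idea; the log-luminance weights and parameter values (lam, alpha, eps) follow the paper's general formulation but should be treated as assumptions, not the authors' reference implementation:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def wls_smooth(g, lam=1.0, alpha=1.2, eps=1e-4):
    """Edge-preserving smoothing of a grayscale image g (floats in [0, 1]).

    Solves (I + lam * Lg) u = g, where Lg penalizes gradients of u with
    weights that shrink wherever the guide image g has strong edges.
    """
    h, w = g.shape
    n = h * w
    log_g = np.log(g + eps)

    # Smoothness weights: small where the log-luminance gradient is large.
    wy = 1.0 / (np.abs(np.diff(log_g, axis=0)) ** alpha + eps)   # (h-1, w)
    wx = 1.0 / (np.abs(np.diff(log_g, axis=1)) ** alpha + eps)   # (h, w-1)

    # Forward-difference operators on the row-major flattened grid.
    Dy = sp.kron(sp.diags([-1, 1], [0, 1], shape=(h - 1, h)), sp.eye(w))
    Dx = sp.kron(sp.eye(h), sp.diags([-1, 1], [0, 1], shape=(w - 1, w)))

    Lg = (Dy.T @ sp.diags(wy.ravel()) @ Dy
          + Dx.T @ sp.diags(wx.ravel()) @ Dx)
    u = spla.spsolve((sp.eye(n) + lam * Lg).tocsc(), g.ravel())
    return u.reshape(h, w)

# Base-detail decomposition: larger lam gives a coarser base layer.
# base = wls_smooth(img, lam=4.0); detail = img - base
```
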

Book
Richard Szeliski
31 Dec 2006
TL;DR: The basic motion models underlying alignment and stitching algorithms are described, along with effective direct (pixel-based) and feature-based alignment algorithms and the blending algorithms used to produce seamless mosaics.
Abstract: This tutorial reviews image alignment and image stitching algorithms. Image alignment algorithms can discover the correspondence relationships among images with varying degrees of overlap. They are ideally suited for applications such as video stabilization, summarization, and the creation of panoramic mosaics. Image stitching algorithms take the alignment estimates produced by such registration algorithms and blend the images in a seamless manner, taking care to deal with potential problems such as blurring or ghosting caused by parallax and scene movement as well as varying image exposures. This tutorial reviews the basic motion models underlying alignment and stitching algorithms, describes effective direct (pixel-based) and feature-based alignment algorithms, and describes blending algorithms used to produce seamless mosaics. It ends with a discussion of open research problems in the area.

1,226 citations
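
A minimal sketch of the feature-based alignment step described above, using OpenCV primitives; the choice of ORB features, brute-force Hamming matching, and a RANSAC-estimated homography is one illustrative pipeline, not the tutorial's own code:

```python
import cv2
import numpy as np

def align_pair(img1, img2):
    """Estimate a homography mapping img1 onto img2 from feature matches."""
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with cross-checking for reliability.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches (parallax, moving objects).
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# Warp img1 into img2's frame; a full stitcher would then blend the overlap.
# H = align_pair(img1, img2)
# warped = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
```
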

Journal Article • DOI
TL;DR: This paper describes a new type of data acquisition system, called a random demodulator, constructed from robust, readily available components, and provides a detailed theoretical analysis of the system's performance that supports the empirical observations.
Abstract: Wideband analog signals push contemporary analog-to-digital conversion (ADC) systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the band limit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its band limit in hertz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W hertz. In contrast to Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system's performance that supports the empirical observations.

1,138 citations
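
A small simulation of the system described above: chip the signal with a random +/-1 sequence, integrate and dump, then recover the sparse spectrum. The paper recovers via convex programming; for a self-contained sketch, plain orthogonal matching pursuit is substituted here, and all sizes (W, M, K) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
W, M, K = 128, 32, 3                       # band limit, measurements, tones

# K-sparse spectrum s and its time-domain signal x = F^H s.
F = np.fft.fft(np.eye(W)) / np.sqrt(W)     # unitary DFT matrix
s = np.zeros(W, dtype=complex)
idx = rng.choice(W, K, replace=False)
s[idx] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x = F.conj().T @ s

# Random demodulator: chip with a random +/-1 sequence, then integrate
# and dump, keeping M samples per W Nyquist-rate samples.
d = rng.choice([-1.0, 1.0], W)
H = np.kron(np.eye(M), np.ones(W // M))
y = H @ (d * x)

# For recovery, write y = Phi @ s and exploit the K-sparsity of s.
Phi = H @ np.diag(d) @ F.conj().T

# Orthogonal matching pursuit stands in for the paper's convex program.
support, r = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(Phi.conj().T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef

s_hat = np.zeros(W, dtype=complex)
s_hat[support] = coef
print("relative error:", np.linalg.norm(s_hat - s) / np.linalg.norm(s))
```
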

Journal Article • DOI
TL;DR: Part 1 shows how conditions on the $c_k$ lead to approximation properties and orthogonality properties of the wavelets, and Part 2 describes the recursive algorithms that decompose and reconstruct f.
Abstract: Wavelets are new families of basis functions that yield the representation $f(x) = \sum_{j,k} b_{jk} W(2^j x - k)$. Their construction begins with the solution $\phi(x)$ to a dilation equation with coefficients $c_k$. Then W comes from $\phi$, and the basis comes by translation and dilation of W. It is shown in Part 1 how conditions on the $c_k$ lead to approximation properties and orthogonality properties of the wavelets. Part 2 describes the recursive algorithms (also based on the $c_k$) that decompose and reconstruct f. The object of wavelets is to localize as far as possible in both time and frequency, with efficient algorithms.

1,028 citations
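
The dilation equation above can be iterated numerically (the cascade algorithm) to approximate $\phi$ on a dyadic grid. The sketch below uses the Daubechies-4 coefficients as one concrete choice of the $c_k$, with the normalization $\sum_k c_k = 2$ assumed:

```python
import numpy as np

# Daubechies-4 dilation coefficients c_k, normalized so sum(c) == 2,
# as the dilation equation phi(x) = sum_k c_k phi(2x - k) requires.
s3 = np.sqrt(3.0)
c = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0

# Cascade algorithm: iterate phi_{n+1}(x) = sum_k c_k phi_n(2x - k),
# starting from a unit spike. After each pass the dyadic grid spacing
# halves; v converges to samples of phi on its support [0, 3].
v = np.array([1.0])
for n in range(8):
    c_up = np.zeros(2**n * (len(c) - 1) + 1)
    c_up[:: 2**n] = c                  # taps spaced 2**n samples apart
    v = np.convolve(v, c_up)
x = np.arange(v.size) / 2.0**8         # dyadic sample locations

# For comparison, pywt.Wavelet('db2').wavefun(level=8) returns phi on
# the same grid (up to normalization).
```
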

Book Chapter • DOI
14 Apr 1998
TL;DR: A number of attacks are presented that enable copyright marks and other information hidden in digital pictures, video, audio and other multimedia objects to be removed or otherwise rendered unusable.
Abstract: In the last few years, a large number of schemes have been proposed for hiding copyright marks and other information in digital pictures, video, audio and other multimedia objects. We describe some contenders that have appeared in the research literature and in the field; we then present a number of attacks that enable the information hidden by them to be removed or otherwise rendered unusable.

1,004 citations
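
As a toy illustration of why naive hiding schemes fall to such attacks, the sketch below embeds a mark in the least-significant-bit plane and shows that a visually negligible re-quantization erases it completely. This construction is illustrative only; it is not one of the schemes or attacks surveyed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, size=cover.shape, dtype=np.uint8)  # 1 bit per pixel

# Naive embedding: overwrite each pixel's least significant bit (LSB).
marked = (cover & 0xFE) | mark
assert np.array_equal(marked & 1, mark)      # mark extracts perfectly

# A trivial attack: strip the LSB plane (re-quantize to 7 bits), changing
# each pixel by at most one grey level yet erasing the embedded plane.
attacked = marked & 0xFE
recovered = attacked & 1                     # all zeros now
print("mark bits recovered:", np.mean(recovered == mark))  # ~0.5, chance level
```
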