About: Filter bank is a research topic. Over its lifetime, 10,465 publications have appeared on this topic, receiving 189,911 citations. The topic is also known as: filterbank.
01 Jul 1992
TL;DR: In this work, a review of discrete-time multi-input multi-output (MIMO) LTI systems and linear phase perfect reconstruction QMF banks is presented.
Abstract: 1. Introduction. 2. Review of Discrete-Time Systems. 3. Review of Digital Filters. 4. Fundamentals of Multirate Systems. 5. Maximally Decimated Filter Banks. 6. Paraunitary Perfect Reconstruction Filter Banks. 7. Linear Phase Perfect Reconstruction QMF Banks. 8. Cosine Modulated Filter Banks. 9. Finite Word Length Effects. 10. Multirate Filter Bank Theory and Related Topics. 11. The Wavelet Transform and Relation to Multirate Filter Banks. 12. Multidimensional Multirate Systems. 13. Review of Discrete-Time Multi-Input Multi-Output LTI Systems. 14. Paraunitary and Lossless Systems. Appendices. Bibliography. Index.
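The maximally decimated filter banks of Chapter 5 can be illustrated with the simplest perfect-reconstruction pair, the two-channel Haar QMF bank: filter, downsample by two, then upsample, filter with the time-reversed (paraunitary) synthesis filters, and sum. The sketch below is illustrative only and not taken from the book; all function names and the index phasing are our own choices.

```python
import numpy as np

h0 = np.array([1.0, 1.0]) / np.sqrt(2)   # lowpass analysis filter (Haar)
h1 = np.array([1.0, -1.0]) / np.sqrt(2)  # highpass analysis filter

def analysis(x):
    """Filter each channel, then downsample by 2 (keep odd-indexed samples)."""
    low = np.convolve(x, h0)[1::2]
    high = np.convolve(x, h1)[1::2]
    return low, high

def synthesis(low, high):
    """Upsample by 2, filter with time-reversed analysis filters, and sum."""
    n = 2 * len(low)
    up_low = np.zeros(n + 1);  up_low[1::2] = low
    up_high = np.zeros(n + 1); up_high[1::2] = high
    # For a paraunitary (orthonormal) bank, synthesis filters are the
    # time-reversed analysis filters.
    y = np.convolve(up_low[:n], h0[::-1]) + np.convolve(up_high[:n], h1[::-1])
    return y[1:n + 1]

x = np.arange(8.0)
low, high = analysis(x)
recon = synthesis(low, high)   # perfect reconstruction: recon equals x
```

Each channel runs at half the input rate (maximal decimation), yet the input is recovered exactly because the Haar pair satisfies the perfect-reconstruction condition.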
TL;DR: A "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information is pursued and it is shown that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves.
Abstract: The limitations of commonly used separable extensions of one-dimensional transforms, such as the Fourier and wavelet transforms, in capturing the geometry of image edges are well known. In this paper, we pursue a "true" two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information. The main challenge in exploring geometry in images comes from the discrete nature of the data. Thus, unlike other approaches, such as curvelets, that first develop a transform in the continuous domain and then discretize for sampled data, our approach starts with a discrete-domain construction and then studies its convergence to an expansion in the continuous domain. Specifically, we construct a discrete-domain multiresolution and multidirection expansion using nonseparable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments, and, thus, it is named the contourlet transform. The discrete contourlet transform has a fast iterated filter bank algorithm that requires an order N operations for N-pixel images. Furthermore, we establish a precise link between the developed filter bank and the associated continuous-domain contourlet expansion via a directional multiresolution analysis framework. We show that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing applications.
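The contourlet construction pairs a multiscale decomposition (a Laplacian pyramid) with a directional filter bank. As a rough illustration of the pyramid stage only, the sketch below uses a crude 2x2 box filter standing in for the paper's actual lowpass filters; the function names are ours, and perfect reconstruction here holds by construction of the residual, not because these are the paper's filters.

```python
import numpy as np

def downsample2(img):
    """2x2 box blur then decimate; a crude stand-in for a pyramid lowpass."""
    H, W = img.shape
    img = img[:H // 2 * 2, :W // 2 * 2]
    return img.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    """Nearest-neighbour upsampling back to the original grid."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def lp_analysis(img):
    """One Laplacian-pyramid level: coarse approximation + bandpass residual."""
    coarse = downsample2(img)
    detail = img - upsample2(coarse)   # residual carries the edge information
    return coarse, detail

def lp_synthesis(coarse, detail):
    """Exact by construction: add the residual back to the upsampled coarse."""
    return upsample2(coarse) + detail

rng = np.random.default_rng(2)
img = rng.standard_normal((16, 16))
coarse, detail = lp_analysis(img)
recon = lp_synthesis(coarse, detail)
```

In the contourlet transform the bandpass residual is then fed to a directional filter bank, which splits it into the contour-segment directions; that second stage is omitted here.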
01 Sep 2009
TL;DR: It is shown that using non-linearities that include rectification and local contrast normalization is the single most important ingredient for good accuracy on object recognition benchmarks and that two stages of feature extraction yield better accuracy than one.
Abstract: In many recent object recognition systems, feature extraction stages are generally composed of a filter bank, a non-linear transformation, and some sort of feature pooling layer. Most systems use only one stage of feature extraction in which the filters are hard-wired, or two stages where the filters in one or both stages are learned in supervised or unsupervised mode. This paper addresses three questions: 1. How do the non-linearities that follow the filter banks influence the recognition accuracy? 2. Does learning the filter banks in an unsupervised or supervised manner improve the performance over random filters or hard-wired filters? 3. Is there any advantage to using an architecture with two stages of feature extraction, rather than one? We show that using non-linearities that include rectification and local contrast normalization is the single most important ingredient for good accuracy on object recognition benchmarks. We show that two stages of feature extraction yield better accuracy than one. Most surprisingly, we show that a two-stage system with random filters can yield almost 63% recognition rate on Caltech-101, provided that the proper non-linearities and pooling layers are used. Finally, we show that with supervised refinement, the system achieves state-of-the-art performance on the NORB dataset (5.6%), and unsupervised pre-training followed by supervised refinement produces good accuracy on Caltech-101 (> 65%), and the lowest known error rate on the undistorted, unprocessed MNIST dataset (0.53%).
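The pipeline the abstract describes — filter bank, rectification, local contrast normalization, pooling — can be sketched with NumPy. This is a simplified illustration, not the paper's implementation: the filters are random (echoing the paper's finding that random filters work surprisingly well given the right non-linearities), a box window stands in for the paper's Gaussian normalization window, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))

def conv2d_valid(img, kern):
    """Naive valid-mode 2-D correlation (no padding)."""
    kh, kw = kern.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

# Stage 1: random filter bank of four 5x5 filters.
filters = rng.standard_normal((4, 5, 5))
maps = np.stack([conv2d_valid(image, f) for f in filters])   # (4, 28, 28)

# Stage 2: rectification (absolute-value non-linearity).
rectified = np.abs(maps)

# Stage 3: local contrast normalization (simplified): divide by the local
# standard deviation over a window pooled across all feature maps.
def box_local_std(stack, size=5):
    pad = size // 2
    padded = np.pad(stack, ((0, 0), (pad, pad), (pad, pad)), mode='edge')
    out = np.empty_like(stack)
    for i in range(stack.shape[1]):
        for j in range(stack.shape[2]):
            out[:, i, j] = padded[:, i:i + size, j:j + size].std()
    return out

normalized = rectified / (box_local_std(rectified) + 1e-6)

# Stage 4: max pooling over non-overlapping 2x2 regions.
C, H, W = normalized.shape
pooled = normalized[:, :H // 2 * 2, :W // 2 * 2] \
    .reshape(C, H // 2, 2, W // 2, 2).max(axis=(2, 4))       # (4, 14, 14)
```

The point of the sketch is the ordering of the stages; in the paper it is precisely the rectification and normalization steps, not the filters, that dominate accuracy.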
TL;DR: It turns out that EMD acts essentially as a dyadic filter bank resembling those involved in wavelet decompositions, and the hierarchy of the extracted modes may be similarly exploited for getting access to the Hurst exponent.
Abstract: Empirical mode decomposition (EMD) has recently been pioneered by Huang et al. for adaptively representing nonstationary signals as sums of zero-mean amplitude modulation frequency modulation components. In order to better understand the way EMD behaves in stochastic situations involving broadband noise, we report here on numerical experiments based on fractional Gaussian noise. In such a case, it turns out that EMD acts essentially as a dyadic filter bank resembling those involved in wavelet decompositions. It is also pointed out that the hierarchy of the extracted modes may be similarly exploited for getting access to the Hurst exponent.
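A dyadic filter bank of the kind EMD is reported to emulate can be sketched directly: each stage splits the signal into half-band lowpass and highpass parts and downsamples by two, so band k covers roughly one octave. The sketch below uses Haar filters and white noise as a stand-in for fractional Gaussian noise; it illustrates the dyadic structure, not the EMD algorithm itself, and all names are ours.

```python
import numpy as np

def haar_split(x):
    """One orthonormal Haar stage: decimated half-band low and high outputs."""
    x = x[:len(x) // 2 * 2]                  # truncate to even length
    pairs = x.reshape(-1, 2)
    low = pairs.sum(axis=1) / np.sqrt(2)     # half-band lowpass, decimated
    high = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # half-band highpass
    return low, high

def dyadic_bank(x, levels):
    """Iterate the split on the lowpass branch: one octave band per level."""
    bands = []
    for _ in range(levels):
        x, high = haar_split(x)
        bands.append(high)
    bands.append(x)                          # residual lowpass approximation
    return bands

rng = np.random.default_rng(1)
signal = rng.standard_normal(1024)           # white-noise stand-in for fGn
bands = dyadic_bank(signal, levels=5)
variances = [b.var() for b in bands]
```

For white noise the octave bands carry roughly equal per-sample variance; for fractional Gaussian noise the log-variances of successive bands (or, per the paper, of successive EMD modes) decay linearly with level, and that slope is what gives access to the Hurst exponent.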