
Showing papers on "Upsampling published in 2006"


Journal ArticleDOI
TL;DR: This paper presents a simple yet efficient algorithm for multifocus image fusion, using a multiresolution signal decomposition scheme based on a nonlinear wavelet constructed with morphological operations.

144 citations


Journal ArticleDOI
TL;DR: In this article, the authors demonstrate how the popular overlap-save (OS) fast convolution filtering technique can be extended to create a flexible and computationally efficient bank of filters, with frequency translation and decimation implemented in the frequency domain.
Abstract: This paper demonstrates how the popular overlap-save (OS) fast convolution filtering technique can be extended to create a flexible and computationally efficient bank of filters, with frequency translation and decimation implemented in the frequency domain. The paper also provides some tips for choosing an appropriate fast Fourier transform (FFT) size. It also presents implementation guidance to streamline this powerful multichannel filtering, down-conversion, and decimation process.
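The overlap-save kernel that this filter bank builds on can be sketched in a few lines of numpy (a baseline single-channel illustration only; the paper's frequency-domain translation and decimation extensions are not shown, and the `nfft` value is an arbitrary choice):

```python
import numpy as np

def overlap_save(x, h, nfft):
    """Filter x with FIR h via overlap-save fast convolution.

    Each nfft-point FFT block yields nfft - len(h) + 1 valid output
    samples; the first len(h) - 1 samples of every circular-convolution
    block overlap the previous block and are discarded.
    """
    m = len(h)
    step = nfft - m + 1                          # valid samples per block
    H = np.fft.fft(h, nfft)                      # filter spectrum, computed once
    xp = np.concatenate([np.zeros(m - 1), x])    # prime the initial overlap
    y = []
    for start in range(0, len(x), step):
        block = xp[start:start + nfft]
        if len(block) < nfft:                    # zero-pad the final block
            block = np.concatenate([block, np.zeros(nfft - len(block))])
        yblock = np.fft.ifft(np.fft.fft(block) * H)
        y.append(yblock[m - 1:m - 1 + step].real)  # drop the corrupted overlap
    return np.concatenate(y)[:len(x)]
```

The output matches direct time-domain filtering (`np.convolve(x, h)` truncated to the input length) for any valid block size.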

59 citations


Patent
Shijun Sun1
28 Apr 2006
TL;DR: In this paper, the authors present block-based methods and systems for residual upsampling in spatially scalable video coding.
Abstract: Embodiments of the present invention comprise methods and systems for block-based residual upsampling. Some embodiments of the present invention comprise methods and systems for residual upsampling for spatially scalable video coding.

53 citations


Proceedings ArticleDOI
19 Jan 2006
TL;DR: This paper considers three video compression models and describes the application of super-resolution techniques as a way to post-process and upsample compressed video sequences.
Abstract: The term super-resolution is typically used in the literature to describe the process of obtaining a high resolution (HR) image or a sequence of HR images from a set of low resolution (LR) observations. This term has been applied primarily to spatial and temporal resolution enhancement. However, intentional pre-processing and downsampling can be applied during encoding and super-resolution techniques to upsample the image can be applied during decoding when video compression is the main objective. In this paper we consider the following three video compression models. The first one simply compresses the sequence using any of the available standard compression methods, the second one pre-processes (without downsampling) the image sequence before compression, so that post-processing (without upsampling) is applied to the compressed sequence. The third model includes downsampling in the pre-processing stage and the application of a super resolution technique during decoding. In this paper we describe these three models but concentrate on the application of super-resolution techniques as a way to post-process and upsample compressed video sequences. Experimental results are provided on a wide range of bitrates for two very important applications: format conversion between different platforms and scalable video coding.

34 citations


Journal Article
TL;DR: A new signal-processing analysis of the bilateral filter is proposed which complements the recent studies that analyzed it as a PDE or as a robust statistical estimator and develops a novel bilateral filtering acceleration using downsampling in space and intensity.
Abstract: The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and a fast version has been proposed. Unfortunately, little is known about the accuracy of such acceleration. In this paper, we propose a new signal-processing analysis of the bilateral filter, which complements the recent studies that analyzed it as a PDE or as a robust statistics estimator. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using a downsampling in space and intensity. This affords a principled expression of the accuracy in terms of bandwidth and sampling. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. The bilateral filter can then be expressed as simple linear convolutions in this augmented space followed by two simple nonlinearities. This allows us to derive simple criteria for down-sampling the key operations and to achieve important acceleration of the bilateral filter. We show that, for the same running time, our method is significantly more accurate than previous acceleration techniques.
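As a concrete reference point for the filter being accelerated, here is a minimal brute-force 1-D bilateral filter in numpy. This sketch shows only the baseline filter, not the paper's downsampled acceleration; the sigma values, window radius, and step-edge test signal are illustrative choices:

```python
import numpy as np

def bilateral_1d(x, sigma_s=2.0, sigma_r=0.2, radius=6):
    """Brute-force 1-D bilateral filter: each output sample is a
    normalized average weighted by both spatial distance and
    intensity difference, so strong edges survive the smoothing."""
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))            # spatial kernel
             * np.exp(-((x[idx] - x[i]) ** 2) / (2 * sigma_r ** 2)))   # range kernel
        out[i] = np.sum(w * x[idx]) / np.sum(w)
    return out

# A noisy step: the filter smooths the flat regions but keeps the edge,
# because the range kernel gives near-zero weight across the jump.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
y = bilateral_1d(x)
```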

32 citations


Patent
17 Mar 2006
TL;DR: In this paper, a spatially scalable coding system is improved by effectively estimating a high-resolution (enhancement-layer) video from a low-resolution lower-layer video (or image) 22, employing adaptive upsampling filtering.
Abstract: PROBLEM TO BE SOLVED: To permit correct upsampling of different image blocks in the same frame. SOLUTION: The compression efficiency of the spatially scalable coding system is improved by effectively estimating a high-resolution (enhancement-layer) video (or image) 46 from a low-resolution lower-layer video (or image) 22, employing adaptive upsampling filtering. A well-upsampled image 38 is produced by selecting different upsampling filters, suited to local image characteristics, for different parts of the low-resolution frame 28. The selection of the upsampling filter is determined by information available to both the encoder and the decoder.

26 citations


Patent
09 Mar 2006
TL;DR: In this article, the NL point IFFT is further optimized by exploiting the fact that (N−1) L of the frequency domain symbols are zero, which enables an embodiment that consists of a pre-processor that multiplies the input samples by complex phase factors, followed by L point IffTs.
Abstract: Systems and methods are provided for transmitting OFDM information via IFFT up-sampling components that transmit data at a higher sampling rate than conventional systems to simplify filter requirements and mitigate leakage between symbols. In one embodiment, an NL point IFFT is performed on a zero inserted set of frequency domain symbols. In another embodiment, the NL point IFFT is further optimized by exploiting the fact that (N−1) L of the frequency domain symbols are zero. This enables an embodiment that consists of a pre-processor that multiplies the input samples by complex phase factors, followed by L point IFFTs.
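The zero-insertion idea can be checked numerically: an NL-point IFFT of N symbols, placed at their original positive- and negative-frequency bins with the remaining bins zeroed, yields an L-times oversampled symbol whose every L-th sample reproduces the N-point IFFT output. This numpy sketch uses arbitrary N and L and does not reproduce the patent's optimized pre-processor structure:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 16, 4                       # N data symbols, oversampling factor L
X = rng.standard_normal(N) + 1j * rng.standard_normal(N)

x = np.fft.ifft(X)                 # conventional N-point IFFT (baseline symbol)

# Zero-inserted NL-point IFFT: keep the N symbols at their original
# positive- and negative-frequency bins, fill the other bins with zeros.
Xz = np.zeros(N * L, dtype=complex)
Xz[:N // 2] = X[:N // 2]           # positive frequencies
Xz[-(N // 2):] = X[N // 2:]        # negative frequencies
y = np.fft.ifft(Xz)                # oversampled OFDM symbol
```

Every L-th sample of the oversampled symbol matches the baseline symbol up to the 1/L normalization difference between the two IFFT lengths, which is why the longer transform can double as the digital upsampler.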

26 citations


Patent
Shijun Sun1
11 Sep 2006
TL;DR: In this article, the phase offset position in a higher resolution picture relative to a lower resolution picture is determined and interpolation filter coefficients for some filters may then be selected based on the filter offset.
Abstract: Aspects of the present invention relate to systems, methods and devices for upsampling images and design of upsampling filters. Some aspects relate to a determination of a phase offset position in a higher resolution picture relative to a lower resolution picture. Interpolation filter coefficients for some filters may then be selected based on the filter offset. Other aspects relate to selection of coefficients for filters that are not dependent on the phase offset. In certain implementations, a weighting factor may be used to combine the effects of a phase-offset-dependent filter and an independent filter.

23 citations


Patent
10 Apr 2006
TL;DR: In this paper, a system and method of decompressing a video signal can include the steps of receiving a compressed video signal, the video signal including frames; analyzing, for each frame, the video signal on a macroblock-by-macroblock level; determining whether to upsample a macroblock residual for each of the macroblocks; selectively upsampling a macroblock residual for some of the macroblocks; and decoding the macroblocks.
Abstract: A system and method of compressing a video signal can include the steps of: receiving a video signal, the video signal including frames; analyzing, for each frame, the video signal on a macroblock-by-macroblock level; determining whether to downsample a macroblock residual for each of the macroblocks; selectively downsampling a macroblock residual for some of the macroblocks; and coding the macroblocks. A system and method of decompressing a video signal can include the steps of receiving a compressed video signal, the video signal including frames; analyzing, for each frame, the video signal on a macroblock-by-macroblock level; determining whether to upsample a macroblock residual for each of the macroblocks; selectively upsampling a macroblock residual for some of the macroblocks; and decoding the macroblocks.

23 citations


Patent
20 Mar 2006
TL;DR: In this paper, a method for improving the performance of the BLSkip mode in SVC includes the steps of upsampling the motion field of the base layer, interpolating the motion vectors for the intra MBs, and generating a MV predictor for a 4x4 block using neighbor candidates.
Abstract: A method for improving the performance of the BLSkip mode in SVC includes the steps of upsampling the motion field of the base layer, interpolating the motion vectors for the intra MBs, interpolating the 8x8 block motion field to a 4x4 block motion field, and generating a MV predictor for a 4x4 block in BLSkip mode using neighbor candidates.

22 citations


Patent
29 Nov 2006
TL;DR: In this article, a transmitter converts digital input data into combined-OFDM signals, which are prepared for transmission by affixing cyclic prefixes and performing spectral shaping of the analog signal, and a receiver recovers data from the transmitted combined-OFDM signals.
Abstract: In one embodiment, a transmitter converts digital input data into combined-OFDM signals and a receiver recovers data from the transmitted combined-OFDM signals. For transmission, digital data is mapped into data symbols using a commonly known modulation technique, such as QAM or DQPSK. The data symbols are subsequently divided into two or more groups according to a specified grouping pattern. Each group of data symbols is then converted into a separate OFDM subsymbol using IFFT processing. The OFDM subsymbols are then combined according to a specified combining pattern to create a combined-OFDM symbol. Combined-OFDM symbols are then prepared for transmission by affixing cyclic prefixes, converting the symbols to analog format, and performing spectral shaping of the analog signal. Upsampling may be employed to increase the signal bandwidth. In alternative embodiments, OFDM subsymbols may be combined using interleaving to create an interleaved-OFDM symbol.

Proceedings ArticleDOI
01 Dec 2006
TL;DR: This paper considers the resampling design problem within an optimization framework for spatially scalable video codecs and shows how the operation must make trade-offs between coding efficiency, image quality and computational complexity.
Abstract: Resampling is a fundamental issue in the design of a spatially scalable video codec. The resampling procedure is responsible for down-sampling the high-resolution video sequence to generate lower resolution data, as well as upsampling the transmitted lower resolution data to predict the original high-resolution frames. In both cases, the resampling operation must make trade-offs between coding efficiency, image quality and computational complexity. In this paper, we consider the resampling design problem within an optimization framework.

Proceedings ArticleDOI
22 Mar 2006
TL;DR: Experiments show that random filtering is effective at acquiring sparse and compressible signals and has the potential for implementation in analog hardware, and so it may have a role to play in new types of analog/digital converters.
Abstract: This paper discusses random filtering, a recently proposed method for directly acquiring a compressed version of a digital signal. The technique is based on convolution of the signal with a fixed FIR filter having random taps, followed by downsampling. Experiments show that random filtering is effective at acquiring sparse and compressible signals. This process has the potential for implementation in analog hardware, and so it may have a role to play in new types of analog/digital converters.
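The acquisition step described above is simple to model in numpy (the signal length, tap count, and downsampling factor here are arbitrary choices; recovering the sparse signal from the measurements requires a sparse solver and is not shown):

```python
import numpy as np

def random_filter_measure(x, taps, d):
    """Compressive acquisition by random filtering: convolve the signal
    with a short random-tap FIR filter, then keep every d-th output."""
    y = np.convolve(x, taps, mode="full")[:len(x)]   # causal FIR output
    return y[::d]                                    # downsample by d

rng = np.random.default_rng(0)
n, n_taps, d = 128, 16, 4
taps = rng.choice([-1.0, 1.0], size=n_taps)          # random +/-1 taps

x = np.zeros(n)                                      # a 5-sparse signal
x[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
y = random_filter_measure(x, taps, d)                # 32 measurements of 128 samples
```

Because both the convolution and the downsampling are linear, the whole acquisition is a linear operator, which is what makes it compatible with standard sparse-recovery machinery.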

Patent
Anil Ubale1, Partha Sriram1
14 Sep 2006
TL;DR: In this article, a combined synthesis and analysis filterbank is used to generate transformed frequency-band coefficients indicative of at least one sample of the input audio data by transforming frequency coefficients in a manner equivalent to upsampling the frequencyband coefficients and filtering the resulting up-sampled values.
Abstract: Methods and systems for transcoding input audio data in a first encoding format to generate audio data in a second encoding format, and filterbanks for use in such systems. Some such systems include a combined synthesis and analysis filterbank (configured to generate transformed frequency-band coefficients indicative of at least one sample of the input audio data by transforming frequency-band coefficients in a manner equivalent to upsampling the frequency-band coefficients and filtering the resulting up-sampled values to generate the transformed frequency-band coefficients, where the frequency-band coefficients are partially decoded versions of input audio data that are indicative of the at least one sample) and a processing subsystem configured to generate transcoded audio data in the second encoding format in response to the transformed frequency-band coefficients. Some such methods include the steps of: generating frequency-band coefficients indicative of at least one sample of input audio data by partially decoding frequency coefficients of the input audio data; generating transformed frequency-band coefficients indicative of the at least one sample of the input audio data by transforming the frequency-band coefficients in a manner equivalent to upsampling the frequency-band coefficients to generate up-sampled values and filtering the up-sampled values; and in response to the transformed frequency-band coefficients, generating the transcoded audio data so that the transcoded audio data are indicative of each sample of the input audio data.

Proceedings ArticleDOI
24 Jul 2006
TL;DR: A systematic way to determine the sufficient (or optimal) upsampling factor for simulation of a communication system with a nonlinear system and a pulse-shaping filter will be presented and the tradeoff between fidelity and speed of communication system simulation will be shown.
Abstract: Traveling-wave tube amplifiers (TWTAs) and solid-state power amplifiers (SSPAs) are used in many communication systems and exhibit nonlinear distortions in both amplitude (AM/AM) and phase (AM/PM). In general, a TWTA or SSPA is part of an overall communication system that consists of other elements, such as a pulse-shaping filter. The analytical study of a system consisting of linear and nonlinear devices is often intractable, and thus simulation is usually used to determine the performance of such systems. An important aspect in the simulation of a nonlinear system is the selection of the sampling rate. The sampling rate must be sufficiently high to avoid aliasing distortion due to spectrum folding, which degrades the fidelity of the simulation. However, at the same time, the sampling rate should be set low to reduce the computation burden and minimize simulation times. A systematic way to determine the sufficient (or optimal) upsampling factor for simulation of a communication system with a nonlinear system and a pulse-shaping filter will be presented. The tradeoff between fidelity and speed of communication system simulation will be shown. We will also show how a nonlinear system can be implemented using a hardware co-simulation technique.
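The core sampling-rate argument can be illustrated directly: a memoryless third-order nonlinearity triples the bandwidth of its input, so the simulation rate must cover the widened spectrum or the distortion products fold back in-band. This is a toy numpy model with hypothetical AM/AM coefficients, not the paper's systematic method:

```python
import numpy as np

f = 10.0                                   # tone frequency in Hz

def third_order_pa(s):
    # Hypothetical memoryless AM/AM model: linear term plus a
    # compressive cubic term; cos**3 creates a component at 3f.
    return s - 0.1 * s ** 3

def tone_spectrum(fs):
    """Amplitude spectrum of the amplified tone; one-second record,
    so the FFT bins are spaced 1 Hz apart and bin k is k Hz."""
    n = fs
    t = np.arange(n) / fs
    out = third_order_pa(np.cos(2 * np.pi * f * t))
    return np.abs(np.fft.rfft(out)) / (n / 2)
```

Simulating at 80 Hz keeps the 30 Hz third-harmonic product below Nyquist (40 Hz), so it appears at its true frequency; simulating at 32 Hz puts Nyquist at 16 Hz, and the same 30 Hz product aliases down to 2 Hz, corrupting the simulated in-band spectrum.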

Proceedings ArticleDOI
04 Sep 2006
TL;DR: Simulation results show that PC-OFDM performs better than existing precoded OFDM and Pulse OFDM systems and is found to be equally good over Gaussian and fading channels where it achieves the maximum diversity gain of the channel.
Abstract: Orthogonal frequency division multiplexing (OFDM) provides a viable solution to communicate over frequency selective fading channels. However, in the presence of frequency nulls in the channel response, uncoded OFDM faces serious symbol recovery problems. As an alternative to previously reported error correction techniques in the form of pre-coding for OFDM, we propose the use of post-coding of OFDM symbols in order to achieve frequency diversity. Our proposed novel post-coded OFDM (PC-OFDM) comprises two steps: 1) upsampling of OFDM symbols and 2) subsequent multiplication of each symbol with unit magnitude complex exponentials. It is important to mention that PC-OFDM introduces redundancy in OFDM symbols while precoded OFDM introduces redundancy in data symbols before performing the IFFT operation. The main advantages of this scheme are reduction in system complexity by having a simple encoder/decoder, smaller size IFFT/FFT (inverse fast Fourier transform/fast Fourier transform) modules, and lower clock rates in the receiver and transmitter leading to lower energy consumption. The proposed system is found to be equally good over Gaussian and fading channels, where it achieves the maximum diversity gain of the channel. Simulation results show that PC-OFDM performs better than existing precoded OFDM and Pulse OFDM systems.

Book ChapterDOI
18 Sep 2006
TL;DR: In this paper, direction information derived while decoding the spatially lower layer is reused when up-sampling the reconstructed base-layer image for decoding the enhancement layer of spatial scalability, with the same interpolation algorithm used in both the encoder and decoder.
Abstract: When the reconstructed image of the base layer is up-sampled for decoding the enhancement layer of spatial scalability, direction information derived during decoding of the spatially lower layer is used. That is, the direction information used for intra prediction is reused for up-sampling. In most cases, images up-sampled using directional filtering show a 0.1-0.5 dB quality improvement compared to those up-sampled conventionally. The same interpolation algorithm should be used in both the encoder and decoder.

Proceedings ArticleDOI
30 Aug 2006
TL;DR: A generalization of the contourlet and the fully sampled a trous algorithm that provides approximate shift-invariance with an acceptable level of redundancy is described, and the advantages of applying contourlet transforms to stereo matching are discussed.
Abstract: In this paper, we reformulate the conventional area-based stereo matching algorithm, which suffers from the windowing problem, and solve it using a shift-invariant contourlet transform. Multiple-scale analysis has long been adopted in vision research. Investigation of the contourlet transform shows that it provides changeable window areas associated with the signal frequency components and hierarchically represents signals with a multi-scale and multi-direction structure. The contourlet transform employs Laplacian pyramids to achieve multi-resolution decomposition and directional filter banks to achieve directional decomposition. It can capture the intrinsic geometrical structure that is key in visual information. Due to downsampling and upsampling, the contourlet transform is shift-variant. However, shift-invariance is desirable in stereo matching. In this paper we describe a generalization of the contourlet and the fully sampled a trous algorithm that provides approximate shift-invariance with an acceptable level of redundancy, and also discuss the advantages of applying contourlet transforms to stereo matching. An image pyramid is generated and used in hierarchical stereo matching. The method consists of multiple passes which compute stereo matches with a coarse-to-fine and sparse-to-dense paradigm. Experimental results with real images are presented.

Proceedings ArticleDOI
22 Mar 2006
TL;DR: This paper examines a new approach that combines digital interpolation and natural sampling conversion that uses poly-phase implementation of thedigital interpolation filter and digital differentiators for pulse width modulation for high-fidelity audio amplifiers.
Abstract: Digital pulse width modulation has been considered for high-fidelity and high-efficiency audio amplifiers for several years now. It has been shown that the distortion can be reduced and the implementation of the system can be simplified if the switching frequency is much higher than the Nyquist rate of the modulating waveform. Hence, the input digital source is normally upsampled to a higher frequency. It was also proved that converting uniform samples to natural samples will decrease the harmonic distortion. Thus, in this paper, we examine a new approach that combines digital interpolation and natural sampling conversion. This approach uses a poly-phase implementation of the digital interpolation filter and digital differentiators. We will show that the structure consists of an FIR-type linear stage and a nonlinear stage. Some spectral simulation results of a pulse width modulation system based on this approach will also be presented. Finally, we will discuss the improvement of the new approach over old algorithms.

Journal ArticleDOI
TL;DR: A block-based frequency scalable technique for efficient hierarchical coding that divides an image into its multiple resolution versions, based on the spectral properties of discrete cosine transform (DCT) kernels, is proposed.
Abstract: In this paper, we propose a block-based frequency scalable technique for efficient hierarchical coding. The proposed technique divides an image into its multiple resolution versions, based on the spectral properties of discrete cosine transform (DCT) kernels. We show that spectral decomposition, downsampling, and DCT operations are performed effectively over input DCT coefficients of one-dimensional (1-D) and two-dimensional (2-D) signals by using the proposed transform matrices. The proposed image coder is observed to reduce the computational complexity and the memory buffer size with a higher peak signal-to-noise ratio (PSNR), when compared with the traditional hierarchical image coder. In addition, the proposed architecture can easily preserve compatibility with previous DCT-based image coders.

Proceedings ArticleDOI
22 Nov 2006
TL;DR: Investigation of how motion models of different super-resolution reconstruction algorithms affect reconstruction error and face recognition rates in a surveillance environment shows that lower reconstruction error doesn't necessarily imply better recognition rates and the use of local motion models yields better recognition rate than global motion models.
Abstract: Although the use of super-resolution techniques has demonstrated the ability to improve face recognition accuracy when compared to traditional upsampling techniques, they are difficult to implement for real-time use due to their complexity and high computational demand. As a large portion of processing time is dedicated to registering the low-resolution images, many have adopted global motion models in order to improve efficiency. The drawback of such global models is that they cannot accommodate complex local motions, such as multiple objects moving independently across a static or dynamic background, as frequently occurs in a surveillance environment. Local methods like optical flow can compensate for these situations, although this is achieved at the expense of computation time. In this paper, experiments have been carried out to investigate how the motion models of different super-resolution reconstruction algorithms affect reconstruction error and face recognition rates in a surveillance environment. Results show that lower reconstruction error doesn't necessarily imply better recognition rates, and the use of local motion models yields better recognition rates than global motion models.

Til Aach1
01 Jan 2006
TL;DR: Criteria for the quantification of time-varying effects of sampling rate conversion in multirate filter banks are provided, and a variety of paraunitary and biorthogonal perfect reconstruction filter banks as well as orthogonal block transforms are compared.
Abstract: Sampling rate conversion in multirate filter banks leads to time-varying phenomena, which differ between deterministic and stationary random signals. More specifically, downsampling causes deterministic signals to become periodically shift variant, while upsampling turns stationary random signals into cyclostationary signals. We provide criteria for the quantification of these effects, and compare a variety of paraunitary and biorthogonal perfect reconstruction filter banks as well as orthogonal block transforms. Our criteria also permit frequency-resolved evaluations.
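The periodic shift variance caused by downsampling is easy to exhibit numerically: a one-sample shift of the input does not produce a shifted copy of the decimated output, while a shift by the full downsampling factor does. This is a minimal numpy illustration of the phenomenon, not the paper's quantitative criteria:

```python
import numpy as np

x = np.arange(8, dtype=float)      # deterministic test signal 0..7
M = 2                              # downsampling factor

def down(s):
    return s[::M]                  # keep every M-th sample

y0 = down(x)                       # output for the unshifted input
y1 = down(np.roll(x, 1))           # input shifted by 1 sample
y2 = down(np.roll(x, M))           # input shifted by M samples
```

Shifting the input by one sample selects entirely different samples, so `y1` is not a rotation of `y0`; shifting by M maps to a one-sample rotation of the output. The system is therefore shift variant, but periodically so, with period M.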

Patent
King Wai Thomas Lau1
30 Aug 2006
TL;DR: In this paper, a servo field preamble detector includes an upsampling module that generates a plurality of upsampled read samples by upsampling a read signal by an upsampling factor.
Abstract: A servo field preamble detector includes an upsampling module that generates a plurality of upsampled read samples by upsampling a read signal by an upsampling factor. An interpolation filter module generates a plurality of interpolated read samples from the plurality of upsampled read samples. A peak detection module identifies a plurality of peak samples from the plurality of interpolated read samples. A magnitude estimation module generates a magnitude estimation signal from the plurality of peak samples. A comparison module compares the magnitude estimation signal to a magnitude threshold and asserts a servo preamble detection signal when the magnitude estimation signal compares favorably to the magnitude threshold.

Patent
29 Sep 2006
TL;DR: In this article, inverse distortion is added by an inverse-distortion addition section 3, and the comparison timing against the transmitted digital baseband signal is finely adjusted by shifting the decimation point in the CIC decimator 13 so that the error is minimized.
Abstract: PROBLEM TO BE SOLVED: To add inverse distortion (predistortion), precisely timing-adjusted against the nonlinear distortion of a transmission power amplifier, to a transmission signal for correction. SOLUTION: A digitally modulated baseband signal is upsampled by a CIC interpolator 4, quadrature-modulated in a digital quadrature modulation section 5, converted to an RF signal by a mixer 7, power-amplified in a high-frequency amplification section 8, and transmitted from an antenna 9. A portion of the transmitted RF signal is converted to a feedback IF signal by a mixer 10 and downsampled by a CIC decimator 13. An error calculation section 14 computes the error between the amplitude of the feedback digital baseband signal and that of the transmitted digital baseband signal. From this error, an inverse-distortion calculation section 15 computes the inverse distortion, which is added by an inverse-distortion addition section 3. The comparison timing of the feedback digital baseband signal against the transmitted digital baseband signal in the error calculation section 14 is finely adjusted by shifting the decimation point in the CIC decimator 13 so that the error is minimized.

Proceedings ArticleDOI
14 May 2006
TL;DR: A novel approach to convert image resolution with arbitrary ratios in the DCT (discrete cosine transform) domain is proposed, which exploits the relationship of a block and its subblocks with differing size.
Abstract: To meet the transmission requirements of different client devices and network bandwidths, it is necessary to convert high-definition-resolution images and videos to a standard-definition-resolution format. A novel approach to convert image resolution with arbitrary ratios in the DCT (discrete cosine transform) domain is proposed, which exploits the relationship between a block and its subblocks of differing size. It can realize arbitrary, unequal ratios in the horizontal and vertical directions. Unlike present methods, it does not need an upsampling-downsampling process. It can perform downsizing directly on the original data and conforms to the standard decoder. The proposed approach is computationally fast and memory efficient, and produces visually better images with higher PSNR compared to spatial-domain methods.

Patent
30 Mar 2006
TL;DR: In this paper, a method of detecting a watermark included in a signal by way of quantization index modulation (QIM) is provided, which can detect the watermark even when the image has been geometrically transformed (e.g. spatially or temporally scaled) prior to detection.
Abstract: There is provided a method of detecting a watermark included in a signal by way of quantization index modulation (QIM). The signal with the embedded watermark may have been geometrically transformed (e.g. spatially or temporally scaled) prior to detection. In order to detect the watermark even in such a case, the embedder imposes an autocorrelation structure onto the embedded watermark data, for example by tiling. Initially, the detector applies conventional QIM detection. This step yields a first symbol vector, which corresponds to the embedded data when the signal was not tampered with, but does not correspond to the embedded data when the signal was subject to scaling. For example, when one data bit is embedded in each pixel of an image, 50% upsampling of the image causes a QIM detector to retrieve 3 data bits out of 3 received pixels, that is 3 data bits out of 2 original image pixels. Surprisingly, the autocorrelation of the first symbol vector will give a peak for a particular geometric transformation (e.g. the particular scaling factor). In accordance with the invention, the detector calculates said autocorrelation function, and uses the result to apply the inverse of the transformation, i.e. undo the scaling. A second pass of the conventional QIM detection will subsequently retrieve the embedded data.

Book ChapterDOI
07 May 2006
TL;DR: In this paper, an alias-free upsampling scheme is designed to remove aliasing in a scene from a single observation.
Abstract: In this paper we study the possibility of removing aliasing in a scene from a single observation by designing an alias-free upsampling scheme. We generate the unknown high frequency components of the given partially aliased (low resolution) image by minimizing the total variation of the interpolant subject to the constraint that part of unaliased spectral components in the low resolution observation are known precisely and under the assumption of sparsity in the data. This provides a mathematical basis for exact reproduction of high frequency components with probability approaching one, from their aliased observation. The primary application of the given approach would be in super-resolution imaging.

Proceedings ArticleDOI
25 Jun 2006
TL;DR: In this article, a high-speed programmable upsampler for the upsampling of a broadband signal is presented, where several filter architectures and types of logic are compared.
Abstract: This paper presents the design of a high-speed programmable upsampler for the upsampling of a broadband signal. Several filter architectures and types of logic are compared. A cascaded integrator comb (CIC) filter has been selected to achieve a power-efficient upsampler with an operating speed of 3 GHz and an estimated 250 mW power consumption, while at the same time keeping the amount of generated noise low to avoid interference with the analog part of the IC.
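As background on the filter class chosen here, a CIC interpolator is just comb (differencing) stages running at the low rate, zero insertion by the rate factor R, and integrator stages running at the high rate. This is a software model only, unrelated to the paper's 3 GHz hardware design; with a single stage the structure reduces to a zero-order hold:

```python
import numpy as np

def cic_interpolate(x, R, stages=1):
    """CIC interpolator: `stages` comb (differencing) filters at the
    low rate, zero insertion by R, then `stages` integrators at the
    high rate. No multipliers are needed, which is why CIC filters
    suit very high clock rates."""
    y = x.astype(float)
    for _ in range(stages):            # comb sections (low rate)
        y = np.diff(y, prepend=0.0)
    up = np.zeros(len(x) * R)
    up[::R] = y                        # zero insertion by R
    for _ in range(stages):            # integrator sections (high rate)
        up = np.cumsum(up)
    return up
```

For `stages=1`, each low-rate sample is simply held for R high-rate samples, matching `np.repeat(x, R)`; more stages give a sharper (sinc-power) interpolation response.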

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper shows that the error inhomogeneity is caused by asymmetrical filtering of quantization errors after the upsampling step in the wavelet synthesis process, and develops a model that allows predicting the amount of inhomogeneity for a given wavelet.
Abstract: Despite the popularity of wavelet-based image compression, its error inhomogeneity, the error that differs between even and odd pixel locations, has not been previously analyzed and formally addressed. The difference in PSNR performance can be substantial, up to 3.4 dB for some images and compression ratios. In this paper, we show that the error inhomogeneity is caused by asymmetrical filtering of quantization errors after the upsampling step in the wavelet synthesis process. We also develop a model that allows predicting the amount of inhomogeneity for a given wavelet. Furthermore, we show how to redesign wavelet filters to reduce the error inhomogeneity.

01 Jan 2006
TL;DR: A novel architecture is advanced for the high-speed computation of orthogonal one-dimensional two-channel discrete wavelet transforms that preserves orthogonality even when the coefficients are quantized.
Abstract: A novel architecture is advanced for the efficient high-speed computation of orthogonal one-dimensional two-channel discrete wavelet transforms. The structure is derived when the multirate operations (downsampling and upsampling) are staggered. Compared with the polyphase and lattice architectures, the developed architecture requires the minimum number of multiplications. It preserves orthogonality even when the coefficients are quantized. For the multilevel discrete wavelet transform, the proposed architecture allows balanced pipeline implementations.