
Showing papers by "Karen Egiazarian published in 2005"


Book ChapterDOI
19 Jun 2005
TL;DR: DCT-based image compression using blocks of size 32x32 is considered, and an effective method of bit-plane coding of quantized DCT coefficients is proposed that provides decoded image quality higher than that of JPEG2000 by up to 1.9 dB.
Abstract: DCT-based image compression using blocks of size 32x32 is considered. An effective method of bit-plane coding of quantized DCT coefficients is proposed. Parameters of post-filtering for removing blocking artifacts from decoded images are given. The efficiency of the proposed method for compressing test images is analyzed. It is shown that the proposed method provides decoded image quality higher than that of JPEG2000 by up to 1.9 dB.

104 citations
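
To make the two building blocks named in the abstract concrete, a minimal Python sketch follows: quantized DCT coefficients of 32x32 blocks and separation of their magnitudes into bit planes. The uniform quantization step, the number of planes, and the function names are illustrative assumptions; the paper's actual entropy coder and post-filter are not reproduced.

    # A minimal sketch: 32x32 block DCT, uniform quantization, and separation of
    # coefficient magnitudes into bit planes (the actual entropy coder is omitted).
    import numpy as np
    from scipy.fft import dctn

    def quantized_block_dct(image, block=32, q_step=16):
        """Quantized DCT coefficients of each 32x32 block of a grayscale image."""
        h, w = image.shape
        coeffs = np.zeros((h, w), dtype=np.int32)
        for y in range(0, h - h % block, block):
            for x in range(0, w - w % block, block):
                c = dctn(image[y:y + block, x:x + block].astype(float), norm='ortho')
                coeffs[y:y + block, x:x + block] = np.round(c / q_step)
        return coeffs

    def bit_planes(coeffs, n_planes=12):
        """Split coefficient magnitudes into bit planes (most significant first)."""
        mags = np.abs(coeffs)
        planes = [((mags >> p) & 1).astype(np.uint8) for p in range(n_planes - 1, -1, -1)]
        signs = (coeffs < 0).astype(np.uint8)
        return planes, signs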


Journal ArticleDOI
TL;DR: A novel nonparametric regression method for deblurring noisy images based on the local polynomial approximation of the image and the paradigm of intersecting confidence intervals (ICI) that is applied to define the adaptive varying scales (window sizes) of the LPA estimators.
Abstract: We propose a novel nonparametric regression method for deblurring noisy images. The method is based on the local polynomial approximation (LPA) of the image and the paradigm of intersecting confidence intervals (ICI), which is applied to define the adaptive varying scales (window sizes) of the LPA estimators. The LPA-ICI algorithm is nonlinear and spatially adaptive with respect to the smoothness and irregularities of the image corrupted by additive noise. Whereas multiresolution wavelet algorithms produce estimates combined from projections at different scales, the proposed ICI algorithm gives a varying-scale adaptive estimate that defines a single best scale for each pixel. In the new algorithm, the actual filtering is performed in the signal domain, while frequency-domain Fourier transform operations are applied only to compute convolutions. The regularized inverse and Wiener inverse filters serve as deblurring operators used jointly with the LPA-designed directional kernel filters. Experiments demonstrate the state-of-the-art performance of the new estimators, which visually and quantitatively outperform some of the best existing methods.

85 citations
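
The key adaptation mechanism is the ICI rule. A minimal single-pixel sketch is given below: estimates of the same pixel computed with LPA kernels of increasing scale are accepted as long as their confidence intervals keep a non-empty intersection, and the largest accepted scale defines the adaptive window. The threshold gamma = 2.0 is an illustrative value, not the paper's tuned parameter.

    # Sketch of the intersection of confidence intervals (ICI) rule for one pixel.
    import numpy as np

    def ici_select(estimates, stds, gamma=2.0):
        """estimates[k], stds[k]: LPA estimate of a pixel and its standard deviation
        at the k-th scale (scales ordered from smallest to largest window).
        Returns the index of the adaptive scale and the corresponding estimate."""
        lower, upper = -np.inf, np.inf
        best = 0
        for k, (y, s) in enumerate(zip(estimates, stds)):
            lower = max(lower, y - gamma * s)     # running intersection of intervals
            upper = min(upper, y + gamma * s)
            if lower > upper:                     # intersection became empty: stop
                break
            best = k
        return best, estimates[best]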


Book ChapterDOI
20 Sep 2005
TL;DR: Experimental results show that the proposed 3D DCT-based video-denoising algorithm is competitive with state-of-the-art video denoising methods both in terms of PSNR and visual quality.
Abstract: The problem of denoising video signals corrupted by additive Gaussian noise is considered in this paper. A novel 3D DCT-based video-denoising algorithm is proposed. Video data are locally filtered in sliding/running 3D windows (arrays) consisting of highly correlated spatial layers taken from consecutive frames of video. These layers are selected using block matching or similar techniques. Denoising in local windows is performed by hard thresholding of the 3D DCT coefficients of each 3D array. Final estimates of reconstructed pixels are obtained as a weighted average of the local estimates from all overlapping windows. Experimental results show that the proposed algorithm is competitive with state-of-the-art video denoising methods both in terms of PSNR and visual quality.

64 citations
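
The per-group filtering step can be sketched as below: a stack of matched blocks from consecutive frames is transformed with a separable 3D DCT, hard-thresholded, and inverse-transformed. Block matching and the final aggregation loop are omitted; the threshold factor and the inverse-sparsity weight are assumptions borrowed from related transform-domain denoisers, not the paper's exact settings.

    # Sketch: hard thresholding of the 3D DCT coefficients of one stack of matched
    # blocks (block matching and the aggregation over overlapping windows are omitted).
    import numpy as np
    from scipy.fft import dctn, idctn

    def denoise_group(group, sigma, thr_factor=2.7):
        """group: (T, N, N) array of spatially matched blocks from consecutive frames."""
        coeffs = dctn(group.astype(float), norm='ortho')
        kept = np.abs(coeffs) > thr_factor * sigma   # hard thresholding mask
        filtered = idctn(coeffs * kept, norm='ortho')
        weight = 1.0 / max(int(kept.sum()), 1)       # sparser groups get larger weight
        return filtered, weight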


Journal ArticleDOI
TL;DR: The texture-feature-preserving properties of different filters are examined, locally adaptive three-state schemes in which texture is treated as a separate class are proposed, and an appropriate trade-off of the designed filter properties is demonstrated.
Abstract: Textural features are one of the most important types of useful information contained in images. In practice, these features are commonly masked by noise. Relatively little attention has been paid to the texture-preserving properties of noise attenuation methods. This motivates two tasks: (1) to analyze the texture-preservation properties of various filters; and (2) to design image processing methods capable of preserving texture features well while effectively reducing noise. This paper examines the texture-feature-preserving properties of different filters. The study is performed for a set of texture samples and different noise variances. Locally adaptive three-state schemes are proposed in which texture is treated as a separate class. Several classifiers for "detecting" texture regions are proposed and analyzed. The designed filters are shown to provide an appropriate trade-off of these properties. This is demonstrated quantitatively for artificial test images and confirmed visually for real-life images.

41 citations


Book ChapterDOI
20 Sep 2005
TL;DR: Image pre-filtering is shown to be expedient for improving coded image quality and/or increasing the compression ratio, and some recommendations are given on how to set the compression ratio to provide quasi-optimal quality of coded images.
Abstract: Lossy compression of noise-free images and lossy compression of noisy images differ from each other. While in the first case image quality decreases as the compression ratio increases, in the second case the quality of the coded image, evaluated with respect to the noise-free original, can improve over some range of compression ratios. This paper is devoted to the problem of lossy compression of noisy images, which arises, e.g., in the compression of remote sensing data. The efficiency of several approaches to this problem is studied. Image pre-filtering is shown to be expedient for improving coded image quality and/or increasing the compression ratio. Some recommendations are given on how to set the compression ratio to provide quasi-optimal quality of coded images. A novel DCT-based image compression method is briefly described and its performance is compared to JPEG and JPEG2000 when applied to lossy coding of noisy images.

40 citations


Proceedings ArticleDOI
14 Nov 2005
TL;DR: A spatially adaptive image deblurring algorithm that adapts to the unknown image smoothness by using local polynomial approximation (LPA) kernel estimates of varying scale and direction based on the intersection of confidence intervals (ICI) rule is presented.
Abstract: A spatially adaptive image deblurring algorithm is presented for Poisson observations. It adapts to the unknown image smoothness by using local polynomial approximation (LPA) kernel estimates of varying scale and direction based on the intersection of confidence intervals (ICI) rule. The signal-dependent characteristics of the Poissonian noise are exploited to accurately compute the pointwise variances of the directional estimates. The results show that this accurate pointwise adaptive algorithm significantly improves the image restoration quality.

34 citations
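
For Poisson data the variance equals the mean, so the variance of a linear kernel estimate can be computed pointwise from the data themselves: var(sum_s g(s) z(x-s)) = sum_s g(s)^2 E z(x-s), which is approximated by convolving the observations with the squared kernel. A small sketch of this step follows; the directional LPA kernels themselves are not constructed here.

    # Sketch: pointwise variance of a linear (LPA kernel) estimate under Poisson noise,
    # using var(z) = E(z), so the variance is the observation convolved with g squared.
    import numpy as np
    from scipy.ndimage import convolve

    def kernel_estimate_and_variance(z, g):
        """z: Poisson-distributed image; g: 2-D kernel weights (e.g. one directional LPA kernel)."""
        z = z.astype(float)
        y_hat = convolve(z, g, mode='nearest')          # directional estimate
        var_hat = convolve(z, g ** 2, mode='nearest')   # its signal-dependent variance
        return y_hat, var_hat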


Proceedings ArticleDOI
18 Mar 2005
TL;DR: A method based on the matching pursuits algorithm for the extraction of time-frequency features that can be used for classification of various abnormal heartbeats and the usefulness of independent component analysis for extracting additional spatial features from multichannel electrocardiographic recordings is investigated.
Abstract: We present a method based on the matching pursuits algorithm for the extraction of time-frequency features that can be used for classification of various abnormal heartbeats. Further, we investigate the usefulness of independent component analysis for extracting additional spatial features from multichannel electrocardiographic recordings. The performance of these two different sets of features is assessed using the 48 recordings of the MIT-BIH arrhythmia database.

33 citations
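
The feature extraction step is the standard greedy matching pursuits loop, sketched below over a generic dictionary of unit-norm atoms; the paper's time-frequency (Gabor-like) dictionary and the subsequent heartbeat classifier are not reproduced.

    # Sketch of the matching pursuits loop over a dictionary of unit-norm atoms;
    # the selected indices and coefficients serve as time-frequency features.
    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms=10):
        """dictionary: (n_samples, n_dict) matrix with unit-norm columns."""
        residual = np.asarray(signal, dtype=float).copy()
        indices, coeffs = [], []
        for _ in range(n_atoms):
            correlations = dictionary.T @ residual
            k = int(np.argmax(np.abs(correlations)))    # best-matching atom
            indices.append(k)
            coeffs.append(correlations[k])
            residual = residual - correlations[k] * dictionary[:, k]
        return indices, coeffs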


01 Jan 2005
TL;DR: A novel approach to image denoising based on the shape-adaptive DCT (SA-DCT) is presented: the anisotropic LPA-ICI technique is used to define the shape of the transform's support in a pointwise adaptive manner.
Abstract: A novel approach to image denoising based on the shape-adaptive DCT (SA-DCT) is presented. The anisotropic LPA-ICI technique is used in order to define the shape of the transform's support in a pointwise adaptive manner. This means that for each point in the image an adaptive estimation neighborhood is found. For each of these neighborhoods an SA-DCT is performed. The thresholded SA-DCT coefficients are used to reconstruct a local estimate of the signal within the adaptive-shape region. Since regions corresponding to different points are in general overlapping (and thus generate an overcomplete representation of the signal), the local estimates are averaged together using adaptive weights that depend on the regions' statistics. A Wiener filtering procedure in the SA-DCT domain is also proposed. Simulation experiments confirm the high quality of the final estimate. Not only are the objective criteria scores high, but the visual appearance of the estimate is also superior: edges are clean, and no unpleasant ringing artifacts are introduced by the fitted transform.

22 citations


Proceedings ArticleDOI
08 Sep 2005
TL;DR: New methods for pointwise spatially-adaptive filtering of anisotropic multivariable signals based on the local quasi-likelihood, incorporating the directional-windowed local polynomial approximations (LPA) of the signal are presented.
Abstract: We present new methods for pointwise spatially-adaptive filtering of anisotropic multivariable signals. It is assumed that the observations are given by a broad class of models with a signal-dependent variance. The proposed methods are based on the local quasi-likelihood, incorporating the directional-windowed local polynomial approximations (LPA) of the signal. The intersection of confidence intervals (ICI) rule is used in order to determine the adaptive size of the directional windows. In this way we obtain multi-directional estimates which are spatially adaptive to unknown smoothness and anisotropy of the signal. Simulation experiments confirm the advanced performance of these new algorithms.

19 citations


Proceedings ArticleDOI
24 Oct 2005
TL;DR: The task of further compressing images previously coded with JPEG is considered, and a novel efficient method for coding quantized DCT coefficients is proposed, based on separating coefficients into bit planes and taking into account the correlation between the values of neighboring coefficients in blocks.
Abstract: The task of further compressing images previously coded with JPEG is considered. A novel efficient method for coding quantized DCT coefficients is proposed. It is based on separating coefficients into bit planes, taking into account the correlation between the values of neighboring coefficients within blocks, between the values of the corresponding coefficients of neighboring blocks, as well as between the corresponding coefficients of different color layers. It is shown that, for images already compressed by JPEG, the designed technique increases the compression ratio by a further factor of 1.3–2.3 without introducing additional losses.

16 citations


Proceedings ArticleDOI
TL;DR: A near-lossless compression algorithm for Color Filter Array (CFA) images that achieves a higher compression ratio than any strictly lossless algorithm at the price of a small and controllable error.
Abstract: In this contribution, we propose a near-lossless compression algorithm for Color Filter Array (CFA) images. It achieves a higher compression ratio than any strictly lossless algorithm at the price of a small and controllable error. In our approach a structural transformation is applied first in order to pack the pixels of the same color into a structure appropriate for the subsequent compression algorithm. The transformed data are compressed by a modified version of the JPEG-LS algorithm. A nonlinear and adaptive error quantization function is embedded in the JPEG-LS algorithm after the fixed and context-adaptive predictors. It is step-like and adapts to the base signal level in such a manner that higher error values are allowed for lighter parts with no visual quality loss. These higher error values are then suppressed by gamma correction applied during the image reconstruction stage. The algorithm can be adjusted for arbitrary pixel resolution, gamma value and allowable error range. The compression performance of the proposed algorithm has been tested on real CFA raw data. The results are presented in terms of compression ratio versus reconstruction error, and the visual quality of the reconstructed images is demonstrated as well. Keywords: Bayer pattern, color filter array, near-lossless compression, JPEG-LS
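
A minimal sketch of the near-lossless idea follows: the prediction error is quantized so that the reconstruction error never exceeds a bound, and the bound grows with the predicted signal level so that lighter parts tolerate larger errors. The level thresholds below are hypothetical; the paper's actual step function is tuned to the sensor's gamma correction and is not reproduced.

    # Sketch: near-lossless quantization of a JPEG-LS-style prediction error with a
    # step-like, level-dependent error bound (the thresholds below are hypothetical).
    def allowed_error(predicted_level):
        """Larger reconstruction errors are tolerated for lighter (higher-level) pixels."""
        if predicted_level < 64:
            return 1
        if predicted_level < 160:
            return 2
        return 4

    def quantize_error(err, delta):
        """Near-lossless quantization: the reconstruction error stays within +/- delta."""
        sign = 1 if err >= 0 else -1
        q = sign * ((abs(err) + delta) // (2 * delta + 1))
        return q                     # dequantized error is q * (2 * delta + 1)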

Proceedings ArticleDOI
TL;DR: The local maximum likelihood technique makes it possible to deal with quite general statistical models of signal-dependent observations, relaxes the parametric modelling of the standard maximum likelihood, and results in flexible nonparametric regression estimation of the signal.
Abstract: We consider signal restoration from observations corrupted by random noise. The local maximum likelihood technique makes it possible to deal with quite general statistical models of signal-dependent observations, relaxes the parametric modelling of the standard maximum likelihood, and results in flexible nonparametric regression estimation of the signal. We deal with the anisotropy of the signal using multi-window directional sectorial local polynomial approximation. The data-driven sizes of the sectorial windows, obtained by the intersection of confidence intervals (ICI) algorithm, make it possible to form star-shaped adaptive neighborhoods used for the pointwise estimation. The developed approach is quite general and is applicable to multivariable data. A fast adaptive algorithm implementation is proposed. It is applied to photon-limited imaging with the Poisson distribution of data. Simulation experiments and comparison with some of the best results in the field demonstrate an advanced performance of the developed algorithms.

Proceedings ArticleDOI
TL;DR: The DT-CWT is a recently suggested transform which provides good directional selectivity in six different fixed orientations at dyadic scales, with the ability to distinguish positive and negative frequencies, and it arises as a good candidate to replace the Gabor transform in applications where speed is a critical issue.
Abstract: In this paper two complex wavelet transforms, namely the Gabor wavelet transform and Kingsbury's Dual-Tree Complex Wavelet Transform (DT-CWT), are compared for their capability to extract facial features. Gabor wavelets extract directional features from images and find frequent application in the computer vision problems of face detection and face recognition. The transform involves convolving an image with an ensemble of Gabor kernels parameterized by scale and direction. As a result, a redundant image representation is obtained, where the number of transformed images is equal to the number of Gabor kernels used. However, repetitive convolution with 2-D Gabor kernels is a rather slow computational operation. The DT-CWT is a recently suggested transform which provides good directional selectivity in six different fixed orientations at dyadic scales, with the ability to distinguish positive and negative frequencies. It has a limited redundancy of four for images and is much faster to compute than the Gabor transform. Therefore, it arises as a good candidate to replace the Gabor transform in applications where speed (i.e. on-line implementation) is a critical issue. We apply the two wavelet families to facial landmark detection and compare their performance by statistical tests, e.g. by building Receiver Operating Characteristic (ROC) curves and by measuring the sensitivity of a particular feature extractor. We also compare results of Bayesian classification for the two families of feature extractors involved.

Proceedings ArticleDOI
TL;DR: An alternative image formation chain for applications demanding high-quality imaging is presented, using three techniques for structural transformation of the Bayer pattern CFA and two techniques for color transformation; the transformed data are compressed by the JPEG-LS algorithm.
Abstract: This paper is devoted to the lossless compression of Bayer pattern color filter array (CFA) data. An alternative image formation chain for applications demanding high-quality imaging is presented. Three techniques for structural transformation of the Bayer pattern CFA and two techniques for color transformation are suggested. The transformed data are then compressed by the JPEG-LS algorithm. The experimental results obtained by processing real Bayer CFA raw data captured by a digital camera are quite promising, achieving an average compression ratio of 1.68.
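
One simple structural transformation is sketched below: the CFA frame is deinterleaved into four same-color sub-images that are then passed to the lossless coder. The GRBG layout is an assumption, and this is only one of several structural and color transforms the paper evaluates.

    # Sketch: deinterleaving a GRBG Bayer CFA frame into four same-color sub-images
    # before lossless (e.g. JPEG-LS) coding. The GRBG layout is an assumption.
    import numpy as np

    def split_bayer_grbg(cfa):
        g1 = cfa[0::2, 0::2]    # green pixels on red rows
        r  = cfa[0::2, 1::2]
        b  = cfa[1::2, 0::2]
        g2 = cfa[1::2, 1::2]    # green pixels on blue rows
        return r, g1, g2, b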

01 Jan 2005
TL;DR: This paper reviews the most popular distortion measures that have been used to assess the performance of different BSS algorithms and shows that the common distortion measures are not suitable for the degenerate blind separation of sparse sources.
Abstract: An important issue in Blind Source Separation (BSS) is how to measure the similarity between a true source and its estimate. This is a simple but not completely trivial topic that has been a bit overlooked in the literature. Special problems arise when the source signals are sparse and the BSS problem is degenerate (i.e. we have more sources than observed mixtures). In this paper we review the most popular distortion measures that have been used to assess the performance of different BSS algorithms. We show that the common distortion measures are not suitable for the degenerate blind separation of sparse sources. Finally we propose a class of alternative distortion measures for sparse sources.
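
For reference, one scale-invariant variant of the widely used signal-to-distortion ratio between a true source and its estimate is sketched below; it is representative of the common measures the paper reviews, though whether this exact form appears there is not claimed, and the alternative measures proposed for sparse sources are not reproduced.

    # Sketch: a scale-invariant signal-to-distortion ratio (SDR) between a true
    # source s and its estimate -- representative of common BSS performance measures.
    import numpy as np

    def si_sdr(s, s_hat, eps=1e-12):
        s = np.asarray(s, float) - np.mean(s)
        s_hat = np.asarray(s_hat, float) - np.mean(s_hat)
        alpha = np.dot(s_hat, s) / (np.dot(s, s) + eps)   # optimal scaling of the target
        target = alpha * s
        distortion = s_hat - target
        return 10.0 * np.log10((np.dot(target, target) + eps) /
                               (np.dot(distortion, distortion) + eps))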

Proceedings ArticleDOI
TL;DR: Learning with clustering is used to make the design of locally adaptive filters more automatic and less subjective; the approach is tested in a mixed Gaussian multiplicative + impulse noise environment.
Abstract: Image filtering or denoising is a problem widely addressed in optical, infrared and radar remote sensing data processing. Although a large number of methods for image denoising exist, the choice of a proper, efficient filter is still a difficult problem and requires broad a priori knowledge. Locally adaptive filtering of images is an approach that has been widely investigated and exploited during the last 15 years and has demonstrated great potential. However, the design of locally adaptive filters remains largely heuristic. This paper puts forward a new approach to get around this shortcoming: it uses learning with clustering to make the design of locally adaptive filters more automatic and less subjective. The performance of this approach to learning and locally adaptive filtering has been tested in a mixed Gaussian multiplicative + impulse noise environment. Its advantages in comparison to other learning methods, and the efficiency of the considered component filters, are demonstrated both on numerical simulation data and on real-life radar image processing examples.

Proceedings ArticleDOI
TL;DR: In this contribution, adaptive boosting is used as a criterion for selecting optimal atoms as features in a frontal face detection system, and a Bayesian classifier is used as a weak learner instead of a simple threshold, ensuring higher accuracy at a slightly increased computational cost during the detection stage.
Abstract: Atomic decompositions are lower-cost alternatives to principal component analysis (PCA) in tasks where sparse signal representation is required. In pattern classification tasks, e.g. face detection, careful selection of atoms is needed to ensure an optimal and fast-operating decomposition for the feature extraction stage. In this contribution, adaptive boosting is used as a criterion for selecting optimal atoms as features in a frontal face detection system. The goal is to speed up the learning process by a proper combination of a dictionary of atoms and a weak learner. Dictionaries of anisotropic wavelet packets are used, for which the total number of atoms is still feasible for large-size images. In the adaptive boosting algorithm a Bayesian classifier is used as a weak learner instead of a simple threshold, ensuring higher accuracy at a slightly increased computational cost during the detection stage. The experimental results obtained for four different dictionaries are quite promising, owing to the good localization properties of the anisotropic wavelet packet functions.

Book ChapterDOI
07 Feb 2005
TL;DR: The problem of constructing multi-wavelets, i.e. wavelets with more than one scaling and wavelet function, is addressed, and Alpert's algorithm for generating discrete Legendre multi-wavelets is generalized to the case of arbitrary, non-dyadic time-interval splitting.
Abstract: We address the problem of constructing multi-wavelets, that is, wavelets with more than one scaling and wavelet function. We generalize the algorithm proposed by Alpert [1] for generating discrete Legendre multi-wavelets to the case of arbitrary, non-dyadic time-interval splitting.

Posted Content
TL;DR: In this article, the problem of constructing a fast lossless code in the case when the source alphabet is large is addressed, where the main idea is to group letters with small probabilities in subsets and use time consuming coding for these subsets only.
Abstract: We address the problem of constructing a fast lossless code in the case when the source alphabet is large. The main idea of the new scheme may be described as follows. We group letters with small probabilities in subsets (acting as super letters) and use time consuming coding for these subsets only, whereas letters in the subsets have the same code length and therefore can be coded fast. The described scheme can be applied to sources with known and unknown statistics.
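
A minimal sketch of the grouping step follows: letters below a probability threshold are pooled into fixed-size groups acting as super letters, the slower variable-length code then covers only the frequent letters plus the group symbols, and a letter inside a group is addressed by a cheap fixed-length index. The threshold and group size below are illustrative assumptions, not the paper's construction.

    # Sketch of the grouping idea: rare letters are pooled into "super letters" so the
    # slow variable-length code covers only frequent letters plus the groups, while a
    # letter inside a group is addressed by a fixed-length index.
    import math

    def build_groups(probs, threshold=0.01, group_size=16):
        """probs: dict mapping letter -> probability. Returns (frequent letters, groups)."""
        frequent = [a for a, p in probs.items() if p >= threshold]
        rare = sorted((a for a, p in probs.items() if p < threshold),
                      key=probs.get, reverse=True)
        groups = [rare[i:i + group_size] for i in range(0, len(rare), group_size)]
        return frequent, groups

    def index_bits(group):
        """Fixed-length part: every letter of a group gets the same number of bits."""
        return max(1, math.ceil(math.log2(len(group))))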

Journal Article
TL;DR: The algorithm proposed by Alpert for generating discrete Legendre multi-wavelets is generalized to the case of arbitrary, non-dyadic time interval splitting.
Abstract: We address the problem of constructing multi-wavelets, that is, wavelets with more than one scaling and wavelet function. We generalize the algorithm proposed by Alpert [1] for generating discrete Legendre multi-wavelets to the case of arbitrary, non-dyadic time-interval splitting.

Proceedings ArticleDOI
TL;DR: The best anisotropic basis search framework is applied to the problem of recognizing characters captured from gray-scale pictures of car license plates, and it is shown to be superior to PCA, yielding an equal or even lower classification error rate at considerably reduced computational cost.
Abstract: The best basis paradigm is a lower-cost alternative to principal component analysis (PCA) for feature extraction in pattern recognition applications. Its main idea is to build a collection of bases and search for the best one in terms of, e.g., best class separation. Recently, fast best basis search algorithms have been generalized to anisotropic wavelet packet bases. Anisotropy is preferable for 2-D objects since it helps capture local image features in a better way. In this contribution, the best anisotropic basis search framework is applied to the problem of recognizing characters captured from gray-scale pictures of car license plates. The goals are to simplify the classifier and to avoid a preliminary binarization stage by extracting features directly from the gray-scale images. The collection of bases is formed by anisotropic wavelet packets. The search algorithm seeks a basis providing the lowest-dimensional data representation that preserves the inter-class separability for a given training data set, measured as the Euclidean distance between class centroids. The relationship between the feature extractor and classifier complexity is clarified by training neural networks for different local bases. The proposed methodology is superior to PCA, as it yields an equal or even lower classification error rate at considerably reduced computational cost.


Journal ArticleDOI
TL;DR: A general and efficient algorithm for decomposition of binary matrices and the corresponding fast transform is developed and it is demonstrated that, in typical applications, the proposed algorithm is significantly more efficient than the conventional Walsh-Hadamard transform.
Abstract: Binary matrices or (±1)-matrices have numerous applications in coding, signal processing, and communications. In this paper, a general and efficient algorithm for the decomposition of binary matrices and the corresponding fast transform is developed. As a special case, Hadamard matrices are considered. The difficulties in constructing 4n-point Hadamard transforms are related to the Hadamard problem: the question of the existence of Hadamard matrices. (It is not known whether for every integer n there is an orthogonal 4n × 4n matrix with elements ±1.) In the derived fast algorithms, the number of real operations is reduced from O(N²) to O(N log N) compared to direct computation. The proposed scheme requires no zero padding of the input data. Comparisons revealing the efficiency of the proposed algorithms with respect to known ones are given. In particular, it is demonstrated that, in typical applications, the proposed algorithm is significantly more efficient than the conventional Walsh-Hadamard transform. Note that for Hadamard matrices of orders ≥ 96 the general algorithm is more efficient than the classical Walsh-Hadamard transform, whose order is a power of 2. The algorithm has a simple and symmetric structure. The results of numerical examples are presented.
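
For reference, the conventional fast Walsh-Hadamard transform that serves as the baseline in the comparison (length restricted to a power of two, O(N log N) additions) is sketched below; the proposed general 4n-point decomposition itself is not reproduced here.

    # Sketch: the conventional in-place fast Walsh-Hadamard transform (length a power
    # of two) -- the O(N log N) baseline the proposed general algorithm is compared with.
    import numpy as np

    def fwht(x):
        a = np.array(x, dtype=float)
        n = a.size
        assert n > 0 and (n & (n - 1)) == 0, "length must be a power of two"
        h = 1
        while h < n:
            for i in range(0, n, 2 * h):
                for j in range(i, i + h):
                    a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]   # butterfly
            h *= 2
        return a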

Journal ArticleDOI
TL;DR: Using this decomposition the linear least squares problem is reduced to solving two linear systems and it is shown that this approach can provide computational savings.
Abstract: In this paper we present a partially orthogonal decomposition for a matrix A. Using this decomposition the linear least squares problem is reduced to solving two linear systems. The matrix of the first system is symmetric and positive definite, and the matrix of the second system is nonsingular upper triangular. We show that this approach can provide computational savings.
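
The decomposition itself is not given in the abstract, but the two-system structure it describes can be illustrated with the familiar normal-equations route: form a symmetric positive definite system and finish with triangular solves. The sketch below is only an illustration of that structure under this assumption, not the proposed partially orthogonal decomposition.

    # Illustration only: reducing least squares to a symmetric positive definite
    # system followed by triangular solves (the familiar normal-equations route,
    # not the paper's partially orthogonal decomposition).
    import numpy as np
    from scipy.linalg import solve_triangular

    def least_squares_two_systems(A, b):
        G = A.T @ A                                     # symmetric positive definite (A full rank)
        L = np.linalg.cholesky(G)                       # G = L L^T
        y = solve_triangular(L, A.T @ b, lower=True)    # first, lower triangular system
        x = solve_triangular(L.T, y, lower=False)       # second, upper triangular system
        return x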

Proceedings ArticleDOI
TL;DR: A modification of the phase-correlation ME algorithm based on Complex Discrete Wavelet Transform (CDWT) is presented and it is shown that the proposed approach exhibits lower complexity and higher accuracy.
Abstract: Motion estimation (ME) is the most time consuming part in contemporary video compression algorithms and standards. In recent years, certain transform domain "phase-correlation" ME algorithms based on Complex-valued Wavelet Transforms have been developed to achieve lower complexity than the previous approaches. In the present paper we describe an implementation of the basic phase-correlation ME techniques on a fixed-point dual-core processor architecture such as the TI OMAP one. We aim at achieving low computational complexity and algorithm stability without affecting the estimation accuracy. The first stage of our ME algorithm is a multiscale complex-valued transform based on all-pass filters. We have developed wave digital filter (WDF) structures to ensure better performance and higher robustness in fixed-point arithmetic environments. For higher efficiency the structures utilize some of the dedicated filtering instructions present in the 'C5510 DSP part of the dual-core processor. The calculation of motion vectors is performed using maximum phase-correlation criteria. Minimum subband squared difference is estimated for every subband level of the decomposition. To minimize the number of real-time computations we have adapted this algorithm to the functionality of the hardware extensions present in the 'C5510. We consider our approach quite promising for realizing video coding standards on mobile devices, as many of them utilize fixed-point DSP architectures.
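
The maximum phase-correlation criterion mentioned above can be sketched in a few lines for a pair of co-located blocks: normalize the cross-power spectrum, return to the spatial domain, and read the motion vector off the correlation peak. The sketch uses a plain floating-point FFT; the fixed-point complex wavelet decomposition and the subband refinement described in the abstract are not reproduced.

    # Sketch of the maximum phase-correlation criterion for a translational motion
    # vector between two blocks, using a plain floating-point FFT.
    import numpy as np

    def phase_correlation_shift(block_a, block_b, eps=1e-12):
        Fa = np.fft.fft2(block_a)
        Fb = np.fft.fft2(block_b)
        cross = Fa * np.conj(Fb)
        surface = np.fft.ifft2(cross / (np.abs(cross) + eps)).real   # phase-correlation surface
        dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
        h, w = surface.shape
        if dy > h // 2:
            dy -= h                                                   # unwrap negative shifts
        if dx > w // 2:
            dx -= w
        return int(dy), int(dx)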

Proceedings ArticleDOI
08 Sep 2005
TL;DR: Experiments with synthetic image sequences demonstrate that by properly designing the start-up filter, the proposed technique provides, with a considerably reduced number of computations, a performance similar to that in a recently introduced method.
Abstract: This contribution introduces a computationally-efficient scheme for phase-based motion estimation. The local phase for consecutive dyadic scales and six different directions is retrieved through a complex-valued subband decomposition. It is obtained by a successive use of a recursive Hilbert transformer and recursive power-complementary half-band filter pairs. The so-called approximately linear-phase recursive half-band filter proposed by Renfors and Saramaki is used as a start-up filter for generating both the Hilbert transformer and the half-band filter pairs. Experiments with synthetic image sequences demonstrate that by properly designing the start-up filter, the proposed technique provides, with a considerably reduced number of computations, a performance similar to that in a recently introduced method.

Proceedings ArticleDOI
18 Mar 2005
TL;DR: An approach to fractal image compression is proposed that provides fast decoding of the compressed image in one iteration and makes it possible to know the exact error contributed by each range block to the collage error at each step of partition-scheme optimization.
Abstract: We propose an approach to fractal image compression that provides fast decoding of the compressed image in one iteration and makes it possible to know the exact error contributed by each range block to the collage error at each step of partition-scheme optimization. A modification of this method assuming equal sizes of domain and range blocks is considered. Results of applying the proposed approach to test images are analyzed. Further research directions are discussed.

Patent
10 Oct 2005
TL;DR: In this paper, a prefix and a suffix stream are formed using codewords for said at least one symbol in the source text and each subsequent prefix pair in said prefix stream is concatenated to form a concatenate prefix stream.
Abstract: The invention relates to method for coding data in an electronic device. In the method a source text is obtained in the memory. At least one symbol in the source text is grouped in at least one symbol group based on the probability of said at least one symbol in the source text. A prefix and a suffix stream are formed using codewords for said at least one symbol. Each subsequent prefix pair in said prefix stream is concatenated to form a concatenated prefix stream. The concatenated prefix stream is set as a new source text in said memory and the coding procedure is repeated if the number of symbol groups among said at least one symbol group is less than or equal to a predefined threshold value.