
Showing papers on "Wavelet" published in 2009


Posted Content
TL;DR: In this paper, the spectral graph wavelet operator is defined from the spectral decomposition of the discrete graph Laplacian using a wavelet generating kernel and a scale parameter, and the wavelets are formed by localizing this operator with an indicator function.
Abstract: We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian $\L$. Given a wavelet generating kernel $g$ and a scale parameter $t$, we define the scaled wavelet operator $T_g^t = g(t\L)$. The spectral graph wavelets are then formed by localizing this operator by applying it to an indicator function. Subject to an admissibility condition on $g$, this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need for diagonalizing $\L$. We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains.
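The construction above admits a compact numerical sketch. The following is a minimal dense-matrix illustration, not the paper's fast Chebyshev algorithm: it diagonalizes the Laplacian directly, which is only practical for small graphs, and the kernel g and the example graph are illustrative assumptions.

```python
import numpy as np

def graph_laplacian(W):
    """Combinatorial Laplacian L = D - W of a weighted adjacency matrix W."""
    return np.diag(W.sum(axis=1)) - W

def spectral_graph_wavelet(W, t, n, g=lambda x: x * np.exp(-x)):
    """Wavelet at scale t centred on vertex n: psi = g(t L) applied to delta_n."""
    L = graph_laplacian(W)
    lam, U = np.linalg.eigh(L)             # spectral decomposition of L
    T_g = U @ np.diag(g(t * lam)) @ U.T    # scaled wavelet operator g(tL)
    delta = np.zeros(W.shape[0])
    delta[n] = 1.0                         # indicator of vertex n
    return T_g @ delta                     # localized wavelet

# Tiny example: a weighted 4-cycle
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(spectral_graph_wavelet(W, t=2.0, n=0))
```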

1,119 citations


Journal ArticleDOI
TL;DR: A novel image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed, aimed at solving the fusion problem of multifocus images, and significantly outperforms the traditional discrete wavelet transform-based and discrete wavelet frame transform-based image fusion methods.

593 citations


Journal ArticleDOI
TL;DR: A fast, powerful and stable filter based on combined wavelet and Fourier analysis for the elimination of horizontal or vertical stripes in images is presented and compared with other types of destriping filters.
Abstract: A fast, powerful and stable filter based on combined wavelet and Fourier analysis for the elimination of horizontal or vertical stripes in images is presented and compared with other types of destriping filters. The filter aims at a strict separation between artifacts and original features, allowing both suppression of the unwanted structures and a high degree of preservation of the original image information. The results are validated by visual assessments, as well as by quantitative estimation of the image energy loss. The capabilities and the performance of the filter are tested on a number of case studies related to applications in tomographic imaging. The case studies include (i) suppression of waterfall artifacts in electron microscopy images based on focussed ion beam nanotomography, (ii) removal of different types of ring artifacts in synchrotron based X-ray microtomography and (iii) suppression of horizontal stripe artifacts from phase projections in grating interferometry.
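For a concrete picture of the combined wavelet-Fourier idea, here is a hedged sketch using PyWavelets and NumPy. The choice of detail band (cV), the stripe axis (axis 0), and the Gaussian damping width are assumptions to be adapted to the stripe orientation at hand, not the paper's exact settings.

```python
import numpy as np
import pywt

def destripe(image, wavelet="db4", levels=4, sigma=2.0):
    """Wavelet-FFT destriping: damp near-DC frequencies along the stripe axis
    of the detail band assumed to concentrate the stripes (here cV, axis 0)."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    out = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        f = np.fft.fft(cV, axis=0)                       # FFT along the stripes
        k = np.fft.fftfreq(cV.shape[0])
        damp = 1.0 - np.exp(-(k ** 2) / (2.0 * (sigma / cV.shape[0]) ** 2))
        cV_f = np.real(np.fft.ifft(f * damp[:, None], axis=0))
        out.append((cH, cV_f, cD))
    return pywt.waverec2(out, wavelet)
```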

570 citations


Journal ArticleDOI
TL;DR: A new measure of image similarity called the complex wavelet structural similarity (CW-SSIM) index is introduced and shown to be applicable as a general-purpose image similarity index; it is also demonstrated to be computationally inexpensive and robust to small rotations and translations.
Abstract: We introduce a new measure of image similarity called the complex wavelet structural similarity (CW-SSIM) index and show its applicability as a general purpose image similarity index. The key idea behind CW-SSIM is that certain image distortions lead to consistent phase changes in the local wavelet coefficients, and that a consistent phase shift of the coefficients does not change the structural content of the image. By conducting four case studies, we have demonstrated the superiority of the CW-SSIM index against other indices (e.g., Dice, Hausdorff distance) commonly used for assessing the similarity of a given pair of images. In addition, we show that the CW-SSIM index has a number of advantages. It is robust to small rotations and translations. It provides useful comparisons even without a preprocessing image registration step, which is essential for other indices. Moreover, it is computationally less expensive.
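The key quantity can be sketched in a few lines. Below is a minimal, hedged implementation of the local CW-SSIM index for two patches of complex wavelet coefficients; obtaining those coefficients (e.g. from a complex steerable pyramid) is left out, and the stabilizing constant K is an illustrative choice.

```python
import numpy as np

def cw_ssim_local(cx, cy, K=0.01):
    """Local CW-SSIM for matching patches of complex wavelet coefficients."""
    cx, cy = np.ravel(cx), np.ravel(cy)
    num = 2.0 * np.abs(np.sum(cx * np.conj(cy))) + K
    den = np.sum(np.abs(cx) ** 2) + np.sum(np.abs(cy) ** 2) + K
    return num / den

# A consistent phase shift leaves the index at 1, as the abstract describes:
cx = np.random.randn(49) + 1j * np.random.randn(49)
print(cw_ssim_local(cx, cx * np.exp(1j * 0.7)))   # ~1.0
print(cw_ssim_local(cx, 0.3 * cx))                # < 1.0 (magnitude distortion)
```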

568 citations


Journal ArticleDOI
TL;DR: In this paper, the wavelet thresholding principle is used in the decomposition modes resulting from applying EMD to a signal, and it is shown that although a direct application of this principle is not feasible in the EMD case, it can be appropriately adapted by exploiting the special characteristics of the EMD decomposition modes.
Abstract: One of the tasks for which empirical mode decomposition (EMD) is potentially useful is nonparametric signal denoising, an area for which wavelet thresholding has been the dominant technique for many years. In this paper, the wavelet thresholding principle is used in the decomposition modes resulting from applying EMD to a signal. We show that although a direct application of this principle is not feasible in the EMD case, it can be appropriately adapted by exploiting the special characteristics of the EMD decomposition modes. In the same manner, inspired by the translation invariant wavelet thresholding, a similar technique adapted to EMD is developed, leading to enhanced denoising performance.
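As an illustration of the adaptation described above, the sketch below applies interval thresholding to a set of IMFs: whole intervals between zero crossings of each mode are kept or discarded according to their extremum, rather than shrinking samples individually. The EMD step is assumed to be done elsewhere, and the universal-threshold rule is a generic assumption, not the paper's exact choice.

```python
import numpy as np

def interval_threshold(mode, thr):
    """Keep or zero entire zero-crossing intervals of one IMF."""
    out = np.zeros_like(mode)
    zc = np.where(np.diff(np.signbit(mode)))[0] + 1          # sign-change positions
    bounds = np.concatenate(([0], zc, [len(mode)]))
    for a, b in zip(bounds[:-1], bounds[1:]):
        if np.max(np.abs(mode[a:b])) > thr:
            out[a:b] = mode[a:b]                             # interval survives intact
    return out

def emd_interval_denoise(imfs):
    """imfs: (n_modes, n_samples) array from an EMD of the noisy signal."""
    n = imfs.shape[1]
    denoised = np.zeros(n)
    for mode in imfs:
        sigma = np.median(np.abs(mode)) / 0.6745             # robust noise estimate
        denoised += interval_threshold(mode, sigma * np.sqrt(2.0 * np.log(n)))
    return denoised
```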

553 citations


Journal ArticleDOI
TL;DR: A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.
Abstract: Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and, therefore, this framework goes beyond simply assuming that the data are compressible in a wavelet basis. The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling. The algorithm is fully developed and demonstrated using several natural images, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.

516 citations


Journal ArticleDOI
TL;DR: This paper presents a generic and patient-specific classification system designed for robust and accurate detection of ECG heartbeat patterns that can adapt to significant interpatient variations in ECG patterns by training the optimal network structure, and achieves higher accuracy over larger datasets.
Abstract: This paper presents a generic and patient-specific classification system designed for robust and accurate detection of ECG heartbeat patterns. The proposed feature extraction process utilizes morphological wavelet transform features, which are projected onto a lower dimensional feature space using principal component analysis, and temporal features from the ECG data. For the pattern recognition unit, feedforward and fully connected artificial neural networks, which are optimally designed for each patient by the proposed multidimensional particle swarm optimization technique, are employed. By using relatively small common and patient-specific training data, the proposed classification system can adapt to significant interpatient variations in ECG patterns by training the optimal network structure, and thus, achieves higher accuracy over larger datasets. The classification experiments over a benchmark database demonstrate that the proposed system achieves average accuracies and sensitivities better than most of the current state-of-the-art algorithms for detection of ventricular ectopic beats (VEBs) and supra-VEBs (SVEBs). Over the entire database, the average accuracy-sensitivity performances of the proposed system for VEB and SVEB detections are 98.3%-84.6% and 97.4%-63.5%, respectively. Finally, due to its parameter-invariant nature, the proposed system is highly generic, and thus, applicable to any ECG dataset.

423 citations


Proceedings ArticleDOI
07 Nov 2009
TL;DR: Block-based random image sampling is coupled with a projection-driven compressed-sensing recovery that encourages sparsity in the domain of directional transforms simultaneously with a smooth reconstructed image, yielding images with quality that matches or exceeds that produced by a popular, yet computationally expensive, technique which minimizes total variation.
Abstract: Block-based random image sampling is coupled with a projection-driven compressed-sensing recovery that encourages sparsity in the domain of directional transforms simultaneously with a smooth reconstructed image. Both contourlets as well as complex-valued dual-tree wavelets are considered for their highly directional representation, while bivariate shrinkage is adapted to their multiscale decomposition structure to provide the requisite sparsity constraint. Smoothing is achieved via a Wiener filter incorporated into iterative projected Landweber compressed-sensing recovery, yielding fast reconstruction. The proposed approach yields images with quality that matches or exceeds that produced by a popular, yet computationally expensive, technique which minimizes total variation. Additionally, reconstruction quality is substantially superior to that from several prominent pursuits-based algorithms that do not include any smoothing.
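A much-simplified 1D sketch of the recovery loop is given below: a projected Landweber step toward the measurements, a Wiener smoothing step, and a sparsity step by wavelet shrinkage. The directional transforms and bivariate shrinkage of the paper are replaced by an ordinary orthogonal wavelet with soft thresholding, and the unit step size assumes Phi has (near-)orthonormal rows; all of these are assumptions.

```python
import numpy as np
import pywt
from scipy.signal import wiener

def spl_recover(y, Phi, n_iters=50, thr=0.1, wavelet="db4"):
    """Recover x from y = Phi @ x; assumes Phi has (near-)orthonormal rows."""
    x = Phi.T @ y                                        # initial back-projection
    for _ in range(n_iters):
        x = x + Phi.T @ (y - Phi @ x)                    # projected Landweber step
        x = wiener(x, mysize=5)                          # smoothing step
        coeffs = pywt.wavedec(x, wavelet)
        coeffs = [pywt.threshold(c, thr, mode="soft") for c in coeffs]
        x = pywt.waverec(coeffs, wavelet)[: len(x)]      # sparsity step
    return x
```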

368 citations


Journal ArticleDOI
TL;DR: This review provides a comprehensive survey of the current work on wavelet approaches to TCM and also proposes two new prospects for future studies in this area.
Abstract: This paper reviews the state of the art of wavelet analysis for tool condition monitoring (TCM). Wavelet analysis is today the most important non-stationary signal processing tool and is popular in machining sensor signal analysis. Based on the nature of the monitored signals, wavelet approaches are introduced and the advantages of wavelet analysis over Fourier methods are discussed for TCM. According to the multiresolution, sparsity and localization properties of the wavelet transform, the literature is reviewed in five TCM categories: time–frequency analysis of machining signals, signal denoising, feature extraction, singularity analysis for tool state estimation, and density estimation for tool wear classification. This review provides a comprehensive survey of the current work on wavelet approaches to TCM and also proposes two new prospects for future studies in this area.

348 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel approach based on the shearlet transform: a multiscale directional transform with a greater ability to localize distributed discontinuities such as edges, which is useful to design simple and effective algorithms for the detection of corners and junctions.
Abstract: It is well known that the wavelet transform provides a very effective framework for analysis of multiscale edges. In this paper, we propose a novel approach based on the shearlet transform: a multiscale directional transform with a greater ability to localize distributed discontinuities such as edges. Indeed, unlike traditional wavelets, shearlets are theoretically optimal in representing images with edges and, in particular, have the ability to fully capture directional and other geometrical features. Numerical examples demonstrate that the shearlet approach is highly effective at detecting both the location and orientation of edges, and outperforms methods based on wavelets as well as other standard methods. Furthermore, the shearlet approach is useful to design simple and effective algorithms for the detection of corners and junctions.

337 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of fitting a parametric model to time-series data that are afflicted by correlated noise, represented by a sum of two stationary Gaussian processes: one that is uncorrelated in time and another that has a power spectral density varying as 1/f^γ.
Abstract: We consider the problem of fitting a parametric model to time-series data that are afflicted by correlated noise. The noise is represented by a sum of two stationary Gaussian processes: one that is uncorrelated in time, and another that has a power spectral density varying as 1/f^γ. We present an accurate and fast [O(N)] algorithm for parameter estimation based on computing the likelihood in a wavelet basis. The method is illustrated and tested using simulated time-series photometry of exoplanetary transits, with particular attention to estimating the mid-transit time. We compare our method to two other methods that have been used in the literature, the time-averaging method and the residual-permutation method. For noise processes that obey our assumptions, the algorithm presented here gives more accurate results for mid-transit times and truer estimates of their uncertainties.
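The wavelet-likelihood idea can be sketched as follows: transform the residuals with a discrete wavelet transform and treat the coefficients as independent Gaussians whose variance grows with scale according to a 1/f^γ plus white-noise model. The per-level variance law used here (a factor of 2^γ per coarser level) and the neglect of the scaling coefficients are simplifications, not the paper's exact prescription.

```python
import numpy as np
import pywt

def wavelet_loglike(residuals, sigma_w, sigma_r, gamma=1.0, wavelet="db2"):
    """Approximate log-likelihood of residuals under white + 1/f^gamma noise,
    evaluated in a wavelet basis with level-dependent coefficient variances."""
    coeffs = pywt.wavedec(residuals, wavelet)
    loglike = 0.0
    # coeffs[1:] runs from coarsest to finest detail level; j = 1 is the finest
    for j, detail in enumerate(reversed(coeffs[1:]), start=1):
        var = sigma_w ** 2 + sigma_r ** 2 * 2.0 ** (gamma * (j - 1))
        loglike += -0.5 * np.sum(detail ** 2 / var + np.log(2 * np.pi * var))
    return loglike   # the few scaling coefficients are ignored in this sketch
```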

Journal ArticleDOI
TL;DR: In this article, a multi-resolution analysis based on the wavelet transform (WT) has been implemented to study NDVI time series, which can be decomposed using this MRA as a sum of series associated with different temporal scales.

Journal ArticleDOI
TL;DR: In this article, a new vision sensor-based fire-detection method for an early warning fire-monitoring system was proposed, where candidate fire regions were detected using modified versions of previous related methods, such as the detection of moving regions and fire-colored pixels.

01 Jan 2009
TL;DR: This paper aims to identify the advantages of the wavelet transform, an improved version of the Fourier transform, compared with the Fourier transform for signal transmission.
Abstract: Wavelet analysis is an exciting new method for solving difficult problems in mathematics, physics, and engineering, with modern applications as diverse as wave propagation, data compression, signal processing, image processing, pattern recognition, computer graphics, the detection of aircraft and submarines, and medical imaging technology. Wavelets allow complex information such as music, speech, images and patterns to be decomposed into elementary forms at different positions and scales and subsequently reconstructed with high precision. Signal transmission is based on the transmission of a series of numbers, and the series representation of a function is important in all types of signal transmission. The wavelet representation of a function is a new technique, and the wavelet transform of a function can be seen as an improved version of the Fourier transform. The Fourier transform is a powerful tool for analyzing the components of a stationary signal, but it fails for non-stationary signals, whereas the wavelet transform allows the components of a non-stationary signal to be analyzed. In this paper, our main goal is to set out the advantages of the wavelet transform compared to the Fourier transform.
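A small numerical illustration of the point: the Fourier spectrum of a two-part signal reveals which frequencies are present but not when they occur, while a continuous wavelet transform keeps the time information. The signal, wavelet and scales below are arbitrary illustrative choices.

```python
import numpy as np
import pywt

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
# 50 Hz tone in the first second, 120 Hz tone in the second: non-stationary
signal = np.where(t < 1.0, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))

spectrum = np.abs(np.fft.rfft(signal))        # peaks at 50 and 120 Hz, no timing info
coefs, freqs = pywt.cwt(signal, scales=np.arange(1, 64), wavelet="morl",
                        sampling_period=1.0 / fs)
# coefs has shape (n_scales, n_samples): energy sits near 50 Hz for t < 1 s and
# near 120 Hz for t > 1 s, so the change over time remains visible.
```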

Journal ArticleDOI
TL;DR: The obtained results show that the proposed model can predict both short- and long-term precipitation events because multi-scale time series are used as the ANN input layer.

Journal ArticleDOI
27 Jul 2009
TL;DR: A new family of second-generation wavelets constructed using a robust data-prediction lifting scheme that achieves results, previously computed by solving an inhomogeneous Laplace equation, through an explicit computation, avoiding the difficulties in solving large and poorly-conditioned systems of equations.
Abstract: We propose a new family of second-generation wavelets constructed using a robust data-prediction lifting scheme. The support of these new wavelets is constructed based on the edge content of the image and avoids having pixels from both sides of an edge. Multi-resolution analysis, based on these new edge-avoiding wavelets, shows a better decorrelation of the data compared to common linear translation-invariant multi-resolution analyses. The reduced inter-scale correlation allows us to avoid halo artifacts in band-independent multi-scale processing without taking any special precautions. We thus achieve nonlinear data-dependent multi-scale edge-preserving image filtering and processing at computation times which are linear in the number of image pixels. The new wavelets encode, in their shape, the smoothness information of the image at every scale. We use this to derive a new edge-aware interpolation scheme that achieves results, previously computed by solving an inhomogeneous Laplace equation, through an explicit computation. We thus avoid the difficulties in solving large and poorly-conditioned systems of equations. We demonstrate the effectiveness of the new wavelet basis for various computational photography applications such as multi-scale dynamic-range compression, edge-preserving smoothing and detail enhancement, and image colorization.
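The edge-avoiding idea can be conveyed with a hedged 1D sketch of a single weighted predict step: odd samples are predicted from their even neighbours with weights that collapse across large intensity jumps, so the resulting detail coefficients do not straddle edges. The Gaussian weight, the value of sigma and the single lifting step are simplifications, not the paper's full construction.

```python
import numpy as np

def edge_avoiding_lift(signal, sigma=0.1):
    """One weighted predict step; sigma sets what counts as an 'edge'
    (assumes intensities roughly in [0, 1])."""
    even, odd = signal[0::2], signal[1::2]
    left = even[: len(odd)]                       # even neighbour to the left
    right = np.roll(even, -1)[: len(odd)]         # right neighbour (wraps at the end)
    wl = np.exp(-((odd - left) / sigma) ** 2)     # weight collapses across an edge
    wr = np.exp(-((odd - right) / sigma) ** 2)
    prediction = (wl * left + wr * right) / (wl + wr + 1e-12)
    detail = odd - prediction                     # stays small even next to edges
    return even, detail
```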

Journal ArticleDOI
TL;DR: Using a combined neural network model to guide model selection for the classification of electroencephalogram (EEG) signals achieved accuracy rates higher than those of the stand-alone neural network model.

Journal ArticleDOI
TL;DR: A new semi-blind reference watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) is proposed for copyright protection and authenticity, and it is shown that the proposed scheme also withstands the ambiguity attack.

Journal ArticleDOI
TL;DR: A fault diagnosis system is proposed for internal combustion engines using wavelet packet transform (WPT) and artificial neural network (ANN) techniques, achieving an average classification accuracy of over 95% for various engine working conditions.
Abstract: In the present study, a fault diagnosis system is proposed for internal combustion engines using wavelet packet transform (WPT) and artificial neural network (ANN) techniques. In fault diagnosis for mechanical systems, WPT is a well-known signal processing technique for fault detection and identification. The signal processing algorithm of the present system is adapted from previous work on speech recognition. In the preprocessing of sound emission signals, the entropy of the WPT coefficients is evaluated and treated as the feature set used to distinguish the fault conditions. WPT avoids the long computing time and heavy computational load of the continuous wavelet transform (CWT), and it also overcomes the frequency-band limitation of the discrete wavelet transform (DWT), which decomposes only the approximation coefficients. In the experimental work, the selected wavelets are used as mother wavelets to build and apply the proposed WPT technique. In the classification stage, to verify the effectiveness of the proposed generalized regression neural network (GRNN) in fault diagnosis, a conventional back-propagation network (BPN) is compared with the GRNN. The experimental results show that the proposed system achieved an average classification accuracy of over 95% for various engine working conditions.
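The feature-extraction stage lends itself to a short sketch: decompose the signal with a wavelet packet transform and use the entropy of each terminal node's coefficients as a feature. The wavelet, depth and entropy definition below are illustrative assumptions, and the GRNN/BPN classification stage is omitted.

```python
import numpy as np
import pywt

def wpt_entropy_features(signal, wavelet="db4", level=3):
    """One entropy feature per terminal wavelet-packet node (frequency band)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    features = []
    for node in wp.get_level(level, order="freq"):
        c = np.asarray(node.data)
        p = c ** 2 / (np.sum(c ** 2) + 1e-12)            # normalised energy distribution
        features.append(-np.sum(p * np.log(p + 1e-12)))  # Shannon entropy of the band
    return np.array(features)
```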

Journal ArticleDOI
TL;DR: A novel framework for IQA to mimic the human visual system (HVS) by incorporating the merits from multiscale geometric analysis (MGA), contrast sensitivity function (CSF), and Weber's law of just noticeable difference (JND) is developed.
Abstract: Reduced-reference (RR) image quality assessment (IQA) has been recognized as an effective and efficient way to predict the visual quality of distorted images. The current standard is the wavelet-domain natural image statistics model (WNISM), which applies the Kullback-Leibler divergence between the marginal distributions of wavelet coefficients of the reference and distorted images to measure the image distortion. However, WNISM fails to consider the statistical correlations of wavelet coefficients in different subbands and the visual response characteristics of the mammalian cortical simple cells. In addition, wavelet transforms are optimal greedy approximations to extract singularity structures, so they fail to explicitly extract the image geometric information, e.g., lines and curves. Finally, wavelet coefficients are dense for smooth image edge contours. In this paper, to target the aforementioned problems in IQA, we develop a novel framework for IQA to mimic the human visual system (HVS) by incorporating the merits from multiscale geometric analysis (MGA), contrast sensitivity function (CSF), and Weber's law of just noticeable difference (JND). In the proposed framework, MGA is utilized to decompose images and then extract features to mimic the multichannel structure of HVS. Additionally, MGA offers a series of transforms including wavelet, curvelet, bandelet, contourlet, wavelet-based contourlet transform (WBCT), and hybrid wavelets and directional filter banks (HWD), and different transforms capture different types of image geometric information. CSF is applied to weight coefficients obtained by MGA to simulate the appearance of images to observers by taking into account many of the nonlinearities inherent in HVS. JND is finally introduced to produce a noticeable variation in sensory experience. Thorough empirical studies are carried out upon the LIVE database against subjective mean opinion score (MOS) and demonstrate that 1) the proposed framework has good consistency with subjective perception values and the objective assessment results can well reflect the visual quality of images, 2) different transforms in MGA under the new framework perform better than the standard WNISM and some of them even perform better than the standard full-reference IQA model, i.e., the mean structural similarity index, and 3) HWD performs best among all transforms in MGA under the framework.

Journal ArticleDOI
TL;DR: In this paper, a constrained linear combination of the channels with minimum error variance, together with Wiener filtering, is implemented on a frame of spherical wavelets called needlets, allowing localised filtering in both pixel space and harmonic space.
Abstract: Context. The WMAP satellite has made available high quality maps of the sky in five frequency bands ranging from 22 to 94 GHz, with the main scientific objective of studying the anisotropies of the Cosmic Microwave Background (CMB). These maps, however, contain a mixture of emissions of various astrophysical origins, superimposed on the CMB emission. Aims. The objective of the present work is to make a high resolution CMB map in which contamination by such galactic and extra-galactic foreground emissions, as well as by instrumental noise, is as low as possible. Methods. The method used is an implementation of a constrained linear combination of the channels with minimum error variance, and of Wiener filtering, on a frame of spherical wavelets called needlets, allowing localised filtering in both pixel space and harmonic space. Results. We obtain a low contamination low noise CMB map at the resolution of the WMAP W channel, which can be used for a range of scientific studies. We also obtain a Wiener-filtered version with minimal integrated error. Conclusions. The resulting CMB maps offer significantly better rejection of galactic foregrounds than previous CMB maps from WMAP data. They can be considered as the most precise full-sky CMB temperature maps to date.

Journal ArticleDOI
TL;DR: Wavelet behavior is found to be strongly impacted by the degree of asymmetry of the wavelet in both the frequency and the time domain, as quantified by the third central moments.
Abstract: The influence of higher-order wavelet properties on the analytic wavelet transform behavior is investigated, and wavelet functions offering advantageous performance are identified. This is accomplished through detailed investigation of the generalized Morse wavelets, a two-parameter family of exactly analytic continuous wavelets. The degree of time/frequency localization, the existence of a mapping between scale and frequency, and the bias involved in estimating properties of modulated oscillatory signals, are proposed as important considerations. Wavelet behavior is found to be strongly impacted by the degree of asymmetry of the wavelet in both the frequency and the time domain, as quantified by the third central moments. A particular subset of the generalized Morse wavelets, recognized as deriving from an inhomogeneous Airy function, emerge as having particularly desirable properties. These "Airy wavelets" substantially outperform the only approximately analytic Morlet wavelets for high time localization. Special cases of the generalized Morse wavelets are examined, revealing a broad range of behaviors which can be matched to the characteristics of a signal.
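For reference, a generalized Morse wavelet can be evaluated directly in the frequency domain, where it takes the form a_{β,γ} ω^β exp(-ω^γ) on positive frequencies only (hence exactly analytic). The sketch below uses a unit-peak normalisation and an arbitrary frequency grid as conveniences, not the paper's conventions.

```python
import numpy as np

def morse_wavelet_freq(omega, beta=3.0, gamma=3.0):
    """Generalized Morse wavelet in the frequency domain (zero for omega <= 0)."""
    omega = np.asarray(omega, dtype=float)
    peak = (beta / gamma) ** (1.0 / gamma)          # peak (carrier) frequency
    psi = np.zeros_like(omega)
    pos = omega > 0
    psi[pos] = (omega[pos] / peak) ** beta * np.exp(beta / gamma - omega[pos] ** gamma)
    return psi                                      # unit value at the peak frequency

# Complex-valued (analytic) time-domain wavelet via an inverse FFT
omega = 2 * np.pi * np.fft.fftfreq(1024, d=1.0)
psi_t = np.fft.ifft(morse_wavelet_freq(omega))
```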

Journal ArticleDOI
Kwang Eun Jang, Sungho Tak, Jinwook Jung, Jaeduck Jang, Yong Jeong, Jong Chul Ye
TL;DR: A wavelet minimum description length (Wavelet-MDL) detrending algorithm is proposed to decompose NIRS measurements into global trends, hemodynamic signals, and uncorrelated noise components at distinct scales, and experimental results demonstrate that the new detrending algorithm outperforms the conventional approaches.
Abstract: Near-infrared spectroscopy (NIRS) can be employed to investigate brain activities associated with regional changes of the oxy- and deoxyhemoglobin concentration by measuring the absorption of near-infrared light through the intact skull. NIRS is regarded as a promising neuroimaging modality thanks to its excellent temporal resolution and flexibility for routine monitoring. Recently, the general linear model (GLM), which is a standard method for functional MRI (fMRI) analysis, has been employed for quantitative analysis of NIRS data. However, the GLM often fails in NIRS when there exists an unknown global trend due to breathing, cardiac, vasomotion, or other experimental errors. We propose a wavelet minimum description length (Wavelet-MDL) detrending algorithm to overcome this problem. Specifically, the wavelet transform is applied to decompose NIRS measurements into global trends, hemodynamic signals, and uncorrelated noise components at distinct scales. The minimum description length (MDL) principle plays an important role in preventing over- or underfitting and facilitates optimal model order selection for the global trend estimate. Experimental results demonstrate that the new detrending algorithm outperforms the conventional approaches.
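The detrending idea can be sketched simply: decompose the time series with a DWT and treat the coarsest (low-frequency) part as the global trend to be removed. Choosing the cut-off level by hand, as below, replaces the paper's MDL-based model-order selection, which this sketch does not reproduce.

```python
import numpy as np
import pywt

def wavelet_detrend(signal, wavelet="db4", trend_levels=2):
    """Estimate the global trend from the approximation plus the coarsest
    `trend_levels` detail levels, and subtract it from the signal."""
    coeffs = pywt.wavedec(signal, wavelet)
    trend_coeffs = ([coeffs[0]] + list(coeffs[1:1 + trend_levels]) +
                    [np.zeros_like(c) for c in coeffs[1 + trend_levels:]])
    trend = pywt.waverec(trend_coeffs, wavelet)[: len(signal)]
    return signal - trend, trend      # detrended signal and the estimated trend
```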

Journal ArticleDOI
TL;DR: A new network signal modelling technique for detecting network anomalies, combining the wavelet approximation and system identification theory, is proposed, which achieves high detection rates in terms of both attack instances and attack types.
Abstract: Signal processing techniques have been applied recently for analyzing and detecting network anomalies due to their potential to find novel or unknown intrusions. In this paper, we propose a new network signal modelling technique for detecting network anomalies, combining the wavelet approximation and system identification theory. In order to characterize network traffic behaviors, we present fifteen features and use them as the input signals in our system. We then evaluate our approach with the 1999 DARPA intrusion detection dataset and conduct a comprehensive analysis of the intrusions in the dataset. Evaluation results show that the approach achieves high detection rates in terms of both attack instances and attack types. Furthermore, we conduct a full day's evaluation in a real large-scale WiFi ISP network where five attack types are successfully detected from over 30 million flows.

Journal ArticleDOI
TL;DR: With respect to space-time tensor-product wavelet bases, parabolic initial boundary value problems are equivalently formulated as bi-infinite matrix problems and adaptive wavelet methods are shown to yield sequences of approximate solutions which converge at the optimal rate.
Abstract: With respect to space-time tensor-product wavelet bases, parabolic initial boundary value problems are equivalently formulated as bi-infinite matrix problems. Adaptive wavelet methods are shown to yield sequences of approximate solutions which converge at the optimal rate. In case the spatial domain is of product type, the use of spatial tensor product wavelet bases is proved to overcome the so-called curse of dimensionality, i.e., the reduction of the convergence rate with increasing spatial dimension.

Journal ArticleDOI
TL;DR: A complexified version of the Riesz transform, which has the remarkable property of mapping a real-valued wavelet basis of L2(R2) into a complex one, is considered; it yields a representation where each wavelet index is associated with a local orientation, an amplitude and a phase.
Abstract: The monogenic signal is the natural 2D counterpart of the 1D analytic signal. We propose to transpose the concept to the wavelet domain by considering a complexified version of the Riesz transform which has the remarkable property of mapping a real-valued (primary) wavelet basis of L2(R2) into a complex one. The Riesz operator is also steerable in the sense that it gives access to the Hilbert transform of the signal along any orientation. Having set those foundations, we specify a primary polyharmonic spline wavelet basis of L2(R2) that involves a single Mexican-hat-like mother wavelet (Laplacian of a B-spline). The important point is that our primary wavelets are quasi-isotropic: they behave like multiscale versions of the fractional Laplace operator from which they are derived, which ensures steerability. We propose to pair these real-valued basis functions with their complex Riesz counterparts to specify a multiresolution monogenic signal analysis. This yields a representation where each wavelet index is associated with a local orientation, an amplitude and a phase. We give a corresponding wavelet-domain method for estimating the underlying instantaneous frequency. We also provide a mechanism for improving the shift and rotation-invariance of the wavelet decomposition and show how to implement the transform efficiently using perfect-reconstruction filterbanks. We illustrate the specific feature-extraction capabilities of the representation and present novel examples of wavelet-domain processing; in particular, a robust, tensor-based analysis of directional image patterns, the demodulation of interferograms, and the reconstruction of digital holograms.
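The complexified Riesz transform at the heart of this construction is a single multiplication in the 2D Fourier domain, by (ω_x + iω_y)/|ω|. The hedged sketch below applies it to one real-valued wavelet band and reads off the monogenic amplitude, phase and orientation; the frequency-ordering convention and the treatment of the DC term are assumptions.

```python
import numpy as np

def complex_riesz(band):
    """Apply the complex Riesz transform to one real-valued 2D wavelet band and
    return the transform plus monogenic amplitude, phase and orientation."""
    ny, nx = band.shape
    wy = np.fft.fftfreq(ny)[:, None]
    wx = np.fft.fftfreq(nx)[None, :]
    norm = np.sqrt(wx ** 2 + wy ** 2)
    norm[0, 0] = 1.0                                # avoid 0/0 at the DC term
    mult = (wx + 1j * wy) / norm
    mult[0, 0] = 0.0                                # the Riesz transform kills DC
    r = np.fft.ifft2(np.fft.fft2(band) * mult)
    amplitude = np.sqrt(band ** 2 + np.abs(r) ** 2)     # local amplitude
    phase = np.arctan2(np.abs(r), band)                 # local phase
    orientation = np.angle(r)                           # local orientation
    return r, amplitude, phase, orientation
```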

Journal ArticleDOI
TL;DR: In this article, a technique for the fusion of multispectral and hyperspectral images to enhance the spatial resolution of the latter is presented, based on a Bayesian estimation of the HS image, assuming a joint normal model for the images and an additive noise imaging model for the HS image.
Abstract: In this paper, a technique is presented for the fusion of multispectral (MS) and hyperspectral (HS) images to enhance the spatial resolution of the latter. The technique works in the wavelet domain and is based on a Bayesian estimation of the HS image, assuming a joint normal model for the images and an additive noise imaging model for the HS image. In the complete model, an operator is defined, describing the spatial degradation of the HS image. Since this operator is, in general, not exactly known and in order to alleviate the burden of solving the inverse operation (a deconvolution problem), an interpolation is performed a priori. Furthermore, the knowledge of the spatial degradation is restricted to an approximation based on the resolution difference between the images. The technique is compared to its counterpart in the image domain and validated for noisy conditions. Furthermore, its performance is compared to several state-of-the-art pansharpening techniques, in the case where the MS image becomes a panchromatic image, and to MS and HS image fusion techniques from the literature.

Patent
16 Dec 2009
TL;DR: In this paper, the authors used wavelet transforms to analyze heart-related electronic signals and extract features that can accurately identify various states of the cardiovascular system and predict the presence and extent of cardiovascular disease in general.
Abstract: The present invention relates to advanced signal processing methods including digital wavelet transformation to analyze heart-related electronic signals and extract features that can accurately identify various states of the cardiovascular system. The invention may be utilized to estimate the extent of blood volume loss, distinguish blood volume loss from physiological activities associated with exercise, and predict the presence and extent of cardiovascular disease in general.

Book
12 Nov 2009

Proceedings ArticleDOI
TL;DR: This work compares analysis and synthesis ℓ1-norm regularization with overcomplete transforms for denoising and deconvolution and finds that for orthonormal transforms, the synthesis prior and analysis prior are equivalent; however, for overcomplete transforms the two formulations are different.
Abstract: The variational approach to signal restoration calls for the minimization of a cost function that is the sum of a data fidelity term and a regularization term, the latter term constituting a 'prior'. A synthesis prior represents the sought signal as a weighted sum of 'atoms'. On the other hand, an analysis prior models the coefficients obtained by applying the forward transform to the signal. For orthonormal transforms, the synthesis prior and analysis prior are equivalent; however, for overcomplete transforms the two formulations are different. We compare analysis and synthesis ℓ1-norm regularization with overcomplete transforms for denoising and deconvolution.
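To make the contrast concrete, the hedged sketch below writes down both formulations for a simple denoising problem y = x + noise: the synthesis prior min_w 0.5||y - Dw||^2 + λ||w||_1 solved by ISTA, and the analysis prior min_x 0.5||y - x||^2 + λ||Ax||_1 approximated by a crude alternating penalty scheme. The operators, step sizes and penalty weight are illustrative assumptions, not the paper's transforms.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def synthesis_denoise(y, D, lam, n_iters=200):
    """Synthesis prior: min_w 0.5||y - Dw||^2 + lam||w||_1 (ISTA), return x = Dw."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    w = np.zeros(D.shape[1])
    for _ in range(n_iters):
        w = soft(w + step * D.T @ (y - D @ w), step * lam)
    return D @ w

def analysis_denoise(y, A, lam, rho=1.0, n_iters=200):
    """Analysis prior: min_x 0.5||y - x||^2 + lam||Ax||_1, approximated by
    alternating a shrinkage step with a quadratic (penalty) subproblem."""
    x = y.copy()
    for _ in range(n_iters):
        z = soft(A @ x, lam / rho)                           # auxiliary variable
        x = np.linalg.solve(np.eye(len(y)) + rho * A.T @ A,
                            y + rho * A.T @ z)
    return x
```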