Journal ArticleDOI

Sparse Regularization via Convex Analysis

01 Sep 2017-IEEE Transactions on Signal Processing (IEEE)-Vol. 65, Iss: 17, pp 4481-4494
TL;DR: A class of nonconvex penalty functions is proposed that maintains the convexity of the least squares cost function to be minimized and avoids the systematic underestimation characteristic of L1 norm regularization.
Abstract: Sparse approximate solutions to linear equations are classically obtained via L1 norm regularized least squares, but this method often underestimates the true solution. As an alternative to the L1 norm, this paper proposes a class of nonconvex penalty functions that maintain the convexity of the least squares cost function to be minimized, and avoid the systematic underestimation characteristic of L1 norm regularization. The proposed penalty function is a multivariate generalization of the minimax-concave penalty. It is defined in terms of a new multivariate generalization of the Huber function, which in turn is defined via infimal convolution. The proposed sparse-regularized least squares cost function can be minimized by proximal algorithms comprising simple computations.
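In scalar form, the constructions named in the abstract are easy to make concrete. The Python sketch below illustrates the standard scalar definitions (not the paper's full multivariate GMC construction): the Huber function arises as the infimal convolution of the absolute value with a quadratic, and subtracting it from the absolute value yields the minimax-concave (MC) penalty, which is flat for large arguments and therefore does not shrink large coefficients.

    import numpy as np

    def huber(x):
        # Scalar Huber function: quadratic near 0, linear in the tails.
        return np.where(np.abs(x) <= 1, 0.5 * x**2, np.abs(x) - 0.5)

    def huber_infconv(x, v_grid):
        # Huber via its infimal-convolution definition:
        # s(x) = min_v { |v| + 0.5 * (x - v)^2 }, minimized over a grid of v.
        v = v_grid[None, :]
        return np.min(np.abs(v) + 0.5 * (np.atleast_1d(x)[:, None] - v)**2, axis=1)

    def mc_penalty(x):
        # Scalar minimax-concave penalty: |x| minus its Huber smoothing.
        # Constant for |x| >= 1, so large values incur no extra penalty.
        return np.abs(x) - huber(x)

    x = np.linspace(-3, 3, 7)
    v_grid = np.linspace(-5, 5, 2001)
    print(np.max(np.abs(huber(x) - huber_infconv(x, v_grid))))  # ~0: the definitions agree
    print(mc_penalty(x))  # saturates at 0.5 once |x| >= 1
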
Citations
Journal ArticleDOI
TL;DR: A nonconvex sparse regularization method for bearing fault diagnosis is proposed based on the generalized minimax-concave (GMC) penalty, which maintains the convexity of the sparsity-regularized least squares cost function, and thus the global minimum can be found by convex optimization algorithms.
Abstract: Vibration monitoring is one of the most effective ways for bearing fault diagnosis, and a challenge is how to accurately estimate bearing fault signals from noisy vibration signals. In this paper, a nonconvex sparse regularization method for bearing fault diagnosis is proposed based on the generalized minimax-concave (GMC) penalty, which maintains the convexity of the sparsity-regularized least squares cost function, and thus the global minimum can be found by convex optimization algorithms. Furthermore, we introduce a k-sparsity strategy for the adaptive selection of the regularization parameter. The main advantage over conventional filtering methods is that GMC can better preserve the bearing fault signal while reducing the interference of noise and other components; thus, it can significantly improve the estimation accuracy of the bearing fault signal. A simulation study and two run-to-failure experiments verify the effectiveness of GMC in the diagnosis of localized faults in rolling bearings, and the comparison studies show that GMC provides more accurate estimation results than L1-norm regularization and spectral kurtosis.
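The k-sparsity strategy is described only at a high level here, so the following Python sketch is one plausible reading of it, not the paper's exact rule: sweep the regularization parameter until the regularized estimate retains roughly k nonzero coefficients. Soft thresholding stands in for the GMC solver, and the function and parameter names are illustrative.

    import numpy as np

    def soft(x, t):
        # Soft threshold; a stand-in for the GMC-regularized solver.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def lam_for_k_sparsity(y, k, lam_grid):
        # Return the smallest lambda on an increasing grid whose solution
        # keeps at most k nonzeros -- an illustrative 'k-sparsity' rule.
        for lam in lam_grid:
            if np.count_nonzero(soft(y, lam)) <= k:
                return lam
        return lam_grid[-1]

    rng = np.random.default_rng(0)
    y = 0.1 * rng.standard_normal(256)
    y[::32] += 5.0                               # 8 fault-like impulses in noise
    lam = lam_for_k_sparsity(y, k=8, lam_grid=np.linspace(0.01, 2.0, 200))
    print(lam, np.count_nonzero(soft(y, lam)))   # lambda adapted to the target sparsity
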

175 citations


Cites background or methods from "Sparse Regularization via Convex An..."

  • ...In this section, we use the nonconvex GMC penalty [36] and analyze the convexity of the GMC-regularized cost function....

    [...]

  • ...First, we use the generalized minimax-concave (GMC) penalty [36] as a nonconvex penalty that maintains the convexity of the sparsity-regularized cost function, which we minimize using a...

    [...]

  • ...NSR that maintains the convexity of the cost function has been recently studied, to capture the advantages of both nonconvex regularization and convex optimization [35], [36]....

    [...]

Journal ArticleDOI
TL;DR: An overview of nonconvex regularization based sparse and low-rank recovery in various fields in signal processing, statistics, and machine learning, including compressive sensing, sparse regression and variable selection, sparse signals separation, sparse principal component analysis (PCA), large covariance and inverse covariance matrices estimation, matrix completion, and robust PCA is given.
Abstract: In the past decade, sparse and low-rank recovery has drawn much attention in many areas such as signal/image processing, statistics, bioinformatics, and machine learning. To induce sparsity and/or low-rankness, the $\ell _{1}$ norm and nuclear norm are among the most popular regularization penalties due to their convexity. While the $\ell _{1}$ and nuclear norm are convenient as the related convex optimization problems are usually tractable, it has been shown in many applications that a nonconvex penalty can yield significantly better performance. Recently, nonconvex regularization-based sparse and low-rank recovery has drawn considerable interest and is in fact a main driver of the recent progress in nonconvex and nonsmooth optimization. This paper gives an overview of this topic in various fields in signal processing, statistics, and machine learning, including compressive sensing, sparse regression and variable selection, sparse signal separation, sparse principal component analysis (PCA), large covariance and inverse covariance matrices estimation, matrix completion, and robust PCA. We present recent developments of nonconvex regularization based sparse and low-rank recovery in these fields, addressing the issues of penalty selection, applications and the convergence of nonconvex algorithms. Code is available at https://github.com/FWen/ncreg.git.
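The practical difference between convex and nonconvex penalties is visible already in their scalar proximal (threshold) rules, sketched below in Python using the standard scalar formulas: the $\ell _{1}$ prox (soft threshold) shrinks every surviving coefficient, while the prox of the minimax-concave penalty (the firm threshold) leaves large coefficients untouched.

    import numpy as np

    def prox_l1(x, lam):
        # Soft threshold: every nonzero output is shrunk toward 0 by lam.
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def prox_mcp(x, lam, gamma=2.0):
        # Firm threshold (prox of the minimax-concave penalty, gamma > 1):
        # zero below lam, linear ramp up to gamma*lam, identity beyond it.
        return np.where(np.abs(x) <= gamma * lam,
                        gamma / (gamma - 1.0) * prox_l1(x, lam),
                        x)

    x = np.linspace(-4, 4, 9)
    print(prox_l1(x, 1.0))   # large inputs still biased toward 0 by 1.0
    print(prox_mcp(x, 1.0))  # inputs with |x| >= 2 pass through unchanged
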

132 citations


Cites background from "Sparse Regularization via Convex An..."

  • ...In addition, a class of nonconvex penalties which can maintain the convexity of the global cost function have been designed in [209]–[211], whilst log penalties have been considered in [212], [213]....

    [...]

Journal ArticleDOI
TL;DR: It is observed that Wavelet-VBE, EMD-MAF, GAN2, GSSSA, new MP-EKF, DLSR, and AKF are most suitable for additive white Gaussian noise removal and GAN1 is the best denoising option for composite noise removal.
Abstract: An electrocardiogram (ECG) records the electrical signal from the heart to check for different heart conditions, but it is susceptible to noise. ECG signal denoising is a major pre-processing step which attenuates the noise and accentuates the typical waves in ECG signals. Researchers over time have proposed numerous methods to correctly detect morphological anomalies. This study discusses the workflow and design principles followed by these methods, and classifies the state-of-the-art methods into different categories for mutual comparison and for the development of modern ECG denoising methods. The performance of these methods is analysed on some benchmark metrics, viz., root-mean-square error, percentage-root-mean-square difference, and signal-to-noise ratio improvement, thus comparing various ECG denoising techniques on the MIT-BIH, PTB, QT, and other databases. It is observed that Wavelet-VBE, EMD-MAF, GAN2, GSSSA, new MP-EKF, DLSR, and AKF are most suitable for additive white Gaussian noise removal. For muscle artefacts removal, GAN1, new MP-EKF, DLSR, and AKF perform comparatively well. For base-line wander, and electrode motion artefacts removal, GAN1 is the best denoising option. For power-line interference removal, DLSR and EWT perform well. Finally, FCN-based DAE, DWT (Sym6) soft, MABWT (soft), CPSD sparsity, and UWT are promising ECG denoising methods for composite noise removal.
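As one concrete instance from the survey's wavelet family, a DWT (Sym6) soft-threshold denoiser can be sketched in a few lines of Python with PyWavelets; the MAD noise estimate and universal threshold below are common defaults, not necessarily the exact tuning used in the surveyed papers.

    import numpy as np
    import pywt

    def dwt_sym6_soft(x, level=5):
        # Decompose, soft-threshold the detail bands, reconstruct.
        coeffs = pywt.wavedec(x, 'sym6', level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale (MAD of finest band)
        t = sigma * np.sqrt(2.0 * np.log(len(x)))        # universal threshold
        den = [coeffs[0]] + [pywt.threshold(c, t, mode='soft') for c in coeffs[1:]]
        return pywt.waverec(den, 'sym6')[:len(x)]

    # Toy usage with a synthetic pulse train in additive white Gaussian noise.
    rng = np.random.default_rng(1)
    clean = np.zeros(1024)
    clean[::128] = 1.0
    noisy = clean + 0.1 * rng.standard_normal(1024)
    print(np.linalg.norm(noisy - clean), np.linalg.norm(dwt_sym6_soft(noisy) - clean))
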

103 citations

Journal ArticleDOI
Ning Li, Weiguo Huang, Wenjun Guo, Guanqi Gao, Zhongkui Zhu
TL;DR: A novel multiple enhanced sparse decomposition (MESD) method is proposed to address multiple feature extraction for gearbox compound fault vibration signals and the simulation and engineering signals of the gearbox validate the performance of the proposed MESD method.
Abstract: The vibration monitoring of gearboxes is an effective means of ensuring the long-term safe operation of rotating machinery. A gearbox may have more than one fault in actual applications. Therefore, gearbox compound fault diagnosis should be investigated. In this paper, a novel multiple enhanced sparse decomposition (MESD) method is proposed to address multiple feature extraction for gearbox compound fault vibration signals. Through this method, a novel MESD algorithm is utilized to simultaneously separate and extract the harmonic components and transient features of the gear and bearing from the compound fault signal. Three subdictionaries are specially constructed according to the gearbox failure mechanism to accurately extract each feature component. Meanwhile, the generalized minimax concave (GMC) penalty is used as sparse regularization to further ensure the accuracy of sparse decomposition. The simulation and engineering signals of the gearbox validate the performance of the proposed MESD method.
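The decomposition idea can be sketched compactly. The Python toy below separates a harmonic component from transient impulses using two subdictionaries (an orthonormal DCT for harmonics and the identity for spikes) and plain ISTA with an L1 penalty; the paper itself uses three subdictionaries built from the gearbox failure mechanism and the GMC penalty, so this is a simplified stand-in.

    import numpy as np
    from scipy.fft import dct, idct

    def ista_two_dicts(y, lam_h, lam_t, n_iter=300):
        # min 0.5*||y - idct(a) - b||^2 + lam_h*||a||_1 + lam_t*||b||_1
        soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
        a = np.zeros_like(y)
        b = np.zeros_like(y)
        step = 0.5                   # <= 1/||[D1 D2]||^2; each block is orthonormal
        for _ in range(n_iter):
            r = y - idct(a, norm='ortho') - b
            a = soft(a + step * dct(r, norm='ortho'), step * lam_h)
            b = soft(b + step * r, step * lam_t)
        return idct(a, norm='ortho'), b      # harmonic part, transient part

    t = np.arange(512)
    y = np.sin(2 * np.pi * t / 32)           # gear-mesh-like harmonic
    y[100] += 4.0
    y[300] += 4.0                            # bearing-like impulses
    harm, trans = ista_two_dicts(y, lam_h=0.05, lam_t=0.5)
    print(np.flatnonzero(np.abs(trans) > 1)) # impulse locations land in the transient part
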

91 citations


Cites background or methods from "Sparse Regularization via Convex An..."

  • ...Second, we use the generalized minimax concave (GMC) penalty [28] to establish the MESD cost function, which can overcome the disadvantage of the L1 norm of underestimating the high-amplitude feature components....

    [...]

  • ...(22) Because it is difficult to directly solve for the value of $B$, referring to [28], we initially set the value of $B$ as follows: $B_i = \sqrt{\gamma/\lambda_i}\, D_i$, $0 \leq \gamma \leq 1$....

    [...]

  • ...Thus, as mentioned in [28], the decomposition of the gearbox compound vibration signal can be modeled as...

    [...]

References
Book
01 Mar 2004
TL;DR: This book gives a comprehensive introduction to convex optimization; the focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Abstract: Convex optimization problems arise frequently in many different fields. A comprehensive introduction to the subject, this book shows in detail how such problems can be solved numerically with great efficiency. The focus is on recognizing convex optimization problems and then finding the most appropriate technique for solving them. The text contains many worked examples and homework exercises and will appeal to students, researchers and practitioners in fields such as engineering, computer science, mathematics, statistics, finance, and economics.

33,341 citations

Journal ArticleDOI
TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.

15,225 citations


"Sparse Regularization via Convex An..." refers background in this paper

  • ...For example, the idea may admit extension to more general convex regularizers such as total variation [48], nuclear norm [10], mixed norms [34], composite regularizers [1], [2], co-sparse regularization [40], and more generally, atomic norms [14], and partly smooth regularizers [61]....

    [...]

Journal ArticleDOI
TL;DR: Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions.
Abstract: The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), Matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, in abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
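The l1-minimization principle in the abstract translates directly into a linear program. A small-scale Python sketch (using scipy's general-purpose linprog rather than the specialized primal-dual log-barrier solver the paper develops): split the coefficients into positive and negative parts, c = u - v with u, v >= 0, and minimize their sum subject to exact reconstruction.

    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, y):
        # Basis Pursuit: min ||c||_1  subject to  A c = y,
        # posed as the LP  min 1^T (u + v)  s.t.  A(u - v) = y, u, v >= 0.
        m, n = A.shape
        res = linprog(c=np.ones(2 * n),
                      A_eq=np.hstack([A, -A]), b_eq=y,
                      bounds=[(0, None)] * (2 * n), method='highs')
        return res.x[:n] - res.x[n:]

    rng = np.random.default_rng(2)
    A = rng.standard_normal((20, 60))          # overcomplete dictionary, 20 x 60
    c_true = np.zeros(60)
    c_true[[5, 17, 42]] = [1.0, -2.0, 3.0]     # 3-sparse ground truth
    c_hat = basis_pursuit(A, A @ c_true)
    print(np.round(c_hat[[5, 17, 42]], 3))     # the sparse coefficients are recovered
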

9,950 citations


"Sparse Regularization via Convex An..." refers methods in this paper

  • ...This example illustrates the use of the GMC penalty for denoising [18]....

    [...]

Journal ArticleDOI
TL;DR: In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
Abstract: Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized likelihood approaches are proposed to handle these kinds of problems. The proposed methods select variables and estimate coefficients simultaneously. Hence they enable us to construct confidence intervals for estimated parameters. The proposed approaches are distinguished from others in that the penalty functions are symmetric, nonconcave on (0, ∞), and have singularities at the origin to produce sparse solutions. Furthermore, the penalty functions should be bounded by a constant to reduce bias and satisfy certain conditions to yield continuous solutions. A new algorithm is proposed for optimizing penalized likelihood functions. The proposed ideas are widely applicable. They are readily applied to a variety of ...
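The penalty this paper proposes (SCAD) meets the requirements listed in the abstract: singular at the origin (sparsity), bounded for large arguments (low bias), and continuous in its thresholding. Its closed-form threshold rule is short enough to state in Python; the formula and the default a = 3.7 are the standard ones from this line of work.

    import numpy as np

    def scad_threshold(z, lam, a=3.7):
        # SCAD thresholding: soft near zero, a linear ramp in between,
        # and the identity for |z| > a*lam (no bias on large coefficients).
        az = np.abs(z)
        soft = np.sign(z) * np.maximum(az - lam, 0.0)
        mid = ((a - 1.0) * z - np.sign(z) * a * lam) / (a - 2.0)
        return np.where(az <= 2.0 * lam, soft,
                        np.where(az <= a * lam, mid, z))

    z = np.linspace(-6, 6, 13)
    print(scad_threshold(z, 1.0))  # small inputs zeroed, |z| > 3.7 returned unchanged
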

8,314 citations


Additional excerpts

  • ..., [11], [13], [15], [16], [19], [25], [30], [31], [38], [39], [43], [47], [58], [64], [66]....

    [...]

Journal ArticleDOI
TL;DR: In this article, a new approach toward a theory of robust estimation is presented, which treats in detail the asymptotic theory of estimating a location parameter for contaminated normal distributions, and exhibits estimators that are asymptotically most robust (in a sense to be specified) among all translation invariant estimators.
Abstract: This paper contains a new approach toward a theory of robust estimation; it treats in detail the asymptotic theory of estimating a location parameter for contaminated normal distributions, and exhibits estimators—intermediaries between sample mean and sample median—that are asymptotically most robust (in a sense to be specified) among all translation invariant estimators. For the general background, see Tukey (1960) (p. 448 ff.)
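Huber's estimator interpolates between the sample mean and the sample median by applying a quadratic loss to small residuals and an absolute loss to large ones. Below is a Python sketch of the location estimate via iteratively reweighted averaging; this particular solver and the tuning constant k = 1.345 are later standard practice, not from the 1964 paper, which analyzes the estimator itself.

    import numpy as np

    def huber_location(x, k=1.345, n_iter=50):
        # Huber M-estimate of location by iteratively reweighted averaging.
        mu = np.median(x)                            # robust starting point
        scale = np.median(np.abs(x - mu)) / 0.6745   # MAD scale estimate
        for _ in range(n_iter):
            r = (x - mu) / scale
            w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))  # Huber weights
            mu = np.sum(w * x) / np.sum(w)
        return mu

    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(0.0, 1.0, 95),
                        rng.normal(8.0, 1.0, 5)])    # 5% gross outliers
    print(np.mean(x), np.median(x), huber_location(x))  # Huber resists the contamination
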

5,628 citations