
Regularized Learning of High-dimensional Sparse Graphical Models

Yiyu Xue, +1 more
TLDR
In this thesis, the author proposes regularized methods for learning high-dimensional sparse graphical models: a non-concave penalized composite likelihood estimator for the sparse Ising model, whose oracle properties are established under NP-dimensionality, and a unified regularized rank estimation scheme for the nonparanormal graphical model that does not require estimating the unknown transformation functions.
Abstract
High-dimensional graphical models are important tools for characterizing complex interactions within a large-scale system. In this thesis, our emphasis is on utilizing the increasingly popular regularization technique to learn sparse graphical models, and our focus is on two types of graphs: the Ising model for binary data and the nonparanormal graphical model for continuous data. In the first part, we propose an efficient procedure for learning a sparse Ising model based on a non-concave penalized composite likelihood, which extends the methodology and theory of non-concave penalized likelihood. An efficient solution path algorithm is devised by using a novel coordinate-minorization-ascent algorithm. Asymptotic oracle properties of our proposed estimator are established with NP-dimensionality. We demonstrate its finite sample performance via simulation studies and real applications to study the Human Immunodeficiency Virus type 1 protease structure. In the second part, we study the nonparanormal graphical model, which is much more robust than the Gaussian graphical model while retaining the latter's good interpretability. We show that the nonparanormal graphical model can be efficiently estimated by using a unified regularized rank estimation scheme that does not require estimating the unknown transformation functions in the nonparanormal graphical model. In particular, we study the rank-based Graphical LASSO, the rank-based Dantzig selector, and the rank-based CLIME. We establish their theoretical properties in the setting where the dimension is nearly exponentially large relative to the sample size. The proposed rank-based estimators are shown to work as well as their oracle counterparts on both simulated and real data.
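To make the rank-based plug-in idea above concrete, the following is a minimal sketch, assuming scipy and scikit-learn are available: estimate pairwise Kendall's tau, map it to a latent correlation via the sine transform, and hand the result to the graphical lasso. The function name rank_based_glasso, the regularization level alpha, and the eigenvalue clipping are illustrative assumptions, not the thesis's exact procedure.

    import numpy as np
    from scipy.stats import kendalltau
    from sklearn.covariance import graphical_lasso

    def rank_based_glasso(X, alpha=0.1):
        # Illustrative rank-based Graphical LASSO: no estimation of the
        # unknown nonparanormal transformation functions is needed.
        n, d = X.shape
        S = np.eye(d)
        for j in range(d):
            for k in range(j + 1, d):
                tau, _ = kendalltau(X[:, j], X[:, k])
                # Under the nonparanormal model, sin(pi/2 * tau)
                # estimates the latent Pearson correlation.
                S[j, k] = S[k, j] = np.sin(0.5 * np.pi * tau)
        # Clip eigenvalues so the plug-in matrix is positive definite.
        w, V = np.linalg.eigh(S)
        S = (V * np.clip(w, 1e-6, None)) @ V.T
        _, precision = graphical_lasso(S, alpha=alpha)
        return precision  # nonzero off-diagonals indicate estimated edges

Given a data matrix X of shape (n, d), the sparsity pattern of the returned precision matrix is read off as the estimated graph.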


References
Journal Article

Regression Shrinkage and Selection via the Lasso

TL;DR: A new method for estimation in linear models, the lasso, is proposed; it minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
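In standard notation, with response y, design matrix X, and tuning constant t >= 0, the lasso estimate solves:

    \hat{\beta}^{\text{lasso}} = \arg\min_{\beta} \|y - X\beta\|_2^2
    \quad \text{subject to} \quad \sum_{j=1}^{p} |\beta_j| \le t

The L1 constraint shrinks coefficients and sets some exactly to zero, which is what makes the lasso a variable selection method.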
Book

Convex Optimization

TL;DR: This book gives a comprehensive introduction to convex optimization, with emphasis on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Journal Article

Regularization Paths for Generalized Linear Models via Coordinate Descent

TL;DR: In comparative timings, the new coordinate descent algorithms are considerably faster than competing methods, handle large problems, and deal efficiently with sparse features.
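The engine behind such solvers is a cyclic soft-thresholding update. Below is a minimal coordinate-descent sketch for the lasso case, assuming each column of X is standardized to mean 0 and variance 1; it illustrates the update only and is not the glmnet implementation.

    import numpy as np

    def soft_threshold(z, gamma):
        # Closed-form solution of the one-dimensional lasso problem.
        return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

    def lasso_cd(X, y, lam, n_sweeps=100):
        # Cyclic coordinate descent; standardization of X makes each
        # coordinate update exact (no inner line search needed).
        n, p = X.shape
        beta = np.zeros(p)
        r = y - X @ beta  # current residual
        for _ in range(n_sweeps):
            for j in range(p):
                r = r + X[:, j] * beta[j]  # remove coordinate j from the fit
                beta[j] = soft_threshold(X[:, j] @ r / n, lam)
                r = r - X[:, j] * beta[j]  # restore with the updated value
        return beta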
Journal Article

Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties

TL;DR: In this article, penalized likelihood approaches are proposed to handle variable selection problems, and it is shown that the newly proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known.
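The SCAD penalty that yields this oracle behavior is usually defined through its derivative; in the standard form, for theta > 0 and a > 2 (Fan and Li suggest a = 3.7):

    p'_\lambda(\theta) = \lambda \left\{ I(\theta \le \lambda)
        + \frac{(a\lambda - \theta)_+}{(a - 1)\lambda} I(\theta > \lambda) \right\}

Unlike the lasso's constant-rate shrinkage, this penalty tapers off for large coefficients, which removes the bias on strong signals.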
Journal Article

Model selection and estimation in regression with grouped variables

TL;DR: In this paper, instead of selecting factors by stepwise backward elimination, the authors focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection.
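With predictors partitioned into J groups, where group j contributes a block X_j of p_j columns with coefficients beta_j, the group lasso criterion in its standard form is:

    \min_{\beta} \Big\| y - \sum_{j=1}^{J} X_j \beta_j \Big\|_2^2
        + \lambda \sum_{j=1}^{J} \sqrt{p_j}\, \|\beta_j\|_2

The unsquared group norms act like an L1 penalty at the group level, so entire factors enter or leave the model together.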