Open Access · Journal Article · DOI

Variable selection in linear mixed effects models.

TL;DR
It is proved that, with the proxy matrix appropriately chosen, the proposed procedure identifies all true random effects with asymptotic probability one, even when the dimension of the random-effects vector is allowed to increase exponentially with the sample size.
Abstract
This paper is concerned with the selection and estimation of fixed and random effects in linear mixed effects models. We propose a class of nonconcave penalized profile likelihood methods for selecting and estimating important fixed effects. To overcome the difficulty posed by the unknown covariance matrix of the random effects, we propose to use a proxy matrix in the penalized profile likelihood. We establish conditions on the choice of the proxy matrix and show that the proposed procedure enjoys model selection consistency even when the number of fixed effects is allowed to grow exponentially with the sample size. We further propose a group variable selection strategy to simultaneously select and estimate important random effects, where the unknown covariance matrix of the random effects is again replaced with a proxy matrix. We prove that, with the proxy matrix appropriately chosen, the proposed procedure can identify all true random effects with asymptotic probability one, even when the dimension of the random-effects vector is allowed to increase exponentially with the sample size. Monte Carlo simulation studies are conducted to examine the finite-sample performance of the proposed procedures, and we further illustrate them with a real data example.
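As a rough illustration of the fixed-effect selection idea, the sketch below replaces the paper's nonconcave (e.g., SCAD) penalty with a plain lasso penalty and uses the identity matrix as the proxy for the unknown random-effect covariance; the simulated design, dimensions, and tuning value are all illustrative assumptions, not the authors' setup.

import numpy as np
from scipy.linalg import sqrtm
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, q = 200, 10, 3                       # observations, fixed effects, random effects
X = rng.normal(size=(n, p))                # fixed-effect design
Z = rng.normal(size=(n, q))                # random-effect design
beta_true = np.r_[3.0, 1.5, 0.0, 2.0, np.zeros(p - 4)]   # sparse truth
y = X @ beta_true + Z @ rng.normal(size=q) + rng.normal(size=n)

V = Z @ Z.T + np.eye(n)                    # proxy: identity in place of the unknown covariance
W = np.real(sqrtm(np.linalg.inv(V)))       # whitening transform V^(-1/2)

# Whiten the data, then solve the penalized (lasso) objective.
fit = Lasso(alpha=0.1, fit_intercept=False).fit(W @ X, W @ y)
print("selected fixed effects:", np.flatnonzero(np.abs(fit.coef_) > 1e-6))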


Citations
Proceedings Article

Spatial as deep: Spatial CNN for traffic scene understanding

TL;DR: This paper proposes Spatial CNN (SCNN), which generalizes traditional deep layer-by-layer convolutions to slice-by-slice convolutions within feature maps, thus enabling message passing between pixels across rows and columns within a layer.
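For intuition, here is a bare-bones sketch (my simplification, not the authors' released code) of one downward slice-by-slice pass: a 1-D convolution, shared across rows, carries a message from each row of the feature map to the next.

import torch
import torch.nn as nn

class DownwardPass(nn.Module):
    """One SCNN-style pass: each row receives a convolved message from the row above."""
    def __init__(self, channels, kernel=9):
        super().__init__()
        # 1-D convolution along the width dimension, shared across all rows
        self.conv = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)

    def forward(self, x):                     # x: (batch, C, H, W)
        rows = list(x.unbind(dim=2))          # H slices, each (batch, C, W)
        for i in range(1, len(rows)):
            rows[i] = rows[i] + torch.relu(self.conv(rows[i - 1]))
        return torch.stack(rows, dim=2)

feat = torch.randn(2, 8, 16, 32)              # toy feature map
print(DownwardPass(8)(feat).shape)            # torch.Size([2, 8, 16, 32])

The full SCNN applies analogous passes in the remaining three directions (upward, rightward, leftward).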
Journal Article · DOI

Model Selection in Linear Mixed Models

TL;DR: A large body of literature on linear mixed model selection methods is reviewed, organized around four major approaches: information criteria such as AIC or BIC, shrinkage methods based on penalized loss functions such as the LASSO, the fence procedure, and Bayesian techniques.
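As a reminder of how the information-criterion route works in practice, here is a toy comparison (my own example, not from the review, and using ordinary least squares for brevity; the same criteria apply to mixed models via their likelihoods). Lower AIC/BIC values indicate the preferred model.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x1, x2 = rng.normal(size=(2, 100))
y = 2.0 * x1 + rng.normal(size=100)                    # x2 is pure noise

small = sm.OLS(y, sm.add_constant(np.c_[x1])).fit()    # y ~ x1
big = sm.OLS(y, sm.add_constant(np.c_[x1, x2])).fit()  # y ~ x1 + x2
for name, m in [("small", small), ("big", big)]:
    print(name, "AIC:", round(m.aic, 1), "BIC:", round(m.bic, 1))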
Journal Article · DOI

Automated mixed ANOVA modeling of sensory and consumer data

TL;DR: An approach for automated mixed ANOVA/ANCOVA modeling is introduced, together with the authors' open-source R package lmerTest, which can perform automated complex mixed-effects modeling.
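lmerTest itself is an R package; as a loose Python analogue (an assumption-laden stand-in, not the package), a sensory-style mixed model with a fixed product effect and a random assessor intercept could be fit with statsmodels, after which competing models would be compared to decide which terms to keep.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_assessors, n_products, reps = 10, 4, 3
df = pd.DataFrame({
    "assessor": np.repeat(np.arange(n_assessors), n_products * reps),
    "product": np.tile(np.repeat(np.arange(n_products), reps), n_assessors),
})
df["liking"] = (0.5 * df["product"]                                       # fixed product effect
                + rng.normal(scale=0.5, size=n_assessors)[df["assessor"]] # assessor effect
                + rng.normal(scale=0.7, size=len(df)))                    # residual noise

# Fixed product effect, random assessor intercept
fit = smf.mixedlm("liking ~ C(product)", df, groups=df["assessor"]).fit()
print(fit.summary())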
Journal Article · DOI

A Penalized Likelihood Method for Structural Equation Modeling

TL;DR: A penalized likelihood (PL) method for structural equation modeling (SEM) is proposed as a methodology for exploring the underlying relations among both observed and latent variables, and an expectation-conditional-maximization algorithm is developed to maximize the PL criterion.
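To make the penalized-likelihood idea concrete, the toy sketch below applies an L1 penalty to a one-factor model (a far simpler cousin of full SEM) and optimizes it with a generic derivative-free optimizer rather than the paper's expectation-conditional-maximization algorithm; every setting here is an illustrative assumption.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
p, n, rho = 6, 500, 0.05
lam_true = np.array([1.0, 0.8, 0.9, 0.0, 0.0, 0.0])    # sparse loadings
data = rng.normal(size=(n, 1)) * lam_true + rng.normal(scale=0.5, size=(n, p))
S = np.cov(data, rowvar=False)                          # sample covariance

def objective(theta):
    lam, log_psi = theta[:p], theta[p:]
    Sigma = np.outer(lam, lam) + np.diag(np.exp(log_psi))
    _, logdet = np.linalg.slogdet(Sigma)
    nll = 0.5 * (logdet + np.trace(S @ np.linalg.inv(Sigma)))
    return nll + rho * np.abs(lam).sum()                # L1-penalized negative log-likelihood

res = minimize(objective, np.r_[np.ones(p), np.zeros(p)],
               method="Nelder-Mead", options={"maxiter": 20000, "fatol": 1e-9})
print("estimated loadings:", np.round(res.x[:p], 2))    # spurious loadings shrink toward zero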
Journal Article · DOI

Hierarchical vector auto-regressive models and their applications to multi-subject effective connectivity

TL;DR: A Bayesian hierarchical framework is proposed for the VAR model that accounts for both temporal correlation within a subject and between-subject variation; it is applied to investigate differences in effective connectivity during a hand-grasp experiment between healthy controls and patients with residual motor deficits following a stroke.
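As a minimal point of reference (ordinary least squares on a single synthetic subject, not the paper's Bayesian hierarchy), a VAR(1) connectivity matrix can be estimated as below; a hierarchical version would then shrink each subject's estimate toward a group-level mean.

import numpy as np

rng = np.random.default_rng(4)
T, d = 300, 3
A_true = np.array([[0.5, 0.2, 0.0],
                   [0.0, 0.4, 0.1],
                   [0.0, 0.0, 0.3]])                    # true connectivity
x = np.zeros((T, d))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.normal(scale=0.3, size=d)

# Regress each time point on the previous one: x_t ~ A x_{t-1}
A_hat = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T
print(np.round(A_hat, 2))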