Journal Article

Improving predictive inference under covariate shift by weighting the log-likelihood function

Hidetoshi Shimodaira
- 01 Oct 2000
- Vol. 90, Iss. 2, pp. 227-244
TLDR
A class of predictive densities is derived by weighting the observed samples when maximizing the log-likelihood function. The approach is effective in settings such as sample surveys or designed experiments, where the observed covariates follow a different distribution than in the whole population.
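To make the weighting idea concrete, here is a minimal Python sketch (my own illustration, not code from the paper): training covariates are drawn from N(0, 1) while the target population follows N(1, 1), the importance weights q(x)/p(x) are assumed known, and weighted least squares stands in for the weighted maximum-likelihood fit of a misspecified linear model under Gaussian noise.

```python
# Weighted log-likelihood under covariate shift: each training sample gets the
# weight w(x) = q_test(x) / p_train(x), the ratio of the target covariate
# density to the density that generated the observed sample.
import numpy as np

rng = np.random.default_rng(0)

# Training covariates follow p_train = N(0, 1); the target population has
# q_test = N(1, 1).  The true relation is nonlinear, so a straight line is
# misspecified and the best linear fit depends on the covariate distribution.
x_train = rng.normal(0.0, 1.0, size=500)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=500)

def normal_pdf(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Importance weights w(x) = q(x) / p(x); assumed known here, estimated in practice.
w = normal_pdf(x_train, 1.0, 1.0) / normal_pdf(x_train, 0.0, 1.0)

# Weighted least squares = weighted maximum likelihood under Gaussian noise.
X = np.column_stack([np.ones_like(x_train), x_train])
W = np.diag(w)
beta_weighted = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
beta_unweighted = np.linalg.solve(X.T @ X, X.T @ y_train)

print("unweighted fit:", beta_unweighted)
print("weighted fit:  ", beta_weighted)
```

The weighted fit typically has higher variance but is tuned to the target covariate distribution, which is the trade-off the weighting is meant to address.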
About
This article was published in the Journal of Statistical Planning and Inference on 2000-10-01 and has received 1,767 citations to date. It focuses on the topics: Covariate and Likelihood function.


Citations
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Journal Article

A Survey on Transfer Learning

TL;DR: The relationship between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias, and covariate shift, is discussed.
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Batch Normalization, as described in this paper, normalizes layer inputs over each training mini-batch to reduce internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet.
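As a rough illustration of what "normalizes layer inputs for each training mini-batch" means, the numpy-only sketch below standardizes each feature over the batch and then applies a learned scale (gamma) and shift (beta). The function name batch_norm_forward is mine, and the running statistics used at inference time are omitted.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """x: (batch, features) activations of one layer for a mini-batch."""
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta            # learned scale/shift restore capacity

x = np.random.randn(64, 8) * 3.0 + 5.0     # a mini-batch of 64 samples, 8 features
out = batch_norm_forward(x, gamma=np.ones(8), beta=np.zeros(8))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))
```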
Posted Content

Unsupervised Domain Adaptation by Backpropagation

TL;DR: In this paper, a gradient reversal layer is proposed to promote the emergence of deep features that are discriminative for the main learning task on the source domain and invariant with respect to the shift between the domains.
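The gradient reversal idea can be sketched in a few lines of PyTorch (an illustration under my own naming, not the authors' released code): the layer acts as the identity in the forward pass and multiplies the incoming gradient by -lambda in the backward pass, so the feature extractor is pushed to produce features the domain classifier cannot separate.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)              # identity in the forward direction

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back to the feature extractor.
        return -ctx.lambd * grad_output, None

features = torch.randn(4, 16, requires_grad=True)
reversed_features = GradReverse.apply(features, 1.0)
reversed_features.sum().backward()
print(features.grad[0, :4])              # gradients are -1 instead of +1
```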
Journal Article

A survey of transfer learning

TL;DR: This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications of transfer learning, including how it can be applied to big data environments.
References
Journal Article

A new look at the statistical model identification

TL;DR: In this article, a new estimate, the minimum information theoretic criterion (AIC) estimate (MAICE), is introduced for the purpose of statistical identification; it is free from the ambiguities inherent in the application of the conventional hypothesis-testing procedure.
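As a small worked example of the minimum-AIC rule (entirely illustrative, not taken from the article): AIC = -2 * (maximized log-likelihood) + 2 * (number of estimated parameters), and MAICE selects the candidate model with the smallest AIC. The hypothetical sketch below fits polynomials of increasing degree to data generated from a linear model; the degree-1 fit should attain the minimum AIC.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=200)   # true model is linear

def gaussian_aic(y, y_hat, k):
    """AIC for a Gaussian regression with k estimated parameters (incl. sigma)."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    # Maximized Gaussian log-likelihood with fitted variance rss / n.
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return -2 * loglik + 2 * k

for degree in (0, 1, 2, 3):
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    print(degree, round(gaussian_aic(y, y_hat, k=degree + 2), 1))
```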
Journal Article

Maximum likelihood estimation of misspecified models

Halbert White
- 01 Jan 1982
TL;DR: In this article, the consequences and detection of model misspecification when using maximum likelihood techniques for estimation and inference are examined, and the properties of the quasi-maximum likelihood estimator and the information matrix are exploited to yield several useful tests.
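The tests mentioned in the TL;DR rest on the fact that, under misspecification, the Hessian-based and score-based information matrices no longer coincide, and the asymptotic covariance of the quasi-maximum likelihood estimator takes the sandwich form A^{-1} B A^{-1}. The sketch below (my own example, with a deliberately heteroskedastic data-generating process) compares the naive and sandwich standard errors for a linear regression.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)
# Heteroskedastic noise makes the homoskedastic Gaussian model misspecified.
y = 0.5 + 1.5 * x + rng.normal(scale=1.0 + np.abs(x), size=n)

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat

A = X.T @ X / n                                        # minus the mean Hessian (up to sigma^2)
B = (X * resid[:, None]).T @ (X * resid[:, None]) / n  # mean outer product of scores
sandwich = np.linalg.inv(A) @ B @ np.linalg.inv(A) / n
naive = np.linalg.inv(X.T @ X) * resid.var()           # assumes correct specification

print("naive SEs:   ", np.sqrt(np.diag(naive)))
print("sandwich SEs:", np.sqrt(np.diag(sandwich)))
```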
Book

Robust statistics: the approach based on influence functions

TL;DR: This book develops robust statistics via the influence-function approach, covering one-dimensional and multidimensional estimators of location and of covariance matrices.