Journal Article

Note on the sampling error of the difference between correlated proportions or percentages.

Quinn McNemar
01 Jun 1947
Vol. 12, Iss. 2, pp. 153-157
TLDR
Two formulas are presented for judging the significance of the difference between correlated proportions, and the chi square equivalent of one of the developed formulas is pointed out.
Abstract
Two formulas are presented for judging the significance of the difference between correlated proportions. The chi square equivalent of one of the developed formulas is pointed out.
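As a rough illustration of the chi square equivalent mentioned in the abstract, the sketch below computes the statistic from the two discordant cell counts of a paired 2x2 table. The names b and c, the example counts, and the use of scipy are assumptions made for illustration, not code or data from the paper.

# Minimal sketch (not from the paper) of the chi-square form of McNemar's test,
# assuming b and c are the counts of discordant pairs in a 2x2 table of paired
# binary outcomes (positive under one condition, negative under the other).
from scipy.stats import chi2

def mcnemar_chi2(b, c):
    """Chi-square statistic (1 df) for the difference between correlated proportions."""
    stat = (b - c) ** 2 / (b + c)
    p_value = chi2.sf(stat, df=1)
    return stat, p_value

# Hypothetical example: 15 pairs changed in one direction, 5 in the other.
stat, p = mcnemar_chi2(15, 5)
print(f"chi-square = {stat:.2f}, p = {p:.4f}")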


Citations
Journal Article

Evaluating the added predictive ability of a new marker: From area under the ROC curve to reclassification and beyond

TL;DR: Two new measures, one based on integrated sensitivity and specificity and the other on reclassification tables, are introduced; they offer incremental information over the AUC and are proposed to be considered alongside the AUC when assessing the performance of newer biomarkers.
Journal Article

Multiple imputation: a primer

TL;DR: Essential features of multiple imputation are reviewed, with answers to frequently asked questions about using the method in practice.
Journal Article

An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers.

TL;DR: A subset of observers who demonstrate a high level of interobserver agreement can be identified by using pairwise agreement statistics between each observer and the internal majority standard opinion on each subject.
Journal Article

A survey on concept drift adaptation

TL;DR: The survey covers the different facets of concept drift in an integrated way to reflect on the existing scattered state of the art, and aims to provide a comprehensive introduction to concept drift adaptation for researchers, industry analysts, and practitioners.
Journal Article

In Defense of One-Vs-All Classification

TL;DR: It is argued that a simple "one-vs-all" scheme is as accurate as any other approach, assuming that the underlying binary classifiers are well-tuned regularized classifiers such as support vector machines.
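For context on the one-vs-all scheme that TL;DR describes, here is a minimal sketch using scikit-learn's OneVsRestClassifier with a regularized linear SVM on the library's bundled iris data. The dataset, parameter values, and library choice are assumptions for illustration, not the paper's own experimental setup.

# Minimal sketch of one-vs-all multiclass classification with regularized
# binary classifiers, illustrating the scheme discussed above.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One binary SVM is fit per class; C controls the regularization strength.
clf = OneVsRestClassifier(LinearSVC(C=1.0, max_iter=10000))
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))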