Steve Hanneke

Researcher at Toyota Technological Institute at Chicago

Publications: 105
Citations: 3153

Steve Hanneke is an academic researcher at the Toyota Technological Institute at Chicago. He has contributed to research on computer science and active learning (machine learning). He has an h-index of 21 and has co-authored 84 publications receiving 2652 citations. His previous affiliations include Princeton University and Carnegie Mellon University.

Papers
Journal Article

Discrete temporal models of social networks

TL;DR: This paper proposes a family of statistical models for social network evolution over time, which extends Exponential Random Graph Models (ERGMs), and gives examples of their use for hypothesis testing and classification.
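A minimal sketch of the model family described above, in assumed notation rather than the paper's own: X^t is the network at time step t, u is a vector of temporal sufficient statistics (e.g., edge stability or reciprocity), and Z is the normalizing constant. Each network depends on its predecessor through a standard ERGM form:

P(X^t \mid X^{t-1}) = \frac{\exp\left(\theta^{\top} u(X^t, X^{t-1})\right)}{Z(\theta, X^{t-1})}

Because the statistics u may depend on the previous network as well as the current one, the sequence of networks forms a Markov chain over graphs, and temporal dependencies are captured without leaving the exponential-family framework.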
Proceedings Article

A bound on the label complexity of agnostic active learning

TL;DR: General bounds on the number of label requests made by the A2 algorithm proposed by Balcan, Beygelzimer & Langford are derived; these are the first nontrivial general-purpose upper bounds on label complexity in the agnostic PAC model.
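This is not the A2 algorithm itself, but a minimal runnable sketch of the disagreement-based querying idea it refines, in a toy realizable setting; the 1-D threshold class, the threshold value, and all names here are illustrative assumptions:

import numpy as np

# Disagreement-based active learning for 1-D thresholds h_t(x) = sign(x - t).
# Maintain the version space of thresholds consistent with observed labels,
# and request a label only when a point falls in the region where
# consistent classifiers still disagree.
rng = np.random.default_rng(0)
true_threshold = 0.37                 # hypothetical target (unknown to learner)
X = rng.uniform(0.0, 1.0, size=200)   # unlabeled pool, random order

lo, hi = 0.0, 1.0                     # version space: thresholds in (lo, hi]
labels_requested = 0

for x in X:
    if lo < x < hi:                   # x lies in the disagreement region
        labels_requested += 1
        y = 1 if x >= true_threshold else -1   # label oracle
        if y == 1:
            hi = min(hi, x)           # consistent thresholds satisfy t <= x
        else:
            lo = max(lo, x)           # consistent thresholds satisfy t > x
    # points outside (lo, hi) get inferred labels; no query is spent

print(f"requested {labels_requested} labels out of {len(X)} points; "
      f"threshold localized to ({lo:.3f}, {hi:.3f}]")

In this realizable case the number of queries grows roughly logarithmically in the pool size; A2's contribution is making this style of argument work in the agnostic setting, where no classifier is perfectly consistent with the labels.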
Book

Theory of Disagreement-Based Active Learning

Steve Hanneke
TL;DR: Recent advances in the theoretical understanding of the benefits of active learning are described, along with their implications for the design of effective active learning algorithms.
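Much of this theory is phrased in terms of the disagreement coefficient; a standard form of the definition, in assumed notation (P the marginal distribution over instances, f* the optimal classifier, B(f*, r) the set of classifiers within disagreement-distance r of f*):

\theta(\varepsilon) = \sup_{r > \varepsilon} \frac{P\big(\mathrm{DIS}(B(f^*, r))\big)}{r},
\qquad
\mathrm{DIS}(H) = \{\, x : \exists\, h, h' \in H,\ h(x) \neq h'(x) \,\}

Label complexity bounds for disagreement-based methods typically scale with θ: when θ is bounded, the disagreement region, and with it the number of informative label requests, shrinks as the version space converges toward f*.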
Book Chapter

Discrete temporal models of social networks

TL;DR: A family of statistical models for social network evolution over time is proposed, which represents an extension of Exponential Random Graph Models (ERGMs), and examples of their use for hypothesis testing and classification are given.
Journal Article

The true sample complexity of active learning

TL;DR: It is proved that it is always possible to learn an ε-good classifier with a number of label requests asymptotically smaller than the sample complexity of passive learning; this contrasts with the traditional analysis of active learning, in which Ω(1/ε) lower bounds are common for problems such as non-homogeneous linear separators or depth-limited decision trees.
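Stated loosely in assumed notation, with Λ(ε) the number of label requests needed to reach error at most ε for a fixed target function and distribution, the headline result is

\Lambda(\varepsilon) = o(1/\varepsilon) \quad \text{as } \varepsilon \to 0,

whereas passive learning generally needs Ω(1/ε) labeled examples. The apparent conflict with the classical Ω(1/ε) active learning lower bounds is resolved by the paper's distinction between this "true" label complexity and the stronger verifiable notion, in which the learner must also certify that the target error has been reached.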