Open Access · Posted Content

Learning to detect an oddball target

TLDR
In this paper, a generalised likelihood ratio based sequential policy is proposed to detect an odd process among a group of Poisson point processes, all having the same rate except the odd process.
Abstract
We consider the problem of detecting an odd process among a group of Poisson point processes, all having the same rate except the odd process. The actual rates of the odd and non-odd processes are unknown to the decision maker. We consider a time-slotted sequential detection scenario where, at the beginning of each slot, the decision maker can choose which process to observe during that time slot. We are interested in policies that satisfy a given constraint on the probability of false detection. We propose a generalised likelihood ratio based sequential policy which, via suitable thresholding, can be made to satisfy the given constraint on the probability of false detection. Further, we show that the proposed policy is asymptotically optimal in terms of the conditional expected stopping time among all policies that satisfy the constraint on the probability of false detection, as the probability of false detection is driven to zero. We apply our results to a visual search experiment studied recently by neuroscientists. Our model suggests a neuronal dissimilarity index for the visual search task. The neuronal dissimilarity index, when applied to visual search data from that experiment, correlates strongly with the behavioural data; however, it performs worse than some previously proposed neuronal dissimilarity indices. We explain why this may be attributed to the experimental conditions.
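For illustration, the following is a minimal sketch in Python (with NumPy) of a GLR-style sequential test in this Poisson oddball setting: for each candidate odd process, plug Poisson rate MLEs into the likelihood, and stop once the leading hypothesis beats every alternative by a margin tied to the false-detection constraint. It is not the paper's exact policy; the round-robin sampling rule, the threshold log((K-1)/alpha), and the helper names (max_loglik, glr_policy) are simplifying assumptions made for this sketch.

import numpy as np

rng = np.random.default_rng(0)

def max_loglik(counts, odd_arm):
    # Profile log-likelihood of all observed counts under the hypothesis
    # that `odd_arm` is the odd process, with Poisson rate MLEs plugged in.
    # Log-factorial terms are dropped since they cancel across hypotheses.
    def ll(x, lam):
        lam = max(lam, 1e-12)                  # guard against log(0)
        return x.sum() * np.log(lam) - len(x) * lam
    odd = counts[odd_arm]
    rest = np.concatenate([c for j, c in enumerate(counts) if j != odd_arm])
    return ll(odd, odd.mean()) + ll(rest, rest.mean())

def glr_policy(true_rates, alpha=1e-3, max_slots=100_000):
    # Sketch of a GLR-based sequential policy: sample the processes
    # (round-robin here, NOT the paper's sampling rule) and stop when the
    # best oddball hypothesis beats every alternative by the threshold.
    K = len(true_rates)
    threshold = np.log((K - 1) / alpha)        # assumed threshold choice
    counts = [np.empty(0) for _ in range(K)]
    for t in range(max_slots):
        arm = t % K                            # placeholder sampling rule
        counts[arm] = np.append(counts[arm], rng.poisson(true_rates[arm]))
        if t < 2 * K:                          # gather a few samples per arm
            continue
        scores = [max_loglik(counts, i) for i in range(K)]
        best = int(np.argmax(scores))
        margin = scores[best] - max(s for i, s in enumerate(scores) if i != best)
        if margin >= threshold:
            return best, t + 1                 # declared odd process, stopping time
    return None, max_slots

# Example: process 2 is odd (rate 3.0 against a common rate of 1.0).
print(glr_policy([1.0, 1.0, 3.0, 1.0, 1.0]))

Driving alpha to zero tightens the false-detection constraint and raises the threshold, which lengthens the expected stopping time; the paper shows that its policy attains the best possible asymptotic trade-off in this limit.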


Citations
Proceedings Article

Optimal Best Arm Identification with Fixed Confidence

TL;DR: A new, tight lower bound on the sample complexity of best-arm identification in one-parameter bandit problems is proved, and the `Track-and-Stop' strategy is proposed and shown to be asymptotically optimal.
Posted Content

Optimal Best Arm Identification with Fixed Confidence

TL;DR: In this paper, the authors give a complete characterization of the complexity of best-arm identification in one-parameter bandit problems and prove a tight lower bound on the sample complexity.
Journal Article

Active Anomaly Detection in Heterogeneous Processes

TL;DR: A low-complexity deterministic test is shown to enjoy the same asymptotic optimality while offering significantly better performance in the finite regime and faster convergence to the optimal rate function, especially when the number of processes is large.
Journal Article

Searching for Anomalies Over Composite Hypotheses

TL;DR: A deterministic search algorithm is developed that minimizes the expected detection time subject to an error probability constraint and is consistent, achieving an error probability that decays to zero with the detection delay.
Journal Article

Learning the distribution with largest mean: two bandit frameworks

TL;DR: In this paper, the authors present asymptotically optimal algorithms for regret minimization and best arm identification in the multi-armed bandit model, noting that the two objectives call for different sampling rules.
References
Book

A Course in Probability Theory

Kai Lai Chung
TL;DR: This edition of A Course in Probability Theory adds an introduction to measure theory, making the treatment more consistent with current courses.
Journal Article

A General Class of Coefficients of Divergence of One Distribution from Another

TL;DR: A general class of coefficients of divergence of one distribution from another is constructed, and various available measures of divergence, distance, discriminatory information, etc., are shown to be members of this class.
Journal Article

On the complexity of best-arm identification in multi-armed bandit models

TL;DR: This work introduces generic notions of complexity for the two dominant frameworks considered in the literature: fixed-budget and fixed-confidence settings, and provides the first known distribution-dependent lower bound on the complexity that involves information-theoretic quantities and holds when m ≥ 1 under general assumptions.