Journal ArticleDOI
Pattern Recognition and Machine Learning
TLDR
This book covers a broad range of topics for regular factorial designs, presents all of the material in a very mathematical fashion, and will surely become an invaluable resource for researchers and graduate students doing research in the design of factorial experiments.
Abstract:
(2007). Pattern Recognition and Machine Learning. Technometrics: Vol. 49, No. 3, pp. 366-366.
Citations
Book
An Introduction to Neural Information Retrieval
Bhaskar Mitra, Nick Craswell, et al.
TL;DR: The monograph provides a complete picture of neural information retrieval techniques, culminating in supervised neural learning-to-rank models, including deep neural network architectures trained end-to-end for ranking tasks.
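To make the learning-to-rank idea concrete, here is a minimal pure-Python sketch of pairwise training with a logistic (RankNet-style) loss. A linear scorer stands in for the deep neural rankers the monograph surveys, and the two-feature document pairs are invented for illustration.

```python
import math

def train_pairwise_ranker(pairs, dim, lr=0.1, epochs=50):
    # Pairwise logistic loss: for each (relevant, non-relevant) pair,
    # push the linear score w.x of the relevant item above the other.
    w = [0.0] * dim
    for _ in range(epochs):
        for pos, neg in pairs:
            diff = [p - n for p, n in zip(pos, neg)]
            margin = sum(wi * di for wi, di in zip(w, diff))
            g = -1.0 / (1.0 + math.exp(margin))  # d/d(margin) of log(1 + e^-margin)
            w = [wi - lr * g * di for wi, di in zip(w, diff)]
    return w

# Hypothetical 2-feature (relevant, non-relevant) document pairs.
pairs = [([1.0, 0.1], [0.2, 0.8]), ([0.9, 0.3], [0.1, 0.7])]
w = train_pairwise_ranker(pairs, dim=2)

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))
```

The same pairwise-loss structure carries over when `score` is a neural network trained end-to-end; only the scorer and its gradient change.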
Posted Content
On the O(1/k) Convergence of Asynchronous Distributed Alternating Direction Method of Multipliers
Ermin Wei, Asuman Ozdaglar, et al.
TL;DR: A novel asynchronous ADMM-based distributed method is presented for the general formulation of a network of agents cooperatively solving a global optimization problem, and it is shown to converge at the rate O(1/k).
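For readers unfamiliar with the update structure behind the result, the following is a sketch of the standard *synchronous* consensus ADMM on a toy problem (minimizing the sum of quadratics, which converges to the average of the agents' values); the cited paper's contribution is an asynchronous variant of this scheme with an O(1/k) rate, which this toy does not attempt to reproduce.

```python
def consensus_admm(a, rho=1.0, iters=100):
    # Each agent i holds a local cost (x - a[i])^2 and a local copy x[i];
    # ADMM couples them through a global variable z and scaled duals u[i].
    n = len(a)
    x = [0.0] * n
    u = [0.0] * n
    z = 0.0
    for _ in range(iters):
        # Local primal step: argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2.
        x = [(2 * a[i] + rho * (z - u[i])) / (2 + rho) for i in range(n)]
        # Global consensus step: average the shifted local copies.
        z = sum(x[i] + u[i] for i in range(n)) / n
        # Scaled dual ascent on the consensus constraint x_i = z.
        u = [u[i] + x[i] - z for i in range(n)]
    return z
```

At the fixed point every local copy agrees with z, which here equals the mean of the agents' values.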
Posted Content
Stochastic Gradient Descent as Approximate Bayesian Inference
TL;DR: In this paper, an approximate Bayesian posterior inference algorithm based on constant-step-size stochastic gradient descent (constant SGD) was proposed, in which the tuning parameters of SGD are adjusted so that its stationary distribution best matches a posterior, minimizing the Kullback-Leibler divergence.
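The core observation can be seen in a toy pure-Python sketch: with a constant step size, SGD does not converge to a point but fluctuates in a stationary distribution around the optimum, so its post-burn-in iterates can be read as approximate posterior samples. The model (a 1-D Gaussian mean fit), step size, and batch size below are illustrative choices, not the paper's settings.

```python
import random

random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(1000)]

def constant_sgd_samples(data, step=0.05, batch=10, iters=2000, burn=500):
    # Constant-step SGD on the mean of a Gaussian model: minibatch noise
    # keeps the iterate jittering around the optimum; the spread of the
    # stationary distribution is controlled by step and batch size.
    theta = 0.0
    samples = []
    for t in range(iters):
        mb = random.sample(data, batch)
        grad = sum(theta - x for x in mb) / batch  # grad of avg 0.5*(theta - x)^2
        theta -= step * grad
        if t >= burn:
            samples.append(theta)  # treat post-burn-in iterates as samples
    return samples

samples = constant_sgd_samples(data)
posterior_mean_est = sum(samples) / len(samples)
```

Averaging the samples recovers the optimum (here, the data mean); the paper's contribution is to choose the tuning parameters so the whole stationary distribution, not just its mean, approximates a Bayesian posterior.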
Journal ArticleDOI
The Extreme Value Machine
TL;DR: The Extreme Value Machine (EVM) is a novel, theoretically sound classifier with a well-grounded interpretation derived from statistical Extreme Value Theory (EVT); it is the first classifier able to perform nonlinear, kernel-free, variable-bandwidth incremental learning.
Journal ArticleDOI
Bayesian Robust Principal Component Analysis
TL;DR: The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings.
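The low-rank-plus-sparse decomposition M = L + S that the summary refers to can be illustrated with a simple non-Bayesian point-estimate sketch: alternate a rank-1 fit (via power iteration) with soft-thresholding of the residual. This is only a heuristic stand-in; the paper's Bayesian model instead infers posteriors over the low-rank and sparse-outlier factors and the noise statistics. The matrix, threshold, and iteration counts below are illustrative.

```python
def rank1_approx(M, iters=50):
    # Best rank-1 approximation via power iteration (pure-Python SVD stand-in).
    m, n = len(M), len(M[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = sum(x * x for x in u) ** 0.5 or 1.0
        u = [x / nu for x in u]
        v = [sum(M[i][j] * u[i] for i in range(m)) for j in range(n)]
    s = sum(x * x for x in v) ** 0.5
    v = [x / s for x in v] if s else v
    return [[s * u[i] * v[j] for j in range(n)] for i in range(m)]

def shrink(x, t):
    # Soft-thresholding: promotes sparsity in the outlier term.
    return max(x - t, 0.0) if x > 0 else min(x + t, 0.0)

def lowrank_plus_sparse(M, lam=0.3, iters=300):
    # Alternate: L = rank-1 fit of (M - S); S = soft-threshold of (M - L).
    m, n = len(M), len(M[0])
    S = [[0.0] * n for _ in range(m)]
    for _ in range(iters):
        R = [[M[i][j] - S[i][j] for j in range(n)] for i in range(m)]
        L = rank1_approx(R)
        S = [[shrink(M[i][j] - L[i][j], lam) for j in range(n)] for i in range(m)]
    return L, S

# Rank-1 data (rows proportional to [1, 2, 3]) with one +5 outlier at (0, 0).
M = [[6.0, 2.0, 3.0], [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]]
L, S = lowrank_plus_sparse(M)
```

The sparse term S ends up concentrated on the corrupted entry, while L recovers the clean rank-1 structure; the Bayesian treatment gets the same separation with uncertainty estimates and without hand-tuning the threshold.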