Author

Kolyan Ray

Bio: Kolyan Ray is an academic researcher from Imperial College London. The author has contributed to research in topics including Prior probability and Minimax. The author has an h-index of 9 and has co-authored 31 publications receiving 335 citations. Previous affiliations of Kolyan Ray include Leiden University and King's College London.

Papers
Journal ArticleDOI
TL;DR: In this paper, the authors investigated the frequentist posterior contraction rate of nonparametric Bayesian procedures in linear inverse problems in both the mildly and severely ill-posed cases, and proved a theorem in a general Hilbert space setting under approximation-theoretic assumptions on the prior.
Abstract: We investigate the frequentist posterior contraction rate of nonparametric Bayesian procedures in linear inverse problems in both the mildly and severely ill-posed cases. A theorem is proved in a general Hilbert space setting under approximation-theoretic assumptions on the prior. The result is applied to non-conjugate priors, notably sieve and wavelet series priors, as well as in the conjugate setting. In the mildly ill-posed setting minimax optimal rates are obtained, with sieve priors being rate adaptive over Sobolev classes. In the severely ill-posed setting, oversmoothing the prior yields minimax rates. Previously established results in the conjugate setting are obtained using this method. Examples of applications include deconvolution, recovering the initial condition in the heat equation and the Radon transform.
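For context, a standard formulation of this sampling model (stated here for illustration under common assumptions; the paper's precise conditions may differ) observes the unknown function through a compact linear operator corrupted by scaled Gaussian white noise, \begin{equation*}Y = Kf + \frac{1}{\sqrt{n}}\,\mathbb{W},\end{equation*} where $K$ is an injective compact linear operator between Hilbert spaces with singular values $(\kappa_k)$. The problem is typically called mildly ill-posed when $\kappa_k \asymp k^{-p}$ and severely ill-posed when $\kappa_k \asymp e^{-ck^{\beta}}$ for constants $p, c, \beta > 0$.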

66 citations

Journal ArticleDOI
TL;DR: A mean-field spike and slab variational Bayes (VB) approximation to Bayesian model selection priors in sparse high-dimensional linear regression is studied, showing that it performs comparably to other state-of-the-art Bayesian variable selection methods.
Abstract: We study a mean-field spike and slab variational Bayes (VB) approximation to Bayesian model selection priors in sparse high-dimensional linear regression. Under compatibility conditions on the design matrix, oracle inequalities are derived for the mean-field VB approximation, implying that it converges to the sparse truth at the optimal rate and gives optimal prediction of the response vector. The empirical performance of our algorithm is studied, showing that it performs comparably to other state-of-the-art Bayesian variable selection methods. We also numerically demonstrate that the widely used coordinate-ascent variational inference (CAVI) algorithm can be highly sensitive to the parameter updating order, leading to potentially poor performance. To mitigate this, we propose a novel prioritized updating scheme that uses a data-driven updating order and performs better in simulations. The variational algorithm is implemented in the R package 'sparsevb'.
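As a hedged illustration of the objects involved (the paper's exact prior and update equations may differ), a spike and slab prior places an independent mixture of a point mass at zero and a continuous slab on each regression coefficient, and the mean-field variational family mirrors this structure: \begin{equation*}\theta_i \sim w\,g(\theta_i) + (1-w)\,\delta_0, \qquad Q(\theta) = \prod_{i=1}^{p}\bigl[\gamma_i\,\mathcal{N}(\mu_i,\sigma_i^2) + (1-\gamma_i)\,\delta_0\bigr],\end{equation*} where $g$ is a slab density (for example Laplace), $\delta_0$ is the Dirac mass at zero, and the variational parameters $(\mu_i,\sigma_i,\gamma_i)$ are updated one coordinate at a time by CAVI, with $\gamma_i$ read off as an approximate posterior inclusion probability; the prioritized scheme mentioned above chooses the order of these coordinate updates in a data-driven way.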

42 citations

Journal ArticleDOI
TL;DR: In this article, the authors investigate Bernstein-von Mises theorems for adaptive nonparametric Bayesian procedures in the canonical Gaussian white noise model and construct optimal frequentist confidence sets based on the posterior distribution.
Abstract: We investigate Bernstein-von Mises theorems for adaptive nonparametric Bayesian procedures in the canonical Gaussian white noise model. We consider both a Hilbert space and multiscale setting with applications in $L^2$ and $L^\infty$ respectively. This provides a theoretical justification for plug-in procedures, for example the use of certain credible sets for sufficiently smooth linear functionals. We use this general approach to construct optimal frequentist confidence sets based on the posterior distribution. We also provide simulations to numerically illustrate our approach and obtain a visual representation of the geometries involved.
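A typical statement of the kind of result involved, sketched here under standard assumptions rather than the paper's exact conditions: in the white noise model $dY_t = f(t)\,dt + n^{-1/2}\,dW_t$, a Bernstein-von Mises theorem for a smooth linear functional $\Psi(f)=\int f\psi$ asserts that, given the data, \begin{equation*}\sqrt{n}\bigl(\Psi(f) - \hat{\Psi}_n\bigr) \;\big|\; Y \;\rightsquigarrow\; N\bigl(0,\|\psi\|_{L^2}^2\bigr)\end{equation*} for a suitable centring $\hat{\Psi}_n$, so that posterior credible intervals for $\Psi(f)$ are asymptotically valid and efficient frequentist confidence intervals.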

42 citations

Journal ArticleDOI
TL;DR: In this paper, the convergence rates of a penalised least squares estimator, which equals the maximum a posteriori (MAP) estimate corresponding to a high-dimensional Gaussian product prior, are derived from corresponding contraction rates for the associated posterior distributions.
Abstract: The problem of determining a periodic Lipschitz vector field $b=(b_{1},\ldots ,b_{d})$ from an observed trajectory of the solution $(X_{t}:0\le t\le T)$ of the multi-dimensional stochastic differential equation \begin{equation*}dX_{t}=b(X_{t})\,dt+dW_{t},\quad t\geq 0,\end{equation*} where $W_{t}$ is a standard $d$-dimensional Brownian motion, is considered. Convergence rates of a penalised least squares estimator, which equals the maximum a posteriori (MAP) estimate corresponding to a high-dimensional Gaussian product prior, are derived. These results are deduced from corresponding contraction rates for the associated posterior distributions. The rates obtained are optimal up to log-factors in $L^{2}$-loss in any dimension, and also for supremum norm loss when $d\le 4$. Further, when $d\le 3$, nonparametric Bernstein–von Mises theorems are proved for the posterior distributions of $b$. From this, we deduce functional central limit theorems for the implied estimators of the invariant measure $\mu _{b}$. The limiting Gaussian process distributions have a covariance structure that is asymptotically optimal from an information-theoretic point of view.
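To make the estimator concrete (a sketch under standard assumptions; the exact penalty and function spaces used in the paper may differ), the Girsanov log-likelihood of the observed path and the associated penalised criterion take the form \begin{equation*}\ell_T(b) = \int_0^T b(X_t)\cdot dX_t - \frac{1}{2}\int_0^T |b(X_t)|^2\,dt, \qquad \hat{b} = \arg\max_b\,\Bigl\{\ell_T(b) - \frac{\lambda}{2}\|b\|_{H}^2\Bigr\},\end{equation*} where $\|\cdot\|_H$ is the reproducing kernel Hilbert space norm attached to the Gaussian product prior; maximising this penalised likelihood is equivalent to a penalised least squares problem, and its solution coincides with the MAP estimate.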

29 citations


Cited by
Journal ArticleDOI
TL;DR: In this work, the convergence of distributions of likelihood ratios is discussed, and limit laws for likelihood ratios are developed.
Abstract: 1 Introduction.- 2 Experiments, Deficiencies, Distances.- 2.1 Comparing Risk Functions.- 2.2 Deficiency and Distance between Experiments.- 2.3 Likelihood Ratios and Blackwell's Representation.- 2.4 Further Remarks on the Convergence of Distributions of Likelihood Ratios.- 2.5 Historical Remarks.- 3 Contiguity - Hellinger Transforms.- 3.1 Contiguity.- 3.2 Hellinger Distances, Hellinger Transforms.- 3.3 Historical Remarks.- 4 Gaussian Shift and Poisson Experiments.- 4.1 Introduction.- 4.2 Gaussian Experiments.- 4.3 Poisson Experiments.- 4.4 Historical Remarks.- 5 Limit Laws for Likelihood Ratios.- 5.1 Introduction.- 5.2 Auxiliary Results.- 5.2.1 Lindeberg's Procedure.- 5.2.2 Lévy Splittings.- 5.2.3 Paul Lévy's Symmetrization Inequalities.- 5.2.4 Conditions for Shift-Compactness.- 5.2.5 A Central Limit Theorem for Infinitesimal Arrays.- 5.2.6 The Special Case of Gaussian Limits.- 5.2.7 Peano Differentiable Functions.- 5.3 Limits for Binary Experiments.- 5.4 Gaussian Limits.- 5.5 Historical Remarks.- 6 Local Asymptotic Normality.- 6.1 Introduction.- 6.2 Locally Asymptotically Quadratic Families.- 6.3 A Method of Construction of Estimates.- 6.4 Some Local Bayes Properties.- 6.5 Invariance and Regularity.- 6.6 The LAMN and LAN Conditions.- 6.7 Additional Remarks on the LAN Conditions.- 6.8 Wald's Tests and Confidence Ellipsoids.- 6.9 Possible Extensions.- 6.10 Historical Remarks.- 7 Independent, Identically Distributed Observations.- 7.1 Introduction.- 7.2 The Standard i.i.d. Case: Differentiability in Quadratic Mean.- 7.3 Some Examples.- 7.4 Some Nonparametric Considerations.- 7.5 Bounds on the Risk of Estimates.- 7.6 Some Cases Where the Number of Observations Is Random.- 7.7 Historical Remarks.- 8 On Bayes Procedures.- 8.1 Introduction.- 8.2 Bayes Procedures Behave Nicely.- 8.3 The Bernstein-von Mises Phenomenon.- 8.4 A Bernstein-von Mises Result for the i.i.d. Case.- 8.5 Bayes Procedures Behave Miserably.- 8.6 Historical Remarks.- Author Index.
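For orientation, the local asymptotic normality (LAN) condition at the heart of the later chapters can be sketched as follows (a standard formulation, not a quotation from the book): the local log-likelihood ratios admit the quadratic expansion \begin{equation*}\log\frac{dP^n_{\theta + h/\sqrt{n}}}{dP^n_{\theta}} = h^{\top}\Delta_{n,\theta} - \tfrac{1}{2}\,h^{\top}I(\theta)\,h + o_{P^n_{\theta}}(1), \qquad \Delta_{n,\theta} \rightsquigarrow N\bigl(0, I(\theta)\bigr),\end{equation*} so that the statistical experiments are locally approximated by a Gaussian shift experiment; this approximation underlies the construction of estimates, the confidence ellipsoids, and the Bernstein-von Mises phenomenon treated in the chapters listed above.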

483 citations

Journal ArticleDOI
TL;DR: This survey paper aims to give an account of some of the main contributions in data-driven inverse problems.
Abstract: Recent research in inverse problems seeks to develop a mathematically coherent foundation for combining data-driven models, and in particular those based on deep learning, with domain-specific knowledge contained in physical–analytical models. The focus is on solving ill-posed inverse problems that are at the core of many challenging applications in the natural sciences, medicine and life sciences, as well as in engineering and industrial applications. This survey paper aims to give an account of some of the main contributions in data-driven inverse problems.

473 citations

Journal ArticleDOI
TL;DR: In this paper, a Bayesian approach is adopted to the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map applied to u. The prior measure is specified as a Gaussian random field $\mu_0$.
Abstract: We consider the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map $\mathcal {G}$ applied to u. We adopt a Bayesian approach to the problem and work in a setting where the prior measure is specified as a Gaussian random field $\mu_0$. We work under a natural set of conditions on the likelihood which implies the existence of a well-posed posterior measure, $\mu^y$. Under these conditions, we show that the maximum a posteriori (MAP) estimator is well defined as the minimizer of an Onsager–Machlup functional defined on the Cameron–Martin space of the prior; thus, we link a problem in probability with a problem in the calculus of variations. We then consider the case where the observational noise vanishes and establish a form of Bayesian posterior consistency for the MAP estimator. We also prove a similar result for the case where the observation of $\mathcal {G}(u)$ can be repeated as many times as desired with independent identically distributed noise. The theory is illustrated with examples from an inverse problem for the Navier–Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics.
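As a brief sketch of the object being minimised (written for the standard additive Gaussian noise case; the paper works under more general likelihood conditions), the Onsager–Machlup functional combines the data misfit with the Cameron–Martin norm of the prior, \begin{equation*}I(u) = \Phi(u;y) + \tfrac{1}{2}\,\|u\|_{E}^{2}, \qquad \Phi(u;y) = \tfrac{1}{2}\,\bigl|\Gamma^{-1/2}\bigl(y - \mathcal{G}(u)\bigr)\bigr|^{2},\end{equation*} where $E$ denotes the Cameron–Martin space of the Gaussian prior $\mu_0$ and $\Gamma$ the noise covariance; the MAP estimator is then characterised as a minimiser of $I$.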

169 citations