Author

Philip Matchett Wood

Other affiliations: Rutgers University
Bio: Philip Matchett Wood is an academic researcher from the University of Wisconsin-Madison. The author has contributed to research in topics: Random matrix & Matrix (mathematics). The author has an h-index of 10 and has co-authored 28 publications receiving 451 citations. Previous affiliations of Philip Matchett Wood include Rutgers University.

Papers
Journal ArticleDOI
TL;DR: Tao and Vu, as mentioned in this paper, showed that the probability that a matrix $M_n$ is singular is at most $(p^{1/r} + o(1))^n$ for any constant $0 < p \le 1$ and any constant positive integer $r$.
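
A rough, purely illustrative Monte Carlo sketch (not from the paper, and not a check of the bound itself): the singularity probability of small random sign matrices already decays quickly in n. The helper name and trial counts below are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def singular_fraction(n, trials=20000):
    """Monte Carlo estimate of the probability that an n x n random
    +/-1 sign matrix is singular."""
    count = 0
    for _ in range(trials):
        m = rng.choice([-1, 1], size=(n, n))
        # The determinant of an integer matrix is an integer, so rounding the
        # floating-point value and testing against 0 is safe for small n.
        if round(np.linalg.det(m)) == 0:
            count += 1
    return count / trials

for n in (3, 4, 5, 6, 7):
    print(n, singular_fraction(n))
```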

195 citations

Journal ArticleDOI
TL;DR: In this article, it was shown that the circular law holds for sparse n by n random matrices whose entries are nonzero with probability 1/n^{1−α}, where 0 < α ≤ 1 is any constant, provided the entries have zero mean and unit variance.
Abstract: The universality phenomenon asserts that the distribution of the eigenvalues of a random matrix with i.i.d. zero mean, unit variance entries does not depend on the underlying structure of the random entries. For example, a plot of the eigenvalues of a random sign matrix, where each entry is +1 or −1 with equal probability, looks the same as an analogous plot of the eigenvalues of a random matrix where each entry is complex Gaussian with zero mean and unit variance. In the current paper, we prove a universality result for sparse random n by n matrices where each entry is nonzero with probability 1/n^{1−α}, where 0 < α ≤ 1 is any constant. One consequence of the sparse universality principle is that the circular law holds for sparse random matrices so long as the entries have zero mean and unit variance, which is the most general result for sparse random matrices to date.
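
A minimal numerical sketch of the universality being described (sizes and the sparsity parameter below are illustrative, not taken from the paper): the eigenvalues of a dense random sign matrix and of a rescaled sparse matrix, both normalized by √n, fill out the same unit disk.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n, alpha = 1000, 0.5
p = n ** (alpha - 1.0)  # each entry is nonzero with probability 1/n^(1-alpha)

# Dense random sign matrix: entries are +1 or -1 with equal probability.
dense = rng.choice([-1.0, 1.0], size=(n, n))

# Sparse matrix rescaled so each entry still has mean zero and unit variance.
mask = rng.random((n, n)) < p
sparse = np.where(mask, rng.choice([-1.0, 1.0], size=(n, n)) / np.sqrt(p), 0.0)

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
for ax, mat, title in zip(axes, (dense, sparse), ("dense sign", "sparse")):
    ev = np.linalg.eigvals(mat / np.sqrt(n))  # circular-law normalization
    ax.plot(ev.real, ev.imag, ".", markersize=1)
    ax.set_title(title)
    ax.set_aspect("equal")
plt.show()
```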

64 citations

Journal ArticleDOI
TL;DR: In this paper, a conjecture is given for the average number of unramified $G$-extensions of a quadratic field for any finite group $G$; the Cohen-Lenstra heuristics are the special case in which $G$ is abelian of odd order.
Abstract: In this paper we give a conjecture for the average number of unramified $G$-extensions of a quadratic field for any finite group $G$. The Cohen-Lenstra heuristics are the specialization of our conjecture to the case that $G$ is abelian of odd order. We prove a theorem towards the function field analog of our conjecture, and give additional motivation for the conjecture, including the construction of a lifting invariant for the unramified $G$-extensions that takes the same number of values as the predicted average and an argument using the Malle-Bhargava principle. We note that for even $|G|$, corrections for the roots of unity in $\mathbb{Q}$ are required, which cannot be seen when $G$ is abelian.

28 citations

Journal ArticleDOI
08 Oct 2013
TL;DR: In this article, it was shown that the regularization of a sequence of matrices that converges in ∗-moments to a regular element a, by the addition of a polynomially vanishing Gaussian Ginibre matrix, forces the empirical measure of eigenvalues to converge to the Brown measure of a.
Abstract: We discuss regularization by noise of the spectrum of large random nonnormal matrices. Under suitable conditions, we show that the regularization of a sequence of matrices that converges in ∗-moments to a regular element a, by the addition of a polynomially vanishing Gaussian Ginibre matrix, forces the empirical measure of eigenvalues to converge to the Brown measure of a.
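
A quick numerical sketch of the standard example behind this statement (the matrix size and noise exponent below are illustrative, not taken from the paper): the nilpotent shift has all eigenvalues at 0, yet its ∗-moments match those of a Haar unitary, whose Brown measure is uniform on the unit circle; adding a polynomially small Ginibre perturbation moves the empirical eigenvalue measure close to that circle.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n, gamma = 1000, 1.0  # noise level n**(-gamma) vanishes polynomially in n

# Nilpotent shift matrix: every eigenvalue is exactly 0.
shift = np.diag(np.ones(n - 1), k=1)

# Gaussian Ginibre matrix normalized to have operator norm of order 1.
ginibre = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)

ev = np.linalg.eigvals(shift + n ** (-gamma) * ginibre)
plt.plot(ev.real, ev.imag, ".", markersize=2)
plt.gca().set_aspect("equal")
plt.title("eigenvalues of the shift plus polynomially small Ginibre noise")
plt.show()
```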

28 citations

Journal ArticleDOI
TL;DR: A simple algorithm is presented that allows direct sampling from the uniform distribution on the set of tridiagonal doubly stochastic matrices; the entries above the diagonal of such a matrix form a Markov chain whose large-n limit is reversible and explicitly diagonalizable, with transformed Jacobi polynomials as eigenfunctions.
Abstract: Let $\mathcal{T}_n$ be the compact convex set of tridiagonal doubly stochastic matrices. These arise naturally in probability problems as birth and death chains with a uniform stationary distribution. We study 'typical' matrices T ∈ $\mathcal{T}_n$ chosen uniformly at random from $\mathcal{T}_n$. A simple algorithm is presented to allow direct sampling from the uniform distribution on $\mathcal{T}_n$. Using this algorithm, the elements above the diagonal in T are shown to form a Markov chain. For large n, the limiting Markov chain is reversible and explicitly diagonalizable with transformed Jacobi polynomials as eigenfunctions. These results are used to study the limiting behavior of such typical birth and death chains, including their eigenvalues and mixing times. The results on uniformly random tridiagonal doubly stochastic matrices are related to the distribution of alternating permutations chosen uniformly at random.
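
The paper's sampler is direct; purely as an illustration of the object being sampled (and not the paper's algorithm), the sketch below draws a uniform matrix by naive rejection from the defining constraints: the off-diagonal entries c_1, ..., c_{n-1} must be nonnegative with adjacent sums at most 1. The helper name and the size n = 8 are hypothetical choices, and rejection is only practical for small n.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_tridiag_doubly_stochastic(n, rng):
    """Uniform sample from the set of n x n tridiagonal doubly stochastic
    matrices by rejection sampling on the off-diagonal entries
    (illustrative only; the paper gives a direct sampling algorithm)."""
    while True:
        c = rng.random(n - 1)              # proposed off-diagonal entries
        if np.all(c[:-1] + c[1:] <= 1.0):  # adjacent sums <= 1 keep the diagonal nonnegative
            break
    t = np.zeros((n, n))
    for i in range(n - 1):
        t[i, i + 1] = t[i + 1, i] = c[i]
    pad = np.concatenate(([0.0], c, [0.0]))
    np.fill_diagonal(t, 1.0 - pad[:-1] - pad[1:])
    return t

t = random_tridiag_doubly_stochastic(8, rng)
print(t.sum(axis=0), t.sum(axis=1))  # every row and column sums to 1
```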

27 citations


Cited by
Book
13 Apr 2012
TL;DR: As noted in this book, the field of random matrix theory has seen an explosion of activity in recent years, with connections to many areas of mathematics and physics, which makes the current state of the field almost too large to survey in a single book.
Abstract: The field of random matrix theory has seen an explosion of activity in recent years, with connections to many areas of mathematics and physics. However, this makes the current state of the field almost too large to survey in a single book. In this graduate text, we focus on one specific sector of the field, namely the spectral distribution of random Wigner matrix ensembles (such as the Gaussian Unitary Ensemble), as well as iid matrix ensembles. The text is largely self-contained and starts with a review of relevant aspects of probability theory and linear algebra. With over 200 exercises, the book is suitable as an introductory text for beginning graduate students seeking to enter the field.

1,075 citations

Book
26 Jun 2017
TL;DR: It is shown how combining various free probability results with a linearization trick makes it possible to determine the asymptotic eigenvalue distribution of general selfadjoint polynomials in independent random matrices.
Abstract: The concept of freeness was introduced by Voiculescu in the context of operator algebras. Later it was observed that it is also relevant for large random matrices. We will show how the combination of various free probability results with a linearization trick makes it possible to successfully address the problem of determining the asymptotic eigenvalue distribution of general selfadjoint polynomials in independent random matrices.
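
The linearization machinery itself is not reproduced here; the sketch below (with illustrative matrix size and an arbitrarily chosen polynomial) only shows the object whose limit the method computes: the empirical spectrum of a selfadjoint polynomial, here XY + YX + X^2, in two independent GUE matrices.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
n = 1500

def gue(n, rng):
    """GUE-type matrix normalized so its spectrum converges to [-2, 2]."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / (2 * np.sqrt(n))

x, y = gue(n, rng), gue(n, rng)
poly = x @ y + y @ x + x @ x  # a selfadjoint polynomial in x and y
ev = np.linalg.eigvalsh(poly)

plt.hist(ev, bins=80, density=True)
plt.title("empirical spectrum of XY + YX + X^2, X and Y independent GUE")
plt.show()
```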

296 citations

Posted Content
TL;DR: The main result is essentially a Bai-Yin type theorem in random matrix theory and is likely to be of independent interest: for any fixed U ∈ R^{n×d} with orthonormal columns and random sparse Π, all singular values of ΠU lie in [1 − ε, 1 + ε] with good probability.
Abstract: An "oblivious subspace embedding (OSE)" given some parameters eps,d is a distribution D over matrices B in R^{m x n} such that for any linear subspace W in R^n with dim(W) = d it holds that Pr_{B ~ D}(forall x in W ||B x||_2 in (1 +/- eps)||x||_2) > 2/3 We show an OSE exists with m = O(d^2/eps^2) and where every B in the support of D has exactly s=1 non-zero entries per column. This improves previously best known bound in [Clarkson-Woodruff, arXiv:1207.6365]. Our quadratic dependence on d is optimal for any OSE with s=1 [Nelson-Nguyen, 2012]. We also give two OSE's, which we call Oblivious Sparse Norm-Approximating Projections (OSNAPs), that both allow the parameter settings m = O(d/eps^2) and s = polylog(d)/eps, or m = O(d^{1+gamma}/eps^2) and s=O(1/eps) for any constant gamma>0. This m is nearly optimal since m >= d is required simply to no non-zero vector of W lands in the kernel of B. These are the first constructions with m=o(d^2) to have s=o(d). In fact, our OSNAPs are nothing more than the sparse Johnson-Lindenstrauss matrices of [Kane-Nelson, SODA 2012]. Our analyses all yield OSE's that are sampled using either O(1)-wise or O(log d)-wise independent hash functions, which provides some efficiency advantages over previous work for turnstile streaming applications. Our main result is essentially a Bai-Yin type theorem in random matrix theory and is likely to be of independent interest: i.e. we show that for any U in R^{n x d} with orthonormal columns and random sparse B, all singular values of BU lie in [1-eps, 1+eps] with good probability. Plugging OSNAPs into known algorithms for numerical linear algebra problems such as approximate least squares regression, low rank approximation, and approximating leverage scores implies faster algorithms for all these problems.

267 citations

Proceedings ArticleDOI
26 Oct 2013
TL;DR: In this paper, it was shown that Oblivious Sparse Norm-Approximating Projections (OSNAPs) yield oblivious subspace embeddings with m = O(d^{1+γ}/ε^2) rows and only s = O_γ(1/ε) non-zero entries per column.
Abstract: An oblivious subspace embedding (OSE) given some parameters ε, d is a distribution D over matrices Π ∈ R^{m×n} such that for any linear subspace W ⊆ R^n with dim(W) = d, P_{Π~D}(∀x ∈ W: ||Πx||_2 ∈ (1 ± ε)||x||_2) > 2/3. We show that a certain class of distributions, Oblivious Sparse Norm-Approximating Projections (OSNAPs), provides OSE's with m = O(d^{1+γ}/ε^2), and where every matrix Π in the support of the OSE has only s = O_γ(1/ε) non-zero entries per column, for γ > 0 any desired constant. Plugging OSNAPs into known algorithms for approximate least squares regression, ℓ_p regression, low rank approximation, and approximating leverage scores implies faster algorithms for all these problems. Our main result is essentially a Bai-Yin type theorem in random matrix theory and is likely to be of independent interest: we show that for any fixed U ∈ R^{n×d} with orthonormal columns and random sparse Π, all singular values of ΠU lie in [1 − ε, 1 + ε] with good probability. This can be seen as a generalization of the sparse Johnson-Lindenstrauss lemma, which was concerned with d = 1. Our methods also recover a slightly sharper version of a main result of [Clarkson-Woodruff, STOC 2013], with a much simpler proof. That is, we show that OSNAPs give an OSE with m = O(d^2/ε^2), s = 1.

257 citations

Journal ArticleDOI
TL;DR: In this article, it was shown that it is information-theoretically impossible to cluster if s^2 ≤ d, and moreover that it is impossible even to estimate the model parameters from the graph when s^2 ≤ d.
Abstract: We study a random graph model called the “stochastic block model” in statistics and the “planted partition model” in theoretical computer science. In its simplest form, this is a random graph with two equal-sized classes of vertices, with a within-class edge probability of q and a between-class edge probability of q′. A striking conjecture of Decelle, Krzakala, Moore and Zdeborová [9], based on deep, non-rigorous ideas from statistical physics, gave a precise prediction for the algorithmic threshold of clustering in the sparse planted partition model. In particular, if q=a/n and q′=b/n, s=(a−b)/2 and d=(a+b)/2, then Decelle et al. conjectured that it is possible to efficiently cluster in a way correlated with the true partition if s^2 > d and impossible if s^2 < d (clustering was previously known to be possible when s^2 > C d ln d for a sufficiently large constant C). In a previous work, we proved that indeed it is information theoretically impossible to cluster if s^2 ≤ d and moreover that it is information theoretically impossible to even estimate the model parameters from the graph when s^2 ≤ d. A different independent proof of the same result was recently obtained by Massoulié [20].
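
A small sketch of the model and of the quantities the threshold is stated in (all helper names and parameter choices are illustrative): generate a sparse two-class planted partition graph and compare s^2 with d. The naive adjacency-spectral guess at the end is not the paper's algorithm and is known to degrade near the threshold.

```python
import numpy as np

rng = np.random.default_rng(6)

def planted_partition(n, a, b, rng):
    """Two-class stochastic block model: within-class edge probability a/n,
    between-class edge probability b/n."""
    labels = np.repeat([0, 1], n // 2)
    same = labels[:, None] == labels[None, :]
    prob = np.where(same, a / n, b / n)
    upper = np.triu(rng.random((n, n)) < prob, k=1)
    adj = (upper | upper.T).astype(float)
    return adj, labels

n, a, b = 2000, 8.0, 2.0
adj, labels = planted_partition(n, a, b, rng)
s, d = (a - b) / 2, (a + b) / 2
print("s^2 =", s * s, " d =", d, " clustering conjectured feasible:", s * s > d)

# Naive spectral guess from the second-largest adjacency eigenvector; near the
# threshold this plain method breaks down and more delicate algorithms are needed.
_, vecs = np.linalg.eigh(adj)
guess = (vecs[:, -2] > 0).astype(int)
overlap = abs((guess == labels).mean() - 0.5) * 2
print("overlap with the true partition:", overlap)
```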

252 citations