Author

Dima Grigoriev

Bio: Dima Grigoriev is an academic researcher from Lille University of Science and Technology. The author has contributed to research in topics including polynomials and upper and lower bounds. The author has an h-index of 31 and has co-authored 230 publications receiving 3,580 citations. Previous affiliations of Dima Grigoriev include the Steklov Mathematical Institute and the University of Rennes.


Papers
Journal ArticleDOI
TL;DR: A linear (and thereby sharp) lower bound is established on the degrees of refutations in the Positivstellensatz calculus over a real field, a proof system introduced by Grigoriev and Vorobjov.
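For context, a minimal sketch of what such a refutation is (the standard form from real algebraic geometry, not a formulation quoted from the paper): a system of polynomial equations $f_1 = \dots = f_k = 0$ with no common real solution admits an identity $$\sum_{i=1}^{k} g_i f_i \;=\; 1 + \sum_{j} h_j^2$$ for some polynomials $g_i, h_j$; at a common real zero the left side would vanish while the right side is at least 1, so the identity certifies unsolvability. The degree of the refutation is the maximum degree occurring in the identity, and the result provides a linear, and thereby sharp, lower bound on this degree.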

224 citations

Book ChapterDOI
03 Sep 1984
TL;DR: An algorithm is described that produces, for each formula of the first-order theory of algebraically closed fields, an equivalent quantifier-free formula.
Abstract: An algorithm is described that produces, for each formula of the first-order theory of algebraically closed fields, an equivalent quantifier-free formula. Denote by N the number of polynomials occurring in the formula, by d an upper bound on the degrees of the polynomials, by n the number of variables, and by a the number of quantifier alternations (in prenex form). Then the algorithm works within time polynomial in the formula's size and in $(Nd)^{n^{2a+2}}$. Up to now, a bound $(Nd)^{n^{O(n)}}$ was known ([5], [7], [15]).
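A toy instance of such quantifier elimination over an algebraically closed field (an illustration, not an example from the paper): since every non-constant univariate polynomial has a root in such a field, $$\exists x\,(a x^{2} + b x + c = 0) \;\Longleftrightarrow\; a \neq 0 \;\vee\; b \neq 0 \;\vee\; c = 0,$$ where the right-hand side is quantifier-free and the last disjunct covers the degenerate case $a = b = 0$.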

157 citations

Journal ArticleDOI
TL;DR: This algorithm yields the first efficient deterministic polynomial-time algorithm (and moreover a boolean $NC$-algorithm) for interpolating t-sparse polynomials over finite fields, and should be contrasted with the fact that efficient interpolation using a black box that only evaluates the polynomial at points in $GF[q]$ is not possible.
Abstract: The authors consider the problem of reconstructing (i.e., interpolating) a t-sparse multivariate polynomial given a black box which will produce the value of the polynomial for any value of the arguments. It is shown that, if the polynomial has coefficients in a finite field $GF[q]$ and the black box can evaluate the polynomial in the field $GF[q^{\lceil 2\log_{q}(nt)+3 \rceil}]$, where n is the number of variables, then there is an algorithm to interpolate the polynomial in $O(\log^3 (nt))$ boolean parallel time and $O(n^2 t^6 \log^2 nt)$ processors. This algorithm yields the first efficient deterministic polynomial-time algorithm (and moreover a boolean $NC$-algorithm) for interpolating t-sparse polynomials over finite fields and should be contrasted with the fact that efficient interpolation using a black box that only evaluates the polynomial at points in $GF[q]$ is not possible (cf. [M. Clausen, A. Dress, J. Grabmeier, and M. Karpinski, Theoret. Comput. Sci., 1990, to appear]). This algorithm, tog...
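The impossibility half of this contrast has a short self-contained illustration. Over $GF[q]$, Fermat's little theorem gives $x^q = x$ for every point $x$, so the two 1-sparse polynomials $X$ and $X^q$ define identical black boxes on $GF[q]$, and no number of queries inside the base field can tell them apart; hence the need to evaluate in an extension field. A toy check of this (my example, not code from the paper):

```python
# The 1-sparse polynomials X and X^q agree everywhere on GF(q)
# (Fermat's little theorem), so a black box restricted to GF(q)
# cannot distinguish them; evaluation in an extension field is needed.
# Illustrative toy only, not the paper's NC interpolation algorithm.

q = 7  # any prime, so that GF(q) is the integers mod q

def f(x):            # black box for the polynomial X
    return x % q

def g(x):            # black box for the polynomial X^q
    return pow(x, q, q)

assert all(f(x) == g(x) for x in range(q))
print(f"X and X^q agree on all of GF({q})")
```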

149 citations

Proceedings ArticleDOI
23 May 1998
TL;DR: The first exponential lower bound is proved on the size of any depth 3 arithmetic circuit with unbounded fanin computing an explicit function (the determinant) over an arbitrary finite field.
Abstract: We prove the first exponential lower bound on the size of any depth 3 arithmetic circuit with unbounded fanin computing an explicit function (the determinant) over an arbitrary finite field. This answers an open problem of [N91] and [NW95] for the case of finite fields. We interpret here arithmetic circuits in the algebra of polynomials over the given field. The proof method involves a new argument on the rank of linear functions, and a group symmetry on polynomials vanishing at certain nonsingular matrices, and could be of independent interest.
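For reference, a depth 3 ($\Sigma\Pi\Sigma$) arithmetic circuit with unbounded fanin computes a sum of products of affine linear forms (a standard definition, not notation taken from the paper): $$C(x_1,\dots,x_n) \;=\; \sum_{i=1}^{s} \prod_{j=1}^{d_i} \ell_{ij}(x_1,\dots,x_n),$$ and the size of the circuit is essentially the number of linear forms $\ell_{ij}$ it uses; the result says this size must be exponential in $n$ when $C$ is the $n \times n$ determinant over a finite field.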

141 citations

Proceedings ArticleDOI
12 Oct 1987
TL;DR: An $NC^3$ algorithm is designed for the problem of constructing all perfect matchings in a graph G with a permanent bounded by $O(n^k)$, which also entails, among other things, an efficient $NC^3$ algorithm for computing small (polynomially bounded) arithmetic permanents, and a sublinear parallel-time algorithm for enumerating all the perfect matchings in graphs with permanents up to $2^{n^\epsilon}$.
Abstract: It is shown that the problems of deciding the existence of and constructing a perfect matching in bipartite graphs G whose n × n adjacency matrices A have polynomially bounded permanents ($\mathrm{perm}(A) = n^{O(1)}$) are in the deterministic classes $NC^2$ and $NC^3$, respectively. We further design an $NC^3$ algorithm for the problem of constructing all perfect matchings (the enumeration problem) in a graph G with a permanent bounded by $O(n^k)$. The basic step was the development of a new symmetric-functions method for the decision algorithm and a new parallel technique for the matching enumeration problem. The enumeration algorithm works in $O(\log^3 n)$ parallel time and $O(n^{3k+5.5} \log n)$ processors. In the case of arbitrary bipartite graphs it yields an 'optimal' (up to the $\log n$ factor) parallel-time algorithm for enumerating all the perfect matchings in a graph. It also entails, among other things, an efficient $NC^3$ algorithm for computing small (polynomially bounded) arithmetic permanents, and a sublinear parallel-time algorithm for enumerating all the perfect matchings in graphs with permanents up to $2^{n^\epsilon}$.
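The connection this work exploits is that for a bipartite graph with $n \times n$ 0/1 adjacency matrix $A$, $\mathrm{perm}(A)$ equals the number of perfect matchings. A minimal sequential sketch of this count via Ryser's classical $O(2^n)$-term formula (an illustration only, not the paper's $NC^2$/$NC^3$ parallel method):

```python
from itertools import combinations

def permanent(A):
    """Ryser's formula: perm(A) = (-1)^n * sum over nonempty column
    subsets S of (-1)^|S| * prod_i sum_{j in S} A[i][j].
    Sequential illustration only, not the paper's parallel algorithm."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total

# K_{2,2} (a 4-cycle) has exactly two perfect matchings:
A = [[1, 1],
     [1, 1]]
print(permanent(A))  # -> 2
```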

130 citations


Cited by
Book
01 Jan 2009
TL;DR: The motivations and principles regarding learning algorithms for deep architectures are discussed, in particular those exploiting as building blocks the unsupervised learning of single-layer models such as Restricted Boltzmann Machines, which are used to construct deeper models such as Deep Belief Networks.
Abstract: Can machine learning deliver AI? Theoretical results, inspiration from the brain and cognition, as well as machine learning experiments suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one would need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers, graphical models with many levels of latent variables, or in complicated propositional formulae re-using many sub-formulae. Each level of the architecture represents features at a different level of abstraction, defined as a composition of lower-level features. Searching the parameter space of deep architectures is a difficult task, but new algorithms have been discovered and a new sub-area has emerged in the machine learning community since 2006, following these discoveries. Learning algorithms such as those for Deep Belief Networks and other related unsupervised learning algorithms have recently been proposed to train deep architectures, yielding exciting results and beating the state-of-the-art in certain areas. Learning Deep Architectures for AI discusses the motivations for and principles of learning algorithms for deep architectures. By analyzing and comparing recent results with different learning algorithms for deep architectures, explanations for their success are proposed and discussed, highlighting challenges and suggesting avenues for future explorations in this area.
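A minimal numpy sketch of the compositional structure described above, with each level applying a non-linearity to the previous level's features (layer sizes and the tanh non-linearity are arbitrary choices for illustration, not the book's):

```python
import numpy as np

rng = np.random.default_rng(0)

def level(x, w, b):
    # one level of a deep architecture: affine map followed by a non-linearity
    return np.tanh(x @ w + b)

# three stacked levels: features at each level are compositions
# of the features computed at the level below
sizes = [8, 16, 16, 4]  # input -> two hidden levels -> output
params = [(0.5 * rng.standard_normal((m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

x = rng.standard_normal((1, sizes[0]))  # a single input vector
for w, b in params:
    x = level(x, w, b)                  # deeper levels, more abstract features
print(x.shape)                          # (1, 4)
```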

7,767 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, covering algorithmic and structural questions and touching on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal ArticleDOI
TL;DR: It is proved that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries, and that objects other than signals and images can be perfectly reconstructed from very limited information.
Abstract: We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys $$m\ge C\,n^{1.2}r\log n$$ for some positive numerical constant C, then with very high probability, most n×n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information.
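The "simple convex optimization program" in question is nuclear norm minimization, which in the standard notation of this literature (with $\Omega$ the set of observed positions) reads $$\min_{X} \;\|X\|_{*} \quad \text{subject to} \quad X_{ij} = M_{ij}, \;\; (i,j) \in \Omega,$$ where $\|X\|_{*}$, the nuclear norm, is the sum of the singular values of $X$.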

5,274 citations

01 Aug 2000
TL;DR: A Bioentrepreneur course on the assessment of medical technology in the context of commercialization, addressing many issues unique to biomedical products.
Abstract: BIOE 402. Medical Technology Assessment. 2 or 3 hours. Bioentrepreneur course. Assessment of medical technology in the context of commercialization. Objectives, competition, market share, funding, pricing, manufacturing, growth, and intellectual property; many issues unique to biomedical products. Course Information: 2 undergraduate hours. 3 graduate hours. Prerequisite(s): Junior standing or above and consent of the instructor.

4,833 citations

Journal ArticleDOI
TL;DR: A convex programming problem is used to find the matrix with minimum nuclear norm that is consistent with the observed entries of a low-rank matrix, recovering all of the missing entries from most sufficiently large subsets of entries.
Abstract: Suppose that one observes an incomplete subset of entries selected from a low-rank matrix. When is it possible to complete the matrix and recover the entries that have not been seen? We demonstrate that in very general settings, one can perfectly recover all of the missing entries from most sufficiently large subsets by solving a convex programming problem that finds the matrix with the minimum nuclear norm agreeing with the observed entries. The techniques used in this analysis draw upon parallels in the field of compressed sensing, demonstrating that objects other than signals and images can be perfectly reconstructed from very limited information.
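A heuristic numpy sketch of one well-known way to attack this program, singular value thresholding (an iterative scheme from the follow-up literature, not the exact solver analyzed in these papers; the threshold, step size, and iteration count below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def svt(M_obs, mask, tau=5.0, step=1.2, iters=500):
    """Singular value thresholding for nuclear-norm matrix completion.
    mask[i, j] is True where the entry was observed; M_obs carries the
    observed values (zeros elsewhere). Heuristic sketch only."""
    Y = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        Y += step * mask * (M_obs - X)           # correct observed residuals
    return X

# rank-2 ground truth with 60% of entries observed uniformly at random
n, r = 40, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.6
X = svt(M * mask, mask)
print(np.linalg.norm(X - M) / np.linalg.norm(M))  # small relative error
```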

2,327 citations