Author

Alexander A. Razborov

Bio: Alexander A. Razborov is an academic researcher from the University of Chicago. The author has contributed to research in topics: Upper and lower bounds & Proof complexity. The author has an h-index of 45 and has co-authored 144 publications receiving 7,233 citations. Previous affiliations of Alexander A. Razborov include Toyota Technological Institute at Chicago & Toyota Technological Institute.


Papers
Journal ArticleDOI
TL;DR: It is proved that the distributional communication complexity of the predicate “disjointness” with respect to a very simple measure on inputs is Ω(n).
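For orientation, the result can be written out as follows; this is our paraphrase of the standard formulation, not a quotation from the paper. For x, y ⊆ {1, …, n}, let DISJ_n(x, y) = 1 iff x ∩ y = ∅. The paper exhibits a simple (non-product) distribution μ, concentrated on pairs with |x| = |y| ≈ n/4 and |x ∩ y| ≤ 1, under which

\[ D^{\mu}_{\varepsilon}(\mathrm{DISJ}_n) = \Omega(n) \]

for a sufficiently small constant ε > 0, where D^μ_ε denotes the ε-error distributional communication complexity. Via Yao's minimax principle, this gives an Ω(n) lower bound on the randomized communication complexity of disjointness.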

640 citations

Journal ArticleDOI
01 Aug 1997
TL;DR: It is shown that the weaker class of AC0-natural proofs, which is sufficient to prove the parity lower bounds of Furst, Saxe, and Sipser, of Yao, and of Hastad, is inherently incapable of proving the bounds of Razborov and Smolensky.
Abstract: We introduce the notion of natural proof. We argue that the known proofs of lower bounds on the complexity of explicit Boolean functions in nonmonotone models fall within our definition of natural. We show, based on a hardness assumption, that natural proofs cannot prove superpolynomial lower bounds for general circuits. Without the hardness assumption, we are able to show that they cannot prove exponential lower bounds (for general circuits) for the discrete logarithm problem. We show that the weaker class of AC0-natural proofs, which is sufficient to prove the parity lower bounds of Furst, Saxe, and Sipser, of Yao, and of Hastad, is inherently incapable of proving the bounds of Razborov and Smolensky. We give some formal evidence that natural proofs are indeed natural by showing that every formal complexity measure which can prove superpolynomial lower bounds for a single function can do so for almost all functions, which is one of the two requirements of a natural proof in our sense.
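As a pointer for readers, the definition being summarized has the following shape in the Razborov–Rudich framework (notation ours, not the paper's): a combinatorial property {C_n} of n-variable Boolean functions is natural if it contains a subproperty C*_n with

\[
\begin{aligned}
&\textbf{Constructivity:} && \text{given the } 2^n\text{-bit truth table of } f_n, \text{ membership } f_n \in C^*_n \text{ is decidable in time } \mathrm{poly}(2^n);\\
&\textbf{Largeness:} && |C^*_n| \ge 2^{-O(n)}\,|F_n|, \text{ where } F_n \text{ is the set of all } n\text{-variable Boolean functions;}
\end{aligned}
\]

and it is useful against P/poly if f_n ∈ C_n implies that the circuit size of f_n is superpolynomial in n. The "almost all functions" statement at the end of the abstract is exactly the largeness requirement.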

436 citations

Proceedings ArticleDOI
23 May 1994
TL;DR: Every formal complexity measure which can prove super-polynomial lower bounds for a single function can do so for almost all functions, which is one of the key requirements of a natural proof in the authors' sense.
Abstract: We introduce the notion of natural proof. We argue that the known proofs of lower bounds on the complexity of explicit Boolean functions in nonmonotone models fall within our definition of natural. We show, based on a hardness assumption, that natural proofs cannot prove superpolynomial lower bounds for general circuits. Without the hardness assumption, we are able to show that they cannot prove exponential lower bounds (for general circuits) for the discrete logarithm problem. We show that the weaker class of AC0-natural proofs, which is sufficient to prove the parity lower bounds of Furst, Saxe, and Sipser, of Yao, and of Hastad, is inherently incapable of proving the bounds of Razborov and Smolensky. We give some formal evidence that natural proofs are indeed natural by showing that every formal complexity measure which can prove superpolynomial lower bounds for a single function can do so for almost all functions, which is one of the two requirements of a natural proof in our sense.

296 citations

Journal ArticleDOI
TL;DR: The bounded-error quantum communication complexity of the set disjointness predicate is shown to be Θ(√n) up to a logarithmic factor, and this holds both in the model with prior entanglement and without it.
Abstract: We completely (that is, up to a logarithmic factor) characterize the bounded-error quantum communication complexity of every predicate f(x, y) (x, y ⊆ {1, …, n}) depending only on |x ∩ y|. More precisely, given a predicate D on {0, 1, …, n}, we put ℓ0(D) = max{ℓ : 1 ≤ ℓ ≤ n/2 and D(ℓ) ≢ D(ℓ − 1)} and ℓ1(D) = max{n − ℓ : n/2 ≤ ℓ < n and D(ℓ) ≢ D(ℓ + 1)}. Then the bounded-error quantum communication complexity of f_D(x, y) = D(|x ∩ y|) is equal (again, up to a logarithmic factor) to √(n · ℓ0(D)) + ℓ1(D). In particular, the complexity of the set disjointness predicate is Θ(√n). This result holds both in the model with prior entanglement and in the model without it.
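As a quick sanity check of the formula (our worked instantiation, not part of the abstract): for set disjointness, D(ℓ) = 1 iff ℓ = 0. The only ℓ ∈ {1, …, n/2} with D(ℓ) ≢ D(ℓ − 1) is ℓ = 1, so ℓ0(D) = 1, and D is constant on {n/2, …, n}, so ℓ1(D) = 0. The characterization then gives

\[ \sqrt{n \cdot \ell_0(D)} + \ell_1(D) = \sqrt{n}, \]

in agreement with the Θ(√n) bound stated above.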

253 citations


Cited by
Proceedings Article
08 Dec 2014
TL;DR: The authors used a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.
Abstract: Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increased to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence which made the optimization problem easier.
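To make the encoder-decoder recipe concrete, here is a minimal PyTorch sketch of the architecture the abstract describes; the module, its names, and the toy dimensions are our illustrative reconstruction under stated assumptions, not the authors' code (the paper's actual model used 4 layers and 1000-dimensional states).

import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Sketch of the sequence-to-sequence LSTM described above.

    The encoder LSTM reads the (reversed) source sequence; its final
    hidden/cell states are the fixed-dimensional vector summarizing the
    input. The decoder LSTM starts from that state and predicts the
    target sequence one token at a time.
    """

    def __init__(self, src_vocab, tgt_vocab, emb=16, hidden=16, layers=2):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, num_layers=layers, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, num_layers=layers, batch_first=True)
        self.proj = nn.Linear(hidden, tgt_vocab)  # logits over the target vocabulary

    def forward(self, src, tgt_in):
        # Reversing the source is the trick the abstract credits with
        # introducing short-term dependencies that ease optimization.
        src = torch.flip(src, dims=[1])
        _, state = self.encoder(self.src_emb(src))  # state = (h, c): the fixed vector
        out, _ = self.decoder(self.tgt_emb(tgt_in), state)
        return self.proj(out)  # shape: (batch, tgt_len, tgt_vocab)

# Toy usage with hypothetical vocabulary sizes and random token ids:
model = Seq2Seq(src_vocab=32, tgt_vocab=32)
src = torch.randint(0, 32, (2, 7))     # batch of 2 source sentences, length 7
tgt_in = torch.randint(0, 32, (2, 5))  # teacher-forced decoder inputs
logits = model(src, tgt_in)            # -> torch.Size([2, 5, 32])

In training, the logits would feed a cross-entropy loss against the shifted target sequence; at test time, decoding proceeds token by token (the paper used beam search).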

12,299 citations

Posted Content
TL;DR: This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure, and finds that reversing the order of the words in all source sentences improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence which made the optimization problem easier.
Abstract: Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increased to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence which made the optimization problem easier.

11,936 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, with a look at newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.
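As one concrete instance of the "major results" such a survey treats (a classical Erdős–Rényi theorem, stated here by us for orientation rather than quoted from the talk): in the binomial random graph G(n, p), connectivity has a sharp threshold at p = ln n / n. More precisely,

\[ p = \frac{\ln n + c}{n} \;\implies\; \Pr[G(n, p) \text{ is connected}] \to e^{-e^{-c}} \quad (n \to \infty). \]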

7,116 citations

MonographDOI
20 Apr 2009
TL;DR: This beginning graduate textbook describes both recent achievements and classical results of computational complexity theory and can be used as a reference for self-study for anyone interested in complexity.
Abstract: This beginning graduate textbook describes both recent achievements and classical results of computational complexity theory. Requiring essentially no background apart from mathematical maturity, the book can be used as a reference for self-study for anyone interested in complexity, including physicists, mathematicians, and other scientists, as well as a textbook for a variety of courses and seminars. More than 300 exercises are included with a selected hint set.

2,965 citations

Journal ArticleDOI
TL;DR: Expander graphs were first defined by Bassalygo and Pinsker, and their existence was first proved by Pinsker in the early 1970s, as discussed by the authors.
Abstract: A major consideration we had in writing this survey was to make it accessible to mathematicians as well as to computer scientists, since expander graphs, the protagonists of our story, come up in numerous and often surprising contexts in both fields. But, perhaps, we should start with a few words about graphs in general. They are, of course, one of the prime objects of study in Discrete Mathematics. However, graphs are among the most ubiquitous models of both natural and human-made structures. In the natural and social sciences they model relations among species, societies, companies, etc. In computer science, they represent networks of communication, data organization, computational devices as well as the flow of computation, and more. In mathematics, Cayley graphs are useful in Group Theory. Graphs carry a natural metric and are therefore useful in Geometry, and though they are "just" one-dimensional complexes, they are useful in certain parts of Topology, e.g. Knot Theory. In statistical physics, graphs can represent local connections between interacting parts of a system, as well as the dynamics of a physical process on such systems. The study of these models calls, then, for the comprehension of the significant structural properties of the relevant graphs. But are there nontrivial structural properties which are universally important? Expansion of a graph requires that it is simultaneously sparse and highly connected. Expander graphs were first defined by Bassalygo and Pinsker, and their existence first proved by Pinsker in the early '70s. The property of being an expander seems significant in many of these mathematical, computational and physical contexts. It is not surprising that expanders are useful in the design and analysis of communication networks. What is less obvious is that expanders have surprising utility in other computational settings such as in the theory of error-correcting codes and the theory of pseudorandomness. In mathematics, we will encounter, e.g., their role in the study of metric embeddings, and in particular in work around the Baum-Connes Conjecture. Expansion is closely related to the convergence rates of Markov Chains, and so expanders play a key role in the study of Monte-Carlo algorithms in statistical mechanics and in a host of practical computational applications. The list of such interesting and fruitful connections goes on and on, with so many applications that we will not even be able to mention them all.
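For the record, the standard quantitative definition behind "simultaneously sparse and highly connected" (our summary of textbook material, not text from the survey): for a d-regular graph G = (V, E) on n vertices, the edge expansion is

\[ h(G) = \min_{S \subseteq V,\; |S| \le n/2} \frac{|E(S, V \setminus S)|}{|S|}, \]

and the discrete Cheeger inequality relates it to the spectral gap of the adjacency matrix, whose eigenvalues are d = λ1 ≥ λ2 ≥ … ≥ λn:

\[ \frac{d - \lambda_2}{2} \;\le\; h(G) \;\le\; \sqrt{2d(d - \lambda_2)}. \]

This spectral connection is also what makes the survey's remark about Markov chain convergence rates precise: a large spectral gap forces rapid mixing.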

2,037 citations