scispace - formally typeset
Author

Jeffrey Dean

Other affiliations: University of Washington, World Health Organization, Microsoft
Bio: Jeffrey Dean is an academic researcher from Google. The author has contributed to research in the topics Deep learning and Web search query. The author has an h-index of 83, has co-authored 242 publications, and has received 179,031 citations. Previous affiliations of Jeffrey Dean include University of Washington and World Health Organization.


Papers
Patent
24 Mar 2011
TL;DR: In this paper, a computer-implemented method of providing invitations to a shared communication space, performed by a server system, includes providing the space, which contains content associated with a set of characteristics, and identifying a user in accordance with the characteristics associated with the user and the characteristics associated with the content in the shared communication space.
Abstract: A computer-implemented method of providing invitations to a shared communication space, performed by a server system, includes providing the shared communication space, which includes content associated with a set of characteristics, and identifying a user, in accordance with a set of characteristics associated with the user and the set of characteristics associated with the content in the shared communication space. The method further includes sending the identified user an invitation to participate in the shared communication space and, upon acceptance of the invitation by the user, enabling the user to access the shared communication space and to exchange information with other participants via the shared communication space.

11 citations
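The matching step the abstract describes can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation; the function and parameter names (`identify_candidates`, `min_overlap`) are hypothetical, and characteristics are modeled simply as sets of strings.

```python
# Hypothetical sketch: find users whose characteristics overlap with the
# characteristics of the content in a shared communication space.
def identify_candidates(space_characteristics, users, min_overlap=1):
    """Return users sharing at least `min_overlap` characteristics with the space."""
    matches = []
    for user, user_characteristics in users.items():
        overlap = space_characteristics & user_characteristics
        if len(overlap) >= min_overlap:
            matches.append(user)
    return matches

space = {"photography", "hiking", "travel"}
users = {
    "alice": {"hiking", "cooking"},
    "bob": {"chess"},
}
print(identify_candidates(space, users))  # ['alice']
```

In a real system the matched users would then be sent invitations; here the sketch stops at identification.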

Patent
13 Dec 2000
TL;DR: In this paper, a system modifies documents to aid users in determining which of the entries in the documents to choose, and then provides the modified document to a user, based on the determined scores.
Abstract: A system modifies documents to aid users in determining which of the entries in the documents to choose. The system identifies a document that includes one or more entries. The system determines a score for each of the entries and modifies the identified document, or entries in the identified document, based on the determined scores. The system then provides the modified document to a user.

11 citations
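The score-and-modify flow above can be sketched as follows. This is an illustrative sketch under assumed details, not the patented method: the names (`annotate_entries`, `score_fn`) and the annotation format are invented, and "modification" is reduced to appending a visible score marker to each entry.

```python
# Hypothetical sketch: score each entry in a document and modify the
# entries so a user can see which are most promising.
def annotate_entries(entries, score_fn):
    scored = [(entry, score_fn(entry)) for entry in entries]
    # "Modify" each entry based on its score, here by appending a marker.
    return [f"{entry} [score: {score:.2f}]" for entry, score in scored]

entries = ["news", "research-paper"]
print(annotate_entries(entries, lambda e: len(e) / 10))
# ['news [score: 0.40]', 'research-paper [score: 1.40]']
```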

Patent
26 Sep 2011
TL;DR: In this article, a system may determine an extent to which a document is selected when the document is included in a set of search results, generate a score for the document based, at least in part, on the extent to which the document is selected, and rank the document with regard to at least one other document based on the score.
Abstract: A system may determine an extent to which a document is selected when the document is included in a set of search results, generate a score for the document based, at least in part, on the extent to which the document is selected when the document is included in a set of search results; and rank the document with regard to at least one other document based, at least in part, on the score.

11 citations
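The ranking signal described above amounts to scoring each document by how often it is selected relative to how often it appears in result sets. A minimal sketch, with invented names (`rank_by_selection`) and data shapes, and not the patented implementation:

```python
# Hypothetical sketch: rank documents by the fraction of result-set
# impressions in which each document was selected.
def rank_by_selection(stats):
    """stats maps doc -> (selections, impressions); returns docs best-first."""
    def score(doc):
        selections, impressions = stats[doc]
        return selections / impressions if impressions else 0.0
    return sorted(stats, key=score, reverse=True)

stats = {"doc-a": (30, 100), "doc-b": (5, 100), "doc-c": (60, 100)}
print(rank_by_selection(stats))  # ['doc-c', 'doc-a', 'doc-b']
```

A production ranker would combine this signal with others, as the abstract's "based, at least in part" phrasing suggests.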

Patent
Matt Cutts1, Jeffrey Dean1, Paul Haahr1, Monika Henzinger1, Steve Lawrence1, Karl Pfleger1, Simon Tong1 
12 Oct 2010
TL;DR: A system may determine a document inception date associated with a document, generate a score for the document based, at least in part, on the document inception date, and rank the document with regard to at least one other document based on the score, as mentioned in this paper.
Abstract: A system may determine a document inception date associated with a document, generate a score for the document based, at least in part, on the document inception date, and rank the document with regard to at least one other document based, at least in part, on the score.

10 citations
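One way an inception date can feed a score is through an age-based decay. The sketch below is purely illustrative, not the patented scoring function; the decay form and the names (`freshness_score`, `half_life_days`) are assumptions.

```python
# Hypothetical sketch: derive a score from a document's inception date.
# The score is 1.0 for a brand-new document and decays toward 0 with age;
# at age == half_life_days the score is exactly 0.5.
from datetime import date

def freshness_score(inception, today, half_life_days=365):
    age = (today - inception).days
    return half_life_days / (half_life_days + age)

print(freshness_score(date(2023, 1, 1), date(2024, 1, 1)))  # 0.5
```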

01 Jan 2007
TL;DR: This work applies dynamic profile information to determine the dynamic execution frequency distributions of the classes of receivers at call sites and shows that these distributions are heavily skewed towards the most commonly occurring receiver class across several different languages.
Abstract: Dynamic binding slows down object-oriented programs. Dynamic dispatch mechanisms which work well where all receiver classes are equally likely are too pessimistic because at most call sites one receiver class predominates. We apply dynamic profile information to determine the dynamic execution frequency distributions of the classes of receivers at call sites. We show that these distributions are heavily skewed towards the most commonly occurring receiver class across several different languages. Moreover, we show that the distributions are stable across program inputs, from one version of a program to another, and even to some extent across programs that share library code. Finally, we demonstrate that significant run-time performance improvements for object-oriented programs can be gained by exploiting the information contained in dynamic receiver class distributions in a relatively simple optimizing compiler.

10 citations
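The key observation in the abstract, that one receiver class dominates most call sites, can be illustrated by profiling receiver types at a simulated call site. This sketch only mimics the flavor of the paper's profile data; the names (`receiver_distribution`) and classes are invented.

```python
# Hypothetical sketch: record the class of each receiver seen at a call
# site and measure how skewed the distribution is.
from collections import Counter

def receiver_distribution(receivers):
    """Return (fraction of most common class, per-class counts)."""
    counts = Counter(type(r).__name__ for r in receivers)
    top = counts.most_common(1)[0][1]
    return top / len(receivers), counts

class Circle: pass
class Square: pass

frac, counts = receiver_distribution([Circle()] * 9 + [Square()])
print(frac)  # 0.9 -- one receiver class dominates this call site
```

A compiler exploiting this skew can inline or specialize the common case and fall back to full dynamic dispatch for the rest.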


Cited by
Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

55,235 citations
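Part of why very small filters help, as the abstract notes, is that stacking them builds large receptive fields cheaply: two stacked 3x3 convolutions see a 5x5 region but use fewer weights than a single 5x5 convolution. A back-of-the-envelope check (not code from the paper):

```python
# Parameter count of a k x k convolution with c_in input and c_out output
# channels, ignoring biases.
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

C = 64
stacked_3x3 = 2 * conv_params(3, C, C)  # two 3x3 layers: 18 * C^2
single_5x5 = conv_params(5, C, C)       # one 5x5 layer:  25 * C^2
print(stacked_3x3, single_5x5)  # 73728 102400
```

The stacked version also interposes an extra nonlinearity, which the paper credits with making the decision function more discriminative.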

Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

49,914 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.

40,257 citations
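The multi-scale idea behind an Inception module is parallel branches with different filter sizes whose outputs are concatenated along the channel dimension. The schematic below is only channel bookkeeping, with made-up branch widths, not the GoogLeNet architecture itself:

```python
# Schematic sketch: an Inception-style module runs 1x1, 3x3, and 5x5
# branches (plus a pooling projection) in parallel; concatenating their
# outputs sums their channel counts.
def inception_output_channels(branches):
    return sum(branches.values())

branches = {"1x1": 64, "3x3": 128, "5x5": 32, "pool_proj": 32}
print(inception_output_channels(branches))  # 256
```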

Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Proceedings Article
Sergey Ioffe1, Christian Szegedy1
06 Jul 2015
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.

30,843 citations
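The core transform the abstract describes, normalizing each layer input over the mini-batch and then applying a learned scale and shift, can be written in a few lines of NumPy. A minimal sketch of the forward pass only (no running statistics, no backward pass), with `gamma` and `beta` fixed for illustration rather than learned:

```python
# Minimal sketch of the batch-normalization transform: normalize each
# feature over the mini-batch, then scale by gamma and shift by beta.
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    mean = x.mean(axis=0)  # per-feature mean over the mini-batch
    var = x.var(axis=0)    # per-feature variance over the mini-batch
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = batch_norm(x, gamma=1.0, beta=0.0)
print(y.mean(axis=0))  # ~[0. 0.] -- each feature is centered
```

With `gamma=1` and `beta=0` each feature comes out with (approximately) zero mean and unit variance; in training, `gamma` and `beta` are learned so the network can recover the original activations if that is optimal.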