
Facebook

Company · Menlo Park, California, United States
About: Facebook is a company headquartered in Menlo Park, California. It is known for research contributions in the topics of Artificial neural network and Language model. The organization has 7,856 authors who have published 10,906 publications receiving 570,123 citations. The organization is also known as facebook.com and FB.


Papers
Proceedings Article
Doug Beaver, Sanjeev Kumar, Harry C. Li, Jason Sobel, Peter Vajgel
04 Oct 2010
TL;DR: This paper describes Haystack, an object storage system optimized for Facebook's Photos application; it provides a less expensive and higher-performing solution than the previous approach, which served photos from network-attached storage appliances over NFS.
Abstract: This paper describes Haystack, an object storage system optimized for Facebook's Photos application. Facebook currently stores over 260 billion images, which translates to over 20 petabytes of data. Users upload one billion new photos (∼60 terabytes) each week, and Facebook serves over one million images per second at peak. Haystack provides a less expensive and higher-performing solution than our previous approach, which leveraged network-attached storage appliances over NFS. Our key observation is that this traditional design incurs an excessive number of disk operations because of metadata lookups. We carefully reduce this per-photo metadata so that Haystack storage machines can perform all metadata lookups in main memory. This choice conserves disk operations for reading actual data and thus increases overall throughput.

473 citations
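The design above turns a photo read into a single disk operation by holding the entire photo index in RAM. A minimal sketch of that idea, with illustrative class and method names (not Facebook's actual API):

```python
class HaystackStore:
    """Toy log-structured photo store: all metadata lives in memory."""

    def __init__(self, path):
        self.path = path
        # photo_key -> (offset, size); kept entirely in main memory so a
        # read never touches disk just to locate a photo.
        self.index = {}

    def put(self, key, data):
        # Append the photo to the store file and remember where it landed.
        with open(self.path, "ab") as f:
            offset = f.tell()
            f.write(data)
        self.index[key] = (offset, len(data))

    def get(self, key):
        # One in-memory lookup, then exactly one disk seek + read.
        offset, size = self.index[key]
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(size)
```

The contrast with the NFS approach is that a conventional filesystem needs several disk operations per read just to resolve directory and inode metadata before it can touch the photo bytes.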

Proceedings Article
04 Apr 2009
TL;DR: This work uses server log data from approximately 140,000 newcomers in Facebook to predict long-term sharing based on the experiences the newcomers have in their first two weeks, and finds support for social learning: newcomers who see their friends contributing go on to share more content themselves.
Abstract: Social networking sites (SNS) are only as good as the content their users share. Therefore, designers of SNS seek to improve the overall user experience by encouraging members to contribute more content. However, user motivations for contribution in SNS are not well understood. This is particularly true for newcomers, who may not recognize the value of contribution. Using server log data from approximately 140,000 newcomers in Facebook, we predict long-term sharing based on the experiences the newcomers have in their first two weeks. We test four mechanisms: social learning, singling out, feedback, and distribution. In particular, we find support for social learning: newcomers who see their friends contributing go on to share more content themselves. For newcomers who are initially inclined to contribute, receiving feedback and having a wide audience are also predictors of increased sharing. On the other hand, singling out appears to affect only those newcomers who are not initially inclined to share. The paper concludes with design implications for motivating newcomer sharing in online communities.

469 citations
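Methodologically, the study is a regression from early-experience features (one per mechanism tested) to later sharing. A hypothetical sketch with synthetic stand-in data, since the paper publishes no code:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000  # stand-in for the ~140,000 newcomers in the paper

# One synthetic feature per mechanism tested in the paper.
X = np.column_stack([
    rng.poisson(5, n),   # social learning: friend content seen early on
    rng.poisson(1, n),   # singling out: directed communication received
    rng.poisson(2, n),   # feedback: comments received on own content
    rng.poisson(20, n),  # distribution: audience size (friend count)
])
y = rng.poisson(3, n)    # long-term sharing (placeholder outcome)

model = LinearRegression().fit(X, y)
print(dict(zip(["social_learning", "singling_out", "feedback", "distribution"],
               model.coef_.round(3))))
```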

Proceedings Article
01 Oct 2017
TL;DR: In this article, the authors propose a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer.
Abstract: Existing methods for visual reasoning attempt to directly map inputs to outputs using black-box architectures without explicitly modeling the underlying reasoning processes. As a result, these black-box models often learn to exploit biases in the data rather than learning to perform visual reasoning. Inspired by module networks, this paper proposes a model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer. Both the program generator and the execution engine are implemented by neural networks, and are trained using a combination of backpropagation and REINFORCE. Using the CLEVR benchmark for visual reasoning, we show that our model significantly outperforms strong baselines and generalizes better in a variety of settings.

469 citations
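To make the two-stage architecture concrete, here is a minimal sketch (not the authors' code): an execution engine that composes one small neural module per program token. In the paper, a seq2seq program generator predicts the token sequence and, because sampling discrete programs is non-differentiable, is trained with REINFORCE; the hand-written program below stands in for its output.

```python
import torch
import torch.nn as nn

MODULES = ["filter_red", "filter_cube", "count"]  # toy module inventory

class ExecutionEngine(nn.Module):
    def __init__(self, dim=64, n_answers=10):
        super().__init__()
        # One small residual block per module the programs may reference.
        self.blocks = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
            for name in MODULES
        })
        self.classifier = nn.Linear(dim, n_answers)

    def forward(self, image_feats, program):
        h = image_feats
        for name in program:  # compose modules in the predicted order
            h = h + self.blocks[name](h)
        return self.classifier(h)  # answer logits

engine = ExecutionEngine()
feats = torch.randn(1, 64)                       # stand-in image features
logits = engine(feats, ["filter_red", "count"])  # "how many red things?"
```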

Proceedings Article
20 Apr 2018
TL;DR: The authors proposed two model variants, a neural and a phrase-based model, which leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation.
Abstract: Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. On the widely used WMT’14 English-French and WMT’16 German-English benchmarks, our models respectively obtain 28.1 and 25.2 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semi-supervised and supervised approaches leveraging the paucity of available bitexts. Our code for NMT and PBSMT is publicly available.

461 citations
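Setting initialization aside, the recipe the abstract names can be shown as a schematic, runnable toy; the corpora and the translate stub below are placeholders, not the released NMT/PBSMT code:

```python
import random
random.seed(0)

mono_src = ["the cat sat down", "a dog ran away"]    # monolingual English
mono_tgt = ["le chat s'assit", "un chien s'enfuit"]  # monolingual French

def noise(sent):
    # Corruption for the denoising objective: drop and shuffle words.
    words = [w for w in sent.split() if random.random() > 0.1]
    random.shuffle(words)
    return " ".join(words)

def translate(sent, to):
    # Stub: a real system decodes with the current model, which improves
    # each round, so the synthetic pairs get cleaner over iterations.
    return "<%s> %s" % (to, sent)

pairs = []
for s, t in zip(mono_src, mono_tgt):
    pairs.append((noise(s), s))             # denoising autoencoding (src)
    pairs.append((noise(t), t))             # denoising autoencoding (tgt)
    pairs.append((translate(s, "tgt"), s))  # back-translation: learn tgt->src
    pairs.append((translate(t, "src"), t))  # back-translation: learn src->tgt

# A real implementation trains the shared encoder-decoder on these
# (input, target) pairs, regenerates the back-translations with the
# improved model, and repeats.
print(pairs[2])
```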

Posted Content
TL;DR: Domain Transfer Network (DTN) as discussed by the authors employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves.
Abstract: We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domain, would remain unchanged. Other than the function f, the training data is unsupervised and consists of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.

460 citations
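The compound loss is compact enough to write down. Below is a hedged sketch of the generator's objective (the weights alpha and beta and the toy linear shapes are illustrative, not the authors' code): D is a 3-way discriminator over {G(S) outputs, G(T) outputs, real T samples}, and G pushes both kinds of outputs toward the "real target" class while keeping f's representation fixed and mapping T to itself.

```python
import torch
import torch.nn.functional as F

REAL_T = 2  # discriminator class index for real target-domain samples

def generator_loss(G, D, f, x_s, x_t, alpha=15.0, beta=15.0):
    g_s, g_t = G(x_s), G(x_t)

    # Multiclass GAN term: make D label all generated images as real T.
    want_real = torch.full((len(x_s) + len(x_t),), REAL_T, dtype=torch.long)
    l_gan = F.cross_entropy(torch.cat([D(g_s), D(g_t)]), want_real)

    # f-constancy: the transfer must leave f's representation unchanged.
    l_const = F.mse_loss(f(g_s), f(x_s))

    # Identity regularizer: G should map target samples to themselves.
    l_tid = F.mse_loss(g_t, x_t)

    return l_gan + alpha * l_const + beta * l_tid

# Toy shapes only; the paper uses conv nets on digit and face images.
G = torch.nn.Linear(64, 64)
D = torch.nn.Linear(64, 3)
f = torch.nn.Linear(64, 16)
loss = generator_loss(G, D, f, torch.randn(4, 64), torch.randn(4, 64))
```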


Authors

Showing all 7875 results

Name | H-index | Papers | Citations
Yoshua Bengio | 202 | 1,033 | 420,313
Xiang Zhang | 154 | 1,733 | 117,576
Jitendra Malik | 151 | 493 | 165,087
Trevor Darrell | 148 | 678 | 181,113
Christopher D. Manning | 138 | 499 | 147,595
Robert W. Heath | 128 | 1,049 | 73,171
Pieter Abbeel | 126 | 589 | 70,911
Yann LeCun | 121 | 369 | 171,211
Li Fei-Fei | 120 | 420 | 145,574
Jon Kleinberg | 117 | 444 | 87,865
Sergey Levine | 115 | 652 | 59,769
Richard Szeliski | 113 | 359 | 72,019
Sanjeev Kumar | 113 | 1,325 | 54,386
Bruce Neal | 108 | 561 | 87,213
Larry S. Davis | 107 | 693 | 49,714
Network Information
Related Institutions (5)
Google: 39.8K papers, 2.1M citations (98% related)
Microsoft: 86.9K papers, 4.1M citations (96% related)
Adobe Systems: 8K papers, 214.7K citations (94% related)
Carnegie Mellon University: 104.3K papers, 5.9M citations (91% related)

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2024 | 1
2022 | 37
2021 | 1,738
2020 | 2,017
2019 | 1,607
2018 | 1,229