Institution

Facebook

Company · Tel Aviv, Israel
About: Facebook is a company based in Tel Aviv, Israel. It is known for research contributions in the topics of Computer science & Artificial neural network. The organization has 7,856 authors who have published 10,906 publications receiving 570,123 citations. The organization is also known as facebook.com & FB.


Papers
Patent
01 Mar 2010
TL;DR: In this article, a social CAPTCHA is presented to authenticate a member of a social network; the CAPTCHA includes one or more challenge questions based on information available in the social network, such as the user's activities and/or connections.
Abstract: A social CAPTCHA is presented to authenticate a member of the social network. The social CAPTCHA includes one or more challenge questions based on information available in the social network, such as the user's activities and/or connections in the social network. The social information selected for the social CAPTCHA may be determined based on affinity scores associated with the member's connections, so that the challenge question relates to information that the user is more likely to be familiar with. A degree of difficulty of challenge questions may be determined and used for selecting the CAPTCHA based on a degree of suspicion.

146 citations
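
As a rough illustration of the selection mechanism this abstract describes (affinity-weighted candidates, difficulty matched to a degree of suspicion), here is a minimal sketch. The data structures (connections as dicts with name/affinity/photos fields, a photo-tagging question format) are assumptions for illustration, not the patent's actual implementation.

```python
def select_social_captcha(connections, suspicion_score):
    """Pick a photo-tagging challenge about one of the user's connections.

    Illustrative sketch only: each connection is assumed to be a dict like
    {"name": ..., "affinity": 0..1, "photos": [...]}.
    """
    # Prefer connections the user interacts with most (high affinity), so the
    # answer is something the legitimate user is likely to know.
    candidates = sorted(connections, key=lambda c: c["affinity"], reverse=True)[:20]

    # A more suspicious request warrants a harder question.
    target_difficulty = min(1.0, max(0.0, suspicion_score))

    questions = [
        {
            "prompt": "Who is tagged in this photo?",
            "image": photo,
            "answer": friend["name"],
            # Low-affinity connections make for harder questions.
            "difficulty": 1.0 - friend["affinity"],
        }
        for friend in candidates
        for photo in friend["photos"]
    ]
    if not questions:
        return None
    # Choose the question whose difficulty best matches the degree of suspicion.
    return min(questions, key=lambda q: abs(q["difficulty"] - target_difficulty))
```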

Posted Content
TL;DR: Empirical study on standard machine translation and text summarization benchmarks shows that noisy self-training is able to effectively utilize unlabeled data and improve the performance of the supervised baseline by a large margin.
Abstract: Self-training is one of the earliest and simplest semi-supervised methods. The key idea is to augment the original labeled dataset with unlabeled data paired with the model's prediction (i.e. the pseudo-parallel data). While self-training has been extensively studied on classification problems, in complex sequence generation tasks (e.g. machine translation) it is still unclear how self-training works due to the compositionality of the target space. In this work, we first empirically show that self-training is able to decently improve the supervised baseline on neural sequence generation tasks. Through careful examination of the performance gains, we find that the perturbation on the hidden states (i.e. dropout) is critical for self-training to benefit from the pseudo-parallel data, which acts as a regularizer and forces the model to yield close predictions for similar unlabeled inputs. Such effect helps the model correct some incorrect predictions on unlabeled data. To further encourage this mechanism, we propose to inject noise to the input space, resulting in a "noisy" version of self-training. Empirical study on standard machine translation and text summarization benchmarks shows that noisy self-training is able to effectively utilize unlabeled data and improve the performance of the supervised baseline by a large margin.

146 citations
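
The self-training loop described above is simple enough to sketch end to end. The snippet below is a schematic rendering under assumed interfaces: `model.fit` / `model.predict` and `noise_fn` (e.g. word dropout or local shuffling on the source side) stand in for whatever toolkit is actually used.

```python
def noisy_self_train(model, labeled, unlabeled, rounds=3, noise_fn=None):
    """Noisy self-training for sequence generation (illustrative sketch).

    labeled:   list of (source, target) pairs
    unlabeled: list of source sequences without targets
    """
    model.fit(labeled)                                    # 1. train on labeled data
    for _ in range(rounds):
        # 2. pseudo-label the unlabeled sources with the current model
        pseudo = [(src, model.predict(src)) for src in unlabeled]
        # 3. perturb the inputs ("noisy" self-training); dropout stays on during training
        if noise_fn is not None:
            pseudo = [(noise_fn(src), tgt) for src, tgt in pseudo]
        # 4. retrain on the pseudo-parallel data, then fine-tune on the real labels
        model.fit(pseudo)
        model.fit(labeled)
    return model
```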

Proceedings ArticleDOI
15 Jun 2019
TL;DR: This work introduces the first completely unsupervised correspondence learning approach for deformable 3D shapes, exploiting the observation that natural deformations approximately preserve the metric structure of the surface, which yields a natural criterion to drive the learning process toward distortion-minimizing predictions.
Abstract: We introduce the first completely unsupervised correspondence learning approach for deformable 3D shapes. Key to our model is the understanding that natural deformations (such as changes in pose) approximately preserve the metric structure of the surface, yielding a natural criterion to drive the learning process toward distortion-minimizing predictions. On this basis, we overcome the need for annotated data and replace it by a purely geometric criterion. The resulting learning model is class-agnostic, and is able to leverage any type of deformable geometric data for the training phase. In contrast to existing supervised approaches which specialize on the class seen at training time, we demonstrate stronger generalization as well as applicability to a variety of challenging settings. We showcase our method on a wide selection of correspondence benchmarks, where we outperform other methods in terms of accuracy, generalization, and efficiency.

146 citations
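
The purely geometric criterion in this abstract, that a good correspondence should preserve pairwise geodesic distances, can be written as a simple loss. The sketch below is a schematic rendering of that distortion idea, not the paper's exact formulation; it assumes a soft correspondence matrix and precomputed geodesic distance matrices.

```python
import torch

def metric_distortion_loss(P, D_x, D_y):
    """Unsupervised distortion criterion for a soft shape correspondence.

    P:   (n_x, n_y) soft correspondence from shape X to shape Y (rows sum to 1)
    D_x: (n_x, n_x) geodesic distances on X
    D_y: (n_y, n_y) geodesic distances on Y
    """
    # Pull the target metric back through the soft map: expected distance on Y
    # between the images of every pair of points on X.
    D_y_pulled = P @ D_y @ P.T            # shape (n_x, n_x)
    # Penalize deviation from the source metric (metric distortion).
    return ((D_y_pulled - D_x) ** 2).mean()
```

Minimizing this quantity needs no annotated correspondences, which is what makes the training signal class-agnostic.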

Patent
Ding Zhou, Pierre Moreels
29 Oct 2010
TL;DR: In this paper, user profile information for a user of a social networking system is inferred based on the user profiles of the user's connections in the social networking system; the inferred attributes include age, gender, education, affiliations, location, and the like.
Abstract: User profile information for a user of a social networking system is inferred based on information about the user profiles of the user's connections in the social networking system. The inferred user profile attributes may include age, gender, education, affiliations, location, and the like. To infer a value of a user profile attribute, the system may determine an aggregate value based on the attributes of the user's connections. A confidence score may also be associated with the inferred attribute value. The set of connections analyzed to infer a user profile attribute may depend on the attribute, the types of connections, and the interactions between the user and the connections. The inferred attribute values may be used to update the user's profile and to determine information relevant to the user to be presented to the user (e.g., targeting advertisements to the user based on the user's inferred attributes).

145 citations
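
A minimal sketch of the aggregation-with-confidence idea described above, using a simple majority vote over the connections' declared values. The dict-based input and the confidence definition are assumptions for illustration; the patent's weighting by connection type and interaction strength is omitted.

```python
from collections import Counter

def infer_attribute(connections, attribute):
    """Infer a profile attribute (e.g. "location") from a user's connections.

    connections: list of dicts holding each connection's declared attributes.
    Returns (inferred_value, confidence) or (None, 0.0) if no data is available.
    """
    values = [c[attribute] for c in connections if c.get(attribute) is not None]
    if not values:
        return None, 0.0
    value, count = Counter(values).most_common(1)[0]
    confidence = count / len(values)      # fraction of connections that agree
    return value, confidence

# Example: two of three connections list "Tel Aviv" as their location,
# so the inferred value is "Tel Aviv" with confidence ~0.67.
print(infer_attribute(
    [{"location": "Tel Aviv"}, {"location": "Tel Aviv"}, {"location": "Haifa"}],
    "location"))
```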


Authors

Showing all 7875 results

Name | H-index | Papers | Citations
Yoshua Bengio | 202 | 1,033 | 420,313
Xiang Zhang | 154 | 1,733 | 117,576
Jitendra Malik | 151 | 493 | 165,087
Trevor Darrell | 148 | 678 | 181,113
Christopher D. Manning | 138 | 499 | 147,595
Robert W. Heath | 128 | 1,049 | 73,171
Pieter Abbeel | 126 | 589 | 70,911
Yann LeCun | 121 | 369 | 171,211
Li Fei-Fei | 120 | 420 | 145,574
Jon Kleinberg | 117 | 444 | 87,865
Sergey Levine | 115 | 652 | 59,769
Richard Szeliski | 113 | 359 | 72,019
Sanjeev Kumar | 113 | 1,325 | 54,386
Bruce Neal | 108 | 561 | 87,213
Larry S. Davis | 107 | 693 | 49,714
Network Information
Related Institutions (5)
Google
39.8K papers, 2.1M citations

98% related

Microsoft
86.9K papers, 4.1M citations

96% related

Adobe Systems
8K papers, 214.7K citations

94% related

Carnegie Mellon University
104.3K papers, 5.9M citations

91% related

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2024 | 1
2022 | 37
2021 | 1,738
2020 | 2,017
2019 | 1,607
2018 | 1,229