Institution

Facebook

Company · Tel Aviv, Israel
About: Facebook is a company based in Tel Aviv, Israel. It is known for research contributions in the topics: Computer science & Artificial neural network. The organization has 7856 authors who have published 10906 publications receiving 570123 citations. The organization is also known as facebook.com & FB.


Papers
Patent
12 Sep 2006
TL;DR: In this patent, content maintained in an online social network or other online community is tracked for changes and updates; the tracked content may include user profiles, digital photos, digital audio and video files, testimonials, and identification of users who are friends.
Abstract: Content maintained in an online social network or other online communities is tracked for changes and updates. The content may include user profiles, digital photos, digital audio and video files, testimonials, and identification of users who are friends. When such change or update occurs, users of the online social network or online community are notified according to various criteria that they have set. The notification may be provided by e-mail, an RSS feed, or a web page when accessed. With this feature, users can browse through content of other users with efficiency.

117 citations
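As a rough illustration of the mechanism the abstract describes (tracking content for changes and notifying subscribed users through their chosen channel), here is a minimal Python sketch; every class, field, and function name in it is hypothetical and not taken from the patent.

```python
# Hypothetical sketch of the notification flow described in the patent:
# content items are tracked for changes, and subscribed users are notified
# through their chosen channel (e-mail, RSS, or on next page access).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Subscription:
    user_id: str
    content_types: set[str]      # e.g. {"profile", "photo", "testimonial"}
    channel: str                 # "email", "rss", or "web"

@dataclass
class UpdateTracker:
    subscriptions: list[Subscription] = field(default_factory=list)
    notifiers: dict[str, Callable[[str, str], None]] = field(default_factory=dict)

    def record_update(self, owner_id: str, content_type: str) -> None:
        """Called whenever tracked content (profile, photo, etc.) changes."""
        for sub in self.subscriptions:
            if content_type in sub.content_types and sub.user_id != owner_id:
                notify = self.notifiers[sub.channel]
                notify(sub.user_id, f"{owner_id} updated their {content_type}")

tracker = UpdateTracker()
tracker.notifiers["email"] = lambda user, msg: print(f"email to {user}: {msg}")
tracker.subscriptions.append(Subscription("alice", {"photo", "profile"}, "email"))
tracker.record_update(owner_id="bob", content_type="photo")
```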

Proceedings ArticleDOI
01 Apr 2021
TL;DR: The authors show that large-scale models can learn the blended skills good conversation requires (engaging talking points, knowledge, empathy, personality, and a consistent persona) when given appropriate training data and an appropriate generation strategy. They build variants of these recipes with 90M, 2.7B, and 9.4B parameter models, and make their models and code publicly available.
Abstract: Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we highlight other ingredients. Good conversation requires blended skills: providing engaging talking points, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models outperform existing approaches in multi-turn dialogue on engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.

117 citations
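Since the abstract states the models and code are publicly available, a minimal generation sketch is shown below, assuming the released BlenderBot weights are accessed through the Hugging Face transformers library; the checkpoint name (a distilled variant) and the generation settings are assumptions, not details taken from the abstract.

```python
# Minimal generation sketch, assuming the released BlenderBot weights are
# used through Hugging Face transformers (checkpoint id is an assumption).
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

name = "facebook/blenderbot-400M-distill"          # assumed checkpoint id
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

utterance = "My favourite hobby is hiking in the mountains."
inputs = tokenizer([utterance], return_tensors="pt")
# Beam search with a minimum length is one example of the generation-strategy
# choices the paper highlights as important for response quality.
reply_ids = model.generate(**inputs, num_beams=10, min_length=20, max_length=60)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```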

Journal ArticleDOI
TL;DR: In this paper, heuristic clustering is used to group Bitcoin wallets based on evidence of shared authority, and re-identification attacks (i.e., empirical purchasing of goods and services) are then used to classify the operators of those clusters.
Abstract: Bitcoin is a purely online virtual currency, unbacked by either physical commodities or sovereign obligation; instead, it relies on a combination of cryptographic protection and a peer-to-peer protocol for witnessing settlements. Consequently, Bitcoin has the unintuitive property that while the ownership of money is implicitly anonymous, its flow is globally visible. In this paper we explore this unique characteristic further, using heuristic clustering to group Bitcoin wallets based on evidence of shared authority, and then using re-identification attacks (i.e., empirical purchasing of goods and services) to classify the operators of those clusters. From this analysis, we consider the challenges for those seeking to use Bitcoin for criminal or fraudulent purposes at scale.

117 citations
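One common instance of such a shared-authority heuristic is the multi-input rule: addresses that fund inputs of the same transaction are assumed to be controlled by one entity. The sketch below implements that rule with union-find as an illustration; it is a simplification, not the authors' exact pipeline.

```python
# Sketch of a shared-authority clustering heuristic in the spirit of the
# paper: addresses that co-fund inputs of the same transaction are assumed
# to be controlled by the same entity and are merged with union-find.

def cluster_addresses(transactions: list[list[str]]) -> dict[str, str]:
    """transactions: for each tx, the list of its input addresses."""
    parent: dict[str, str] = {}

    def find(a: str) -> str:
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a: str, b: str) -> None:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for inputs in transactions:
        if not inputs:
            continue
        find(inputs[0])                      # register single-input txs too
        for addr in inputs[1:]:
            union(inputs[0], addr)           # all inputs share one owner
    return {a: find(a) for a in parent}

# Example: tx1 spends from A and B, tx2 from B and C -> A, B, C form one cluster.
print(cluster_addresses([["A", "B"], ["B", "C"], ["D"]]))
```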

Proceedings ArticleDOI
18 Jun 2018
TL;DR: This paper considers the problem of inferring image labels when only a few annotated examples are available at training time, a setup referred to as low-shot learning. A standard approach is to retrain the last few layers of a convolutional neural network learned on separate classes for which training examples are abundant.
Abstract: This paper considers the problem of inferring image labels from images when only a few annotated examples are available at training time. This setup is often referred to as low-shot learning, where a standard approach is to retrain the last few layers of a convolutional neural network learned on separate classes for which training examples are abundant. We consider a semi-supervised setting based on a large collection of images to support label propagation. This is possible by leveraging the recent advances on large-scale similarity graph construction. We show that despite its conceptual simplicity, scaling label propagation up to hundred millions of images leads to state of the art accuracy in the low-shot learning regime.

117 citations
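A minimal sketch of label propagation over a similarity graph, the core mechanism the abstract describes, is given below; the toy graph, normalisation, and iteration scheme are standard choices assumed for illustration, not the exact large-scale variant used in the paper.

```python
# Minimal label-propagation sketch over a similarity graph: seed labels are
# diffused along edges while labeled nodes stay clamped to their labels.
import numpy as np

def propagate(W: np.ndarray, Y: np.ndarray, labeled: np.ndarray,
              alpha: float = 0.9, iters: int = 50) -> np.ndarray:
    """W: (n, n) symmetric similarity graph, Y: (n, c) one-hot seed labels,
    labeled: boolean mask of seed nodes."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    S = W / np.sqrt(np.outer(d, d))           # symmetric normalisation
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y   # diffuse, keep pull toward seeds
        F[labeled] = Y[labeled]               # clamp labeled nodes
    return F.argmax(axis=1)

# Toy graph: nodes 0-1 strongly connected, 2-3 strongly connected;
# node 0 is labeled class 0 and node 3 is labeled class 1.
W = np.array([[0, 1, 0, 0], [1, 0, 0.1, 0], [0, 0.1, 0, 1], [0, 0, 1, 0]], float)
Y = np.zeros((4, 2)); Y[0, 0] = 1; Y[3, 1] = 1
labeled = np.array([True, False, False, True])
print(propagate(W, Y, labeled))               # expected: [0 0 1 1]
```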

Patent
14 Jun 2004
TL;DR: In this patent, relevant content is prepared and selected for delivery to a member of a network based on the prior online activities of other members of the network and on the closeness of the member's relationship with those members.
Abstract: Relevant content is prepared and selected for delivery to a member of a network based, in part, on prior online activities of the other members of the network, and the closeness of the member's relationship with the other members of the network. The relevant content may be an online ad, and is selected from a number of candidate online ads based on click-through rates of groups that are predefined with respect to the member and with respect to certain attributes. An online ad's revenue-generating potential may be considered in the selection process.

117 citations
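As an illustration of the selection logic described (scoring candidate ads from group click-through rates, relationship closeness, and revenue potential), here is a small Python sketch; the names and the exact scoring formula are assumptions made for the example, not details from the patent.

```python
# Illustrative sketch: each candidate ad is scored from the click-through
# rate of its predefined group, weighted by how close the viewing member is
# to that group, times the ad's revenue potential per click.
from dataclasses import dataclass

@dataclass
class CandidateAd:
    ad_id: str
    group_ctr: float        # click-through rate of the ad's predefined group
    closeness: float        # 0..1 closeness of the member to that group
    revenue_per_click: float

def select_ad(candidates: list[CandidateAd]) -> CandidateAd:
    # Expected value per impression: closeness-weighted CTR times revenue.
    return max(candidates,
               key=lambda ad: ad.group_ctr * ad.closeness * ad.revenue_per_click)

ads = [
    CandidateAd("running-shoes", group_ctr=0.04, closeness=0.9, revenue_per_click=0.50),
    CandidateAd("concert-tickets", group_ctr=0.06, closeness=0.3, revenue_per_click=0.80),
]
print(select_ad(ads).ad_id)     # running-shoes (0.018 vs 0.0144)
```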


Authors

Showing all 7875 results

Name | H-index | Papers | Citations
Yoshua Bengio | 202 | 1033 | 420313
Xiang Zhang | 154 | 1733 | 117576
Jitendra Malik | 151 | 493 | 165087
Trevor Darrell | 148 | 678 | 181113
Christopher D. Manning | 138 | 499 | 147595
Robert W. Heath | 128 | 1049 | 73171
Pieter Abbeel | 126 | 589 | 70911
Yann LeCun | 121 | 369 | 171211
Li Fei-Fei | 120 | 420 | 145574
Jon Kleinberg | 117 | 444 | 87865
Sergey Levine | 115 | 652 | 59769
Richard Szeliski | 113 | 359 | 72019
Sanjeev Kumar | 113 | 1325 | 54386
Bruce Neal | 108 | 561 | 87213
Larry S. Davis | 107 | 693 | 49714

Network Information
Related Institutions (5)
Google: 39.8K papers, 2.1M citations (98% related)
Microsoft: 86.9K papers, 4.1M citations (96% related)
Adobe Systems: 8K papers, 214.7K citations (94% related)
Carnegie Mellon University: 104.3K papers, 5.9M citations (91% related)

Performance Metrics
No. of papers from the Institution in previous years

Year | Papers
2024 | 1
2022 | 37
2021 | 1,738
2020 | 2,017
2019 | 1,607
2018 | 1,229