Institution

Facebook

Company · Tel Aviv, Israel
About: Facebook is a company based in Tel Aviv, Israel. It is known for research contributions in the topics of Computer science and Artificial neural network. The organization has 7856 authors who have published 10906 publications receiving 570123 citations. The organization is also known as: facebook.com & FB.


Papers
Patent
13 Sep 2012
TL;DR: In this paper, a geo-social networking system determines a user's current location, calculates a novelty score for the location representing the user's degree of familiarity, and surfaces content within a geographic and temporal radius based on the novelty score.
Abstract: In one embodiment, a geo-social networking system determines a user's current location, calculates a novelty score for the location representing the user's degree of familiarity, and surfaces content within a geographic and temporal radius based on the novelty score for display to the user.
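The novelty-score idea above can be sketched minimally: the fewer times a user has visited a location, the higher its novelty, and a higher novelty widens the radius within which content is surfaced. The scoring formula and all names below are illustrative assumptions, not the patent's actual claims.

```python
def novelty_score(visit_count: int) -> float:
    """1.0 for a never-visited location, decaying toward 0 with familiarity."""
    return 1.0 / (1.0 + visit_count)

def surface_radius_km(visit_count: int, base_km: float = 1.0, max_km: float = 10.0) -> float:
    """Scale the geographic radius by novelty: unfamiliar places get a wider net."""
    return base_km + (max_km - base_km) * novelty_score(visit_count)
```

Under these assumptions, a first-time visit yields the maximum radius, while a frequently visited location collapses toward the base radius.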

127 citations

Patent
23 Jun 2015
TL;DR: In this paper, a method is proposed for generating tag suggestions for a person portrayed in an image: a social graph is accessed, a social-graph affinity is determined for a first set of users, and facial-recognition scores for that set are computed from each user's affinity and facial representation.
Abstract: In one embodiment, a method includes accessing an image portraying at least a first person, accessing a social graph, determining a social-graph affinity for a first set of users, determining facial-recognition scores for the first set of users based on the social-graph affinity for each user and a facial representation associated with each user, where the facial representation for each user is compared with the image, and generating one or more tag suggestions for the first person portrayed in the image based on the facial-recognition scores.
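The scoring step described above can be sketched as a weighted blend of a face-similarity measure and the candidate's social-graph affinity to the viewer, with the top-scoring candidates returned as tag suggestions. The weighting scheme and all names here are assumptions for illustration, not the patent's claimed method.

```python
def facial_recognition_score(face_similarity: float, affinity: float, w: float = 0.7) -> float:
    """Blend face-match quality with social-graph affinity (both in [0, 1])."""
    return w * face_similarity + (1.0 - w) * affinity

def tag_suggestions(candidates, top_k=3):
    """candidates: list of (user, face_similarity, affinity) tuples, ranked by score."""
    ranked = sorted(
        candidates,
        key=lambda c: facial_recognition_score(c[1], c[2]),
        reverse=True,
    )
    return [user for user, _, _ in ranked[:top_k]]
```

The affinity term captures the intuition that people are more likely to appear in photos with users they interact with, which disambiguates visually similar faces.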

127 citations

Patent
12 Jan 2012
TL;DR: In this article, a mobile device performs an over-the-air firmware update by writing the updated firmware to an inactive system image partition and rebooting the device; the security of the OTA update is maintained by checking a plurality of security signatures in an OTA manifest.
Abstract: In one embodiment, a mobile device performs an over-the-air firmware update by writing the updated firmware to an inactive system image partition, and rebooting the device. The security of the OTA update is maintained through checking a plurality of security signatures in an OTA manifest, and the integrity of the data is maintained by checking a hash value of the downloaded system image.
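The update flow described above can be sketched as: verify every signature listed in the OTA manifest, check the downloaded image's hash, write the image to the currently inactive partition, and only then mark that partition active for the next boot. Signature verification is stubbed out, and all names here are illustrative assumptions rather than the patent's actual interfaces.

```python
import hashlib

def apply_ota_update(image: bytes, manifest: dict, partitions: dict, verify_sig) -> bool:
    # 1. Security: every signature in the manifest must verify.
    if not all(verify_sig(sig) for sig in manifest["signatures"]):
        return False
    # 2. Integrity: the downloaded image's hash must match the manifest.
    if hashlib.sha256(image).hexdigest() != manifest["sha256"]:
        return False
    # 3. Write the new firmware to the currently inactive partition.
    inactive = "B" if partitions["active"] == "A" else "A"
    partitions[inactive] = image
    # 4. Switch the active partition; the device reboots into it.
    partitions["active"] = inactive
    return True
```

Writing to the inactive partition before switching means a failed or interrupted update leaves the device bootable from the old, untouched image.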

127 citations

Posted Content
TL;DR: In this paper, DSD, a dense-sparse-dense training flow, is proposed for regularizing deep neural networks and achieving better optimization performance, motivated by the fact that modern networks' large number of parameters makes them hard to train.
Abstract: Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance. In the first D (Dense) step, we train a dense network to learn connection weights and importance. In the S (Sparse) step, we regularize the network by pruning the unimportant connections with small weights and retraining the network given the sparsity constraint. In the final D (re-Dense) step, we increase the model capacity by removing the sparsity constraint, re-initialize the pruned parameters from zero and retrain the whole dense network. Experiments show that DSD training can improve the performance for a wide range of CNNs, RNNs and LSTMs on the tasks of image classification, caption generation and speech recognition. On ImageNet, DSD improved the Top1 accuracy of GoogLeNet by 1.1%, VGG-16 by 4.3%, ResNet-18 by 1.2% and ResNet-50 by 1.1%, respectively. On the WSJ'93 dataset, DSD improved DeepSpeech and DeepSpeech2 WER by 2.0% and 1.1%. On the Flickr-8K dataset, DSD improved the NeuralTalk BLEU score by over 1.7. DSD is easy to use in practice: at training time, DSD incurs only one extra hyper-parameter: the sparsity ratio in the S step. At testing time, DSD doesn't change the network architecture or incur any inference overhead. The consistent and significant performance gain of DSD experiments shows the inadequacy of the current training methods for finding the best local optimum, while DSD effectively achieves superior optimization performance for finding a better solution. DSD models are available to download at this https URL.
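The three DSD phases can be sketched on a single weight vector, with a stand-in `train_step` in place of real gradient updates. The sparsity ratio is the one extra hyper-parameter the abstract mentions; everything else below is an illustrative assumption, not the paper's implementation.

```python
def prune_mask(weights, sparsity):
    """Mask out the `sparsity` fraction of weights with smallest magnitude."""
    k = int(len(weights) * sparsity)
    smallest = sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k]
    mask = [True] * len(weights)
    for i in smallest:
        mask[i] = False
    return mask

def dsd_train(weights, train_step, sparsity=0.5, steps=3):
    # Dense: train the full network to learn weights and their importance.
    for _ in range(steps):
        weights = train_step(weights)
    # Sparse: prune small-magnitude weights, retrain under the sparsity constraint.
    mask = prune_mask(weights, sparsity)
    weights = [w if m else 0.0 for w, m in zip(weights, mask)]
    for _ in range(steps):
        weights = [w if m else 0.0 for w, m in zip(train_step(weights), mask)]
    # re-Dense: lift the constraint, pruned weights restart from zero, retrain all.
    for _ in range(steps):
        weights = train_step(weights)
    return weights
```

Because the re-Dense phase changes only which weights are trainable, the final network has the original architecture and no inference overhead, matching the abstract's claim.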

127 citations

Patent
Blake Groves1, W. Karl Renner1
07 Jan 2011
TL;DR: In this paper, instant messaging (IM) entities may be invited to an electronic calendar event via an instant message; selecting IM entities as invitees may include dragging and dropping names from an IM application's buddy list onto an event in an electronic calendar application, or vice versa.
Abstract: Instant messaging (IM) entities may be invited to an electronic calendar event using an instant message. Selecting the IM entities as invitees to the event may include dragging and dropping names of the IM entities from a buddy list of an IM application to an event from an electronic calendar application, or vice versa. A method of inviting an entity to a calendar event includes providing a calendar event from a calendar application and recognizing, by the calendar application, an IM entity as an invitee to the event.

127 citations


Authors

Showing all 7875 results

Name | H-index | Papers | Citations
Yoshua Bengio | 202 | 1033 | 420313
Xiang Zhang | 154 | 1733 | 117576
Jitendra Malik | 151 | 493 | 165087
Trevor Darrell | 148 | 678 | 181113
Christopher D. Manning | 138 | 499 | 147595
Robert W. Heath | 128 | 1049 | 73171
Pieter Abbeel | 126 | 589 | 70911
Yann LeCun | 121 | 369 | 171211
Li Fei-Fei | 120 | 420 | 145574
Jon Kleinberg | 117 | 444 | 87865
Sergey Levine | 115 | 652 | 59769
Richard Szeliski | 113 | 359 | 72019
Sanjeev Kumar | 113 | 1325 | 54386
Bruce Neal | 108 | 561 | 87213
Larry S. Davis | 107 | 693 | 49714
Network Information
Related Institutions (5)
Google
39.8K papers, 2.1M citations

98% related

Microsoft
86.9K papers, 4.1M citations

96% related

Adobe Systems
8K papers, 214.7K citations

94% related

Carnegie Mellon University
104.3K papers, 5.9M citations

91% related

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2024 | 1
2022 | 37
2021 | 1,738
2020 | 2,017
2019 | 1,607
2018 | 1,229