Author

Michael S. Bernstein

Bio: Michael S. Bernstein is an academic researcher from Stanford University. The author has contributed to research in topics including Crowdsourcing and Computer science. The author has an h-index of 52 and has co-authored 191 publications receiving 42,744 citations. Previous affiliations of Michael S. Bernstein include the Association for Computing Machinery and the Massachusetts Institute of Technology.


Papers
Peer Review
TL;DR: In this article, the authors use a parametric empirical Bayes model to weigh reviews of a new product against expectations of what that product's quality should be based on previous products that existed in the same market, especially when the number of reviews for that new product is low.
Abstract: User-solicited ratings systems in online marketplaces suffer from a cold-start problem: new products have very few ratings, which may capture overly pessimistic or optimistic views of the proper rating of that product. This could lead to platforms promoting new products that are actually low quality, or cause high-quality products to get discouraged and exit the market early. In this paper, we address this cold-start problem through an approach that softens early reviews, interpolating them against a background distribution of product ratings on the platform. We instantiate this approach using a parametric empirical Bayes model, which weighs reviews of a new product against expectations of what that product’s quality ought to be based on previous products that existed in the same market, especially when the number of reviews for that new product is low. We apply our method to real-world data drawn from Amazon as well as synthetically generated data. In aggregate, parametric empirical Bayes performs better on predicting future ratings, especially when working with few reviews. However, in these same low-data settings, our method performs worse on individual products that are outliers within the population.
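A minimal sketch of the interpolation idea (not the paper's exact model; the function name, the assumed per-review noise variance, and the toy data are illustrative): shrink a new product's observed mean rating toward a platform-wide prior, with the prior dominating when reviews are few.

```python
import numpy as np

def softened_rating(new_ratings, platform_product_means, noise_var=1.0):
    """Blend a new product's observed mean rating with a platform-wide prior
    (normal-normal empirical Bayes sketch). The fewer reviews the product
    has, the more weight the prior receives; noise_var is an assumed
    per-review noise variance."""
    prior_mean = np.mean(platform_product_means)
    prior_var = np.var(platform_product_means)

    n = len(new_ratings)
    if n == 0:
        return prior_mean                      # no reviews yet: use the prior
    obs_mean = np.mean(new_ratings)

    prior_precision = 1.0 / prior_var
    obs_precision = n / noise_var              # precision of the observed mean
    return (prior_precision * prior_mean + obs_precision * obs_mean) / (
        prior_precision + obs_precision)

# Two early 5-star reviews are softened toward the platform-wide average (~3.7).
print(softened_rating([5, 5], platform_product_means=[3.0, 4.5, 3.5, 4.8, 2.9]))
```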

2 citations

01 Jan 2007
TL;DR: A user-controlled central database of personal information called Databasket is proposed as a potential reinvention of web personalization, and an interface drawing on research in usable privacy and security is designed to keep the user aware and in control.
Abstract: Web applications have put significant effort into personalization services to improve the user experience. The current personalization model suffers from two major drawbacks: each site has access to a very limited subset of information about the user, and the users themselves have little or no control over what data is maintained and how it is kept private. Users thus repeat personalizing rituals across a number of sites, specifying their names, email and shipping addresses, and interests; and web sites often make poor predictions, recommending items when inappropriate or recommending the wrong items altogether. Web sites occasionally suffer privacy gaffes such as America Online's in 2006, which shared personal data on the Web and exposed users to fraud and identity theft. In this paper we propose a user-controlled central database of personal information called Databasket (Figure 1) as a potential reinvention of web personalization. We place the data locally on the user's computer, ensuring that the user him- or herself has primary control over how the data is shared. We provide a JavaScript API for web sites to query over a range of this data once the user has granted permission, thus allowing web sites to customize using broader, more up-to-date data. To control data access, we have designed an interface drawing on research in usable privacy and security to keep the user (arguably the most vulnerable link) aware and in control. In what follows, we introduce the Databasket system and its design. We focus first on related work in centralized personal data repositories for the web. Then we describe a typical Databasket use scenario, the system's user interface and developer API, and its back-end implementation. We report on a first-use study of the interface with two web sites developed using the Databasket API, and finally discuss challenges and future work for the system.
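The abstract does not spell out the API surface; the following is a hypothetical sketch (in Python rather than the JavaScript API the paper provides, with invented names) of the permission-gated query pattern it describes: a site can read only the fields the user has explicitly granted.

```python
class Databasket:
    """Hypothetical sketch of a permission-gated local store of personal data,
    in the spirit of the Databasket design; this is NOT the paper's actual API."""

    def __init__(self, data):
        self._data = data            # personal data, kept on the user's machine
        self._grants = {}            # site -> set of fields the user has shared

    def grant(self, site, fields):
        """Record that the user allowed `site` to read `fields`."""
        self._grants.setdefault(site, set()).update(fields)

    def query(self, site, fields):
        """Return only fields this site was granted; refuse everything else."""
        allowed = self._grants.get(site, set())
        denied = [f for f in fields if f not in allowed]
        if denied:
            raise PermissionError(f"{site} has no grant for: {denied}")
        return {field: self._data[field] for field in fields}

basket = Databasket({"name": "Alice", "shipping_address": "...", "interests": ["hiking"]})
basket.grant("shop.example", {"name", "shipping_address"})
print(basket.query("shop.example", ["name", "shipping_address"]))
```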

2 citations

Proceedings ArticleDOI
05 May 2012
TL;DR: This SIG will explore reviewing through a critical and constructive lens, discussing current successes and future opportunities in the CHI review process with the aim of drawing actionable conclusions about ways to improve the system.
Abstract: The HCI research community grows bigger each year, refining and expanding its boundaries in new ways. The ability to effectively review submissions is critical to the growth of CHI and related conferences. The review process is designed to produce a consistent supply of fair, high-quality reviews without overloading individual reviewers; yet, after each cycle, concerns are raised about limitations of the process. Every year, participants are left wondering why their papers were not accepted (or why they were). This SIG will explore reviewing through a critical and constructive lens, discussing current successes and future opportunities in the CHI review process. Goals will include actionable conclusions about ways to improve the system, potential alternative peer review models, and the creation of materials to educate newcomer reviewers.

2 citations

Journal ArticleDOI
27 Aug 2022
TL;DR: This article measures the prevalence of anti-social behavior on Reddit as the proportion of unmoderated comments in the platform's 97 most popular communities that violate eight widely accepted platform norms, finding that 6.25% of all comments in 2016, and 4.28% in 2020, violated these norms.
Abstract: With increasing attention to online anti-social behaviors such as personal attacks and bigotry, it is critical to have an accurate accounting of how widespread anti-social behaviors are. In this paper, we empirically measure the prevalence of anti-social behavior in one of the world's most popular online community platforms. We operationalize this goal as measuring the proportion of unmoderated comments in the 97 most popular communities on Reddit that violate eight widely accepted platform norms. To achieve this goal, we contribute a human-AI pipeline for identifying these violations and a bootstrap sampling method to quantify measurement uncertainty. We find that 6.25% (95% Confidence Interval [5.36%, 7.13%]) of all comments in 2016, and 4.28% (95% CI [2.50%, 6.26%]) in 2020, are violations of these norms. Most anti-social behaviors remain unmoderated: moderators only removed one in twenty violating comments in 2016, and one in ten violating comments in 2020. Personal attacks were the most prevalent category of norm violation; pornography and bigotry were the most likely to be moderated, while politically inflammatory comments and misogyny/vulgarity were the least likely to be moderated. This paper offers a method and set of empirical results for tracking these phenomena as both the social practices (e.g., moderation) and technical practices (e.g., design) evolve.
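The uncertainty quantification can be illustrated with a plain nonparametric bootstrap over binary violation labels; this is a simplification, since the paper's method also accounts for uncertainty introduced by the human-AI labeling pipeline, and the data below are invented for illustration.

```python
import numpy as np

def bootstrap_proportion_ci(labels, n_boot=10_000, alpha=0.05, seed=0):
    """Point estimate and bootstrap confidence interval for the proportion of
    norm-violating comments. A plain nonparametric bootstrap over binary
    labels; the paper's method additionally propagates uncertainty from the
    human-AI labeling pipeline."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    boot_means = np.array([
        rng.choice(labels, size=labels.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return labels.mean(), (lo, hi)

# Illustrative labels only: 1 = comment violates a platform norm, 0 = it does not.
labels = np.r_[np.ones(62), np.zeros(938)]      # ~6.2% violation rate
print(bootstrap_proportion_ci(labels))
```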

2 citations

Proceedings ArticleDOI
05 May 2012
TL;DR: This panel brings together scholars who study deviance and failure in diverse social computing systems to examine four design-related themes that contribute to and support these problematic uses: theft, anonymity, deviance, and polarization.
Abstract: Social computing technologies are pervasive in our work, relationships, and culture. Despite their promise for transforming the structure of communication and human interaction, the complex social dimensions of these technological systems often reproduce offline social ills or create entirely novel forms of conflict and deviance. This panel brings together scholars who study deviance and failure in diverse social computing systems to examine four design-related themes that contribute to and support these problematic uses: theft, anonymity, deviance, and polarization.

2 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; an ensemble of these residual networks won first place in the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
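The central reformulation, learning a residual function F(x) that is added back to the input through an identity shortcut, can be sketched in a few lines; the dense NumPy block below is an illustrative stand-in for the paper's convolutional blocks, not its implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(x + F(x)): the weights learn only the residual F, and the
    identity shortcut carries x through unchanged (a dense stand-in for the
    paper's convolutional blocks)."""
    f = relu(x @ w1) @ w2          # two-layer residual function F(x)
    return relu(x + f)             # add the identity shortcut, then nonlinearity

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 64))                       # a batch of activations
w1 = 0.01 * rng.normal(size=(64, 64))
w2 = 0.01 * rng.normal(size=(64, 64))
print(residual_block(x, w1, w2).shape)             # -> (4, 64)
```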

123,388 citations

Proceedings Article
04 Sep 2014
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
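One way to see the appeal of very small filters: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution while using fewer weights, as in the rough count below (the channel count is chosen arbitrarily for illustration).

```python
def conv_weights(k, c_in, c_out):
    """Number of weights in a k x k convolution (bias terms omitted)."""
    return k * k * c_in * c_out

c = 256
stack_of_two_3x3 = 2 * conv_weights(3, c, c)   # same receptive field as one 5x5
single_5x5 = conv_weights(5, c, c)
print(stack_of_two_3x3, single_5x5)            # 1179648 vs 1638400 weights
```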

55,235 citations

Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

49,914 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Book
18 Nov 2016
TL;DR: Deep learning, as presented in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications, including natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations