Institution
Microsoft
Company • Redmond, Washington, United States
About: Microsoft is a company based in Redmond, Washington, United States. It is known for research contributions in the topics of User interface and Context (language use). The organization has 49501 authors who have published 86900 publications receiving 4195429 citations. The organization is also known as MS and MSFT.
Topics: User interface, Context (language use), Object (computer science), Computer science, Cloud computing
Papers published on a yearly basis
Papers
01 Sep 1999
TL;DR: This work develops Wallflower, a three-component system for background maintenance that is shown to outperform previous algorithms by handling a greater set of the difficult situations that can occur.
Abstract: Background maintenance is a frequent element of video surveillance systems. We develop Wallflower, a three-component system for background maintenance: the pixel-level component performs Wiener filtering to make probabilistic predictions of the expected background; the region-level component fills in homogeneous regions of foreground objects; and the frame-level component detects sudden, global changes in the image and swaps in better approximations of the background. We compare our system with 8 other background subtraction algorithms. Wallflower is shown to outperform previous algorithms by handling a greater set of the difficult situations that can occur. Finally, we analyze the experimental results and propose normative principles for background maintenance.
1,971 citations
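The pixel-level component predicts each pixel's expected background from its recent history and flags large deviations. A minimal sketch of that idea, using a simplified per-pixel mean/variance model rather than the paper's Wiener-filter predictor (the function name and threshold `k` are illustrative, not from the paper):

```python
import numpy as np

def background_mask(history, frame, k=3.0):
    """Pixel-level background subtraction: flag pixels whose current value
    deviates from the per-pixel mean of past frames by more than k standard
    deviations. A simplified stand-in for Wallflower's Wiener-filter
    component, for illustration only.

    history: (T, H, W) stack of past grayscale frames
    frame:   (H, W) current frame
    Returns a boolean foreground mask of shape (H, W).
    """
    mean = history.mean(axis=0)
    std = history.std(axis=0) + 1e-6  # avoid division by zero on static pixels
    return np.abs(frame - mean) > k * std
```

Thresholding at `k` standard deviations adapts to each pixel's own noise level; in the paper, the region- and frame-level components then refine such a raw pixel-level mask.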
05 Jan 2010
TL;DR: This paper examines the practice of retweeting as a way by which participants can be "in a conversation" and highlights how authorship, attribution, and communicative fidelity are negotiated in diverse ways.
Abstract: Twitter - a microblogging service that enables users to post messages ("tweets") of up to 140 characters - supports a variety of communicative practices; participants use Twitter to converse with individuals, groups, and the public at large, so when conversations emerge, they are often experienced by broader audiences than just the interlocutors. This paper examines the practice of retweeting as a way by which participants can be "in a conversation." While retweeting has become a convention inside Twitter, participants retweet using different styles and for diverse reasons. We highlight how authorship, attribution, and communicative fidelity are negotiated in diverse ways. Using a series of case studies and empirical data, this paper maps out retweeting as a conversational practice.
1,953 citations
TL;DR: The propagation formulations behind the residual building blocks suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation.
Abstract: Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: this https URL
1,952 citations
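The identity-mapping property analyzed in the paper says that with identity skips, a deep unit's signal is the shallow signal plus a sum of residual-branch outputs: x_L = x_l + Σ F(x_i). A toy numeric sketch of that additive decomposition (the scalar tanh branch and function name are illustrative, not the paper's ResNet units):

```python
import numpy as np

def residual_chain(x0, weights):
    """Chain of residual blocks with identity skip connections:
    x_{l+1} = x_l + F(x_l), with a toy residual branch F(x) = w * tanh(x).
    Returns the final activation and each block's branch contribution."""
    x, contribs = x0, []
    for w in weights:
        c = w * np.tanh(x)   # residual branch F(x_l)
        contribs.append(c)
        x = x + c            # identity skip carries x_l forward unchanged
    return x, contribs
```

Because the skip path passes x0 through unchanged, the final signal decomposes exactly into x0 plus the accumulated branch contributions; the same structure lets gradients flow directly back to any earlier block.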
TL;DR: It is shown that further error rate reduction can be obtained by using convolutional neural networks (CNNs), and a limited-weight-sharing scheme is proposed that can better model speech features.
Abstract: Recently, the hybrid deep neural network (DNN)- hidden Markov model (HMM) has been shown to significantly improve speech recognition performance over the conventional Gaussian mixture model (GMM)-HMM. The performance improvement is partially attributed to the ability of the DNN to model complex correlations in speech features. In this paper, we show that further error rate reduction can be obtained by using convolutional neural networks (CNNs). We first present a concise description of the basic CNN and explain how it can be used for speech recognition. We further propose a limited-weight-sharing scheme that can better model speech features. The special structure such as local connectivity, weight sharing, and pooling in CNNs exhibits some degree of invariance to small shifts of speech features along the frequency axis, which is important to deal with speaker and environment variations. Experimental results show that CNNs reduce the error rate by 6%-10% compared with DNNs on the TIMIT phone recognition and the voice search large vocabulary speech recognition tasks.
1,948 citations
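The invariance argument can be sketched concretely: convolve along the frequency axis, then max-pool, so that a small spectral shift leaves the pooled features unchanged. A single-filter toy version (real acoustic models use many filters plus the paper's limited weight sharing; names and data are illustrative):

```python
import numpy as np

def freq_conv_pool(spectrum, kernel, pool=5):
    """1-D convolution along the frequency axis followed by max-pooling,
    the mechanism that gives CNN acoustic models some invariance to small
    shifts of speech features along frequency (toy sketch)."""
    n = len(spectrum) - len(kernel) + 1
    conv = np.array([spectrum[i:i + len(kernel)] @ kernel for i in range(n)])
    # Max-pool over non-overlapping windows of `pool` adjacent conv outputs.
    return np.array([conv[i:i + pool].max() for i in range(0, n, pool)])
```

Shifting a spectral peak by one frequency bin moves the convolution response within the same pooling window, so the pooled feature vector is unchanged, which is what helps absorb speaker and environment variation.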
TL;DR: This work surveys the most widely used algorithms for smoothing n-gram language models, presents an extensive empirical comparison of several of these smoothing techniques, including those described by Jelinek and Mercer (1980), and introduces methodologies for analyzing smoothing-algorithm efficacy in detail.
1,948 citations
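One of the surveyed techniques, Jelinek-Mercer interpolation, mixes the maximum-likelihood n-gram estimate with a lower-order distribution. A minimal bigram sketch with a fixed mixing weight (the surveyed methods tune the weight on held-out data; the function name is illustrative):

```python
from collections import Counter

def jelinek_mercer(bigrams, unigrams, lam=0.7):
    """Jelinek-Mercer interpolated bigram model:
    P(w2 | w1) = lam * P_ML(w2 | w1) + (1 - lam) * P_uni(w2)."""
    total = sum(unigrams.values())
    ctx = Counter()                       # count of each context word w1
    for (w1, _), c in bigrams.items():
        ctx[w1] += c

    def prob(w1, w2):
        p_ml = bigrams.get((w1, w2), 0) / ctx[w1] if ctx[w1] else 0.0
        p_uni = unigrams.get(w2, 0) / total
        return lam * p_ml + (1 - lam) * p_uni

    return prob
```

Interpolating with the unigram distribution gives unseen bigrams nonzero probability while the mixture still sums to one over the vocabulary for each context.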
Authors
Showing all 49603 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| P. Chang | 170 | 2154 | 151783 |
| Andrew Zisserman | 167 | 808 | 261717 |
| Alexander S. Szalay | 166 | 936 | 145745 |
| Darien Wood | 160 | 2174 | 136596 |
| Xiang Zhang | 154 | 1733 | 117576 |
| Vivek Sharma | 150 | 3030 | 136228 |
| Rajesh Kumar | 149 | 4439 | 140830 |
| Bernhard Schölkopf | 148 | 1092 | 149492 |
| Thomas S. Huang | 146 | 1299 | 101564 |
| Christopher D. Manning | 138 | 499 | 147595 |
| Nicolas Berger | 137 | 1581 | 96529 |
| Georgios B. Giannakis | 137 | 1321 | 73517 |
| Luc Van Gool | 133 | 1307 | 107743 |
| Eric Horvitz | 133 | 914 | 66162 |
| Xiaoou Tang | 132 | 553 | 94555 |