Institution
Microsoft
Company • Redmond, Washington, United States
About: Microsoft is a company based in Redmond, Washington, United States. It is known for its research contributions in the topics: User interface & Context (language use). The organization has 49501 authors who have published 86900 publications receiving 4195429 citations. The organization is also known as MS and MSFT.
Topics: User interface, Context (language use), Object (computer science), Computer science, Cloud computing
Papers published on a yearly basis
Papers
20 May 2001
TL;DR: This paper sketches the design of PAST, a large-scale, Internet-based, global storage utility that provides scalability, high availability, persistence and security, and uses randomization to ensure diversity in the set of nodes that store a file's replicas.
Abstract: This paper sketches the design of PAST, a large-scale, Internet-based, global storage utility that provides scalability, high availability, persistence and security. PAST is a peer-to-peer Internet application and is entirely self-organizing. PAST nodes serve as access points for clients, participate in the routing of client requests, and contribute storage to the system. Nodes are not trusted; they may join the system at any time and may silently leave the system without warning. Yet, the system is able to provide strong assurances, efficient storage access, load balancing and scalability. Among the most interesting aspects of PAST's design are (1) the Pastry location and routing scheme, which reliably and efficiently routes client requests among the PAST nodes, has good network locality properties and automatically resolves node failures and node additions; (2) the use of randomization to ensure diversity in the set of nodes that store a file's replicas and to provide load balancing; and (3) the optional use of smartcards, which are held by each PAST user and issued by a third party called a broker. The smartcards support a quota system that balances supply and demand of storage in the system.
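PAST's actual replica placement is driven by Pastry routing (replicas go to the nodes whose ids are numerically closest to the file id), but the randomization-for-diversity idea from point (2) can be illustrated with a minimal, hypothetical sketch: seeding a deterministic RNG with a hash of the file id lets any node reproduce the same diverse replica set without coordination. The function and node names here are illustrative, not from the paper.

```python
import hashlib
import random

def choose_replica_nodes(file_id: str, nodes: list, k: int) -> list:
    """Deterministically pick k distinct nodes to hold a file's replicas.

    Seeding the RNG with a hash of the file id makes the choice
    reproducible by every participant, while the hash spreads different
    files across different node subsets (diversity + load balancing).
    """
    seed = int.from_bytes(hashlib.sha256(file_id.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return rng.sample(nodes, k)

nodes = [f"node-{i}" for i in range(100)]
replicas = choose_replica_nodes("report.pdf", nodes, k=4)
assert len(set(replicas)) == 4                                   # k distinct nodes
assert replicas == choose_replica_nodes("report.pdf", nodes, 4)  # deterministic
```

Because the selection depends only on the file id and the node list, any client can locate a file's replicas without a directory service, which is the property the paper's randomized scheme exploits.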
702 citations
01 Dec 2011
TL;DR: This work investigates the potential of Context-Dependent Deep-Neural-Network HMMs, or CD-DNN-HMMs, from a feature-engineering perspective to reduce the word error rate for speaker-independent transcription of phone calls.
Abstract: We investigate the potential of Context-Dependent Deep-Neural-Network HMMs, or CD-DNN-HMMs, from a feature-engineering perspective. Recently, we showed that for speaker-independent transcription of phone calls (NIST RT03S Fisher data), CD-DNN-HMMs reduced the word error rate by as much as one third, from 27.4% (obtained by discriminatively trained Gaussian-mixture HMMs with HLDA features) to 18.5%, using 300+ hours of training data (Switchboard), 9000+ tied triphone states, and up to 9 hidden network layers.
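The "one third" claim in the abstract is a relative error reduction, which is easy to verify from the two absolute word error rates it reports:

```python
baseline_wer = 27.4  # GMM-HMM with HLDA features, from the abstract
dnn_wer = 18.5       # CD-DNN-HMM, from the abstract

relative_reduction = (baseline_wer - dnn_wer) / baseline_wer
print(f"{relative_reduction:.1%}")  # prints 32.5%, i.e. roughly one third
```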
702 citations
TL;DR: The Deep Convolutional Inverse Graphics Network (DC-IGN) learns an interpretable representation of images that is disentangled with respect to transformations such as out-of-plane rotations and lighting variations.
Abstract: This paper presents the Deep Convolutional Inverse Graphics Network (DC-IGN), a model that learns an interpretable representation of images. This representation is disentangled with respect to transformations such as out-of-plane rotations and lighting variations. The DC-IGN model is composed of multiple layers of convolution and de-convolution operators and is trained using the Stochastic Gradient Variational Bayes (SGVB) algorithm. We propose a training procedure to encourage neurons in the graphics code layer to represent a specific transformation (e.g. pose or light). Given a single input image, our model can generate new images of the same object with variations in pose and lighting. We present qualitative and quantitative results of the model's efficacy at learning a 3D rendering engine.
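The training procedure for disentangling the graphics code can be sketched in simplified form: during a mini-batch in which only one scene variable (say, pose) changes, every latent unit except the one assigned to that variable is clamped to the batch mean, so only the active unit can explain the variation. This NumPy sketch shows the clamping step only; the paper's full scheme (including how gradients are handled) is more involved.

```python
import numpy as np

def clamp_graphics_code(z, active_idx):
    """Clamp all latent units except `active_idx` to their batch mean.

    z          : (batch, code_dim) latent codes for a batch in which a
                 single scene variable varies.
    active_idx : index of the unit meant to encode that variable.
    """
    z = z.copy()
    mean = z.mean(axis=0)
    inactive = np.ones(z.shape[1], dtype=bool)
    inactive[active_idx] = False
    z[:, inactive] = mean[inactive]  # inactive units become constant
    return z

batch = np.array([[0.0, 1.0, 2.0],
                  [4.0, 5.0, 6.0]])
clamped = clamp_graphics_code(batch, active_idx=0)
# column 0 keeps its per-example values; columns 1 and 2 are averaged
```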
702 citations
TL;DR: In Part I of this series, the authors showed that left convergence is equivalent to convergence in metric, both for simple graphs and for graphs with node weights and edge weights.
702 citations
TL;DR: This work takes the skeleton as the input at each time slot and introduces a novel regularization scheme to learn the co-occurrence features of skeleton joints, and proposes a new dropout algorithm which simultaneously operates on the gates, cells, and output responses of the LSTM neurons.
Abstract: Skeleton based action recognition distinguishes human actions using the trajectories of skeleton joints, which provide a very good representation for describing actions. Considering that recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM) can learn feature representations and model long-term temporal dependencies automatically, we propose an end-to-end fully connected deep LSTM network for skeleton based action recognition. Inspired by the observation that the co-occurrences of the joints intrinsically characterize human actions, we take the skeleton as the input at each time slot and introduce a novel regularization scheme to learn the co-occurrence features of skeleton joints. To train the deep LSTM network effectively, we propose a new dropout algorithm which simultaneously operates on the gates, cells, and output responses of the LSTM neurons. Experimental results on three human action recognition datasets consistently demonstrate the effectiveness of the proposed model.
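The abstract's dropout variant, which acts on gates and cell updates rather than only on the output, can be sketched as a single LSTM step in NumPy. This is a loose illustration of the idea, assuming standard inverted dropout; the paper's exact masking scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_mask(shape, p_keep, rng):
    # Inverted dropout: Bernoulli mask scaled by 1/p_keep so the
    # expected activation is unchanged and no rescaling is needed at test time.
    return rng.binomial(1, p_keep, size=shape) / p_keep

def lstm_step_with_gate_dropout(x, h, c, W, p_keep=0.8, rng=rng):
    """One LSTM step with dropout on the gates, the candidate cell
    update, and the output response (illustrative, not the paper's code).

    W has shape (dim(x) + dim(h), 4 * dim(h)).
    """
    z = np.concatenate([x, h]) @ W       # all four gate pre-activations
    i, f, o, g = np.split(z, 4)
    i = 1 / (1 + np.exp(-i))             # input gate (sigmoid)
    f = 1 / (1 + np.exp(-f))             # forget gate (sigmoid)
    o = 1 / (1 + np.exp(-o))             # output gate (sigmoid)
    g = np.tanh(g)                       # candidate cell update
    # dropout applied inside the cell, not just on the output
    i = i * dropout_mask(i.shape, p_keep, rng)
    f = f * dropout_mask(f.shape, p_keep, rng)
    g = g * dropout_mask(g.shape, p_keep, rng)
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    h_new = h_new * dropout_mask(h_new.shape, p_keep, rng)
    return h_new, c_new
```

Dropping gate and cell activations, rather than only the hidden output, regularizes the recurrent dynamics themselves, which is the motivation the abstract gives for the new algorithm.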
702 citations
Authors
Showing all 49603 results
Name | H-index | Papers | Citations |
---|---|---|---|
P. Chang | 170 | 2154 | 151783 |
Andrew Zisserman | 167 | 808 | 261717 |
Alexander S. Szalay | 166 | 936 | 145745 |
Darien Wood | 160 | 2174 | 136596 |
Xiang Zhang | 154 | 1733 | 117576 |
Vivek Sharma | 150 | 3030 | 136228 |
Rajesh Kumar | 149 | 4439 | 140830 |
Bernhard Schölkopf | 148 | 1092 | 149492 |
Thomas S. Huang | 146 | 1299 | 101564 |
Christopher D. Manning | 138 | 499 | 147595 |
Nicolas Berger | 137 | 1581 | 96529 |
Georgios B. Giannakis | 137 | 1321 | 73517 |
Luc Van Gool | 133 | 1307 | 107743 |
Eric Horvitz | 133 | 914 | 66162 |
Xiaoou Tang | 132 | 553 | 94555 |