Institution
Amazon.com
Company • Seattle, Washington, United States
About: Amazon.com is a company based in Seattle, Washington, United States. It is known for research contributions in the topics Service (business) and Service provider. The organization has 13363 authors who have published 17317 publications receiving 266589 citations.
Papers published on a yearly basis
Papers
08 Jun 2015
TL;DR: In this paper, the placement policies applied at the administrative layer or storage layer may be based on the percentage or amount of provisioned, reserved, or available storage or IOPS capacity on each storage device, and particular placements (or subsequent operations to move partition replicas) may result in an overall resource utilization that is well balanced.
Abstract: A system that implements a data storage service may store data in multiple replicated partitions on respective storage nodes. The selection of the storage nodes (or storage devices thereof) on which to store the partition replicas may be performed by administrative components that are responsible for partition management and resource allocation for respective groups of storage nodes (e.g., based on a global view of resource capacity or usage), or the selection of particular storage devices of a storage node may be determined by the storage node itself (e.g., based on a local view of resource capacity or usage). Placement policies applied at the administrative layer or storage layer may be based on the percentage or amount of provisioned, reserved, or available storage or IOPS capacity on each storage device, and particular placements (or subsequent operations to move partition replicas) may result in an overall resource utilization that is well balanced.
83 citations
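The placement policy the abstract describes scores storage devices by remaining storage and IOPS headroom before placing a partition replica. A minimal sketch of that idea, assuming illustrative device fields, weights, and function names (none of which are from the patent):

```python
# Hypothetical sketch of a capacity-balanced placement policy: each device is
# scored by the fraction of storage and IOPS capacity still available, and the
# replica goes to the best-scoring device. Weights and names are illustrative.
from dataclasses import dataclass

@dataclass
class StorageDevice:
    name: str
    storage_total: int       # GB
    storage_used: int        # GB
    iops_total: int
    iops_provisioned: int

def placement_score(d: StorageDevice, w_storage: float = 0.5, w_iops: float = 0.5) -> float:
    """Higher score means more headroom; blends storage and IOPS availability."""
    storage_free = 1 - d.storage_used / d.storage_total
    iops_free = 1 - d.iops_provisioned / d.iops_total
    return w_storage * storage_free + w_iops * iops_free

def choose_device(devices: list[StorageDevice]) -> StorageDevice:
    """Pick the device with the most balanced remaining capacity."""
    return max(devices, key=placement_score)

devices = [
    StorageDevice("d1", 1000, 900, 5000, 1000),   # storage nearly full
    StorageDevice("d2", 1000, 400, 5000, 4500),   # IOPS nearly full
    StorageDevice("d3", 1000, 500, 5000, 2500),   # balanced headroom
]
print(choose_device(devices).name)  # d3
```

Blending both dimensions is what avoids the failure mode the abstract hints at: a device with plenty of free storage but exhausted IOPS (or vice versa) scores poorly on the combined metric.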
TL;DR: MetaOptNet, as proposed in this paper, uses linear classifiers as base learners to learn representations for few-shot learning, and shows they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks.
Abstract: Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers. However, even in the few-shot regime, discriminatively trained linear predictors can offer better generalization. We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks. Our objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories. To efficiently solve the objective, we exploit two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem. This allows us to use high-dimensional embeddings with improved generalization at a modest increase in computational overhead. Our approach, named MetaOptNet, achieves state-of-the-art performance on miniImageNet, tieredImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks. Our code is available at this https URL.
83 citations
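The episode structure the abstract describes (fit a convex linear base learner on a support set in embedding space, then classify the query set) can be sketched with closed-form ridge regression to one-hot targets. This is a deliberate simplification: MetaOptNet itself solves a differentiable multi-class SVM through its dual, and all names and dimensions below are illustrative.

```python
# Toy few-shot episode with a convex linear base learner. Ridge regression
# to one-hot targets stands in for the paper's differentiable SVM; the
# meta-learning structure (fit on support, evaluate on query) is the same.
import numpy as np

def fit_linear_base_learner(support_emb, support_labels, n_classes, lam=1.0):
    """Closed-form ridge regression: W = (X^T X + lam*I)^-1 X^T Y."""
    X = support_emb                                   # (n_support, d)
    Y = np.eye(n_classes)[support_labels]             # one-hot (n_support, C)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)   # (d, C)

def predict(W, query_emb):
    return np.argmax(query_emb @ W, axis=1)

rng = np.random.default_rng(0)
# A 2-way 5-shot episode in a 16-d embedding space: two class prototypes
# plus small per-sample noise.
protos = rng.normal(size=(2, 16))
support = np.vstack([protos[c] + 0.1 * rng.normal(size=(5, 16)) for c in range(2)])
labels = np.repeat([0, 1], 5)
W = fit_linear_base_learner(support, labels, n_classes=2)
query = protos[1] + 0.1 * rng.normal(size=(3, 16))   # queries near prototype 1
print(predict(W, query))
```

Because the base-learner solve is a small (d x d) linear system per episode, the embedding network can be trained end-to-end by differentiating through it, which is the implicit-differentiation property the abstract exploits.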
20 Sep 2010
TL;DR: A computer system includes a chassis, one or more hard disk drives coupled to the chassis, and a number of air passages under at least one of the hard disk drives, as discussed by the authors.
Abstract: A computer system includes a chassis, one or more hard disk drives coupled to the chassis, and one or more air passages under at least one of the hard disk drives. The air passages include one or more air inlets and one or more air outlets. The inlets direct at least a portion of the air downwardly into the passages. The passages allow air to move from the air inlets to the air outlets.
83 citations
01 Jun 2021
TL;DR: This work develops a contrastive self-training framework, COSINE, to enable fine-tuning LMs with weak supervision, underpinned by contrastive regularization and confidence-based reweighting, which gradually improves model fitting while effectively suppressing error propagation.
Abstract: Fine-tuned pre-trained language models (LMs) have achieved enormous success in many natural language processing (NLP) tasks, but they still require excessive labeled data in the fine-tuning stage. We study the problem of fine-tuning pre-trained LMs using only weak supervision, without any labeled data. This problem is challenging because the high capacity of LMs makes them prone to overfitting the noisy labels generated by weak supervision. To address this problem, we develop a contrastive self-training framework, COSINE, to enable fine-tuning LMs with weak supervision. Underpinned by contrastive regularization and confidence-based reweighting, our framework gradually improves model fitting while effectively suppressing error propagation. Experiments on sequence, token, and sentence pair classification tasks show that our model outperforms the strongest baseline by large margins and achieves competitive performance with fully-supervised fine-tuning methods. Our implementation is available on https://github.com/yueyu1030/COSINE.
83 citations
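The confidence-based reweighting component of the abstract can be illustrated with a small sketch: pseudo-labeled samples are weighted by the model's confidence so that likely-noisy labels contribute less to the self-training loss. This is a simplification of the idea, not COSINE's actual implementation; the threshold and function names are assumptions.

```python
# Illustrative confidence-based reweighting for self-training: samples whose
# predicted class probability falls below a threshold are dropped, and the
# rest are weighted by their confidence in a pseudo-label cross-entropy loss.
import math

def confidence_weights(probs, threshold=0.6):
    """probs: per-sample class-probability lists.
    Weight = max probability if above threshold, else 0 (sample dropped)."""
    weights = []
    for p in probs:
        conf = max(p)
        weights.append(conf if conf >= threshold else 0.0)
    return weights

def weighted_pseudo_label_loss(probs, pseudo_labels):
    """Cross-entropy against pseudo-labels, reweighted by model confidence."""
    w = confidence_weights(probs)
    total, norm = 0.0, sum(w) or 1.0
    for p, y, wi in zip(probs, pseudo_labels, w):
        total += -wi * math.log(max(p[y], 1e-12))
    return total / norm

probs = [[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]]   # middle sample is uncertain
pseudo = [0, 0, 1]
print(weighted_pseudo_label_loss(probs, pseudo))
```

Down-weighting uncertain samples is what "suppresses error propagation": a wrong pseudo-label the model is unsure about never dominates the gradient in the next self-training round.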
20 Jul 2006
TL;DR: In this paper, a spelling change analyzer was used to identify useful alternative spellings of search strings submitted to a search engine, as detected by programmatically analyzing search histories of a population of search engine users.
Abstract: A spelling change analyzer (160) identifies useful alternative spellings of search strings submitted to a search engine (100). The spelling change analyzer (160) takes into consideration spelling changes made by users, as detected by programmatically analyzing search histories (150) of a population of search engine users. In one embodiment, an assessment of whether a second search string represents a useful alternative spelling of a first search string takes into consideration (1) an edit distance between the first and second search strings, and (2) a likelihood that a user who submits the first search string will thereafter submit the second search string, as determined by monitoring and analyzing actions of users.
83 citations
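The abstract names two signals for judging whether one search string is a useful respelling of another: the edit distance between them, and the likelihood that a user who submits the first string later submits the second. A minimal sketch combining both, assuming an illustrative pair-log format and thresholds not taken from the patent:

```python
# Sketch of the two signals from the abstract: Levenshtein edit distance
# between query strings, and the empirical probability that the second query
# follows the first in user search histories. Thresholds are illustrative.
from collections import Counter

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def alternative_spellings(query_pairs, max_edit=2, min_prob=0.3):
    """query_pairs: (first_query, next_query) events from search histories.
    Returns {first: best alternative} when the rewrite is close in edit
    distance and frequently follows the original query."""
    firsts = Counter(f for f, _ in query_pairs)
    pairs = Counter(query_pairs)
    best = {}
    for (f, s), n in pairs.items():
        if f == s or edit_distance(f, s) > max_edit:
            continue
        prob = n / firsts[f]
        if prob >= min_prob and (f not in best or prob > best[f][1]):
            best[f] = (s, prob)
    return {f: s for f, (s, _) in best.items()}

log = [("recieve", "receive")] * 4 + [("recieve", "recipes")] + [("books", "books")] * 3
print(alternative_spellings(log))  # {'recieve': 'receive'}
```

Requiring both signals is the point: edit distance alone would accept unrelated near-miss strings, while the follow-up probability alone would accept topical refinements ("recipes") that are not spelling corrections.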
Authors
Showing all 13498 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Jiawei Han | 168 | 1233 | 143427 |
| Bernhard Schölkopf | 148 | 1092 | 149492 |
| Christos Faloutsos | 127 | 789 | 77746 |
| Alexander J. Smola | 122 | 434 | 110222 |
| Rama Chellappa | 120 | 1031 | 62865 |
| William F. Laurance | 118 | 470 | 56464 |
| Andrew McCallum | 113 | 472 | 78240 |
| Michael J. Black | 112 | 429 | 51810 |
| David Heckerman | 109 | 483 | 62668 |
| Larry S. Davis | 107 | 693 | 49714 |
| Chris M. Wood | 102 | 795 | 43076 |
| Pietro Perona | 102 | 414 | 94870 |
| Guido W. Imbens | 97 | 352 | 64430 |
| W. Bruce Croft | 97 | 426 | 39918 |
| Chunhua Shen | 93 | 681 | 37468 |