Institution

Amazon.com

Company · Seattle, Washington, United States
About: Amazon.com is a company based in Seattle, Washington, United States. It is known for research contributions in the topics: Service (business) & Service provider. The organization has 13363 authors who have published 17317 publications receiving 266589 citations.


Papers
Patent
08 Jun 2015
TL;DR: Placement policies applied at the administrative layer or storage layer may be based on the percentage or amount of provisioned, reserved, or available storage or IOPS capacity on each storage device, and particular placements (or subsequent operations to move partition replicas) may result in an overall resource utilization that is well balanced.
Abstract: A system that implements a data storage service may store data in multiple replicated partitions on respective storage nodes. The selection of the storage nodes (or storage devices thereof) on which to store the partition replicas may be performed by administrative components that are responsible for partition management and resource allocation for respective groups of storage nodes (e.g., based on a global view of resource capacity or usage), or the selection of particular storage devices of a storage node may be determined by the storage node itself (e.g., based on a local view of resource capacity or usage). Placement policies applied at the administrative layer or storage layer may be based on the percentage or amount of provisioned, reserved, or available storage or IOPS capacity on each storage device, and particular placements (or subsequent operations to move partition replicas) may result in an overall resource utilization that is well balanced.
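The abstract describes headroom-based placement at either the administrative or the storage layer. Purely as an illustration (the patent covers many variants, and all names here such as StorageNode and place_replicas are hypothetical), a minimal sketch of one such policy might score each node by whichever of its storage or IOPS headroom is scarcer and place replicas on the highest-scoring nodes:

```python
from dataclasses import dataclass

@dataclass
class StorageNode:
    node_id: str
    storage_total: float        # e.g. GiB
    storage_provisioned: float
    iops_total: float
    iops_provisioned: float

    @property
    def storage_headroom(self) -> float:
        return 1.0 - self.storage_provisioned / self.storage_total

    @property
    def iops_headroom(self) -> float:
        return 1.0 - self.iops_provisioned / self.iops_total

def place_replicas(nodes, n_replicas, storage_needed, iops_needed):
    """Greedy placement sketch: keep only nodes with enough free storage
    and IOPS, then prefer the nodes whose scarcer resource has the most
    headroom, so overall utilization stays balanced."""
    eligible = [
        n for n in nodes
        if n.storage_total - n.storage_provisioned >= storage_needed
        and n.iops_total - n.iops_provisioned >= iops_needed
    ]
    # Score by the scarcer of the two resources so neither is exhausted first.
    eligible.sort(key=lambda n: min(n.storage_headroom, n.iops_headroom),
                  reverse=True)
    chosen = eligible[:n_replicas]
    for n in chosen:
        n.storage_provisioned += storage_needed
        n.iops_provisioned += iops_needed
    return [n.node_id for n in chosen]
```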

83 citations

Posted Content
TL;DR: MetaOptNet proposes to use discriminatively trained linear classifiers as base learners to learn representations for few-shot learning, and shows that they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks.
Abstract: Many meta-learning approaches for few-shot learning rely on simple base learners such as nearest-neighbor classifiers. However, even in the few-shot regime, discriminatively trained linear predictors can offer better generalization. We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks. Our objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories. To efficiently solve the objective, we exploit two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem and the dual formulation of the optimization problem. This allows us to use high-dimensional embeddings with improved generalization at a modest increase in computational overhead. Our approach, named MetaOptNet, achieves state-of-the-art performance on miniImageNet, tieredImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks. Our code is available at this https URL.
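The paper's actual base learner is an SVM solved in the dual, with implicit differentiation of the convex problem's optimality conditions. As a simpler stand-in that shows the same end-to-end idea (a convex linear head whose solution is differentiable with respect to the features), here is a hedged ridge-regression sketch in PyTorch; the shapes and names are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def ridge_head(support_feats, support_onehot, query_feats, lam=1.0):
    # Closed-form linear base learner for one episode. Because the solve is
    # differentiable, the episode loss backpropagates into the embedding net.
    d = support_feats.shape[1]
    A = support_feats.T @ support_feats + lam * torch.eye(d)
    W = torch.linalg.solve(A, support_feats.T @ support_onehot)  # [d, n_way]
    return query_feats @ W  # query logits, shape [n_query, n_way]

# Hypothetical 5-way 1-shot episode with 75 queries and 64-d embeddings.
feats = torch.randn(80, 64, requires_grad=True)  # stand-in for an embedding net
support, query = feats[:5], feats[5:]
logits = ridge_head(support, torch.eye(5), query)
loss = F.cross_entropy(logits, torch.randint(0, 5, (75,)))
loss.backward()  # meta-gradient w.r.t. the embeddings, as in meta-training
```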

83 citations

Patent
Peter George Ross
20 Sep 2010
TL;DR: A computer system includes a chassis, one or more hard disk drives coupled to the chassis, and a number of air passages under at least one of the hard disk drives, as discussed by the authors.
Abstract: A computer system includes a chassis, one or more hard disk drives coupled to the chassis, and one or more air passages under at least one of the hard disk drives. The air passages include one or more air inlets and one or more air outlets. The inlets direct at least a portion of the air downwardly into the passages. The passages allow air to move from the air inlets to the air outlets.

83 citations

Proceedings Article
01 Jun 2021
TL;DR: This work develops COSINE, a contrastive self-training framework for fine-tuning LMs with weak supervision; underpinned by contrastive regularization and confidence-based reweighting, it gradually improves model fitting while effectively suppressing error propagation.
Abstract: Fine-tuned pre-trained language models (LMs) have achieved enormous success in many natural language processing (NLP) tasks, but they still require excessive labeled data in the fine-tuning stage. We study the problem of fine-tuning pre-trained LMs using only weak supervision, without any labeled data. This problem is challenging because the high capacity of LMs makes them prone to overfitting the noisy labels generated by weak supervision. To address this problem, we develop a contrastive self-training framework, COSINE, to enable fine-tuning LMs with weak supervision. Underpinned by contrastive regularization and confidence-based reweighting, our framework gradually improves model fitting while effectively suppressing error propagation. Experiments on sequence, token, and sentence pair classification tasks show that our model outperforms the strongest baseline by large margins and achieves competitive performance with fully-supervised fine-tuning methods. Our implementation is available on https://github.com/yueyu1030/COSINE.
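The released implementation is at the GitHub link above. Purely as an illustration of the confidence-based reweighting ingredient (not the authors' code; the function name and threshold are assumptions), a minimal PyTorch sketch might mask out low-confidence pseudo-labels and weight the rest by their confidence:

```python
import torch
import torch.nn.functional as F

def confidence_weighted_loss(logits, pseudo_probs, threshold=0.9):
    """Sketch of confidence-based reweighting in self-training.
    logits: current model outputs, [batch, n_classes].
    pseudo_probs: soft pseudo-labels from the previous round, same shape.
    Samples whose pseudo-label confidence is below the threshold are
    dropped; the rest contribute in proportion to that confidence,
    so noisier labels propagate less error."""
    conf, _ = pseudo_probs.max(dim=1)        # confidence of each pseudo-label
    mask = conf >= threshold                 # drop low-confidence samples
    if not mask.any():
        return logits.new_zeros(())
    log_probs = F.log_softmax(logits[mask], dim=1)
    per_sample = -(pseudo_probs[mask] * log_probs).sum(dim=1)  # soft cross-entropy
    weights = conf[mask] / conf[mask].sum()  # normalize confidences to weights
    return (weights * per_sample).sum()
```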

83 citations

Patent
20 Jul 2006
TL;DR: A spelling change analyzer identifies useful alternative spellings of search strings submitted to a search engine, based on spelling changes made by users as detected by programmatically analyzing the search histories of a population of search engine users.
Abstract: A spelling change analyzer (160) identifies useful alternative spellings of search strings submitted to a search engine (100). The spelling change analyzer (160) takes into consideration spelling changes made by users, as detected by programmatically analyzing search histories (150) of a population of search engine users. In one embodiment, an assessment of whether a second search string represents a useful alternative spelling of a first search string takes into consideration (1) an edit distance between the first and second search strings, and (2) a likelihood that a user who submits the first search string will thereafter submit the second search string, as determined by monitoring and analyzing actions of users.
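The abstract names two signals: the edit distance between the two strings, and the likelihood that a user who submits the first string later submits the second. A toy Python sketch combining the two (hypothetical function names and thresholds, not the patented method) might look like:

```python
from collections import Counter

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def alternative_spellings(sessions, query, max_edit=2, min_prob=0.1):
    # sessions: per-user query sequences. A candidate counts as "useful" if
    # it is (1) close to `query` in edit distance and (2) frequently
    # submitted shortly after `query` by the same user.
    followups, occurrences = Counter(), 0
    for seq in sessions:
        for i, q in enumerate(seq):
            if q == query:
                occurrences += 1
                followups.update(seq[i + 1:i + 3])  # next two queries
    if occurrences == 0:
        return []
    results = [(q2, n / occurrences) for q2, n in followups.items()
               if q2 != query and levenshtein(query, q2) <= max_edit
               and n / occurrences >= min_prob]
    return sorted(results, key=lambda t: -t[1])

sessions = [["amazn", "amazon"], ["amazn", "amazon prime"], ["amazn", "amazon"]]
print(alternative_spellings(sessions, "amazn"))  # [('amazon', 0.666...)]
```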

83 citations


Authors


Name                   H-index   Papers   Citations
Jiawei Han                 168     1233      143427
Bernhard Schölkopf         148     1092      149492
Christos Faloutsos         127      789       77746
Alexander J. Smola         122      434      110222
Rama Chellappa             120     1031       62865
William F. Laurance        118      470       56464
Andrew McCallum            113      472       78240
Michael J. Black           112      429       51810
David Heckerman            109      483       62668
Larry S. Davis             107      693       49714
Chris M. Wood              102      795       43076
Pietro Perona              102      414       94870
Guido W. Imbens             97      352       64430
W. Bruce Croft              97      426       39918
Chunhua Shen                93      681       37468
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

89% related

Google
39.8K papers, 2.1M citations

88% related

Carnegie Mellon University
104.3K papers, 5.9M citations

87% related

ETH Zurich
122.4K papers, 5.1M citations

82% related

University of Maryland, College Park
155.9K papers, 7.2M citations

82% related

Performance Metrics
No. of papers from the Institution in previous years
Year    Papers
2023         4
2022       168
2021     2,015
2020     2,596
2019     2,002
2018     1,189