Institution
Amazon.com
Company • Seattle, Washington, United States
About: Amazon.com is a company based in Seattle, Washington, United States. It is known for research contributions in the topics of Computer science and Service (business). The organization has 13,363 authors who have published 17,317 publications, receiving 266,589 citations.
Topics: Computer science, Service (business), Service provider, Context (language use), Virtual machine
Papers published on a yearly basis
Papers
TL;DR: Building reliable distributed systems at a worldwide scale demands trade-offs between consistency and availability.
1,060 citations
10 Sep 1999 • TL;DR: The authors present a recommendation service that recommends items to a user based on a set of items already known to be of interest to that user, such as items the user has previously purchased.
Abstract: A recommendation service recommends items to individual users based on a set of items that are known to be of interest to the user, such as a set of items previously purchased by the user. The service is used to recommend products to users of a merchant's Web site (30). The service generates the recommendations using a previously-generated table (60) which maps items (62) to lists (64) of 'similar' items. The similarities reflected by the table (60) are based on the collective interests of the community of users. To generate personal recommendations, the service retrieves from the table (60) the similar items lists (64) corresponding to the items known to be of interest to the user. These similar items lists (64) are appropriately combined into a single list, which is then sorted and filtered to generate a list of recommended items. Also disclosed are various methods for using the current and/or past contents of a user's electronic shopping cart to generate recommendations.
981 citations
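The table-based approach the abstract describes can be sketched in a few lines. This is a minimal illustration, assuming cosine similarity over co-purchase sets and made-up data; it is not the patented implementation.

```python
from collections import defaultdict
from math import sqrt

def build_similar_items(purchases):
    """Precompute an item -> list of (similar item, score) table
    from a mapping of user -> set of purchased items."""
    buyers = defaultdict(set)  # item -> set of users who bought it
    for user, items in purchases.items():
        for item in items:
            buyers[item].add(user)
    table = defaultdict(list)
    items = list(buyers)
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            overlap = len(buyers[a] & buyers[b])
            if overlap:
                # Cosine similarity between the two items' buyer sets.
                sim = overlap / sqrt(len(buyers[a]) * len(buyers[b]))
                table[a].append((b, sim))
                table[b].append((a, sim))
    for item in table:
        table[item].sort(key=lambda pair: -pair[1])  # most similar first
    return table

def recommend(table, user_items, top_n=3):
    """Combine the similar-items lists for everything the user owns,
    summing scores, and filter out items the user already has."""
    scores = defaultdict(float)
    for item in user_items:
        for other, sim in table.get(item, []):
            if other not in user_items:
                scores[other] += sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Because the table is built offline, serving a recommendation is just a few lookups and a sort, which is what makes the approach practical at catalog scale.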
TL;DR: VL2 is a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics and can be deployed today, and a working prototype is built.
Abstract: To be agile and cost effective, data centers must allow dynamic resource allocation across large server pools. In particular, the data center network should provide a simple flat abstraction: it should be able to take any set of servers anywhere in the data center and give them the illusion that they are plugged into a physically separate, noninterfering Ethernet switch with as many ports as the service needs. To meet this goal, we present VL2, a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics. VL2 uses (1) flat addressing to allow service instances to be placed anywhere in the network, (2) Valiant Load Balancing to spread traffic uniformly across network paths, and (3) end system--based address resolution to scale to large server pools without introducing complexity to the network control plane. VL2's design is driven by detailed measurements of traffic and fault data from a large operational cloud service provider. VL2's implementation leverages proven network technologies, already available at low cost in high-speed hardware implementations, to build a scalable and reliable network architecture. As a result, VL2 networks can be deployed today, and we have built a working prototype. We evaluate the merits of the VL2 design using measurement, analysis, and experiments. Our VL2 prototype shuffles 2.7 TB of data among 75 servers in 395 s---sustaining a rate that is 94% of the maximum possible.
981 citations
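The Valiant Load Balancing step in (2) can be sketched as flow-level hashing to a pseudo-randomly chosen intermediate switch. This is a minimal illustration with hypothetical switch names, not VL2's actual forwarding logic.

```python
import hashlib

# Hypothetical intermediate-layer switches in the Clos topology.
INTERMEDIATES = ["int-1", "int-2", "int-3", "int-4"]

def pick_intermediate(flow, switches=INTERMEDIATES):
    """Valiant Load Balancing: bounce each flow off a pseudo-randomly
    chosen intermediate switch so traffic spreads uniformly across paths.
    Hashing the flow 5-tuple keeps every packet of one flow on the same
    path, which avoids TCP reordering."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return switches[int.from_bytes(digest[:4], "big") % len(switches)]
```

The key property is that the choice is deterministic per flow but uniform across flows, so no single path becomes a hotspot regardless of the traffic matrix.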
01 Jun 2019 • TL;DR: This article examines a collection of such training refinements and empirically evaluates their impact on final model accuracy through an ablation study, showing that combining these refinements significantly improves various CNN models.
Abstract: Much of the recent progress made in image classification research can be credited to training procedure refinements, such as changes in data augmentations and optimization methods. In the literature, however, most refinements are either briefly mentioned as implementation details or only visible in source code. In this paper, we will examine a collection of such refinements and empirically evaluate their impact on the final model accuracy through ablation study. We will show that, by combining these refinements together, we are able to improve various CNN models significantly. For example, we raise ResNet-50's top-1 validation accuracy from 75.3% to 79.29% on ImageNet. We will also demonstrate that improvement on image classification accuracy leads to better transfer learning performance in other application domains such as object detection and semantic segmentation.
980 citations
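One refinement of the kind the paper examines is label smoothing; a minimal sketch of the idea (not the paper's code):

```python
def smooth_labels(one_hot, epsilon=0.1):
    """Label smoothing: replace hard 0/1 classification targets with
    epsilon-smoothed values so the model is penalized for becoming
    over-confident. Each target keeps (1 - epsilon) of its mass and
    the remaining epsilon is spread uniformly over all k classes."""
    k = len(one_hot)
    return [(1 - epsilon) * y + epsilon / k for y in one_hot]
```

For a 4-class one-hot target with epsilon = 0.1, the true class becomes 0.925 and the rest 0.025 each; the targets still sum to 1, so the cross-entropy loss needs no other change.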
Affiliations: Naturalis, Utrecht University, Duke University, Institut de recherche pour le développement, Institut national de la recherche agronomique, Museu Paraense Emílio Goeldi, University of California, Berkeley, University of Leeds, Empresa Brasileira de Pesquisa Agropecuária, National Institute of Amazonian Research, National University of Saint Anthony the Abbot in Cuzco, University of Exeter, World Wide Fund for Nature, Universidad Autónoma Gabriel René Moreno, Norwegian University of Life Sciences, Max Planck Society, James Cook University, Universidade do Estado de Mato Grosso, University of Amsterdam, Silver Spring Networks, State University of Campinas, University of Edinburgh, University of Los Andes, Smithsonian Conservation Biology Institute, National University of Colombia, University of East Anglia, Central University of Ecuador, Centre national de la recherche scientifique, Humboldt State University, New York Botanical Garden, Universidade Federal do Acre, Paul Sabatier University, Missouri Botanical Garden, Amazon.com, University of Texas at Austin, University of Florida, Venezuelan Institute for Scientific Research, Environmental Change Institute, Federal Rural University of Amazonia, University of São Paulo, State University of Norte Fluminense, University of Wisconsin–Milwaukee, Smithsonian Tropical Research Institute, Northern Arizona University, Aarhus University, Tropenbos International, University of Kent, Royal Botanic Gardens, Universidad Nacional de la Amazonía Peruana, University of Missouri–St. Louis, Florida International University, Fairchild Tropical Botanic Garden, Wake Forest University
TL;DR: The finding that Amazonia is dominated by just 227 tree species implies that most biogeochemical cycling in the world’s largest tropical forest is performed by a tiny sliver of its diversity.
Abstract: The vast extent of the Amazon Basin has historically restricted the study of its tree communities to the local and regional scales. Here, we provide empirical data on the commonness, rarity, and richness of lowland tree species across the entire Amazon Basin and Guiana Shield (Amazonia), collected in 1170 tree plots in all major forest types. Extrapolations suggest that Amazonia harbors roughly 16,000 tree species, of which just 227 (1.4%) account for half of all trees. Most of these are habitat specialists and only dominant in one or two regions of the basin. We discuss some implications of the finding that a small group of species—less diverse than the North American tree flora—accounts for half of the world’s most diverse tree community.
963 citations
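The "227 species account for half of all trees" figure comes from a simple cumulative-abundance calculation, sketched below with made-up abundances:

```python
def hyperdominants(abundances):
    """Count how many species, taken from most to least abundant,
    are needed to account for at least half of all individual trees."""
    total = sum(abundances.values())
    running, count = 0, 0
    for n in sorted(abundances.values(), reverse=True):
        running += n
        count += 1
        if running * 2 >= total:  # reached half of all stems
            return count
    return count
```

Applied to the paper's extrapolated abundance estimates for ~16,000 species, this kind of tally yields the 227 "hyperdominant" species.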
Authors
Name | H-index | Papers | Citations |
---|---|---|---|
Jiawei Han | 168 | 1233 | 143427 |
Bernhard Schölkopf | 148 | 1092 | 149492 |
Christos Faloutsos | 127 | 789 | 77746 |
Alexander J. Smola | 122 | 434 | 110222 |
Rama Chellappa | 120 | 1031 | 62865 |
William F. Laurance | 118 | 470 | 56464 |
Andrew McCallum | 113 | 472 | 78240 |
Michael J. Black | 112 | 429 | 51810 |
David Heckerman | 109 | 483 | 62668 |
Larry S. Davis | 107 | 693 | 49714 |
Chris M. Wood | 102 | 795 | 43076 |
Pietro Perona | 102 | 414 | 94870 |
Guido W. Imbens | 97 | 352 | 64430 |
W. Bruce Croft | 97 | 426 | 39918 |
Chunhua Shen | 93 | 681 | 37468 |