Institution

Amazon.com

Company · Seattle, Washington, United States
About: Amazon.com is a company based in Seattle, Washington, United States. It is known for research contributions in the topics: Service (business) and Service provider. The organization has 13363 authors who have published 17317 publications receiving 266589 citations.


Papers
Patent
07 May 2001
TL;DR: In this patent, a computer-implemented service recommends items to a user based on items previously selected by the user, such as items previously purchased, viewed, or placed in an electronic shopping cart.
Abstract: A computer-implemented service recommends items to a user based on items previously selected by the user, such as items previously purchased, viewed, or placed in an electronic shopping cart by the user. The items may, for example, be products represented within a database of an online merchant. In one embodiment, the service generates the recommendations using a previously generated table that maps items to respective lists of “similar” items. To generate the table, historical data indicative of users' affinities for particular items is processed periodically to identify correlations between item interests of users (e.g., items A and B are similar because a large portion of those who selected A also selected B). Personal recommendations are generated by accessing the table to identify items similar to those selected by the user. In one embodiment, items are recommended based on the current contents of a user's shopping cart.
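
The table-lookup approach described in this abstract can be sketched in a few lines. Below is a minimal, illustrative Python version of item-to-item recommendation: a co-occurrence table is built offline from users' selection histories, and recommendations are then served by looking up items similar to the current shopping-cart contents. The data layout and scoring are assumptions for illustration, not the patented implementation.

```python
from collections import defaultdict
from itertools import combinations

def build_similar_items_table(user_histories, top_k=10):
    """Build an item -> [similar items] table from per-user selection histories.

    Similarity here is a simple co-occurrence count: items A and B are treated
    as similar when many users who selected A also selected B.
    """
    co_counts = defaultdict(lambda: defaultdict(int))
    for items in user_histories:
        for a, b in combinations(set(items), 2):
            co_counts[a][b] += 1
            co_counts[b][a] += 1
    return {
        item: [other for other, _ in sorted(neighbours.items(),
                                            key=lambda kv: kv[1],
                                            reverse=True)[:top_k]]
        for item, neighbours in co_counts.items()
    }

def recommend(cart, similar_items, max_recs=5):
    """Recommend items similar to the current cart contents, excluding the cart itself."""
    scores = defaultdict(int)
    for item in cart:
        for rank, candidate in enumerate(similar_items.get(item, [])):
            if candidate not in cart:
                # Earlier (more similar) neighbours get a higher score.
                scores[candidate] += len(similar_items[item]) - rank
    return [c for c, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:max_recs]]

# Example: build the table from three users' histories, then recommend for a cart.
histories = [["book_a", "book_b", "lamp"], ["book_a", "book_b"], ["lamp", "book_c"]]
table = build_similar_items_table(histories)
print(recommend(["book_a"], table))  # -> ['book_b', 'lamp']
```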

259 citations

Journal Article
TL;DR: A probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends is put forth.
Abstract: Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.
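
A minimal sketch of the nonlinear autoregressive idea using scikit-learn's GaussianProcessRegressor: a first GP is fit to plentiful low-fidelity data, and a second GP learns the high-fidelity response as a nonlinear function of both the input and the low-fidelity prediction at that input. The synthetic functions are illustrative, and for brevity only the low-fidelity posterior mean is propagated, whereas the paper's full scheme also accounts for low-fidelity predictive uncertainty.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Synthetic example: a cheap, biased low-fidelity model and a few expensive high-fidelity samples.
def f_low(x):  return np.sin(8.0 * np.pi * x)
def f_high(x): return (x - np.sqrt(2.0)) * f_low(x) ** 2

rng = np.random.default_rng(0)
x_lo = np.linspace(0.0, 1.0, 50)[:, None]   # abundant low-fidelity inputs
x_hi = rng.uniform(0.0, 1.0, 8)[:, None]    # scarce high-fidelity inputs

# Step 1: fit a GP to the low-fidelity data.
gp_lo = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
gp_lo.fit(x_lo, f_low(x_lo).ravel())

# Step 2: fit a second GP whose inputs are (x, low-fidelity prediction at x).
# Modelling f_high as a nonlinear function of both captures space-dependent,
# nonlinear cross-correlations instead of assuming a constant linear scaling.
z_hi = np.hstack([x_hi, gp_lo.predict(x_hi)[:, None]])
gp_hi = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=[1.0, 1.0]), normalize_y=True)
gp_hi.fit(z_hi, f_high(x_hi).ravel())

# Prediction at new points: route them through the low-fidelity GP first.
x_new = np.linspace(0.0, 1.0, 200)[:, None]
z_new = np.hstack([x_new, gp_lo.predict(x_new)[:, None]])
mean, std = gp_hi.predict(z_new, return_std=True)
```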

259 citations

Patent
29 Mar 2007
TL;DR: In this patent, the authors describe techniques for managing the execution of programs on a plurality of computing systems, such as computing systems organized into multiple groups, and a program execution service that manages the program execution on behalf of multiple customers or other users.
Abstract: Techniques are described for managing the execution of programs on a plurality of computing systems, such as computing systems organized into multiple groups. A program execution service manages the program execution on behalf of multiple customers or other users, and selects appropriate computing systems to execute one or more instances of a program, such as based in part on locations of one or more previously stored copies of the program from which copies of the program to execute may be acquired. For example, in some situations the selection of an appropriate computing system to execute an instance of a program is based in part on physical or logical proximity to other resources, such as stored copies of the program, executing copies of the program, and/or available computing systems.
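
The placement decision this abstract alludes to can be illustrated with a simple locality-aware scoring function: hosts that already store a copy of the program, or sit near one, are preferred. The grouping levels (rack, datacenter) and the scores below are hypothetical and only stand in for the proximity criteria the patent describes.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    rack: str
    datacenter: str
    available: bool = True
    stored_programs: set = field(default_factory=set)

def locality_score(host, program_id, peers):
    """Illustrative scoring: prefer hosts that already hold a copy of the program,
    then hosts in the same rack or datacenter as an existing copy."""
    if program_id in host.stored_programs:
        return 3
    if any(p.rack == host.rack and program_id in p.stored_programs for p in peers):
        return 2
    if any(p.datacenter == host.datacenter and program_id in p.stored_programs for p in peers):
        return 1
    return 0

def select_host(hosts, program_id):
    """Pick the available host with the best locality to existing copies, if any."""
    candidates = [h for h in hosts if h.available]
    return max(candidates, key=lambda h: locality_score(h, program_id, hosts), default=None)

# Example: the host already storing "prog-1" wins over an empty one.
hosts = [Host("h1", "r1", "dc1", stored_programs={"prog-1"}), Host("h2", "r2", "dc1")]
print(select_host(hosts, "prog-1").name)  # -> "h1"
```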

257 citations

Patent
23 Jul 2015
TL;DR: In this patent, the authors describe techniques for providing managed virtual computer networks whose configured logical network topology may have one or more virtual networking devices, such as by a network-accessible configurable network service, with corresponding networking functionality provided for communications between multiple computing nodes of a virtual computer network by emulating functionality that would be provided by the networking devices if they were physically present.
Abstract: Techniques are described for providing managed virtual computer networks whose configured logical network topology may have one or more virtual networking devices, such as by a network-accessible configurable network service, with corresponding networking functionality provided for communications between multiple computing nodes of a virtual computer network by emulating functionality that would be provided by the networking devices if they were physically present. The networking functionality provided for a managed computer network may include supporting a connection between that managed computer network and other managed computer networks, such as via a provided virtual peering router to which each of the managed computer networks may connect, with the functionality of the virtual peering router being emulated by modules of the configurable network service without physically providing the virtual peering router, including to manage data communications between computing nodes of the inter-connected managed computer networks in accordance with client-specified configuration information.
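
The core idea of emulating a virtual peering router in software can be sketched as a route table consulted by service modules rather than by a physical device. The class and route entries below are hypothetical; they only illustrate forwarding between managed virtual networks via longest-prefix match over client-specified routes.

```python
import ipaddress

class VirtualPeeringRouter:
    """Toy emulation of a peering router between managed virtual networks:
    forwarding decisions come from client-supplied route entries held in software,
    with no physical router in the data path."""

    def __init__(self):
        self._routes = []  # list of (ip_network, destination virtual-network id)

    def add_route(self, cidr, network_id):
        self._routes.append((ipaddress.ip_network(cidr), network_id))

    def next_hop_network(self, dst_ip):
        """Longest-prefix match over the configured peered networks."""
        addr = ipaddress.ip_address(dst_ip)
        matches = [(net, nid) for net, nid in self._routes if addr in net]
        if not matches:
            return None
        return max(matches, key=lambda m: m[0].prefixlen)[1]

# Example: two managed networks peered through the emulated router.
router = VirtualPeeringRouter()
router.add_route("10.0.0.0/16", "network-a")
router.add_route("10.1.0.0/16", "network-b")
print(router.next_hop_network("10.1.4.7"))  # -> "network-b"
```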

256 citations

Posted Content
TL;DR: The implicit MAML algorithm as discussed by the authors decouples the meta-gradient computation from the choice of inner-loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints.
Abstract: A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient (or optimization) based meta-learning has recently emerged as an effective approach for few-shot learning. In this formulation, meta-parameters are learned in the outer loop, while task-specific models are learned in the inner-loop, by using only a small amount of data from the current task. A key challenge in scaling these approaches is the need to differentiate through the inner loop learning process, which can impose considerable computational and memory burdens. By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer. This effectively decouples the meta-gradient computation from the choice of inner loop optimizer. As a result, our approach is agnostic to the choice of inner loop optimizer and can gracefully handle many gradient steps without vanishing gradients or memory constraints. Theoretically, we prove that implicit MAML can compute accurate meta-gradients with a memory footprint that is, up to small constant factors, no more than that which is required to compute a single inner loop gradient and at no overall increase in the total computational cost. Experimentally, we show that these benefits of implicit MAML translate into empirical gains on few-shot image recognition benchmarks.
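
The implicit meta-gradient described in this abstract can be sketched concisely: at the adapted inner-loop solution, the meta-gradient is obtained by solving (I + ∇²L_train/λ) v = ∇L_test with conjugate gradient, which needs only Hessian-vector products rather than the inner-loop optimization trajectory. The toy quadratic problem and the variable names below are illustrative, not the paper's experimental setup.

```python
import numpy as np

def conjugate_gradient(hvp, b, iters=20, tol=1e-8):
    """Solve A x = b using only matrix-vector products hvp(v) = A v (A symmetric positive definite)."""
    x = np.zeros_like(b)
    r = b - hvp(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = hvp(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def implicit_meta_gradient(test_grad, train_hvp, lam):
    """Implicit-MAML-style meta-gradient: solve (I + H/lam) v = grad_test at the
    adapted parameters instead of back-propagating through the inner-loop steps."""
    return conjugate_gradient(lambda v: v + train_hvp(v) / lam, test_grad)

# Toy example with a quadratic inner loss 0.5 * phi^T H phi, so H v is exact.
H = np.array([[3.0, 0.5], [0.5, 1.0]])
grad_test = np.array([1.0, -2.0])
print(implicit_meta_gradient(grad_test, lambda u: H @ u, lam=1.0))
```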

253 citations


Authors

Showing all 13498 results

Name | H-index | Papers | Citations
Jiawei Han | 168 | 1233 | 143427
Bernhard Schölkopf | 148 | 1092 | 149492
Christos Faloutsos | 127 | 789 | 77746
Alexander J. Smola | 122 | 434 | 110222
Rama Chellappa | 120 | 1031 | 62865
William F. Laurance | 118 | 470 | 56464
Andrew McCallum | 113 | 472 | 78240
Michael J. Black | 112 | 429 | 51810
David Heckerman | 109 | 483 | 62668
Larry S. Davis | 107 | 693 | 49714
Chris M. Wood | 102 | 795 | 43076
Pietro Perona | 102 | 414 | 94870
Guido W. Imbens | 97 | 352 | 64430
W. Bruce Croft | 97 | 426 | 39918
Chunhua Shen | 93 | 681 | 37468
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

89% related

Google
39.8K papers, 2.1M citations

88% related

Carnegie Mellon University
104.3K papers, 5.9M citations

87% related

ETH Zurich
122.4K papers, 5.1M citations

82% related

University of Maryland, College Park
155.9K papers, 7.2M citations

82% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 4
2022 | 168
2021 | 2,015
2020 | 2,596
2019 | 2,002
2018 | 1,189