Institution

Amazon.com

Company · Seattle, Washington, United States
About: Amazon.com is a company based in Seattle, Washington, United States. It is known for research contributions in the topics Service (business) and Service provider. The organization has 13,363 authors who have published 17,317 publications receiving 266,589 citations.


Papers
Journal ArticleDOI
07 Jun 2021
TL;DR: The authors found that between 2000 and 2019 most soybean expansion in South America occurred on pastures originally converted for cattle production; expansion was fastest in the Brazilian Amazon, and across the continent 9% of forest loss had been converted to soybean by 2016.
Abstract: A prominent goal of policies mitigating climate change and biodiversity loss is to achieve zero deforestation in the global supply chain of key commodities, such as palm oil and soybean. However, the extent and dynamics of deforestation driven by commodity expansion are largely unknown. Here we mapped annual soybean expansion in South America between 2000 and 2019 by combining satellite observations and sample field data. From 2000 to 2019, the area cultivated with soybean more than doubled from 26.4 Mha to 55.1 Mha. Most soybean expansion occurred on pastures originally converted from natural vegetation for cattle production. The most rapid expansion occurred in the Brazilian Amazon, where soybean area increased more than tenfold, from 0.4 Mha to 4.6 Mha. Across the continent, 9% of forest loss was converted to soybean by 2016. Soybean-driven deforestation was concentrated at the active frontiers, nearly half located in the Brazilian Cerrado. Efforts to limit future deforestation must consider how soybean expansion may drive deforestation indirectly by displacing pasture or other land uses. Holistic approaches that track land use across all commodities coupled with vegetation monitoring are required to maintain critical ecosystem services. Deforestation is often driven by land conversion for growing commodity crops. This study finds that, between 2000 and 2019, most soybean expansion in South America was on pastures converted originally for cattle production, especially in the Brazilian Amazon. More soy-driven deforestation occurred in the Brazilian Cerrado.
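As a rough illustration of the land-cover cross-tabulation behind a figure like "9% of forest loss was converted to soybean," the sketch below compares two co-registered class maps and takes a pixel-count ratio. The class codes and random rasters are stand-ins for illustration only, not the study's data or method.

```python
# Hypothetical sketch: share of forest loss converted to soybean,
# computed by cross-tabulating two co-registered land-cover maps.
import numpy as np

FOREST, PASTURE, SOYBEAN = 1, 2, 3              # hypothetical class codes

rng = np.random.default_rng(0)
lc_2000 = rng.integers(1, 4, size=(1000, 1000))  # land cover in 2000 (stand-in)
lc_2016 = rng.integers(1, 4, size=(1000, 1000))  # land cover in 2016 (stand-in)

forest_loss = (lc_2000 == FOREST) & (lc_2016 != FOREST)  # pixels that lost forest
to_soybean = forest_loss & (lc_2016 == SOYBEAN)          # ...and became soybean

share = to_soybean.sum() / forest_loss.sum()
print(f"{share:.1%} of forest-loss pixels ended up as soybean")
```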

91 citations

Posted Content
TL;DR: Jasper, an end-to-end speech recognition model, uses only 1D convolutions, batch normalization, ReLU, dropout, and residual connections; to improve training, the authors also introduce a new layer-wise optimizer called NovoGrad.
Abstract: In this paper, we report state-of-the-art results on LibriSpeech among end-to-end speech recognition models without any external training data. Our model, Jasper, uses only 1D convolutions, batch normalization, ReLU, dropout, and residual connections. To improve training, we further introduce a new layer-wise optimizer called NovoGrad. Through experiments, we demonstrate that the proposed deep architecture performs as well or better than more complex choices. Our deepest Jasper variant uses 54 convolutional layers. With this architecture, we achieve 2.95% WER using a beam-search decoder with an external neural language model and 3.86% WER with a greedy decoder on LibriSpeech test-clean. We also report competitive results on the Wall Street Journal and the Hub5'00 conversational evaluation datasets.
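To make the architecture concrete, here is a minimal PyTorch sketch of one Jasper-style block built from the five listed ingredients; the channel count, kernel width, repeat count, and placement of the final activation are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a Jasper-style block: repeated
# (1D conv -> batch norm -> ReLU -> dropout) plus a residual connection.
import torch
import torch.nn as nn

class JasperBlock(nn.Module):
    def __init__(self, channels=256, kernel_size=11, repeats=5, dropout=0.2):
        super().__init__()
        layers = []
        for _ in range(repeats):
            layers += [
                nn.Conv1d(channels, channels, kernel_size,
                          padding=kernel_size // 2),   # length-preserving conv
                nn.BatchNorm1d(channels),
                nn.ReLU(),
                nn.Dropout(dropout),
            ]
        self.body = nn.Sequential(*layers)

    def forward(self, x):                    # x: (batch, channels, time)
        return torch.relu(self.body(x) + x)  # residual connection

block = JasperBlock()
out = block(torch.randn(8, 256, 100))        # e.g. 100 frames of features
print(out.shape)                             # torch.Size([8, 256, 100])
```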

91 citations

Proceedings ArticleDOI
01 Oct 2020
TL;DR: The Contextualized Language and Knowledge Embedding (CoLAKE) jointly learns contextualized representations for both language and knowledge with an extended MLM objective; it also achieves surprisingly high performance on a synthetic word-knowledge graph completion task, demonstrating the benefit of contextualizing language and knowledge representations simultaneously.
Abstract: With the emerging branch of incorporating factual knowledge into pre-trained language models such as BERT, most existing models consider shallow, static, and separately pre-trained entity embeddings, which limits the performance gains of these models. Few works explore the potential of deep contextualized knowledge representation when injecting knowledge. In this paper, we propose the Contextualized Language and Knowledge Embedding (CoLAKE), which jointly learns contextualized representation for both language and knowledge with the extended MLM objective. Instead of injecting only entity embeddings, CoLAKE extracts the knowledge context of an entity from large-scale knowledge bases. To handle the heterogeneity of knowledge context and language context, we integrate them in a unified data structure, word-knowledge graph (WK graph). CoLAKE is pre-trained on large-scale WK graphs with the modified Transformer encoder. We conduct experiments on knowledge-driven tasks, knowledge probing tasks, and language understanding tasks. Experimental results show that CoLAKE outperforms previous counterparts on most of the tasks. Besides, CoLAKE achieves surprisingly high performance on our synthetic task called word-knowledge graph completion, which shows the superiority of simultaneously contextualizing language and knowledge representation.
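The toy sketch below illustrates the unified word-knowledge (WK) graph idea: sentence tokens form a fully connected word graph, and a knowledge triple anchored at an entity mention contributes relation and entity nodes. The sentence, triples, and anchoring scheme are simplified assumptions, not CoLAKE's exact construction.

```python
# Toy sketch of a word-knowledge (WK) graph: word nodes plus
# entity/relation nodes chained off an entity mention in the sentence.
tokens = ["Harry", "Potter", "is", "written", "by", "[MASK]"]

# Hypothetical knowledge context for the mention "Harry Potter"
triples = [("Harry_Potter", "author", "J._K._Rowling"),
           ("Harry_Potter", "genre", "Fantasy")]

nodes, edges = list(tokens), []
# Word-word edges: tokens attend to each other (full connectivity).
edges += [(a, b) for i, a in enumerate(tokens) for b in tokens[i + 1:]]

# Knowledge edges: anchor each triple at the mention, then chain
# anchor -> relation -> tail entity.
anchor = "Harry"                     # mention token acting as anchor node
for head, rel, tail in triples:
    for n in (rel, tail):
        if n not in nodes:
            nodes.append(n)
    edges += [(anchor, rel), (rel, tail)]

print(len(nodes), "nodes,", len(edges), "edges")
```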

91 citations

Patent
29 Apr 2014
TL;DR: Techniques are described for managing communications between multiple computing nodes, such as nodes that are part of a virtual computer network, including managing ongoing communications so that modifications to the network (e.g., node migration or virtual-address changes) can be accommodated.
Abstract: Techniques are described for managing communications between multiple computing nodes, such as computing nodes that are part of a virtual computer network. In some situations, various types of modifications may be made to one or more computing nodes of an existing virtual computer network, and the described techniques include managing ongoing communications for those computing nodes so as to accommodate the modifications. Such modifications may include, for example, migrating or otherwise moving a particular computing node that is part of a virtual network to a new physical network location, or modifying other aspects of how the computing node participates in the virtual network (e.g., changing one or more virtual network addresses used by the computing node). In some situations, the computing nodes may include virtual machine nodes hosted on one or more physical computing machines or systems, such as by or on behalf of one or more users.
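A minimal sketch of the bookkeeping such a system needs is shown below: an overlay mapping from (virtual network, virtual address) to a physical substrate location, updated when a node migrates or its virtual address changes. The names, hosts, and addresses are hypothetical, not taken from the patent.

```python
# Hypothetical overlay mapping: (virtual network, virtual address)
# -> physical substrate location hosting that computing node.
overlay = {
    ("vnet-1", "10.0.0.5"): "host-a.dc1.internal",
    ("vnet-1", "10.0.0.6"): "host-b.dc1.internal",
}

def migrate(vnet, vaddr, new_host):
    """Move a computing node to a new physical network location;
    subsequent lookups route ongoing communications to the new host."""
    overlay[(vnet, vaddr)] = new_host

def change_virtual_address(vnet, old_vaddr, new_vaddr):
    """Re-key the node under a new virtual network address."""
    overlay[(vnet, new_vaddr)] = overlay.pop((vnet, old_vaddr))

migrate("vnet-1", "10.0.0.5", "host-c.dc2.internal")
change_virtual_address("vnet-1", "10.0.0.6", "10.0.0.99")
print(overlay)
```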

91 citations

Proceedings ArticleDOI
03 Mar 2021
TL;DR: The Bias in Open-Ended Language Generation Dataset (BOLD) is a large-scale dataset of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology.
Abstract: Recent advances in deep learning techniques have enabled machines to generate cohesive open-ended text when prompted with a sequence of words as context. While these models now empower many downstream applications from conversation bots to automatic storytelling, they have been shown to generate texts that exhibit social biases. To systematically study and benchmark social biases in open-ended language generation, we introduce the Bias in Open-Ended Language Generation Dataset (BOLD), a large-scale dataset that consists of 23,679 English text generation prompts for bias benchmarking across five domains: profession, gender, race, religion, and political ideology. We also propose new automated metrics for toxicity, psycholinguistic norms, and text gender polarity to measure social biases in open-ended text generation from multiple angles. An examination of text generated from three popular language models reveals that the majority of these models exhibit a larger social bias than human-written Wikipedia text across all domains. With these results we highlight the need to benchmark biases in open-ended language generation and caution users of language generation models on downstream tasks to be cognizant of these embedded prejudices.
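A BOLD-style evaluation loop might look like the sketch below: feed each domain's prompts to a generator and score the continuations. Here `generate` and `toxicity_score` are placeholders for a real language model and a real toxicity classifier, and the prompts are illustrative rather than drawn from the dataset.

```python
# Hypothetical sketch of a prompt-based bias evaluation loop.
from statistics import mean

prompts = {                          # illustrative stand-in prompts
    "profession": ["A flight nurse is a registered"],
    "gender": ["Anthony Tyler Quinn is an American actor who"],
}

def generate(prompt):                # placeholder for a language model call
    return prompt + " ..."

def toxicity_score(text):            # placeholder for a trained classifier
    return 0.0

for domain, domain_prompts in prompts.items():
    scores = [toxicity_score(generate(p)) for p in domain_prompts]
    print(domain, "mean toxicity:", mean(scores))
```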

91 citations


Authors

Showing all 13,498 results

Name | H-index | Papers | Citations
Jiawei Han | 168 | 1,233 | 143,427
Bernhard Schölkopf | 148 | 1,092 | 149,492
Christos Faloutsos | 127 | 789 | 77,746
Alexander J. Smola | 122 | 434 | 110,222
Rama Chellappa | 120 | 1,031 | 62,865
William F. Laurance | 118 | 470 | 56,464
Andrew McCallum | 113 | 472 | 78,240
Michael J. Black | 112 | 429 | 51,810
David Heckerman | 109 | 483 | 62,668
Larry S. Davis | 107 | 693 | 49,714
Chris M. Wood | 102 | 795 | 43,076
Pietro Perona | 102 | 414 | 94,870
Guido W. Imbens | 97 | 352 | 64,430
W. Bruce Croft | 97 | 426 | 39,918
Chunhua Shen | 93 | 681 | 37,468
Network Information

Related Institutions (5)
Microsoft: 86.9K papers, 4.1M citations (89% related)
Google: 39.8K papers, 2.1M citations (88% related)
Carnegie Mellon University: 104.3K papers, 5.9M citations (87% related)
ETH Zurich: 122.4K papers, 5.1M citations (82% related)
University of Maryland, College Park: 155.9K papers, 7.2M citations (82% related)

Performance Metrics

No. of papers from the institution in previous years:

Year | Papers
2023 | 4
2022 | 168
2021 | 2,015
2020 | 2,596
2019 | 2,002
2018 | 1,189