Institution

Amazon.com

Company
Seattle, Washington, United States

About: Amazon.com is a company based in Seattle, Washington, United States. It is known for research contributions in the topics: Computer science & Service (business). The organization has 13363 authors who have published 17317 publications receiving 266589 citations.


Papers
Proceedings ArticleDOI
08 Sep 2016
TL;DR: LATTICERNN generalizes recurrent neural networks to process weighted lattices as input instead of sequences; making decisions based on the full ASR output lattice is shown to make SLU systems more robust to ASR errors.
Abstract: We present a new model called LATTICERNN, which generalizes recurrent neural networks (RNNs) to process weighted lattices as input, instead of sequences. A LATTICERNN can encode the complete structure of a lattice into a dense representation, which makes it suited to a variety of problems, including rescoring, classifying, parsing, or translating lattices using deep neural networks (DNNs). In this paper, we use LATTICERNNs for a classification task: each lattice represents the output from an automatic speech recognition (ASR) component of a spoken language understanding (SLU) system, and we classify the intent of the spoken utterance based on the lattice embedding computed by a LATTICERNN. We show that making decisions based on the full ASR output lattice, as opposed to 1-best or n-best hypotheses, makes SLU systems more robust to ASR errors. Our experiments yield improvements of 13% over a baseline RNN system trained on transcriptions and 10% over an n-best list rescoring system for intent classification.
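The core idea of the abstract can be sketched in a few lines: instead of feeding one token sequence to an RNN, states are propagated through the lattice (a weighted DAG) in topological order, pooling predecessor states by arc weight. This is an illustrative toy, not the paper's implementation; the cell, pooling rule, and shapes are assumptions.

```python
import numpy as np

def rnn_cell(h, x, Wh, Wx):
    """Plain tanh RNN cell (stand-in for whatever cell the model uses)."""
    return np.tanh(Wh @ h + Wx @ x)

def lattice_rnn(nodes, arcs, embed, Wh, Wx, hidden=4):
    # nodes: node ids in topological order
    # arcs: (src, dst, word_index, weight); arc weights into a node sum to 1
    states = {nodes[0]: np.zeros(hidden)}
    for n in nodes[1:]:
        incoming = [(s, w_ix, w) for (s, d, w_ix, w) in arcs if d == n]
        # weight-pool predecessor states after updating them through each arc
        states[n] = sum(w * rnn_cell(states[s], embed[w_ix], Wh, Wx)
                        for s, w_ix, w in incoming)
    return states[nodes[-1]]  # dense embedding of the whole lattice

rng = np.random.default_rng(0)
embed = rng.normal(size=(10, 3))                  # toy word embeddings
Wh, Wx = rng.normal(size=(4, 4)), rng.normal(size=(4, 3))
# two-hypothesis lattice: node 0 -> 1 via word 2 (p=0.7) or word 5 (p=0.3), then 1 -> 2
nodes = [0, 1, 2]
arcs = [(0, 1, 2, 0.7), (0, 1, 5, 0.3), (1, 2, 7, 1.0)]
vec = lattice_rnn(nodes, arcs, embed, Wh, Wx)
print(vec.shape)  # (4,)
```

A 1-best system would run the cell along a single path; here the 0.7/0.3 alternative hypotheses both contribute to the final embedding, which is what makes the representation robust to ASR errors.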

74 citations

Patent
25 Jun 2013
TL;DR: In this article, gaze determination is used as an indication that a predetermined input, such as a tap or a voice command, reflects the user's intent to unlock at least some additional functionality of the computing device.
Abstract: A computing device, in a locked operational state, captures image information of a user which is analyzed to determine the direction of the user's gaze. When the user's gaze is determined to be substantially in the direction of the device, a predetermined input from the user, such as a tap or a voice command, will provide the user with access to at least some functionality of the device that was previously unavailable. If, however, the computing device detects what appears to be the predetermined input, but the user's gaze direction is not in the direction of the device, the computing device will remain in the locked operational state. Therefore, in accordance with various embodiments, gaze determination is utilized as an indication that the user intends to unlock at least some additional functionality of the computing device.
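The claimed control flow reduces to a small state machine: the predetermined input only unlocks the device while the estimated gaze is substantially toward it. A minimal sketch, assuming an angular gaze estimate; the tolerance value and function names are illustrative, not from the patent.

```python
GAZE_TOLERANCE_DEG = 15.0  # assumed threshold for "substantially toward the device"

def gaze_toward_device(gaze_angle_deg: float) -> bool:
    """True when the estimated gaze deviates from the camera axis by less
    than the tolerance."""
    return abs(gaze_angle_deg) < GAZE_TOLERANCE_DEG

def next_state(state: str, gaze_angle_deg: float, got_input: bool) -> str:
    # Unlock requires BOTH the predetermined input and an on-device gaze;
    # the input alone (e.g. an overheard voice command) leaves the device locked.
    if state == "LOCKED" and got_input and gaze_toward_device(gaze_angle_deg):
        return "UNLOCKED"
    return state

print(next_state("LOCKED", 3.0, True))   # UNLOCKED
print(next_state("LOCKED", 40.0, True))  # LOCKED: input received, gaze elsewhere
```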

74 citations

Book ChapterDOI
Chris Newcombe
02 Jun 2014
TL;DR: Describes the reasons why TLA+ was chosen at Amazon instead of other methods, and the areas in which the team would welcome further progress.
Abstract: Since 2011, engineers at Amazon have been using TLA+ to help solve difficult design problems in critical systems. This paper describes the reasons why we chose TLA+ instead of other methods, and areas in which we would welcome further progress.

74 citations

Journal ArticleDOI
Jean-François Bastin, Ervan Rutishauser1, James R. Kellner2, Sassan Saatchi3, Raphaël Pélissier4, Bruno Hérault, Ferry Slik, Jan Bogaert5, Charles De Cannière6, Andrew R. Marshall7, Andrew R. Marshall8, John R. Poulsen9, Patricia Alvarez-Loyayza10, Ana Andrade, Albert Angbonga-Basia, Alejandro Araujo-Murakami, Luzmila Arroyo11, Narayanan Ayyappan12, Narayanan Ayyappan13, Celso Paulo de Azevedo14, Olaf Bánki15, Nicolas Barbier4, Jorcely Barroso15, Hans Beeckman16, Robert Bitariho17, Pascal Boeckx18, Katrin Boehning-Gaese19, Hilandia Brandão20, Francis Q. Brearley21, Mireille Breuer-Ndoundou Hockemba22, Roel J. W. Brienen23, José Luís Camargo, Ahimsa Campos-Arceiz, Benoît Cassart24, Benoît Cassart25, Jérôme Chave26, Robin L. Chazdon27, Georges Chuyong28, David B. Clark29, Connie J. Clark9, Richard Condit10, Eurídice N. Honorio Coronado, Priya Davidar11, Thalès de Haulleville5, Thalès de Haulleville16, Laurent Descroix, Jean-Louis Doucet5, Aurélie Dourdain30, Vincent Droissart4, Thomas Duncan31, Javier Silva Espejo32, Santiago Espinosa33, Nina Farwig34, Adeline Fayolle5, Ted R. Feldpausch35, Antonio Ferraz3, Christine Fletcher, Krisna Gajapersad36, Jean François Gillet5, Iêda Leão do Amaral20, Christelle Gonmadje37, James Grogan38, David Harris39, Sebastian K. Herzog, Jürgen Homeier40, Wannes Hubau16, Stephen P. Hubbell1, Stephen P. Hubbell41, Koen Hufkens18, Johanna Hurtado42, Narcisse Guy Kamdem37, Elizabeth Kearsley18, David Kenfack1, Michael Kessler43, Nicolas Labrière44, Yves Laumonier45, Susan G. Laurance46, William F. Laurance46, Simon L. Lewis23, Moses Libalah37, Gauthier Ligot5, Jon Lloyd47, Jon Lloyd48, Thomas E. Lovejoy48, Yadvinder Malhi49, Beatriz Schwantes Marimon50, Ben Hur Marimon Junior50, Emmanuel H. Martin51, Paulus Matius52, Victoria Meyer3, Casimero Mendoza Bautista53, Abel Monteagudo-Mendoza, Arafat S. Mtui, David A. Neill, Germaine Alexander Parada Gutierrez, Guido Pardo, Marc P. E. Parren, Narayanaswamy Parthasarathy13, Oliver L. 
Phillips23, Nigel C. A. Pitman, Pierre Ploton4, Quentin Ponette24, B.R. Ramesh13, Jean Claude Razafimahaimodison, Maxime Réjou-Méchain4, Samir Gonçalves Rolim12, Hugo Romero Saltos54, Luiz Marcelo Brum Rossi12, Wilson Roberto Spironello20, Francesco Rovero, Philippe Saner43, Denise Sasaki, Mark Schulze, Marcos Silveira15, James Singh55, Plinio Sist, Bonaventure Sonké37, J. Daniel Soto, Cintia Rodrigues de Souza12, Juliana Stropp56, Martin J. P. Sullivan23, Ben Swanepoel22, Hans ter Steege57, Hans ter Steege14, John Terborgh46, John Terborgh58, Nicolas Texier6, Takeshi Toma, Renato Valencia59, Luis Valenzuela, Leandro Valle Ferreira60, Fernando Cornejo Valverde20, Tinde van Andel14, Rodolfo Vasque, Hans Verbeeck18, Pandi Vivek11, Jason Vleminckx61, Vincent A. Vos, Fabien Wagner62, Papi Puspa Warsudi52, Verginia Wortel, Roderick Zagt63, Donatien Zebaze37 
Smithsonian Tropical Research Institute1, Brown University2, California Institute of Technology3, Centre national de la recherche scientifique4, Gembloux Agro-Bio Tech5, Université libre de Bruxelles6, University of York7, University of the Sunshine Coast8, Duke University9, Field Museum of Natural History10, Pondicherry University11, Empresa Brasileira de Pesquisa Agropecuária12, French Institute of Pondicherry13, Naturalis14, Universidade Federal do Acre15, Royal Museum for Central Africa16, Mbarara University of Science and Technology17, Ghent University18, Goethe University Frankfurt19, Amazon.com20, Manchester Metropolitan University21, Wildlife Conservation Society22, University of Leeds23, Université catholique de Louvain24, École Normale Supérieure25, Paul Sabatier University26, University of Connecticut27, University of Buea28, University of Missouri–St. Louis29, University of the French West Indies and Guiana30, Oregon State University31, University of La Serena32, Universidad Autónoma de San Luis Potosí33, University of Marburg34, University of Exeter35, Conservation International36, University of Yaoundé I37, Smith College38, Royal Botanic Garden Edinburgh39, University of Göttingen40, University of California, Los Angeles41, Organization for Tropical Studies42, University of Zurich43, Agro ParisTech44, Center for International Forestry Research45, James Cook University46, Imperial College London47, George Mason University48, Environmental Change Institute49, Universidade do Estado de Mato Grosso50, Sokoine University of Agriculture51, Mulawarman University52, Universidad Mayor53, Universidad Yachay Tech54, Forestry Commission55, Federal University of Alagoas56, University of Amsterdam57, Florida Museum of Natural History58, Pontificia Universidad Católica del Ecuador59, Museu Paraense Emílio Goeldi60, University of California, Berkeley61, National Institute for Space Research62, Tropenbos International63
TL;DR: In this paper, a pan-tropical model is proposed to predict plot-level forest structure properties and biomass from only the largest trees; the approach can be used to generate accurate field estimates of tropical forest carbon stocks to support the calibration and validation of current and forthcoming space missions.
Abstract: Aim: Large tropical trees form the interface between ground and airborne observations, offering a unique opportunity to capture forest properties remotely and to investigate their variations on broad scales. However, despite rapid development of metrics to characterize the forest canopy from remotely sensed data, a gap remains between aerial and field inventories. To close this gap, we propose a new pan-tropical model to predict plot-level forest structure properties and biomass from only the largest trees. Location: Pan-tropical. Time period: Early 21st century. Major taxa studied: Woody plants. Methods: Using a dataset of 867 plots distributed among 118 sites across the tropics, we tested the prediction of the quadratic mean diameter, basal area, Lorey's height, community wood density and aboveground biomass (AGB) from the ith largest trees. Results and main conclusions: Measuring the largest trees in tropical forests enables unbiased predictions of plot- and site-level forest structure. The 20 largest trees per hectare predicted quadratic mean diameter, basal area, Lorey's height, community wood density and AGB with 12, 16, 4, 4 and 17.7% of relative error, respectively. Most of the remaining error in biomass prediction is driven by differences in the proportion of total biomass held in medium-sized trees (50-70 cm diameter at breast height), which shows some continental dependency, with American tropical forests presenting the highest proportion of total biomass in these intermediate-diameter classes relative to other continents. Our approach provides new information on tropical forest structure and can be used to generate accurate field estimates of tropical forest carbon stocks to support the calibration and validation of current and forthcoming space missions. It will reduce the cost of field inventories and contribute to scientific understanding of tropical forest ecosystems and response to climate change.
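Two of the plot-level metrics named in the abstract have standard forestry definitions and can be computed from the k largest stems alone, which is the paper's core idea. A minimal sketch: the formulas (quadratic mean diameter, stem basal area) are textbook definitions, while the toy diameter values and k=5 cutoff are made up for illustration (the paper uses the 20 largest trees per hectare).

```python
import math

def quadratic_mean_diameter(diams_cm):
    # QMD = sqrt(mean of squared diameters)
    return math.sqrt(sum(d * d for d in diams_cm) / len(diams_cm))

def basal_area_m2(diams_cm):
    # basal area of a stem = pi * (d/2)^2, with d converted from cm to m
    return sum(math.pi * (d / 200.0) ** 2 for d in diams_cm)

def largest_k(diams_cm, k=20):
    # keep only the k largest stems, as in the paper's approach
    return sorted(diams_cm, reverse=True)[:k]

plot = [12.0, 35.5, 60.2, 81.0, 22.3, 47.7, 95.4, 18.9]  # toy DBH values, cm
top = largest_k(plot, k=5)
print(round(quadratic_mean_diameter(top), 1))  # 67.6 (cm)
print(round(basal_area_m2(top), 3))            # 1.792 (m^2)
```

Fitting plot-level totals (and ultimately AGB) as a function of such largest-tree summaries is what lets a field crew measure 20 stems instead of a full census.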

74 citations

Proceedings ArticleDOI
14 Jun 2020
TL;DR: This work proposes the first end-to-end algorithm for ZSL in video classification, which uses a trainable 3D CNN to learn the visual features and outperforms the state-of-the-art by a wide margin.
Abstract: Trained on large datasets, deep learning (DL) can accurately classify videos into hundreds of diverse classes. However, video data is expensive to annotate. Zero-shot learning (ZSL) proposes one solution to this problem. ZSL trains a model once, and generalizes to new tasks whose classes are not present in the training dataset. We propose the first end-to-end algorithm for ZSL in video classification. Our training procedure builds on insights from recent video classification literature and uses a trainable 3D CNN to learn the visual features. This is in contrast to previous video ZSL methods, which use pretrained feature extractors. We also extend the current benchmarking paradigm: Previous techniques aim to make the test task unknown at training time but fall short of this goal. We encourage domain shift across training and test data and disallow tailoring a ZSL model to a specific test dataset. We outperform the state-of-the-art by a wide margin. Our code, evaluation procedure and model weights are available online at github.com/bbrattoli/ZeroShotVideoClassification.
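The inference step common to this family of ZSL methods can be sketched briefly: a visual encoder maps a clip into a semantic space, and an unseen class is picked by nearest class embedding. Here a random linear map stands in for the trained end-to-end 3D CNN, and all shapes and names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode_clip(clip, W):
    # stand-in for the trained 3D CNN: pooled frame features -> semantic space
    return W @ clip.mean(axis=0)

def zsl_classify(clip, W, class_names, class_embeds):
    z = encode_clip(clip, W)
    # cosine similarity against embeddings of classes never seen in training
    sims = class_embeds @ z / (np.linalg.norm(class_embeds, axis=1)
                               * np.linalg.norm(z) + 1e-9)
    return class_names[int(np.argmax(sims))]

clip = rng.normal(size=(16, 32))      # 16 frames of 32-d toy visual features
W = rng.normal(size=(8, 32))          # projection into an 8-d semantic space
names = ["archery", "surfing", "knitting"]
embeds = rng.normal(size=(3, 8))      # stand-in word embeddings of class names
print(zsl_classify(clip, W, names, embeds))  # one of the unseen class names
```

The paper's contribution is that the encoder (here the frozen `W`) is learned end-to-end with the rest of the pipeline, rather than taken from a pretrained feature extractor.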

74 citations


Authors

Showing all 13498 results

Name | H-index | Papers | Citations
Jiawei Han | 168 | 1233 | 143427
Bernhard Schölkopf | 148 | 1092 | 149492
Christos Faloutsos | 127 | 789 | 77746
Alexander J. Smola | 122 | 434 | 110222
Rama Chellappa | 120 | 1031 | 62865
William F. Laurance | 118 | 470 | 56464
Andrew McCallum | 113 | 472 | 78240
Michael J. Black | 112 | 429 | 51810
David Heckerman | 109 | 483 | 62668
Larry S. Davis | 107 | 693 | 49714
Chris M. Wood | 102 | 795 | 43076
Pietro Perona | 102 | 414 | 94870
Guido W. Imbens | 97 | 352 | 64430
W. Bruce Croft | 97 | 426 | 39918
Chunhua Shen | 93 | 681 | 37468
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

89% related

Google
39.8K papers, 2.1M citations

88% related

Carnegie Mellon University
104.3K papers, 5.9M citations

87% related

ETH Zurich
122.4K papers, 5.1M citations

82% related

University of Maryland, College Park
155.9K papers, 7.2M citations

82% related

Performance Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 4
2022 | 168
2021 | 2015
2020 | 2596
2019 | 2002
2018 | 1189