Institution

Amazon.com

Company · Seattle, Washington, United States
About: Amazon.com is a company based in Seattle, Washington, United States. It is known for research contributions in the topics of Computer science and Service (business). The organization has 13,363 authors who have published 17,317 publications receiving 266,589 citations.


Papers
Journal ArticleDOI
Bruce Walker Nelson
01 Jul 1994
TL;DR: Landsat Thematic Mapper images covering the entire 3.9 million km2 of forested Brazilian Amazon reveal natural change and disturbance occurring on a scale of decades to centuries.
Abstract: Landsat Thematic Mapper images covering the entire 3.9 million km2 of forested Brazilian Amazon reveal natural change and disturbance occurring on a scale of decades to centuries. These include: 92,000 km2 of bamboo forests undergoing synchronous mortality and regrowth; 1500 km2 of recently active dune fields; 900 km2 of recent downburst blowdowns; > 500 km of recent forest fire scars; and an unknown area of forest mortality from flooding of high ground and alluvial forests. Fire subclimax fern savannas created by Yanoama Indians cover an additional 600 km2. Natural and indigenous disturbances and synchronized phenologies are therefore responsible for a dynamic spectral behavior in large portions of the Amazon Basin. Disturbances and disturbance indicators not easily detected include blowdown sites > 30 yrs old, blowdowns < 30 hectares in size, liana forests and babassu palm forests.

138 citations

Book ChapterDOI
06 Sep 2014
TL;DR: This work proposes an approach that goes beyond appearance by integrating a semantic aspect into the model, and outperforms several state-of-the-art methods on VIPeR, a standard re-identification dataset.
Abstract: Person re-identification has recently attracted a lot of attention in the computer vision community. This is in part due to the challenging nature of matching people across cameras with different viewpoints and lighting conditions, as well as across human pose variations. The literature has since devised several approaches to tackle these challenges, but the vast majority of the work has been concerned with appearance-based methods. We propose an approach that goes beyond appearance by integrating a semantic aspect into the model. We jointly learn a discriminative projection to a joint appearance-attribute subspace, effectively leveraging the interaction between attributes and appearance for matching. Our experimental results support our model and demonstrate the performance gain yielded by coupling both tasks. Our results outperform several state-of-the-art methods on VIPeR, a standard re-identification dataset. Finally, we report similar results on a new large-scale dataset we collected and labeled for our task.
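The joint appearance-attribute matching described above can be illustrated with a minimal sketch: concatenate an appearance descriptor with an attribute vector, project probe and gallery into a shared subspace, and rank gallery identities by distance. All names here are illustrative, and the projection matrix `W` is simply supplied as input; in the paper it is learned discriminatively.

```python
import numpy as np

def rank_gallery(probe_app, probe_attr, gallery_app, gallery_attr, W):
    """Rank gallery entries for one probe by Euclidean distance in a
    joint appearance-attribute subspace.

    probe_app / probe_attr : 1-D appearance and attribute vectors
    gallery_*              : 2-D arrays, one row per gallery identity
    W                      : (d_app + d_attr) x k projection matrix
                             (learned in the paper; given here)
    Returns gallery indices sorted best-match first.
    """
    probe = np.concatenate([probe_app, probe_attr]) @ W
    gallery = np.concatenate([gallery_app, gallery_attr], axis=1) @ W
    dists = np.linalg.norm(gallery - probe, axis=1)
    return np.argsort(dists)
```

The key point of the formulation is that appearance and attributes interact through the shared projection, rather than being matched separately and fused afterwards.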

138 citations

Journal ArticleDOI
TL;DR: ECTS (early classification on time series) is an effective 1-nearest neighbor classification method that makes early predictions while retaining accuracy comparable to that of a 1NN classifier using the full-length time series.
Abstract: In this paper, we formulate the problem of early classification of time series data, which is important in some time-sensitive applications such as health informatics. We introduce a novel concept of MPL (minimum prediction length) and develop ECTS (early classification on time series), an effective 1-nearest neighbor classification method. ECTS makes early predictions and at the same time retains accuracy comparable to that of a 1NN classifier using the full-length time series. Our empirical study using benchmark time series data sets shows that ECTS works well on the real data sets where 1NN classification is effective.
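The early-classification idea can be sketched as follows. This is a simplified illustration, not the full ECTS algorithm (which derives a minimum prediction length from the training set): here we just run 1-NN on growing prefixes and stop once the prediction has been stable for a few steps. The function name and stability rule are our own.

```python
import numpy as np

def early_1nn_predict(train_X, train_y, stream, min_len=3):
    """Classify an incoming time series as early as possible with 1-NN.

    train_X : (n, T) array of full-length training series
    train_y : (n,) array of labels
    stream  : length-T series observed incrementally
    Returns (predicted label, prefix length used).
    """
    full_len = train_X.shape[1]
    history = []
    for t in range(min_len, full_len + 1):
        # 1-NN on length-t prefixes, Euclidean distance
        dists = np.linalg.norm(train_X[:, :t] - stream[:t], axis=1)
        pred = train_y[np.argmin(dists)]
        history.append(pred)
        # stop once the prediction is unchanged for 3 consecutive steps
        if len(history) >= 3 and len(set(history[-3:])) == 1:
            return pred, t
    return history[-1], full_len
```

The trade-off the paper studies is exactly the one visible here: earlier stopping saves time but risks committing to a prefix on which the nearest neighbor differs from that of the full series.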

137 citations

Proceedings ArticleDOI
22 Jan 2019
TL;DR: This work proposes SAN-CTC, a deep, fully self-attentional network for CTC, and shows it is tractable and competitive for end-to-end speech recognition, and explores how label alphabets affect attention heads and performance.
Abstract: The success of self-attention in NLP has led to recent applications in end-to-end encoder-decoder architectures for speech recognition. Separately, connectionist temporal classification (CTC) has matured as an alignment-free, non-autoregressive approach to sequence transduction, either by itself or in various multitask and decoding frameworks. We propose SAN-CTC, a deep, fully self-attentional network for CTC, and show it is tractable and competitive for end-to-end speech recognition. SAN-CTC trains quickly and outperforms existing CTC models and most encoder-decoder models, with character error rates (CERs) of 4.7% in 1 day on WSJ eval92 and 2.8% in 1 week on LibriSpeech test-clean, with a fixed architecture and one GPU. Similar improvements hold for WERs after LM decoding. We motivate the architecture for speech, evaluate position and down-sampling approaches, and explore how label alphabets (character, phoneme, subword) affect attention heads and performance.
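The alignment-free property of CTC mentioned above comes from a many-to-one collapse of frame-level paths to label sequences; greedy decoding applies this mapping to the per-frame argmaxes. A minimal sketch of the collapse rule:

```python
def ctc_collapse(path, blank=0):
    """Map a frame-level CTC path to its output label sequence:
    merge consecutive repeated labels, then drop blanks. Many paths
    collapse to the same output, which is what lets CTC train without
    frame-level alignments."""
    out, prev = [], None
    for label in path:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# Blanks separate genuine repeats: 1,1,0,1,2,2,0 collapses to [1, 1, 2]
```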

137 citations

Posted Content
Zhi Zhang, Tong He, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li
TL;DR: This work explores training tweaks that apply to various models, including Faster R-CNN and YOLOv3, and that can improve precision by up to 5% absolute compared to state-of-the-art baselines.
Abstract: Training heuristics greatly improve various image classification model accuracies~\cite{he2018bag}. Object detection models, however, have more complex neural network structures and optimization targets, and training strategies and pipelines vary dramatically among models. In this work, we explore training tweaks that apply to various models, including Faster R-CNN and YOLOv3. These tweaks do not change the model architectures; therefore, the inference costs remain the same. Our empirical results demonstrate, however, that these freebies can improve precision by up to 5% absolute compared to state-of-the-art baselines.
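One example of an architecture-preserving tweak of the kind the abstract describes is a cosine learning-rate schedule with linear warmup, commonly used when training detectors. A sketch, with hyperparameter names of our own choosing; since the schedule only affects training, inference cost is untouched.

```python
import math

def lr_at_step(step, total_steps, base_lr=0.01, warmup_steps=500):
    """Cosine learning-rate decay with linear warmup: ramp from 0 to
    base_lr over warmup_steps, then decay to ~0 along a half cosine."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```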

136 citations


Authors

Showing all 13498 results

Name                                  H-index   Papers   Citations
Jiawei Han                            168       1233     143427
Bernhard Schölkopf                    148       1092     149492
Christos Faloutsos                    127       789      77746
Alexander J. Smola                    122       434      110222
Rama Chellappa                        120       1031     62865
William F. Laurance                   118       470      56464
Andrew McCallum                       113       472      78240
Michael J. Black                      112       429      51810
David Heckerman                       109       483      62668
Larry S. Davis                        107       693      49714
Chris M. Wood                         102       795      43076
Pietro Perona                         102       414      94870
Guido W. Imbens                       97        352      64430
W. Bruce Croft                        97        426      39918
Chunhua Shen                          93        681      37468
Network Information
Related Institutions (5)
Microsoft: 86.9K papers, 4.1M citations (89% related)
Google: 39.8K papers, 2.1M citations (88% related)
Carnegie Mellon University: 104.3K papers, 5.9M citations (87% related)
ETH Zurich: 122.4K papers, 5.1M citations (82% related)
University of Maryland, College Park: 155.9K papers, 7.2M citations (82% related)

Performance
Metrics
No. of papers from the Institution in previous years
Year   Papers
2023   4
2022   168
2021   2,015
2020   2,596
2019   2,002
2018   1,189