scispace - formally typeset
Institution

Amazon.com

Company
Seattle, Washington, United States
About: Amazon.com is a company based in Seattle, Washington, United States. It is known for research contributions in the topics of Computer science and Service (business). The organization has 13,363 authors who have published 17,317 publications receiving 266,589 citations.


Papers
Proceedings Article
13 Aug 2016
TL;DR: The 2016 ACM Conference on Knowledge Discovery and Data Mining (KDD'16) attracted a significant number of submissions from countries all over the world; in particular, the research track attracted 784 submissions and the applied data science track attracted 331 submissions.
Abstract: It is our great pleasure to welcome you to the 2016 ACM Conference on Knowledge Discovery and Data Mining -- KDD'16. We hope that the content and the professional network at KDD'16 will help you succeed professionally by enabling you to: identify technology trends early; make new/creative contributions; increase your productivity by using newer/better tools, processes or ways of organizing teams; identify new job opportunities; and hire new team members. We are living in an exciting time for our profession. On the one hand, we are witnessing the industrialization of data science, and the emergence of the industrial assembly line processes characterized by the division of labor, integrated processes/pipelines of work, standards, automation, and repeatability. Data science practitioners are organizing themselves in more sophisticated ways, embedding themselves in larger teams in many industry verticals, improving their productivity substantially, and achieving a much larger scale of social impact. On the other hand we are also witnessing astonishing progress from research in algorithms and systems -- for example the field of deep neural networks has revolutionized speech recognition, NLP, computer vision, image recognition, etc. By facilitating interaction between practitioners at large companies & startups on the one hand, and the algorithm development researchers including leading academics on the other, KDD'16 fosters technological and entrepreneurial innovation in the area of data science. This year's conference continues its tradition of being the premier forum for presentation of results in the field of data mining, both in the form of cutting edge research, and in the form of insights from the development and deployment of real world applications. Further, the conference continues with its tradition of a strong tutorial and workshop program on leading edge issues of data mining. 
The mission of this conference has broadened in recent years even as we placed a significant amount of focus on both the research and applied aspects of data mining. As an example of this broadened focus, this year we have introduced a strong hands-on tutorial program during the conference in which participants will learn how to use practical tools for data mining. KDD'16 also gives researchers and practitioners a unique opportunity to form professional networks, and to share their perspectives with others interested in the various aspects of data mining. For example, we have introduced office hours for budding entrepreneurs from our community to meet leading Venture Capitalists investing in this area. We hope that the KDD 2016 conference will serve as a meeting ground for researchers, practitioners, funding agencies, and investors to help create new algorithms and commercial products. The call for papers attracted a significant number of submissions from countries all over the world. In particular, the research track attracted 784 submissions and the applied data science track attracted 331 submissions. Papers were accepted either as full papers or as posters. The overall acceptance rate, counting both full papers and posters, was less than 20%. For full papers in the research track, the acceptance rate was lower than 10%. This is consistent with the fact that the KDD Conference is a premier conference in data mining and the acceptance rates historically tend to be low. It is noteworthy that the applied data science track received a larger number of submissions compared to previous years. We view this as an encouraging sign that research in data mining is increasingly becoming relevant to industrial applications. All papers were reviewed by at least three program committee members and then discussed by the PC members in a discussion moderated by a meta-reviewer. Borderline papers were thoroughly reviewed by the program chairs before final decisions were made.

179 citations

Proceedings ArticleDOI
01 Oct 2017
TL;DR: The authors propose factorizing the convolutional layer to reduce its computation; combined with a residual connection, the factorized layer effectively preserves spatial information and maintains accuracy with significantly less computation.
Abstract: In this paper, we propose to factorize the convolutional layer to reduce its computation. The 3D convolution operation in a convolutional layer can be considered as performing spatial convolution in each channel and linear projection across channels simultaneously. By unravelling them and arranging the spatial convolutions sequentially, the proposed layer is composed of a low-cost single intra-channel convolution and a linear channel projection. When combined with residual connection, it can effectively preserve the spatial information and maintain the accuracy with significantly less computation. We also introduce a topological subdivisioning to reduce the connection between the input and output channels. Our experiments demonstrate that the proposed layers outperform the standard convolutional layers on performance/complexity ratio. Our models achieve similar performance to VGG-16, ResNet-34, ResNet-50, ResNet-101 while requiring 42x,7.32x,4.38x,5.85x less computation respectively.
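The cost saving behind this factorization can be sketched with simple multiply-accumulate (MAC) counting: a standard k x k convolution costs H*W*C_in*C_out*k*k, while the factorized form costs one per-channel spatial convolution plus one 1x1 channel projection. The layer sizes below are illustrative assumptions, not taken from the paper, and the paper's 42x/7.32x/4.38x/5.85x figures refer to whole networks, not this single-layer arithmetic.

```python
# MAC counts for one convolutional layer at unit stride (padding ignored).

def standard_conv_macs(h, w, c_in, c_out, k):
    # every output position combines k*k*c_in inputs for each of c_out outputs
    return h * w * c_out * c_in * k * k

def factorized_conv_macs(h, w, c_in, c_out, k):
    spatial = h * w * c_in * k * k      # one k x k filter per input channel
    projection = h * w * c_in * c_out   # 1x1 convolution mixing channels
    return spatial + projection

if __name__ == "__main__":
    # illustrative sizes: a 56x56 feature map with 256 channels, 3x3 kernel
    std = standard_conv_macs(56, 56, 256, 256, 3)
    fac = factorized_conv_macs(56, 56, 256, 256, 3)
    print(std, fac, round(std / fac, 2))
```

For these sizes the factorized layer needs roughly an order of magnitude fewer MACs, which is the per-layer effect the abstract's network-level speedups build on.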

177 citations

Patent
17 Nov 2005
TL;DR: The patent describes a system with a user interface through which users can flexibly tag individual items represented in an electronic catalog with user-defined tags, such as text strings, and obtain recommendations that are specific to particular tags.
Abstract: A system provides a user interface through which users can flexibly tag individual items represented in an electronic catalog with user-defined tags, such as text strings, and obtain recommendations that are specific to particular tags. The tags and tag-item assignments created by each user are stored persistently in association with the user, and may be kept private to the user or exposed to others. Once a user has assigned a tag to a number of items, the user (or another user in some embodiments) can request and obtain recommendations that are specific to this tag. These recommendations may be generated in real time by a recommendation service that identifies items that are collectively similar or related to the items associated with the tag.
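The recommendation step described above can be sketched as item-to-item similarity over the items a user has assigned to a tag. The interaction data, item names, and Jaccard scoring below are illustrative assumptions, not the patent's actual method.

```python
# Sketch: recommend items "collectively similar" to the items a user has
# assigned to one tag. Similarity here is Jaccard overlap of the sets of
# users who interacted with each item -- a stand-in for whatever
# similarity measure the real recommendation service uses.

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend_for_tag(tagged_items, item_users, top_n=3):
    scores = {}
    for candidate, users in item_users.items():
        if candidate in tagged_items:
            continue  # don't recommend items the user already tagged
        scores[candidate] = sum(
            jaccard(users, item_users[i]) for i in tagged_items
        )
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# hypothetical interaction data: item -> set of user ids
item_users = {
    "book-a": {1, 2, 3},
    "book-b": {2, 3, 4},
    "book-c": {2, 3},      # overlaps strongly with both tagged items
    "book-d": {9},         # unrelated
}
# items this user assigned to, say, a "machine-learning" tag
tagged = {"book-a", "book-b"}
print(recommend_for_tag(tagged, item_users))
```

Here "book-c" ranks first because its audience overlaps both tagged items, matching the patent's idea of items collectively similar to the tagged set.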

176 citations

Proceedings Article
01 Dec 2018
TL;DR: Packing is a principled approach to handling mode collapse in GANs: the discriminator is modified to make decisions based on multiple samples from the same class, either real or artificially generated.
Abstract: Generative adversarial networks (GANs) are a technique for learning generative models of complex data distributions from samples. Despite remarkable advances in generating realistic images, a major shortcoming of GANs is the fact that they tend to produce samples with little diversity, even when trained on diverse datasets. This phenomenon, known as mode collapse, has been the focus of much recent work. We study a principled approach to handling mode collapse, which we call packing. The main idea is to modify the discriminator to make decisions based on multiple samples from the same class, either real or artificially generated. We draw analysis tools from binary hypothesis testing---in particular the seminal result of Blackwell---to prove a fundamental connection between packing and mode collapse. We show that packing naturally penalizes generators with mode collapse, thereby favoring generator distributions with less mode collapse during the training process. Numerical experiments on benchmark datasets suggest that packing provides significant improvements.
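Mechanically, the packing modification reduces to reshaping the discriminator's input: instead of judging one sample at a time, it sees m samples concatenated together. A minimal NumPy sketch, where the packing degree m and tensor shapes are illustrative assumptions:

```python
import numpy as np

def pack(batch, m):
    """Concatenate groups of m samples along the channel axis so a
    discriminator judges m samples jointly rather than one at a time.

    batch: array of shape (n, c, h, w), with n divisible by m
    returns: array of shape (n // m, m * c, h, w)
    """
    n, c, h, w = batch.shape
    assert n % m == 0, "batch size must be divisible by the packing degree"
    # consecutive samples in the batch become channel groups of one input
    return batch.reshape(n // m, m * c, h, w)

# e.g. 8 single-channel 4x4 samples packed in groups of m=2
samples = np.random.rand(8, 1, 4, 4)
packed = pack(samples, m=2)
print(packed.shape)  # (4, 2, 4, 4)
```

A generator that collapses to few modes produces packed inputs full of near-duplicates, which such a discriminator can detect, and this is what the paper's hypothesis-testing analysis formalizes.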

176 citations

Journal ArticleDOI
Sarah M. Keating1, Sarah M. Keating2, Dagmar Waltemath3, Matthias König4, Fengkai Zhang5, Andreas Dräger6, Claudine Chaouiya7, Claudine Chaouiya8, Frank Bergmann2, Andrew Finney9, Colin S. Gillespie10, Tomáš Helikar11, Stefan Hoops12, Rahuman S Malik-Sheriff, Stuart L. Moodie, Ion I. Moraru13, Chris J. Myers14, Aurélien Naldi15, Brett G. Olivier16, Brett G. Olivier2, Brett G. Olivier1, Sven Sahle2, James C. Schaff, Lucian P. Smith1, Lucian P. Smith17, Maciej J. Swat, Denis Thieffry15, Leandro Watanabe14, Darren J. Wilkinson18, Darren J. Wilkinson10, Michael L. Blinov13, Kimberly Begley1, James R. Faeder19, Harold F. Gómez20, Thomas M. Hamm6, Yuichiro Inagaki, Wolfram Liebermeister21, Allyson L. Lister22, Daniel Lucio23, Eric Mjolsness24, Carole J. Proctor10, Karthik Raman25, Nicolas Rodriguez26, Clifford A. Shaffer27, Bruce E. Shapiro28, Joerg Stelling20, Neil Swainston29, Naoki Tanimura, John Wagner30, Martin Meier-Schellersheim5, Herbert M. Sauro17, Bernhard O. Palsson31, Hamid Bolouri32, Hiroaki Kitano33, Akira Funahashi34, Henning Hermjakob, John Doyle1, Michael Hucka1, Richard R. Adams, Nicholas Alexander Allen35, Bastian R. Angermann5, Marco Antoniotti36, Gary D. Bader37, Jan Červený38, Mélanie Courtot, Christopher Cox39, Piero Dalle Pezze26, Emek Demir40, William S. Denney, Harish Dharuri41, Julien Dorier, Dirk Drasdo, Ali Ebrahim31, Johannes Eichner, Johan Elf42, Lukas Endler, Chris T. Evelo43, Christoph Flamm44, Ronan M. T. Fleming45, Martina Fröhlich, Mihai Glont, Emanuel Gonçalves46, Martin Golebiewski47, Hovakim Grabski48, Alex Gutteridge, Damon Hachmeister, Leonard A. Harris, Benjamin D. Heavner, Ron Henkel, William S. Hlavacek1, Bin Hu49, Daniel R. Hyduke50, Hidde de Jong, Nick Juty46, Peter D. Karp, Jonathan R. Karr51, Douglas B. Kell52, Roland Keller6, Ilya Kiselev53, Steffen Klamt54, Edda Klipp54, Christian Knüpfer55, Fedor A. Kolpakov, Falko Krause4, Martina Kutmon, Camille Laibe46, Conor Lawless8, Lu Li56, Leslie M. 
Loew10, Rainer Machné27, Yukiko Matsuoka, Pedro Mendes, Huaiyu Mi57, Florian Mittag2, Pedro T. Monteiro8, Kedar Nath Natarajan, Poul M. F. Nielsen17, Tramy Nguyen, Alida Palmisano58, Jean-Baptiste Pettit14, Thomas Pfau10, Robert Phair13, Tomas Radivoyevitch1, Johann M. Rohwer59, Oliver A. Ruebenacker60, Julio Saez-Rodriguez6, Martin Scharm61, Henning Schmidt47, Falk Schreiber48, Michael Schubert, Roman Schulte24, Stuart C. Sealfon10, Kieran Smallbone, Sylvain Soliman, Melanie I. Stefan1, Devin P. Sullivan28, Koichi Takahashi50, Bas Teusink, David Tolnay1, Ibrahim Vazirabad30, Axel von Kamp54, Ulrike Wittig52, Clemens Wrzodek6, Finja Wrzodek6, Ioannis Xenarios, Anna Zhukova, Jeremy Zucker62 
California Institute of Technology1, Heidelberg University2, University of Greifswald3, Humboldt University of Berlin4, National Institutes of Health5, University of Tübingen6, Aix-Marseille University7, Instituto Gulbenkian de Ciência8, Ansys9, Newcastle University10, University of Nebraska–Lincoln11, University of Virginia12, University of Connecticut13, University of Utah14, PSL Research University15, VU University Amsterdam16, University of Washington17, The Turing Institute18, University of Pittsburgh19, ETH Zurich20, Université Paris-Saclay21, University of Oxford22, North Carolina State University23, University of California, Irvine24, Indian Institute of Technology Madras25, Babraham Institute26, Virginia Tech27, California State University, Northridge28, University of Liverpool29, IBM30, University of California, San Diego31, Virginia Mason Medical Center32, Okinawa Institute of Science and Technology33, Keio University34, Amazon.com35, University of Milan36, University of Toronto37, Masaryk University38, University of Tennessee39, Oregon Health & Science University40, Illumina41, Uppsala University42, Maastricht University43, Alpen-Adria-Universität Klagenfurt44, Medical University of Vienna45, European Bioinformatics Institute46, University of Rostock47, Leibniz Association48, Lorentz Institute49, Shinshu University50, Icahn School of Medicine at Mount Sinai51, Heidelberg Institute for Theoretical Studies52, Greifswald University Hospital53, Max Planck Society54, University of Jena55, École Polytechnique56, University of Southern California57, École Normale Supérieure58, Stellenbosch University59, École Polytechnique Fédérale de Lausanne60, Mizuho Information & Research Institute61, Pacific Northwest National Laboratory62
TL;DR: The latest edition of the Systems Biology Markup Language (SBML), a format designed for exchanging unambiguous computational model descriptions, is reviewed; it leverages two decades of SBML development and a rich software ecosystem that has transformed how systems biologists build and interact with models.
Abstract: Systems biology has experienced dramatic growth in the number, size, and complexity of computational models. To reproduce simulation results and reuse models, researchers must exchange unambiguous model descriptions. We review the latest edition of the Systems Biology Markup Language (SBML), a format designed for this purpose. A community of modelers and software authors developed SBML Level 3 over the past decade. Its modular form consists of a core suited to representing reaction-based models and packages that extend the core with features suited to other model types including constraint-based models, reaction-diffusion models, logical network models, and rule-based models. The format leverages two decades of SBML and a rich software ecosystem that transformed how systems biologists build and interact with models. More recently, the rise of multiscale models of whole cells and organs, and new data sources such as single-cell measurements and live imaging, has precipitated new ways of integrating data with models. We provide our perspectives on the challenges presented by these developments and how SBML Level 3 provides the foundation needed to support this evolution.
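As a rough illustration of the modular structure described above (a reaction-based core that packages extend), here is a skeleton built with Python's standard library. The tag names follow SBML core conventions, but namespaces, units, and required attributes are omitted, so this is an unvalidated sketch rather than a conformant Level 3 document.

```python
import xml.etree.ElementTree as ET

# Minimal reaction-based skeleton in the spirit of SBML Level 3 core:
# a model with two species and one reaction converting one to the other.
sbml = ET.Element("sbml", level="3", version="2")
model = ET.SubElement(sbml, "model", id="toy_model")

species_list = ET.SubElement(model, "listOfSpecies")
for sid in ("S1", "S2"):
    ET.SubElement(species_list, "species", id=sid, compartment="cell")

reactions = ET.SubElement(model, "listOfReactions")
rxn = ET.SubElement(reactions, "reaction", id="conversion", reversible="false")
ET.SubElement(ET.SubElement(rxn, "listOfReactants"),
              "speciesReference", species="S1")
ET.SubElement(ET.SubElement(rxn, "listOfProducts"),
              "speciesReference", species="S2")

print(ET.tostring(sbml, encoding="unicode"))
```

The "packages" the abstract mentions (constraint-based, logical, rule-based models, etc.) would attach additional namespaced elements and attributes to this same core tree.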

176 citations


Authors

Showing all 13498 results

Name                     H-index   Papers   Citations
Jiawei Han               168       1233     143427
Bernhard Schölkopf       148       1092     149492
Christos Faloutsos       127       789      77746
Alexander J. Smola       122       434      110222
Rama Chellappa           120       1031     62865
William F. Laurance      118       470      56464
Andrew McCallum          113       472      78240
Michael J. Black         112       429      51810
David Heckerman          109       483      62668
Larry S. Davis           107       693      49714
Chris M. Wood            102       795      43076
Pietro Perona            102       414      94870
Guido W. Imbens          97        352      64430
W. Bruce Croft           97        426      39918
Chunhua Shen             93        681      37468
Network Information
Related Institutions (5)
Microsoft
86.9K papers, 4.1M citations

89% related

Google
39.8K papers, 2.1M citations

88% related

Carnegie Mellon University
104.3K papers, 5.9M citations

87% related

ETH Zurich
122.4K papers, 5.1M citations

82% related

University of Maryland, College Park
155.9K papers, 7.2M citations

82% related

Performance Metrics
No. of papers from the Institution in previous years
Year   Papers
2023   4
2022   168
2021   2,015
2020   2,596
2019   2,002
2018   1,189