Institution
University College London
Education • London, United Kingdom
About: University College London is an education organization based in London, United Kingdom. It is known for research contributions in topics such as population and context (language use). The organization has 81,105 authors who have published 210,603 publications receiving 9,868,552 citations. The organization is also known as UCL and University College, London.
Papers published on a yearly basis
Papers
TL;DR: This paper presents two alternative linear estimators designed to improve the properties of the standard first-differenced GMM estimator; both, however, require restrictions on the initial-conditions process.
19,132 citations
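The TL;DR above concerns dynamic panel-data estimation. As a hypothetical illustration (not the paper's code), the numpy sketch below shows the starting point of such estimators: first-differencing an AR(1) panel eliminates the individual fixed effect, which is what makes GMM on the differenced equation feasible.

```python
import numpy as np

# Simulate a panel y_it = rho * y_i,t-1 + eta_i + v_it with individual
# fixed effects eta_i (illustrative parameter values, chosen arbitrarily).
rng = np.random.default_rng(0)
N, T, rho = 500, 6, 0.5

eta = rng.normal(size=(N, 1))             # individual fixed effects
y = np.zeros((N, T))
y[:, 0] = eta[:, 0] + rng.normal(size=N)  # arbitrary initial condition
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + eta[:, 0] + rng.normal(size=N)

# First differences: dy_it = rho * dy_i,t-1 + dv_it, so eta_i cancels.
dy = np.diff(y, axis=1)

# eta_i strongly predicts the level of y but not the differenced series:
# a no-intercept regression of each on eta shows the effect disappearing.
coef_levels = np.linalg.lstsq(eta, y[:, -1:], rcond=None)[0][0, 0]
coef_diff = np.linalg.lstsq(eta, dy[:, -1:], rcond=None)[0][0, 0]
```

With `rho = 0.5`, the coefficient on `eta` in levels is close to 2 (the accumulated fixed effect), while in differences it is close to zero, which is the point of first-differencing before applying GMM.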
27 Jun 2016
TL;DR: In this article, the authors explore ways to scale up networks that aim to utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
Abstract: Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014, very deep convolutional networks have become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim to utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set, demonstrating substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set and 3.6% top-5 error on the official test set.
16,962 citations
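The abstract above mentions factorized convolutions as a way to spend added computation efficiently. A minimal back-of-the-envelope sketch (not the paper's code; channel counts are illustrative) shows one such factorization: replacing a 5x5 convolution with a stack of two 3x3 convolutions keeps the 5x5 receptive field while cutting parameters and multiply-adds.

```python
# Illustrative channel counts (assumed, not from the paper).
c_in = c_out = 256

# Parameter count of a single 5x5 convolution layer (ignoring biases).
params_5x5 = 5 * 5 * c_in * c_out

# Two stacked 3x3 convolutions cover the same 5x5 receptive field.
params_two_3x3 = 2 * (3 * 3 * c_in * c_out)

# Fractional saving: 1 - 18/25 = 28% fewer parameters.
saving = 1 - params_two_3x3 / params_5x5
```

The same arithmetic applies per spatial position to multiply-adds, which is why such factorizations reduce the computational cost of the network without shrinking its receptive field.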
Monash University, University of Amsterdam, University of Paris, Bond University, University of Texas Health Science Center at San Antonio, University of Ottawa, American University of Beirut, Oregon Health & Science University, University of York, Ottawa Hospital Research Institute, University of Southern Denmark, Johns Hopkins University, Brigham and Women's Hospital, Indiana University, University of Bristol, University College London, University of Toronto
TL;DR: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found.
Abstract: The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.
16,613 citations
TL;DR: This work explores ways to scale up networks that aim to utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
Abstract: Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014, very deep convolutional networks have become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim to utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set, demonstrating substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error on the validation set (3.6% error on the test set) and 17.3% top-1 error on the validation set.
15,519 citations
TL;DR: Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs that learns the global structure of nonlinear manifolds.
Abstract: Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.
15,106 citations
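The LLE abstract above describes reconstructing each input from its neighbours. A minimal numpy sketch (not the authors' code; the toy data and neighbourhood size are assumptions) of LLE's first step for a single point: find weights over its k nearest neighbours that best reconstruct it, constrained to sum to one, by solving the local Gram system.

```python
import numpy as np

# Toy inputs standing in for high-dimensional data (assumed for illustration).
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
i, k = 0, 5  # reconstruct point i from its k nearest neighbours

dists = np.linalg.norm(X - X[i], axis=1)
nbrs = np.argsort(dists)[1:k + 1]    # k nearest neighbours, skipping i itself

Z = X[nbrs] - X[i]                   # centre neighbours on x_i
C = Z @ Z.T                          # local Gram (covariance) matrix
C += 1e-3 * np.trace(C) * np.eye(k)  # small regulariser for numerical stability
w = np.linalg.solve(C, np.ones(k))   # solve C w = 1
w /= w.sum()                         # enforce the sum-to-one constraint

reconstruction = w @ X[nbrs]         # weighted neighbours approximate X[i]
```

In the full algorithm this weight computation is repeated for every point, and the resulting weight matrix then fixes the low-dimensional embedding; the sum-to-one constraint is what makes the weights invariant to translations of the data, giving the local symmetries the abstract refers to.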
Authors
Showing all 82,293 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| David Goldstein | 141 | 1301 | 101955 |
| Thomas J. Smith | 140 | 1775 | 113919 |
| Andrew J. Lees | 140 | 877 | 91605 |
| Andrew G. Clark | 140 | 823 | 123333 |
| Lenore J. Launer | 140 | 697 | 74309 |
| Nick C. Fox | 139 | 748 | 93036 |
| Ralph L. Sacco | 138 | 829 | 131687 |
| David Price | 138 | 1687 | 93535 |
| Andrew Steptoe | 137 | 1003 | 73431 |
| Maxwell Chertok | 136 | 1837 | 102333 |
| John P. Moore | 136 | 522 | 60331 |
| Tim J Cole | 136 | 827 | 92998 |
| Junji Tojo | 135 | 878 | 84615 |
| Melitta Schachner | 135 | 861 | 67304 |
| Tim Jones | 135 | 1314 | 91422 |