Institution
University College London
Education • London, United Kingdom
About: University College London is an education organization based in London, United Kingdom. It is known for research contributions in the topics of Population and Poison control. The organization has 81,105 authors who have published 210,603 publications receiving 9,868,552 citations. The organization is also known as: UCL & University College, London.
Papers published on a yearly basis
Papers
Abstract: Estimation of the dynamic error components model is considered using two alternative linear estimators that are designed to improve the properties of the standard first-differenced GMM estimator. Both estimators require restrictions on the initial conditions process. Asymptotic efficiency comparisons and Monte Carlo simulations for the simple AR(1) model demonstrate the dramatic improvement in performance of the proposed estimators compared to the usual first-differenced GMM estimator, and compared to non-linear GMM. The importance of these results is illustrated in an application to the estimation of a labour demand model using company panel data.
17,468 citations
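The abstract above benchmarks against the standard first-differenced GMM estimator. As a point of reference, here is a hypothetical sketch (not the paper's proposed system estimators) of the simplest first-differenced IV approach for a dynamic AR(1) panel with fixed effects, where the lagged difference is instrumented by the twice-lagged level. Function names and simulation settings are illustrative assumptions.

```python
import numpy as np

def simulate_panel(n, t, alpha, seed=0):
    """AR(1) panel y_it = alpha*y_{i,t-1} + eta_i + eps_it with fixed effects eta_i."""
    rng = np.random.default_rng(seed)
    eta = rng.normal(0.0, 1.0, n)
    y = np.zeros((n, t))
    y[:, 0] = eta / (1 - alpha) + rng.normal(0.0, 1.0, n)  # near-stationary start
    for s in range(1, t):
        y[:, s] = alpha * y[:, s - 1] + eta + rng.normal(0.0, 1.0, n)
    return y

def first_diff_iv(y):
    """First-differenced IV (Anderson-Hsiao style): differencing removes eta_i,
    and the level y_{t-2} instruments the endogenous lagged difference dy_{t-1}."""
    dy = np.diff(y, axis=1)            # dy[:, s] = y_{s+1} - y_s
    num = den = 0.0
    for s in range(1, dy.shape[1]):
        z = y[:, s - 1]                # y_{t-2}: uncorrelated with d(eps_t)
        num += z @ dy[:, s]
        den += z @ dy[:, s - 1]
    return num / den
```

With a moderately persistent alpha this recovers the autoregressive parameter; the paper's point is that as alpha grows the level instruments become weak, which motivates the additional moment conditions of the proposed estimators.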
TL;DR: A genetical mathematical model is described which allows for interactions between relatives on one another's fitness and a quantity is found which incorporates the maximizing property of Darwinian fitness, named “inclusive fitness”.
Abstract: A genetical mathematical model is described which allows for interactions between relatives on one another's fitness. Making use of Wright's Coefficient of Relationship as the measure of the proportion of replica genes in a relative, a quantity is found which incorporates the maximizing property of Darwinian fitness. This quantity is named “inclusive fitness”. Species following the model should tend to evolve behaviour such that each organism appears to be attempting to maximize its inclusive fitness. This implies a limited restraint on selfish competitive behaviour and possibility of limited self-sacrifices. Special cases of the model are used to show (a) that selection in the social situations newly covered tends to be slower than classical selection, (b) how in populations of rather non-dispersive organisms the model may apply to genes affecting dispersion, and (c) how it may apply approximately to competition between relatives, for example, within sibships. Some artificialities of the model are discussed.
14,111 citations
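The inclusive-fitness argument above is often summarized as Hamilton's rule: an altruistic act is favored when the benefit to the recipient, weighted by the coefficient of relationship, exceeds the cost to the actor. A minimal sketch, with illustrative numbers:

```python
def hamilton_favors(r, b, c):
    """Hamilton's rule: altruism is favored when r*b > c,
    where r is the coefficient of relationship between actor and recipient,
    b the fitness benefit to the recipient, and c the cost to the actor."""
    return r * b > c

# Full siblings share r = 0.5; first cousins r = 0.125.
sib = hamilton_favors(0.5, 3.0, 1.0)     # 0.5*3.0 > 1.0 -> favored
cousin = hamilton_favors(0.125, 3.0, 1.0)  # 0.125*3.0 < 1.0 -> not favored
```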
TL;DR: Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs that learns the global structure of nonlinear manifolds.
Abstract: Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.
13,822 citations
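The LLE algorithm described above has two steps: solve for weights that linearly reconstruct each point from its nearest neighbors, then find low-dimensional coordinates best reconstructed by those same weights (bottom eigenvectors of (I-W)^T(I-W)). A minimal numpy sketch under those assumptions, not the authors' reference implementation:

```python
import numpy as np

def lle(X, k, d, reg=1e-3):
    """Minimal locally linear embedding of the n x D array X into d dimensions."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]        # k nearest neighbors, skip self
        Z = X[nbrs] - X[i]                       # neighbors centered on X[i]
        C = Z @ Z.T                              # local k x k Gram matrix
        C += reg * np.trace(C) * np.eye(k)       # regularize when k > intrinsic dim
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()                 # reconstruction weights sum to 1
    # Embedding: bottom eigenvectors of (I-W)^T (I-W), dropping the constant one.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]
```

For example, embedding 80 points sampled along a circular arc in 3-D with `lle(X, k=8, d=1)` returns one coordinate per point that parameterizes the arc.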
27 Jun 2016
Abstract: Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim to utilize the added computation as efficiently as possible through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set, and 3.6% top-5 error on the official test set.
12,684 citations
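The factorized convolutions mentioned above cut parameter count while keeping the receptive field: a 5x5 convolution can be replaced by two stacked 3x3 convolutions, and an nxn convolution by a 1xn followed by an nx1. A quick arithmetic sketch of the savings (channel count 64 is an illustrative assumption):

```python
def conv_params(kh, kw, c_in, c_out):
    """Weight count of a kh x kw convolution mapping c_in -> c_out channels."""
    return kh * kw * c_in * c_out

c = 64
p5  = conv_params(5, 5, c, c)                              # single 5x5: 102,400
p33 = 2 * conv_params(3, 3, c, c)                          # two 3x3: 73,728 (~28% fewer)
p7  = conv_params(7, 7, c, c)                              # single 7x7: 200,704
p17 = conv_params(1, 7, c, c) + conv_params(7, 1, c, c)    # 1x7 + 7x1: 57,344 (2/7 of 7x7)
```

The 3x3 stack keeps the 5x5 receptive field at 18/25 of the cost; the asymmetric 1x7/7x1 pair keeps the 7x7 receptive field at 2/7 of the cost, which is where much of the paper's efficiency budget comes from.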
TL;DR: This biennial Review summarizes much of particle physics, using data from previous editions.
Abstract: This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2778 new measurements from 645 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on the CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, top quark, muon anomalous magnetic moment, extra dimensions, particle detectors, cosmic background radiation, dark matter, cosmological parameters, and big bang cosmology.
11,048 citations
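The "evaluate and average" step above combines independent measurements of the same quantity. A minimal sketch of the standard inverse-variance weighted average, with a PDG-style scale factor S = sqrt(chi2/(N-1)) inflating the error when measurements disagree more than their quoted errors allow (the exact PDG procedure has additional selection rules; this is a simplified illustration):

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted average of independent measurements,
    with the error scaled by S = sqrt(chi2/(N-1)) when chi2 > N-1."""
    w = [1.0 / e ** 2 for e in errors]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    err = math.sqrt(1.0 / sum(w))
    chi2 = sum(((v - mean) / e) ** 2 for v, e in zip(values, errors))
    n = len(values)
    scale = math.sqrt(chi2 / (n - 1)) if n > 1 and chi2 > n - 1 else 1.0
    return mean, err * scale
```

For two measurements 10.0 and 10.2, each with error 0.1, the average is 10.1 and the naive combined error 0.1/sqrt(2) is scaled back up to 0.1 because the inputs disagree by two standard deviations.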
Authors
Showing all 81105 results
Name | H-index | Papers | Citations |
---|---|---|---|
Trevor W. Robbins | 231 | 1137 | 164437 |
George Davey Smith | 224 | 2540 | 248373 |
Karl J. Friston | 217 | 1267 | 217169 |
Robert J. Lefkowitz | 214 | 860 | 147995 |
Cyrus Cooper | 204 | 1869 | 206782 |
David Miller | 203 | 2573 | 204840 |
Mark I. McCarthy | 200 | 1028 | 187898 |
André G. Uitterlinden | 199 | 1229 | 156747 |
Raymond J. Dolan | 196 | 919 | 138540 |
Michael Marmot | 193 | 1147 | 170338 |
Nicholas G. Martin | 192 | 1770 | 161952 |
David R. Williams | 178 | 2034 | 138789 |
John Hardy | 177 | 1178 | 171694 |
James J. Heckman | 175 | 766 | 156816 |
Kay-Tee Khaw | 174 | 1389 | 138782 |