Institution
Santa Fe Institute
Nonprofit · Santa Fe, New Mexico, United States
About: The Santa Fe Institute is a nonprofit organization based in Santa Fe, New Mexico, United States. It is known for research contributions in the topics: Population & Context (language use). The organization has 558 authors who have published 4558 publications receiving 396015 citations. The organization is also known as: SFI.
Papers published on a yearly basis
Papers
TL;DR: In most cases the authors find that bottlenecks do regularly occur, although they often appear to be virion-concentration dependent; the evolutionary implications of genetic bottlenecks during virus infection are also considered.
Abstract: For virus infections of multicellular hosts, narrow genetic bottlenecks during transmission and within-host spread appear to be widespread. These bottlenecks will affect the maintenance of genetic variation in a virus population and the prevalence of mixed-strain infections, thereby ultimately determining the strength with which different random forces act during evolution. Here we consider different approaches for estimating bottleneck sizes and weigh their merits. We then review quantitative estimates of bottleneck size during cellular infection, within-host spread, horizontal transmission, and finally vertical transmission. In most cases we find that bottlenecks do regularly occur, although in many cases they appear to be virion-concentration dependent. Finally, we consider the evolutionary implications of genetic bottlenecks during virus infection. Although on average strong bottlenecks will lead to declines in fitness, we consider a number of scenarios in which bottlenecks could also be advantageous ...
125 citations
TL;DR: This work studies the subtle but important decisions underlying the specification of a configuration model, and investigates the role these choices play in graph sampling procedures and a suite of applications, placing particular emphasis on the importance of specifying the appropriate graph labeling under which to consider a null model.
Abstract: Random graph null models have found widespread application in diverse research communities analyzing network datasets, including social, information, and economic networks, as well as food webs, protein-protein interactions, and neuronal networks. The most popular family of random graph null models, called configuration models, are defined as uniform distributions over a space of graphs with a fixed degree sequence. Commonly, properties of an empirical network are compared to properties of an ensemble of graphs from a configuration model in order to quantify whether empirical network properties are meaningful or whether they are instead a common consequence of the particular degree sequence. In this work we study the subtle but important decisions underlying the specification of a configuration model, and investigate the role these choices play in graph sampling procedures and a suite of applications. We place particular emphasis on the importance of specifying the appropriate graph labeling (stub-labeled or vertex-labeled) under which to consider a null model, a choice that closely connects the study of random graphs to the study of random contingency tables. We show that the choice of graph labeling is inconsequential for studies of simple graphs, but can have a significant impact on analyses of multigraphs or graphs with self-loops. The importance of these choices is demonstrated through a series of three vignettes, analyzing network datasets under many different configuration models and observing substantial differences in study conclusions under different models. We argue that in each case, only one of the possible configuration models is appropriate. While our work focuses on undirected static networks, it aims to guide the study of directed networks, dynamic networks, and all other network contexts that are suitably studied through the lens of random graph null models.
125 citations
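The stub-labeled configuration model described in the abstract above can be sampled directly: give each vertex as many "stubs" as its degree and pair stubs uniformly at random. A minimal sketch (function name is ours, not the paper's) that also makes visible why self-loops and multi-edges can arise, the case where the abstract says the labeling choice matters:

```python
import random

def stub_labeled_configuration_sample(degrees, rng=None):
    """Sample one multigraph from the stub-labeled configuration model:
    vertex v gets degrees[v] stubs, and stubs are paired uniformly at
    random. Self-loops and multi-edges may occur, which is exactly why
    the stub- vs vertex-labeled distinction matters for multigraphs."""
    rng = rng or random.Random()
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    assert len(stubs) % 2 == 0, "degree sum must be even"
    rng.shuffle(stubs)
    # Pair consecutive stubs after shuffling: a uniform stub matching.
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

# Degree sum 10, so every sample is a multigraph with 5 edges.
edges = stub_labeled_configuration_sample([3, 2, 2, 2, 1], random.Random(0))
```

Repeating this sampling and comparing a statistic of the empirical network against the resulting ensemble is the null-model comparison the abstract describes.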
TL;DR: It is shown that the manner in which conditional entropies converge to their asymptotic value serves as a measure of global correlation and structure for spatial systems in any dimension.
Abstract: We develop information-theoretic measures of spatial structure and pattern in more than one dimension. As is well known, the entropy density of a two-dimensional configuration can be efficiently and accurately estimated via a converging sequence of conditional entropies. We show that the manner in which these conditional entropies converge to their asymptotic value serves as a measure of global correlation and structure for spatial systems in any dimension. We compare and contrast entropy convergence with mutual-information and structure-factor techniques for quantifying and detecting spatial structure.
125 citations
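The convergence idea in the abstract above can be illustrated in one dimension: estimate block entropies H(L) and form conditional entropies h(L) = H(L) − H(L−1); how quickly h(L) settles to its limit reflects how much structure the configuration has. A hedged one-dimensional sketch (the paper treats higher dimensions; names are ours):

```python
from collections import Counter
from math import log2

def block_entropy(seq, L):
    """Shannon entropy (bits) of length-L blocks observed in seq."""
    counts = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def conditional_entropies(seq, max_L):
    """h(L) = H(L) - H(L-1): entropy of the next symbol given L-1
    predecessors. The manner in which h(L) approaches its asymptotic
    value is the structure measure discussed in the abstract."""
    H = [0.0] + [block_entropy(seq, L) for L in range(1, max_L + 1)]
    return [H[L] - H[L - 1] for L in range(1, max_L + 1)]

# Period-2 sequence: h(1) = 1 bit, but h(L) is essentially 0 for L >= 2
# (up to finite-sample edge effects), since one previous symbol fully
# determines the next -- the drop from h(1) to h(2) reveals the structure.
h = conditional_entropies([0, 1] * 200, 3)
```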
TL;DR: A new theory called ‘highly optimized tolerance’ attempts to explain why carefully engineered systems sometimes fail catastrophically despite the best intentions of their designers.
Abstract: Carefully engineered systems sometimes fail catastrophically despite the best intentions of their designers. A new theory called ‘highly optimized tolerance’ attempts to explain how this happens.
125 citations
TL;DR: Building on empirical findings of long memory in order signs, it is shown that order splitting with power-law distributed order sizes causes the autocorrelation of executed order signs, as a function of the lag, to decay asymptotically as a power law.
Abstract: Recent empirical studies have demonstrated long-memory in the signs of orders to buy or sell in financial markets [J.-P. Bouchaud, Y. Gefen, M. Potters, and M. Wyart, Quant. Finance 4, 176 (2004); F. Lillo and J. D. Farmer, Dyn. Syst. Appl. 8, 3 (2004)]. We show how this can be caused by delays in market clearing. Under the common practice of order splitting, large orders are broken up into pieces and executed incrementally. If the size of such large orders is power-law distributed, this gives rise to power-law decaying autocorrelations in the signs of executed orders. More specifically, we show that if the cumulative distribution of large orders of volume $v$ is proportional to $v^{-\alpha}$ and the size of executed orders is constant, the autocorrelation of order signs as a function of the lag $\tau$ is asymptotically proportional to $\tau^{-(\alpha-1)}$. This is a long-memory process when $\alpha < 2$. With a few caveats, this gives a good match to the data. A version of the model also shows long-memory fluctuations in order execution rates, which may be relevant for explaining the long memory of price diffusion rates.
125 citations
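The mechanism in the abstract above is easy to simulate in a toy form: draw heavy-tailed metaorder sizes, execute each as a run of same-sign unit trades, and measure the sign autocorrelation. This sketch simplifies the paper's model (metaorders here execute back to back rather than interleaved, and all names are ours), but it reproduces the qualitative effect of persistent order signs:

```python
import random

def sign_series(n_orders, alpha, rng):
    """Each metaorder has a Pareto(alpha)-distributed integer size and a
    random sign, and is executed as that many consecutive unit trades.
    Heavy-tailed sizes produce long runs of identical signs."""
    signs = []
    for _ in range(n_orders):
        size = int(rng.paretovariate(alpha))  # heavy-tailed, size >= 1
        s = rng.choice((-1, 1))
        signs.extend([s] * size)
    return signs

def autocorr(x, lag):
    """Sample autocorrelation of a numeric series at a given lag."""
    n = len(x) - lag
    mean = sum(x) / len(x)
    num = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n))
    den = sum((xi - mean) ** 2 for xi in x)
    return num / den

# With alpha = 1.5 (< 2), sign runs are long enough that the lag-1
# autocorrelation is clearly positive, consistent with long memory.
signs = sign_series(2000, 1.5, random.Random(1))
```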
Authors
Showing all 606 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| James Hone | 127 | 637 | 108193 |
| James H. Brown | 125 | 423 | 72040 |
| Alan S. Perelson | 118 | 632 | 66767 |
| Mark Newman | 117 | 348 | 168598 |
| Bette T. Korber | 117 | 392 | 49526 |
| Marten Scheffer | 111 | 350 | 73789 |
| Peter F. Stadler | 103 | 901 | 56813 |
| Sanjay Jain | 103 | 881 | 46880 |
| Henrik Jeldtoft Jensen | 102 | 1286 | 48138 |
| Dirk Helbing | 101 | 642 | 56810 |
| Oliver G. Pybus | 100 | 447 | 45313 |
| Andrew P. Dobson | 98 | 322 | 44211 |
| Carel P. van Schaik | 94 | 329 | 26908 |
| Seth Lloyd | 92 | 490 | 50159 |
| Andrew W. Lo | 85 | 378 | 51440 |