Showing papers by "Santa Fe Institute", published in 2021


Journal ArticleDOI
19 Aug 2021
TL;DR: This Primer provides an anatomy of network analysis techniques for psychological data, describes the current state of the art, discusses open problems, and covers assessment techniques for evaluating network robustness and replicability.
Abstract: In recent years, network analysis has been applied to identify and analyse patterns of statistical association in multivariate psychological data. In these approaches, network nodes represent variables in a data set, and edges represent pairwise conditional associations between variables in the data, while conditioning on the remaining variables. This Primer provides an anatomy of these techniques, describes the current state of the art and discusses open problems. We identify relevant data structures in which network analysis may be applied: cross-sectional data, repeated measures and intensive longitudinal data. We then discuss the estimation of network structures in each of these cases, as well as assessment techniques to evaluate network robustness and replicability. Successful applications of the technique in different research areas are highlighted. Finally, we discuss limitations and challenges for future research. Network analysis allows the investigation of complex patterns and relationships by examining nodes and the edges connecting them. Borsboom et al. discuss the adoption of network analysis in psychological research.
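As a rough illustration of the kind of network estimation described above (not the authors' code), the sketch below fits a sparse partial-correlation network to simulated item-level data with the graphical lasso; nonzero off-diagonal entries of the resulting partial-correlation matrix correspond to drawn edges. The data, variable count, and regularization choice are all assumptions for illustration.

```python
# Hypothetical illustration (not the authors' code): estimating a
# partial-correlation network of the kind described above, using the
# graphical lasso on simulated "psychological" item data.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
n_obs, n_vars = 500, 6                      # e.g. 6 questionnaire items
X = rng.normal(size=(n_obs, n_vars))
X[:, 1] += 0.6 * X[:, 0]                    # induce some conditional associations
X[:, 2] += 0.5 * X[:, 1]

model = GraphicalLassoCV().fit(X)           # sparse inverse-covariance estimate
precision = model.precision_

# Convert the precision matrix to partial correlations: these are the edge
# weights of the network (nonzero off-diagonal entries are the drawn edges).
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)
print(np.round(partial_corr, 2))
```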

206 citations


Journal ArticleDOI
01 Mar 2021
TL;DR: In this paper, the authors investigate the association between self-reported mask-wearing, physical distancing, and SARS-CoV-2 transmission in the USA, along with the effect of statewide mandates on mask uptake.
Abstract: Background: Face masks have become commonplace across the USA because of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) epidemic. Although evidence suggests that masks help to curb the spread of the disease, there is little empirical research at the population level. We investigate the association between self-reported mask-wearing, physical distancing, and SARS-CoV-2 transmission in the USA, along with the effect of statewide mandates on mask uptake. Methods: Serial cross-sectional surveys were administered via a web platform to randomly surveyed US individuals aged 13 years and older, to query self-reports of face mask-wearing. Survey responses were combined with instantaneous reproductive number (Rt) estimates from two publicly available sources, which served as the outcome of interest. Measures of physical distancing, community demographics, and other potential sources of confounding (from publicly available sources) were also assessed. We fitted multivariate logistic regression models to estimate the association between mask-wearing and community transmission control (Rt < 1). Findings: 378 207 individuals responded to the survey between June 3 and July 27, 2020, of which 4186 were excluded for missing data. We observed an increasing trend in reported mask usage across the USA, although uptake varied by geography. A logistic model controlling for physical distancing, population demographics, and other variables found that a 10% increase in self-reported mask-wearing was associated with an increased odds of transmission control (odds ratio 3·53, 95% CI 2·03–6·43). We found that communities with high reported mask-wearing and physical distancing had the highest predicted probability of transmission control. Segmented regression analysis of reported mask-wearing showed no statistically significant change in the slope after mandates were introduced; however, the upward trend in reported mask-wearing was preserved. Interpretation: The widespread reported use of face masks combined with physical distancing increases the odds of SARS-CoV-2 transmission control. Self-reported mask-wearing increased separately from government mask mandates, suggesting that supplemental public health interventions are needed to maximise adoption and help to curb the ongoing epidemic. Funding: Flu Lab, Google.org (via the Tides Foundation), National Institutes of Health, National Science Foundation, Morris-Singer Foundation, MOOD, Branco Weiss Fellowship, Ending Pandemics, Centers for Disease Control and Prevention (USA).
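A minimal sketch of the type of model described in the Methods (not the study's code or data): a logistic regression relating self-reported mask-wearing and physical distancing to whether a community achieves transmission control (Rt < 1), with the coefficient rescaled to an odds ratio per 10-percentage-point increase in mask-wearing. All numbers below are simulated.

```python
# Hypothetical sketch (not the study's code or data): a logistic model of
# the kind described above, relating self-reported mask-wearing to whether
# a community achieved transmission control (Rt < 1).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
mask = rng.uniform(0.2, 1.0, n)             # fraction reporting mask-wearing
distancing = rng.uniform(0.0, 1.0, n)       # physical-distancing covariate
logit = -3.0 + 4.0 * mask + 1.5 * distancing
controlled = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 if Rt < 1

X = sm.add_constant(np.column_stack([mask, distancing]))
fit = sm.Logit(controlled, X).fit(disp=0)

# Odds ratio associated with a 10-percentage-point increase in mask-wearing,
# analogous to the "per 10% increase" effect reported in the abstract.
or_per_10pct = np.exp(0.10 * fit.params[1])
print(round(or_per_10pct, 2))
```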

184 citations


Journal ArticleDOI
TL;DR: Estimates suggest the degree to which lithium-ion technologies' price decline might have been limited by performance requirements other than cost per energy capacity and suggest that battery technologies developed for stationary applications might achieve faster cost declines, though engineering-based mechanistic cost modeling is required.
Abstract: Lithium-ion technologies are increasingly employed to electrify transportation and provide stationary energy storage for electrical grids, and as such their development has garnered much attention. However, their deployment is still relatively limited, and their broader adoption will depend on their potential for cost reduction and performance improvement. Understanding this potential can inform critical climate change mitigation strategies, including public policies and technology development efforts. However, many existing estimates of past cost decline, which often serve as starting points for forecasting models, rely on limited data series and measures of technological progress. Here we systematically collect, harmonize, and combine various data series of price, market size, research and development, and performance of lithium-ion technologies. We then develop representative series for these measures, while separating cylindrical cells from all types of cells. For both, we find that the real price of lithium-ion cells, scaled by their energy capacity, has declined by about 97% since their commercial introduction in 1991. We estimate that between 1992 and 2016, real price per energy capacity declined 13% per year for both all types of cells and cylindrical cells, and upon a doubling of cumulative market size, decreased 20% for all types of cells and 24% for cylindrical cells. We also consider additional performance characteristics including energy density and specific energy. When energy density is incorporated into the definition of service provided by a lithium-ion battery, estimated technological improvement rates increase considerably. The annual decline in real price per service increases from 13 to 17% for both all types of cells and cylindrical cells while learning rates increase from 20 to 27% for all cell shapes and 24 to 31% for cylindrical cells. These increases suggest that previously reported improvement rates might underestimate the rate of lithium-ion technologies' change. Moreover, our improvement rate estimates suggest the degree to which lithium-ion technologies' price decline might have been limited by performance requirements other than cost per energy capacity. These rates also suggest that battery technologies developed for stationary applications, where restrictions on volume and mass are relaxed, might achieve faster cost declines, though engineering-based mechanistic cost modeling is required to further characterize this potential. The methods employed to collect these data and estimate improvement rates are designed to serve as a blueprint for how to work with sparse data when making consequential measurements of technological change.
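For readers unfamiliar with the quantities quoted above, the short calculation below spells out the experience-curve ("Wright's law") arithmetic that links a learning rate to a cost-versus-cumulative-production exponent, and checks the cumulative decline implied by a 13% annual price drop. The functional form is the standard experience-curve assumption, not code from the paper.

```python
# Illustrative arithmetic (an assumption about the standard experience-curve
# form, not the paper's code): cost ~ a * x**(-b), where x is cumulative
# market size; the learning rate is the fractional cost drop per doubling
# of x, LR = 1 - 2**(-b).
import math

def learning_rate(b):
    """Fractional cost drop per doubling of cumulative production, cost ~ x**(-b)."""
    return 1 - 2 ** (-b)

def exponent_from_learning_rate(lr):
    """Invert the relation above: the b for which 1 - 2**(-b) equals lr."""
    return -math.log2(1 - lr)

print(round(exponent_from_learning_rate(0.20), 2))   # ~0.32 (all cell types)
print(round(exponent_from_learning_rate(0.24), 2))   # ~0.40 (cylindrical cells)

# Cumulative price decline implied by a 13% annual decline over 1992-2016,
# consistent with the ~97% decline since 1991 quoted in the abstract:
print(round(1 - (1 - 0.13) ** (2016 - 1992), 2))     # ~0.96
```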

169 citations


Journal ArticleDOI
01 Feb 2021
TL;DR: In this paper, the authors argue that research institutions devoted to sustainability should focus more on creating the conditions for experimenting with multiple kinds of knowledge and ways of knowing to foster sustainability-oriented learning.
Abstract: Sustainability science needs more systematic approaches for mobilizing knowledge in support of interventions that may bring about transformative change. In this Perspective, we contend that action-oriented knowledge for sustainability emerges when working in integrated ways with the many kinds of knowledge involved in the shared design, enactment and realization of change. The pluralistic and integrated approach we present rejects technocratic solutions to complex sustainability challenges and foregrounds individual and social learning. We argue that research institutions devoted to sustainability should focus more on creating the conditions for experimenting with multiple kinds of knowledge and ways of knowing to foster sustainability-oriented learning. Sustainability science needs to better mobilize a range of knowledge to support transformative change. This Perspective contends that such transformative, action-oriented knowledge emerges from integrating multiple kinds of knowledge and ways of knowing.

139 citations


Journal ArticleDOI
20 Aug 2021-Science
TL;DR: In this paper, the authors investigated the spatial invasion dynamics of lineage B.1.1.7 by jointly analyzing UK human mobility, virus genomes, and community-based polymerase chain reaction data.
Abstract: Understanding the causes and consequences of the emergence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants of concern is crucial to pandemic control yet difficult to achieve because they arise in the context of variable human behavior and immunity. We investigated the spatial invasion dynamics of lineage B.1.1.7 by jointly analyzing UK human mobility, virus genomes, and community-based polymerase chain reaction data. We identified a multistage spatial invasion process in which early B.1.1.7 growth rates were associated with mobility and asymmetric lineage export from a dominant source location, enhancing the effects of B.1.1.7's increased intrinsic transmissibility. We further explored how B.1.1.7 spread was shaped by nonpharmaceutical interventions and spatial variation in previous attack rates. Our findings show that careful accounting of the behavioral and epidemiological context within which variants of concern emerge is necessary to interpret correctly their observed relative growth rates.

119 citations


Journal ArticleDOI
TL;DR: The theory construction methodology (TCM) presented in this paper is a five-step procedure in which a theorist identifies a domain of empirical phenomena, formulates a prototheory of explanatory principles, encodes those principles in a formal model, and then evaluates whether the model and the overall theory adequately reproduce the phenomena.
Abstract: This article aims to improve theory formation in psychology by developing a practical methodology for constructing explanatory theories: theory construction methodology (TCM). TCM is a sequence of five steps. First, the theorist identifies a domain of empirical phenomena that becomes the target of explanation. Second, the theorist constructs a prototheory, a set of theoretical principles that putatively explain these phenomena. Third, the prototheory is used to construct a formal model, a set of model equations that encode explanatory principles. Fourth, the theorist investigates the explanatory adequacy of the model by formalizing its empirical phenomena and assessing whether it indeed reproduces these phenomena. Fifth, the theorist studies the overall adequacy of the theory by evaluating whether the identified phenomena are indeed reproduced faithfully and whether the explanatory principles are sufficiently parsimonious and substantively plausible. We explain TCM with an example taken from research on intelligence (the mutualism model of intelligence), in which key elements of the method have been successfully implemented. We discuss the place of TCM in the larger scheme of scientific research and propose an outline for a university curriculum that can systematically educate psychologists in the process of theory formation.
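To make step 3 of TCM concrete, here is a heavily simplified sketch of what "encoding explanatory principles as model equations" can look like, using a schematic mutualism-style model in which each ability grows logistically and is boosted by the levels of the others. The equations, parameters, and coupling structure are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch of TCM step 3 (encoding explanatory principles in model
# equations). This is a schematic mutualism-style model (an assumed form, not
# the authors' exact equations): each ability grows logistically and is
# boosted by the current levels of the other abilities.
import numpy as np

def simulate(M, a, K, x0, dt=0.01, steps=5000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        coupling = M @ x                      # mutual facilitation between abilities
        dx = a * x * (1 - x / K) + a * x * coupling / K
        x = x + dt * dx
    return x

n = 4
M = 0.05 * (np.ones((n, n)) - np.eye(n))      # weak positive coupling, no self-term
a = np.full(n, 0.5)                           # growth rates
K = np.full(n, 10.0)                          # carrying capacities
x_final = simulate(M, a, K, x0=[1, 2, 1.5, 0.5])
print(np.round(x_final, 2))   # all abilities settle above K due to mutual facilitation
```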

107 citations


Journal ArticleDOI
TL;DR: In this article, the authors synthesize the biogeography of key organisms (vascular and non-vascular vegetation and soil microorganisms), attributes (functional traits, spatial patterns, plant-plant and plant-soil interactions) and processes (productivity and land cover) across global drylands.
Abstract: Despite their extent and socio-ecological importance, a comprehensive biogeographical synthesis of drylands is lacking. Here we synthesize the biogeography of key organisms (vascular and non-vascular vegetation and soil microorganisms), attributes (functional traits, spatial patterns, plant-plant and plant-soil interactions) and processes (productivity and land cover) across global drylands. These areas have a long evolutionary history, are centers of diversification for many plant lineages and include important plant diversity hotspots. This diversity captures a strikingly high portion of the variation in leaf functional diversity observed globally. Part of this functional diversity is associated with the large variation in response and effect traits in the shrubs encroaching dryland grasslands. Aridity and its interplay with the traits of interacting plant species largely shape biogeographical patterns in plant-plant and plant-soil interactions, and in plant spatial patterns. Aridity also drives the composition of biocrust communities and vegetation productivity, which shows large geographical variation. We finish our review by discussing major research gaps, which include: i) studying regular vegetation spatial patterns, ii) establishing large-scale plant and biocrust field surveys assessing individual-level trait measurements, iii) determining whether the impacts of plant-plant and plant-soil interactions on biodiversity are predictable and iv) assessing how elevated CO2 modulates future aridity conditions and plant productivity.

104 citations


Journal ArticleDOI
27 May 2021-Nature
TL;DR: In this paper, a simple and robust scaling law was proposed to capture the temporal and spatial spectrum of population movement on the basis of large-scale mobility data from diverse cities around the globe, and it was shown that the spatio-temporal flows to different locations give rise to prominent spatial clusters with an area distribution that follows Zipf's law.
Abstract: Human mobility impacts many aspects of a city, from its spatial structure1-3 to its response to an epidemic4-7. It is also ultimately key to social interactions8, innovation9,10 and productivity11. However, our quantitative understanding of the aggregate movements of individuals remains incomplete. Existing models-such as the gravity law12,13 or the radiation model14-concentrate on the purely spatial dependence of mobility flows and do not capture the varying frequencies of recurrent visits to the same locations. Here we reveal a simple and robust scaling law that captures the temporal and spatial spectrum of population movement on the basis of large-scale mobility data from diverse cities around the globe. According to this law, the number of visitors to any location decreases as the inverse square of the product of their visiting frequency and travel distance. We further show that the spatio-temporal flows to different locations give rise to prominent spatial clusters with an area distribution that follows Zipf's law15. Finally, we build an individual mobility model based on exploration and preferential return to provide a mechanistic explanation for the discovered scaling law and the emerging spatial structure. Our findings corroborate long-standing conjectures in human geography (such as central place theory16 and Weber's theory of emergent optimality10) and allow for predictions of recurrent flows, providing a basis for applications in urban planning, traffic engineering and the mitigation of epidemic diseases.
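The inverse-square visitation law stated above can be written as a one-line function; the snippet below (assumed notation, not the authors' code) shows how doubling either travel distance or visiting frequency cuts the predicted number of visitors by a factor of four.

```python
# Illustrative sketch (assumed notation, not the authors' code) of the
# visitation scaling law described above: the number of visitors to a
# location who live a distance r away and visit f times per unit time
# falls off as the inverse square of the product r * f.
def visitors(r_km, f_per_month, c=1.0):
    """Predicted visitor density ~ c / (r * f)**2."""
    return c / (r_km * f_per_month) ** 2

# Doubling either the travel distance or the visiting frequency cuts the
# predicted number of visitors by a factor of four:
print(visitors(5, 1) / visitors(10, 1))   # 4.0
print(visitors(5, 1) / visitors(5, 2))    # 4.0
```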

104 citations


Journal ArticleDOI
29 Jan 2021
TL;DR: In complexity economics, agents explore, react, and constantly change their actions and strategies in response to the outcome they mutually create as mentioned in this paper, and the resulting outcome may not be in equilibrium and may display patterns and emergent phenomena not visible to equilibrium analysis.
Abstract: Conventional, neoclassical economics assumes perfectly rational agents (firms, consumers, investors) who face well-defined problems and arrive at optimal behaviour consistent with — in equilibrium with — the overall outcome caused by this behaviour. This rational, equilibrium system produces an elegant economics, but is restrictive and often unrealistic. Complexity economics relaxes these assumptions. It assumes that agents differ, that they have imperfect information about other agents and must, therefore, try to make sense of the situation they face. Agents explore, react and constantly change their actions and strategies in response to the outcome they mutually create. The resulting outcome may not be in equilibrium and may display patterns and emergent phenomena not visible to equilibrium analysis. The economy becomes something not given and existing but constantly forming from a developing set of actions, strategies and beliefs — something not mechanistic, static, timeless and perfect but organic, always creating itself, alive and full of messy vitality. Complexity economics relaxes the assumptions of neoclassical economics to assume that agents differ, that they have imperfect information about other agents and they must, therefore, try to make sense of the situation they face. This Perspective sketches the ideas of complexity economics and describes how it links to complexity science more broadly.

94 citations


Journal ArticleDOI
TL;DR: In this article, the authors lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance, and present data supporting the predictions of this theory.
Abstract: In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers of the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.

90 citations


Journal ArticleDOI
TL;DR: Collective behavior provides a framework for understanding how the actions and properties of groups emerge from the way individuals generate and share information as mentioned in this paper, and the study of collective behavior must rise to a crisis discipline just as medicine, conservation, and climate science have.
Abstract: Collective behavior provides a framework for understanding how the actions and properties of groups emerge from the way individuals generate and share information. In humans, information flows were initially shaped by natural selection yet are increasingly structured by emerging communication technologies. Our larger, more complex social networks now transfer high-fidelity information over vast distances at low cost. The digital age and the rise of social media have accelerated changes to our social systems, with poorly understood functional consequences. This gap in our knowledge represents a principal challenge to scientific progress, democracy, and actions to address global crises. We argue that the study of collective behavior must rise to a “crisis discipline” just as medicine, conservation, and climate science have, with a focus on providing actionable insight to policymakers and regulators for the stewardship of social systems.

Journal ArticleDOI
TL;DR: In this article, the authors assess the economic trade-offs of expanding and accelerating testing for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) across the USA in different transmission scenarios.
Abstract: Background: To mitigate the COVID-19 pandemic, countries worldwide have enacted unprecedented movement restrictions, physical distancing measures, and face mask requirements. Until safe and efficacious vaccines or antiviral drugs become widely available, viral testing remains the primary mitigation measure for rapid identification and isolation of infected individuals. We aimed to assess the economic trade-offs of expanding and accelerating testing for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) across the USA in different transmission scenarios. Methods: We used a multiscale model that incorporates SARS-CoV-2 transmission at the population level and daily viral load dynamics at the individual level to assess eight surveillance testing strategies that varied by testing frequency (from daily to monthly testing) and isolation period (1 or 2 weeks), compared with the status-quo strategy of symptom-based testing and isolation. For each testing strategy, we first estimated the costs (incorporating costs of diagnostic testing and admissions to hospital, and salary lost while in isolation) and years of life lost (YLLs) prevented under rapid and low transmission scenarios. We then assessed the testing strategies across a range of scenarios, each defined by effective reproduction number (Re), willingness to pay per YLL averted, and cost of a test, to estimate the probability that a particular strategy had the greatest net benefit. Additionally, for a range of transmission scenarios (Re from 1·1 to 3), we estimated a threshold test price at which the status-quo strategy outperforms all testing strategies considered. Findings: Our modelling showed that daily testing combined with a 2-week isolation period was the most costly strategy considered, reflecting increased costs with greater test frequency and length of isolation period. Assuming a societal willingness to pay of US$100 000 per YLL averted and a price of $5 per test, the strategy most likely to be cost-effective under a rapid transmission scenario (Re of 2·2) is weekly testing followed by a 2-week isolation period subsequent to a positive test result. Under low transmission scenarios (Re of 1·2), monthly testing of the population followed by 1-week isolation rather than 2-week isolation is likely to be most cost-effective. Expanded surveillance testing is more likely to be cost-effective than the status-quo testing strategy if the price per test is less than $75 across all transmission rates considered. Interpretation: Extensive expansion of SARS-CoV-2 testing programmes with more frequent and rapid tests across communities coupled with isolation of individuals with confirmed infection is essential for mitigating the COVID-19 pandemic. Furthermore, resources recouped from shortened isolation duration could be cost-effectively allocated to more frequent testing. Funding: US National Institutes of Health, US Centers for Disease Control and Prevention, and Love, Tito's.
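A toy sketch of the net-monetary-benefit comparison implied by the willingness-to-pay framing above (made-up numbers, not the study's multiscale model): a strategy is preferred when willingness to pay per YLL averted times YLLs averted, minus its cost, exceeds that of the alternatives.

```python
# Hypothetical sketch (assumed values, not the study's model): the
# net-monetary-benefit comparison behind statements like "most likely to be
# cost-effective at a willingness to pay of $100 000 per YLL averted".
def net_benefit(ylls_averted, cost_usd, wtp_per_yll=100_000):
    """Net monetary benefit of a testing strategy relative to doing nothing."""
    return wtp_per_yll * ylls_averted - cost_usd

# Toy comparison of two strategies for a notional community (made-up numbers):
weekly  = net_benefit(ylls_averted=120.0, cost_usd=9_000_000)
monthly = net_benefit(ylls_averted=60.0,  cost_usd=2_500_000)
best = max([("weekly testing", weekly), ("monthly testing", monthly)],
           key=lambda kv: kv[1])
print(best)   # the strategy with the greatest net benefit in this toy example
```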

Journal ArticleDOI
TL;DR: In this paper, the authors explored correlates of vaccine hesitancy, considering political beliefs and psychosocial concepts, in a non-probability quota-sampled online survey of 1007 Austrians.
Abstract: BACKGROUND: With the coronavirus disease 2019 (COVID-19) pandemic surging and new mutations evolving, trust in vaccines is essential. METHODS: We explored correlates of vaccine hesitancy, considering political beliefs and psychosocial concepts, in a non-probability quota-sampled online survey of 1007 Austrians. RESULTS: We identified several important correlates of vaccine hesitancy, ranging from demographics to complex factors such as voting behavior or trust in the government. Among those hesitant towards a COVID-19 vaccine, the odds of having voted for opposition parties (opp) or of not having voted (novote) were elevated (95% confidence interval (CI) opp, 1.44-2.95) to 2.25-times (95% CI novote, 1.53-3.30) relative to having voted for governing parties. Only 46.2% trusted the Austrian government to provide safe vaccines, and 80.7% requested independent scientific evaluations of vaccine safety to increase their willingness to vaccinate. CONCLUSIONS: Contrary to expectations, psychosocial dimensions were only weakly correlated with vaccine hesitancy. However, the strong correlation between distrust in the vaccine and distrust in authorities suggests a common cause of disengagement from public discourse.

Journal ArticleDOI
TL;DR: In this article, the authors quantify the impact of parenthood on scholarship using an extensive survey of the timing of parent events, longitudinal publication data, and perceptions of research expectations among 3064 tenure-track faculty at 450 Ph.D.-granting computer science, history, and business departments across the United States and Canada, along with data on institution-specific parental leave policies.
Abstract: Across academia, men and women tend to publish at unequal rates. Existing explanations include the potentially unequal impact of parenthood on scholarship, but a lack of appropriate data has prevented its clear assessment. Here, we quantify the impact of parenthood on scholarship using an extensive survey of the timing of parenthood events, longitudinal publication data, and perceptions of research expectations among 3064 tenure-track faculty at 450 Ph.D.-granting computer science, history, and business departments across the United States and Canada, along with data on institution-specific parental leave policies. Parenthood explains most of the gender productivity gap by lowering the average short-term productivity of mothers, even as parents tend to be slightly more productive on average than nonparents. However, the size of the productivity penalty for mothers appears to have shrunk over time. Women report that paid parental leave and adequate childcare are important factors in their recruitment and retention. These results have broad implications for efforts to improve the inclusiveness of scholarship.

Posted ContentDOI
Estee Y. Cramer, Evan L. Ray, Velma K. Lopez, Johannes Bracher, and 281 more authors (53 institutions)
05 Feb 2021-medRxiv
TL;DR: In this paper, the authors systematically evaluated 23 models that regularly submitted forecasts of reported weekly incident COVID-19 mortality counts in the US at the state and national level at the CDC.
Abstract: Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. In 2020, the COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized hundreds of thousands of specific predictions from more than 50 different academic, industry, and independent research groups. This manuscript systematically evaluates 23 models that regularly submitted forecasts of reported weekly incident COVID-19 mortality counts in the US at the state and national level. One of these models was a multi-model ensemble that combined all available forecasts each week. The performance of individual models showed high variability across time, geospatial units, and forecast horizons. Half of the models evaluated showed better accuracy than a naive baseline model. In combining the forecasts from all teams, the ensemble showed the best overall probabilistic accuracy of any model. Forecast accuracy degraded as models made predictions farther into the future, with probabilistic accuracy at a 20-week horizon more than 5 times worse than when predicting at a 1-week horizon. This project underscores the role that collaboration and active coordination between governmental public health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
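As a rough illustration of the multi-model ensemble idea evaluated above (an assumption about its mechanics, not the Hub's production code), the snippet below combines several models' quantile forecasts by taking the median at each quantile level.

```python
# Illustrative sketch (assumed setup, not the Hub's production code): a simple
# multi-model quantile ensemble of the kind evaluated above, formed here by
# taking the median of each predictive quantile across models.
import numpy as np

quantile_levels = [0.05, 0.25, 0.5, 0.75, 0.95]
# Rows: individual models' quantile forecasts of weekly deaths for one
# location and one horizon (made-up numbers).
model_forecasts = np.array([
    [120, 180, 240, 310, 420],
    [100, 150, 210, 280, 380],
    [140, 200, 260, 330, 470],
])
ensemble = np.median(model_forecasts, axis=0)
print(dict(zip(quantile_levels, ensemble)))
```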

Journal ArticleDOI
TL;DR: It is demonstrated that a recurrent neural network (RNN) can learn to modify its representation of complex information using only examples, and the associated learning mechanism is explained with new theory.
Abstract: The ability to store and manipulate information is a hallmark of computational systems. Whereas computers are carefully engineered to represent and perform mathematical operations on structured data, neurobiological systems adapt to perform analogous functions without needing to be explicitly engineered. Recent efforts have made progress in modelling the representation and recall of information in neural systems. However, precisely how neural systems learn to modify these representations remains far from understood. Here, we demonstrate that a recurrent neural network (RNN) can learn to modify its representation of complex information using only examples, and we explain the associated learning mechanism with new theory. Specifically, we drive an RNN with examples of translated, linearly transformed or pre-bifurcated time series from a chaotic Lorenz system, alongside an additional control signal that changes value for each example. By training the network to replicate the Lorenz inputs, it learns to autonomously evolve about a Lorenz-shaped manifold. Additionally, it learns to continuously interpolate and extrapolate the translation, transformation and bifurcation of this representation far beyond the training data by changing the control signal. Furthermore, we demonstrate that RNNs can infer the bifurcation structure of normal forms and period doubling routes to chaos, and extrapolate non-dynamical, kinematic trajectories. Finally, we provide a mechanism for how these computations are learned, and replicate our main results using a Wilson–Cowan reservoir. Together, our results provide a simple but powerful mechanism by which an RNN can learn to manipulate internal representations of complex information, enabling the principled study and precise design of RNNs. Recurrent neural networks (RNNs) can learn to process temporal information, such as speech or movement. New work makes such approaches more powerful and flexible by describing theory and experiments demonstrating that RNNs can learn from a few examples to generalize and predict complex dynamics including chaotic behaviour.
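A minimal echo-state-network sketch in the spirit of the reservoir experiments described above (architecture, sizes, and hyperparameters are all assumptions, not the authors' setup): a reservoir is driven by a Lorenz time series, a linear readout is trained to predict the next state, and the network is then run autonomously so its internal state evolves about a Lorenz-like manifold.

```python
# Minimal echo-state-network sketch (an assumption-laden illustration, not the
# authors' architecture): a reservoir driven by a Lorenz time series is trained
# to predict the next state, then run autonomously.
import numpy as np

rng = np.random.default_rng(0)

def lorenz_series(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrated Lorenz trajectory, crudely rescaled for tanh units."""
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        out[i] = x
    return out / 20.0

data = lorenz_series(6000)

# Random reservoir with spectral radius ~0.9
N = 400
W_in = rng.uniform(-0.5, 0.5, (N, 3))
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def step(r, u):
    return np.tanh(W @ r + W_in @ u)

# Drive the reservoir with the data (teacher forcing) and collect states
r = np.zeros(N)
states = np.empty((len(data) - 1, N))
for t in range(len(data) - 1):
    r = step(r, data[t])
    states[t] = r

# Ridge-regress a linear readout that maps reservoir state -> next input
targets = data[1:]
ridge = 1e-4
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N),
                        states.T @ targets).T

# Autonomous rollout: feed the readout's prediction back in as the next input
u = data[-1]
trajectory = []
for _ in range(1000):
    r = step(r, u)
    u = W_out @ r
    trajectory.append(u)
# A point on the learned Lorenz-like attractor (behaviour depends on the
# random seed and hyperparameters in this toy setup):
print(np.round(trajectory[-1], 3))
```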

Journal ArticleDOI
07 Jul 2021-Neuron
TL;DR: This paper argues that gender bias is not a single problem but manifests as a collection of distinct issues that impact researchers' lives; it disentangles these facets and proposes concrete solutions that can be adopted by individuals, academic institutions, and society.

Journal ArticleDOI
TL;DR: A review of the current modeling methodologies and the challenges and opportunities for integrating them with social science research and risk communication and community engagement (RCCE) practice can be found in this paper.
Abstract: Social and behavioural factors are critical to the emergence, spread and containment of human disease, and are key determinants of the course, duration and outcomes of disease outbreaks. Recent epidemics of Ebola in West Africa and coronavirus disease 2019 (COVID-19) globally have reinforced the importance of developing infectious disease models that better integrate social and behavioural dynamics and theories. Meanwhile, the growth in capacity, coordination and prioritization of social science research and of risk communication and community engagement (RCCE) practice within the current pandemic response provides an opportunity for collaboration among epidemiological modellers, social scientists and RCCE practitioners towards a mutually beneficial research and practice agenda. Here, we provide a review of the current modelling methodologies and describe the challenges and opportunities for integrating them with social science research and RCCE practice. Finally, we set out an agenda for advancing transdisciplinary collaboration for integrated disease modelling and for more robust policy and practice for reducing disease transmission.

Journal ArticleDOI
TL;DR: This Perspective highlights where major differences still exist, and where the field of evolutionary computation could attempt to approach features from biological evolution more closely, namely neutrality and random drift, complex genotype-to-phenotype mappings with rich environmental interactions and major organizational transitions.
Abstract: Evolutionary computation is inspired by the mechanisms of biological evolution. With algorithmic improvements and increasing computing resources, evolutionary computation has discovered creative and innovative solutions to challenging practical problems. This paper evaluates how today's evolutionary computation compares to biological evolution and how it may fall short. A small number of well-accepted characteristics of biological evolution are considered: open-endedness, major transitions in organizational structure, neutrality and genetic drift, multi-objectivity, complex genotype-to-phenotype mappings and co-evolution. Evolutionary computation exhibits many of these to some extent but more can be achieved by scaling up with available computing and by emulating biology more carefully. In particular, evolutionary computation diverges from biological evolution in three key respects: it is based on small populations and strong selection; it typically uses direct genotype-to-phenotype mappings; and it does not achieve major organizational transitions. These shortcomings suggest a roadmap for future evolutionary computation research, and point to gaps in our understanding of how biology discovers major transitions. Advances in these areas can lead to evolutionary computation that approaches the complexity and flexibility of biology, and can serve as an executable model of biological processes. Evolutionary computation is inspired by biological evolution and exhibits characteristics familiar from biology such as open-endedness, multi-objectivity and co-evolution. This Perspective highlights where major differences still exist, and where the field of evolutionary computation could attempt to approach features from biological evolution more closely, namely neutrality and random drift, complex genotype-to-phenotype mappings with rich environmental interactions and major organizational transitions.
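To ground the contrast drawn above, the toy genetic algorithm below illustrates the "small populations and strong selection" regime with a direct genotype-to-phenotype mapping; all parameters are arbitrary choices for demonstration, not from the paper.

```python
# Minimal genetic-algorithm sketch illustrating the "small populations and
# strong selection" regime that the Perspective contrasts with biological
# evolution (a toy one-max problem; all parameters are arbitrary choices).
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 32, 20, 60, 1.0 / 32

def fitness(g):                 # direct genotype-to-phenotype mapping
    return sum(g)

def mutate(g):
    return [b ^ (random.random() < MUT_RATE) for b in g]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP_SIZE // 4]            # strong (truncation) selection
    pop = [mutate(random.choice(elite)) for _ in range(POP_SIZE)]

print(max(fitness(g) for g in pop))          # approaches GENOME_LEN
```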

Journal ArticleDOI
TL;DR: In this article, the authors extend decision theory and game theory to allow for information processing errors and show that it is necessary and sufficient that each agent's information processing be (1) nondeluded and balanced so that the agents cannot agree to disagree, (2) nondeluded and positively balanced so that it cannot be common knowledge that they are speculating, and (3) nondeluded, KTYK, and nested so that agents cannot speculate in equilibrium.
Abstract: Decision theory and game theory are extended to allow for information processing errors. This extended theory is then used to reexamine market speculation and consensus, both when all actions (opinions) are common knowledge and when they may not be. Five axioms of information processing are shown to be especially important to speculation and consensus. They are called nondelusion, knowing that you know (KTYK), nested, balanced, and positively balanced. We show that it is necessary and sufficient that each agent's information processing errors be (1) nondeluded and balanced so that the agents cannot agree to disagree, (2) nondeluded and positively balanced so that it cannot be common knowledge that they are speculating, and (3) nondeluded and KTYK and nested so that agents cannot speculate in equilibrium. Each condition is strictly weaker than the next one, and the last is strictly weaker than partition information.

Journal ArticleDOI
TL;DR: Evidence from the fossil record of macroevolutionary lags between the origin of a novelty and its ecological success demonstrates that novelty may be decoupled from innovation, and only definitions of novelty based on radicality can be assessed without reference to the subsequent history of the clade to which a novelty belongs.
Abstract: Since 1990 the recognition of deep homologies among metazoan developmental processes and the spread of more mechanistic approaches to developmental biology have led to a resurgence of interest in evolutionary novelty and innovation. Other evolutionary biologists have proposed central roles for behaviour and phenotypic plasticity in generating the conditions for the construction of novel morphologies, or invoked the accessibility of new regions of vast sequence spaces. These approaches contrast with more traditional emphasis on the exploitation of ecological opportunities as the primary source of novelty. This definitional cornucopia reflects differing stress placed on three attributes of novelties: their radical nature, the generation of new taxa, and ecological and evolutionary impact. Such different emphasis has led to conflating four distinct issues: the origin of novel attributes (genes, developmental processes, phenotypic characters), new functions, higher clades and the ecological impact of new structures and functions. Here I distinguish novelty (the origin of new characters, deep character transformations, or new combinations) from innovation, the ecological and evolutionary success of clades. Evidence from the fossil record of macroevolutionary lags between the origin of a novelty and its ecological success demonstrates that novelty may be decoupled from innovation, and only definitions of novelty based on radicality (rather than generativity or consequentiality) can be assessed without reference to the subsequent history of the clade to which a novelty belongs. These considerations suggest a conceptual framework for novelty and innovation, involving: (i) generation of the potential for novelty; (ii) the formation of novel attributes; (iii) refinement of novelties through adaptation; (iv) exploitation of novelties by a clade, which may coincide with a new round of ecological or environmental potentiation; followed by (v) the establishment of innovations through ecological processes. This framework recognizes that there is little empirical support for either the dominance of ecological opportunity or abrupt discontinuities (often caricatured as 'hopeful monsters'). This general framework may be extended to aspects of cultural and social innovation.

Proceedings ArticleDOI
26 Jun 2021
TL;DR: In this paper, the authors discuss fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field, and also speculate on what is needed for the grand challenge of making AI systems more robust, general, and adaptable.
Abstract: Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment ("AI Spring") and periods of disappointment, loss of confidence, and reduced funding ("AI Winter"). Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this talk I will discuss some fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I will also speculate on what is needed for the grand challenge of making AI systems more robust, general, and adaptable --- in short, more intelligent.

Journal ArticleDOI
TL;DR: In this paper, the authors performed a population-based study on the risk of arterial thromboembolism (ATE) and venous thrombinogenesis (VTE) and found a strong association between cancer, ATE and VTE.
Abstract: Aims: An interrelation between cancer and thrombosis is known, but population-based studies on the risk of both arterial thromboembolism (ATE) and venous thromboembolism (VTE) have not been performed. Methods and results: International Classification of Disease 10th Revision (ICD-10) diagnosis codes of all publicly insured persons in Austria (0-90 years) were extracted from the Austrian Association of Social Security Providers dataset covering the years 2006-07 (n = 8 306 244). Patients with a history of cancer or active cancer were defined as having at least one ICD-10 'C' diagnosis code, and patients with ATE and/or VTE as having at least one of I21/I24 (myocardial infarction), I63/I64 (stroke), I74 (arterial embolism), and I26/I80/I82 (venous thromboembolism) diagnosis code. Among 158 675 people with cancer, 8559 (5.4%) had an ATE diagnosis code and 7244 (4.6%) a VTE diagnosis code. In contrast, among 8 147 569 people without cancer, 69 381 (0.9%) had an ATE diagnosis code and 29 307 (0.4%) a VTE diagnosis code. This corresponds to age-stratified random-effects relative risks (RR) of 6.88 [95% confidence interval (CI) 4.81-9.84] for ATE and 14.91 (95% CI 8.90-24.95) for VTE. ATE proportion was highest in patients with urinary tract malignancies (RR: 7.16 [6.74-7.61]) and lowest in patients with endocrine cancer (RR: 2.49 [2.00-3.10]). The corresponding VTE proportion was highest in cancer of the mesothelium/soft tissue (RR: 19.35 [17.44-21.47]) and lowest in oropharyngeal cancer (RR: 6.62 [5.61-7.81]). Conclusion: The RRs of both ATE and VTE are significantly higher in persons with cancer. Our population-level meta-data indicate a strong association between cancer, ATE and VTE, and support the concept of shared risk factors and pathobiology between these diseases. Relative risk of ATE and VTE in persons with a cancer diagnosis code versus persons without a cancer diagnosis code.
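As a sanity check on the quoted counts (illustrative arithmetic only), the crude, unadjusted relative risks implied by the numbers in the abstract can be computed directly; they differ from the paper's age-stratified random-effects estimates (6.88 for ATE, 14.91 for VTE).

```python
# Worked check (illustrative only): crude relative risks implied by the counts
# quoted above. The paper's reported RRs are age-stratified random-effects
# estimates, so they differ from these unadjusted ratios.
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    return (events_exposed / n_exposed) / (events_unexposed / n_unexposed)

rr_ate = relative_risk(8559, 158_675, 69_381, 8_147_569)
rr_vte = relative_risk(7244, 158_675, 29_307, 8_147_569)
print(round(rr_ate, 1), round(rr_vte, 1))   # crude RRs, unadjusted for age
```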

Journal ArticleDOI
TL;DR: In this paper, a model of the dynamics of policy effectiveness drawing upon the results of a large panel survey implemented in Germany during the first and second waves of the COVID-19 pandemic is presented.
Abstract: What is an effective vaccination policy to end the COVID-19 pandemic? We address this question in a model of the dynamics of policy effectiveness drawing upon the results of a large panel survey implemented in Germany during the first and second waves of the pandemic. We observe increased opposition to vaccinations were they to be legally required. In contrast, for voluntary vaccinations, there was higher and undiminished support. We find that public distrust undermines vaccine acceptance, and is associated with a belief that the vaccine is ineffective and, if enforced, compromises individual freedom. We model how the willingness to be vaccinated may vary over time in response to the fraction of the population already vaccinated and whether vaccination has occurred voluntarily or not. A negative effect of enforcement on vaccine acceptance (of the magnitude observed in our panel or even considerably smaller) could result in a large increase in the numbers that would have to be vaccinated unwillingly in order to reach a herd-immunity target. Costly errors may be avoided if policy makers understand that citizens' preferences are not fixed but will be affected both by the crowding-out effect of enforcement and by conformism. Our findings have broad policy applicability beyond COVID-19 to cases in which voluntary citizen compliance is essential because state capacities are limited and because effectiveness may depend on the ways that the policies themselves alter citizens' beliefs and preferences.

Journal ArticleDOI
TL;DR: In this article, the authors analyzed rank-dependent patterns of aggression within social dominance hierarchies in 172 social groups across 85 species in 23 orders and found that the majority of groups (133 groups, 77%) follow a downward heuristic, while a significant minority (38 groups, 22%) show more complex social dominance patterns (close competitors or bullying) consistent with higher levels of social information use.
Abstract: Members of a social species need to make appropriate decisions about who, how, and when to interact with others in their group. However, it has been difficult for researchers to detect the inputs to these decisions and, in particular, how much information individuals actually have about their social context. We present a method that can serve as a social assay to quantify how patterns of aggression depend upon information about the ranks of individuals within social dominance hierarchies. Applied to existing data on aggression in 172 social groups across 85 species in 23 orders, it reveals three main patterns of rank-dependent social dominance: the downward heuristic (aggress uniformly against lower-ranked opponents), close competitors (aggress against opponents ranked slightly below self), and bullying (aggress against opponents ranked much lower than self). The majority of the groups (133 groups, 77%) follow a downward heuristic, but a significant minority (38 groups, 22%) show more complex social dominance patterns (close competitors or bullying) consistent with higher levels of social information use. These patterns are not phylogenetically constrained and different groups within the same species can use different patterns, suggesting that heuristic use may depend on context and the structuring of aggression by social information should not be considered a fixed characteristic of a species. Our approach provides opportunities to study the use of social information within and across species and the evolution of social complexity and cognition.

Journal ArticleDOI
01 Sep 2021-Nature
TL;DR: In this article, the authors quantify the impacts of fire on the ranges of species in Amazonia by using remote sensing estimates of fire and deforestation and comprehensive range estimates of 11,514 plant species and 3,079 vertebrate species in the Amazon.
Abstract: Biodiversity contributes to the ecological and climatic stability of the Amazon Basin1,2, but is increasingly threatened by deforestation and fire3,4. Here we quantify these impacts over the past two decades using remote-sensing estimates of fire and deforestation and comprehensive range estimates of 11,514 plant species and 3,079 vertebrate species in the Amazon. Deforestation has led to large amounts of habitat loss, and fires further exacerbate this already substantial impact on Amazonian biodiversity. Since 2001, 103,079–189,755 km2 of Amazon rainforest has been impacted by fires, potentially impacting the ranges of 77.3–85.2% of species that are listed as threatened in this region5. The impacts of fire on the ranges of species in Amazonia could be as high as 64%, and greater impacts are typically associated with species that have restricted ranges. We find close associations between forest policy, fire-impacted forest area and their potential impacts on biodiversity. In Brazil, forest policies that were initiated in the mid-2000s corresponded to reduced rates of burning. However, relaxed enforcement of these policies in 2019 has seemingly begun to reverse this trend: approximately 4,253–10,343 km2 of forest has been impacted by fire, leading to some of the most severe potential impacts on biodiversity since 2009. These results highlight the critical role of policy enforcement in the preservation of biodiversity in the Amazon. Remote-sensing estimates of fires and the estimated geographic ranges of thousands of plant and vertebrate species in the Amazon Basin reveal that fires have impacted the ranges of 77–85% of threatened species over the past two decades.

Journal ArticleDOI
TL;DR: In this paper, the authors present a framework to quantify broken detailed balance by measuring entropy production in macroscopic systems and apply their method to the human brain, an organ whose immense metabolic consumption drives a diverse range of cognitive functions.
Abstract: Living systems break detailed balance at small scales, consuming energy and producing entropy in the environment to perform molecular and cellular functions. However, it remains unclear how broken detailed balance manifests at macroscopic scales and how such dynamics support higher-order biological functions. Here we present a framework to quantify broken detailed balance by measuring entropy production in macroscopic systems. We apply our method to the human brain, an organ whose immense metabolic consumption drives a diverse range of cognitive functions. Using whole-brain imaging data, we demonstrate that the brain nearly obeys detailed balance when at rest, but strongly breaks detailed balance when performing physically and cognitively demanding tasks. Using a dynamic Ising model, we show that these large-scale violations of detailed balance can emerge from fine-scale asymmetries in the interactions between elements, a known feature of neural systems. Together, these results suggest that violations of detailed balance are vital for cognition and provide a general tool for quantifying entropy production in macroscopic systems.
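One standard way to quantify broken detailed balance of the kind described above is to compare the frequencies of forward and reverse transitions in a discretized state sequence; the sketch below (an assumed, simplified estimator, not necessarily the paper's exact method) shows a near-zero estimate for a reversible chain and a clearly positive one for a driven cycle.

```python
# Illustrative sketch (an assumed, simplified estimator, not necessarily the
# paper's exact method): quantifying broken detailed balance from a discrete
# state sequence by comparing forward and reverse transition frequencies.
import numpy as np
from collections import Counter

def entropy_production_rate(states):
    """Sum over transitions of f_ij * ln(f_ij / f_ji), in nats per step."""
    counts = Counter(zip(states[:-1], states[1:]))
    total = sum(counts.values())
    ep = 0.0
    for (i, j), c in counts.items():
        rev = counts.get((j, i), 0)
        if i != j and rev > 0:               # skip one-way transitions in this toy
            ep += (c / total) * np.log(c / rev)
    return ep

def simulate_chain(P, n, rng):
    """Simulate a discrete Markov chain with transition matrix P."""
    s, out = 0, [0]
    for _ in range(n - 1):
        s = rng.choice(len(P), p=P[s])
        out.append(s)
    return out

rng = np.random.default_rng(0)
# Symmetric (reversible) chain vs a chain driven around the cycle 0 -> 1 -> 2 -> 0
P_rev    = np.array([[0.4, 0.3, 0.3], [0.3, 0.4, 0.3], [0.3, 0.3, 0.4]])
P_driven = np.array([[0.1, 0.7, 0.2], [0.2, 0.1, 0.7], [0.7, 0.2, 0.1]])
print(round(entropy_production_rate(simulate_chain(P_rev, 20_000, rng)), 3))     # ~0
print(round(entropy_production_rate(simulate_chain(P_driven, 20_000, rng)), 3))  # > 0
```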

Journal ArticleDOI
TL;DR: The Siberian Traps large igneous province (STLIP) is commonly invoked as the primary driver of global environmental changes that triggered the end-Permian mass extinction (EPME) as discussed by the authors.
Abstract: The Siberian Traps large igneous province (STLIP) is commonly invoked as the primary driver of global environmental changes that triggered the end-Permian mass extinction (EPME). Here, we explore t...

Journal ArticleDOI
TL;DR: The principles, syntheses, and recent advances in the controlled formation of functional vesicle structures produced by polymerization-induced self-assembly (PISA) are discussed, with emphasis on the exciting opportunities that can be explored in this emerging field.

Journal ArticleDOI
TL;DR: Physical bioenergetics, which resides at the interface of nonequilibrium physics, energy metabolism, and cell biology, seeks to understand how much energy cells are using, how they partition this energy between different cellular processes, and the associated energetic constraints as discussed by the authors.
Abstract: Cells are the basic units of all living matter which harness the flow of energy to drive the processes of life. While the biochemical networks involved in energy transduction are well-characterized, the energetic costs and constraints for specific cellular processes remain largely unknown. In particular, what are the energy budgets of cells? What are the constraints and limits energy flows impose on cellular processes? Do cells operate near these limits, and if so how do energetic constraints impact cellular functions? Physics has provided many tools to study nonequilibrium systems and to define physical limits, but applying these tools to cell biology remains a challenge. Physical bioenergetics, which resides at the interface of nonequilibrium physics, energy metabolism, and cell biology, seeks to understand how much energy cells are using, how they partition this energy between different cellular processes, and the associated energetic constraints. Here we review recent advances and discuss open questions and challenges in physical bioenergetics.