
Showing papers in "The British Journal for the Philosophy of Science in 2022"


Journal ArticleDOI
TL;DR: The authors argue that it is not the complexity or black-box nature of a model that limits how much understanding the model provides, but a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
Abstract: Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this article, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.

33 citations


Journal ArticleDOI
TL;DR: In this paper, a general theory of typicality is proposed and a formalism for typicality explanations is provided, showing how typicality facts explain a variety of phenomena, from everyday phenomena to the statistical mechanical behaviour of gases.
Abstract: Typicality is routinely invoked in everyday contexts: bobcats are typically short-tailed; people are typically less than seven feet tall. Typicality is invoked in scientific contexts as well: typical gases expand; typical quantum systems exhibit probabilistic behaviour. And typicality facts like these support many explanations, both quotidian and scientific. But what is it for something to be typical? And how do typicality facts explain? In this paper, I propose a general theory of typicality. I analyse the notion of a typical property. I provide a formalism for typicality explanations, and I give an account of why typicality explanations are explanatory. Along the way, I show how typicality facts explain a variety of phenomena, from everyday phenomena to the statistical mechanical behaviour of gases. Finally, I argue that typicality is not the same thing as probability.

13 citations



Journal ArticleDOI
TL;DR: In this article, it is argued that with a suitable choice of ontology, theories like classical electrodynamics and non-relativistic quantum mechanics are in fact time reversal invariant in the sense of Albert and Callender, in agreement with the standard view.
Abstract: Albert and Callender have challenged the received view that theories like classical electrodynamics and non-relativistic quantum mechanics are time reversal invariant. They claim that time reversal should correspond to the mere reversal of the temporal order of the instantaneous states, without any accompanying change of the instantaneous state as in the standard view. As such, Albert and Callender claim, these theories are not time reversal invariant. The view of Albert and Callender has been much criticized, with many philosophers arguing that time reversal may correspond to more than the reversal of the temporal order. In this paper, we will not so much engage with that aspect of the debate, but rather deflate the disagreement by exploiting the ontological underdetermination. Namely, it will be argued that with a suitable choice of ontology, these theories are in fact time reversal invariant in the sense of Albert and Callender, in agreement with the standard view.

8 citations


Journal ArticleDOI
TL;DR: In this paper, the concept of chance in evolution is discussed in the light of contemporary work in evo-devo, and a causal understanding of variational probabilities is provided under which development acquires a creative, rather than a constraining, role in evolution.
Abstract: While the notion of chance has been central in discussions over the probabilistic nature of natural selection and genetic drift, its role in the production of variants on which populational sampling takes place has received much less philosophical attention. This article discusses the concept of chance in evolution in the light of contemporary work in evo-devo. We distinguish different levels at which randomness and chance can be defined in this context, and argue that recent research on variability and evolvability demands a causal understanding of variational probabilities under which development acquires a creative, rather than a constraining role in evolution. We then provide a propensity interpretation of variational probabilities that solves a conceptual confusion between causal properties, variational probabilities and extant variation present in the literature, and explore some metaphysical consequences that follow from our interpretation, specifically with regards to the nature of developmental types.

8 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that if a niche is understood as the feature–factor relationship, then there are three fundamental ways in which organisms can engage in niche construction: constitutive, relational, and external niche construction.
Abstract: Niche construction theory concerns how organisms can change selection pressures by altering the feature–factor relationship between themselves and their environment. These alterations are standardly understood to be brought about through two kinds of organism–environment interaction: perturbative and relocational niche construction. We argue that a reconceptualization is needed on the grounds that if a niche is understood as the feature–factor relationship, then there are three fundamental ways in which organisms can engage in niche construction: constitutive, relational, and external niche construction. We further motivate our reconceptualization by showing some examples of organismic activities that fall outside of the current categorization of niche construction, but nonetheless should be included. We end by discussing two objections to niche construction and show how our reconceptualization helps to undercut these objections.

7 citations


Journal ArticleDOI
TL;DR: In this paper, the authors identify an important fallacy made by those defending instrumentalism about the free energy principle (FEP): the fallacy of inferring the truth of instrumentalism from the claim that the properties of FEP models do not literally map onto real-world target systems.
Abstract: Disagreement about how best to think of the relation between theories and the realities they represent has a longstanding and venerable history. We take up this debate in relation to the free energy principle (FEP), a contemporary framework in computational neuroscience, theoretical biology, and the philosophy of cognitive science. The FEP is very ambitious, extending from the brain sciences to the biology of self-organisation. In this context, some find apparent discrepancies between the map (the FEP) and the territory (target systems) a compelling reason to defend instrumentalism about the FEP. We take this to be misguided. We identify an important fallacy made by those defending instrumentalism about the FEP. We call it the literalist fallacy: this is the fallacy of inferring the truth of instrumentalism based on the claim that the properties of FEP models do not literally map onto real-world target systems. We conclude that scientific realism about the FEP is a live and tenable option.

7 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that the past hypothesis is neither necessary nor sufficient to explain the psychological arrow on the basis of current physics, and propose two necessary conditions on the workings of the brain that any account of the psychological arrow of time must satisfy.
Abstract: Can the second law of thermodynamics explain our mental experience of the direction of time? According to an influential approach, the past hypothesis of universal low entropy (required to explain the temporal directionality of the second law in terms of fundamental physics, which is time-symmetric) also explains how the psychological arrow comes about. We argue that although this approach has many attractive features, it cannot explain the psychological arrow after all. In particular, we show that the past hypothesis is neither necessary nor sufficient to explain the psychological arrow on the basis of current physics. We propose two necessary conditions on the workings of the brain that any account of the psychological arrow of time must satisfy. And we propose a new reductive physical account of the psychological arrow of time compatible with time-symmetric physics, according to which these two conditions are also sufficient. Our proposal has some radical implications, for example, that the psychological arrow of time is fundamental, whereas the temporal direction of entropy increase in the second law of thermodynamics and the past hypothesis is derived from it, rather than the other way around.

7 citations


Journal ArticleDOI
TL;DR: In this article, a conceptual framework connecting biological heredity and organization is developed, in which heredity is understood as the cross-generation conservation of functional elements, defined as constraints.
Abstract: We develop a conceptual framework that connects biological heredity and organization. We refer to heredity as the cross-generation conservation of functional elements, defined as constraint...

7 citations


Journal ArticleDOI
TL;DR: In this paper, the authors draw parallels between brain death and other pathological conditions and argue that whenever one regards the absence or the artificial replacement of a certain function in these pathological conditions as compatible with organismic unity, then one equally ought to tolerate that function's loss or replacement in brain death.
Abstract: Fifty years have passed since brain death was first proposed as a criterion of death. Its advocates believe that with the destruction of the brain, integrated functioning ceases irreversibly, somatic unity dissolves, and the organism turns into a corpse. In this article, I put forward two objections against this assertion. First, I draw parallels between brain death and other pathological conditions and argue that whenever one regards the absence or the artificial replacement of a certain function in these pathological conditions as compatible with organismic unity, then one equally ought to tolerate that function’s loss or replacement in brain death. Second, I show that the neurological criterion faces an additional problem that is only coming to light as life-supporting technology improves: the growing sophistication of the latter gives rise to a dangerous decoupling of the actual performance of a vital function from the retention of neurological control over it. Half a century after its introduction, the neurological criterion is facing the same fate as its cardiopulmonary predecessor.

6 citations


Journal ArticleDOI
TL;DR: In this paper, the author asks whether the mass of the gravitational field plays three traditional roles of mass: the role in conservation of mass, the inertial role, and the role as source for gravitation, comparing general relativity to Newtonian gravity and electromagnetism by way of gravitoelectromagnetism.
Abstract: By mass-energy equivalence, the gravitational field has a relativistic mass density proportional to its energy density. I seek to better understand this mass of the gravitational field by asking whether it plays three traditional roles of mass: the role in conservation of mass, the inertial role, and the role as source for gravitation. The difficult case of general relativity is compared to the more straightforward cases of Newtonian gravity and electromagnetism by way of gravitoelectromagnetism, an intermediate theory of gravity that resembles electromagnetism.
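The mass-energy equivalence relation the abstract starts from can be stated compactly. The lines below are standard textbook relations given only for orientation, using the electromagnetic field as the comparison case the abstract mentions; they are not the paper's own derivation.

```latex
% Standard relations (for orientation only; not the paper's derivation):
% a field with energy density u carries an equivalent relativistic mass density.
\rho_{\text{field}} = \frac{u_{\text{field}}}{c^{2}},
\qquad
u_{\text{EM}} = \frac{\varepsilon_{0}}{2}\,\lvert\mathbf{E}\rvert^{2}
              + \frac{1}{2\mu_{0}}\,\lvert\mathbf{B}\rvert^{2},
\qquad
\rho_{\text{EM}} = \frac{u_{\text{EM}}}{c^{2}}.
```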

Journal ArticleDOI
TL;DR: In this article, the authors discuss the concept of HARKing from a philosophical standpoint and then undertake a critical review of Kerr's ([1998]) twelve potential costs of HARKing, concluding that these potential costs are either misconceived, misattributed to HARKing, or lacking evidence, or that they do not take into account pre- and post-publication peer review and the public availability of research materials and data.
Abstract: Kerr ([1998]) coined the term ‘HARKing’ to refer to the practice of ‘hypothesizing after the results are known’. This questionable research practice has received increased attention in recent years because it is thought to have contributed to low replication rates in science. The present article discusses the concept of HARKing from a philosophical standpoint and then undertakes a critical review of Kerr’s ([1998]) twelve potential costs of HARKing. It is argued that these potential costs are either misconceived, misattributed to HARKing, or lacking evidence, or that they do not take into account pre- and post-publication peer review and the public availability of research materials and data. It is concluded that it is premature to conclude that HARKing has led to low replication rates.

Journal ArticleDOI
TL;DR: In this article , the authors defend three claims about concrete or physical models: (i) these models remain important in science and engineering, (ii) they are often essentially idealized, in a sense to be made precise, and (iii) despite these essential idealizations, some of these models may be reliably used for the purpose of causal explanation.
Abstract: This paper defends three claims about concrete or physical models: (i) these models remain important in science and engineering, (ii) they are often essentially idealized, in a sense to be made precise, and (iii) despite these essential idealizations, some of these models may be reliably used for the purpose of causal explanation. This discussion of concrete models is pursued using a detailed case study of some recent models of landslide generated impulse waves. Practitioners show a clear awareness of the idealized character of these models, and yet address these concerns through a number of methods. This paper focuses on experimental arguments that show how certain failures to accurately represent feature X are consistent with accurately representing some causes of feature Y, even when X is causally relevant to Y. To analyse these arguments, the claims generated by a model must be carefully examined and grouped into types. Only some of these types can be endorsed by practitioners, but I argue that these endorsed claims are sufficient for limited forms of causal explanation.

Journal ArticleDOI
TL;DR: Marcus Arvan, Liam Kofi Bright, and Remco Heesen present 'Jury Theorems for Peer Review' in The British Journal for the Philosophy of Science.
Abstract: Not available from the listing. 'Jury Theorems for Peer Review' by Marcus Arvan, Liam Kofi Bright, and Remco Heesen, The British Journal for the Philosophy of Science, accepted 20 January 2022, https://doi.org/10.1086/719117.

Journal ArticleDOI
TL;DR: In this paper, a multi-dimensional account of explanatory depth is developed and applied towards a comparative analysis of inflationary and bouncing paradigms in primordial cosmology. In the context of trans-Planckian modes, the recently formulated trans-Planckian censorship conjecture is argued to lead to a trade-off for inflationary models between dynamical fine-tuning and autonomy.
Abstract: We develop and apply a multi-dimensional account of explanatory depth towards a comparative analysis of inflationary and bouncing paradigms in primordial cosmology. Our analysis builds on earlier work due to Azhar and Loeb (2021) that establishes initial conditions fine-tuning as a dimension of explanatory depth relevant to debates in contemporary cosmology. We propose dynamical fine-tuning and autonomy as two further dimensions of depth in the context of problems with instability and trans-Planckian modes that afflict bouncing and inflationary approaches respectively. In the context of the latter issue, we argue that the recently formulated trans-Planckian censorship conjecture leads to a trade-off for inflationary models between dynamical fine-tuning and autonomy. We conclude with the suggestion that explanatory preference with regard to the different dimensions of depth is best understood in terms of differing attitudes towards heuristics for future model building.

Journal ArticleDOI
TL;DR: The authors argue that the very features of experimental design that enable resting state fMRI to support exploratory science also generate a novel confound, and explore the consequences of this "mixture view" for attempts to theorize about the cognitive or psychological functions of resting state networks, as well as the value of exploratory experiments.
Abstract: In this article, we examine the use of resting state fMRI data for psychological inferences. We argue that resting state studies hold the paired promises of discovering novel functional brain networks, and of avoiding some of the limitations of task-based fMRI. However, we argue that the very features of experimental design that enable resting state fMRI to support exploratory science also generate a novel confound. We argue that seemingly key features of resting state functional connectivity networks may be artefacts resulting from sampling a ‘mixture distribution’ of diverse brain networks active at different times during the scan. We explore the consequences of this ‘mixture view’ for attempts to theorize about the cognitive or psychological functions of resting state networks, as well as the value of exploratory experiments.
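As a rough illustration of the 'mixture distribution' worry, the toy simulation below (my own construction, not the authors' analysis; the three-region setup, the two alternating states, and the noise levels are all assumptions) shows how whole-scan correlations can suggest a single network that exists at no individual moment of the scan.

```python
"""Toy illustration of a 'mixture' artefact in whole-scan functional connectivity."""
import numpy as np

rng = np.random.default_rng(0)
T = 2000  # time points per state

# State 1: regions A and B share a driver; C is independent noise.
s1 = rng.normal(size=T)
A1, B1, C1 = s1 + rng.normal(size=T), s1 + rng.normal(size=T), rng.normal(size=T)

# State 2: regions B and C share a driver; A is independent noise.
s2 = rng.normal(size=T)
A2, B2, C2 = rng.normal(size=T), s2 + rng.normal(size=T), s2 + rng.normal(size=T)

# Whole-scan time series: the two states concatenated (a 'mixture').
A = np.concatenate([A1, A2])
B = np.concatenate([B1, B2])
C = np.concatenate([C1, C2])

corr = np.corrcoef(np.vstack([A, B, C]))
print("whole-scan correlations:")
print("A-B:", round(corr[0, 1], 2), " B-C:", round(corr[1, 2], 2), " A-C:", round(corr[0, 2], 2))
# A-B and B-C both come out positive, suggesting one A-B-C network,
# even though no single state during the scan contains that network.
```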

Journal ArticleDOI
TL;DR: In this article, the authors consider how complex logical operations might self-assemble in a signalling-game context via composition of simpler underlying dispositions, and provide another facet of the evolutionary story of how sufficiently rich cognitive or linguistic capacities may arise from simpler precursors.
Abstract: I consider how complex logical operations might self-assemble in a signalling-game context via composition of simpler underlying dispositions. On the one hand, agents may take advantage of pre-evolved dispositions; on the other hand, they may co-evolve dispositions as they simultaneously learn to combine them to display more complex behaviour. In either case, the evolution of complex logical operations can be more efficient than evolving such capacities from scratch. Showing how complex phenomena like these might evolve provides an additional path to the possibility of evolving more or less rich notions of compositionality. This helps provide another facet of the evolutionary story of how sufficiently rich, human-level cognitive or linguistic capacities may arise from simpler precursors.

Journal ArticleDOI
TL;DR: In this article, the authors characterize the cartographic approach and argue that one of its key steps, registration, should be carried out in a way that is sensitive to the target of investigation.
Abstract: Neuroscience has become increasingly reliant on multi-subject research in addition to studies of unusual single patients. This research has brought with it a challenge: how are data from different human brains to be combined? The dominant strategy for aggregating data across brains is what I call the ‘cartographic approach’, which involves mapping data from individuals to a spatial template. Here I characterize the cartographic approach and argue that one of its key steps, registration, should be carried out in a way that is sensitive to the target of investigation. Because registration aims to align homologous brain locations, but not all homologous locations can be simultaneously aligned, a multiplicity of registration methods is required to meet the needs of researchers investigating different phenomena. I call this position ‘registration pluralism’. Registration pluralism has potential implications for neuroscientific practice, three of which I discuss here. This work shows the importance of reflecting more carefully on data aggregation methods, especially in light of the substantial individual differences that exist between brains.

Journal ArticleDOI
Wei Zhu
TL;DR: In this article, it is argued that the cosmological constant problem is best understood as a bet about future physics made on the basis of particular interpretational choices in general relativity and quantum field theory.
Abstract: The ‘cosmological constant problem’ (CCP) has historically been understood as describing a conflict between cosmological observations in the framework of general relativity (GR) and theoretical predictions from quantum field theory (QFT), which a future theory of quantum gravity ought to resolve. I argue that this view of the CCP is best understood in terms of a bet about future physics made on the basis of particular interpretational choices in GR and QFT, respectively. Crucially, each of these choices must be taken as itself grounded in the successes of the respective theory for this bet to be justified.

Journal ArticleDOI
TL;DR: The authors note that philosophers are approaching a consensus that biological individuality, including evolutionary individuality, comes in degrees, and argue that graded evolutionary individuality presents a puzzle.
Abstract: Philosophers are approaching a consensus that biological individuality, including evolutionary individuality, comes in degrees. Graded evolutionary individuality presents a puzzle when juxt...

Journal ArticleDOI
TL;DR: In this article, the authors provide a Bayesian analysis of the problem of determining when measurements from ordinal scales can be used to confirm hypotheses about relative group averages, and conclude that the prohibition cannot be upheld, even in a qualified sense.
Abstract: There is a widely held view on measurement inferences, that goes back to Stevens’s ([1946]) theory of measurement scales and ‘permissible statistics’. This view defends the following prohibition: you should not make inferences from averages taken with ordinal scales (versus quantitative scales: interval or ratio). This prohibition is general—it applies to all ordinal scales—and it is sometimes endorsed without qualification. Adhering to it dramatically limits the research that the social and biomedical sciences can conduct. I provide a Bayesian analysis of this inferential problem, determining when measurements from ordinal scales can be used to confirm hypotheses about relative group averages. The prohibition, I conclude, cannot be upheld, even in a qualified sense. The beliefs needed to make average comparisons are less demanding than those appropriate for quantitative scales. I illustrate with the alleged paradigm ordinal scale, Mohs’ scale of mineral hardness, arguing that the literature has mischaracterized it.
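To illustrate the kind of inference at issue, the sketch below simulates ordinal measurement of a latent group difference and checks whether averages of the ordinal scores rank the groups correctly. The latent distributions, cut-points, and sample sizes are arbitrary assumptions of mine; this is a toy check, not the paper's Bayesian analysis.

```python
"""Toy check: do averages of ordinal scores track a latent group difference?"""
import random

random.seed(1)
CUTS = [-1.0, 0.0, 1.0, 2.0]   # monotone cut-points mapping latent values to scores 1..5

def ordinal_score(latent):
    """Map a latent quantity to an ordinal score in 1..5 via the cut-points."""
    return 1 + sum(latent >= c for c in CUTS)

def group_scores(mu, n=500):
    """Sample n ordinal scores for a group with latent mean mu."""
    return [ordinal_score(random.gauss(mu, 1.0)) for _ in range(n)]

runs_agreeing = 0
RUNS = 1000
for _ in range(RUNS):
    a = group_scores(mu=0.0)
    b = group_scores(mu=0.3)   # group B is higher on the latent attribute
    if sum(b) / len(b) > sum(a) / len(a):
        runs_agreeing += 1

print(f"ordinal means ranked the groups correctly in {runs_agreeing}/{RUNS} runs")
```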

Journal ArticleDOI
TL;DR: In this paper, an analysis of the cascade concept in science and the causal structure it refers to is presented, examining the main features of this causal structure, the analogies it is associated with, and the strategies used to study it.
Abstract: According to mainstream philosophical views, causal explanation in biology and neuroscience is mechanistic. As the term “mechanism” gets regular use in these fields, it is unsurprising that philosophers consider it important to scientific explanation. What is surprising is that they consider it the only causal term of importance. This paper provides an analysis of a new causal concept: it examines the cascade concept in science and the causal structure it refers to. I argue that this concept is importantly different from the notion of mechanism and that this difference matters for our understanding of causation and explanation in science. I examine the main features of this causal structure, the analogies it is associated with, and the strategies used to study it. While scientific work supports distinguishing the cascade and mechanism concepts, this analysis is not merely descriptive. Instead, it provides a theoretical framework for how these concepts should be understood. This framework matters for our assessment of the causal structure of the world, how we study this structure, use it to produce particular outcomes, and communicate about it to others. Before proceeding with this analysis, two clarifications are in order. First, I do not suggest that scientists always use these causal terms in the distinct ways indicated in this analysis, but that they often do and should use them in this way. This reveals normative features of this work and an important way that philosophy can contribute to science, namely, by making suggestions for how these concepts should be understood and used. Second, my analysis of these concepts articulates clear ways in which they differ, but leaves space for some structures in science to be borderline. The presence of such cases should not prevent us from articulating useful categories that distinguish causal structures in the majority of cases, even if the distinction can (in rare cases) be blurred.

Journal ArticleDOI
TL;DR: Sheldon Goldstein, Ward Struyve, and Roderich Tumulka propose a Bohmian approach to the problems of cosmological quantum fluctuations.
Abstract: Not available from the listing. 'The Bohmian Approach to the Problems of Cosmological Quantum Fluctuations' by Sheldon Goldstein, Ward Struyve, and Roderich Tumulka, The British Journal for the Philosophy of Science, accepted 19 June 2022, https://doi.org/10.1086/721531.

Journal ArticleDOI
TL;DR: In this article, a 'Typical Principle' linking rational belief to facts about what is typical is proposed and defended, and it is shown to avoid several problems that other, seemingly similar principles face.
Abstract: If a proposition is typically true, given your evidence, then you should believe that proposition; or so I argue here. In particular, in this paper, I propose and defend a principle of rationality—call it the ‘Typical Principle’—which links rational belief to facts about what is typical. As I show, this principle avoids several problems that other, seemingly similar principles face. And as I show, in many cases, this principle implies the verdicts of the Principal Principle: so ultimately, the Typical Principle may be the more fundamental of the two.

Journal ArticleDOI
TL;DR: The authors argue that if you want to be a verisimilitude-valuing accuracy-firster, you must be able to think of the value of verisimilitude as somehow built into the value of gradational-accuracy, and that this can be done even with a proper accuracy measure.
Abstract: It seems like we care about at least two features of our credence function: gradational-accuracy (high credences in truths, low credences in falsehoods) and verisimilitude (investing higher credence in worlds that are more similar to the actual world). Accuracy-first epistemology requires that we care about one feature of our credence function: gradational-accuracy. So if you want to be a verisimilitude-valuing accuracy-firster, you must be able to think of the value of verisimilitude as somehow built into the value of gradational-accuracy. Can this be done? In a recent article, Oddie has argued that it cannot, at least if we want the accuracy measure to be proper. I argue that it can.

Journal ArticleDOI
TL;DR: This paper argues that a number of methods recently developed for decisionmaking under deep uncertainty have a good claim to be understood as general-purpose decisionmaking heuristics suitable for a broad range of institutional decision problems.
Abstract: Recent work in judgment and decisionmaking has stressed that institutions, like individuals, often rely on decisionmaking heuristics. But most of the institutional decisionmaking heuristics studied to date are highly firm- and industry-specific. This contrasts to the individual case, in which many heuristics are general-purpose rules suitable for a wide range of decision problems. Are there also general-purpose heuristics for institutional decisionmaking? In this paper, I argue that a number of methods recently developed for decisionmaking under deep uncertainty have a good claim to be understood as general-purpose decisionmaking heuristics suitable for a broad range of institutional decision problems.
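One classic example of a decision rule used under deep uncertainty is minimax regret. The sketch below applies it to an invented payoff table; the options, scenarios, and payoffs are made up for illustration and are not drawn from the paper.

```python
"""Toy minimax-regret rule over a made-up payoff table (options x scenarios)."""
# payoff[option][scenario]
payoffs = {
    "build_seawall":   {"low_rise": -4, "mid_rise": -5, "high_rise": -6},
    "managed_retreat": {"low_rise": -7, "mid_rise": -7, "high_rise": -7},
    "do_nothing":      {"low_rise":  0, "mid_rise": -6, "high_rise": -20},
}
scenarios = ["low_rise", "mid_rise", "high_rise"]

# Regret of an option in a scenario: shortfall from the best option in that scenario.
best_in_scenario = {s: max(p[s] for p in payoffs.values()) for s in scenarios}
max_regret = {
    opt: max(best_in_scenario[s] - payoffs[opt][s] for s in scenarios)
    for opt in payoffs
}

choice = min(max_regret, key=max_regret.get)
print("maximum regret by option:", max_regret)
print("minimax-regret choice:", choice)
```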

Journal ArticleDOI
TL;DR: The authors develop a simple model, an 'attention game', in which agents have to learn which feature in their environment is the signal, and demonstrate that simple reinforcement learning agents can still learn to coordinate in contexts in which they do not already know what the signal is and the other features in the agents' environment are uncorrelated with the signal.
Abstract: Signalling games are useful for understanding how language emerges. In the standard models the dynamics in some sense already knows what the signals are, even if they do not yet have meaning. In this paper we relax this assumption, and develop a simple model we call an ‘attention game’ in which agents have to learn which feature in their environment is the signal. We demonstrate that simple reinforcement learning agents can still learn to coordinate in contexts in which (i) the agents do not already know what the signal is and (ii) the other features in the agents’ environment are uncorrelated with the signal. Furthermore, we show that, in cases in which other features are correlated with the signal, there is a surprising trade-off between learning what the signal is, and success in action. We show that the mutual information between a signal and a feature plays a key role in governing the accuracy and attention of the agent.
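A minimal version of such an attention game can be sketched as follows. The code below is my own illustrative reconstruction, not the authors' model: it fixes the sender to transmit the true state, uses simple Roth-Erev reinforcement, and assumes four candidate features, only one of which carries the signal.

```python
"""Minimal attention-game sketch with Roth-Erev reinforcement (illustrative assumptions)."""
import random

K = 4            # number of observable features; feature 0 is the true signal
TRIALS = 20000

# Receiver's attention urn: one weight per feature.
attention = [1.0] * K
# Receiver's act urns: for each feature and observed value (0/1), weights over the two acts.
acts = [[[1.0, 1.0] for _ in range(2)] for _ in range(K)]

def draw(weights):
    """Sample an index with probability proportional to its weight."""
    r = random.uniform(0, sum(weights))
    total = 0.0
    for i, w in enumerate(weights):
        total += w
        if r <= total:
            return i
    return len(weights) - 1

successes = 0
for t in range(TRIALS):
    state = random.randint(0, 1)
    # Feature 0 carries the signal; the other features are uncorrelated noise.
    features = [state] + [random.randint(0, 1) for _ in range(K - 1)]

    f = draw(attention)        # which feature the receiver attends to
    v = features[f]            # the value observed there
    a = draw(acts[f][v])       # the act taken

    if a == state:             # success: reinforce both the attention and the act choice
        attention[f] += 1.0
        acts[f][v][a] += 1.0
        successes += 1

print("success rate:", successes / TRIALS)
print("attention weights:", [round(w, 1) for w in attention])
```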

Journal ArticleDOI
TL;DR: In this paper, the authors defend the use of quasi-experiments for causal estimation in economics against the widespread objection that quasi-experimental estimates lack external validity, arguing that replication of estimates can yield defeasible evidence for external validity.
Abstract: This article defends the use of quasi-experiments for causal estimation in economics against the widespread objection that quasi-experimental estimates lack external validity. The defence is that quasi-experimental replication of estimates can yield defeasible evidence for external validity. The article then develops a different objection. The stable unit treatment value assumption (SUTVA), on which quasi-experiments rely, is argued to be implausible due to the influence of social interaction effects on economic outcomes. A more plausible stable marginal unit treatment value assumption (SMUTVA) is proposed, but it is demonstrated to severely limit the usefulness of quasi-experiments for economic policy evaluation.

Journal ArticleDOI
TL;DR: It is argued that the idea of falsification is central to the methodology of machine learning, and that taking both aspects together gives rise to a falsificationist account of artificial neural networks.
Abstract: Machine learning operates at the intersection of statistics and computer science. This raises the question as to its underlying methodology. While much emphasis has been put on the close link between the process of learning from data and induction, the falsificationist component of machine learning has received minor attention. In this paper, we argue that the idea of falsification is central to the methodology of machine learning. It is commonly thought that machine learning algorithms infer general prediction rules from past observations. This is akin to a statistical procedure by which estimates are obtained from a sample of data. But machine learning algorithms can also be described as choosing one prediction rule from an entire class of functions. In particular, the algorithm that determines the weights of an artificial neural network operates by empirical risk minimization and rejects prediction rules that lack empirical adequacy. It also exhibits a behavior of implicit regularization that pushes hypothesis choice toward simpler prediction rules. We argue that taking both aspects together gives rise to a falsificationist account of artificial neural networks.
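The empirical-risk-minimization reading described here can be illustrated with a toy hypothesis class of threshold classifiers. The sketch below is an assumption-laden example of mine, not the paper's procedure: rules that fail too often on the observed data are 'rejected', and the empirical risk minimizer among the survivors is retained.

```python
"""Falsificationist reading of ERM on a toy class of threshold classifiers."""
import random

random.seed(0)

def sample(n=200, true_t=0.6, noise=0.1):
    """Toy data: x in [0,1], label 1 iff x >= true_t, with label noise."""
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x >= true_t else 0
        if random.random() < noise:
            y = 1 - y
        data.append((x, y))
    return data

# Hypothesis class: threshold classifiers h_t(x) = [x >= t].
thresholds = [i / 20 for i in range(21)]

def empirical_risk(t, data):
    errors = sum(1 for x, y in data if (1 if x >= t else 0) != y)
    return errors / len(data)

data = sample()
risks = {t: empirical_risk(t, data) for t in thresholds}

# 'Falsification': reject every rule whose empirical risk exceeds a tolerance;
# among the survivors, keep the empirical risk minimizer.
tolerance = 0.15
rejected = [t for t, r in risks.items() if r > tolerance]
best = min(risks, key=risks.get)

print(f"rejected {len(rejected)} of {len(thresholds)} rules as empirically inadequate")
print(f"retained threshold {best:.2f} with empirical risk {risks[best]:.3f}")
```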

Journal ArticleDOI
TL;DR: The authors argue that the conceptual challenges in defining 'mass extinction', uncertainties in past and present diversity assessments, and data incommensurabilities undermine a straightforward answer to the question of whether we are in, or entering, a sixth mass extinction today, and that an excessive focus on the mass extinction framing can be misleading for present conservation efforts and may lead us to miss out on the many other valuable insights that Earth's deep time can offer in guiding our future.
Abstract: In both scientific and popular circles it is often said that we are in the midst of a sixth mass extinction. Although the urgency of our present environmental crises is not in doubt, such claims of a present mass extinction are highly controversial scientifically. Our aims are, first, to get to the bottom of this scientific debate by shedding philosophical light on the many conceptual and methodological challenges involved in answering this scientific question, and, second, to offer new philosophical perspectives on what the value of asking this question has been — and whether that value persists today. We show that the conceptual challenges in defining ‘mass extinction’, uncertainties in past and present diversity assessments, and data incommensurabilities undermine a straightforward answer to the question of whether we are in, or entering, a sixth mass extinction today. More broadly we argue that an excessive focus on the mass extinction framing can be misleading for present conservation efforts and may lead us to miss out on the many other valuable insights that Earth’s deep time can offer in guiding our future.