
Showing papers in "Philosophy of Science in 2022"


Journal ArticleDOI
TL;DR: In The World in the Wave Function: A Metaphysics for Quantum Physics, Alyssa Ney tries to make the blurry picture of reality offered by quantum mechanics more precise, even if this picture will turn out to be heterodox and unfamiliar.
Abstract: There is not much of a consensus on almost anything about quantum mechanics. I take it, however, that the minimum consensus is that although quantum mechanics is empirically successful, quantum mechanics is hard to understand. Quantum mechanics, in the way it is presented in most textbooks, indeed does not provide a clear picture of reality that would make it a theory to be understood. In her new book The World in the Wave Function: A Metaphysics for Quantum Physics, Alyssa Ney tries to make this blurry picture of reality more precise, even if this picture will turn out to be heterodox and unfamiliar, defending wave function realism and articulating an optimism about its implications.

15 citations


Journal ArticleDOI
TL;DR: A review of Peter Godfrey-Smith's Metazoa: Animal Minds and the Birth of Consciousness, a book on animal minds and the origins of consciousness.
Abstract: Review of Peter Godfrey-Smith’s Metazoa: Animal Minds and the Birth of Consciousness - Peter Godfrey-Smith, Metazoa: Animal Minds and the Birth of Consciousness. Glasgow: William Collins (2020), 288 pp., $24.99 (hardcover; also available in paperback, nook, and audiobook formats). - Volume 89 Issue 3

13 citations


Journal ArticleDOI
TL;DR: It is argued that the growing field of Explainable AI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions, and that this is nevertheless broadly the right approach to the problem.
Abstract: Abstract The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI (XAI) has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a shortcoming of the field of XAI, I argue that it is broadly the right approach to the problem.

12 citations


Journal ArticleDOI
TL;DR: In this paper, an agent-based model that captures two empirically supported hypotheses about how demographic diversity can improve group performance is presented, and the results of the simulations suggest that, even when social identities are not associated with distinctive task-related cognitive resources, demographic diversity can, in certain circumstances, benefit collective performance by counteracting two types of conformity in homogeneous groups.
Abstract: Abstract Previous simulation models have found positive effects of cognitive diversity on group performance, but have not explored effects of diversity in demographics (e.g., gender, ethnicity). In this paper, we present an agent-based model that captures two empirically supported hypotheses about how demographic diversity can improve group performance. The results of our simulations suggest that, even when social identities are not associated with distinctive task-related cognitive resources, demographic diversity can, in certain circumstances, benefit collective performance by counteracting two types of conformity that can arise in homogeneous groups: those relating to group-based trust and those connected to normative expectations toward in-groups.

12 citations


Journal ArticleDOI
TL;DR: It is proved that homogeneous partitions, the core notion of Salmon’s model, are equivalent to minimal sufficient statistics, an important notion from statistical inference, which establishes a link to deep neural networks via the so-called Information Bottleneck method.
Abstract: Abstract This paper argues that a notion of statistical explanation, based on Salmon’s statistical relevance model, can help us better understand deep neural networks. It is proved that homogeneous partitions, the core notion of Salmon’s model, are equivalent to minimal sufficient statistics, an important notion from statistical inference. This establishes a link to deep neural networks via the so-called Information Bottleneck method, an information-theoretic framework, according to which deep neural networks implicitly solve an optimization problem that generalizes minimal sufficient statistics. The resulting notion of statistical explanation is general, mathematical, and subcausal.
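As a gloss for readers unfamiliar with the formal notions the abstract invokes, the following are standard textbook formulations (not quoted from the paper) of sufficiency, minimal sufficiency, and the Information Bottleneck objective in information-theoretic terms; the Bottleneck relaxes minimal sufficiency into a compression-relevance trade-off governed by a parameter β.

$$
\begin{aligned}
&\text{Sufficiency:} && I(T;Y) = I(X;Y), \qquad T = f(X),\\
&\text{Minimal sufficiency:} && T^{*} \in \arg\min_{T\ \text{sufficient}} I(X;T),\\
&\text{Information Bottleneck:} && \min_{p(t\mid x)} \bigl[\, I(X;T) - \beta\, I(T;Y) \,\bigr], \qquad \beta > 0.
\end{aligned}
$$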

10 citations


Journal ArticleDOI
TL;DR: In this paper, a distinction among causal relationships that has yet to receive attention in the philosophical literature, namely whether causal relationships are reversible or irreversible, is explored, and an analysis of this distinction is provided that has important implications for causal inference and modeling.
Abstract: Abstract This paper explores a distinction among causal relationships that has yet to receive attention in the philosophical literature, namely, whether causal relationships are reversible or irreversible. We provide an analysis of this distinction and show how it has important implications for causal inference and modeling. This work also clarifies how various familiar puzzles involving preemption and over-determination play out differently depending on whether the causation involved is reversible.

8 citations


Journal ArticleDOI
TL;DR: In this paper, the cosmological constant problem is argued to be a reductio ad absurdum, and it is argued that one should reject the assumption that general relativity can generically be treated as an EFT.
Abstract: Abstract The cosmological constant problem stems from treating quantum field theory and general relativity as an effective field theory (EFT). We argue that the problem is a reductio ad absurdum and that one should reject the assumption that general relativity can generically be treated as an EFT. This marks a failure of naturalness and provides an internal signal that EFT methods do not apply in all spacetime domains. We then take an external view, showing that the assumptions for using EFTs are violated in general relativistic domains where Λ is relevant. We highlight some ways forward that do not depend on naturalness.

8 citations


Journal ArticleDOI
TL;DR: In a recent paper, Pettigrew (2022) reports a generalization of the celebrated accuracy-dominance theorem due to Predd et al. (2009), but his proof is incorrect; this article explains the mistakes and provides a correct proof.
Abstract: Abstract In a recent paper, Pettigrew (2022) reports a generalization of the celebrated accuracy-dominance theorem due to Predd et al. (2009), but Pettigrew’s proof is incorrect. I will explain the mistakes and provide a correct proof.

8 citations


Journal ArticleDOI
TL;DR: This article shows how to strengthen the accuracy arguments for the core tenets of Bayesian epistemology by showing that the central mathematical theorem on which each depends goes through without assuming additivity.
Abstract: Abstract Accuracy arguments for the core tenets of Bayesian epistemology differ mainly in the conditions they place on the legitimate ways of measuring the inaccuracy of our credences. The best existing arguments rely on three conditions: Continuity, additivity, and strict propriety. In this paper, I show how to strengthen the arguments based on these conditions by showing that the central mathematical theorem on which each depends goes through without assuming additivity.
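As a gloss on the conditions named in the abstract, here is one standard way (not taken from the paper) of stating additivity and strict propriety for an inaccuracy measure defined over a set of propositions, with the Brier score as the familiar example; the article's point is that the additivity clause can be dropped.

$$
\begin{aligned}
&\text{Additivity:} && \mathfrak{I}(c,w) = \sum_{A \in \mathcal{F}} s\bigl(w(A),\, c(A)\bigr), \qquad w(A) \in \{0,1\},\\
&\text{Strict propriety:} && p\, s(1,x) + (1-p)\, s(0,x) \ \text{is uniquely minimized at } x = p, \ \text{for each } p \in [0,1],\\
&\text{Brier score:} && s(w,x) = (w - x)^{2}.
\end{aligned}
$$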

8 citations


Journal ArticleDOI
TL;DR: The main reason why I still recommend Pettigrew's book to those looking for practical guidance is simple: I believe that no work in decision theory can ultimately do much better.
Abstract: Pettigrew's book doubtlessly offers helpful food for thought for struggling decision-makers. At the same time, it is undeniable that a myriad of (possibly interacting) considerations bear on the reasonableness of difficult choices and that the guidance offered by Choosing for Changing Selves will thus generally be limited to providing some potentially useful pointers. The main reason why I nevertheless recommend Pettigrew’s book to those looking for practical guidance is simple: I believe that no work in decision theory can ultimately do much better.

6 citations


Journal ArticleDOI
TL;DR: A source of strain on the concept of animal welfare is highlighted: consciousness-involving definitions are better able to capture the normative significance of welfare, whereas consciousness-free definitions facilitate the validation of welfare indicators.
Abstract: Abstract Definitions of animal welfare often invoke consciousness or sentience. Marian Stamp Dawkins has argued that to define animal welfare this way is a mistake. In Dawkins’s alternative view, an animal with good welfare is one that is healthy and “has what it wants.” The dispute highlights a source of strain on the concept of animal welfare: consciousness-involving definitions are better able to capture the normative significance of welfare, whereas consciousness-free definitions facilitate the validation of welfare indicators. I reflect on how the field should respond to this strain, ultimately recommending against splitting the concept and in favor of consciousness-involving definitions.

Journal ArticleDOI
TL;DR: The authors argue that not only does neuroplasticity fail to provide empirical evidence of multiple realization, but its inability to do so strengthens the mind-body identity theory, and that a recently proposed identity theory, Flat Physicalism, can be used to explain the current state of the mind-body problem more adequately.
Abstract: Abstract It is commonly maintained that neuroplastic mechanisms in the brain provide empirical support for the hypothesis of multiple realizability. We show in various case studies that neuroplasticity stems from preexisting mechanisms and processes inherent in the neural (or biochemical) structure of the brain. We argue that not only does neuroplasticity fail to provide empirical evidence of multiple realization, its inability to do so strengthens the mind-body identity theory. Finally, we argue that a recently proposed identity theory called Flat Physicalism can be enlisted to explain the current state of the mind-body problem more adequately.

Journal ArticleDOI
TL;DR: In this paper, the author argues that there is a tension between two standard responses to the problems facing Solomonoff prediction: although different Solomonoff priors converge with more and more data, computable approximations to the Solomonoff prior do not always converge.
Abstract: Abstract The framework of Solomonoff prediction assigns prior probability to hypotheses inversely proportional to their Kolmogorov complexity. There are two well-known problems. First, the Solomonoff prior is relative to a choice of universal Turing machine. Second, the Solomonoff prior is not computable. However, there are responses to both problems. Different Solomonoff priors converge with more and more data. Further, there are computable approximations to the Solomonoff prior. I argue that there is a tension between these two responses. This is because computable approximations to Solomonoff prediction do not always converge.
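For concreteness, the standard definition of the Solomonoff prior relative to a universal prefix machine U (a textbook formulation, not quoted from the paper) gives a string x the total weight of the programs whose output begins with x, which is roughly inversely exponential in its Kolmogorov complexity; both problems mentioned above are visible in it: the subscript U marks the dependence on the choice of machine, and the sum over programs cannot be effectively computed.

$$
M_{U}(x) \;=\; \sum_{p\,:\,U(p)\,=\,x\ast} 2^{-\ell(p)} \;\approx\; 2^{-K_{U}(x)},
$$

where $\ell(p)$ is the length of program $p$ and $K_{U}(x)$ is the Kolmogorov complexity of $x$ relative to $U$.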

Journal ArticleDOI
TL;DR: In this article, the authors discuss certain pathologies of general relativity that might be taken to signal that the theory is breaking down, and consider how one might expect a successor theory to do better.
Abstract: It is widely accepted by physicists and philosophers of physics alike that there are certain contexts in which general relativity will “break down”. In such cases, one expects to need some as-yet undiscovered successor theory. This paper will discuss certain pathologies of general relativity that might be taken to signal that the theory is breaking down, and consider how one might expect a successor theory to do better. The upshot will be an unconventional interpretation of the “Strong Cosmic Censorship Hypothesis”.

Journal ArticleDOI
TL;DR: This article argues that patients with mental disorders are indispensable resources for enhancing psychiatric epistemology, especially in the context of the crisis, controversy, and uncertainty surrounding mental health research and treatment.
Abstract: Abstract This paper challenges the exclusion of patients from epistemic practices in psychiatry by examining the creation and revision processes of the Diagnostic and Statistical Manual of Mental Disorders (DSM), a document produced by the American Psychiatric Association that identifies the properties of mental disorders and thereby guides research, diagnosis, treatment, and various administrative tasks. It argues there are epistemic—rather than exclusively social/political—reasons for including patients in the DSM revision process. Individuals with mental disorders are indispensable resources to enhance psychiatric epistemology, especially in the context of the crisis, controversy, and uncertainty surrounding mental health research and treatment.

Journal ArticleDOI
TL;DR: This paper accounts for broad definitions of memory, which extend both to paradigmatic memory phenomena, like episodic memory in humans, and to phenomena in worms and sea snails, and concludes that treating a definition as a hypothesis provides a basis for testing inferences about these phenomena.
Abstract: Abstract This paper accounts for broad definitions of memory, which extend to paradigmatic memory phenomena, like episodic memory in humans, and phenomena in worms and sea snails. These definitions may seem too broad, suggesting that they extend to phenomena that don’t count as memory or illustrate that memory is not a natural kind. However, these responses fail to consider a definition as a hypothesis. As opposed to construing definitions as expressing memory’s properties, a definition as a hypothesis is the basis to test inferences about phenomena. A definition as a hypothesis is valuable when the “kinding” of phenomena is ongoing.

Journal ArticleDOI
TL;DR: The author argues that interpreting some models as having non-actual targets comes at various costs and proposes an alternative that fares better along two dimensions: (1) agreement with practice and (2) ontological and epistemological parsimony.
Abstract: Abstract Models typically have actual, existing targets. However, some models are viewed as having non-actual targets. I argue that this interpretation comes at various costs and propose an alternative that fares better along two dimensions: (1) agreement with practice and (2) ontological and epistemological parsimony. My proposal is that many of these models actually have actual targets.

Journal ArticleDOI
TL;DR: This article presents a quantitative study of scientists from the natural and social sciences and compares their views on theoretical virtues to those held by philosophers, finding that there is broad agreement across all three groups about how the virtues are to be ranked, that all groups agree that unification is an epistemic virtue, and that there is even some evidence that simplicity is viewed as epistemic by scientists.
Abstract: Theoretical virtues play an important role in the acceptance and belief of theories in science and philosophy. Philosophers have well-developed views on which virtues ought and ought not to influence one’s acceptance and belief. But what do scientists think? This paper presents the results of a quantitative study with scientists from the natural and social sciences and compares their views to those held by philosophers. Some of the main results are: (i) there is broad agreement across all three groups about how the virtues are to be ranked, (ii) all groups agree that unification is an epistemic virtue and there is even some evidence that simplicity is viewed as epistemic by scientists, (iii) all groups consider syntactic parsimony as more important than ontological parsimony, and (iv) all groups consider unifying power as independent from simplicity.

Journal ArticleDOI
TL;DR: In this paper, the author argues that the Argument for Autonomous Mental Disorder (AAMD) is unsound and that mental disorders are necessarily brain disorders, introducing the "natural dysfunction analysis" of disorder before outlining the AAMD.
Abstract: Abstract According to the Argument for Autonomous Mental Disorder (AAMD), mental disorder can occur in the absence of brain disorder, just as software problems can occur in the absence of hardware problems in a computer. This article argues that the AAMD is unsound. I begin by introducing the “natural dysfunction analysis” of disorder, before outlining the AAMD. I then analyze the necessary conditions for realizer autonomous dysfunction. Building on this, I show that software functions disassociate from hardware functions in a way that mental functions do not disassociate from brain functions. It follows that mental disorders are brain disorders necessarily.

Journal ArticleDOI
TL;DR: The author argues that the semantic thesis of scientific realism should be relaxed, allowing that some scientific discourse is not truth-valued, because it is possible for scientific statements to be partially true, and hence approximately true, without being false, and that this relaxation requires no concessions concerning the epistemic or methodological theses that lie at realism's core.
Abstract: Abstract First, I show that the semantic thesis of scientific realism may be relaxed significantly—to allow that some scientific discourse is not truth-valued—without making any concessions concerning the epistemic or methodological theses that lie at realism’s core. Second, I illustrate how relaxing the semantic thesis allows realists to avoid positing abstract entities and to fend off objections to the “no miracles” argument from positions such as cognitive instrumentalism. Third, I argue that the semantic thesis of scientific realism should be relaxed because it is possible for scientific statements to be partially true, and hence approximately true, without being false.

Journal ArticleDOI
TL;DR: In this paper, the author argues that there is a limited set of cases in which scientists can, consistently with a commitment to democratized science, set aside the public's judgments.
Abstract: Abstract Scientists are frequently called upon to “democratize” science, by bringing the public into scientific research. One appealing point for public involvement concerns the nonepistemic values involved in science. Suppose, though, a scientist invites the public to participate in making such value-laden determinations but finds that the public holds values the scientist considers morally unacceptable. Does the argument for democratizing science commit the scientist to accepting the public’s objectionable values, or may she veto them? I argue that there is a limited set of cases in which scientists can, consistently with a commitment to democratized science, set aside the public’s judgments.

Journal ArticleDOI
TL;DR: The author argues that Open Science as currently conceptualized and implemented does not take sufficient account of epistemic diversity within research, and identifies four aspects of diverse research practices that should serve as reference points for debates around Open Science: specificity to local conditions, entrenchment within repertoires, permeability to newcomers, and demarcation strategies.
Abstract: Abstract I argue that Open Science as currently conceptualized and implemented does not take sufficient account of epistemic diversity within research. I use three case studies to exemplify how Open Science threatens to privilege some forms of inquiry over others, thus exacerbating divides within and across systems of practice, and overlooking important sources and forms of epistemic diversity. Building on insights from pluralist philosophy, I then identify four aspects of diverse research practices that should serve as reference points for debates around Open Science: (1) specificity to local conditions, (2) entrenchment within repertoires, (3) permeability to newcomers, and (4) demarcation strategies.

Journal ArticleDOI
TL;DR: In this paper, the benefits of an ethnographic approach in philosophy of science are identified, and a purpose-guided form of cognitive ethnography that mediates between the explanatory and normative interests of philosophy of science is proposed.
Abstract: Abstract We lay groundwork for applying ethnographic methods in philosophy of science. We frame our analysis in terms of two tasks: (1) to identify the benefits of an ethnographic approach in philosophy of science and (2) to structure an ethnographic approach for philosophical investigation best adapted to provide information relevant to philosophical interests and epistemic values. To this end, we advocate for a purpose-guided form of cognitive ethnography that mediates between the explanatory and normative interests of philosophy of science, while maintaining openness and independence when framing such an investigation to achieve robust unbiased results.

Journal ArticleDOI
TL;DR: It is argued that Rochefort-Maranda's (2020) criticism of the postdata severity evaluation of testing results based on a small n is meritless because it conflates misuses of testing with genuine foundational problems and reveals several misconceptions about trustworthy evidence and estimation-based effect sizes.
Abstract: Abstract For model-based frequentist statistics, based on a parametric statistical model $\mathcal{M}_{\theta}(\mathbf{x})$, the trustworthiness of the ensuing evidence depends crucially on (i) the validity of the probabilistic assumptions comprising $\mathcal{M}_{\theta}(\mathbf{x})$, (ii) the optimality of the inference procedures employed, and (iii) the adequateness of the sample size (n) to learn from data by securing (i)–(ii). It is argued that the criticism of the postdata severity evaluation of testing results based on a small n by Rochefort-Maranda (2020) is meritless because it conflates [a] misuses of testing with [b] genuine foundational problems. Interrogating this criticism reveals several misconceptions about trustworthy evidence and estimation-based effect sizes, which are uncritically embraced by the replication crisis literature.
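For readers unused to the notation, $\mathcal{M}_{\theta}(\mathbf{x})$ is standardly read (this gloss is not drawn from the paper) as a parametrized family of joint densities for the sample, with the simple Normal model as the stock illustration.

$$
\mathcal{M}_{\theta}(\mathbf{x}) = \bigl\{ f(\mathbf{x};\theta),\ \theta \in \Theta \bigr\}, \quad \mathbf{x} \in \mathbb{R}^{n},
\qquad \text{e.g.}\ \ X_{i} \overset{\text{iid}}{\sim} \mathcal{N}(\mu,\sigma^{2}),\ i = 1,\dots,n.
$$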

Journal ArticleDOI
TL;DR: In this paper , the authors argue that recognizing captive animals as cultural has important welfare implications and that best practices for welfare should therefore require concern for animals' cultural needs, but the relationship between culture and welfare is also extremely complex, requiring us to rethink standard assumptions about what constitutes and contributes to welfare.
Abstract: Abstract Following recent arguments that cultural practices in wild animal populations have important conservation implications, we argue that recognizing captive animals as cultural has important welfare implications. Having a culture is of deep importance for cultural animals, wherever they live. Without understanding the cultural capacities of captive animals, we will be left with a deeply impoverished view of what they need to flourish. Best practices for welfare should therefore require concern for animals’ cultural needs, but the relationship between culture and welfare is also extremely complex, requiring us to rethink standard assumptions about what constitutes and contributes to welfare.

Journal ArticleDOI
TL;DR: In this article, the author establishes connections between accuracy dominance and coherence when credence functions are defined on an infinite set of propositions, proving the results needed to extend the classic accuracy argument for probabilism to certain classes of infinite sets of propositions, including countably infinite partitions.
Abstract: Abstract There is a well-known equivalence between avoiding accuracy dominance and having probabilistically coherent credences (see, e.g., de Finetti 1974; Joyce 2009; Predd et al. 2009; Pettigrew 2016). However, this equivalence has been established only when the set of propositions on which credence functions are defined is finite. In this article, I establish connections between accuracy dominance and coherence when credence functions are defined on an infinite set of propositions. In particular, I establish the necessary results to extend the classic accuracy argument for probabilism to certain classes of infinite sets of propositions, including countably infinite partitions.
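On one standard formulation (not quoted from the article), accuracy dominance is defined as below, and the finite-case theorem of Predd et al. (2009) says that, for continuous, additive, strictly proper inaccuracy measures, a credence function avoids dominance exactly when it is coherent, i.e., extendable to a probability function.

$$
c \ \text{is accuracy-dominated} \iff \exists\, c' \ \text{such that}\ \mathfrak{I}(c',w) \le \mathfrak{I}(c,w)\ \text{for all worlds } w,\ \text{with strict inequality for some } w.
$$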

Journal ArticleDOI
TL;DR: This work analyzes synthetic biology’s goal of making biology easier to engineer through the combinatorial theory of possibility, which reduces possibility to (re)combinations of individuals and their attributes in the actual world.
Abstract: Abstract Synthetic biology has a strong modal dimension that is part and parcel of its engineering agenda. In turning hypothetical biological designs into actual synthetic constructs, synthetic biologists reach toward potential biology instead of concentrating on naturally evolved organisms. We analyze synthetic biology’s goal of making biology easier to engineer through the combinatorial theory of possibility, which reduces possibility to (re)combinations of individuals and their attributes in the actual world. While the last decades of synthetic biology explorations have shown biology to be much more difficult to engineer than originally conceived, synthetic biology has not given up its combinatorial approach.

Journal ArticleDOI
TL;DR: In this paper, an intrinsic essentialist account of HPC kinds is presented, which implies that human sciences (e.g., medicine, psychiatry) that aim to formulate predictive kind categories should classify biological kinds.
Abstract: A minimal essentialism (‘intrinsic biological essentialism’) about natural kinds is required to explain the projectability of human science terms. Human classifications that yield robust and ampliative projectable inferences refer to biological kinds. I articulate this argument with reference to an intrinsic essentialist account of HPC kinds. This account implies that human sciences (e.g., medicine, psychiatry) that aim to formulate predictive kind categories should classify biological kinds. Issues concerning psychiatric classification and pluralism are examined.

Journal ArticleDOI
TL;DR: Attention to energy requirements undermines the use of substrate independence to support claims about the feasibility of artificial intelligence, the moral standing of robots, the possibility that the authors may be living in a computer simulation, the plausibility of transferring minds into computers, and the autonomy of psychology from neuroscience.
Abstract: Abstract Substrate independence and mind-body functionalism claim that thinking does not depend on any particular kind of physical implementation. But real-world information processing depends on energy, and energy depends on material substrates. Biological evidence for these claims comes from ecology and neuroscience, while computational evidence comes from neuromorphic computing and deep learning. Attention to energy requirements undermines the use of substrate independence to support claims about the feasibility of artificial intelligence, the moral standing of robots, the possibility that we may be living in a computer simulation, the plausibility of transferring minds into computers, and the autonomy of psychology from neuroscience.

Journal ArticleDOI
TL;DR: The authors argue that several apparently distinct responses to the hole argument, all invoking formal or mathematical considerations, should be viewed as a unified "mathematical response" and rebut two prominent critiques of the mathematical response before reflecting on what is ultimately at issue in this literature.
Abstract: Abstract We argue that several apparently distinct responses to the hole argument, all invoking formal or mathematical considerations, should be viewed as a unified “mathematical response.” We then consider and rebut two prominent critiques of the mathematical response before reflecting on what is ultimately at issue in this literature.