
Showing papers in "Synthese in 2021"


Journal ArticleDOI
01 Jan 2021-Synthese
TL;DR: A multiscale integrationist interpretation of the boundaries of cognitive systems, using the Markov blanket formalism of the variational free energy principle, is presented, intended as a corrective for the philosophical debate over internalist and externalist interpretations of cognitive boundaries.
Abstract: We present a multiscale integrationist interpretation of the boundaries of cognitive systems, using the Markov blanket formalism of the variational free energy principle. This interpretation is intended as a corrective for the philosophical debate over internalist and externalist interpretations of cognitive boundaries; we stake out a compromise position. We first survey key principles of new radical (extended, enactive, embodied) views of cognition. We then describe an internalist interpretation premised on the Markov blanket formalism. Having reviewed these accounts, we develop our positive multiscale account. We argue that the statistical seclusion of internal from external states of the system—entailed by the existence of a Markov boundary—can coexist happily with the multiscale integration of the system through its dynamics. Our approach does not privilege any given boundary (whether it be that of the brain, body, or world), nor does it argue that all boundaries are equally prescient. We argue that the relevant boundaries of cognition depend on the level being characterised and the explanatory interests that guide investigation. We approach the issue of how and where to draw the boundaries of cognitive systems through a multiscale ontology of cognitive systems, which offers a multidisciplinary research heuristic for cognitive science.
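As a rough, hypothetical illustration of the statistical seclusion at issue (a minimal simulation of a Markov blanket in a linear system; the variable names and noise levels are invented, and this is not code from the paper): internal and external states are correlated marginally, but become nearly independent once the blanket states are conditioned on.

```python
# Minimal sketch: internal states couple to external states only via "blanket" states,
# so conditioning on the blanket screens the internal states off from the external ones.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
external = rng.normal(size=n)                    # environmental (hidden) states
blanket = external + 0.5 * rng.normal(size=n)    # sensory/active states mediating all influence
internal = blanket + 0.5 * rng.normal(size=n)    # internal states driven only by the blanket

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z (ordinary least squares)."""
    design = np.column_stack([z, np.ones_like(z)])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

print("corr(internal, external):        ", np.corrcoef(internal, external)[0, 1])   # clearly nonzero
print("corr given the blanket (partial):", partial_corr(internal, external, blanket))  # ~ 0
```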

87 citations


Journal ArticleDOI
01 May 2021-Synthese
TL;DR: It is proposed that these two approaches to embodied cognitive science can be brought together into a productive synthesis, and the key is to recognize that the two approaches are pursuing different but complementary types of explanation.
Abstract: Radical embodied cognitive science is split into two camps: the ecological approach and the enactive approach. We propose that these two approaches can be brought together into a productive synthesis. The key is to recognize that the two approaches are pursuing different but complementary types of explanation. Both approaches seek to explain behavior in terms of the animal–environment relation, but they start at opposite ends. Ecological psychologists pursue an ontological strategy. They begin by describing the habitat of the species, and use this to explain how action possibilities are constrained for individual actors. Enactivists, meanwhile, pursue an epistemic strategy: start by characterizing the exploratory, self-regulating behavior of the individual organism, and use this to understand how that organism brings forth its animal-specific umwelt. Both types of explanation are necessary: the ontological strategy explains how structure in the environment constrains how the world can appear to an individual, while the epistemic strategy explains how the world can appear differently to different members of the same species, relative to their skills, abilities, and histories. Making the distinction between species habitat and animal-specific umwelt allows us to understand the environment in realist terms while acknowledging that individual living organisms are phenomenal beings.

72 citations


Journal ArticleDOI
01 Jun 2021-Synthese
TL;DR: Limits and prospects are identified for the free-energy principle as a first principle in the life sciences and its epistemic status is clarified.
Abstract: The free-energy principle claims that biological systems behave adaptively, maintaining their physical integrity, only if they minimize the free energy of their sensory states. Originally proposed to account for perception, learning, and action, the free-energy principle has been applied to the evolution, development, morphology, and function of the brain, and has been called a “postulate,” a “mandatory principle,” and an “imperative.” While it might afford a theoretical foundation for understanding the complex relationship between physical environment, life, and mind, its epistemic status and scope are unclear. Also unclear is how the free-energy principle relates to prominent theoretical approaches to life science phenomena, such as organicism and mechanicism. This paper clarifies both issues, and identifies limits and prospects for the free-energy principle as a first principle in the life sciences.
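For orientation, the quantity at issue can be written in one standard notation (not taken from this paper) as a functional of a recognition density q(s) over hidden states s and a generative model p(o, s) of sensory states o:

```latex
F[q] \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
     \;=\; D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right] \;-\; \ln p(o)
     \;\ge\; -\ln p(o)
```

Since the KL term is non-negative, minimizing F minimizes an upper bound on the surprisal of sensory states, which is the usual sense in which free-energy minimization is tied to a system maintaining its expected, viable states.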

64 citations


Journal ArticleDOI
01 Oct 2021-Synthese
TL;DR: This work designs an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction, and characterises the conditions under which such a game is almost surely guaranteed to converge on a (conditionally) optimal explanation surface in polynomial time.
Abstract: We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of variable granularity and scope. We characterise the conditions under which such a game is almost surely guaranteed to converge on a (conditionally) optimal explanation surface in polynomial time, and highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds. The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as design new and improved solutions.
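As a toy illustration of the dominance reasoning behind such a Pareto frontier (my sketch; the candidate explanations and scores are made up, and this is not the paper's game-theoretic machinery):

```python
# Keep only candidates not dominated on all three criteria (accuracy, simplicity, relevance).
def dominates(a, b):
    """a dominates b if a is at least as good on every criterion and strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(candidates):
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Hypothetical scores: (accuracy, simplicity, relevance), higher is better.
explanations = {
    "full causal model":   (0.95, 0.20, 0.70),
    "single-feature rule": (0.60, 0.95, 0.80),
    "two-feature rule":    (0.75, 0.80, 0.85),
    "random noise":        (0.10, 0.90, 0.05),   # dominated, so it drops out
}
frontier = pareto_frontier(list(explanations.values()))
print([name for name, score in explanations.items() if score in frontier])
```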

49 citations


Journal ArticleDOI
01 Oct 2021-Synthese
TL;DR: In this article, the authors suggest an account of these concepts, building on and refining an existing view due to Jones (in: Jones MR, Cartwright N (eds) Idealization XII: correcting the model).
Abstract: Idealization and abstraction are central concepts in the philosophy of science and in science itself. My goal in this paper is to suggest an account of these concepts, building on and refining an existing view due to Jones (in: Jones MR, Cartwright N (eds) Idealization XII: correcting the model. Idealization and abstraction in the sciences, vol 86. Rodopi, Amsterdam, pp 173–217, 2005) and Godfrey-Smith (in: Barberousse A, Morange M, Pradeu T (eds) Mapping the future of biology: evolving concepts and theories. Springer, Berlin, 2009). On this line of thought, abstraction—which I call, for reasons to be explained, abstractness—involves the omission of detail, whereas idealization consists in a deliberate mismatch between a description (or a model) and the world. I will suggest that while the core idea underlying these authors’ view is correct, they make several assumptions and stipulations that are best avoided. For one thing, they tie abstractness too closely to truth. For another, they do not allow sufficient room for the difference between idealization and error. Taking these points into account leads to a refined account of the distinction, in which abstractness is seen in terms of relative richness of detail, and idealization is seen as closely connected with the knowledge and intentions of idealizers. I lay out these accounts in turn, and then discuss the relationship between the two concepts, and several other upshots of the present way of construing the distinction.

48 citations


Journal ArticleDOI
01 May 2021-Synthese
TL;DR: It is argued that through JAM, children discover that mental contents can be evaluated under various attitudes, and that this discovery transforms their mind-reading and reasoning abilities.
Abstract: Growing evidence indicates that our higher rational capacities depend on social interaction—that only through engaging with others do we acquire the ability to evaluate beliefs as true or false, or to reflect on and evaluate the reasons that support our beliefs. Up to now, however, we have had little understanding of how this works. Here we argue that a uniquely human socio-linguistic phenomenon which we call ‘joint attention to mental content’ (JAM) plays a key role. JAM is the ability to focus together in conversation on the content of our mental states, such as beliefs and reasons. In such conversations it can be made clear that our attitudes to beliefs or reasons may conflict—that what I think is true, you might think is false, or that what I think is a good reason for believing something, you might think is a bad reason. We argue that through JAM, children discover that mental contents can be evaluated under various attitudes, and that this discovery transforms their mind-reading and reasoning abilities.

48 citations


Journal ArticleDOI
01 Jan 2021-Synthese
TL;DR: In this paper, it is argued that the idea of behavioral or organic coordination within the enactive approach gives rise to the sensorimotor abilities of the organism, while the ecological approach emphasizes the coordination at a higher level between organism and environment through the agent's exploratory behavior for perceiving affordances.
Abstract: This paper argues that it is possible to combine enactivism and ecological psychology in a single post-cognitivist research framework if we highlight the common pragmatist assumptions of both approaches. These pragmatist assumptions or starting points are shared by ecological psychology and the enactive approach independently of being historically related to pragmatism, and they are based on the idea of organic coordination, which states that the evolution and development of the cognitive abilities of an organism are explained by appealing to the history of interactions of that organism with its environment. It is argued that the idea of behavioral or organic coordination within the enactive approach gives rise to the sensorimotor abilities of the organism, while the ecological approach emphasizes the coordination at a higher level between organism and environment through the agent’s exploratory behavior for perceiving affordances. As such, these two different processes of organic coordination can be integrated in a post-cognitivist research framework, which will be based on two levels of analysis: the subpersonal one (the neural dynamics of the sensorimotor contingencies and the emergence of enactive agency) and the personal one (the dynamics that emerges from the organism-environment interaction in ecological terms). If this proposal is on the right track, this may be a promising first step for offering a systematized and consistent post-cognitivist approach to cognition that retains the full potential of both enactivism and ecological psychology.

45 citations


Journal ArticleDOI
01 Dec 2021-Synthese
TL;DR: It is argued that the free energy principle is best understood as offering a conceptual and mathematical analysis of the concept of existence of self-organising systems, and can uniquely serve as a regulatory principle for process theories, to ensure that process theories conforming to it enable self-supervision.
Abstract: The free energy principle says that any self-organising system that is at nonequilibrium steady-state with its environment must minimize its (variational) free energy. It is proposed as a grand unifying principle for cognitive science and biology. The principle can appear cryptic, esoteric, too ambitious, and unfalsifiable—suggesting it would be best to suspend any belief in the principle, and instead focus on individual, more concrete and falsifiable ‘process theories’ for particular biological processes and phenomena like perception, decision and action. Here, I explain the free energy principle, and I argue that it is best understood as offering a conceptual and mathematical analysis of the concept of existence of self-organising systems. This analysis offers a new type of solution to long-standing problems in neurobiology, cognitive science, machine learning and philosophy concerning the possibility of normatively constrained, self-supervised learning and inference. The principle can therefore uniquely serve as a regulatory principle for process theories, to ensure that process theories conforming to it enable self-supervision. This is, at least for those who believe self-supervision is a foundational explanatory task, good reason to believe the free energy principle.

42 citations


Journal ArticleDOI
01 Oct 2021-Synthese
TL;DR: It is argued that similarities between algorithmic and cognitive biases indicate a disconcerting sense in which sources of bias emerge out of seemingly innocuous patterns of information processing, making such bias difficult to identify, mitigate, or evaluate using standard resources in epistemology and ethics.
Abstract: Often machine learning programs inherit social patterns reflected in their training data without any directed effort by programmers to include such biases. Computer scientists call this algorithmic bias. This paper explores the relationship between machine bias and human cognitive bias. In it, I argue that similarities between algorithmic and cognitive biases indicate a disconcerting sense in which sources of bias emerge out of seemingly innocuous patterns of information processing. The emergent nature of this bias obscures the existence of the bias itself, making it difficult to identify, mitigate, or evaluate using standard resources in epistemology and ethics. I demonstrate these points in the case of mitigation techniques by presenting what I call ‘the Proxy Problem’. One reason biases resist revision is that they rely on proxy attributes, seemingly innocuous attributes that correlate with socially-sensitive attributes, serving as proxies for the socially-sensitive attributes themselves. I argue that in both human and algorithmic domains, this problem presents a common dilemma for mitigation: attempts to discourage reliance on proxy attributes risk a tradeoff with judgement accuracy. This problem, I contend, admits of no purely algorithmic solution.
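A minimal simulation of the Proxy Problem (my construction; the feature names, coefficients, and noise levels are invented): withholding the sensitive attribute does not prevent the model from recovering part of the group disparity through a correlated proxy, and dropping the proxy removes the disparity only at a cost in predictive accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
group = rng.integers(0, 2, size=n)                  # sensitive attribute, never given to the model
proxy = group + 0.7 * rng.normal(size=n)            # seemingly innocuous feature correlated with group
skill = rng.normal(size=n)                          # legitimate predictor, independent of group
outcome = skill + 0.8 * group + rng.normal(size=n)  # historical outcomes already encode the disparity

def fit_predict(features):
    design = np.column_stack(features + [np.ones(n)])
    coef, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return design @ coef

for label, features in [("with proxy", [proxy, skill]), ("without proxy", [skill])]:
    pred = fit_predict(features)
    gap = pred[group == 1].mean() - pred[group == 0].mean()
    mse = np.mean((pred - outcome) ** 2)
    print(f"{label}: group gap in predictions = {gap:.2f}, mean squared error = {mse:.2f}")
```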

40 citations


Journal ArticleDOI
01 Dec 2021-Synthese
TL;DR: The current study represents the first systematic, pre-registered attempt to establish whether and to what extent the recommender system tends to recommend extremist content and conspiracy theories, as such videos are especially likely to capture and keep users’ attention.
Abstract: YouTube has been implicated in the transformation of users into extremists and conspiracy theorists. The alleged mechanism for this radicalizing process is YouTube’s recommender system, which is optimized to amplify and promote clips that users are likely to watch through to the end. YouTube optimizes for watch-through for economic reasons: people who watch a video through to the end are likely to then watch the next recommended video as well, which means that more advertisements can be served to them. This is a seemingly innocuous design choice, but it has a troubling side-effect. Critics of YouTube have alleged that the recommender system tends to recommend extremist content and conspiracy theories, as such videos are especially likely to capture and keep users’ attention. To date, the problem of radicalization via the YouTube recommender system has been a matter of speculation. The current study represents the first systematic, pre-registered attempt to establish whether and to what extent the recommender system tends to promote such content. We begin by contextualizing our study in the framework of technological seduction. Next, we explain our methodology. After that, we present our results, which are consistent with the radicalization hypothesis. Finally, we discuss our findings, as well as directions for future research and recommendations for users, industry, and policy-makers.

40 citations


Journal ArticleDOI
01 Oct 2021-Synthese
TL;DR: It is claimed that the public ought to embrace the charge of virtue signalling, rather than angrily reject it, because virtue signalling is morally appropriate.
Abstract: The accusation of virtue signalling is typically understood as a serious charge. Those accused usually respond (if not by an admission of fault) by attempting to show that they are doing no such thing. In this paper, I argue that we ought to embrace the charge, rather than angrily reject it. I argue that this response can draw support from cognitive science, on the one hand, and from social epistemology on the other. I claim that we may appropriately concede that what we are doing is (inter alia) virtue signalling, because virtue signalling is morally appropriate. It neither expresses vices, nor is hypocritical, nor does it degrade the quality of public moral discourse. Signalling our commitment to norms is a central and justifiable function of moral discourse, and the same signals provide (higher-order) evidence that is appropriately taken into account in forming moral beliefs.

Journal ArticleDOI
19 May 2021-Synthese
TL;DR: This paper uses intuitive yet precise graphical models called causal influence diagrams to formalize reward tampering problems, and describes a number of modifications to the reinforcement learning objective that prevent incentives for reward tampering.
Abstract: Can humans get arbitrarily capable reinforcement learning (RL) agents to do their bidding? Or will sufficiently capable RL agents always find ways to bypass their intended objectives by shortcutting their reward signal? This question impacts how far RL can be scaled, and whether alternative paradigms must be developed in order to build safe artificial general intelligence. In this paper, we study when an RL agent has an instrumental goal to tamper with its reward process, and describe design principles that prevent instrumental goals for two different types of reward tampering (reward function tampering and RF-input tampering). Combined, the design principles can prevent reward tampering from being an instrumental goal. The analysis benefits from causal influence diagrams to provide intuitive yet precise formalizations.
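A toy sketch of the reward-function-tampering incentive (my construction, not the paper's causal-influence-diagram formalism): a planner that evaluates plans with the reward function as it would stand after tampering prefers to tamper, while one that evaluates plans with the current, untampered reward function does not.

```python
# Toy setup: the agent can "work" (reward 1 under the intended reward function) or
# "tamper", rewriting its reward function so that every subsequent state yields 10.
def rollout(plan):
    """Execute a plan; return the visited states and the reward function as it ends up."""
    intended = lambda state: 1.0 if state == "worked" else 0.0
    reward_fn, states = intended, []
    for action in plan:
        if action == "tamper":
            reward_fn = lambda state: 10.0        # the reward function has been rewritten
            states.append("tampered")
        else:
            states.append("worked")
    return states, reward_fn

def plan_value(plan, evaluate_with_final_fn):
    states, final_fn = rollout(plan)
    intended = lambda state: 1.0 if state == "worked" else 0.0
    reward = final_fn if evaluate_with_final_fn else intended
    return sum(reward(state) for state in states)

plans = [("work", "work"), ("tamper", "work")]
for flag, label in [(True, "evaluating with the post-tampering reward fn"),
                    (False, "evaluating with the current reward fn")]:
    best = max(plans, key=lambda p: plan_value(p, flag))
    print(f"{label}: preferred plan = {best}")
```

Very loosely, this is the kind of instrumental incentive the paper's design principles are meant to remove.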

Journal ArticleDOI
01 May 2021-Synthese
TL;DR: A truism of commonsense psychology, that perception and action constitute the boundaries of the mind, is developed using the notion of a Markov blanket originally employed to describe the topological properties of causal networks; the same formalism is then used to show how both Chalmers’s and Hohwy’s objections to the extended mind fail.
Abstract: We develop a truism of commonsense psychology that perception and action constitute the boundaries of the mind. We do so, however, not on the basis of commonsense psychology, but by using the notion of a Markov blanket originally employed to describe the topological properties of causal networks. We employ the Markov blanket formalism to propose precise criteria for demarcating the boundaries of the mind that, unlike other rival candidates for “marks of the cognitive”, avoid begging the question in the extended mind debate. Our criteria imply that the boundary of the mind is nested and multiscale, sometimes extending beyond the individual agent to incorporate items located in the environment. Chalmers has used commonsense psychology to develop what he sees as the most serious challenge to the view that minds sometimes extend into the world. He has argued that perception and action should be thought of as interfaces that separate minds from their surrounding environment. In a series of recent papers Hohwy has defended a similar claim using the Markov blanket formalism. We use the Markov blanket formalism to show how both of their objections to the extended mind fail.

Journal ArticleDOI
01 Aug 2021-Synthese
TL;DR: This paper will argue that accountability, responsibility-as-virtue and the willingness to take responsibility are crucial for responsible innovation.
Abstract: The notion of responsible innovation suggests that innovators carry additional responsibilities (to society, stakeholders, users) beyond those commonly suggested. In this paper, we will discuss the meaning of these novel responsibilities focusing on two philosophical problems of attributing such responsibilities to innovators. The first is the allocation of responsibilities to innovators. Innovation is a process that involves a multiplicity of agents and unpredictable, far-reaching causal chains from innovation to social impacts, which creates great uncertainty. A second problem is constituted by possible trade-offs between different kinds of responsibility. It is evident that attributing backward-looking responsibility for product failures diminishes the willingness to learn about such defects and to take forward-looking responsibility. We will argue that these problems can be overcome by elaborating what it is exactly that innovators are responsible for. In this manner, we will distinguish more clearly between holding responsible and taking responsibility. This opens a space for ‘supererogatory’ responsibilities. Second, we will argue that both innovation processes and outcomes can be objects of innovators’ responsibility. Third, we will analyze different kinds of responsibility (blameworthiness, accountability, liability, obligation and virtue) and show that the functions of their attribution are not necessarily contradictory. Based on this conceptual refinement, we will argue that accountability, responsibility-as-virtue and the willingness to take responsibility are crucial for responsible innovation.

Journal ArticleDOI
01 Jan 2021-Synthese
TL;DR: It is argued that the viability of conceptual engineering crucially depends on the ability to bring about meaning change, and that, contrary to first appearance, causal theories of reference do allow for a sufficient degree of meaning control.
Abstract: Unlike conceptual analysis, conceptual engineering does not aim to identify the content that our current concepts do have, but the content which these concepts should have. For this method to show the results that its practitioners typically aim for, being able to change meanings seems to be a crucial presupposition. However, certain branches of semantic externalism raise doubts about whether this presupposition can be met. To the extent that meanings are determined by external factors such as causal histories or microphysical structures, it seems that they cannot be changed intentionally. This paper gives an extended discussion of this ‘externalist challenge’. Pace Herman Cappelen’s recent take on this issue, it argues that the viability of conceptual engineering crucially depends on our ability to bring about meaning change. Furthermore, it argues that, contrary to first appearance, causal theories of reference do allow for a sufficient degree of meaning control. To this purpose, it argues that there is a sense of what is called ‘collective long-range control’, and that popular versions of the causal theory of reference imply that people have this kind of control over meanings.

Journal ArticleDOI
01 May 2021-Synthese
TL;DR: It is argued that psychology as a science of molar behaviour will need to appeal both to the concept of shared, publicly available affordances and to the multiplicity of relevant affordances that invite an individual to act, and a process perspective on this distinction is proposed to better account for the sociomaterial reality of the geographical environment.
Abstract: The smooth integration of the natural sciences with everyday lived experience is an important ambition of radical embodied cognitive science. In this paper we start from Koffka’s recommendation in his Principles of Gestalt Psychology that to realize this ambition psychology should be a “science of molar behaviour”. Molar behavior refers to the purposeful behaviour of the whole organism directed at an environment that is meaningfully structured for the animal. Koffka made a sharp distinction between the “behavioural environment” and the “geographical environment”. We show how this distinction picks out the difference between the environment as perceived by an individual organism, and the shared publicly available environment. The ecological psychologist James Gibson was later critical of Koffka for inserting a private phenomenal reality in between animals and the shared environment. Gibson tried to make do with just the concept of affordances in his explanation of molar behaviour. We argue however that psychology as a science of molar behaviour will need to make appeal both to the concepts of shared publicly available affordances, and of the multiplicity of relevant affordances that invite an individual to act. A version of Koffka’s distinction between the two environments remains alive today in a distinction we have made between the field and landscape of affordances. Having distinguished the two environments, we go on to provide an account of how the two environments are related. Koffka suggested that the behavioural environment forms out of the causal interaction of the individual with a pre-existing, ready-made geographical environment. We argue that such an account of the relation between the two environments fails to do justice to the complex entanglement of the social with the material aspects of the geographical environment. To better account for this sociomaterial reality of the geographical environment, we propose a process-perspective on our distinction between the landscape and field of affordances. While the two environments can be conceptually distinguished, we argue they should also be viewed as standing in a relation of reciprocal and mutual dependence.

Journal ArticleDOI
Alisa Bokulich1
01 Oct 2021-Synthese
TL;DR: This paper examines the case of how paleodiversity data models are constructed from the fossil data and argues for the following related theses: first, the ‘purity’ of a data model is not a measure of its epistemic reliability; instead, it is the fidelity of the data that matters.
Abstract: Despite an enormous philosophical literature on models in science, surprisingly little has been written about data models and how they are constructed. In this paper, I examine the case of how paleodiversity data models are constructed from the fossil data. In particular, I show how paleontologists are using various model-based techniques to correct the data. Drawing on this research, I argue for the following related theses: first, the ‘purity’ of a data model is not a measure of its epistemic reliability. Instead it is the fidelity of the data that matters. Second, the fidelity of a data model in capturing the signal of interest is a matter of degree. Third, the fidelity of a data model can be improved ‘vicariously’, such as through the use of post hoc model-based correction techniques. And, fourth, data models, like theoretical models, should be assessed as adequate (or inadequate) for particular purposes.

Journal ArticleDOI
01 Mar 2021-Synthese
TL;DR: This article reviews studies that explore children’s number knowledge at various points during this acquisition process, and assumes the theoretical framework proposed by Carey (The origin of concepts, 2009).
Abstract: This article focuses on how young children acquire concepts for exact, cardinal numbers (e.g., three, seven, two hundred, etc.). I believe that exact numbers are a conceptual structure that was invented by people, and that most children acquire gradually, over a period of months or years during early childhood. This article reviews studies that explore children’s number knowledge at various points during this acquisition process. Most of these studies were done in my own lab, and assume the theoretical framework proposed by Carey (The origin of concepts, 2009). In this framework, the counting list (‘one,’ ‘two,’ ‘three,’ etc.) and the counting routine (i.e., reciting the list and pointing to objects, one at a time) form a placeholder structure. Over time, the placeholder structure is gradually filled in with meaning to become a conceptual structure that allows the child to represent exact numbers (e.g., There are 24 children in my class, so I need to bring 24 cupcakes for the party.) A number system is a socially shared, structured set of symbols that pose a learning challenge for children. But once children have acquired a number system, it allows them to represent information (i.e., large, exact cardinal values) that they had no way of representing before.

Journal ArticleDOI
01 Feb 2021-Synthese
TL;DR: It is argued that the ecological concept of uncertainty relies on a systemic view that treats uncertainty as a property of the organism–environment system, and it is shown how simple heuristics can deal with unmeasurable uncertainty and in what cases ignoring probabilities emerges as a proper response to uncertainty.
Abstract: Despite the ubiquity of uncertainty, scientific attention has focused primarily on probabilistic approaches, which predominantly rely on the assumption that uncertainty can be measured and expressed numerically. At the same time, the increasing amount of research from a range of areas including psychology, economics, and sociology testify that in the real world, people’s understanding of risky and uncertain situations cannot be satisfactorily explained in probabilistic and decision-theoretical terms. In this article, we offer a theoretical overview of an alternative approach to uncertainty developed in the framework of the ecological rationality research program. We first trace the origins of the ecological approach to uncertainty in Simon’s bounded rationality and Brunswik’s lens model framework and then proceed by outlining a theoretical view of uncertainty that ensues from the ecological rationality approach. We argue that the ecological concept of uncertainty relies on a systemic view of uncertainty that features it as a property of the organism–environment system. We also show how simple heuristics can deal with unmeasurable uncertainty and in what cases ignoring probabilities emerges as a proper response to uncertainty.

Journal ArticleDOI
01 Apr 2021-Synthese
TL;DR: Limiting axiomatic rationality to small worlds, the author proposes a naturalized version of rationality for situations of intractability and uncertainty (as opposed to risk), none of which are in (S, C).
Abstract: Axiomatic rationality is defined in terms of conformity to abstract axioms. Savage (The foundations of statistics, Wiley, New York, 1954) limited axiomatic rationality to small worlds (S, C), that is, situations in which the exhaustive and mutually exclusive set of future states S and their consequences C are known. Others have interpreted axiomatic rationality as a categorical norm for how human beings should reason, arguing in addition that violations would lead to real costs such as money pumps. Yet a review of the literature shows little evidence that violations are actually associated with any measurable costs. Limiting axiomatic rationality to small worlds, I propose a naturalized version of rationality for situations of intractability and uncertainty (as opposed to risk), all of which are not in (S, C). In these situations, humans can achieve their goals by relying on heuristics that may violate axiomatic rationality. The study of ecological rationality requires formal models of heuristics and an analysis of the structures of environments these can exploit. It lays the foundation of a moderate naturalism in epistemology, providing statements about heuristics we should use in a given situation. Unlike axiomatic rationality, ecological rationality can explain less-is-more effects (when using less information can be expected to generate more accurate predictions), formalize when one should move from ‘is’ to ‘ought,’ and be evaluated by goals beyond coherence, such as predictive accuracy, frugality, and efficiency. Ecological rationality can be seen as a formalization of means–end instrumentalist rationality, based on Herbert Simon’s insight that rational behavior is a function of the mind and its environment.
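As one concrete example of the formal models of heuristics at issue, here is a minimal sketch of the take-the-best heuristic (the cue names and city profiles are invented for illustration):

```python
def take_the_best(option_a, option_b, cues_by_validity):
    """Return 'a', 'b', or 'guess': decide on the first cue, in validity order, that discriminates."""
    for cue in cues_by_validity:          # cues ordered from most to least valid
        a, b = option_a[cue], option_b[cue]
        if a != b:                        # first discriminating cue decides; remaining cues are ignored
            return "a" if a > b else "b"
    return "guess"                        # no cue discriminates

# Hypothetical "which city is larger?" comparison with binary cues (1 = cue present).
cues = ["has_major_airport", "is_state_capital", "has_university"]
city_a = {"has_major_airport": 1, "is_state_capital": 0, "has_university": 1}
city_b = {"has_major_airport": 1, "is_state_capital": 1, "has_university": 1}
print(take_the_best(city_a, city_b, cues))   # 'b': settled by the second cue alone
```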

Journal ArticleDOI
01 May 2021-Synthese
TL;DR: While appeals to the ordinary view of time may do some work in the context of adjudicating disputes between dynamists and non-dynamists, they likely cannot do any such work adjudicating disputes between particular brands of dynamism (or non-dynamism).
Abstract: We investigated, experimentally, the contention that the folk view, or naive theory, of time, amongst the population we investigated (i.e. U.S. residents) is dynamical. We found that amongst that population, (i) ~ 70% have an extant theory of time (the theory they deploy after some reflection, whether it be naive or sophisticated) that is more similar to a dynamical than a non-dynamical theory, and (ii) ~ 70% of those who deploy a naive theory of time (the theory they deploy on the basis of naive interactions with the world and not on the basis of scientific investigation or knowledge) deploy a naive theory that is more similar to a dynamical than a non-dynamical theory. Interestingly, while we found stable results across our two experiments regarding the percentage of participants that have a dynamical or non-dynamical extant theory of time, we did not find such stability regarding which particular dynamical or non-dynamical theory of time they take to be most similar to our world. This suggests that there might be two extant theories in the population—a broadly dynamical one and a broadly non-dynamical one—but that those theories are sufficiently incomplete that participants do not stably choose the same dynamical (or non-dynamical) theory as being most similar to our world. This suggests that while appeals to the ordinary view of time may do some work in the context of adjudicating disputes between dynamists and non-dynamists, they likely cannot do any such work adjudicating disputes between particular brands of dynamism (or non-dynamism).

Journal ArticleDOI
01 Aug 2021-Synthese
TL;DR: This paper analyses the paradigm-level assumptions that are (implicitly) being brought forward by the different conceptualizations of RI and helps to raise the self-awareness of the RI community about their presuppositions and the paradigm level barriers and enablers to reaching the RI ideal.
Abstract: The current challenges of implementing responsible innovation (RI) can in part be traced back to the (implicit) assumptions behind the ways of thinking that ground the different pre-existing theories and approaches that are shared under the RI-umbrella. Achieving the ideals of RI, therefore not only requires a shift on an operational and systemic level but also at the paradigm-level. In order to develop a deeper understanding of this paradigm shift, this paper analyses the paradigm-level assumptions that are (implicitly) being brought forward by the different conceptualizations of RI. To this purpose it deploys (1) a pragmatic stance on paradigms that allows discerning ontological and axiological elements shared by the RI community and (2) an accompanying critical hermeneutic research approach that enables the profiling of paradigmatic beliefs and assumptions of accounts of RI. The research surfaces the distance of four salient RI accounts from the currently dominant techno-economic innovation paradigm RI seeks to shift. With this, our contribution helps to raise the self-awareness of the RI community about their presuppositions and the paradigm level barriers and enablers to reaching the RI ideal. This insight is needed for a successful transition to responsible research and innovation practices.

Journal ArticleDOI
01 Jul 2021-Synthese
TL;DR: A factive view of understanding that fully accommodates the puzzling fact that falsehoods can have an epistemic value by virtue of being the very falsehoods they are is offered.
Abstract: Science is replete with falsehoods that epistemically facilitate understanding by virtue of being the very falsehoods they are. In view of this puzzling fact, some have relaxed the truth requirement on understanding. I offer a factive view of understanding (i.e., the extraction view) that fully accommodates the puzzling fact in four steps: (i) I argue that the question how these falsehoods are related to the phenomenon to be understood and the question how they figure into the content of understanding it are independent. (ii) I argue that the falsehoods do not figure into the understanding’s content by being elements of its periphery or core. (iii) Drawing lessons from case studies, I argue that the falsehoods merely enable understanding. When working with such falsehoods, only the truths we extract from them are elements of the content of our understanding. (iv) I argue that the extraction view is compatible with the thesis that falsehoods can have an epistemic value by virtue of being the very falsehoods they are.

Journal ArticleDOI
01 Jan 2021-Synthese
TL;DR: This paper responds to recent criticisms of the idea that true causal claims can differ in the extent to which they satisfy other conditions—called stability and proportionality—that are relevant to their use in explanatory theorizing.
Abstract: This paper responds to recent criticisms of the idea that true causal claims, satisfying a minimal “interventionist” criterion for causation, can differ in the extent to which they satisfy other conditions—called stability and proportionality—that are relevant to their use in explanatory theorizing. It reformulates the notion of proportionality so as to avoid problems with previous formulations. It also introduces the notion of conditional independence or irrelevance, which I claim is central to understanding the respects and the extent to which upper level explanations can be “autonomous”.

Journal ArticleDOI
01 Aug 2021-Synthese
TL;DR: The article formulates the thesis that composition is identity in a plural language, a symbolic language that includes counterparts of plural constructions of natural languages, and shows that it implies that nothing has a proper part.
Abstract: Say that some things compose something, if the latter is a whole, fusion, or mereological sum of the former. Then the thesis that composition is identity holds that the composition relation is a kind of identity relation, a plural cousin of singular identity. On this thesis, any things that compose a whole (taken together) are identical with the whole. This article argues that the thesis is incoherent. To do so, the article formulates the thesis in a plural language, a symbolic language that includes counterparts of plural constructions of natural languages, and shows that it implies that nothing has a proper part. Then the article argues that the thesis, as its proponents take it, is incoherent because they take it to imply or presuppose that some things have proper parts.
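One natural way to symbolize the thesis in such a plural language (my rendering for orientation, not the article's exact formulation), with xx a plural variable and Compose(xx, y) read as "the xx compose y":

```latex
\forall xx\, \forall y \,\big( \mathrm{Compose}(xx, y) \rightarrow xx = y \big)
```

The article's charge is that, so read, the thesis implies that nothing has a proper part, i.e. that there are no x and y such that y is a proper part of x.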

Journal ArticleDOI
01 Oct 2021-Synthese
TL;DR: A critical examination of the answer that structural mapping accounts offer to the former problem leads to the identification of a lacuna in these accounts: they have to presuppose that target systems are structured and yet leave this presupposition unexplained.
Abstract: How does mathematics apply to something non-mathematical? We distinguish between a general application problem and a special application problem. A critical examination of the answer that structural mapping accounts offer to the former problem leads us to identify a lacuna in these accounts: they have to presuppose that target systems are structured and yet leave this presupposition unexplained. We propose to fill this gap with an account that attributes structures to targets through structure generating descriptions. These descriptions are physical descriptions and so there is no such thing as a solely mathematical account of a target system.

Journal ArticleDOI
04 Jun 2021-Synthese
TL;DR: It is argued that many standard learning algorithms should rather be understood as model-dependent: in each application they also require a model as input, representing a bias; though generic in themselves, such algorithms can then be given a model-relative justification.
Abstract: The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justification. We argue that many standard learning algorithms should rather be understood as model-dependent: in each application they also require a model as input, representing a bias. Though generic in themselves, such algorithms can then be given a model-relative justification.
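A toy enumeration of the no-free-lunch pattern the paper starts from (my illustration on a four-point domain, not the authors' formal setting): averaged uniformly over every labelling of the unseen points consistent with the training data, any two learners have the same expected off-training-set accuracy.

```python
from itertools import product

train_x, train_y = [0, 1], [1, 1]          # observed points and their labels
test_x = [2, 3]                            # unseen (off-training) points

def learner_majority(xs):                  # predicts the majority training label everywhere
    majority = int(sum(train_y) * 2 >= len(train_y))
    return [majority for _ in xs]

def learner_pessimist(xs):                 # always predicts 0, regardless of the data
    return [0 for _ in xs]

for name, learner in [("majority", learner_majority), ("pessimist", learner_pessimist)]:
    accuracies = []
    for target in product([0, 1], repeat=len(test_x)):   # every target consistent with the training data
        predictions = learner(test_x)
        accuracies.append(sum(p == t for p, t in zip(predictions, target)) / len(test_x))
    print(f"{name}: mean off-training accuracy = {sum(accuracies) / len(accuracies)}")  # 0.5 for both
```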

Journal ArticleDOI
15 Apr 2021-Synthese
TL;DR: In this paper, an enactive approach to pain and the transition to chronicity is proposed, in which behavioral learning, neural reorganization, and socio-cultural practices are considered.
Abstract: In recent years, the societal and personal impacts of pain, and the fact that we still lack an effective method of treatment, has motivated researchers from diverse disciplines to try to think in new ways about pain and its management. In this paper, we aim to develop an enactive approach to pain and the transition to chronicity. Two aspects are central to this project. First, the paper conceptualizes differences between acute and chronic pain, as well as the dynamic process of pain chronification, in terms of changes in the field of affordances. That is, in terms of the possibilities for action perceived by subjects in pain. As such, we aim to do justice to the lived experience of patients as well as the dynamic role of behavioral learning, neural reorganization, and socio-cultural practices in the generation and maintenance of pain. Second, we aim to show in which manners such an enactive approach may contribute to a comprehensive understanding of pain that avoids conceptual and methodological issues of reductionist and fragmented approaches. It proves particularly beneficial as a heuristic in pain therapy addressing the heterogeneous yet dynamically intertwined aspects that may contribute to pain and its chronification.

Journal ArticleDOI
01 Jul 2021-Synthese
TL;DR: It is explained why the necessity to define a probability distribution renders arguments from naturalness internally contradictory, and why it is conceptually questionable to single out assumptions about dimensionless parameters from among a host of other assumptions.
Abstract: We critically analyze the rationale of arguments from finetuning and naturalness in particle physics and cosmology, notably the small values of the mass of the Higgs-boson and the cosmological constant. We identify several new reasons why these arguments are not scientifically relevant. Besides laying out why the necessity to define a probability distribution renders arguments from naturalness internally contradictory, it is also explained why it is conceptually questionable to single out assumptions about dimensionless parameters from among a host of other assumptions. Some other numerological coincidences and their problems are also discussed.

Journal ArticleDOI
01 Jan 2021-Synthese
TL;DR: This work offers two puzzles—one concerning the essences of non-compossible, complementary entities, and a second involving entities whose essences are modally ‘loaded’—that together strongly call into question the possibility of reducing modality to essence.
Abstract: It is a truth universally acknowledged that a claim of metaphysical modality, in possession of good alethic standing, must be in want of an essentialist foundation. Or at least so say the advocates of the reductive-essence-first view (the REF, for short), according to which all (metaphysical) modality is to be reductively defined in terms of essence. Here, I contest this bit of current wisdom. In particular, I offer two puzzles—one concerning the essences of non-compossible, complementary entities, and a second involving entities whose essences are modally ‘loaded’—that together strongly call into question the possibility of reducing modality to essence.