Showing papers in "Synthese in 2018"


Journal ArticleDOI
01 Jan 2018-Synthese
TL;DR: A basic review of neuroscientific, cognitive, and philosophical approaches to PP is presented, to illustrate how these range from solidly cognitivist applications—with a firm commitment to modular, internalistic mental representation—to more moderate views emphasizing the importance of ‘body-representations’, and finally to those which fit comfortably with radically enactive, embodied, and dynamic theories of mind.
Abstract: Predictive processing (PP) approaches to the mind are increasingly popular in the cognitive sciences. This surge of interest is accompanied by a proliferation of philosophical arguments, which seek to either extend or oppose various aspects of the emerging framework. In particular, the question of how to position predictive processing with respect to enactive and embodied cognition has become a topic of intense debate. While these arguments are certainly of valuable scientific and philosophical merit, they risk underestimating the variety of approaches gathered under the predictive label. Here, we first present a basic review of neuroscientific, cognitive, and philosophical approaches to PP, to illustrate how these range from solidly cognitivist applications—with a firm commitment to modular, internalistic mental representation—to more moderate views emphasizing the importance of ‘body-representations’, and finally to those which fit comfortably with radically enactive, embodied, and dynamic theories of mind. Any nascent predictive processing theory (e.g., of attention or consciousness) must take into account this continuum of views, and associated theoretical commitments. As a final point, we illustrate how the Free Energy Principle (FEP) attempts to dissolve tension between internalist and externalist accounts of cognition, by providing a formal synthetic account of how internal ‘representations’ arise from autopoietic self-organization. The FEP thus furnishes empirically productive process theories (e.g., predictive processing) by which to guide discovery through the formal modelling of the embodied mind.
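
For reference, the quantity the FEP says self-organizing systems minimize is variational free energy. A textbook decomposition (standard in this literature, not specific to this paper) shows why minimizing it links internal 'representation' to self-organization:

```latex
F = \mathbb{E}_{q(s)}\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\left[\, q(s) \,\|\, p(s \mid o) \,\right] - \ln p(o)
```

Here q(s) is the organism's internal (recognition) density over hidden states s, and p(o, s) its generative model of observations o. Since the KL term is non-negative, F upper-bounds the surprisal −ln p(o): minimizing free energy both improves the internal approximation to the posterior and, on average, keeps the organism within unsurprising states.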

255 citations


Journal ArticleDOI
01 Jan 2018-Synthese
TL;DR: It is argued that the free energy principle and the ecological and enactive approach to mind and life make for a much happier marriage of ideas, and that the free energy principle is best understood in ecological and enactive terms.
Abstract: In this paper, we argue for a theoretical separation of the free-energy principle from Helmholtzian accounts of the predictive brain. The free-energy principle is a theoretical framework capturing the imperative for biological self-organization in information-theoretic terms. The free-energy principle has typically been connected with a Bayesian theory of predictive coding, and the latter is often taken to support a Helmholtzian theory of perception as unconscious inference. If our interpretation is right, however, a Helmholtzian view of perception is incompatible with Bayesian predictive coding under the free-energy principle. We argue that the free energy principle and the ecological and enactive approach to mind and life make for a much happier marriage of ideas. We make our argument based on three points. First, we argue that the free energy principle applies to the whole animal–environment system, and not only to the brain. Second, we show that active inference, as understood by the free-energy principle, is incompatible with unconscious inference understood as analogous to scientific hypothesis-testing, the main tenet of a Helmholtzian view of perception. Third, we argue that the notion of inference at work in Bayesian predictive coding under the free-energy principle is too weak to support a Helmholtzian theory of perception. Taken together, these points imply that the free energy principle is best understood in ecological and enactive terms set out in this paper.

241 citations


Journal ArticleDOI
01 Feb 2018-Synthese
TL;DR: This paper draws upon two cases of interdisciplinary collaboration: those between ecologists and economists, and those between molecular biologists and systems biologists, to illustrate some of the cognitive barriers which have contributed to failures and difficulties of interactions between these fields.
Abstract: Research on interdisciplinary science has for the most part concentrated on the institutional obstacles that discourage or hamper interdisciplinary work, with the expectation that interdisciplinary interaction can be improved through institutional reform strategies, such as reform of peer review systems. However, institutional obstacles are not the only ones that confront interdisciplinary work. The design of policy strategies would benefit from more detailed investigation into the particular cognitive constraints, including the methodological and conceptual barriers, which also confront attempts to work across disciplinary boundaries. Lessons from cognitive science and anthropological studies of labs in sociology of science suggest that scientific practices may be very domain specific, where domain specificity is an essential aspect of science that enables researchers to solve complex problems in a cognitively manageable way. The limit or extent of domain specificity in scientific practice, and how it constrains interdisciplinary research, is not yet fully understood, which attests to an important role for philosophers of science in the study of interdisciplinary science. This paper draws upon two cases of interdisciplinary collaboration: those between ecologists and economists, and those between molecular biologists and systems biologists, to illustrate some of the cognitive barriers which have contributed to failures and difficulties of interactions between these fields. Each exemplifies some aspect of domain specificity in scientific practice and shows how such specificity may constrain interdisciplinary work.

132 citations


Journal ArticleDOI
01 Jan 2018-Synthese
TL;DR: It is asked how the concept of active inference under each model informs discussions of social cognition, and the alternative model of enactivist hermeneutics is explored.
Abstract: We distinguish between three philosophical views on the neuroscience of predictive models: predictive coding (associated with internal Bayesian models and prediction error minimization), predictive processing (associated with radical connectionism and ‘simple’ embodiment) and predictive engagement (associated with enactivist approaches to cognition). We examine the concept of active inference under each model and then ask how this concept informs discussions of social cognition. In this context we consider Frith and Friston’s proposal for a neural hermeneutics, and we explore the alternative model of enactivist hermeneutics.

128 citations


Journal ArticleDOI
01 Jun 2018-Synthese
TL;DR: Systematizing the theoretical virtues in this manner clarifies each virtue and suggests how they might have a coordinated and cumulative role in theory formation and evaluation across the disciplines—with allowance for discipline-specific modification.
Abstract: There are at least twelve major virtues of good theories: evidential accuracy, causal adequacy, explanatory depth, internal consistency, internal coherence, universal coherence, beauty, simplicity, unification, durability, fruitfulness, and applicability. These virtues are best classified into four classes: evidential, coherential, aesthetic, and diachronic. Each virtue class contains at least three virtues that sequentially follow a repeating pattern of progressive disclosure and expansion. Systematizing the theoretical virtues in this manner clarifies each virtue and suggests how they might have a coordinated and cumulative role in theory formation and evaluation across the disciplines—with allowance for discipline-specific modification. An informal and flexible logic of theory choice is in the making here. Evidential accuracy (empirical fit), according to my systematization, is not a largely isolated trait of good theories, as some (realists and antirealists) have made it out to be. Rather, it bears multifaceted relationships, constituting significant epistemic entanglements, with other theoretical virtues.

82 citations


Journal ArticleDOI
01 Jun 2018-Synthese
TL;DR: It is claimed that the free energy principle (FEP) should be preferred to the theory of AT, as classically formulated, and that the FEP and the recently formulated framework of autopoietic enactivism can be shown to be genuinely continuous on a number of central issues, thus raising the possibility of a joint venture when it comes to answering the life–mind continuity thesis.
Abstract: The life–mind continuity thesis is difficult to study, especially because the relation between life and mind is not yet fully understood, and given that there is still no consensus on what qualifies as life or on what defines mind. Rather than taking up the much more difficult task of addressing the many different ways of explaining how life relates to mind, and vice versa, this paper considers two influential accounts addressing how best to understand the life–mind continuity thesis: first, the theory of autopoiesis (AT) developed in biology and in enactivist theories of mind; and second, the recently formulated free energy principle in theoretical neurobiology, with roots in thermodynamics and statistical physics. This paper advances two claims. The first is that the free energy principle (FEP) should be preferred to the theory of AT, as classically formulated. The second is that the FEP and the recently formulated framework of autopoietic enactivism can be shown to be genuinely continuous on a number of central issues, thus raising the possibility of a joint venture when it comes to answering the life–mind continuity thesis.

75 citations


Journal ArticleDOI
01 Jun 2018-Synthese
TL;DR: It is considered how certain longstanding philosophical questions about mental representation may be answered on the assumption that cognitive and perceptual systems implement hierarchical generative models, such as those discussed within the prediction error minimization (PEM) framework.
Abstract: In this paper, we consider how certain longstanding philosophical questions about mental representation may be answered on the assumption that cognitive and perceptual systems implement hierarchical generative models, such as those discussed within the prediction error minimization (PEM) framework. We build on existing treatments of representation via structural resemblance, such as those in Gładziejewski (Synthese 193(2):559–582, 2016) and Gładziejewski and Miłkowski (Biol Philos, 2017), to argue for a representationalist interpretation of the PEM framework. We further motivate the proposed approach to content by arguing that it is consistent with approaches implicit in theories of unsupervised learning in neural networks. In the course of this discussion, we argue that the structural representation proposal, properly understood, has more in common with functional-role than with causal/informational or teleosemantic theories. In the remainder of the paper, we describe the PEM framework for approximate Bayesian inference in some detail, and discuss how structural representations might arise within the proposed Bayesian hierarchies. After explicating the notion of variational inference, we define a subjectively accessible measure of misrepresentation for hierarchical Bayesian networks by appeal to the Kullback–Leibler divergence between posterior generative and approximate recognition densities, and discuss a related measure of objective misrepresentation in terms of correspondence with the facts.
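
For readers unfamiliar with the measure invoked at the end of this abstract, the Kullback–Leibler divergence between two densities q and p has the standard definition below; the paper's specific construction over hierarchical Bayesian networks may differ in detail:

```latex
D_{\mathrm{KL}}(q \,\|\, p) = \int q(x) \ln \frac{q(x)}{p(x)} \, dx \;\ge\; 0,
\quad \text{with equality iff } q = p .
```

Read as a misrepresentation measure, zero divergence between the approximate recognition density and the posterior under the generative model means no (subjectively accessible) misrepresentation, and larger values mean a worse approximation.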

74 citations


Journal ArticleDOI
01 May 2018-Synthese
TL;DR: In the picture the authors develop, online cognitive function cannot be assigned to either the cortical or the sub-cortical component, but instead emerges from their tight co-ordination, generating a more truly ‘embodied’ vision of the predictive brain in action.
Abstract: Recent work in cognitive and computational neuroscience depicts the human cortex as a multi-level prediction engine. This ‘predictive processing’ framework shows great promise as a means of both understanding and integrating the core information processing strategies underlying perception, reasoning, and action. But how, if at all, do emotions and sub-cortical contributions fit into this emerging picture? The fit, we shall argue, is both profound and potentially transformative. In the picture we develop, online cognitive function cannot be assigned to either the cortical or the sub-cortical component, but instead emerges from their tight co-ordination. This tight co-ordination involves processes of continuous reciprocal causation that weave together bodily information and ‘top-down’ predictions, generating a unified sense of what’s out there and why it matters. The upshot is a more truly ‘embodied’ vision of the predictive brain in action.

72 citations


Journal ArticleDOI
Yanjing Wang
01 Oct 2018-Synthese
TL;DR: A sound and complete proof system is given to capture valid reasoning patterns, which highlights the compositional nature of “knowing how”, and the logical language is extended to handle knowing how to achieve a goal while maintaining other conditions.
Abstract: In this paper, we propose a decidable single-agent modal logic for reasoning about goal-directed “knowing how”, based on ideas from linguistics, philosophy, modal logic, and automated planning in AI. We first define a modal language to express “I know how to guarantee φ given ψ” with a semantics based not on standard epistemic models but on labeled transition systems that represent the agent’s knowledge of his own abilities. The semantics is inspired by conformant planning in AI. A sound and complete proof system is given to capture valid reasoning patterns, which highlights the compositional nature of “knowing how”. The logical language is further extended to handle knowing how to achieve a goal while maintaining other conditions.
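
To make the conformant-planning semantics concrete, here is a minimal Python sketch under a simplified reading on which “knowing how to guarantee φ given ψ” requires a single fixed action sequence that is executable from every ψ-state and always terminates in a φ-state. The encoding and names are illustrative, not taken from the paper:

```python
from itertools import product

def step(lts, states, action):
    """One conformant step: the action must be available in every state the
    agent might be in. lts maps (state, action) -> set of successor states."""
    if any((s, action) not in lts for s in states):
        return None  # plan is not strongly executable here
    return set().union(*(lts[(s, action)] for s in states))

def knows_how(lts, actions, psi_states, phi_states, max_len=3):
    """Kh(psi, phi): search for one plan that works from ALL psi-states."""
    for length in range(max_len + 1):
        for plan in product(actions, repeat=length):
            current = set(psi_states)
            for a in plan:
                current = step(lts, current, a)
                if current is None:
                    break
            if current is not None and current <= set(phi_states):
                return plan  # a witnessing plan
    return None

# Toy ability map: despite not knowing whether we start in s1 or s2, and
# despite non-determinism, the plan (a, b) is guaranteed to reach the goal g.
lts = {('s1', 'a'): {'t1'}, ('s2', 'a'): {'t1', 't2'},
       ('t1', 'b'): {'g'}, ('t2', 'b'): {'g'}}
print(knows_how(lts, ['a', 'b'], {'s1', 's2'}, {'g'}))  # ('a', 'b')
```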

68 citations


Journal ArticleDOI
17 Sep 2018-Synthese
TL;DR: It is argued that rather than simply implementing 80s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing ripe with philosophical and psychological significance, because it is significantly better suited to depict the generic mechanism responsible for this important kind of psychological processing in the brain.
Abstract: In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call “transformational abstraction”. Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to “nuisance variation” in input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that rather than simply implementing 80s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing ripe with philosophical and psychological significance.
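
To make the “combination of linear and non-linear processing” concrete, here is a minimal numpy sketch (my illustration, not the author’s model) of one convolution, ReLU, max-pool stage, the basic DCNN motif; max-pooling is what buys tolerance to a small translational “nuisance” shift:

```python
import numpy as np

def conv_relu_pool(image, kernel, pool=2):
    """One DCNN stage: linear filtering (valid convolution), non-linear
    rectification (ReLU), then max-pooling over pool x pool blocks."""
    kh, kw = kernel.shape
    H, W = image.shape
    fmap = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(fmap.shape[0]):
        for j in range(fmap.shape[1]):
            fmap[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    fmap = np.maximum(fmap, 0)  # ReLU: keep only positive evidence
    ph, pw = fmap.shape[0] // pool, fmap.shape[1] // pool
    return fmap[:ph * pool, :pw * pool].reshape(ph, pool, pw, pool).max(axis=(1, 3))

# A dark-to-light edge detector responds whether the edge sits at column 3
# or column 4: pooling absorbs the one-pixel "nuisance" shift entirely.
edge = np.array([[-1.0, 1.0], [-1.0, 1.0]])
img_a = np.zeros((6, 6)); img_a[:, 3:] = 1.0   # edge at column 3
img_b = np.zeros((6, 6)); img_b[:, 4:] = 1.0   # edge shifted one pixel
print(np.array_equal(conv_relu_pool(img_a, edge), conv_relu_pool(img_b, edge)))  # True
```

In the paper’s terms, stacking such stages iteratively converts representations into formats that are increasingly tolerant to nuisance variation.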

67 citations


Journal ArticleDOI
01 Aug 2018-Synthese
TL;DR: This paper defends a conception of practical reasons that is called ‘Factualism’, which says that all reasons are facts, and argues that this conception provides plausible answers to the second and third questions above.
Abstract: What kind of thing is a reason for action? What is it to act for a reason? And what is the connection between acting for a reason and rationality? There is controversy about the many issues raised by these questions. In this paper I shall answer the first question with a conception of practical reasons that I call ‘Factualism’, which says that all reasons are facts. I defend this conception against its main rival, Psychologism, which says that practical reasons are mental states or mental facts, and also against a variant of Factualism that says that some practical reasons are facts and others are false beliefs. I argue that the conception of practical reasons defended here (i) provides plausible answers to the second and third questions above; and (ii) gives a more unified and satisfactory picture of practical reasons than those offered by its rivals.

Journal ArticleDOI
01 May 2018-Synthese
TL;DR: This paper examined the reconstruction of the ‘Sherlock Holmes sense of deduction’ proposed jointly by M.B. and J. Hintikka, and argued that the latter better supports the claim that the IMI provides a ‘logic of discovery’.
Abstract: This paper examines critically the reconstruction of the ‘Sherlock Holmes sense of deduction’ proposed jointly by M.B. Hintikka (1939–1987) and J. Hintikka (1929–2016) in the 1980s, and its successor, the interrogative model of inquiry (IMI) developed by J. Hintikka and his collaborators in the 1990s. The Hintikkas’ model explicitly used game theory in order to formalize a naturalistic approach to inquiry, but the IMI abandoned both the game-theoretic formalism and the naturalistic approach. It is argued that the latter better supports the claim that the IMI provides a ‘logic of discovery’, and safeguards its empirical adequacy. Technical changes necessary to this interpretation are presented, and examples are discussed, both formal and informal, that are better analyzed when these changes are in place. The informal examples are borrowed from Conan Doyle’s The Case of Silver Blaze, a favorite of M.B. and J. Hintikka.

Journal ArticleDOI
01 Apr 2018-Synthese
TL;DR: It is shown how network approaches support and extend traditional mechanistic strategies but also offer novel strategies for dealing with biological complexity.
Abstract: The increasing application of network models to interpret biological systems raises a number of important methodological and epistemological questions. What novel insights can network analysis provide in biology? Are network approaches an extension of or in conflict with mechanistic research strategies? When and how can network and mechanistic approaches interact in productive ways? In this paper we address these questions by focusing on how biological networks are represented and analyzed in a diverse class of case studies. Our examples span from the investigation of organizational properties of biological networks using tools from graph theory to the application of dynamical systems theory to understand the behavior of complex biological systems. We show how network approaches support and extend traditional mechanistic strategies but also offer novel strategies for dealing with biological complexity.

Journal ArticleDOI
Colin Klein
01 Jun 2018-Synthese
TL;DR: Real-world, real-time systems may embody motivational states in a variety of ways consistent with idealized principles like FEP, including ways that are intuitively embodied and extended, which may allow predictive coding theorists to reconcile their account with embodied principles, even if it ultimately undermines loftier ambitions.
Abstract: The so-called “dark room problem” makes vivid the challenges that purely predictive models face in accounting for motivation. I argue that the problem is a serious one. Proposals for solving the dark room problem via predictive coding architectures are either empirically inadequate or computationally intractable. The Free Energy principle might avoid the problem, but only at the cost of setting itself up as a highly idealized model, which is then literally false to the world. I draw at least one optimistic conclusion, however. Real-world, real-time systems may embody motivational states in a variety of ways consistent with idealized principles like FEP, including ways that are intuitively embodied and extended. This may allow predictive coding theorists to reconcile their account with embodied principles, even if it ultimately undermines loftier ambitions.

Journal ArticleDOI
01 Jan 2018-Synthese
TL;DR: In this article, network science is discussed from a methodological perspective, and two central theses are defended: that network science exploits the very properties that make a system complex, and that network representations are particularly helpful in explaining the properties of non-decomposable systems.
Abstract: In this article, network science is discussed from a methodological perspective, and two central theses are defended. The first is that network science exploits the very properties that make a system complex. Rather than using idealization techniques to strip those properties away, as is standard practice in other areas of science, network science brings them to the fore, and uses them to furnish new forms of explanation. The second thesis is that network representations are particularly helpful in explaining the properties of non-decomposable systems. Where part-whole decomposition is not possible, network science provides a much-needed alternative method of compressing information about the behavior of complex systems, and does so without succumbing to problems associated with combinatorial explosion. The article concludes with a comparison between the uses of network representation analyzed in the main discussion, and an entirely distinct use of network representation that has recently been discussed in connection with mechanistic modeling.

Journal ArticleDOI
01 Jan 2018-Synthese
TL;DR: This work suggests that people who reject the fact that the Earth’s climate is changing due to greenhouse gas emissions (or any other body of well-established scientific knowledge) oppose whatever inconvenient finding they are confronting in piece-meal fashion, rather than systematically, and without considering the implications of this rejection to the rest of the relevant scientific theory and findings.
Abstract: Science strives for coherence. For example, the findings from climate science form a highly coherent body of knowledge that is supported by many independent lines of evidence: greenhouse gas (GHG) emissions from human economic activities are causing the global climate to warm and unless GHG emissions are drastically reduced in the near future, the risks from climate change will continue to grow and major adverse consequences will become unavoidable. People who oppose this scientific body of knowledge because the implications of cutting GHG emissions—such as regulation or increased taxation—threaten their worldview or livelihood cannot provide an alternative view that is coherent by the standards of conventional scientific thinking. Instead, we suggest that people who reject the fact that the Earth’s climate is changing due to greenhouse gas emissions (or any other body of well-established scientific knowledge) oppose whatever inconvenient finding they are confronting in piece-meal fashion, rather than systematically, and without considering the implications of this rejection to the rest of the relevant scientific theory and findings. Hence, claims that the globe “is cooling” can coexist with claims that the “observed warming is natural” and that “the human influence does not matter because warming is good for us.” Coherence between these mutually contradictory opinions can only be achieved at a highly abstract level, namely that “something must be wrong” with the scientific evidence in order to justify a political position against climate change mitigation. This high-level coherence accompanied by contradictory subordinate propositions is a known attribute of conspiracist ideation, and conspiracism may be implicated when people reject well-established scientific propositions.

Journal ArticleDOI
01 Sep 2018-Synthese
TL;DR: It is argued that critically rethinking the nature and uses of definitions can provide new insights into the epistemic roles of definitions of life for different research practices, and the pragmatic utility of what are called operational definitions that serve as theoretical and epistemic tools in scientific practice.
Abstract: Despite numerous and increasing attempts to define what life is, there is no consensus on necessary and sufficient conditions for life. Accordingly, some scholars have questioned the value of definitions of life and encouraged scientists and philosophers alike to discard the project. As an alternative to this pessimistic conclusion, we argue that critically rethinking the nature and uses of definitions can provide new insights into the epistemic roles of definitions of life for different research practices. This paper examines the possible contributions of definitions of life in scientific domains where such definitions are used most (e.g., Synthetic Biology, Origins of Life, Alife, and Astrobiology). Rather than as classificatory tools for demarcation of natural kinds, we highlight the pragmatic utility of what we call operational definitions that serve as theoretical and epistemic tools in scientific practice. In particular, we examine contexts where definitions integrate criteria for life into theoretical models that involve or enable observable operations. We show how these definitions of life play important roles in influencing research agendas and evaluating results, and we argue that to discard the project of defining life is neither sufficiently motivated, nor possible without dismissing important theoretical and practical research.

Journal ArticleDOI
01 Jan 2018-Synthese
TL;DR: Topological explanations are pervasive both in the study of networks—whose importance has been increasingly acknowledged at each level of the biological hierarchy—and in contexts where the notion of selective neutrality is crucial; this allows the paper to capture the difference between mechanisms and topological explanations in terms of practical modelling practices.
Abstract: Besides mechanistic explanations of phenomena, which have been seriously investigated in the last decade, biology and ecology also include explanations that pinpoint specific mathematical properties as explanatory of the explanandum under focus. Among these structural explanations, one finds topological explanations, and recent science pervasively relies on them. This reliance is especially due to the necessity to model large sets of data with no practical possibility to track the proper activities of all the numerous entities. The paper first defines topological explanations and then explains why topological explanations and mechanisms are different in principle. Then it shows that they are pervasive both in the study of networks—whose importance has been increasingly acknowledged at each level of the biological hierarchy—and in contexts where the notion of selective neutrality is crucial; this allows me to capture the difference between mechanisms and topological explanations in terms of practical modelling practices. The rest of the paper investigates how in practice mechanisms and topologies are combined. They may be articulated in theoretical structures and explanatory strategies, first through a relation of constraint, second in interlevel theories (Sect. 3), or they may condition each other (Sect. 4). Finally, I explore how a particular model can integrate mechanistic information, by focusing on the recent practice of merging networks in ecology and its consequences upon multiscale modelling (Sect. 5).

Journal ArticleDOI
01 Aug 2018-Synthese
TL;DR: This paper challenges the idea that, if the content of experience is rich, then perception is cognitively penetrable and argues that the very same criteria that help us vindicate the truly sensory nature of the author’s rich experiences speak against their being cognitively penetrable.
Abstract: According to so-called “thin” views about the content of experience, we can only visually experience low-level features such as colour, shape, texture or motion. According to so-called “rich” views, we can also visually experience some high-level properties, such as being a pine tree or being threatening. One of the standard objections against rich views is that high-level properties can only be represented at the level of judgment. In this paper, I first challenge this objection by relying on some recent studies in social vision. Secondly, I tackle a different but related issue, namely, the idea that, if the content of experience is rich, then perception is cognitively penetrable. Against this thesis, I argue that the very same criteria that help us vindicate the truly sensory nature of our rich experiences speak against their being cognitively penetrable.

Journal ArticleDOI
01 Aug 2018-Synthese
TL;DR: It is claimed that the objection to the mechanistic view of concrete computation can be put to rest once the account is appropriately amended: computational individuation is indeed functional, while mechanistic explanation plays a role in accounting for computational implementation.
Abstract: I examine a major objection to the mechanistic view of concrete computation, stemming from an apparent tension between the abstract nature of computational explanation and the tenets of the mechanistic framework: while computational explanation is medium-independent, the mechanistic framework insists on the importance of providing some degree of structural detail about the systems targeted by the explanation. I show that a common reply to the objection, i.e. that mechanistic explanation of computational systems involves only weak structural constraints, is not enough to save the standard mechanistic view of computation—it trivialises the appeal to mechanism, and thus makes the account collapse into a purely functional view. I claim, however, that the objection can be put to rest once the account is appropriately amended: computational individuation is indeed functional, while mechanistic explanation plays a role in accounting for computational implementation. Since individuation and implementation are crucial elements in a satisfying account of computation in physical systems, mechanism keeps its central importance in the theory of concrete computation. Finally, I argue that my version of the mechanistic view helps to provide a convincing reply to a powerful objection against non-semantic theories of concrete computation: the argument from the multiplicity of computations.

Journal ArticleDOI
01 Apr 2018-Synthese
TL;DR: This paper argues that the ontological bases of their projectibility are the causal properties and relations associated with the natural kinds themselves, and offers a unified causal account of natural kinds.
Abstract: In this paper I offer a unified causal account of natural kinds. Using as a starting point the widely held view that natural kind terms or predicates are projectible, I argue that the ontological bases of their projectibility are the causal properties and relations associated with the natural kinds themselves. Natural kinds are not just concatenations of properties but ordered hierarchies of properties, whose instances are related to one another as causes and effects in recurrent causal processes. The resulting account of natural kinds as clusters of core causal properties that give rise to clusters of derivative properties enables us to distinguish genuine natural kinds from non-natural kinds. For instance, it enables us to say why some of the purely conventional categories derived from the social domain do not correspond to natural kinds, though other social categories may.

Journal ArticleDOI
01 May 2018-Synthese
TL;DR: It is argued that one important aspect of the “cognitive neuroscience revolution” identified by Boone and Piccinini (Synthese 193(5):1509–1534) is a dramatic shift away from thinking of cognitive representations as arbitrary symbols towards thinking of them as icons that replicate structural characteristics of their targets.
Abstract: We argue that one important aspect of the “cognitive neuroscience revolution” identified by Boone and Piccinini (Synthese 193(5):1509–1534, doi:10.1007/s11229-015-0783-4, 2015) is a dramatic shift away from thinking of cognitive representations as arbitrary symbols towards thinking of them as icons that replicate structural characteristics of their targets. We argue that this shift has been driven both “from below” and “from above”—that is, from a greater appreciation of what mechanistic explanation of information-processing systems involves (“from below”), and from a greater appreciation of the problems solved by bio-cognitive systems, chiefly regulation and prediction (“from above”). We illustrate these arguments by reference to examples from cognitive neuroscience, principally representational similarity analysis and the emergence of (predictive) dynamical models as a central postulate in neurocognitive research.

Journal ArticleDOI
01 Feb 2018-Synthese
TL;DR: This article analyses the epistemological implications of online collaboration between experts and amateurs on scientific research, with a focus on Zooniverse, the world’s largest citizen science web portal, and highlights the essential role played by technologically mediated social interaction in contemporary knowledge production.
Abstract: Recent years have seen a surge in online collaboration between experts and amateurs on scientific research. In this article, we analyse the epistemological implications of these crowdsourced projects, with a focus on Zooniverse, the world’s largest citizen science web portal. We use quantitative methods to evaluate the platform’s success in producing large volumes of observation statements and high impact scientific discoveries relative to more conventional means of data processing. Through empirical evidence, Bayesian reasoning, and conceptual analysis, we show how information and communication technologies enhance the reliability, scalability, and connectivity of crowdsourced e-research, giving online citizen science projects powerful epistemic advantages over more traditional modes of scientific investigation. These results highlight the essential role played by technologically mediated social interaction in contemporary knowledge production. We conclude by calling for an explicitly sociotechnical turn in the philosophy of science that combines insights from statistics and logic to analyse the latest developments in scientific research.
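
The reliability claim lends itself to a toy Bayesian illustration (my sketch; the prior, the accuracy figure, and the conditional-independence assumption are all idealizations, not numbers from the paper): individually unreliable volunteers, aggregated, can yield a highly reliable verdict.

```python
def posterior(yes_votes, total_votes, prior=0.5, acc=0.7):
    """P(target present | votes) for volunteers who classify correctly with
    probability acc, independently of one another given the truth."""
    no_votes = total_votes - yes_votes
    like_present = acc ** yes_votes * (1 - acc) ** no_votes
    like_absent = (1 - acc) ** yes_votes * acc ** no_votes
    return prior * like_present / (prior * like_present + (1 - prior) * like_absent)

print(posterior(1, 1))    # a single 'yes' vote:   0.70
print(posterior(8, 10))   # 8 of 10 'yes' votes:  ~0.99
```

This is one formal sense in which crowdsourced classification can outperform any individual classifier, provided the platform keeps the volunteers’ errors close enough to independent.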

Journal ArticleDOI
01 Jun 2018-Synthese
TL;DR: The argument of this paper is that research on enculturation and recent work on predictive processing are complementary and that predictive processing needs to be complemented by additional considerations and conceptual tools to realize its full explanatory potential.
Abstract: Many of our cognitive capacities are the result of enculturation. Enculturation is the temporally extended transformative acquisition of cognitive practices in the cognitive niche. Cognitive practices are embodied and normatively constrained ways to interact with epistemic resources in the cognitive niche in order to complete a cognitive task. The emerging predictive processing perspective offers new functional principles and conceptual tools to account for the cerebral and extra-cerebral bodily components that give rise to cognitive practices. According to this emerging perspective, many cases of perception, action, and cognition are realized by the on-going minimization of prediction error. Predictive processing provides us with a mechanistic perspective that helps investigate the functional details of the acquisition of cognitive practices. The argument of this paper is that research on enculturation and recent work on predictive processing are complementary. The main reason is that predictive processing operates at a sub-personal level and on a physiological time scale of explanation only. A complete account of enculturated cognition needs to take additional levels and temporal scales of explanation into account. This complementarity assumption leads to a new framework—enculturated predictive processing—that operates on multiple levels and temporal scales for the explanation of the enculturated predictive acquisition of cognitive practices. Enculturated predictive processing is committed to explanatory pluralism. That is, it subscribes to the idea that we need multiple perspectives and explanatory strategies to account for the complexity of enculturation. The upshot is that predictive processing needs to be complemented by additional considerations and conceptual tools to realize its full explanatory potential.

Journal ArticleDOI
01 Feb 2018-Synthese
TL;DR: This paper presents a new proposal for defining actual causation, i.e., the problem of deciding if one event caused another, within the popular counterfactual tradition initiated by Lewis, which is characterised by attributing a fundamental role to counterfactual dependence.
Abstract: In this paper we present a new proposal for defining actual causation, i.e., the problem of deciding if one event caused another. We do so within the popular counterfactual tradition initiated by Lewis, which is characterised by attributing a fundamental role to counterfactual dependence. Unlike the currently prominent definitions, our approach proceeds from the ground up: we start from basic principles, and construct a definition of causation that satisfies them. We define the concepts of counterfactual dependence and production, and put forward principles such that dependence is an unnecessary but sufficient condition for causation, whereas production is an insufficient but necessary condition. The resulting definition of causation is a suitable compromise between dependence and production. Every principle is introduced by means of a paradigmatic example of causation. We illustrate some of the benefits of our approach with two examples that have spelled trouble for other accounts. We make all of this formally precise using structural equations, which we extend with a timing over all events.
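
A minimal structural-equations sketch of the machinery described here (the solver and the stock preemption example are generic illustrations, not the authors’ own definitions):

```python
def solve(equations, exogenous, interventions=None):
    """Evaluate a recursive structural-equations model. equations maps each
    endogenous variable, listed in causal order, to a function of the current
    assignment; interventions implements the do-operator, do(X = x)."""
    interventions = interventions or {}
    vals = dict(exogenous)
    for var, f in equations.items():
        vals[var] = interventions.get(var, f(vals))
    return vals

# Late preemption: Suzy (ST) and Billy (BT) both throw; Suzy's rock hits
# first (SH), so Billy's never does (BH); the bottle shatters (BS).
eqs = {
    'SH': lambda v: v['ST'],
    'BH': lambda v: v['BT'] and not v['SH'],
    'BS': lambda v: v['SH'] or v['BH'],
}
ctx = {'ST': True, 'BT': True}
print(solve(eqs, ctx)['BS'])                               # True
print(solve(eqs, ctx, interventions={'ST': False})['BS'])  # still True
```

The shattering does not counterfactually depend on Suzy’s throw (Billy’s rock would have shattered the bottle), yet Suzy clearly caused it. Cases like this motivate treating dependence as sufficient but unnecessary and adding a production-style necessary condition, as the abstract describes.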

Journal ArticleDOI
01 Feb 2018-Synthese
TL;DR: This paper considers the signaling game introduced by Lewis (Convention 1969) as a starting point and finds that in experimental settings, small groups can quickly develop conventions of signal meaning in these games.
Abstract: In this paper we use an experimental approach to investigate how linguistic conventions can emerge in a society without explicit agreement. As a starting point we consider the signaling game introduced by Lewis (Convention 1969). We find that in experimental settings, small groups can quickly develop conventions of signal meaning in these games. We also investigate versions of the game where the theoretical literature indicates that meaning will be less likely to arise—when there are more than two states for actors to transfer meaning about and when some states are more likely than others. In these cases, we find that actors are less likely to arrive at strategies where signals have clear conventional meaning. We conclude with a proposal for extending the use of the methodology of experimental economics in experimental philosophy.
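
For comparison with these experimental findings, the standard theoretical model of convention emergence, the two-state Lewis signaling game under simple Roth–Erev reinforcement, can be simulated in a few lines (parameter choices are illustrative):

```python
import random

# Two equiprobable states, two signals, two acts. Sender and receiver keep
# weights ('urns') per contingency and reinforce whatever just succeeded.
sender = {s: [1.0, 1.0] for s in (0, 1)}    # state  -> signal weights
receiver = {m: [1.0, 1.0] for m in (0, 1)}  # signal -> act weights

def draw(weights):
    return random.choices((0, 1), weights=weights)[0]

for _ in range(5000):
    state = random.choice((0, 1))
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:                  # coordination succeeded
        sender[state][signal] += 1    # reinforce the signal used...
        receiver[signal][act] += 1    # ...and the act chosen

print(sender, receiver)
```

Runs typically converge on one of the two perfect signaling systems; which signal ends up ‘meaning’ which state is arbitrary, which is exactly the sense in which the resulting meaning is conventional.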

Journal ArticleDOI
01 Feb 2018-Synthese
TL;DR: This paper examines the five most sensible explications in the literature, and concludes that none of them actually circumvents the problem of intractability, and describes how rational explanations could satisfy the tractability constraint.
Abstract: The plausibility of so-called ‘rational explanations’ in cognitive science is often contested on the grounds of computational intractability. Some have argued that intractability is a pseudoproblem, however, because cognizers do not actually perform the rational calculations posited by rational models; rather, they only behave as if they do. Whether or not the problem of intractability is dissolved by this gambit critically depends, inter alia, on the semantics of the ‘as if’ connective. First, this paper examines the five most sensible explications in the literature, and concludes that none of them actually circumvents the problem. Hence, rational ‘as if’ explanations must obey the minimal computational constraint of tractability. Second, this paper describes how rational explanations could satisfy the tractability constraint. Our approach suggests a computationally unproblematic interpretation of ‘as if’ that is compatible with the original conception of rational analysis.

Journal ArticleDOI
01 Jun 2018-Synthese
TL;DR: It is proposed that movement sensations in dreams are associated with a basic and developmentally early kind of bodily self-sampling, which affords a central role to active inference and can then be broadened to explain other aspects of self- and world-simulation in dreams.
Abstract: In this paper, I discuss the relationship between bodily experiences in dreams and the sleeping, physical body. I question the popular view that dreaming is a naturally and frequently occurring real-world example of cranial envatment. This view states that dreams are functionally disembodied states: in a majority of dreams, phenomenal experience, including the phenomenology of embodied selfhood, unfolds completely independently of external and peripheral stimuli and outward movement. I advance an alternative and more empirically plausible view of dreams as weakly phenomenally-functionally embodied states. The view predicts that bodily experiences in dreams can be placed on a continuum with bodily illusions in wakefulness. It also acknowledges that there is a high degree of variation across dreams and different sleep stages in the degree of causal coupling between dream imagery, sensory input, and outward motor activity. Furthermore, I use the example of movement sensations in dreams and their relation to outward muscular activity to develop a predictive processing account. I propose that movement sensations in dreams are associated with a basic and developmentally early kind of bodily self-sampling. This account, which affords a central role to active inference, can then be broadened to explain other aspects of self- and world-simulation in dreams. Dreams are world-simulations centered on the self, and important aspects of both self- and world-simulation in dreams are closely linked to bodily self-sampling, including muscular activity, illusory own-body perception, and vestibular orienting in sleep. This is consistent with cognitive accounts of dream generation, in which long-term beliefs and expectations, as well as waking concerns and memories play an important role. What I add to this picture is an emphasis on the real-body basis of dream imagery. This offers a novel perspective on the formation of dream imagery and suggests new lines of research.

Journal ArticleDOI
01 Nov 2018-Synthese
TL;DR: This paper defends the naïve thesis that the method of experiment has per se an epistemic superiority over the method of computer simulation, a view that has been rejected by some philosophers writing about simulation and whose grounds have been hard to pin down by its defenders.
Abstract: This paper defends the naive thesis that the method of experiment has per se an epistemic superiority over the method of computer simulation, a view that has been rejected by some philosophers writing about simulation, and whose grounds have been hard to pin down by its defenders. I further argue that this superiority does not come from the experiment’s object being materially similar to the target in the world that the investigator is trying to learn about, as both sides of the dispute over the epistemic superiority thesis have assumed. The superiority depends on features of the question and on a property of natural kinds that has been mistaken for material similarity. Seeing this requires holding other things equal in the comparison of the two methods, thereby exposing that, under the conditions that will be specified, the simulation is necessarily epistemically one step behind the corresponding experiment. Practical constraints like feasibility and morality mean that scientists do not often face an other-things-equal comparison when they choose between experiment and simulation. Nevertheless, I argue, awareness of this superiority and of the general distinction between experiment and simulation is important for maintaining motivation to seek answers to new questions.

Journal ArticleDOI
Collin Rice
01 Jun 2018-Synthese
TL;DR: This paper argues against various attempts to justify idealizations in scientific models by showing that they are harmless and isolable distortions of irrelevant features and proposes a view in which idealized models are characterized as providing holistically distorted representations of their target system(s).
Abstract: In this paper, I first argue against various attempts to justify idealizations in scientific models that explain by showing that they are harmless and isolable distortions of irrelevant features. In response, I propose a view in which idealized models are characterized as providing holistically distorted representations of their target system(s). I then suggest an alternative way that idealized modeling can be justified by appealing to universality.