
Showing papers in "Philosophy and Phenomenological Research in 2019"


Journal ArticleDOI
TL;DR: In this paper, a unified and uniform account of grounding and essence is proposed, one which understands them both in terms of a generalized notion of identity examined in recent work by Fabrice Correia, Cian Dorr, Agustín Rayo, and others.
Abstract: Recent metaphysics has turned its focus to two notions that are—as well as having a common Aristotelian pedigree—widely thought to be intimately related: grounding and essence. Yet how, exactly, the two are related remains opaque. We develop a unified and uniform account of grounding and essence, one which understands them both in terms of a generalized notion of identity examined in recent work by Fabrice Correia, Cian Dorr, Agustín Rayo, and others. We argue that the account comports with antecedently plausible principles governing grounding, essence, and identity taken individually, and illuminates how the three interact. We also argue that the account compares favorably to an alternative unification of grounding and essence recently proposed by Kit Fine. Recent metaphysics has turned its focus to two notions that are—as well as having a common Aristotelian pedigree—widely thought to be intimately related: grounding (when some phenomenon non-causally ‘derives’ from another) and essence (when some phenomenon is in the ‘nature’ of another). However, how they’re related remains quite opaque.1 We aim to clarify their link by proposing a unified and uniform account of both notions that analyzes them in terms of a third: what we call, following Linnebo (2014), generalized identity. Along with the intrinsic desirability of accounting for either notion alone (which has proven elusive), our proposal illuminates how the two interact by means of a single, relatively well-behaved conceptual tool. What do we mean by “generalized” identity? Objectual identities (e.g. “Hesperus is Phosphorus”) are familiar, and display a canonical form: an identity-indicating
* This article is the product of full and equal collaboration between its authors; the order of authorship is alphabetical.
1 How grounding and essence interact is explicitly taken up in Audi (2012; 2015), Carnino (2014), Correia (2005; 2013), Dasgupta (2014; 2016), Fine (2012; 2015), Guigon (forthcoming), Greenberg (2014), Kment (2014), Koslicki (2012; 2015), Rosen (2012; 2015), Skiles (2015), Trogdon (2015), and Zylstra (forthcoming).

64 citations


Journal ArticleDOI
TL;DR: It is argued that intentions and motor representations have different representational formats, and that motor representations ‘ground the directedness of actions to outcomes’ but it is not clear how they do so.
Abstract: In bodily intentional action, an agent exercises control over her bodily behavior. An important part of the explanation of this involves a mental state of commitment to an action plan—that is, the agent’s intention. The agent’s intention (or its acquisition) initiates the action, and the continuance of the intention throughout the unfolding action plays important causal roles in sustaining and guiding the action to completion. But the agent’s intention is not the only mental state operative in bodily intentional action. Recent work has emphasized important roles for lower-level states as well: so-called motor representations (Decety et al. 1994, Pacherie 2008). These lower-level states specify movement details and movement outcomes in ways that respect fine-grained biomechanical and temporal constraints upon intention satisfaction. Butterfill and Sinigaglia (2014) have argued that in so doing motor representations are far from “philosophically irrelevant enabling conditions” (120). Rather, motor representations ‘ground the directedness of actions to outcomes’ (124). But, according to Butterfill and Sinigaglia, it is not clear how they do so. For they argue that intentions and motor representations have different representational formats. Intentions have a propositional format, and as such integrate with states and processes involved in practical reasoning. Motor representations have a “distinctively motor, non-propositional format” (120). This generates a problem. Butterfill and Sinigaglia explain:

49 citations


Journal ArticleDOI
TL;DR: In this paper, a meta-ethical argument for two-boxing in Newcomb's problem is presented, one that eschews the causal dominance principle in favor of a principle linking rational choice to guidance and actual value maximization.
Abstract: The crucial premise of the standard argument for two-boxing in Newcomb's problem, a causal dominance principle, is false. We present some counterexamples. We then offer a metaethical explanation for why the counterexamples arise. Our explanation reveals a new and superior argument for two-boxing, one that eschews the causal dominance principle in favor of a principle linking rational choice to guidance and actual value maximization.
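For readers unfamiliar with the setup, the causal dominance reasoning the abstract targets can be made visible with the conventional Newcomb payoffs (the standard $1,000,000 / $1,000 figures; these are the usual illustration, not taken from this paper):

```latex
% Standard Newcomb payoff matrix (conventional figures; assumed, not from the paper).
% The opaque box contains $1M iff the predictor foresaw one-boxing;
% the transparent box always contains $1,000.
\begin{tabular}{l|cc}
            & predicted one-box      & predicted two-box \\ \hline
one-box     & \$1{,}000{,}000        & \$0               \\
two-box     & \$1{,}001{,}000        & \$1{,}000         \\
\end{tabular}
```

In each column two-boxing yields exactly $1,000 more than one-boxing, whatever the predictor did; that is the dominance premise the authors' counterexamples are said to undermine.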

32 citations


Journal ArticleDOI
TL;DR: The authors argue that it will be more difficult for the uniquer than for the permissivist to explain why, if your aim is accuracy, you would want to be rational.
Abstract: The inspiration for this paper came from a number of recent arguments against permissivism (the denial of UNIQUENESS): arguments by Horowitz (2014), Dogramaci and Horowitz (forthcoming), Greco and Hedden (forthcoming) and Levinstein (forthcoming). These arguments all take a somewhat different tack, but what they have in common is that they present the permissivist with a challenge: roughly, how can the permissivist explain why, if you’re aiming to be accurate, you’d want to be rational? I won’t respond to these arguments directly. Instead, my aim is to turn the challenge on its head and argue that, in fact, it will be more difficult for the uniquer than for the permissivist to explain why, if your aim is accuracy, you’d want to be rational. Before presenting the argument, I’d like to flag two assumptions that I’ll be making: INTERNALISM: What it is rational for an agent to believe supervenes on her non-factive mental states.
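Since the abstract glosses permissivism only as "the denial of UNIQUENESS", it may help to state that thesis in its standard form (a common formulation in this literature, not quoted from the paper):

```latex
% UNIQUENESS (standard formulation; assumed, not quoted from the paper):
% for any total body of evidence E and proposition p, exactly one doxastic
% attitude d (believe, disbelieve, suspend) is rationally permitted.
\forall E\,\forall p\;\exists!\, d\;\ \mathrm{Permitted}(d, p, E)
```

Permissivism denies this: for some evidence E and proposition p, more than one doxastic attitude toward p is rationally permitted.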

27 citations


Journal ArticleDOI
TL;DR: In this article, a bridge principle, i.e. a general principle articulating a substantive and systematic link between logical entailment and norms of reasoning, is developed in response to the skeptical challenge of Gilbert Harman.
Abstract: Logic, the tradition has it, is normative for reasoning. But is that really so? And if so, in what sense is logic normative for reasoning? As Gilbert Harman has reminded us, devising a logic and devising a theory of reasoning are two separate enterprises. Hence, logic's normative authority cannot reside in the fact that principles of logic just are norms of reasoning. Once we cease to identify the two, we are left with a gap. To bridge the gap one would need to produce what John MacFarlane has appropriately called a 'bridge principle', i.e. a general principle articulating a substantive and systematic link between logical entailment and norms of reasoning. This is Harman's skeptical challenge. In this paper, I argue that Harman's skeptical challenge can be met. I show how candidate bridge principles can be systematically generated and evaluated against a set of well-motivated desiderata. Moreover, I argue that bridge principles advanced by MacFarlane himself and others, for all their merit, fail to address the problem originally set forth by Harman and so do not meet the skeptical challenge. Finally, I develop a bridge principle that meets Harman's requirements as well as being substantive.

22 citations


Journal ArticleDOI
TL;DR: In this article, the putative theoretical virtue of strength, as it might be used in abductive arguments to the correct logic, is examined, and it is argued that logical strength is neither a virtue nor a vice, so that logically weaker theories are not thereby worse or better than logically stronger ones.
Abstract: This paper is about the putative theoretical virtue of strength, as it might be used in abductive arguments to the correct logic in the epistemology of logic. It argues for three theses. The first is that the well-defined property of logical strength is neither a virtue nor a vice, so that logically weaker theories are not—all other things being equal—worse or better theories than logically stronger ones. The second thesis is that logical strength does not entail the looser characteristic of scientific strength, and the third is that many modern logics are on a par—or can be made to be on a par—with respect to scientific strength.

21 citations




Journal ArticleDOI
TL;DR: This article pointed out that indexicals seem to be interpreted in the same way no matter how they are embedded, so that if David utters a sentence like (1), “I” picks out David (rather than Otto):
Abstract: In his seminal work on context-sensitivity, Kaplan (1989) famously claimed that monsters—operators that (in Kaplan’s framework) shift the context, and so “control the character of indexicals within [their] scope”—do not exist in English and “could not be added to it” (1989: 510). Kaplan pointed out that indexicals (like the English words “I” and “you”) seem to be interpreted in the same way no matter how they are embedded, so that (for example) if David utters a sentence like (1), “I” picks out David (rather than Otto):

18 citations


Journal ArticleDOI
TL;DR: In this article, the authors focus on the individual and her response to information, rather than the social structure within which the individual is located, and identify a problem with the way in which unjust social structures in particular "gerrymander" the regularities an individual is exposed to, and by extension the priors their visual system draws on.
Abstract: Visual perception relies on stored information and environmental associations to arrive at a determinate representation of the world. This opens up the disturbing possibility that our visual experiences could themselves be subject to a kind of racial bias, simply in virtue of accurately encoding previously encountered environmental regularities. This possibility raises the following question: what, if anything, is wrong with beliefs grounded upon these prejudicial experiences? They are consistent with a range of epistemic norms, including evidentialist and reliabilist standards for justification. I argue that we will struggle to locate a flaw with these sorts of perceptual beliefs so long as we focus our analysis at the level of the individual and her response to information. We should instead broaden our analysis to include the social structure within which the individual is located. Doing so lets us identify a problem with the way in which unjust social structures in particular “gerrymander” the regularities an individual is exposed to, and by extension the priors their visual system draws on. I argue that in this way, social structures can cap perceptual skill.

18 citations



Journal ArticleDOI
TL;DR: The project of social ontology is built on the observation that social facts are not "brute" facts in nature: facts such as Tufts being a university obtain, at least in part, in virtue of various facts about people.
Abstract: The project of social ontology is built on the observation that social facts are not “brute” facts in nature. The fact that Tufts is a university, that the Federal Reserve is raising interest rates, that the word ‘Aristotle’ refers to Aristotle, and that Mario Batali is a restaurateur, are all the case—at least in part—in virtue of various facts about people. Theories of social ontology identify, implicitly or explicitly, some cohesive set of social facts or objects such as “institutional facts,” “semantic facts,” “artifacts,” etc. For that set, they work to provide an account of the other facts in virtue of which social facts are the case, or in virtue of which social objects exist. (Epstein 2013: 54)

Journal ArticleDOI
TL;DR: Since you can tell a zebra from the way it looks, you come to know the proposition that it is a zebra (C1), competently deduce the proposition that it is not a cleverly disguised mule (C2), and thereupon come to believe that it is not a cleverly disguised mule (C3) [Dretske 1970].
Abstract: Zebra. You are at the zoo. Currently you are standing in front of the zebra enclosure and see a black-and-white striped equine creature inside. Since you can tell a zebra from the way it looks, you come to know the proposition that Z = it is a zebra (C1). From this you competently deduce the proposition that ¬CDM = it is not a cleverly disguised mule (C2) and you thereupon come to believe ¬CDM (C3) [Dretske 1970].
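The three steps C1-C3 instantiate a closure-style principle for knowledge under competent deduction; one standard schematic rendering (my gloss, not Dretske's own wording) is:

```latex
% Closure under competent deduction (standard schema; my rendering, not Dretske's):
% if one knows Z and competently deduces \neg CDM from Z (retaining knowledge
% of Z throughout), one is in a position to know \neg CDM.
\bigl(K(Z) \wedge \mathrm{CompDed}(Z, \neg \mathrm{CDM})\bigr) \rightarrow K(\neg \mathrm{CDM})
```

Dretske's well-known move in the 1970 paper is to deny a closure principle of roughly this shape, so that one can know Z at the zoo without being in a position to know ¬CDM.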

Journal ArticleDOI
TL;DR: In this article, a novel virtue reliabilist account of justified belief is developed, which incorporates insights from both process reliabilism and extant versions of virtue reliabilism.
Abstract: In this paper, I aim to develop a novel virtue reliabilist account of justified belief, which incorporates insights from both process reliabilism and extant versions of virtue reliabilism. Like extant virtue reliabilist accounts of justified belief, the proposed view takes it that justified belief is a kind of competent performance and that competent performances require reliable agent abilities. However, unlike extant versions of virtue reliabilism, the view takes abilities to essentially involve reliable processes. In this way, the proposed view takes a leaf from process reliabilism. Finally, I will provide reason to believe that the view compares favourably with both extant versions of virtue reliabilism and process reliabilism. In particular, I will show that in taking abilities to essentially involve reliable processes, the view has an edge over extant versions of virtue reliabilism. Moreover, I will argue that the proposed view can either solve or defuse a number of classical problems of process reliabilism, including the new evil demon problem, the problem of clairvoyant cases and the generality problem.

Journal ArticleDOI
TL;DR: In this article, the authors defend a model of nonlinguistic inferences that shows how they could be practically rational, and explore empirical results that show how nonlinguistic agents can be sensitive to these similarity assessments in a way that grants them control over their opaque judgments.
Abstract: A surge of empirical research demonstrating flexible cognition in animals and young infants has raised interest in the possibility of rational decision-making in the absence of language. A venerable position, which I here call “Classical Inferentialism”, holds that nonlinguistic agents are incapable of rational inferences. Against this position, I defend a model of nonlinguistic inferences that shows how they could be practically rational. This model vindicates the Lockean idea that we can intuitively grasp rational connections between thoughts by developing the Davidsonian idea that practical inferences are at bottom categorization judgments. From this perspective, we can see how similarity-based categorization processes widely studied in human and animal psychology might count as practically rational. The solution involves a novel hybrid of internalism and externalism: intuitive inferences are psychologically rational (in the explanatory sense) given the intensional sensitivity of the similarity assessment to the internal structure of the agent's reasons for acting, but epistemically rational (in the justificatory sense) given an ecological fit between the features matched by that assessment and the structure of the agent's environment. The essay concludes by exploring empirical results that show how nonlinguistic agents can be sensitive to these similarity assessments in a way that grants them control over their opaque judgments.


Journal ArticleDOI
TL;DR: The authors argue that the modal conception of risk is ill-suited to the roles that a notion of risk is required to play and explore the prospects for a form of pluralism about risk, embracing both the probabilistic and the normalcy conceptions.
Abstract: The notion of risk plays a central role in economics, finance, health, psychology, law and elsewhere, and is prevalent in managing challenges and resources in day-to-day life. In recent work, Duncan Pritchard (2015, 2016) has argued against the orthodox probabilistic conception of risk on which the risk of a hypothetical scenario is determined by how probable it is, and in favour of a modal conception on which the risk of a hypothetical scenario is determined by how modally close it is. In this article, we use Pritchard’s discussion as a springboard for a more wide-ranging discussion of the notion of risk. We introduce three different conceptions of risk: the standard probabilistic conception, Pritchard’s modal conception, and a normalcy conception that is new (though it has some precursors in the psychological literature on risk perception). Ultimately, we argue that the modal conception is ill-suited to the roles that a notion of risk is required to play and explore the prospects for a form of pluralism about risk, embracing both the probabilistic and the normalcy conceptions.



Journal ArticleDOI
Neil Mehta1
TL;DR: In this paper, the explanatory gap is taken to be a general phenomenon surrounding consciousness, normativity, intentionality, and more, and a fully general diagnosis is offered that unifies previously fragmented reductivist responses to antireductivist arguments based on explanatory gaps.
Abstract: I assume that there exists a general phenomenon, the phenomenon of the explanatory gap, surrounding consciousness, normativity, intentionality, and more. Explanatory gaps are often thought to foreclose reductive possibilities wherever they appear. In response, reductivists who grant the existence of these gaps have offered countless local solutions. But all such reductivist responses have had a serious shortcoming: because they appeal to essentially domain-specific features, they cannot be fully generalized, and in this sense these responses have been not just local but parochial. Here I do better. Taking for granted that the explanatory gap is a genuine phenomenon, I offer a fully general diagnosis that unifies these previously fragmented reductivist responses. That we, like Descartes, can conceive of phenomenally unconscious physical duplicates of ourselves; that we, like Moore, can coherently ask whether what we desire to desire is truly good; that we, like Searle, can imagine an operator in a Chinese Room who lacks the slightest understanding of Chinese – I regard these facts, and many more, as mere instances of a general and philosophically pervasive phenomenon that I call the explanatory gap.1 Explanatory gaps are often thought to foreclose reductive possibilities wherever they appear. Here I set myself three tasks. First, I show that these familiar antireductivist arguments based on explanatory gaps can be fitted to a common template. On the one hand, we have standard reductive claims, like the claim that water reduces to (say) H2O. Such reductive claims are entailed by an a priori conceptual truth stating a possible condition for what it is, essentially speaking, to be the entity in question, together with outside information of a certain limited kind telling us what meets that condition. In this sense, all standard reductive claims are transparent. 
On the other hand, we have gappy reductive claims – claims purporting to reduce the phenomenally conceived to the non-phenomenally conceived, the normatively conceived to the non-normatively conceived, and so on. What distinguishes these gappy reductions is that we are missing the relevant kind of a priori conceptual truths about essences.2 As a consequence, such reductions are never transparent. The anti-reductivist would have us conclude, by inference to the best explanation, that the gappy reductions are spurious (§1-§2). Many reductivists have responded by denying the existence of the asymmetry, and thereby denying that explanatory gaps are real. Though I, too,
1 The term “explanatory gap” is of course most familiar from the phenomenal case and originates in Levine (1983).
2 As I suggest in a later footnote, a similar argument template can be constructed using identity-based rather than essence-based models of reduction. Identity-based models appear in Lewis (1972), Jackson (1996), and Chalmers (2012).

Journal ArticleDOI
TL;DR: In this article, the authors argue that racial profiling is a serious injustice that is exacerbated when it forms part of a larger pattern of similar actions that collectively realize a state of cumulative injustice.
Abstract: This paper tries to explain why racial profiling involves a serious injustice and to do so in a way that avoids the problems of existing philosophical accounts. An initially plausible view maintains that racial profiling is pro tanto wrong in and of itself by violating a constraint on fair treatment that is generally violated by acts of statistical discrimination based on ascribed characteristics. However, consideration of other cases involving statistical discrimination suggests that violating a constraint of this kind may not be an especially serious wrong in and of itself. To fully capture the significant wrong that occurs when racial profiling is targeted at black Americans or other similarly situated groups, it is argued that we should appeal to the idea that this basic injustice is exacerbated when it forms part of a larger pattern of similar actions that collectively realize a state of cumulative injustice.

Journal ArticleDOI
Sukaina Hirji1
TL;DR: Neo-Aristotelian virtue ethics defines virtues of character as traits that reliably promote an agent's own flourishing, and virtuous actions as the sorts of actions a virtuous agent reliably performs under the relevant circumstances; the author argues that neither commitment is a feature of Aristotle's own view.
Abstract: It is commonly assumed that Aristotle’s ethical theory shares deep structural similarities with neo-Aristotelian virtue ethics. I argue that this assumption is a mistake, and that Aristotle’s ethical theory is both importantly distinct from the theories his work has inspired, and independently compelling. I take neo-Aristotelian virtue ethics to be characterized by two central commitments: (i) virtues of character are defined as traits that reliably promote an agent’s own flourishing, and (ii) virtuous actions are defined as the sorts of actions a virtuous agent reliably performs under the relevant circumstances. I argue that neither of these commitments is a feature of Aristotle’s own view, and I sketch an alternative explanation for the relationship between virtue and happiness in the Nicomachean Ethics. Although, on the interpretation I defend, we do not find in Aristotle a distinctive normative theory alongside deontology and consequentialism, what we do find is a way of thinking about how prudential and moral reasons can come to be aligned through a certain conception of practical agency.

Journal ArticleDOI
TL;DR: The author argues that permissive situations, in which one is rationally permitted to believe some proposition and simultaneously rationally permitted to suspend judgment on it, clearly are possible, and that in them a cognitively healthy person can exhibit direct control over her beliefs without any cognitive defect.
Abstract: According to what I will call ‘the disanalogy thesis,’ beliefs differ from actions in at least the following important way: while cognitively healthy people often exhibit direct control over their actions, there is no possible scenario where a cognitively healthy person exhibits direct control over her beliefs. Recent arguments against the disanalogy thesis maintain that, if you find yourself in what I will call a ‘permissive situation’ with respect to p, then you can have direct control over whether you believe p, and do so without manifesting any cognitive defect. These arguments focus primarily on the idea that we can have direct doxastic control in permissive situations, but they provide insufficient reason for thinking that permissive situations are actually possible, since they pay inadequate attention to the following worries: permissive situations seem inconsistent with the uniqueness thesis, permissive situations seem inconsistent with natural thoughts about epistemic akrasia, and vagueness threatens even if we push these worries aside. In this paper I argue that, on the understanding of permissive situations that is most useful for evaluating the disanalogy thesis, permissive situations clearly are possible. Epistemologists have grown increasingly interested in the question how epistemic rationality compares to practical rationality (Berker 2013, Cohen 2016, Rinard 2017, etc.), but epistemologists have been asking the more general question how belief relates to action for a very long time—for at least as long as they’ve wondered whether, and to what extent, we can control our beliefs. According to the currently dominant view, beliefs differ from actions in at least this way: while we often have direct control over our actions, we never have direct control over our beliefs. On this view, just as we might cause ourselves to blush by thinking about something embarrassing, we might cause ourselves to believe (e.g.) 
that the lights are on by looking at the lights and turning them on (Feldman 2001). But on this view, we can’t form the belief that the lights are on, or any other belief, by simply deciding to form it, the way we can (for example) raise our arms by simply deciding to raise them. On this view, if direct control over our beliefs isn’t fully conceptually impossible, it’s at least impossible for cognitively healthy people like you and me. Perhaps Bennett’s Credamites can do it (1990), but they aren’t functioning properly, and we can’t do it without getting ourselves into a defective cognitive state like theirs. Thus, while cognitively healthy people often exhibit direct control over their actions, there is no possible scenario where a cognitively healthy person exhibits direct control over her beliefs. Call this thesis about the relationship between belief and action the ‘disanalogy thesis.’ As Kurt Sylvan notes (2016), Joseph Raz (1999), Carl Ginet (2001), Keith Frankish (2007), Philip Nickel (2010), and Conor McHugh (2013) all reject the disanalogy thesis, and they all reject it because they all think that, if a person finds herself in a situation where she’s rationally permitted to believe some proposition and simultaneously rationally permitted to suspend judgment on that proposition, she can have direct control over whether she believes it without exhibiting any kind of cognitive defect. I think these authors are on to something. But it’s not obvious that there are situations where we’re rationally permitted to believe some proposition and simultaneously
* Penultimate draft. Please cite final draft forthcoming in Philosophy and Phenomenological Research.


Journal ArticleDOI
TL;DR: In this article, the authors explore the view that Frege's puzzle is a source of straightforward counterexamples to Leibniz's law and provide the resources for a straightforward semantics of attitude reports that is consistent with the Millian thesis that the meaning of a name is just the thing it stands for.
Abstract: We explore the view that Frege’s puzzle is a source of straightforward counterexamples to Leibniz’s law. Taking this seriously requires us to revise the classical logic of quantifiers and identity; we work out the options, in the context of higher-order logic. The logics we arrive at provide the resources for a straightforward semantics of attitude reports that is consistent with the Millian thesis that the meaning of a name is just the thing it stands for. We provide models to show that some of these logics are non-degenerate. 1 Opacity and Export Having incomplete knowledge of classical astronomy (and being unacquainted with the literature on Frege’s Puzzle) Asher assents to ‘Hesperus is bright’, but not to ‘Phosphorus is bright’. It’s natural, then, to describe Asher’s doxastic state like this: Asher believes that Hesperus is bright, but Asher does not believe that Phosphorus is bright. But of course, Hesperus and Phosphorus are the very same thing: the planet Venus. Taking this familiar story at face-value, we might formalize these claims as follows: