
Showing papers on "Counterfactual conditional published in 2019"


Proceedings ArticleDOI
TL;DR: This work proposes a framework for generating and evaluating a diverse set of counterfactual explanations based on determinantal point processes, and provides metrics that enable comparison of counterfactual-based methods to other local explanation methods.
Abstract: Post-hoc explanations of machine learning models are crucial for people to understand and act on algorithmic predictions. An intriguing class of explanations is through counterfactuals, hypothetical examples that show people how to obtain a different prediction. We posit that effective counterfactual explanations should satisfy two properties: feasibility of the counterfactual actions given user context and constraints, and diversity among the counterfactuals presented. To this end, we propose a framework for generating and evaluating a diverse set of counterfactual explanations based on determinantal point processes. To evaluate the actionability of counterfactuals, we provide metrics that enable comparison of counterfactual-based methods to other local explanation methods. We further address necessary tradeoffs and point to causal implications in optimizing for counterfactuals. Our experiments on four real-world datasets show that our framework can generate a set of counterfactuals that are diverse and well approximate local decision boundaries, outperforming prior approaches to generating diverse counterfactuals. We provide an implementation of the framework at this https URL.
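
For a feel of the diversity objective, here is a minimal sketch of the determinantal-point-process (DPP) term the abstract alludes to: the determinant of a similarity kernel over a candidate set, which grows as the counterfactuals spread out. The kernel choice and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dpp_diversity(counterfactuals: np.ndarray) -> float:
    """Diversity of a counterfactual set as det(K), with
    K[i, j] = 1 / (1 + ||cf_i - cf_j||); determinants near 1
    indicate well-spread sets, near 0 indicate near-duplicates."""
    dists = np.linalg.norm(
        counterfactuals[:, None, :] - counterfactuals[None, :, :], axis=-1)
    K = 1.0 / (1.0 + dists)
    return float(np.linalg.det(K))

# Two candidate sets of three counterfactuals over two features:
tight = np.array([[1.0, 1.0], [1.1, 1.0], [1.0, 1.1]])
spread = np.array([[1.0, 1.0], [3.0, 0.0], [0.0, 3.0]])
assert dpp_diversity(spread) > dpp_diversity(tight)
```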

409 citations


Proceedings Article
27 May 2019
TL;DR: This work builds on standard theory and tools from formal verification and proposes a novel algorithm that solves a sequence of satisfiability problems, where both the distance function (objective) and predictive model (constraints) are represented as logic formulae.
Abstract: Predictive models are being increasingly used to support consequential decision making at the individual level in contexts such as pretrial bail and loan approval. As a result, there is increasing social and legal pressure to provide explanations that help the affected individuals not only to understand why a prediction was output, but also how to act to obtain a desired outcome. To this end, several works have proposed optimization-based methods to generate nearest counterfactual explanations. However, these methods are often restricted to a particular subset of models (e.g., decision trees or linear models) and differentiable distance functions. In contrast, we build on standard theory and tools from formal verification and propose a novel algorithm that solves a sequence of satisfiability problems, where both the distance function (objective) and predictive model (constraints) are represented as logic formulae. As shown by our experiments on real-world data, our algorithm is: i) model-agnostic ({non-}linear, {non-}differentiable, {non-}convex); ii) data-type-agnostic (heterogeneous features); iii) distance-agnostic ($\ell_0, \ell_1, \ell_\infty$, and combinations thereof); iv) able to generate plausible and diverse counterfactuals for any sample (i.e., 100% coverage); and v) at provably optimal distances.
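
A minimal sketch of the "sequence of satisfiability problems" idea, using the Z3 SMT solver: each call asks whether a counterfactual exists within a given distance bound, and bisection tightens the bound. The toy linear model and the bisection loop are my illustrative assumptions, not the paper's algorithm.

```python
from z3 import Real, Solver, If, sat

x0, x1 = 1.0, 2.0                      # factual instance (model outputs class 0)
cx0, cx1 = Real('cx0'), Real('cx1')    # counterfactual variables

def l1(a, b):
    d = a - b
    return If(d >= 0, d, -d)           # |a - b| as a logic term

# Desired outcome encoded as a formula over the counterfactual variables.
model_flips = 0.8 * cx0 - 1.2 * cx1 + 0.1 > 0

lo, hi, best = 0.0, 10.0, None
for _ in range(30):                    # one satisfiability call per bound
    mid = (lo + hi) / 2
    s = Solver()
    s.add(model_flips, l1(cx0, x0) + l1(cx1, x1) <= mid)
    if s.check() == sat:
        best, hi = s.model(), mid      # a counterfactual exists within `mid`
    else:
        lo = mid                       # none exists: loosen the bound
print(best)                            # near the minimum L1 distance (~1.25)
```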

134 citations


Posted Content
TL;DR: The problem of feasibility is formulated as preserving causal relationships among input features and a method is presented that uses (partial) structural causal models to generate actionable counterfactuals that better satisfy feasibility constraints than existing methods.
Abstract: To construct interpretable explanations that are consistent with the original ML model, counterfactual examples---showing how the model's output changes with small perturbations to the input---have been proposed. This paper extends the work in counterfactual explanations by addressing the challenge of feasibility of such examples. For explanations of ML models in critical domains such as healthcare and finance, counterfactual examples are useful for an end-user only to the extent that perturbation of feature inputs is feasible in the real world. We formulate the problem of feasibility as preserving causal relationships among input features and present a method that uses (partial) structural causal models to generate actionable counterfactuals. When feasibility constraints cannot be easily expressed, we consider an alternative mechanism where people can label generated CF examples on feasibility: whether it is feasible to intervene and realize the candidate CF example from the original input. To learn from this labelled feasibility data, we propose a modified variational autoencoder loss for generating CF examples that optimizes for feasibility as people interact with its output. Our experiments on Bayesian networks and the widely used ''Adult-Income'' dataset show that our proposed methods can generate counterfactual explanations that better satisfy feasibility constraints than existing methods. Code repository can be accessed here: this https URL
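
The causal feasibility constraints the abstract describes can be as simple as monotonicity and causal-edge checks. Below is a hand-written sketch of a (partial) structural causal model used as a filter; the feature names and rules are my assumptions, not the paper's model.

```python
def feasible(original: dict, counterfactual: dict) -> bool:
    """Reject counterfactuals that violate simple causal constraints."""
    # Age can never decrease under any real-world intervention.
    if counterfactual["age"] < original["age"]:
        return False
    # Gaining education takes time: education level cannot rise
    # without age rising as well (a partial causal edge edu -> age).
    if (counterfactual["education"] > original["education"]
            and counterfactual["age"] <= original["age"]):
        return False
    return True

orig = {"age": 30, "education": 2}
assert not feasible(orig, {"age": 29, "education": 2})   # age decreased
assert not feasible(orig, {"age": 30, "education": 3})   # edu up, age flat
assert feasible(orig, {"age": 32, "education": 3})
```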

118 citations


Posted Content
TL;DR: This article proposes a method for explaining the predictions of any classifier by generating w-counterfactual explanations that state the minimum changes necessary to flip a prediction's classification; the w-counterfactuals are then used to measure and improve the fidelity of the method's local regressions.
Abstract: We propose a novel method for explaining the predictions of any classifier. In our approach, local explanations are expected to explain both the outcome of a prediction and how that prediction would change if 'things had been different'. Furthermore, we argue that satisfactory explanations cannot be dissociated from a notion and measure of fidelity, as advocated in the early days of neural networks' knowledge extraction. We introduce a definition of fidelity to the underlying classifier for local explanation models which is based on distances to a target decision boundary. A system called CLEAR (Counterfactual Local Explanations via Regression) is introduced and evaluated. CLEAR generates w-counterfactual explanations that state minimum changes necessary to flip a prediction's classification. CLEAR then builds local regression models, using the w-counterfactuals to measure and improve the fidelity of its regressions. By contrast, the popular LIME method, which also uses regression to generate local explanations, neither measures its own fidelity nor generates counterfactuals. CLEAR's regressions are found to have significantly higher fidelity than LIME's, averaging over 45% higher in this paper's four case studies.
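
A rough sketch of the fidelity-via-boundary idea: locate the point where the black box actually flips (a counterfactual found by bisection along one feature), fit a local linear surrogate, and measure how far the surrogate's implied boundary sits from the real one. The toy classifier, perturbation scheme, and single-feature search are illustrative assumptions, not CLEAR itself.

```python
import numpy as np

def black_box(x):                       # stand-in for any classifier score
    return 1 / (1 + np.exp(-(1.5 * x[0] - x[1])))

x = np.array([0.0, 1.0])                # predicted class 0 (score < 0.5)

# Bisection on feature 0 for the boundary crossing (a counterfactual).
lo, hi = x[0], x[0] + 10
for _ in range(50):
    mid = (lo + hi) / 2
    if black_box([mid, x[1]]) >= 0.5:
        hi = mid
    else:
        lo = mid
actual_boundary = hi                    # ~0.667 for this toy model

# Local linear surrogate fitted on perturbations around x.
rng = np.random.default_rng(0)
X = x + rng.normal(scale=0.5, size=(200, 2))
y = np.array([black_box(p) for p in X])
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
# Where the surrogate predicts score 0.5 along feature 0:
surrogate_boundary = (0.5 - w[2] - w[1] * x[1]) / w[0]
fidelity_error = abs(surrogate_boundary - actual_boundary)
print(fidelity_error)                   # small error = high local fidelity
```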

64 citations


Posted Content
TL;DR: In this paper, the authors propose a novel algorithm that solves a sequence of satisfiability problems, where both the distance function (objective) and predictive model (constraints) are represented as logic formulae.
Abstract: Predictive models are being increasingly used to support consequential decision making at the individual level in contexts such as pretrial bail and loan approval. As a result, there is increasing social and legal pressure to provide explanations that help the affected individuals not only to understand why a prediction was output, but also how to act to obtain a desired outcome. To this end, several works have proposed optimization-based methods to generate nearest counterfactual explanations. However, these methods are often restricted to a particular subset of models (e.g., decision trees or linear models) and differentiable distance functions. In contrast, we build on standard theory and tools from formal verification and propose a novel algorithm that solves a sequence of satisfiability problems, where both the distance function (objective) and predictive model (constraints) are represented as logic formulae. As shown by our experiments on real-world data, our algorithm is: i) model-agnostic ({non-}linear, {non-}differentiable, {non-}convex); ii) data-type-agnostic (heterogeneous features); iii) distance-agnostic ($\ell_0, \ell_1, \ell_\infty$, and combinations thereof); iv) able to generate plausible and diverse counterfactuals for any sample (i.e., 100% coverage); and v) at provably optimal distances.

64 citations


Journal ArticleDOI
TL;DR: An analytic framework for integrating causality and policy inference is developed that accepts the mandate of causal rigor but is conceptually rather than methodologically driven; it is applied to two substantive areas that have generated high-visibility experimental research and that have considerable policy influence.
Abstract: The randomized experiment has achieved the status of the gold standard for estimating causal effects in criminology and the other social sciences. Although causal identification is indeed important...

58 citations


Journal ArticleDOI
TL;DR: This study aligns the recently proposed Linear Interpretable Model-agnostic Explainer and Shapley Additive Explanations with the notion of counterfactual explanations, and empirically benchmarks their effectiveness and efficiency against SEDC using a collection of 13 data sets.
Abstract: We study the interpretability of predictive systems that use high-dimensional behavioral and textual data. Examples include predicting product interest based on online browsing data and detecting spam emails or objectionable web content. Recently, counterfactual explanations have been proposed for generating insight into model predictions, which focus on what is relevant to a particular instance. Conducting a complete search to compute counterfactuals is very time-consuming because of the huge dimensionality. To our knowledge, for behavioral and text data, only one model-agnostic heuristic algorithm (SEDC) for finding counterfactual explanations has been proposed in the literature. However, there may be better algorithms for finding counterfactuals quickly. This study aligns the recently proposed Linear Interpretable Model-agnostic Explainer (LIME) and Shapley Additive Explanations (SHAP) with the notion of counterfactual explanations, and empirically benchmarks their effectiveness and efficiency against SEDC using a collection of 13 data sets. Results show that LIME-Counterfactual (LIME-C) and SHAP-Counterfactual (SHAP-C) have low and stable computation times, but mostly, they are less efficient than SEDC. However, for certain instances on certain data sets, SEDC's run time is comparatively large. With regard to effectiveness, LIME-C and SHAP-C find reasonable, if not always optimal, counterfactual explanations. SHAP-C, however, seems to have difficulties with highly unbalanced data. Because of its good overall performance, LIME-C seems to be a favorable alternative to SEDC, which failed for some nonlinear models to find counterfactuals because of the particular heuristic search algorithm it uses. A main upshot of this paper is that there is a good deal of room for further research. For example, we propose algorithmic adjustments that are direct upshots of the paper's findings.
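
The sketch below illustrates how an additive importance ranking can be "aligned" with counterfactual explanations, as the abstract puts it: remove the highest-ranked evidence (set features to zero) until the predicted class flips. Using the model's own weights as the ranking is a stand-in for LIME or SHAP scores; all names are mine.

```python
import numpy as np

def counterfactual_by_removal(x, importances, predict):
    """Zero out features in decreasing importance until the class flips."""
    x = x.astype(float).copy()
    base = predict(x)
    for j in np.argsort(importances)[::-1]:        # most important first
        if x[j] == 0:
            continue
        x[j] = 0.0
        if predict(x) != base:
            return x                                # counterfactual found
    return None

weights = np.array([2.0, -0.5, 1.5, 0.3])
predict = lambda x: int(x @ weights > 1.0)
x = np.array([1, 1, 1, 1])
print(counterfactual_by_removal(x, importances=weights, predict=predict))
```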

49 citations


Journal ArticleDOI
02 May 2019-Entropy
TL;DR: In this paper, a formal framework for quantifying actual causation in discrete dynamical systems is presented, based on a set of basic requirements for causation (realization, composition, information, integration, and exclusion).
Abstract: Actual causation is concerned with the question: "What caused what?" Consider a transition between two states within a system of interacting elements, such as an artificial neural network, or a biological brain circuit. Which combination of synapses caused the neuron to fire? Which image features caused the classifier to misinterpret the picture? Even detailed knowledge of the system's causal network, its elements, their states, connectivity, and dynamics does not automatically provide a straightforward answer to the "what caused what?" question. Counterfactual accounts of actual causation, based on graphical models paired with system interventions, have demonstrated initial success in addressing specific problem cases, in line with intuitive causal judgments. Here, we start from a set of basic requirements for causation (realization, composition, information, integration, and exclusion) and develop a rigorous, quantitative account of actual causation that is generally applicable to discrete dynamical systems. We present a formal framework to evaluate these causal requirements based on system interventions and partitions, which considers all counterfactuals of a state transition. This framework is used to provide a complete causal account of the transition by identifying and quantifying the strength of all actual causes and effects linking the two consecutive system states. Finally, we examine several exemplary cases and paradoxes of causation and show that they can be illuminated by the proposed framework for quantifying actual causation.

40 citations


Journal ArticleDOI
TL;DR: A systematic procedure for searching for key missing events is proposed: starting from the historical catalog, it explores alternative realizations of past events in which things turned for the worse, termed downward counterfactuals, by repeatedly considering ways in which the event loss might have been incrementally worse.
Abstract: An event catalog is a foundation of the risk analysis for any natural hazard. Especially if the catalog is comparatively brief relative to the return periods of possible events, it may well be deficient in extreme events that are of special importance to risk stakeholders. It is common practice for quantitative risk analysts to construct ensembles of future scenarios that include extreme events that are not in the event catalog. But past poor experience for many hazards shows that these ensembles are still liable to be missing crucial unknown events. An explicit systematic procedure is proposed here for searching for these key missing events. This procedure starts with the historical catalog events and explores alternative realizations of them where things turned for the worse. These are termed downward counterfactuals. By repeatedly exploring ways in which the event loss might have been incrementally worse, missing events can be discovered that may take risk analysts, and risk stakeholders, by surprise. The downward counterfactual search for extreme events is illustrated with examples drawn from a variety of natural hazards. Attention is drawn to the problem of overfitting to the historical record, and the value of stochastic modeling of the past.
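
A minimal sketch of a downward counterfactual search: perturb a historical event repeatedly and keep only variants whose simulated loss turned out worse, pushing toward extremes absent from the catalog. The loss model and perturbation scheme are toy assumptions of mine.

```python
import random

def loss(event):
    # Toy loss: magnitude amplified by proximity to exposed assets.
    return event["magnitude"] ** 2 / (1.0 + event["distance_km"])

def downward_counterfactual(event, steps=1000, seed=1):
    random.seed(seed)
    worst = dict(event)
    for _ in range(steps):
        variant = dict(worst)
        # Small "what if it had been slightly worse?" perturbations.
        variant["magnitude"] += random.uniform(-0.2, 0.2)
        variant["distance_km"] = max(0.0,
            variant["distance_km"] + random.uniform(-5, 5))
        if loss(variant) > loss(worst):        # keep only downward turns
            worst = variant
    return worst

historical = {"magnitude": 6.0, "distance_km": 40.0}
print(loss(historical), loss(downward_counterfactual(historical)))
```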

38 citations


Posted Content
TL;DR: A simple approximation technique is introduced that is effective for finding counterfactual explanations for predictions of the original model using a range of distance metrics; the resulting counterfactual examples are significantly closer to the original instances than those of other methods designed for tree ensembles, across four distance metrics.
Abstract: Model interpretability has become an important problem in machine learning (ML) due to the increased effect algorithmic decisions have on humans. Counterfactual explanations can help users understand not only why ML models make certain decisions, but also give insight into how these decisions can be modified. We frame the problem of finding counterfactual explanations as an optimization task and extend previous work that could only be applied to differentiable models. In order to accommodate non-differentiable models such as tree ensembles, we propose using probabilistic model approximations in the optimization framework. We introduce a simple approximation technique that is effective for finding counterfactual explanations for predictions of the original model using a range of distance metrics. We show that our counterfactual examples are significantly closer to the original instances compared to other methods designed for tree ensembles for four distance metrics.
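
To see why a probabilistic approximation helps, consider a single hard split 1[x > t]: it has zero gradient everywhere, but replacing it with a sigmoid yields a smooth score that gradient descent can follow toward the desired leaf. The one-split illustration below, with made-up loss weights, is my sketch of that idea, not the paper's method for full ensembles.

```python
import numpy as np

t = 3.0                                    # split threshold; leaves predict 0 / 1
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def soft_tree(x, temp=1.0):
    """Smooth stand-in for the hard split 1[x > t]."""
    return sigmoid(temp * (x - t))

x0 = 1.0                                   # factual instance, predicted class 0
x, lr = x0, 0.1
for _ in range(500):
    p = soft_tree(x)
    grad_pred = -p * (1 - p)               # d/dx of the loss term (1 - p)
    grad_dist = 0.1 * (x - x0)             # d/dx of 0.05 * (x - x0)^2
    x -= lr * (grad_pred + grad_dist)
print(x, soft_tree(x))                     # settles just past the split, score > 0.5
```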

36 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors propose a framework that generates attainable counterfactuals by identifying the smallest change made to a feature vector to qualitatively influence a prediction; for example, from "loan rejected" to "awarded" or from 'high risk of cardiovascular disease' to "low risk".
Abstract: Counterfactual explanations can be obtained by identifying the smallest change made to a feature vector to qualitatively influence a prediction; for example, from 'loan rejected' to 'awarded' or from 'high risk of cardiovascular disease' to 'low risk'. Previous approaches often emphasized that counterfactuals should be easily interpretable to humans, motivating sparse solutions with few changes to the feature vectors. However, these approaches do not ensure that the produced counterfactuals are proximate (i.e., not local outliers) and connected to regions with substantial data density (i.e., close to correctly classified observations), two requirements known as counterfactual faithfulness. These requirements are fundamental when making suggestions to individuals that are indeed attainable. Our contribution is twofold. On one hand, we suggest complementing the catalogue of counterfactual quality measures [1] with a criterion that quantifies the degree of difficulty of a given counterfactual suggestion. On the other hand, drawing ideas from the manifold learning literature, we develop a framework that generates attainable counterfactuals. We suggest the counterfactual conditional heterogeneous variational autoencoder (C-CHVAE) to identify attainable counterfactuals that lie within regions of high data density.
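
A sketch of the manifold intuition behind C-CHVAE: search in an autoencoder's latent space rather than input space, so decoded candidates stay near high-density regions. Here `encode`, `decode`, and `classify` are assumed pre-trained handles (toy stand-ins are given), and the growing random search is my illustration, not the paper's algorithm.

```python
import numpy as np

def latent_counterfactual(x, encode, decode, classify, step=0.1,
                          tries=200, seed=0):
    """Perturb in latent space and decode, so candidates stay near
    the data manifold; grow the search radius until the class flips."""
    rng = np.random.default_rng(seed)
    z = encode(x)
    target = 1 - classify(x)              # flip a binary prediction
    radius = step
    for _ in range(50):
        for _ in range(tries):
            candidate = decode(z + rng.normal(scale=radius, size=z.shape))
            if classify(candidate) == target:
                return candidate          # attainable counterfactual
        radius += step                    # widen the ball and retry
    return None

# Toy stand-ins: identity "autoencoder" on 2-D data, linear classifier.
encode = decode = lambda v: np.asarray(v, dtype=float)
classify = lambda v: int(v[0] + v[1] > 2.0)
print(latent_counterfactual(np.array([0.5, 0.5]), encode, decode, classify))
```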

Journal ArticleDOI
TL;DR: Four experiments explore a recent finding and provide clear support for a unified counterfactual analysis of causal reasoning by showing that prescriptive norm violations affect causal judgments of agents, but not of inanimate artifacts used by those agents.

Book ChapterDOI
16 Sep 2019
TL;DR: This paper focuses on the notion of explanation justification, defined as connectedness to ground-truth data, in the context of counterfactuals, and shows that state-of-the-art post-hoc counterfactual approaches can minimize the impact of this risk by generating less local explanations.
Abstract: Post-hoc interpretability approaches, although powerful tools to generate explanations for predictions made by a trained black-box model, have been shown to be vulnerable to issues caused by lack of robustness of the classifier. In particular, this paper focuses on the notion of explanation justification, defined as connectedness to ground-truth data, in the context of counterfactuals. In this work, we explore the extent of the risk of generating unjustified explanations. We propose an empirical study to assess the vulnerability of classifiers and show that the chosen learning algorithm heavily impacts the vulnerability of the model. Additionally, we show that state-of-the-art post-hoc counterfactual approaches can minimize the impact of this risk by generating less local explanations (Source code available at: https://github.com/thibaultlaugel/truce).
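
The connectedness notion can be made concrete with an epsilon-graph: a counterfactual counts as justified if a chain of ground-truth points, each within epsilon of the next, links it to the data. The sketch below is one such check; epsilon and the toy data are my assumptions, and the full definition also requires the chained points to be correctly classified.

```python
import numpy as np
from collections import deque

def is_justified(cf, X_same_class, eps=0.6):
    """BFS over the epsilon-graph of ground-truth points, seeded at cf:
    justified iff at least one training point is (transitively) reachable."""
    seen, queue = set(), deque([None])        # None marks the cf itself
    pos = lambda i: cf if i is None else X_same_class[i]
    while queue:
        i = queue.popleft()
        for j in range(len(X_same_class)):
            if j not in seen and np.linalg.norm(pos(i) - X_same_class[j]) <= eps:
                seen.add(j)
                queue.append(j)
    return len(seen) > 0

X = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])  # ground-truth class region
print(is_justified(np.array([1.4, 0.0]), X))        # True: chained to the data
print(is_justified(np.array([3.0, 0.0]), X))        # False: isolated counterfactual
```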

Journal ArticleDOI
TL;DR: A set-theoretic and possible-worlds approach to counterfactual analysis in case-study explanation is developed, and attention then turns to a rigorous understanding of the 'minimal-rewrite' rule, linking this rule to insights from set theory about the relative importance of necessary conditions.
Abstract: In this paper, we develop a set-theoretic and possible worlds approach to counterfactual analysis in case-study explanation. Using this approach, we first consider four kinds of counterfactuals: necessary condition counterfactuals, SUIN condition counterfactuals, sufficient condition counterfactuals, and INUS condition counterfactuals. We explore the distinctive causal claims entailed in each, and conclude that necessary condition and SUIN condition counterfactuals are the most useful types for hypothesis assessment in case-study research. We then turn attention to the development of a rigorous understanding of the 'minimal-rewrite' rule, linking this rule to insights from set theory about the relative importance of necessary conditions. We show why, logically speaking, a comparative analysis of two necessary condition counterfactuals will tend to favour small events and contingent happenings. A third section then presents new tools for specifying the level of generality of the events in a counterfactual. We show why and how the goals of formulating empirically important versus empirically plausible counterfactuals stand in tension with one another. Finally, we use our framework to link counterfactual analysis to causal sequences, which in turn provides advantages for conducting counterfactual projections.

Posted Content
TL;DR: It is argued that explanations should be based on the causal model of the data and on derived intervened causal models that represent the data distribution subject to interventions; with these models one can compute counterfactuals, new samples that show how the model reacts to feature changes in the input.
Abstract: Model explanations based on pure observational data cannot compute the effects of features reliably, due to their inability to estimate how each factor alteration could affect the rest. We argue that explanations should be based on the causal model of the data and the derived intervened causal models, which represent the data distribution subject to interventions. With these models, we can compute counterfactuals, new samples that will inform us how the model reacts to feature changes on our input. We propose a novel explanation methodology based on Causal Counterfactuals and identify the limitations of current Image Generative Models in their application to counterfactual creation.
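
On a fully specified structural causal model, the counterfactual computation the abstract relies on is the standard abduction-action-prediction recipe. The toy linear SCM below is my illustration of that recipe; the paper's contribution is applying the idea with image generative models.

```python
# SCM:  x := u_x ;  y := 2*x + u_y
x_obs, y_obs = 1.0, 3.5

# 1. Abduction: recover the exogenous noise consistent with the observation.
u_x = x_obs
u_y = y_obs - 2 * x_obs            # u_y = 1.5

# 2. Action: intervene do(x := 2), severing x from its own mechanism.
x_cf = 2.0

# 3. Prediction: push the recovered noise through the modified model.
y_cf = 2 * x_cf + u_y
print(y_cf)                        # 5.5: the y that would have obtained had x been 2
```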

Journal ArticleDOI
17 Jul 2019
TL;DR: It is shown that this work's model-specific approach exploits all the theoretical advantages of counterfactual explanations, and hence improves decision tree interpretability by decoupling the quality of the interpretation from the depth and width of the tree.
Abstract: Explanations in machine learning come in many forms, but a consensus regarding their desired properties is still emerging. In our work we collect and organise these explainability desiderata and discuss how they can be used to systematically evaluate properties and quality of an explainable system using the case of class-contrastive counterfactual statements. This leads us to propose a novel method for explaining predictions of a decision tree with counterfactuals. We show that our model-specific approach exploits all the theoretical advantages of counterfactual explanations, hence improves decision tree interpretability by decoupling the quality of the interpretation from the depth and width of the tree.
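
One way to make the model-specific idea concrete: every leaf of a decision tree is an axis-aligned box, so a counterfactual can be read off by projecting the instance onto the nearest box whose leaf predicts the target class. The sketch below does this for a scikit-learn tree; the margin and toy data are my assumptions, and the paper's actual method may differ.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
clf = DecisionTreeClassifier().fit(X, y)
t = clf.tree_

def leaf_boxes(node=0, lo=None, hi=None, boxes=None):
    """Collect (predicted class, lower bounds, upper bounds) per leaf."""
    lo, hi = dict(lo or {}), dict(hi or {})
    boxes = [] if boxes is None else boxes
    if t.children_left[node] == -1:                    # leaf node
        boxes.append((int(np.argmax(t.value[node])), lo, hi))
        return boxes
    f, thr = t.feature[node], t.threshold[node]
    leaf_boxes(t.children_left[node], lo, {**hi, f: thr}, boxes)   # x[f] <= thr
    leaf_boxes(t.children_right[node], {**lo, f: thr}, hi, boxes)  # x[f] >  thr
    return boxes

def counterfactual(x, target, margin=1e-3):
    """Project x onto the closest box of a target-class leaf."""
    best = None
    for cls, lo, hi in leaf_boxes():
        if cls != target:
            continue
        cf = x.astype(float).copy()
        for f, v in lo.items():
            cf[f] = max(cf[f], v + margin)
        for f, v in hi.items():
            cf[f] = min(cf[f], v - margin)
        if best is None or np.linalg.norm(cf - x) < np.linalg.norm(best - x):
            best = cf
    return best

print(counterfactual(np.array([0.0]), target=1))       # just past the split at 1.5
```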

Proceedings ArticleDOI
TL;DR: In this paper, the authors propose a new line of counterfactual explanations research aimed at providing actionable and feasible paths to transform a selected instance into one that meets a certain goal.
Abstract: Work in Counterfactual Explanations tends to focus on the principle of "the closest possible world" that identifies small changes leading to the desired outcome. In this paper we argue that while this approach might initially seem intuitively appealing it exhibits shortcomings not addressed in the current literature. First, a counterfactual example generated by the state-of-the-art systems is not necessarily representative of the underlying data distribution, and may therefore prescribe unachievable goals (e.g., an unsuccessful life insurance applicant with severe disability may be advised to do more sports). Second, the counterfactuals may not be based on a "feasible path" between the current state of the subject and the suggested one, making actionable recourse infeasible (e.g., low-skilled unsuccessful mortgage applicants may be told to double their salary, which may be hard without first increasing their skill level). These two shortcomings may render counterfactual explanations impractical and sometimes outright offensive. To address these two major flaws, we first propose a new line of Counterfactual Explanations research aimed at providing actionable and feasible paths to transform a selected instance into one that meets a certain goal. Second, we propose FACE: an algorithmically sound way of uncovering these "feasible paths" based on the shortest path distances defined via density-weighted metrics. Our approach generates counterfactuals that are coherent with the underlying data distribution and supported by the "feasible paths" of change, which are achievable and can be tailored to the problem at hand.
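
A compact sketch of the FACE construction: build a graph over the training data with edges only between nearby points, weight each edge by its length inflated where density is low, and run Dijkstra from the instance to the nearest point of the target class. The kernel-density proxy, epsilon, and toy data are my illustrative assumptions.

```python
import heapq
import numpy as np

def face_path(X, labels, start, target_class, eps=1.5):
    """Shortest density-weighted path from `start` to any target-class point."""
    density = lambda m: np.mean(np.exp(-np.sum((X - m) ** 2, axis=1)))
    pts = np.vstack([X, start])            # node len(X) is the instance itself
    n = len(pts)
    dist = {len(X): 0.0}
    heap = [(0.0, len(X))]
    while heap:
        d, i = heapq.heappop(heap)
        if i < len(X) and labels[i] == target_class:
            return d, pts[i]               # reachable counterfactual endpoint
        for j in range(n):
            step = np.linalg.norm(pts[i] - pts[j])
            if j == i or step > eps:       # connect only nearby points
                continue
            mid = (pts[i] + pts[j]) / 2
            w = step / max(density(mid), 1e-6)   # costly in sparse regions
            if d + w < dist.get(j, np.inf):
                dist[j] = d + w
                heapq.heappush(heap, (d + w, j))
    return None

X = np.array([[0.0, 0], [1, 0], [2, 0], [3, 0]])
labels = np.array([0, 0, 1, 1])
print(face_path(X, labels, start=np.array([0.0, 0.0]), target_class=1))
```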

Journal ArticleDOI
TL;DR: In this article, the authors present arguments for the importance of counterfactuals as well as a game-theoretic framework to account for them and argue that the nature, stability and the dynamics of any institution depend on how people reason about states of affairs that do not occur.
Abstract: This paper is a contribution to the advancement of a naturalistic social ontology. Individuals participate in an institutionalized practice by following rules. In this perspective, I show that the nature, the stability, and the dynamics of any institution depend on how people reason about states of affairs that do not occur. That means that counterfactual reasoning is essential in the working of institutions. I present arguments for the importance of counterfactuals as well as a game-theoretic framework to account for them. Since the role of counterfactuals does not directly transpire in people's behavior, the whole discussion can be seen as a broad argument against behaviorism in philosophy and the social sciences.

01 Dec 2019
TL;DR: In this article, the authors propose a modified variational autoencoder loss for generating CF examples that optimizes for feasibility as people interact with its output, yielding counterfactual explanations that better satisfy feasibility constraints.
Abstract: To construct interpretable explanations that are consistent with the original ML model, counterfactual examples---showing how the model's output changes with small perturbations to the input---have been proposed. This paper extends the work in counterfactual explanations by addressing the challenge of feasibility of such examples. For explanations of ML models in critical domains such as healthcare and finance, counterfactual examples are useful for an end-user only to the extent that perturbation of feature inputs is feasible in the real world. We formulate the problem of feasibility as preserving causal relationships among input features and present a method that uses (partial) structural causal models to generate actionable counterfactuals. When feasibility constraints cannot be easily expressed, we consider an alternative mechanism where people can label generated CF examples on feasibility: whether it is feasible to intervene and realize the candidate CF example from the original input. To learn from this labelled feasibility data, we propose a modified variational autoencoder loss for generating CF examples that optimizes for feasibility as people interact with its output. Our experiments on Bayesian networks and the widely used ''Adult-Income'' dataset show that our proposed methods can generate counterfactual explanations that better satisfy feasibility constraints than existing methods. Code repository can be accessed here: this https URL

Posted Content
TL;DR: In this article, tight bounds are developed on counterfactual discrete choice probabilities and on the expectation and c.d.f. of (functionals of) counter-factual stochastic demand in nonparametric random utility models of demand.
Abstract: We bound features of counterfactual choices in the nonparametric random utility model of demand, i.e. if observable choices are repeated cross-sections and one allows for unrestricted, unobserved heterogeneity. In this setting, tight bounds are developed on counterfactual discrete choice probabilities and on the expectation and c.d.f. of (functionals of) counterfactual stochastic demand.
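
Since such bounds come from optimizing over latent type distributions, a tiny worked instance may help: types are strict rankings of three alternatives, the distribution must match observed choice probabilities on two menus, and the counterfactual choice probability on the full menu is bounded by minimizing and maximizing a linear objective. This discrete setup and the observed numbers are made-up illustrations, not the paper's model.

```python
from itertools import permutations
import numpy as np
from scipy.optimize import linprog

types = list(permutations("abc"))                 # 6 strict preference rankings

def chooses(t, option, menu):
    return min(menu, key=t.index) == option       # best-ranked item in the menu

# Observed cross-sections: P(a | {a,b}) = 0.6 and P(b | {b,c}) = 0.7.
A_eq = np.array([
    [chooses(t, "a", "ab") for t in types],
    [chooses(t, "b", "bc") for t in types],
    [1] * len(types),                             # probabilities sum to 1
], dtype=float)
b_eq = np.array([0.6, 0.7, 1.0])

# Counterfactual: P(a | {a,b,c}) under the full menu.
c = np.array([chooses(t, "a", "abc") for t in types], dtype=float)

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
hi = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
print(lo, hi)                                     # tight bounds on P(a | {a,b,c})
```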

Journal ArticleDOI
TL;DR: This article examined how adults with and without ASD make sense of reality-violating fantasy narratives by testing real-time understanding of counterfactuals, and found anomaly detection effects in the early moments of processing (immediately in Experiment 1, and from the post-critical region in Experiment 2) that were not modulated by group.

Journal ArticleDOI
TL;DR: Different from causal and hypothetical conditionals, the dual meaning and pragmatic implications of counterfactuals may prompt people to go beyond the here and now to elaborate their mental models and entertain alternative interpretations, and substantial literature exposure further enhances pragmatic inference from counterfactual context.
Abstract: Counterfactuals are contrary-to-fact statements that are widely used in daily life to convey thoughts about what might have been. Different from fact-based processing, successful counterfactual comprehension requires readers to keep in mind both suppositional information and presupposed fact. Using event-related potentials, the present study investigates how the process of establishing a coreferential relation (i.e., pronoun resolution) is influenced by counterfactual context, and whether it is modulated by individual differences in literature reading. We compared the P600 (a positive-going deflection, which often reaches its peak around 600 milliseconds after presentation of the stimulus) effects elicited by gender-mismatched pronouns in three conditionals (causal vs. hypothetical vs. counterfactual) between two groups (high vs. low literature exposure). Results show that for the low-level group, incongruent pronouns elicited robust P600 effects across all three conditionals, while for the high-level group, the P600 effects were pronounced only in causal and hypothetical conditionals, but not in counterfactual conditionals. These findings suggest (a) different from causal and hypothetical conditionals, the dual meaning and pragmatic implications of counterfactuals may prompt people to go beyond the here and now to elaborate their mental models and entertain alternative interpretations, and (b) substantial literature exposure further enhances pragmatic inference from counterfactual context, leaving high-level readers more inclined to elaborate discourse with possible alternative inferences, while low-level readers habitually resort to a more straightforward coreferential interpretation.

Posted Content
TL;DR: A methodology is described for making counterfactual predictions when the information held by strategic agents is a latent parameter; the sharp counterfactual prediction has a finite-dimensional description, even though the latent parameter is infinite dimensional.
Abstract: We describe a methodology for making counterfactual predictions in settings where the information held by strategic agents and the distribution of payoff-relevant states of the world are unknown. The analyst observes behavior assumed to be rationalized by a Bayesian model, in which agents maximize expected utility, given partial and differential information about the state. A counterfactual prediction is desired about behavior in another strategic setting, under the hypothesis that the distribution of the state and agents' information about the state are held fixed. When the data and the desired counterfactual prediction pertain to environments with finitely many states, players, and actions, the counterfactual prediction is described by finitely many linear inequalities, even though the latent parameter, the information structure, is infinite dimensional.

Journal ArticleDOI
Erik Carlson
TL;DR: The author argues that CCA is incompatible with the prudential and moral relevance of harm and benefit; possible ways to revise or restrict CCA in order to avoid this conclusion are discussed and found wanting.
Abstract: The counterfactual comparative account of harm and benefit (CCA) has several virtues, but it also faces serious problems. I argue that CCA is incompatible with the prudential and moral relevance of harm and benefit. Some possible ways to revise or restrict CCA, in order to avoid this conclusion, are discussed and found wanting. Finally, I try to show that appealing to the context-sensitivity of counterfactuals, or to the alleged contrastive nature of harm and benefit, does not provide a solution.

Journal ArticleDOI
TL;DR: The authors give an account of the meaning of subjunctive conditionals according to which the past tense receives a modal interpretation, which allows the worlds of the antecedent to include the world of the context of utterance.
Abstract: The paper gives an account of the meaning of subjunctive conditionals according to which the past tense receives a modal interpretation. The view allows the worlds of the antecedent to include the world of the context of utterance, and thus it avoids a problem pointed out by Mackay (2015) for previous modal views of the past tense in subjunctive conditionals. I argue that it also explains a variety of facts about the relationship among subjunctive morphology, counterfactuality, and presupposition.

Journal ArticleDOI
TL;DR: The authors explored experiences and attitudes associated with "precarious work", an umbrella term for insecure, casual, flexible, contingency, non-standard and zero-hour types of employment.
Abstract: Purpose The purpose of this paper is to explore experiences and attitudes associated with “precarious work”, an umbrella term for insecure, casual, flexible, contingency, non-standard and zero-hour types of employment. Design/methodology/approach The investigation was carried out through two studies. The “outside-in” view was represented by business undergraduates (n=56), responding to a four-item questionnaire on precarious work. It was contrasted with the “inside-out” perspective of migrant, care and hospitality workers (n=72) expressed in 48 in-depth interviews, and four focus groups. Findings Participant narratives included counterfactual comparisons that were more often of a downward (“it could have been worse”) than of an upward (“not as good as it could have been”) kind. Precarious participants spontaneously remarked that they were “lucky” (rather than “unlucky”) to be in precarious work. Research limitations/implications Precarious work is likely to give rise to insecurity, uncertainty and vulnerability. However, this study distinguishes between the perspectives of “outside-in” observers, and “inside-out” participants. The former view was aligned with the standard view of work social scientists, yet the latter ran counter to both. Interestingly, the narratives of participants were compatible with the self-evaluations of people exposed to other hardships (like natural disasters). Originality/value There is limited research on how the use of counterfactual thinking and difference of vantage points shapes attitudes and evaluations of precariousness. To the authors’ knowledge, this is the first study which has identified and explained the unprompted use of “luck” in the narratives of precarious workers.

Journal ArticleDOI
03 Jan 2019
TL;DR: In this article, the authors introduce an extension of team semantics which provides a framework for the logic of manipulationist theories of causation based on structural equation models, such as Woodward's and Pearl's, incorporating (partial or total) information about functional dependencies that are invariant under interventions.
Abstract: We introduce an extension of team semantics which provides a framework for the logic of manipulationist theories of causation based on structural equation models, such as Woodward's and Pearl's; our causal teams incorporate (partial or total) information about functional dependencies that are invariant under interventions. We give a unified treatment of observational and causal aspects of causal models by isolating two operators on causal teams which correspond, respectively, to conditioning and to interventionist counterfactual implication. We then introduce formal languages for deterministic and probabilistic causal discourse, and show how various notions of cause (e.g. direct and total causes) may be defined in them. Through the tuning of various constraints on structural equations (recursivity, existence and uniqueness of solutions, full or partial definition of the functions), our framework can capture different causal models. We give an overview of the inferential aspects of the recursive, fully defined case; and we dedicate some attention to the recursive, partially defined case, which involves a shift of attention towards nonclassical truth values.

Journal ArticleDOI
TL;DR: In this paper, the role of counterfactual reasoning for the EPR argument and Bell's theorem is investigated and it is shown that the use of the latter does no harm and the non-locality result can well follow from EPR premises.
Abstract: I show why old and new claims on the role of counterfactual reasoning for the EPR argument and Bell's theorem are unjustified: once the logical relation between locality and counterfactual reasoning is clarified, the use of the latter does no harm and the nonlocality result can well follow from the EPR premises. To show why, after emphasizing the role of incompleteness arguments that Einstein developed before the EPR paper, I critically review more recent claims that equate the use of counterfactual reasoning with the assumption of a strong form of realism, and argue that such claims are untenable.

Journal ArticleDOI
TL;DR: In this paper, a non-orthodox, comparative, counterfactual, hybrid (partly welfarist, partly non-welfarist) concept of harm is proposed to describe the moral wrongness of discrimination.
Abstract: Many legal, social, and medical theorists and practitioners, as well as lay people, seem to be concerned with the harmfulness of discriminative practices. However, the philosophical literature on the moral wrongness of discrimination, with a few exceptions, does not focus on harm. In this paper, I examine, and improve, a recent account of wrongful discrimination, which divides into (1) a definition of group discrimination, and (2) a characterisation of its moral wrong-making feature in terms of harm. The resulting account analyses the wrongness of discrimination in terms of intrapersonal comparisons of the discriminatee’s actual, and relevantly counterfactual, well-being levels. I show that the account faces problems from counterfactuals, which can be traced back specifically to the orthodox - comparative, counterfactual, welfarist - concept of harm. I argue that non-counterfactual and non-comparative harm concepts face problems of their own, and don’t fit easily with our best understanding of discrimination; hence they are unsuitable to replace the orthodox concept here. I then propose a non-orthodox - comparative, counterfactual, hybrid (partly welfarist, partly non-welfarist) - concept of harm, which relies on counterfactual comparisons of ways of being treated (rather than well-being levels). I suggest how such a concept can help us handle the problems from counterfactuals, at least for my account of discrimination. I also show that there are similar proposals in other harm-related debates. An upshot of the paper is thus to corroborate the case for a non-orthodox, hybrid concept of harm, which seems better able to fulfil its functional roles in a variety of contexts.