
AAAI/ACM Conference on AI, Ethics, and Society 

About: The AAAI/ACM Conference on AI, Ethics, and Society is an academic conference. The conference publishes mainly in the area(s): Computer Science & Engineering.

Papers published on a yearly basis

Papers
Proceedings Article
09 Jun 2022
TL;DR: It is concluded that the turn toward audits alone is unlikely to achieve actual algorithmic accountability, and that sustained focus on institutional design will be required for meaningful third-party involvement.
Abstract: Much attention has focused on algorithmic audits and impact assessments to hold developers and users of algorithmic systems accountable. But existing algorithmic accountability policy approaches have neglected the lessons from non-algorithmic domains: notably, the importance of third parties. Our paper synthesizes lessons from other fields on how to craft effective systems of external oversight for algorithmic deployments. First, we discuss the challenges of third-party oversight in the current AI landscape. Second, we survey audit systems across domains (e.g., financial, environmental, and health regulation) and show that the institutional design of such audits is far from monolithic. Finally, we survey the evidence base around these design components and spell out the implications for algorithmic auditing. We conclude that the turn toward audits alone is unlikely to achieve actual algorithmic accountability, and that sustained focus on institutional design will be required for meaningful third-party involvement.

18 citations

Proceedings Article
15 May 2022
TL;DR: This work proposes an evaluation framework that can quantitatively measure disparities in the quality of explanations and sheds light on previously unexplored ways in which explanation methods may introduce unfairness in real-world decision making.
Abstract: As post hoc explanation methods are increasingly being leveraged to explain complex models in high-stakes settings, it becomes critical to ensure that the quality of the resulting explanations is consistently high across all subgroups of a population. For instance, it should not be the case that explanations associated with instances belonging to, e.g., women, are less accurate than those associated with other genders. In this work, we initiate the study of identifying group-based disparities in explanation quality. To this end, we first outline several key properties that contribute to explanation quality, namely fidelity (accuracy), stability, consistency, and sparsity, and discuss why and how disparities in these properties can be particularly problematic. We then propose an evaluation framework which can quantitatively measure disparities in the quality of explanations. Using this framework, we carry out an empirical analysis with three datasets, six post hoc explanation methods, and different model classes to understand if and when group-based disparities in explanation quality arise. Our results indicate that such disparities are more likely to occur when the models being explained are complex and non-linear. We also observe that certain post hoc explanation methods (e.g., Integrated Gradients, SHAP) are more likely to exhibit disparities. Our work sheds light on previously unexplored ways in which explanation methods may introduce unfairness in real-world decision making.
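The paper's framework is not reproduced here, but a minimal sketch of one of its ingredients, measuring a gap in explanation fidelity between subgroups, can make the idea concrete. The `model_fn` and `explain_fn` callables and the particular fidelity definition below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch (not the paper's code): quantify a group-based gap in
# one explanation-quality property, fidelity, under simplifying assumptions.
# Fidelity here = agreement between the model and a first-order surrogate
# built from the explainer's feature attributions.
import numpy as np

def local_fidelity(model_fn, explain_fn, x, n_perturb=50, scale=0.1, seed=0):
    """Fidelity of an explanation at x (higher is better): how well the
    attribution-based linear surrogate tracks the model on small
    Gaussian perturbations around x."""
    rng = np.random.default_rng(seed)
    w = explain_fn(x)                          # attribution vector at x
    base = model_fn(x[None, :])[0]             # model output at x
    noise = rng.normal(0.0, scale, size=(n_perturb, x.size))
    surrogate = base + noise @ w               # local linear approximation
    actual = model_fn(x[None, :] + noise)      # model on perturbed inputs
    return 1.0 - np.abs(actual - surrogate).mean()

def fidelity_gap(model_fn, explain_fn, X, groups):
    """Largest difference in mean fidelity across subgroups (0 = parity)."""
    means = {g: np.mean([local_fidelity(model_fn, explain_fn, x)
                         for x in X[groups == g]])
             for g in np.unique(groups)}
    return max(means.values()) - min(means.values()), means
```

A large gap between, say, gender subgroups would flag exactly the kind of disparity the paper studies; analogous measurements could be defined for stability, consistency, and sparsity.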

16 citations

Proceedings Article
07 Jun 2022
TL;DR: The authors found that, of the 1,000 most frequent words in the vocabulary, 77% are more closely associated with men than with women, providing evidence of a masculine default in the everyday language of the English-speaking world.
Abstract: Word embeddings are numeric representations of meaning derived from word co-occurrence statistics in corpora of human-produced texts. The statistical regularities in language corpora encode well-known social biases into word embeddings (e.g., the word vector for family is closer to the vector for women than to the vector for men). Although efforts have been made to mitigate bias in word embeddings, with the hope of improving fairness in downstream Natural Language Processing (NLP) applications, these efforts will remain limited until we more deeply understand the multiple (and often subtle) ways that social biases can be reflected in word embeddings. Here, we focus on gender to provide a comprehensive analysis of group-based biases in widely-used static English word embeddings trained on internet corpora (GloVe 2014, fastText 2017). While some previous research has helped uncover biases in specific semantic associations between a group and a target domain (e.g., women and family), using the Single-Category Word Embedding Association Test, we demonstrate the widespread prevalence of gender biases that also show differences in: (a) frequencies of words associated with men versus women; (b) part-of-speech tags in gender-associated words; (c) semantic categories in gender-associated words; and (d) valence, arousal, and dominance in gender-associated words. We leave the analysis of non-binary gender to future work due to the challenges in accurate group representation caused by limitations inherent in the data. First, in terms of word frequency: we find that, of the 1,000 most frequent words in the vocabulary, 77% are more associated with men than women, providing direct evidence of a masculine default in the everyday language of the English-speaking world. Second, turning to parts of speech: the top male-associated words are typically verbs (e.g., fight, overpower) while the top female-associated words are typically adjectives and adverbs (e.g., giving, emotionally); gender biases in embeddings thus also permeate parts of speech. Third, for semantic categories, we perform bottom-up cluster analyses of the top 1,000 words associated with each gender. The top male-associated concepts include roles and domains of big tech, engineering, religion, sports, and violence; in contrast, the top female-associated concepts are less focused on roles, including, instead, female-specific slurs and sexual content, as well as appearance and kitchen terms. Fourth, using human ratings of word valence, arousal, and dominance from a ~20,000-word lexicon, we find that male-associated words are higher on arousal and dominance, while female-associated words are higher on valence. Ultimately, these findings move the study of gender bias in word embeddings beyond the basic investigation of semantic relationships to also study gender differences in multiple manifestations in text. Given the central role of word embeddings in NLP applications, it is essential to more comprehensively document where biases exist and may remain hidden, allowing them to persist without our awareness throughout large text corpora.
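As a concrete illustration of the Single-Category Word Embedding Association Test style of measurement, the sketch below scores how male- versus female-associated a single word vector is, in effect-size units. The abbreviated attribute word lists and the helper names are assumptions for illustration, not the paper's exact materials:

```python
# Illustrative sketch of a single-category association score in the spirit
# of SC-WEAT: how much closer a word vector sits to male attribute words
# than to female attribute words, expressed as a Cohen's-d-style effect size.
# The attribute lists are abbreviated examples, not the paper's stimuli.
import numpy as np

MALE_ATTRS = ["he", "him", "his", "man", "male", "boy", "brother", "son"]
FEMALE_ATTRS = ["she", "her", "hers", "woman", "female", "girl", "sister", "daughter"]

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def gender_association(word, emb):
    """Positive = more male-associated; `emb` maps tokens to vectors,
    e.g. a dict of loaded GloVe or fastText vectors."""
    w = emb[word]
    male = np.array([cosine(w, emb[a]) for a in MALE_ATTRS])
    female = np.array([cosine(w, emb[a]) for a in FEMALE_ATTRS])
    pooled = np.concatenate([male, female])
    return (male.mean() - female.mean()) / pooled.std(ddof=1)
```

Scoring the 1,000 most frequent vocabulary words this way and counting positive values would be the kind of tally behind the 77% masculine-default finding reported above.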

15 citations

Proceedings Article
26 Jul 2022
TL;DR: A heuristic map matching human cognitive biases with explainability techniques from the XAI literature, structured around XAI-aided decision-making, is presented, along with directions for future XAI systems to better align with people's cognitive processes.
Abstract: The field of eXplainable Artificial Intelligence (XAI) aims to bring transparency to complex AI systems. Although it is usually considered an essentially technical field, effort has been made recently to better understand users' human explanation methods and cognitive constraints. Despite these advances, the community lacks a general vision of what and how cognitive biases affect explainability systems. To address this gap, we present a heuristic map which matches human cognitive biases with explainability techniques from the XAI literature, structured around XAI-aided decision-making. We identify four main ways cognitive biases affect or are affected by XAI systems: 1) cognitive biases affect how XAI methods are designed, 2) they can distort how XAI techniques are evaluated in user studies, 3) some cognitive biases can be successfully mitigated by XAI techniques, and, on the contrary, 4) some cognitive biases can be exacerbated by XAI techniques. We construct this heuristic map through the systematic review of 37 papers (drawn from a corpus of 285) that reveal cognitive biases in XAI systems, including the explainability method and the user and task types in which they arise. We use the findings from our review to structure directions for future XAI systems to better align with people's cognitive processes.
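The heuristic map is essentially a structured mapping from cognitive biases to XAI techniques and to one of the four interaction types listed in the abstract. A hypothetical sketch of such a structure is below; the interaction types come from the abstract, but the two sample entries are illustrative placeholders, not pairings taken from the paper:

```python
# Hypothetical sketch of how a bias-to-XAI heuristic map could be encoded.
# The four interaction types mirror the abstract; the example entries are
# placeholders, not the paper's actual bias/technique pairings.
from enum import Enum

class Interaction(Enum):
    SHAPES_DESIGN = 1        # bias affects how XAI methods are designed
    DISTORTS_EVALUATION = 2  # bias skews user-study evaluations of XAI
    MITIGATED_BY_XAI = 3     # bias can be reduced by XAI techniques
    EXACERBATED_BY_XAI = 4   # bias can be amplified by XAI techniques

HEURISTIC_MAP = {
    "anchoring": {"techniques": ["feature attribution"],
                  "interaction": Interaction.EXACERBATED_BY_XAI},
    "confirmation bias": {"techniques": ["counterfactual explanations"],
                          "interaction": Interaction.MITIGATED_BY_XAI},
}
```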

14 citations

Proceedings Article
10 May 2022
TL;DR: An initial synthesis of existing XAI studies via statistical meta-analysis finds a statistically positive impact of XAI on users' performance and indicates that human-AI decision-making tends to yield better task performance on text data.
Abstract: Research in artificial intelligence (AI)-assisted decision-making is experiencing tremendous growth, with a constantly rising number of studies evaluating the effect of AI, with and without techniques from the field of explainable AI (XAI), on human decision-making performance. However, as tasks and experimental setups vary due to different objectives, some studies report improved user decision-making performance through XAI, while others report only negligible effects. Therefore, in this article, we present an initial synthesis of existing XAI studies, using a statistical meta-analysis to derive implications across this body of research. We observe a statistically positive impact of XAI on users' performance. Additionally, first results indicate that human-AI decision-making tends to yield better task performance on text data. However, we find no effect of explanations on users' performance compared to sole AI predictions. Our initial synthesis gives rise to future research investigating the underlying causes and contributes to further developing algorithms that effectively benefit human decision-makers by providing meaningful explanations.
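The pooling step of such a meta-analysis can be sketched with a standard random-effects estimator (DerSimonian-Laird). The abstract does not specify the paper's exact estimator, so the choice below and the example numbers are assumptions:

```python
# Illustrative sketch of pooling per-study effect sizes, as in a statistical
# meta-analysis of XAI user studies. DerSimonian-Laird random-effects model;
# the paper's actual estimator may differ, and the inputs are made up.
import numpy as np

def random_effects_pool(effects, variances):
    """Return the pooled effect and its standard error across studies."""
    e, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                  # fixed-effect weights
    fixed = (w * e).sum() / w.sum()              # fixed-effect estimate
    q = (w * (e - fixed) ** 2).sum()             # Cochran's Q heterogeneity
    df = len(e) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = (w_star * e).sum() / w_star.sum()
    return pooled, np.sqrt(1.0 / w_star.sum())

# Example: three hypothetical studies comparing XAI-assisted vs. unaided users.
pooled, se = random_effects_pool([0.30, 0.10, 0.25], [0.02, 0.03, 0.015])
print(f"pooled effect = {pooled:.2f} +/- {1.96 * se:.2f}")  # 95% CI half-width
```

A pooled effect whose confidence interval excludes zero would correspond to the "statistically positive impact" the abstract reports.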

13 citations

Performance Metrics

No. of papers from the Conference in previous years:

Year    Papers
2022    106