
Showing papers by "Florian Schaub published in 2023"


Proceedings ArticleDOI
19 Apr 2023
TL;DR: This paper found that participants viewed emotion AI as a deep violation of the privacy of workers' sensitive emotional information, and that workers may engage in emotional labor as a mechanism to preserve privacy over their emotions.
Abstract: Workplaces are increasingly adopting emotion AI, promising benefits to organizations. However, little is known about the perceptions and experiences of workers subject to emotion AI in the workplace. Our interview study with (n=15) US adult workers addresses this gap, finding that (1) participants viewed emotion AI as a deep violation of the privacy of workers’ sensitive emotional information; (2) emotion AI may function to enforce workers’ compliance with emotional labor expectations, and that workers may engage in emotional labor as a mechanism to preserve privacy over their emotions; (3) workers may be exposed to a wide range of harms as a consequence of emotion AI in the workplace. Findings reveal the need to recognize and define an individual right to what we introduce as emotional privacy, as well as raise important research and policy questions on how to protect and preserve emotional privacy within and beyond the workplace.

4 citations


DOI
01 May 2023
TL;DR: This paper conducted a qualitative content analysis of 4,957 Reddit comments in 180 security- and privacy-related discussion threads from /r/homeautomation, a major Reddit smart home forum.
Abstract: Smart home technologies offer many benefits to users. Yet, they also carry complex security and privacy implications that users often struggle to assess and account for during adoption. To better understand users’ considerations and attitudes regarding smart home security and privacy, in particular how users develop them progressively, we conducted a qualitative content analysis of 4,957 Reddit comments in 180 security- and privacy-related discussion threads from /r/homeautomation, a major Reddit smart home forum. Our analysis reveals that users’ security and privacy attitudes, manifested in the levels of concern and degree to which they incorporate protective strategies, are shaped by multi-dimensional considerations. Users’ attitudes evolve according to changing contextual factors, such as adoption phases, and how they become aware of these factors. Further, we describe how online discourse about security and privacy risks and protections contributes to individual and collective attitude development. Based on our findings, we provide recommendations to improve smart home designs, support users’ attitude development, facilitate information exchange, and guide future research regarding smart home security and privacy.

3 citations


Journal ArticleDOI
TL;DR: In this paper, the authors discuss the main ethical, technical, and legal categories of privacy, which encompasses much more than just data protection, and provide recommendations on how eye-tracking technologies might mitigate privacy risks and in which cases the risks outweigh the benefits of the technology.
Abstract: What do you have to keep in mind when developing or using eye-tracking technologies regarding privacy? In this article we discuss the main ethical, technical, and legal categories of privacy, which is much more than just data protection. We additionally provide recommendations about how such technologies might mitigate privacy risks and in which cases the risks are higher than the benefits of the technology.

3 citations


Proceedings ArticleDOI
19 Apr 2023
TL;DR: In this paper, the authors iteratively designed ad control interfaces that varied in the setting's entry point (within ads, at the feed's top) and level of actionability, with high actionability directly surfacing links to specific advertisement settings, and low actionability pointing to general settings pages.
Abstract: Tech companies that rely on ads for business argue that users have control over their data via ad privacy settings. However, these ad settings are often hidden. This work aims to inform the design of findable ad controls and study their impact on users’ behavior and sentiment. We iteratively designed ad control interfaces that varied in the setting’s (1) entry point (within ads, at the feed’s top) and (2) level of actionability, with high actionability directly surfacing links to specific advertisement settings, and low actionability pointing to general settings pages (which is reminiscent of companies’ current approach to ad controls). We built a Chrome extension that augments Facebook with our experimental ad control interfaces and conducted a between-subjects online experiment with 110 participants. Results showed that entry points within ads or at the feed’s top, and high actionability interfaces, both increased Facebook ad settings’ findability and discoverability, as well as participants’ perceived usability of them. High actionability also reduced users’ effort in finding ad settings. Participants perceived high and low actionability as equally usable, which shows it is possible to design more actionable ad controls without overwhelming users. We conclude by emphasizing the importance of regulation to provide specific and research-informed requirements to companies on how to design usable ad controls.

2 citations


TL;DR: The authors conducted 17 semi-structured interviews addressing the following research questions: RQ1. What characteristics do people ascribe to security robots? RQ2. What expectations do people have about the function and role of security robots?
Abstract: Robots are increasingly being deployed as security agents helping law enforcement in spaces such as streets, parks, or shopping malls. Unfortunately, the deployment of security robots is not without problems and controversies. For example, the New York Police Department canceled its contract with Boston Dynamics in response to backlash from their use of Digidog, an autonomous robotic dog, which sparked fears in the public. However, it is unclear to what extent affected communities have been involved in the design and deployment process of robots. This is problematic because, without input from community members in the processes of design and deployment, security robots are likely to not satisfy the concerns or safety needs of real communities. To gain deeper insight into people’s perceptions of security robots—including both potential benefits and concerns—we conducted 17 semi-structured interviews addressing the following research questions: RQ1. What characteristics do people ascribe to security robots? RQ2. What expectations do people have about the function and role of security robots? RQ3. What are people’s attitudes toward the use of security robots? Our study offers several contributions to the existing literature on security robots.

Journal ArticleDOI
TL;DR: In this paper, the authors provide insights into individuals' awareness, perception, and responses to breaches that affect them through two online surveys: a main survey (n = 413) in which they presented participants with up to three breaches that affected them, and a follow-up survey (n = 108) in which they investigated whether the main study participants followed through with their intentions to act.
Abstract: Data breaches are prevalent. We provide novel insights into individuals’ awareness, perception, and responses to breaches that affect them through two online surveys: a main survey (n = 413) in which we presented participants with up to three breaches that affected them, and a follow-up survey (n = 108) in which we investigated whether the main study participants followed through with their intentions to act. Overall, 73% of participants were affected by at least one breach, but participants were unaware of 74% of breaches affecting them. While some reported intention to take action, most participants believed the breach would not impact them. We also found a sizeable intention-behavior gap. Participants did not follow through with their intention when they were apathetic about breaches, considered potential costs, forgot, or felt resigned about taking action. Our findings suggest that breached organizations should be held accountable for more proactively informing and protecting affected consumers.