Journal ArticleDOI

Human agency beliefs affect older adults' interaction behaviours and task performance when learning with computerised partners

01 Dec 2019 - Computers in Human Behavior (Pergamon) - Vol. 101, pp. 60-67
TL;DR: It is suggested that beliefs about agency affect how efficiently and how accurately older adults learn with technology, which has implications for computer-mediated support in aging.
About: This article is published in Computers in Human Behavior. The article was published on 2019-12-01 and is currently open access. It has received 5 citations to date. The article focuses on the topics: Collaborative learning & Task (project management).

Summary (2 min read)


Introduction

  • With a shift in the demographic structure of the population characterised by adults living longer, technology can play a key role in supporting aging in place (Haub & Yanagishita, 2011; Wiles, Leibing, Guberman, Reeve, & Allen, 2012; Elers et al., 2018).
  • This suggests that some aspects of social behaviour are not dependent on mental state beliefs regarding the interactive partner (i.e., whether they are a human or a computer), as understanding that a computer does not have emotions does not preclude people interacting with computers in a polite, social way (Branigan, 2003; Branigan, Pickering, Pearson, & McLean, 2010).
  • In the current study, the authors investigated whether beliefs about human agency have a direct impact on how older adults learn and recall information.
  • In both learning conditions, participants were in fact interacting with computer systems with synthetic and natural speech used to emulate computer and human interlocutors respectively.

Methods

  • Using a WoZ set-up allowed the creation of two directly comparable human and computer conditions where every aspect of the interaction was identical and constant, with the only manipulations being participants’ beliefs about their interaction partner, and whether information was presented using natural or synthetic speech.
  • This procedure continued until participants placed a card in the space or all four descriptions were presented.
  • In line with human Barrier Task research, in order to minimise inter-trial noise, the nine trials were collapsed into three trial bins, each representing three consecutive trials.
  • For Cliff’s delta, |d| < 0.33 indicated small effect sizes, 0.33 ≤ |d| < 0.47 indicated medium effect sizes, and |d| ≥ 0.47 indicated large effect sizes (illustrated in the sketch after this list).
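To make the trial binning and effect-size conventions above concrete, the following is a minimal Python sketch (not taken from the paper; the data values and function names are hypothetical) that collapses nine trials into three consecutive-trial bins and computes Cliff's delta using the thresholds described.

```python
# Illustrative sketch only: bin nine trials into three bins of three
# consecutive trials and compute Cliff's delta for two sets of observations.
import numpy as np

def bin_trials(trial_scores, bin_size=3):
    """Average consecutive trials into bins (e.g. 9 trials -> 3 bins of 3)."""
    scores = np.asarray(trial_scores, dtype=float)
    return scores.reshape(-1, bin_size).mean(axis=1)

def cliffs_delta(x, y):
    """Cliff's delta: P(x > y) - P(x < y) over all pairs of observations."""
    x, y = np.asarray(x), np.asarray(y)
    diffs = x[:, None] - y[None, :]
    return (np.sum(diffs > 0) - np.sum(diffs < 0)) / (x.size * y.size)

def label_effect(d):
    """Thresholds used above: |d| < 0.33 small, < 0.47 medium, otherwise large."""
    magnitude = abs(d)
    return "small" if magnitude < 0.33 else "medium" if magnitude < 0.47 else "large"

# Hypothetical turn counts across the nine trials of one condition.
turns = [12, 11, 10, 9, 9, 8, 8, 7, 7]
print(bin_trials(turns))                                  # approx. [11.0, 8.67, 7.33]
print(label_effect(cliffs_delta([5, 6, 7], [7, 8, 9])))   # "large"
```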

Results

  • Table 1 shows participants’ performance on the background neuropsychological tests, with participants performing within the expected range for cognitively healthy individuals in this age-range.
  • In terms of turns taken, participants used significantly fewer turns from trial bin 1 to trial bin 2, but this pattern did not continue from trial bin 2 to trial bin 3, where the decrease was not significant (see Table 2b).
  • Post-hoc contrasts were used to compare the number of turns taken in the human and computer conditions in the final trials to investigate whether participants interacted similarly in both conditions.
  • The Cliff’s delta effect size was calculated as -0.45, indicating a medium effect (an illustrative analysis sketch follows this list).
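As an illustration of how the post-hoc contrast above could be run, here is a brief Python sketch of a paired Wilcoxon signed-rank test on turn counts; the numbers are invented for the example and are not the study's data.

```python
# Illustrative sketch only: paired Wilcoxon signed-rank contrast between the
# human-partner and computer-partner conditions on final-trial turn counts.
from scipy.stats import wilcoxon

human_turns    = [6, 7, 5, 7, 6, 7, 5, 6, 7, 6, 5, 7]   # hypothetical data
computer_turns = [8, 9, 7, 8, 9, 8, 7, 9, 8, 7, 8, 9]   # hypothetical data

stat, p_value = wilcoxon(human_turns, computer_turns)
print(f"V = {stat:.0f}, p = {p_value:.3f}")
```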

Discussion

  • The current study investigated the effect that human agency beliefs have on interaction behaviours and learning and memory performance during a collaborative learning task.
  • This suggests that participants were taking longer to make their card selections but were not seeking additional interaction from the computer partner to assist with their choices.
  • Previous work comparing responses to instructions given by a human or a computer found participants’ responses were significantly shorter and less variable in their utterances when interacting with computers (Siegert, Bock, Wendemuth, Vlasenko, & Ohnemus, 2015; Tenbrink, Ross, Thomas, & Dethlefs, 2010).
  • Perceiving that a task is computer-based may result in additional difficulty compared to when older adults believe they are interacting with a human partner.
  • These results have largely been found in young and middle-aged adults, and may not reflect older adults’ experiences.


Citations
Journal ArticleDOI
TL;DR: In this article, the authors investigated the factors influencing the adoption by hospital patients of the medical apps that hospitals have begun to provide to improve the efficiency of their services.
Abstract: Hospitals have begun to provide their own apps to improve the efficiency of hospital services. This research investigated the factors influencing the adoption of medical apps by hospital patients, ...

4 citations

Proceedings ArticleDOI
05 May 2021
TL;DR: In this article, the authors analyzed the use of conversational agents and the main types of personalization used when interacting with the elderly, and found that these solutions may have a positive impact on the elderly's lives.
Abstract: Social isolation and loneliness are problems faced by the elderly that might be aggravated due to quarantine during the coronavirus pandemic. This motivates demand for automatic technological solutions, such as chatbots, to address the problem and establish a collaborative interaction between the elderly and conversational agents. We analyze the use of conversational agents and the main types of personalization used when interacting with the elderly. We mapped 53 papers in 5 sources to answer 5 research questions that guided our study. Our findings show these solutions may have a positive impact on the elderly's lives.

3 citations

Proceedings Article
01 Jan 2020
TL;DR: Towards human-based models of behaviour in social robots: Exploring age-related differences in the processing of gaze cues in human-robot interaction.
Abstract: Towards human-based models of behaviour in social robots: Exploring age-related differences in the processing of gaze cues in human-robot interaction

3 citations


Cites background from "Human agency beliefs affect older a..."

  • ...Crompton and MacPherson [CM19] reported an increase in accuracy and decrease in completion time for a collaborative task with a non-embodied agent when it used a natural human voice and the OA thought of the system as a human person....


Journal ArticleDOI
TL;DR: The authors investigated whether speakers converge with their conversational partner on lexical choices (i.e., lexical alignment) to the same extent when they believed the partner was a human (human-human interaction, HHI) and when they believed it was a computer (human-computer interaction, HCI), and whether the strength of lexical alignment is moderated by individuals' social skills in the same fashion in HHI and HCI.

2 citations

Book ChapterDOI
TL;DR: In this article, the authors examined the effects of an e-learning instructor's identity and voice cues on the instructor's social ratings, learners' cognitive load, and learning performance.
Abstract: An instructor in an e-learning video can identify as a human or a computer agent. Relatedly, they can project a human-recorded voice or a machine-voice generated from a classical text-to-speech engine. This study examines the effects of an e-learning instructor’s identity and voice cues on an instructor’s social ratings, learners’ cognitive load, and learning performance. A between-subjects laboratory experiment was conducted where university undergraduates (n = 108) interacted with either one of the four e-learning videos featuring different pairings of an instructor’s identity (human versus agent) and voice (human-voice versus machine-voice) cues that delivered a lesson on programming algorithms. The findings affirmed the voice effects in multimedia learning in that the human-voice enhanced social and learning outcomes more than the machine-voice, irrespective of the identity cues. Credibility ratings were diminished when an instructor identified as a computer agent projected a human-voice rather than a machine-voice; additionally, endowing a human-identified instructor with a machine-voice prompted learners to assign lower intrinsic cognitive load. These observations imply the congruence/incongruence effects of identity-voice cue pairings on social and cognitive load outcomes. Theoretical and practical implications are discussed in this paper.
References
Journal ArticleDOI
TL;DR: In this paper, a different approach to problems of multiple significance testing is presented, which calls for controlling the expected proportion of falsely rejected hypotheses, the false discovery rate; this error rate is equivalent to the FWER when all hypotheses are true but is smaller otherwise.
Abstract: SUMMARY The common approach to the multiplicity problem calls for controlling the familywise error rate (FWER). This approach, though, has faults, and we point out a few. A different approach to problems of multiple significance testing is presented. It calls for controlling the expected proportion of falsely rejected hypotheses, the false discovery rate. This error rate is equivalent to the FWER when all hypotheses are true but is smaller otherwise. Therefore, in problems where the control of the false discovery rate rather than that of the FWER is desired, there is potential for a gain in power. A simple sequential Bonferroni-type procedure is proved to control the false discovery rate for independent test statistics, and a simulation study shows that the gain in power is substantial. The use of the new procedure and the appropriateness of the criterion are illustrated with examples.

83,420 citations
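As a compact illustration of the step-up procedure this reference describes, here is a short Python sketch of Benjamini-Hochberg false discovery rate control; the example p-values are hypothetical.

```python
# Benjamini-Hochberg step-up procedure: reject the k smallest p-values, where
# k is the largest index with p_(k) <= (k / m) * alpha.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean array marking which hypotheses are rejected."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                         # indices sorting p ascending
    thresholds = alpha * np.arange(1, m + 1) / m  # k/m * alpha for k = 1..m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])            # largest k meeting the bound
        rejected[order[:k + 1]] = True
    return rejected

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.740]))
```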

Journal ArticleDOI
TL;DR: In this article, a model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms, and the formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters.
Abstract: Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.

50,607 citations
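The lmer interface described above belongs to R's lme4 package; as a loose analog only (not the article's own analysis), Python's statsmodels exposes a similar formula-based interface for linear mixed-effects models. The dataset and variable names below are hypothetical.

```python
# Illustrative sketch only: a random-intercept mixed model with a fixed effect
# of partner condition, fit to synthetic data via a formula interface.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_trials = 24, 9
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials * 2),
    "condition": np.tile(np.repeat(["human", "computer"], n_trials), n_participants),
    "time": rng.normal(60, 10, n_participants * n_trials * 2),
})

model = smf.mixedlm("time ~ condition", data=df, groups=df["participant"])
result = model.fit()   # REML by default, analogous to lmer's default
print(result.summary())
```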

Book
01 Jan 1996
TL;DR: This book presents the media equation, the finding that people treat computers, television, and new media like real people and places, responding to them socially and naturally.
Abstract: Part I. Introduction: 1. The media equation Part II. Media and Manners: 2. Politeness 3. Interpersonal distance 4. Flattery 5. Judging others and ourselves Part III. Media and Personality: 6. Personality of characters 7. Personality of interfaces 8. Imitating a personality Part IV. Media and emotion: 9. Good versus bad 10. Negativity 11. Arousal Part V. Media and Social Roles: 12. Specialists 13. Teammates 14. Gender 15. Voices 16. Source orientation Part VI. Media and Form: 17. Image size 18. Fidelity 19. Synchrony 20. Motion 21. Scene changes 22. Subliminal images Part VII. Final Words: 23. Conclusions about the media equation References.

4,690 citations

Journal ArticleDOI
TL;DR: A mechanistic account of dialogue, the interactive alignment account, is proposed and used to derive a number of predictions about basic language processes, and the need for a grammatical framework that is designed to deal with language in dialogue rather than monologue is considered.
Abstract: Traditional mechanistic accounts of language processing derive almost entirely from the study of monologue. Yet, the most natural and basic form of language use is dialogue. As a result, these accounts may only offer limited theories of the mechanisms that underlie language processing in general. We propose a mechanistic account of dialogue, the interactive alignment account, and use it to derive a number of predictions about basic language processes. The account assumes that, in dialogue, the linguistic representations employed by the interlocutors become aligned at many levels, as a result of a largely automatic process. This process greatly simplifies production and comprehension in dialogue. After considering the evidence for the interactive alignment model, we concentrate on three aspects of processing that follow from it. It makes use of a simple interactive inference mechanism, enables the development of local dialogue routines that greatly simplify language processing, and explains the origins of self-monitoring in production. We consider the need for a grammatical framework that is designed to deal with language in dialogue rather than monologue, and discuss a range of implications of the account.

2,222 citations

Journal ArticleDOI
TL;DR: Following Langer (1992), the authors review a series of experimental studies demonstrating that individuals mindlessly apply social rules and expectations to computers, exhibiting overlearned social behaviors such as politeness and reciprocity toward them.
Abstract: Following Langer (1992), this article reviews a series of experimental studies that demonstrate that individuals mindlessly apply social rules and expectations to computers. The first set of studies illustrates how individuals overuse human social categories, applying gender stereotypes to computers and ethnically identifying with computer agents. The second set demonstrates that people exhibit overlearned social behaviors such as politeness and reciprocity toward computers. In the third set of studies, premature cognitive commitments are demonstrated: a specialist television set is perceived as providing better content than a generalist television set. A final series of studies demonstrates the depth of social responses with respect to computer "personality." Alternative explanations for these findings, such as anthropomorphism and intentional social responses, cannot explain the results. We conclude with an agenda for future research. Computer users approach the personal computer in many different ways. Experienced word processors move smoothly from keyboard to mouse to menu, mixing prose and commands to the computer automatically; the distinction between the hand and the tool blurs (Heidegger, 1977; Winograd & Flores, 1987). Novices cautiously strike each key, fearing that one false move will initiate an uncontrollable series of unwanted events. Game players view computers as

2,167 citations

Frequently Asked Questions (13)
Q1. What are the contributions in this paper?

For instance, this paper found that older adults are more likely to become anxious around computers, and computer anxiety is an important predictor of computer use. 

There may be differences in the recall of this information after a longer delay, and future research should focus on the influence of human and computer learning partners on longer-term recall. Future studies may focus on whether the approachability and friendliness of perceived human interlocutors has a role in how well older adults interact and learn. Despite these limitations, their results indicate that beliefs about agency play an important role in human-computer dialogue and highlight the need for future research in this area. 

In line with human Barrier Task research, in order to minimise inter-trial noise, the nine trials were collapsed into three trial bins, each representing three consecutive trials. 

The interaction between the early trials and condition suggests that while initial interactions with the computer partner are quicker, participants show a greater overall decrease in the time taken to complete trials in the human partner condition. 

Higher levels of computer literacy and internet use in older adults are significant predictors of psychological well-being, reduced loneliness, and higher life satisfaction (González, Ramírez & Viadel, 2015; Heo, Chun, Lee, Lee & Kim, 2015; Gardiner, Geldenhuys & Gott, 2018). 

In a within-subjects design, twenty-four older adults aged 60-85 years completed a collaborative learning task with both the human and computer systems. 

It is also important to note that the social manner of the computer may play a role in how people interact and learn from them, and increased computer sociability may create a more human-like alliance with an agent (Vardoulakis et al., 2012). 

Participants believed that they were interacting with a human partner in one condition and a computer partner in the other condition, and their interactive behaviours, performance, and later recall were assessed. 

As beliefs about agency have an impact on how older adults interact with and learn from systems, researchers and software designers should take this into account when creating systems designed to interact with and assist older adults. 

Advanced computer systems are now relatively inexpensive and therefore increasingly used by people, organisations, and corporations (Caruana, Spirou, & Brock, 2017). 

A Wilcoxon signed-rank test (V = 17, p < 0.05, d = 0.55) revealed that, one hour later, participants recalled significantly more tangram descriptions in the human condition compared to the computer condition. 

The Computerised Barrier Task yields two dependent variables relating to interaction with the system: time taken to complete the task and number of interactive turns taken, aligning with previous Barrier Task research (Derksen et al., 2015). 

The four most commonly used phrases to describe each card were accessible as descriptors, with the least common of the four presented initially.