
Showing papers in "International Journal of Human-Computer Studies / International Journal of Man-Machine Studies in 2018"


Journal ArticleDOI
TL;DR: This research investigated and verified a four-factor structure of the UES, presents revised long and short form (UES-SF) versions of the scale, and offers guidance for researchers interested in adopting the UES and UES-SF in their own studies.
Abstract: User engagement (UE) and its measurement have been of increasing interest in human-computer interaction (HCI). The User Engagement Scale (UES) is one tool developed to measure UE, and has been used in a variety of digital domains. The original UES consisted of 31 items and purported to measure six dimensions of engagement: aesthetic appeal, focused attention, novelty, perceived usability, felt involvement, and endurability. A recent synthesis of the literature questioned the original six-factor structure. Further, the ways in which the UES has been implemented in studies suggest there may be a need for a briefer version of the questionnaire and more effective documentation to guide its use and analysis. This research investigated and verified a four-factor structure of the UES and proposed a Short Form (SF). We employed contemporary statistical tools that were unavailable during the UES’ development to re-analyze the original data, consisting of 427 and 779 valid responses across two studies, and examined new data (N=344) gathered as part of a three-year digital library project. In this paper we detail our analyses, present revised long and short form (SF) versions of the UES, and offer guidance for researchers interested in adopting the UES and UES-SF in their own studies.
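The abstract turns on questionnaire reliability and dimensionality; a standard first check when revising or shortening a scale like the UES is the internal consistency of each subscale, commonly summarized with Cronbach's alpha. The sketch below is illustrative only, using made-up ratings rather than the paper's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up data: 6 respondents rating a 3-item subscale on a 1-5 Likert scale.
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 5],
    [1, 2, 1],
])
alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # → 0.95
```

Values above roughly 0.7–0.8 are conventionally read as acceptable internal consistency, which is one reason short forms are checked subscale by subscale rather than only as a whole.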

406 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide factor-analytic validation of two of the most commonly used player experience scales, the Player Experience of Need Satisfaction (PENS) and Game Experience Questionnaire (GEQ).
Highlights:
• Popular measures of videogame player experience typically have not been empirically validated.
• We provide factor-analytic validation of two of the most commonly used player experience scales.
• The theorised structure of the GEQ is partially supported; a revised five-factor structure is proposed.
• The theorised structure of the PENS is largely supported, but we suggest combining two subscales.

Abstract: Accurate measurement of the player experience in videogames is key to understanding the impacts of videogame play, designing and developing engaging videogames, and effectively applying game design principles in other fields. A large number of player experience questionnaires are available, but in most cases empirical validation of the scales is limited or absent. Two of the most commonly used scales are the Player Experience of Need Satisfaction (PENS) and the Game Experience Questionnaire (GEQ). Both scales were developed using a rational-theoretical approach, but neither scale has had formal factor-analytic studies published, limiting our capacity to judge the empirical validity of the scales. We present detailed exploratory and confirmatory factor analyses of both scales based on responses from a sample (n=571) of videogame players. The GEQ is partially supported (using a revised factor structure); the PENS is largely supported (with a more minor revision of the factor structure). We provide suggestions for the most effective use of both scales in future research.
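The factor-analytic validation described above starts from the item correlation matrix; one common (if coarse) heuristic for how many factors to retain in the exploratory step is the Kaiser criterion, which counts eigenvalues of that matrix greater than 1. A hypothetical sketch with simulated two-factor data, not the paper's responses or its actual method:

```python
import numpy as np

def kaiser_factor_count(responses: np.ndarray) -> int:
    """Count eigenvalues > 1 of the item correlation matrix (Kaiser criterion)."""
    corr = np.corrcoef(responses, rowvar=False)   # items are columns
    eigenvalues = np.linalg.eigvalsh(corr)        # symmetric matrix -> real eigenvalues
    return int((eigenvalues > 1.0).sum())

# Hypothetical responses: two latent factors driving six items, plus noise.
rng = np.random.default_rng(0)
n = 500
f1, f2 = rng.normal(size=(2, n))
noise = 0.5 * rng.normal(size=(n, 6))
items = np.column_stack([f1, f1, f1, f2, f2, f2]) + noise
print(kaiser_factor_count(items))  # expect 2 with this loading structure
```

In practice, exploratory analyses like those in the paper combine such heuristics with parallel analysis and model fit from confirmatory factor analysis rather than relying on eigenvalue counts alone.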

110 citations


Journal ArticleDOI
TL;DR: This research uses a mixed methods approach to explore employee susceptibility to targeted phishing emails, known as spear phishing, and demonstrates that the presence of authority cues increased the likelihood that a user would click a suspicious link contained in an email.
Abstract: Phishing emails provide a means to infiltrate the technical systems of organisations by encouraging employees to click on malicious links or attachments. Despite the use of awareness campaigns and phishing simulations, employees remain vulnerable to phishing emails. The present research uses a mixed methods approach to explore employee susceptibility to targeted phishing emails, known as spear phishing. In study one, nine spear phishing simulation emails sent to 62,000 employees over a six-week period were rated according to the presence of authority and urgency influence techniques. Results demonstrated that the presence of authority cues increased the likelihood that a user would click a suspicious link contained in an email. In study two, six focus groups were conducted in a second organisation to explore whether additional factors within the work environment impact employee susceptibility to spear phishing. We discuss these factors in relation to current theoretical approaches and provide implications for user communities.

105 citations


Journal ArticleDOI
TL;DR: This work focuses on how introducing an AR authoring tool accessible to cultural heritage experts affects the development process, and explores the relationship between AR technology and storytelling and the feasibility of using AR in literary museums with adult and senior visitors who have little experience with AR technologies.
Abstract: This work describes the design and evaluation of an augmented reality experience using storytelling techniques, aimed at visitors to a literary museum and city tour focused on the famous Italian novelist Italo Svevo, who lived in Trieste between the 19th and 20th centuries. The project was initiated by the Svevo Museum’s management in order to augment the space of the small museum, increase the accessibility of its collections, and enhance the experience of its visitors, who are mainly adults and seniors. The project provided the opportunity to explore different research questions. In particular, this paper focuses on how introducing an AR authoring tool accessible to cultural heritage experts affects the development process. It also explores the relationship between AR technology and storytelling and the feasibility of using AR in literary museums with adult and senior visitors who have little experience with AR technologies.

87 citations


Journal ArticleDOI
TL;DR: Evidence for two different halo effects was found: on the one hand, visual aesthetics influenced perceived usability in the beginning, and on the other hand, the usability of the device impacted the perceived visual attractiveness and emotional responses at later stages.
Highlights:
• We examine changes in product-related perceptions and emotions at early stages and for short-time usage.
• Before any interaction takes place, the visual aesthetics of the device influences perceived usability.
• Already after a short period of interacting with the product, this influence vanishes and system usability starts affecting perceived visual attractiveness and emotions.
• To account for both influences, we distinguish between a hedonic halo effect (beautiful is usable) and a pragmatic halo effect (usable gets beautiful).
• Based on the results, we propose that a heuristic may bias ratings of usability in the beginning, while emotions may bias ratings of aesthetics at later stages.

Abstract: User Experience (UX) has emerged as a comprehensive concept which provides a holistic perspective on users' interaction with technology. It can be characterized as a multidimensional phenomenon that comprises both the perception of different product qualities and the emotions that arise while using a product. The interrelations of these components are described in the Component Model of User Experience (CUE model), which serves as the theoretical basis for our experiment. UX can be investigated in different phases of usage. In our experiment, we examined product perceptions and emotions in early phases and for short-time usage. Sixty participants used different versions of mobile digital audio players that were systematically varied with respect to visual aesthetics and usability. Essential aspects of UX, i.e., perceptions of visual attractiveness and usability, as well as emotional responses, were measured at three stages: before interacting with the device, after a short exploration (2 min), and after working with the system for a short time (15 min) to solve a given set of tasks. Data were analysed using a 2×2×3 mixed MANCOVA.
The results of the experiment show that the influences of visual aesthetics and usability on quality perceptions and emotions change during these early stages. Moreover, evidence for two different halo effects was found: on the one hand, visual aesthetics influenced perceived usability in the beginning; on the other hand, the usability of the device impacted perceived visual attractiveness and emotional responses at later stages. To account for these findings, we suggest distinguishing a hedonic halo effect from a pragmatic one. Based on the results for both effects, we propose that two mechanisms may be responsible for the effects during short-time usage, one of them cognitive in nature, the other emotional.

84 citations


Journal ArticleDOI
TL;DR: The analysis showed that mixed-reality interaction realms amplified the effects of human cognitive style on game-specific interaction behaviour and visual behaviour, further supporting the added value of incorporating human cognitive factors at both design and run-time.
Abstract: Mixed-reality environments introduce innovative human-computer interaction paradigms, assisted by enhanced visual content presentation, which require end-users to perform demanding cognitive tasks related to visual attention, search, processing, and comprehension. In such visually enriched interaction realms, individual differences in perception and visual information processing might affect users’ behaviour and immersion, given that such effects are known to exist in conventional computer environments, such as desktop or mobile. In an attempt to shed light on whether, how, and why such effects persist within mixed-reality contexts, we conducted a between-subjects eye-tracking study (N=73) in which users interacted within either a conventional or a mixed-reality technological context, and adopted an accredited cognitive style theory to interpret the results. The analysis showed that mixed-reality interaction realms amplified the effects of human cognitive style on game-specific interaction behaviour and visual behaviour. The findings further support the added value of incorporating human cognitive factors at both design and run-time, aiming to provide adaptive and personalised features to end-users within mixed-reality interaction contexts. Such practical implications are also discussed in this paper.

81 citations


Journal ArticleDOI
TL;DR: This study tested a social network consisting of multiple applications with linear navigation as a digital literacy method for the elderly in rural areas. One of the most frequent emotions at the beginning of the ICT sessions was “fear”, but continued use of the system improved the users’ perceptions of their own capacity to handle ICTs and their interest in ICTs in general.
Abstract: Information and Communication Technologies (ICTs) have considerably increased the available information and communication channels, favoring the emergence of new models of social relations, such as social networks. However, for elderly users whose learning has traditionally been based on linear models of information such as textbooks, unfamiliarity with the Internet can be a barrier. Moreover, elderly people living in rural communities face a lack of telecommunication infrastructure, which increases their difficulties in accessing ICTs. The aim of this study is to test a social network consisting of multiple applications with linear navigation as a digital literacy method for the elderly in rural areas. A sample of 46 participants aged 60–76 with heterogeneous previous ICT experience took part in the study. They performed eight standardized sessions in an Elderly Leisure Center. Results showed differences in perceived usefulness between users with high and low ICT experience. After eight training sessions, the majority of the participants were able to use all the system applications independently, and positive results were obtained on the variables measured, i.e., learnability, sense of control over the system, ability to use the system, orientation, efficiency, accessible design, perceived ease, perceived usefulness, and intention to use. Participants with previous experience with other ICT methods preferred the linear navigation method because they found it easier than other ICTs. The results also showed interaction differences when touch screens were used. Qualitative results showed that one of the most frequent emotions at the beginning of the ICT sessions was “fear” (of breaking the computer or of making fools of themselves), but continued use of the system improved the users’ perceptions of their own capacity to handle ICTs and their interest in ICTs in general.
The main contribution of this work consists of exploring the usefulness of linear navigation and social network systems in the context of digital literacy for elderly users in rural areas.

56 citations


Journal ArticleDOI
TL;DR: This paper investigated the brain activity of Software Engineers (SEngs) while performing two distinct but related mental tasks: understanding and inspecting code for syntax errors and built a model of subjective difficulty based on the recorded brainwave patterns.
Abstract: This paper provides a proof of concept for the use of wearable technology, specifically wearable electroencephalography (EEG), in the field of Empirical Software Engineering. In particular, we investigated the brain activity of Software Engineers (SEngs) while performing two distinct but related mental tasks: understanding code and inspecting code for syntax errors. By comparing the emerging EEG patterns of activity and neural synchrony, we identified brain signatures that are specific to code comprehension. Moreover, using the programmers’ ratings of the difficulty of each code snippet shown, we identified neural correlates of subjective difficulty during code comprehension. Finally, we attempted to build a model of subjective difficulty based on the recorded brainwave patterns. The reported results show promise for novel approaches to programmer training and education. Findings of this kind may eventually lead to technical and methodological improvements in various aspects of software development, such as programming languages, building platforms for teams, and teamworking schemes.

55 citations


Journal ArticleDOI
TL;DR: This study provides cogent arguments for improving the usability of websites for users with blindness through information filtering, based on three experiments in which seventy-six participants with blindness performed tasks on websites that either did or did not filter irrelevant and redundant information.
Abstract: Accessibility norms for the Web are based on the principle that everybody should have access to the same information. Applying these norms enables the oralization of all visual information by the screen readers used by people with blindness. However, compliance with accessibility norms does not guarantee that users with blindness can reach their goals with a reasonable amount of time and effort. To improve website usability, it is necessary to take into account the specific needs of users. A previous study revealed that a major need of users with blindness is to reach the information relevant to the task quickly, by filtering out redundant and irrelevant information. We conducted three experiments in which seventy-six participants with blindness performed tasks on websites that either did or did not filter irrelevant and redundant information. Cognitive load was assessed using the dual-task paradigm and the NASA-RTLX questionnaire. The results showed a substantial benefit of information filtering for participants' cognitive load, performance, and satisfaction. Thus, this study provides cogent arguments for improving the usability of websites for users with blindness through information filtering.
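The NASA-RTLX mentioned above is the "raw" variant of the NASA-TLX: the workload score is simply the unweighted mean of the six subscale ratings, skipping the pairwise-comparison weighting of the original instrument. A minimal sketch with made-up ratings; the condition names and numbers are illustrative, not taken from the study:

```python
# NASA-RTLX: unweighted mean of the six workload subscale ratings (0-100 each).
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def raw_tlx(ratings: dict) -> float:
    """Raw TLX workload score: the unweighted mean of the six subscales."""
    missing = [s for s in SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

# Made-up ratings for one participant on a filtered vs. unfiltered website.
filtered = {"mental": 30, "physical": 5, "temporal": 20,
            "performance": 15, "effort": 25, "frustration": 10}
unfiltered = {"mental": 75, "physical": 10, "temporal": 60,
              "performance": 55, "effort": 70, "frustration": 80}
print(raw_tlx(filtered))    # → 17.5
print(raw_tlx(unfiltered))  # higher score = higher perceived workload
```

A lower RTLX score under the filtered condition would mirror the cognitive-load benefit the study reports.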

52 citations


Journal ArticleDOI
TL;DR: The results of experiments showed the potential for implementing TrailCare in real-life situations, allowing the use of ubiquitous technologies to support accessibility for wheelchair users.
Abstract: This article proposes a computational system to assist wheelchair users, improving accessibility through ubiquitous computing technologies. TrailCare uses indoor and outdoor location information to assist wheelchair users, recording their trails and providing context-aware assistance. Trails are historical records of users’ displacements that can be used to develop strategic accessibility solutions, such as security management and recommendation based on behavioural inferences. TrailCare’s contributions are the indoor/outdoor trail-aware strategy and its application to recommending contextualized accessibility resources. The system was implemented and integrated with a motorized wheelchair manufactured by a Brazilian company. The prototype is a complete and functional system based on one of the most widely used wheelchair models in Brazil. TrailCare was assessed through three practical experiments involving scenarios on a university campus. The first two experiments aimed to evaluate the system’s functionalities. They consisted of two scenarios that tested practical situations supported by the technologies of context awareness and indoor/outdoor trail awareness. The third experiment focused on evaluating the user experience with the system. It comprised a scenario that was followed by 10 wheelchair users, who were observed by researchers regarding usability aspects. The users also filled out a survey based on the Technology Acceptance Model (TAM). The survey comprised 10 statements, and the results for each are discussed. The experiments allowed us to learn 10 relevant lessons about technological and usability aspects of TrailCare, which are recorded in this article. The results also showed 96% acceptance for perceived ease of use and 98% for perceived usefulness. The results of the experiments showed the potential for deploying TrailCare in real-life situations, allowing the use of ubiquitous technologies to support accessibility for wheelchair users.

49 citations


Journal ArticleDOI
TL;DR: This paper derives general principles for ethical nudging and explains how researchers can use them to satisfy ethical requirements during nudge trials in information security and privacy; the guidelines also provide guidance to ethics review boards that are required to evaluate nudge-related research.
Abstract: There has recently been an upsurge of interest in the deployment of behavioural economics techniques in the information security and privacy domain. In this paper, we consider the nature of one particular intervention, the nudge, and the way it exercises its influence. We contemplate the ethical ramifications of nudging, in its broadest sense, deriving general principles for ethical nudging from the literature. We extrapolate these principles to the deployment of nudging in information security and privacy. Furthermore, we explain how researchers can use these guidelines to ensure that they satisfy the ethical requirements during nudge trials in information security and privacy. Our guidelines also provide guidance to ethics review boards that are required to evaluate nudge-related research.

Journal ArticleDOI
TL;DR: The results indicate benefits of complementing the guided visit by using projective AR to explore different layers of the learning experience, and by including collaborative activities based on embodied enactments to foster the understanding of historical contents that require emotional engagement and critical thinking.
Abstract: The design of interactive experiences for archaeological sites entails consideration of the particular characteristics and constraints of the exhibition space. Our aim is to address these challenges by exploring the potential of a recently emerging interaction paradigm called World-as-Support, which is based on projective Augmented Reality (AR). In this study, we present the design process of a virtual heritage experience for a bomb shelter built during the Spanish Civil War that currently belongs to the Barcelona History Museum. The goal of this study was twofold. First, we aimed to define the requirements for the design of a first prototype based on the World-as-Support interaction paradigm. Second, we carried out a study with a local school to evaluate the benefits of an educational experience based on this paradigm. Our results indicate benefits of complementing the guided visit: (1) by using projective AR to explore different layers of the learning experience; and (2) by including collaborative activities based on embodied enactments to foster the understanding of historical contents that require emotional engagement and critical thinking.

Journal ArticleDOI
TL;DR: A stage theory that distinguishes three sets of motivations for participation in UGC, using the theory of helping behaviour as a framework and integrating social movement theory, is proposed and tested.
Highlights:
• Motives for contributing content are argued to differ for participants at different stages.
• A stage theory that distinguishes three sets of motivations for participation is proposed.
• The theory is tested with a data set from the Wikimedia Editor Survey.
• Results support the premise that motives are not unitary but differ by stage.

Abstract: User-generated content (UGC) projects involve large numbers of mostly unpaid contributors collaborating to create content. Motivation for such contributions has been an active area of research. In prior research, motivation for contribution to UGC has been considered a single, static and individual phenomenon. In this paper, we argue that it is instead three separate but interrelated phenomena. Using the theory of helping behaviour as a framework and integrating social movement theory, we propose a stage theory that distinguishes three separate sets (initial, sustained and meta) of motivations for participation in UGC. We test this theory using a data set from a Wikimedia Editor Survey (Wikimedia Foundation, 2011). The results suggest several opportunities for further refinement of the theory but provide support for the main hypothesis, that different stages of contribution have distinct motives. The theory has implications for both researchers and practitioners who manage UGC projects.

Journal ArticleDOI
TL;DR: Although there was no main effect of socially responsive behavior on participants' subjective experience of rapport or connectedness with the agent, people with a high need to belong reported less willingness to engage in social activities after interacting with a virtual agent, but only if the agent displayed socially responsive behavior.
Abstract: Based on the idea that people's need to belong can be temporarily satisfied by "social snacking" (Gardner et al., 2005), in the sense that, in the absence of social interactions which adequately satisfy belongingness needs, surrogates can bridge lonely times, we tested whether interaction with a virtual agent can serve to ease the need for social contact. In a between-subjects experimental setting, 79 participants interacted with a virtual agent who either displayed socially responsive nonverbal behavior or did not. Results demonstrate that although there was no main effect of socially responsive behavior on participants' subjective experience of rapport or on connectedness with the agent, people with a high need to belong reported less willingness to engage in social activities after the interaction with the virtual agent, but only if the agent displayed socially responsive behavior.

Journal ArticleDOI
TL;DR: Results indicate that depicting virtual animal-like characters at the realism levels used in current video games causes negative reactions, just as the uncanny valley predicts for humanlike characters; design implications to avoid that sensation are suggested.
Abstract: As they approach a high degree of realism, android robots and virtual humans may evoke uncomfortable feelings. Due to technologies that increase the realism of human replicas, this phenomenon, known as the uncanny valley, has been frequently highlighted in recent years by researchers from various fields. Although virtual animals play an important role in video games and entertainment, the question of whether there is also an uncanny valley for virtual animals has been little investigated. This paper examines whether very realistic virtual pets tend to cause an aversion similar to that caused by humanlike characters. We conducted two empirical studies using cat renderings to investigate the effects of realism, stylization, and facial expressions of virtual cats on human perception. Through qualitative feedback, we gained deeper insight into the perception of realistic computer-generated animals. Our results indicate that depicting virtual animal-like characters at the realism levels used in current video games causes negative reactions, just as the uncanny valley predicts for humanlike characters. We derive design implications to avoid this sensation and suggest that virtual animals be given either a completely natural or a stylized appearance. We propose further examining the uncanny valley through the inclusion of artificial animals.

Journal ArticleDOI
TL;DR: The results show that the more time participants spend with the GPS-like map, the less accurate the spatial knowledge they manifest and the longer the paths they travel without GPS guidance, suggesting that extensive use of navigation aids may have a detrimental effect on a person's spatial learning.
Abstract: There is a vibrant debate about the consequences of mobile devices for our cognitive capabilities. The use of technology-guided navigation has been linked with poor spatial knowledge and wayfinding in both virtual and real-world experiments. Our goal was to investigate how the attention people pay to a GPS aid influences their navigation performance. We developed navigation tasks in a virtual city environment and measured participants’ eye movements during the experiment. We also tested their cognitive traits and interviewed them about their navigation confidence and experience. Our results show that the more time participants spend with the GPS-like map, the less accurate the spatial knowledge they manifest and the longer the paths they travel without GPS guidance. This poor performance cannot be explained by individual differences in cognitive skills. We also show that the amount of time spent with the GPS is related to participants’ subjective evaluation of their own navigation skills, with less confident navigators using the GPS more intensively. We therefore suggest that although extensive use of navigation aids may have a detrimental effect on a person’s spatial learning, its general use is modulated by the perception of one’s own navigation abilities.

Journal ArticleDOI
TL;DR: A personalisation framework to support complex scenarios that combine the physical, digital, and social dimensions of a visit is presented, and a number of case studies illustrate how adaptive techniques can be effectively complemented with interaction design, rich narratives, and visitors’ choice to create deeply personal experiences.
Abstract: Shaping personalization in a scenario of tangible, embedded and embodied interaction for cultural heritage involves challenges that go well beyond the requirements of implementing content personalization for portable mobile guides. Content is coupled with the physical experience of the objects and the space, and the facets of the context, whether personal or social, acquire a more prominent role. This paper presents a personalization framework to support complex scenarios that combine the physical, the digital, and the social dimensions of a visit. It is based on our experience in collaborating with curators and museum experts to understand and shape personalization in a way that is meaningful to them and to visitors alike, that is sustainable to implement, and that is effective in managing the complexity of context-awareness. The proposed approach decomposes personalization into multiple layers of complexity that blend customization on the visitor’s initiative or according to the visitor’s profile, system context-awareness, and automatic adaptivity computed by the system based on the visitor’s behaviour model. We use a number of case studies of implemented exhibitions where this approach was used to illustrate its many facets and how adaptive techniques can be effectively complemented with interaction design, rich narratives and visitors’ choice to create deeply personal experiences. Overarching reflections spanning the case studies and prototypes provide evidence of the viability of the proposed framework and illustrate its final effect on the user experience.

Journal ArticleDOI
TL;DR: The research results show that online users’ propensity to use privacy controls is likely to be driven by the type of personal information they are willing to share, and clearly show that compensation motivates online users to use privacy controls over data flows.
Abstract: The rise of new technologies has brought new challenges regarding the protection of personal data, as a vast amount of personal information is being published and shared. Because personal data are extremely valuable for many digital businesses, it is crucial to understand to what extent individuals want to exert control over the disclosure of their personal data. This paper aims to assess the factors that affect web users’ predisposition to exert control over personal data flows, using a dataset collected in France in 2014 that targets online users and their privacy. Our results demonstrate that those who are more likely to disclose personal data express a greater propensity to use privacy controls. Additionally, online users’ propensity to use privacy controls is likely to be driven by the type of personal information they are willing to share. Furthermore, our findings clearly show that compensation motivates online users to use privacy controls over data flows.

Journal ArticleDOI
TL;DR: This paper introduces a Cognitive Assistant named LIZA, developed as a pedagogical agent that aims to improve the reasoning and decision-making abilities of its users.
Abstract: Cognitive Assistants support humans and enhance their capabilities in solving a wide variety of complex tasks. In this paper, we introduce a Cognitive Assistant named LIZA that is developed as a pedagogical agent and aims to improve the reasoning and decision-making abilities of its users. This Cognitive Assistant is able to hold conversations with users in natural language in order to help them solve problems involving common heuristics and biases. Using controlled experiments, we demonstrate that LIZA could help participants improve their reasoning skills, and show that LIZA achieved significantly higher learning gains than a non-interactive online course.

Journal ArticleDOI
TL;DR: It is claimed that including social dialogue in QA systems increases users’ satisfaction and helps them engage with the system more easily, and the evaluation results support this claim.
Abstract: The application of natural language to improve the interaction of human users with information systems has been a growing trend in recent years. Advances in cognitive computing enable a new way of interaction that accelerates insight from existing information sources. In this paper, we propose a modular cognitive agent architecture for question answering featuring social dialogue, improved for a specific knowledge domain. The proposed system has been implemented as a personal agent to assist students learning the Java programming language. The developed prototype has been evaluated to analyze how users perceive interaction with the system. We claim that including social dialogue in QA systems increases users’ satisfaction and helps them engage with the system more easily. Finally, we present the evaluation results that support our hypotheses.

Journal ArticleDOI
TL;DR: The results indicate that virtual team members give meanings to communication technology while interacting, and recommend developing both technological systems and team members’ ways of using them, as well as providing opportunities to negotiate the meanings of technology and thus avoid frame disputes.
Abstract: Communication technology is an essential part of virtual teams in working life. This article presents a qualitative study on the meanings of communication technology in virtual team meetings. The study was conducted by examining frames of technology-related virtual team interaction. Observational data was gathered from six expert team meetings. Technology-related communication episodes (N = 88) were identified from team interaction and then analyzed by means of frame analysis. Four frame categories were found: the practical frame, work frame, user frame, and relational frame. Team members talked about technological properties and functions as well as giving and receiving technological guidance. They also discussed technology in relation to work tasks, contemplated technology users’ attributes, and built and maintained relationships with technology. The results indicate that virtual team members give meanings to communication technology while interacting. Communication technology has several meanings—it is seen as a tool for work, a reason for uncertainty, a useful benefit, a challenge, an object of competence, an entity of technical properties, a subject of guidance, a way to express closeness, and a shared space. The results presented in this article deepen our understanding of the role communication technology plays in the day-to-day interaction of virtual teams. Based on these results, we recommend developing both technological systems and team members’ ways of using them, and providing opportunities to negotiate the meanings of technology and thus avoid frame disputes. In addition, we suggest ensuring that virtual teams use technological systems that support their unique communicational needs.

Journal ArticleDOI
TL;DR: The goal of this study is to shed light on how the vibration modality can be perceived by blind users when accessing simple contour-based images and visual graphics on a touch-screen, and to develop a dedicated experimental protocol, titled EVIAC, testing a blind user's capacity in learning, distinguishing, identifying, and recognizing basic shapes and geometric objects presented on a vibrating touch- screen.
Abstract: Accessing visual information has become a central need for all kinds of tasks and users (from accessing graphics and charts in news articles, to viewing images of items on sale on e-commerce sites), especially for blind users. In this context, digital tools of assistance, using adapted software (screen readers, talking browsers, etc.), hardware (force feedback mouse, piezo-electric pins, etc.), and more recently touch-screen technology (using smart phones or smart tablets), have been increasingly helping blind persons access and manipulate information. While effective with textual information, existing solutions remain limited when handling visual information. Accordingly, the goal of our study is to shed light on how the vibration modality can be perceived by blind users when accessing simple contour-based images and visual graphics on a touch-screen. In this paper, we target the vibration-only modality, as opposed to audio-kinesthetic or multimodal vibro-audio solutions. Our main motivation is that the potentials and limitations of touch-screen vibration-only feedback need to be fully studied and understood prior to integrating other modalities (such as sound, human speech, or other forms of haptic feedback). This could prove very useful in a range of applications: allowing blind people to access geographic maps, to navigate autonomously inside and outside buildings, as well as to access graphs and mathematical charts (for blind students). To achieve our goal, we develop a dedicated experimental protocol, titled EVIAC, testing a blind user's capacity in learning, distinguishing, identifying, and recognizing basic shapes and geometric objects presented on a vibrating touch-screen.
Extensive tests were conducted on blindfolded and blind candidates, using a battery of evaluation metrics including: i) accuracy of shape recognition, ii) testers' average response time, iii) number and duration of finger strokes, iv) surface area covered by the testers' finger path trails, and v) finger path correlation with the surface of the target shape. Results show that blind users are generally capable of accessing simple shapes and graphics presented on a vibrating touch-screen. However, results also underline various issues, including prolonged response time (e.g., blind users require 1 min and 22 s on average to recognize a basic shape), reduced touch-screen surface coverage, and low correlation between the surface of the target shape and the tester's vibration trails. The latter issues need to be further investigated to produce optimal recipes for using touch-screen technology to support image accessibility for blind users.

Journal ArticleDOI
TL;DR: This paper used a quantitative approach to disentangle the role of embodiment from the physical presence of a social robot with three different agents (robot, telepresent robot, and virtual agent), as well as with an actual human.
Abstract: Both robotic and virtual agents could one day be equipped with the social abilities necessary for effective and natural interaction with human beings. Although virtual agents are relatively inexpensive and flexible, they lack the physical embodiment present in robotic agents. Surprisingly, the role of embodiment and physical presence in enriching human-robot interaction is still unclear. This paper explores how these unique features of robotic agents influence three major elements of human-robot face-to-face communication, namely the perception of visual speech, facial expression, and eye-gaze. We used a quantitative approach to disentangle the role of embodiment from the physical presence of a social robot, called Ryan, with three different agents (robot, telepresent robot, and virtual agent), as well as with an actual human. We used a robot with a retro-projected face for this study, since the same animation from a virtual agent could be projected to this robotic face, thus allowing comparison of the virtual agent’s animation behaviors with both telepresent and physically present robotic agents. The results of our studies indicate that eye gaze and certain facial expressions are perceived more accurately when the embodied agent is physically present than when it is displayed on a 2D screen either as a telepresent or a virtual agent. Conversely, we find no evidence that either the embodiment or the presence of the robot improves the perception of visual speech, regardless of syntactic or semantic cues. Comparison of our findings with previous studies also indicates that the role of embodiment and presence should not be generalized without considering the limitations of the embodied agents.

Journal ArticleDOI
TL;DR: A user-centered design approach resulted in an intuitive interface for people with visual impairments and laid the foundation for demonstrating this device's potential to depict mathematical data shown in graphs.
Abstract: Students who are visually impaired face unique challenges when learning mathematical concepts due to the visual nature of graphs, charts, tables, and plots. While touchscreens have been explored as a means to assist people with visual impairments in learning mathematical concepts, many devices are not standalone, were not developed with a user-centered design approach, and have not been tested with users who are visually impaired. This research details the user-centered design and analysis of an electrostatic touchscreen system for displaying graph-based visual information to individuals who are visually impaired. Feedback from users and experts within the visually-impaired community informed the iterative development of our software. We conducted a usability study consisting of locating haptic points in order to test the efficacy and efficiency of the system and to determine patterns of user interactions with the touchscreen. The results showed that: (1) across 116 total trials, participants correctly located haptic points with an accuracy rate of 69.83% and an average time of 15.34 s, (2) accuracy increased across trials, (3) efficient patterns of user interaction involved either a systematic approach or a rapid exploration of the screen, and (4) haptic elements placed near the corners of the screen were more easily located. Our user-centered design approach resulted in an intuitive interface for people with visual impairments and laid the foundation for demonstrating this device's potential to depict mathematical data shown in graphs.

Journal ArticleDOI
TL;DR: A conceptual framework of multiple selves, each representing a stage in the consolidation of experience accessed by self-report is developed to support the interpretation of user experience, provide insight into users’ evaluations of their own experiences, and emphasise the importance of design for experience as lived and reflected upon.
Abstract: Design is driven by our understanding of users’ experiences. This understanding rests primarily upon users’ reports of their experiences, in the moment, after the fact, or ahead of time. In this paper we ask how the study of and design for experience might be better informed by attending more carefully to differences between these reports. Based on a broad and interdisciplinary literature review, we develop a conceptual framework of multiple selves, each representing a stage in the consolidation of experience accessed by self-report. We explore the use of this framework to support the interpretation of user experience, provide insight into users’ evaluations of their own experiences, and emphasise the importance of design for experience as lived and reflected upon. We discuss the implications of this framing of experience for design, particularly in the case of systems to support self-knowledge, wellbeing, behaviour change, reflection, and decision making.

Journal ArticleDOI
TL;DR: A head-tracking interface using a mobile device’s front camera was evaluated using two interaction methods: moving the head vs. moving the device and a new multi-directional corner (MDC) task, confirming the utility of the MDC task for evaluations on devices with small displays.
Abstract: A head-tracking interface using a mobile device’s front camera was evaluated using two interaction methods: moving the head vs. moving the device. The evaluation used an Apple iPad Air and the multi-directional (MD) target selection test described in ISO 9241-411. Throughput was higher at 1.42 bits per second (bps) using head movement compared to 1.20 bps using device movement. Users also expressed a preference for the head input method in an assessment of comfort questionnaire. In a second user study, input using head movements was tested using a new multi-directional corner (MDC) task with an iPad Air and an Apple iPhone 6. The MDC task has two advantages over the MD task: a wider range of task difficulties is used and target selection is tested over the entire display surface, including corners. Throughputs were similar to those measured in the first user study, thus confirming the utility of the MDC task for evaluations on devices with small displays.
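The throughput values reported above follow the ISO 9241-411 definition: the effective index of difficulty of each condition divided by the movement time. As an illustration only (this is not the paper's code; function names are ours), the standard computation can be sketched in Python, where the constant 4.133 converts the standard deviation of selection endpoints into an effective target width:

```python
import math
from statistics import mean, stdev

def effective_width(endpoint_errors):
    """Effective target width W_e = 4.133 x SD of the selection
    endpoint deviations along the task axis (ISO 9241-411)."""
    return 4.133 * stdev(endpoint_errors)

def throughput(conditions):
    """Mean throughput in bits per second over a list of
    (distance, endpoint_errors, movement_time_s) conditions."""
    rates = []
    for distance, errors, mt in conditions:
        w_e = effective_width(errors)
        id_e = math.log2(distance / w_e + 1)  # effective index of difficulty (bits)
        rates.append(id_e / mt)
    return mean(rates)
```

For example, a 300-pixel movement with endpoint deviations of about 10 pixels completed in 2 s yields roughly 1.5 bps, in the same range as the 1.20-1.42 bps figures reported above.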

Journal ArticleDOI
TL;DR: The capability of discriminating between affective states on the valence and arousal dimensions using functional near-infrared spectroscopy (fNIRS), a practical non-invasive device that benefits from its ability to localize activation in functional brain regions with spatial resolution superior to the Electroencephalograph (EEG).
Abstract: We demonstrate the capability of discriminating between affective states on the valence and arousal dimensions using functional near-infrared spectroscopy (fNIRS), a practical non-invasive device that benefits from its ability to localize activation in functional brain regions with spatial resolution superior to electroencephalography (EEG). The high spatial resolution of fNIRS enables us to identify the neural correlates of emotion with spatial precision comparable to fMRI, but without requiring the use of the constricting and impractical fMRI scanner. We make these predictions across subjects, creating the capacity to generalize the model to new participants. We designed the experiment and evaluated our results in the context of a prior experiment—based on the same basic protocol and stimulus materials—which used EEG to measure participants’ valence and arousal. The F1-scores achieved by our classifiers suggest that fNIRS is particularly useful at distinguishing between high and low levels of valence (F1-score of 0.739), which has proven to be difficult to measure with physiological sensors.
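For reference, the F1-score cited above (0.739 for valence) is the harmonic mean of a classifier's precision and recall. A minimal sketch for the binary case, from true-positive, false-positive, and false-negative counts:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall for a binary classifier."""
    precision = tp / (tp + fp)  # fraction of positive predictions that were correct
    recall = tp / (tp + fn)     # fraction of actual positives that were found
    return 2 * precision * recall / (precision + recall)
```

Because it is a harmonic mean, F1 rewards classifiers that balance the two rates rather than excelling at only one.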

Journal ArticleDOI
TL;DR: A phenomenological take on the autistic lived experience is proposed, which could integrate the results achieved by the medical model, and offer a “first person perspective” on autism by adopting a cognitive approach to urbanism.
Abstract: Over the years, the relationship between technology and people with autism has been framed mainly in a medical model, where technology is primarily aimed at mitigating deficits and providing help to overcome limitations. This has yielded a variety of Human-Computer Interaction designs addressed to improve the autistic individuals’ daily tasks and behavior. In this article, we want to explore a different approach, by proposing a phenomenological take on the autistic lived experience, which could integrate the results achieved by the medical model, and offer a “first person perspective” on autism. More precisely, by adopting a cognitive approach to urbanism we examine how autistic individuals conceptualize and experience the spaces they inhabit. To this aim, we interviewed 12 adults with a diagnosis of autism asking them to recount their everyday movements and city living activities. Building on the study findings, we identified three kinds of spaces that characterize their life and outlined a series of design considerations to support technology interventions for satisfying their spatial needs. Then, during a design session, we developed our conceptualization as well as our design suggestions, yielding a more nuanced picture of how space is subjectively constructed by autistic people.

Journal ArticleDOI
TL;DR: Experiments on blindfolded and blind candidates show that Fitts’ Law can be effectively applied for blind users using a vibrating touch-screen under certain parameters (i.e., when varying target distance and size), while it is not verified under others (i.e., when varying the angle of attack).
Abstract: The pointing task is the process of pointing to an object on a computer monitor using a pointing device, or physically touching an object with the hand or finger. It is an important element for users when manipulating visual computer interfaces such as traditional screens and touch-screens. In this context, Fitts’ Law remains one of the central studies that have mathematically modeled the pointing method, and was found to be a good predictor of the average time needed to perform a pointing task. Yet, in our modern computerized society, accessing visual information becomes a central need for all kinds of users, namely users who are blind or visually impaired. Hence, the goal of our study is to evaluate whether Fitts’ Law can be applied for blind candidates using the vibration modality on a touch-screen. To achieve this, we first review the literature on Fitts’ Law and visual data accessibility solutions for blind users. Then, we introduce an experimental framework titled FittsEVAL studying the ability of blind users to tap specific shapes on a touch-screen, while varying different parameters, namely: target distance, target size, and the angle of attack of the pointing task. Experiments on blindfolded and blind candidates show that Fitts’ Law can be effectively applied for blind users using a vibrating touch-screen under certain parameters (i.e., when varying target distance and size), while it is not verified under others (i.e., when varying the angle of attack). This can be considered a first step toward a more complete experimental evaluation of vibrating touch-screen accessibility, and toward designing more adapted interfaces for the blind.
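Fitts' Law, in the Shannon formulation commonly used in HCI, predicts movement time from the distance to the target and the target's width. A minimal sketch (the coefficients here are illustrative placeholders, not values from the paper; in practice they are fit per user and device by linear regression):

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.15):
    """Predicted movement time (s) under Fitts' Law, Shannon form:
    MT = a + b * log2(D/W + 1).
    a, b are device/user-specific regression coefficients (illustrative here)."""
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty
```

Halving the target width or doubling the distance raises the index of difficulty and hence the predicted time, which is the relationship FittsEVAL tests under the vibration modality.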

Journal ArticleDOI
TL;DR: Two types of queuing schemes are examined: the self-paced Open-queue, which identifies all robots’ normal/abnormal conditions, and the forced-paced shortest-job-first (SJF) queue, which shows a single robot's request at a time. The results suggest that the SJF attentional scheduling approach can provide stable performance in both primary and secondary tasks, regardless of the system's reliability levels.
Abstract: Human multi-robot interaction exploits both the human operator's high-level decision-making skills and the robotic agents’ vigorous computing and motion abilities. While controlling multi-robot teams, an operator's attention must constantly shift between individual robots to maintain sufficient situation awareness. To conserve an operator's attentional resources, a robot that can self-reflect on its abnormal status can help the operator focus her attention on emergent tasks rather than unneeded routine checks. With the proposed self-reflect aids, human-robot interaction becomes a queuing framework, in which the robots act as clients requesting interaction and the operator acts as the server responding to these job requests. This paper examined two types of queuing schemes: the self-paced Open-queue, which identifies all robots’ normal/abnormal conditions, and the forced-paced shortest-job-first (SJF) queue, which shows a single robot's request at a time. As a robot may misreport its experienced failures in various situations, the effects of imperfect automation were also investigated in this paper. The results suggest that the SJF attentional scheduling approach can provide stable performance in both primary (locate potential targets) and secondary (resolve robots’ failures) tasks, regardless of the system's reliability levels. However, the conventional results (e.g., number of targets marked) present only little information about users’ underlying cognitive strategies and may fail to reflect the user's true intent. As understanding users’ intentions is critical to providing appropriate cognitive aids to enhance task performance, a Hidden Markov Model (HMM) is used to examine operators’ underlying cognitive intent and identify the unobservable cognitive states. The HMM results demonstrate fundamental differences among the queuing mechanisms and reliability conditions.
The findings suggest that HMM can be helpful in investigating the use of human cognitive resources under multitasking environments.
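The abstract does not detail the HMM it used, but the general idea of recovering unobservable cognitive states from observed operator actions can be illustrated with the standard Viterbi algorithm. In the sketch below, every state name, action label, and probability is invented for the example; it is not the paper's model:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence,
    via the standard Viterbi dynamic program."""
    # V[t][s] = (best probability of reaching state s at time t, best predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # backtrack from the most probable final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return path[::-1]

# Hypothetical setup: an operator is either "monitoring" or "intervening",
# and we observe only her actions ("scan" a robot vs. "fix" a failure).
states = ("monitoring", "intervening")
start_p = {"monitoring": 0.7, "intervening": 0.3}
trans_p = {"monitoring": {"monitoring": 0.8, "intervening": 0.2},
           "intervening": {"monitoring": 0.4, "intervening": 0.6}}
emit_p = {"monitoring": {"scan": 0.9, "fix": 0.1},
          "intervening": {"scan": 0.1, "fix": 0.9}}
```

With these made-up parameters, the action sequence ["scan", "scan", "fix"] decodes to two monitoring states followed by an intervening state, showing how observed behavior maps back onto latent cognitive intent.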