
Showing papers in "International Journal of Human-Computer Studies / International Journal of Man-Machine Studies in 2012"


Journal ArticleDOI
TL;DR: An intelligent tutoring system that aims to promote engagement and learning by dynamically detecting and responding to students' boredom and disengagement; gaze-reactivity was effective in promoting learning gains for questions that required deep reasoning.
Abstract: We developed an intelligent tutoring system (ITS) that aims to promote engagement and learning by dynamically detecting and responding to students' boredom and disengagement. The tutor uses a commercial eye tracker to monitor a student's gaze patterns and identify when the student is bored, disengaged, or is zoning out. The tutor then attempts to reengage the student with dialog moves that direct the student to reorient his or her attentional patterns towards the animated pedagogical agent embodying the tutor. We evaluated the efficacy of the gaze-reactive tutor in promoting learning, motivation, and engagement in a controlled experiment where 48 students were tutored on four biology topics with both gaze-reactive and non-gaze-reactive (control condition) versions of the tutor. The results indicated that: (a) gaze-sensitive dialogs were successful in dynamically reorienting students' attentional patterns to the important areas of the interface, (b) gaze-reactivity was effective in promoting learning gains for questions that required deep reasoning, (c) gaze-reactivity had minimal impact on students' state motivation and on self-reported engagement, and (d) individual differences in scholastic aptitude moderated the impact of gaze-reactivity on overall learning gains. We discuss the implications of our findings, limitations, future work, and consider the possibility of using gaze-reactive ITSs in classrooms.
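As a rough illustration of the gaze-reactivity loop described above, the sketch below flags likely disengagement when gaze has stayed off the interface's key areas of interest for too long; the AOI labels, the threshold, and the data format are our assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' implementation): flag likely disengagement when gaze
# has been off the tutor's relevant areas of interest (AOIs) for too long.
from dataclasses import dataclass
from typing import List, Optional

RELEVANT_AOIS = {"pedagogical_agent", "dialog_text", "image_panel"}  # hypothetical AOI labels
OFF_AOI_THRESHOLD_S = 5.0  # assumed dwell threshold for "zoning out"

@dataclass
class GazeSample:
    timestamp: float      # seconds since session start
    aoi: Optional[str]    # AOI under the gaze point, or None if off-screen

def seconds_off_task(samples: List[GazeSample]) -> float:
    """Length of the trailing run of samples whose gaze is off the relevant AOIs."""
    run_start = None
    for s in samples:
        if s.aoi in RELEVANT_AOIS:
            run_start = None              # gaze returned to a relevant area
        elif run_start is None:
            run_start = s.timestamp       # start of a new off-task run
    return samples[-1].timestamp - run_start if run_start is not None else 0.0

def should_reengage(samples: List[GazeSample]) -> bool:
    """Trigger a reorienting dialog move once the off-task run exceeds the threshold."""
    return bool(samples) and seconds_off_task(samples) >= OFF_AOI_THRESHOLD_S
```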

273 citations


Journal ArticleDOI
TL;DR: Two studies investigating the role of visual complexity (VC) and prototypicality (PT) as design factors of websites shaping users' first impressions suggest that VC and PT affect aesthetic perception even within 17ms, though the effect of PT is less pronounced than that of VC.
Abstract: This paper experimentally investigates the role of visual complexity (VC) and prototypicality (PT) as design factors of websites, shaping users' first impressions by means of two studies. In the first study, 119 screenshots of real websites varying in VC (low vs. medium vs. high) and PT (low vs. high) were rated on perceived aesthetics. Screenshot presentation time was varied as a between-subject factor (50ms vs. 500ms vs. 1000ms). Results reveal that VC and PT affect participants' aesthetics ratings within the first 50ms of exposure. In the second study presentation times were shortened to 17, 33 and 50ms. Results suggest that VC and PT affect aesthetic perception even within 17ms, though the effect of PT is less pronounced than the one of VC. With increasing presentation time the effect of PT becomes as influential as the VC effect. This supports the reasoning of the information-processing stage model of aesthetic processing (Leder et al., 2004), where VC is processed at an earlier stage than PT. Overall, websites with low VC and high PT were perceived as highly appealing.

232 citations


Journal ArticleDOI
TL;DR: This study sheds light on learning system design as assisted by IS in VLE, can serve as a basis for promoting VLS in assisting learning, and reveals that perceived fit and satisfaction are important precedents of the intention to continue using VLS and of individual performance.
Abstract: A virtual learning system (VLS) is an information system that facilitates e-learning; such systems have been widely implemented by higher education institutions to support face-to-face teaching and self-managed learning in the virtual learning and education environment (VLE). This combination is referred to as blended learning instruction. By adopting the VLS, students are expected to enhance learning by getting access to course-related information and having full opportunities to interact with instructors and peers. However, the literature reveals mixed findings with respect to the learning outcomes of adopting VLS. In this study, we argue that the links between the precedents leading students to continue to use VLSs and their impacts on learning effectiveness and productivity are overlooked in the literature. This paper aims to tackle this question by integrating information system (IS) continuance theory with task-technology fit (TTF) to extend our understanding of the precedents of the intention to continue using VLS and their impacts on learning. In doing so, both technology-acceptance-to-performance factors, based on TAM (the technology acceptance model) and TTF, and post-technology-acceptance factors, based on expectation-confirmation theory, can be tested in one study. The results reveal that perceived fit and satisfaction are important precedents of the intention to continue using VLS and of individual performance. A discussion and conclusions are then provided. This study sheds light on learning system design as assisted by IS in VLE and can serve as a basis for promoting VLS in assisting learning.

220 citations


Journal ArticleDOI
TL;DR: The results of this study indicate that the nature of the relation between multitasking and performance depends upon the metric used, and if performance is measured with accuracy of results, the relation is a downward sloping line, in which increased levels of multitasking lead to a significant loss in accuracy.
Abstract: In this study, we develop a theoretical model that predicts an inverted-U relationship between multitasking and performance. The model is tested with a controlled experiment using a custom-developed application. Participants were randomly assigned to either a control condition, where they had to perform tasks in sequence, or an experimental condition, where they could discretionarily switch tasks by clicking on tabs. Our results show an inverted-U pattern for performance efficiency (productivity) and a decreasing line for performance effectiveness (accuracy). The results of this study indicate that the nature of the relation between multitasking and performance depends upon the metric used. If performance is measured with productivity, different multitasking levels are associated with an inverted-U curve where medium multitaskers perform significantly better than both high and low multitaskers. However, if performance is measured with accuracy of results, the relation is a downward sloping line, in which increased levels of multitasking lead to a significant loss in accuracy. Metaphorically speaking, juggling multiple tasks is much more difficult while balancing on a high wire, where performance mishaps can have serious consequences.
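To make the two reported shapes concrete, the illustrative regression forms below express an inverted-U for efficiency and a decreasing line for accuracy; the symbols and coefficients are ours, not the authors' fitted model.

```latex
% Illustrative functional forms (symbols are ours, not the authors'):
% m = level of discretionary multitasking; beta, gamma > 0
\begin{aligned}
\text{Efficiency (productivity):} \quad E(m) &= \beta_0 + \beta_1 m - \beta_2 m^2
  && \text{(inverted U; maximum at } m^{*} = \beta_1 / (2\beta_2)\text{)}\\
\text{Effectiveness (accuracy):} \quad A(m) &= \gamma_0 - \gamma_1 m
  && \text{(monotonically decreasing)}
\end{aligned}
```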

181 citations


Journal ArticleDOI
TL;DR: Results for radial dragging are new, showing that errors, task time and movement distance are all linearly correlated with number of items available, and it is demonstrated that this performance is modelled by the Steering Law rather than Fitts' Law.
Abstract: Touch-based interaction with computing devices is becoming more and more common. In order to design for this setting, it is critical to understand the basic human factors of touch interactions such as tapping and dragging; however, there is relatively little empirical research in this area, particularly for touch-based dragging. To provide foundational knowledge in this area, and to help designers understand the human factors of touch-based interactions, we conducted an experiment using three input devices (the finger, a stylus, and a mouse as a performance baseline) and three different pointing activities. The pointing activities were bidirectional tapping, one-dimensional dragging, and radial dragging (pointing to items arranged in a circle around the cursor). Tapping activities represent the elemental target selection method and are analysed as a performance baseline. Dragging is also a basic interaction method and understanding its performance is important for touch-based interfaces because it involves relatively high contact friction. Radial dragging is also important for touch-based systems as this technique is claimed to be well suited to direct input yet radial selections normally involve the relatively unstudied dragging action, and there have been few studies of the interaction mechanics of radial dragging. Performance models of tap, drag, and radial dragging are analysed. For tapping tasks, we confirm prior results showing finger pointing to be faster than the stylus/mouse but inaccurate, particularly with small targets. In dragging tasks, we also confirm that finger input is slower than the mouse and stylus, probably due to the relatively high surface friction. Dragging errors were low in all conditions. As expected, performance conformed to Fitts' Law. Our results for radial dragging are new, showing that errors, task time and movement distance are all linearly correlated with number of items available. We demonstrate that this performance is modelled by the Steering Law (where the tunnel width increases with movement distance) rather than Fitts' Law. Other radial dragging results showed that the stylus is fastest, followed by the mouse and finger, but that the stylus has the highest error rate of the three devices. Finger selections in the North-West direction were particularly slow and error prone, possibly due to a tendency for the finger to stick-slip when dragging in that direction.
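For reference, the two models named above are conventionally written as follows (the Shannon form of Fitts' Law and the Accot-Zhai Steering Law); the paper's specific fitted parameters are not reproduced here.

```latex
\begin{aligned}
\text{Fitts' Law:}   \quad MT &= a + b \,\log_2\!\left(\frac{A}{W} + 1\right)\\
\text{Steering Law:} \quad MT &= a + b \int_C \frac{ds}{W(s)}
\end{aligned}
```

Here A is the movement amplitude, W the target (or tunnel) width along the path C, and a, b are empirically fitted constants; for radial dragging the tunnel width W(s) widens as the distance from the menu centre grows, which is why the Steering Law fits that task better than Fitts' Law.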

130 citations


Journal ArticleDOI
TL;DR: The findings imply that an interface design that simplifies task performance, information hierarchy, and visual display attributes contributes to positive satisfaction evaluations when users interact with their smartphone as they engage in communication, information search, and entertainment activities.
Abstract: Motivated by the need to develop an integrated measure of simplicity perception for a smartphone user interface, our research incorporated visual aesthetics, information design, and task complexity into an extended construct of simplicity. Drawn from three distinct domains of human-computer interaction design and related areas, the new development of a simplicity construct and measurement scales were then validated. The final measurement model consisted of six components: reduction, organization, component complexity, coordinative complexity, dynamic complexity, and visual aesthetics. The following phase aimed at verifying the relationship between simplicity perception of the interface and evaluations of user satisfaction. The hypothesis was accepted that user satisfaction was positively affected by simplicity perception and that the relationship between the two constructs was very strong. The findings imply that an interface design that simplifies task performance, information hierarchy, and visual display attributes contributes to positive satisfaction evaluations when users interact with their smartphone as they engage in communication, information search, and entertainment activities.

124 citations


Journal ArticleDOI
TL;DR: A four-dimension evaluation framework was developed and applied to an empirical study with a DEG on teaching geography, adopting Engestrom's (1987) extended framework of Activity Theory (AT) that provides contextual information essential for understanding contradictions and breakdowns observed in the interactions between the game players.
Abstract: Adaptive digital educational games (DEGs) providing players with relevant interventions can enhance gameplay experience. This advance in game design, however, renders the user experience (UX) evaluation of DEGs even more challenging. To tackle this challenge, we developed a four-dimension evaluation framework (i.e., gaming experience, learning experience, adaptivity, and usability) and applied it to an empirical study with a DEG on teaching geography. Mixed-method approaches were adopted to collect data with 16 boys aged 10-11. Specifically, a so-called Dyadic User Experience Test (DUxT) was employed; participants were paired up to assume different roles during gameplay. Learning efficacy was evaluated with a pre-post intervention measurement using a domain-specific questionnaire. Learning experience, gaming experience and usability were evaluated with intensive in situ observations and interviews guided by a multidimensional scheme; content analysis of these transcribed audio data was supplemented by video analysis. Effectiveness of adaptivity algorithms was planned to be evaluated with automatic logfiles, which, unfortunately, could not be realised due to a technical problem. Nonetheless, the user-based data could offer some insights into this issue. Furthermore, we attempted to bridge the existing gap in UX research - the lack of theoretical frameworks in understanding user experience - by adopting Engestrom's (1987) extended framework of Activity Theory (AT) that provides contextual information essential for understanding contradictions and breakdowns observed in the interactions between the game players. The dyadic gameplay setting allows us to explore the issue of group UX. Implications for further applications of the AT framework in the UX research, especially the interplay between evaluation and redesign (i.e., downstream utility of UX evaluation methods), are discussed.

118 citations


Journal ArticleDOI
TL;DR: The data suggest that not only does positive information increase trust, but mere uncertainty reduction regarding a seller can also contribute towards trust in online transactions.
Abstract: Reputation scores and seller photos are regarded as two types of signals promoting trust in e-commerce. Little is known about their differential impact when co-occurring in online transactions. Using a computer-mediated trust game, the current study combined three photo conditions (trustworthy, untrustworthy and no seller photo) with three reputation conditions (positive, negative and no seller reputation) in a 3x3 within-subject design. Buyers' ratings of trust and number of purchases served as dependent variables. Significant main effects were found for reputation scores and photos on both dependent variables and there was no interaction effect. Trustworthy photos and positive reputation contributed towards buyers' trust and higher purchase rates. Surprisingly, neither untrustworthy photos nor negative reputation performed worse than missing information. On the contrary, completely missing information (no reputation, no photo) led to distrust and differed significantly from completely negative information (low reputation, untrustworthy photo), which resulted in a neutral trust level. Overall, the data suggest that not only does positive information increase trust, but mere uncertainty reduction regarding a seller can also contribute towards trust in online transactions.

112 citations


Journal ArticleDOI
TL;DR: Eye-tracking was applied to capture visual attention strategies and construct a detailed account of visual attention during debugging to find repetitive patterns in visual strategies that were associated with less expertise and lower performance.
Abstract: In modern multi-representational environments, software developers need to coordinate various information sources to effectively perform maintenance tasks. Although visual attention is an important skill in software development, our current understanding of the role of visual attention in the coordination of representations during maintenance tasks is minimal. Therefore, we applied eye-tracking to capture visual attention strategies and construct a detailed account of visual attention during debugging. Two groups of programmers with two distinct levels of experience debugged a program with the help of multiple representations. The proportion of time spent looking at each representation, the frequency of switching attention between visual representations and the type of switch were investigated during consecutive phases of debugging. We found repetitive patterns in visual strategies that were associated with less expertise and lower performance. Novice developers made use of both the code and graphical representations while frequently switching between them. More experienced participants expended more effort integrating the available information and primarily concentrated on systematically relating the code to the output. Our results informed us about the differences in program debugging strategies from a fine-grain, temporal perspective and have implications for the design of future development environments.
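As a hedged sketch of the kind of measures discussed above (not the authors' analysis pipeline), the snippet below computes dwell-time proportions and attention-switch counts from fixation records; the representation labels and the record format are assumptions.

```python
# Illustrative analysis sketch: given fixations labelled with the representation they fall on,
# compute dwell proportions and attention-switch counts.
from collections import Counter
from typing import List, Tuple, Dict

Fixation = Tuple[str, int]  # (representation label, fixation duration in ms), e.g. ("code", 240)

def dwell_proportions(fixations: List[Fixation]) -> Dict[str, float]:
    """Share of total fixation time spent on each representation."""
    totals = Counter()
    for rep, dur in fixations:
        totals[rep] += dur
    grand = sum(totals.values())
    return {rep: dur / grand for rep, dur in totals.items()} if grand else {}

def switch_counts(fixations: List[Fixation]) -> Counter:
    """Number of attention switches between each ordered pair of representations."""
    switches = Counter()
    for (prev, _), (curr, _) in zip(fixations, fixations[1:]):
        if prev != curr:
            switches[(prev, curr)] += 1
    return switches

# Example:
# fixations = [("code", 300), ("code", 220), ("output", 180), ("code", 260)]
# dwell_proportions(fixations) -> {"code": 0.8125, "output": 0.1875}
# switch_counts(fixations)     -> {("code", "output"): 1, ("output", "code"): 1}
```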

77 citations


Journal ArticleDOI
TL;DR: The design, development, and deployment of G-nome Surfer; a multi-touch tabletop user interface for collaborative exploration of genomic data, and empirical evidence for the feasibility and value of integrating tabletop interaction in college-level education are described.
Abstract: In this paper, we reflect on the design, development, and deployment of G-nome Surfer, a multi-touch tabletop user interface for collaborative exploration of genomic data. G-nome Surfer lowers the threshold for using advanced bioinformatics tools, reduces the mental workload associated with manipulating genomic information, and fosters effective collaboration. We describe our two-year-long effort from design strategy to iterations of design, development, and evaluation. This paper presents four main contributions: (1) a set of design requirements for supporting collaborative exploration in data-intensive domains, (2) the design, implementation, and validation of a multi-touch tabletop interface for collaborative exploration, (3) a methodology for evaluating the strengths and limitations of tabletop interaction for collaborative exploration, and (4) empirical evidence for the feasibility and value of integrating tabletop interaction in college-level education.

67 citations


Journal ArticleDOI
TL;DR: A novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models that can be integrated into a formal system model so that system safety properties can be formally verified with a model checker.
Abstract: Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel's zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the state space and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development.
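The paper's generation works inside task analytic models that are then model checked; purely as an illustration of the four zero-order phenotypes it names, the sketch below applies them to a flat, hypothetical action sequence.

```python
# Illustration only: shows what Hollnagel's zero-order phenotypes of erroneous action
# (omission, repetition, jump, intrusion) look like on a flat normative action sequence.
from typing import List

def omission(seq: List[str], i: int) -> List[str]:
    """Skip the action at position i."""
    return seq[:i] + seq[i + 1:]

def repetition(seq: List[str], i: int) -> List[str]:
    """Perform the action at position i twice."""
    return seq[:i + 1] + [seq[i]] + seq[i + 1:]

def jump(seq: List[str], i: int, j: int) -> List[str]:
    """Jump from position i directly to position j, skipping the actions in between."""
    return seq[:i + 1] + seq[j:]

def intrusion(seq: List[str], i: int, foreign_action: str) -> List[str]:
    """Insert an action that does not belong to the normative sequence."""
    return seq[:i] + [foreign_action] + seq[i:]

# Example with a hypothetical normative sequence:
# normative = ["select_mode", "enter_dose", "confirm", "fire_beam"]
# omission(normative, 2)               -> the confirm step is skipped
# intrusion(normative, 3, "open_door") -> an unrelated action intrudes before firing
```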

Journal ArticleDOI
TL;DR: This paper applies the theoretical perspectives of accommodation to misfit and IS evolution to understand the phenomenon through an in-depth case study of an EMAS implemented in a large public hospital and develops a process framework to explain how the benefits, issues, and workarounds inter-relate and determine the impacts of the system.
Abstract: Healthcare information systems such as an Electronic Medication Administration System (EMAS) have the potential to enhance productivity, lower costs, and reduce medication errors. However, various issues have arisen from the use of these systems. A key issue relates to workarounds as a result of a misfit between the new information system (IS) implementation and existing work processes. However, there is a lack of understanding and studies on healthcare IS workarounds and their outcomes. This paper applies the theoretical perspectives of accommodation to misfit and IS evolution to understand the phenomenon through an in-depth case study of an EMAS implemented in a large public hospital. Based on the findings, it develops a process framework to explain how the benefits, issues, and workarounds inter-relate and determine the impacts of the system. The findings have implications for research and practice on workarounds in the use of healthcare IS.

Journal ArticleDOI
TL;DR: It is suggested that Second Life helped motivation and socialisation stages, although integration with other technologies is necessary for knowledge construction, and preliminary guidelines are proposed for configuration and management of Second Life in collaborative learning.
Abstract: Two studies on collaborative learning in Second Life are reported. The first is an ecological study of Second Life used in an undergraduate class, by observation, interviews, and limited surveys. Use of Second Life motivated students with good user experience, although they viewed it as a games technology. Second Life was used to prepare virtual meetings and presentations but not for online discussion, with Blackboard and especially Facebook providing collaborative support. In the second experimental study, the effectiveness and user experience with Second Life and Blackboard were compared, including a face-to-face control condition. There were no performance differences overall, although face-to-face was quicker and was preferred by users, followed by Blackboard and Second Life. Blackboard was perceived to be more usable, whereas Second Life provided a better user experience. Worst performance was indicated by dislike of avatar interaction in Second Life, and poor user experience in Blackboard, whereas better performance was associated with engagement with avatars, and better usability in Blackboard. The results of both studies are reviewed using Salmon's model for online learning, suggesting that Second Life helped motivation and socialisation stages, although integration with other technologies is necessary for knowledge construction. Preliminary guidelines are proposed for configuration and management of Second Life in collaborative learning. The affordances for collaboration in virtual worlds are discussed, with reflections on user experience and functional support provided by Second Life, as an exemplar of a virtual world for collaborative learning support.

Journal ArticleDOI
TL;DR: This study analyses usability professionals' operational understanding of usability by eliciting the constructs they employ in their thinking about system use, finding that they make use of more utilitarian than experiential, i.e. user-experience related, constructs.
Abstract: Usability professionals have attained a specialist role in systems-development projects. This study analyses usability professionals' operational understanding of usability by eliciting the constructs they employ in their thinking about system use. We approach usability broadly and without a priori distinguishing it from user experience. On the basis of repertory-grid interviews with 24 Chinese, Danish, and Indian usability professionals we find that they make use of more utilitarian than experiential, i.e. user-experience related, constructs. This indicates that goal-related performance is central to their thinking about usability, whereas they have less elaborate sets of experiential constructs. The usability professionals mostly construe usability at an individual level, rather than at organizational and environmental levels. The few exceptions include effectiveness constructs, which are evenly spread across all three levels, and relational constructs, which are phrased in terms of social context. Considerations about users' cognitive activities appear more central to the usability professionals than conventional human-factors knowledge about users' sensorial abilities. The usability professionals' constructs, particularly their experiential constructs, go considerably beyond ISO 9241 usability, indicating a discrepancy between this definition of usability and the thinking of the professionals concerned with delivering usability. Finally, usability is construed rather similarly across the three nationalities of usability professionals.

Journal ArticleDOI
TL;DR: Late ERPs elicited by VR-irrelevant tones differ as a function of presence experience in VR and provide a valuable method for measuring presence in VR, and frontal negative slow waves turned out to be accurate predictors for presence experience.
Abstract: The feeling of presence in a virtual reality (VR) is a concept without a standardized objective measurement. In the present study, we used event-related brain potentials (ERP) of the electroencephalogram (EEG) elicited by tones, which are not related to VR, as an objective indicator for the presence experience within a virtual environment. Forty participants navigated through a virtual city and rated their sensation of being in the VR (experience of presence), while hearing frequent standard tones and infrequent deviant tones, which were irrelevant for the VR task. Different ERP components elicited by the tones were compared between participants experiencing a high level of presence and participants with a low feeling of presence in the virtual city. Early ERP components, which are more linked to automatic stimulus processing, showed no correlation with presence experience. In contrast, an increased presence experience was associated with decreased late negative slow wave amplitudes, which are associated with central stimulus processing and allocation of attentional resources. This result supports the assumption that increased presence is associated with a strong allocation of attentional resources to the VR, which leads to a decrease of attentional resources available for processing VR-irrelevant stimuli. Hence, ERP components elicited by the tones are reduced. Particularly, frontal negative slow waves turned out to be accurate predictors for presence experience. Summarizing, late ERPs elicited by VR-irrelevant tones differ as a function of presence experience in VR and provide a valuable method for measuring presence in VR.

Journal ArticleDOI
TL;DR: Overall, iScale resulted in an increase in the amount, the richness, and the test-retest consistency of recalled information as compared to free recall, which provides support for the viability of retrospective techniques as a cost-effective alternative to longitudinal studies.
Abstract: We present iScale, a survey tool for the retrospective elicitation of longitudinal user experience data. iScale aims to minimize retrospection bias and employs graphing to impose a process during the reconstruction of one's experiences. Two versions, the constructive and the value-account iScale, were motivated by two distinct theories on how people reconstruct emotional experiences from memory. These two versions were tested in two separate studies. Study 1 aimed at providing qualitative insight into the use of iScale and compared its performance to that of free-hand graphing. Study 2 compared the two versions of iScale to free recall, a control condition that does not impose structure on the reconstruction process. Overall, iScale resulted in an increase in the amount, the richness, and the test-retest consistency of recalled information as compared to free recall. These results provide support for the viability of retrospective techniques as a cost-effective alternative to longitudinal studies.

Journal ArticleDOI
TL;DR: A review of the history and development of locative media can be found in this article, where the authors provide an overview on methods to investigate and elaborate design principles for future locative Media.
Abstract: Highlights:
► Provides a review of the history and development of locative media.
► Outlines different human-computer interaction techniques applied in locative media.
► Discusses how locative media applications have changed interaction affordances in and of physical spaces.
► Discusses practices of people in urban settings that evolved through these new affordances.
► Provides an overview on methods to investigate and elaborate design principles for future locative media.

Journal ArticleDOI
TL;DR: It is suggested that informing drivers with detailed information of their driving performance after driving is more acceptable than warning drivers with auditory and visual alerts while driving.
Abstract: Vehicle crashes caused by driver distraction are of increasing concern. One approach to reduce the number of these crashes mitigates distraction by giving drivers feedback regarding their performance. For these mitigation systems to be effective, drivers must trust and accept them. The objective of this study was to evaluate real-time and post-drive mitigation systems designed to reduce driver distraction. The real-time mitigation system used visual and auditory warnings to alert the driver to distracting behavior. The post-drive mitigation system coached drivers on their performance and encouraged social conformism by comparing their performance to peers. A driving study with 36 participants between the ages of 25 and 50 years old (M=34) was conducted using a high-fidelity driving simulator. An extended Technology Acceptance Model captured drivers' acceptance of mitigation systems using four constructs: perceived ease of use, perceived usefulness, unobtrusiveness, and behavioral intention to use. Perceived ease of use was found to be the primary determinant and perceived usefulness the secondary determinant of behavioral intention to use, while the effect of unobtrusiveness on intention to use was fully mediated by perceived ease of use and perceived usefulness. The real-time system was more obtrusive and less easy to use than the post-drive system. Although this study included a relatively narrow age range (25 to 50 years old), older drivers found both systems more useful. These results suggest that informing drivers with detailed information of their driving performance after driving is more acceptable than warning drivers with auditory and visual alerts while driving.

Journal ArticleDOI
TL;DR: The Finger-Count menu is proposed, a menu technique and teaching method for implicitly learning Finger- count gestures, a coherent set of multi-finger and two-handed gestures that can be used from a distance as well as when touching the surface.
Abstract: Selecting commands on multi-touch displays is still a challenging problem. While a number of gestural vocabularies have been proposed, these are generally restricted to one or two fingers or can be difficult to learn. We introduce Finger-Count gestures, a coherent set of multi-finger and two-handed gestures. Finger-Count gestures are simple, robust, expressive and fast to perform. In order to make these gestures self-revealing and easy to learn, we propose the Finger-Count menu, a menu technique and teaching method for implicitly learning Finger-Count gestures. We discuss the properties, advantages and limitations of Finger-Count interaction from the gesture and menu technique perspectives as well as its integration into three applications. We present alternative designs to increase the number of commands and to enable multi-user scenarios. Following a study which shows that Finger-Count is as easy to learn as radial menus, we report the results of an evaluation investigating which gestures are easier to learn and which finger chords people prefer. Finally, we present Finger-Count for in-the-air gestures. Thereby, the same gesture set can be used from a distance as well as when touching the surface.
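A minimal sketch of the core mapping, assuming one common reading of Finger-Count menus in which one hand's finger count selects a category and the other hand's count selects an item within it; the specific command bindings below are hypothetical, not taken from the paper.

```python
# Hypothetical two-handed finger-count chord to command mapping.
from typing import Optional

COMMANDS = {
    # (left-hand finger count, right-hand finger count) -> command
    (1, 1): "copy",
    (1, 2): "paste",
    (2, 1): "zoom_in",
    (2, 2): "zoom_out",
}

def resolve_command(left_touches: int, right_touches: int) -> Optional[str]:
    """Map a two-handed finger-count chord to a command, or None if the chord is unbound."""
    return COMMANDS.get((left_touches, right_touches))

# resolve_command(1, 2) -> "paste"
```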

Journal ArticleDOI
TL;DR: Using Virtual People Factory, medical and pharmacy educators are now creating natural language virtual patient interactions on their own, and five case studies showing that Human-centered Distributed Conversational Modeling has addressed the logistical cost for acquiring knowledge are presented.
Abstract: Educators in medicine, psychology, and the military want to provide their students with interpersonal skills practice. Virtual humans offer structured learning of interview skills, can facilitate learning about unusual conditions, and are always available. However, the creation of virtual humans with the ability to understand and respond to natural language requires costly engineering by conversation knowledge engineers (generally computer scientists), and incurs logistical cost for acquiring domain knowledge from domain experts (educators). We address these problems using a novel crowdsourcing method entitled Human-centered Distributed Conversational Modeling. This method facilitates collaborative development of virtual humans by two groups of end-users: domain experts (educators) and domain novices (students). We implemented this method in a web-based authoring tool called Virtual People Factory. Using Virtual People Factory, medical and pharmacy educators are now creating natural language virtual patient interactions on their own. This article presents the theoretical background for Human-centered Distributed Conversational Modeling, the implementation of the Virtual People Factory authoring tool, and five case studies showing that Human-centered Distributed Conversational Modeling has addressed the logistical cost for acquiring knowledge.

Journal ArticleDOI
TL;DR: A comprehensive taxonomy of icons is proposed that is intended to enable the generalization of the findings of recognition studies and indicates that the lexical and semantic attributes of a concept influence the choice of representation strategy.
Abstract: Predicting whether the intended audience will be able to recognize the meaning of an icon or pictograph is not an easy task. Many icon recognition studies have been conducted in the past. However, their findings cannot be generalized to other icons that were not included in the study, which, we argue, is their main limitation. In this paper, we propose a comprehensive taxonomy of icons that is intended to enable the generalization of the findings of recognition studies. To accomplish this, we analyzed a sample of more than eight hundred icons according to three axes: lexical category, semantic category, and representation strategy. Three basic representation strategies were identified: visual similarity; semantic association; and arbitrary convention. These representation strategies are in agreement with the strategies identified in previous taxonomies. However, a greater number of subcategories of these strategies were identified. Our results also indicate that the lexical and semantic attributes of a concept influence the choice of representation strategy.
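As an illustration of the taxonomy's three axes, the sketch below encodes a classification record; the individual category values and example icons are ours, not the paper's coding scheme.

```python
# Sketch of the three classification axes named in the abstract; the category values
# and example icons are illustrative assumptions, not the paper's coding scheme.
from dataclasses import dataclass

@dataclass
class IconClassification:
    lexical_category: str         # e.g. noun, verb
    semantic_category: str        # e.g. concrete object, action, abstract concept
    representation_strategy: str  # visual similarity | semantic association | arbitrary convention

examples = {
    "printer_icon": IconClassification("noun", "concrete object", "visual similarity"),
    "save_icon":    IconClassification("verb", "action", "semantic association"),
    "power_symbol": IconClassification("noun", "abstract concept", "arbitrary convention"),
}
```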

Journal ArticleDOI
TL;DR: To inform the design of security policy, task models of password behaviour were constructed for different user groups-Computer Scientists, Administrative Staff and Students, revealing Computer Scientists viewed information security as part of their tasks and passwords provided a way of completing their work.
Abstract: To inform the design of security policy, task models of password behaviour were constructed for different user groups: Computer Scientists, Administrative Staff and Students. These models identified internal and external constraints on user behaviour and the goals for password use within each group. Data were drawn from interviews and diaries of password use. Analyses indicated password security positively correlated with the sensitivity of the task, differences in frequency of password use were related to password security and patterns of password reuse were related to knowledge of security. Modelling revealed Computer Scientists viewed information security as part of their tasks and passwords provided a way of completing their work. By contrast, Admin and Student groups viewed passwords as a cost incurred when accessing the primary task. Differences between the models were related to differences in password security and used to suggest six recommendations for security officers to consider when setting password policy.

Journal ArticleDOI
TL;DR: The state-of-the-art of virtual reality technology is reviewed, and core areas of psychology relevant to experiences in the fulldome are surveyed, including visual perception, attention, memory, social factors and individual differences.
Abstract: One of the most recent additions to the range of Immersive Virtual Environments has been the digital fulldome. However, not much empirical research has been conducted to explore its potential and benefits over other types of presentation formats. In this review we provide a framework within which to examine the properties of fulldome environments and compare them to those of other existing immersive digital environments. We review the state-of-the-art of virtual reality technology, and then survey core areas of psychology relevant to experiences in the fulldome, including visual perception, attention, memory, social factors and individual differences. Building on the existing research within these domains, we propose potential directions for empirical investigation that highlight the great potential of the fulldome in teaching, learning and research.

Journal ArticleDOI
TL;DR: This study found that the children with autism were as able as the typically developing children to engage with the task, although qualitative differences in their responses were recorded.
Abstract: The imaginative abilities of children on the autistic spectrum are reportedly impaired compared to typically developing children. This study explored computer mediated story construction in children with autism and typically developing peers. The purpose was to explore expressive writing ability, as a measure of imagination. Ten pairs of individually matched children (one typically developing and one child on the autistic spectrum) aged between seven and nine created reality and fantasy based stories using Bubble Dialogue software. The study provided a brief starting point for the stories, relying on the imaginative capabilities of the children to develop the stories beyond the story opening. The study contributes to the literature as an alternative to paper based studies of imagination given the known appeal of technology to most children, particularly children on the autistic spectrum (Gal et al., 2005). This study found that the children with autism were as able as the typically developing children to engage with the task, although qualitative differences in their responses were recorded.

Journal ArticleDOI
TL;DR: The experiments show that Smoothed Pointing allows a significant decrease in the error rate and achieves the highest values of throughput in trajectory-based tasks, and indicate that the effectiveness of precision enhancing techniques is significantly affected by the pointing modality and the type of pointing task.
Abstract: The increasing use of remote pointing devices in various application domains is fostering the adoption of pointing enhancement techniques which are aimed at counterbalancing the shortcomings of desk-free interaction. This paper describes the strengths and weaknesses of existing methods for ray pointing facilitation, and presents a refinement of Smoothed Pointing, an auto-calibrating velocity-oriented precision enhancing technique. Furthermore, the paper discusses the results of a user study aimed at empirically investigating how velocity-oriented approaches perform in target acquisition and in trajectory-based interaction tasks, considering both laser-style and image-plane pointing modalities. The experiments, carried out in a low precision scenario in which a Wiimote was used both as a wand and a tracking system, show that Smoothed Pointing allows a significant decrease in the error rate and achieves the highest values of throughput in trajectory-based tasks. The results also indicate that the effectiveness of precision enhancing techniques is significantly affected by the pointing modality and the type of pointing task.
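Smoothed Pointing itself is auto-calibrating and its exact formulation is not given in the abstract; the sketch below only illustrates the general velocity-oriented idea behind such precision-enhancing techniques, with assumed gain bounds and velocity thresholds.

```python
# Generic velocity-oriented precision enhancement: slow ray movement gets a low
# control-display gain (precision), fast movement gets full gain (ballistic reach).
# Thresholds and gain bounds are assumptions; the auto-calibration step is not reproduced.

def velocity_gain(speed_deg_per_s: float,
                  low_speed: float = 5.0, high_speed: float = 60.0,
                  min_gain: float = 0.3, max_gain: float = 1.0) -> float:
    """Interpolate the gain between min_gain (slow, precise) and max_gain (fast, ballistic)."""
    if speed_deg_per_s <= low_speed:
        return min_gain
    if speed_deg_per_s >= high_speed:
        return max_gain
    t = (speed_deg_per_s - low_speed) / (high_speed - low_speed)
    return min_gain + t * (max_gain - min_gain)

def update_cursor(cursor_xy, raw_delta_xy, speed_deg_per_s):
    """Scale the raw ray displacement by the velocity-dependent gain."""
    g = velocity_gain(speed_deg_per_s)
    return (cursor_xy[0] + g * raw_delta_xy[0],
            cursor_xy[1] + g * raw_delta_xy[1])
```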

Journal ArticleDOI
TL;DR: This paper provides a thorough and comprehensive synthesis of the disparate literature that pertains to the subject of proximity, offering insights into why existing methods for reasoning with proximity work, or do not work, and analyses their strengths and weaknesses.
Abstract: In order to design computer systems that are intuitive to use, the way humans reason about their "real world" surroundings needs to be taken into consideration. Geographic Information Systems (GIS) focus on spatial reasoning. Over the last decades, many advances have been made in GIS interfaces and functionality; however, the concept of proximity or nearness, which is essential in many forms of human reasoning, is still being addressed insufficiently. This paper provides a thorough and comprehensive synthesis of the disparate literature that pertains to the subject of proximity. It offers insights into why existing methods for reasoning with proximity work, or do not work, and analyses their strengths and weaknesses. Finally, the paper provides the derivation of new proximity measures, and their evaluation, backed by experiments and reflections. New measures are formally described in a unifying and compelling framework. This framework acknowledges that while distance is one factor that influences proximity perception, proximity is much more than just a distance measure.

Journal ArticleDOI
TL;DR: The results show that users do not perform as well in terms of text entry efficiency and speed using a multi-touch interface as with a traditional keyboard, and a baseline for further research to explore techniques for improving text entry performance on multi- touch systems is provided.
Abstract: Multi-touch, which has been heralded as a revolution in human-computer interaction, provides features such as gestural interaction, tangible interfaces, pen-based computing, and interface customization, features embraced by an increasingly tech-savvy public. However, multi-touch platforms have not been adopted as "everyday" computer interaction devices that support important text entry intensive applications such as word processing and spreadsheets. In this paper, we present two studies that begin to explore user performance and experience with entering text using a multi-touch input. The first study establishes a benchmark for text entry performance on a multi-touch platform across input modes that compare uppercase-only to mixed-case, single-touch to multi-touch and copy to memorization tasks. The second study includes mouse style interaction for formatting rich text to simulate a word processing task using multi-touch input. As expected, our results show that users do not perform as well in terms of text entry efficiency and speed using a multi-touch interface as with a traditional keyboard. Not as expected was the result that degradation in performance was significantly less for memorization versus copy tasks, and consequently willingness to use multi-touch was substantially higher (50% versus 26%) in the former case. Our results, which include preferred input styles of participants, also provide a baseline for further research to explore techniques for improving text entry performance on multi-touch systems.

Journal ArticleDOI
TL;DR: An experiment was conducted with a prototype path planner under various conditions to assess the effect on operator performance, and unexpectedly, participants were able to better optimize more complex cost functions as compared to a simple time-based cost function.
Abstract: Path planning is a problem encountered in multiple domains, including unmanned vehicle control, air traffic control, and future exploration missions to the Moon and Mars. Due to the voluminous and complex nature of the data, path planning in such demanding environments requires the use of automated planners. In order to better understand how to support human operators in the task of path planning with computer aids, an experiment was conducted with a prototype path planner under various conditions to assess the effect on operator performance. Participants were asked to create and optimize paths based on increasingly complex path cost functions, using different map visualizations including a novel visualization based on a numerical potential field algorithm. They also planned paths under degraded automation conditions. Participants exhibited two types of analysis strategies, which were global path regeneration and local sensitivity analysis. No main effect due to visualization was detected, but results indicated that the type of optimizing cost function affected performance, as measured by metabolic costs, sun position, path distance, and task time. Unexpectedly, participants were able to better optimize more complex cost functions as compared to a simple time-based cost function.
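As a hedged sketch of the kind of multi-criteria path cost the planner asks operators to optimise, the snippet below weights the metrics mentioned in the abstract (metabolic cost, sun exposure, distance, time); the weights, units, and per-segment fields are illustrative assumptions, not the study's planner.

```python
# Illustrative weighted path cost over path segments; field names, units, and weights are assumed.
from typing import List, Dict

def path_cost(segments: List[Dict], weights: Dict[str, float]) -> float:
    """Sum weighted per-segment costs: metabolic energy, sun exposure, distance, and time."""
    total = 0.0
    for seg in segments:
        total += (weights["metabolic"] * seg["metabolic_J"]
                  + weights["sun"] * seg["sun_exposure"]
                  + weights["distance"] * seg["distance_m"]
                  + weights["time"] * seg["duration_s"])
    return total

# A simple time-based objective just zeroes the other weights; a complex objective mixes them.
time_only    = {"metabolic": 0.0, "sun": 0.0, "distance": 0.0, "time": 1.0}
multi_factor = {"metabolic": 0.5, "sun": 0.2, "distance": 0.1, "time": 0.2}
```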

Journal ArticleDOI
TL;DR: The semiotic analysis of this case study shows that although multi- touch interfaces can facilitate user exploration, the lack of well-known standards in multi-touch interface design and in the use of gestures makes the user interface difficult to use and interpret.
Abstract: Although multi-touch applications and user interfaces have become increasingly common in the last few years, there is no agreed-upon multi-touch user interface language yet. In order to gain a deeper understanding of the design of multi-touch user interfaces, this paper presents semiotic analysis of multi-touch applications as an interesting approach to gain deeper understanding of the way users use and understand multi-touch interfaces. In a case study example, user tests of a multi-touch tabletop application platform called MuTable are analysed with the Communicability Evaluation Method to evaluate to what extent users understand the intended messages (e.g., cues about interaction and functionality) the MuTable platform communicates. The semiotic analysis of this case study shows that although multi-touch interfaces can facilitate user exploration, the lack of well-known standards in multi-touch interface design and in the use of gestures makes the user interface difficult to use and interpret. This conclusion points to the importance of the elusive balance between letting users explore multi-touch systems on their own on one hand, and guiding users, explaining how to use and interpret the user interface, on the other.

Journal ArticleDOI
TL;DR: Two experiments explored how learners allocate limited time across a set of relevant on-line texts to determine the extent to which time allocation is sensitive to local task demands, and showed that readers shift preference towards harder texts when their learning goals are more demanding.
Abstract: Two experiments explored how learners allocate limited time across a set of relevant on-line texts, in order to determine the extent to which time allocation is sensitive to local task demands. The first experiment supported the idea that learners will spend more of their time reading easier texts when reading time is more limited; the second experiment showed that readers shift preference towards harder texts when their learning goals are more demanding. These phenomena evince an impressive capability of readers. Further, the experiments reveal that the most common method of time allocation is a version of satisficing (Reader and Payne, 2007) in which preference for texts emerges without any explicit comparison of the texts (the longest time spent reading each text is on the first time that text is encountered). These experiments therefore offer further empirical confirmation for a method of time allocation that relies on monitoring on-line texts as they are read, and which is sensitive to learning goals, available time and text difficulty.