scispace - formally typeset
Author

Arturo Nakasone

Bio: Arturo Nakasone is an academic researcher from the University of Tokyo. He has contributed to research on Rhetorical Structure Theory and Narrative, has an h-index of 10, and has co-authored 17 publications receiving 260 citations. His previous affiliations include the Graduate University for Advanced Studies.

Papers
Book Chapter
19 Jun 2006
TL;DR: The AutoSelect system is introduced, which automatically detects a user's preference based on eye movement data and physiological signals in a two-alternative forced choice task; in an exploratory study, it could correctly classify subjects' choice of neckties.
Abstract: While the objects of our focus of attention ("where we are looking") and the accompanying affective responses to those objects are part of our daily experience, little research exists on the relation between attention and positive affective evaluation. The purpose of our research is to process users' emotion and attention in real time, with the goal of designing systems that may recognize a user's affective response to a particular visually presented stimulus in the presence of other stimuli, and respond accordingly. In this paper, we introduce the AutoSelect system, which automatically detects a user's preference based on eye movement data and physiological signals in a two-alternative forced choice task. In an exploratory study involving the selection of neckties, the system could correctly classify subjects' choice in 81% of cases. In this instance of AutoSelect, the gaze "cascade effect" played a dominant role, whereas pupil size could not be shown to be a reliable predictor of preference.
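The gaze cascade effect mentioned above refers to the finding that, shortly before a choice, gaze shifts progressively toward the item about to be selected. A minimal toy sketch of a classifier exploiting this effect might look as follows; the sample format, window fraction, and function name are illustrative assumptions, not the authors' implementation:

```python
# Toy sketch (illustrative only, not the AutoSelect implementation):
# predict the chosen item in a two-alternative forced choice task by
# counting which item dominates gaze during the final portion of the
# trial, per the gaze cascade effect.

def predict_choice(gaze_samples, late_window=0.5):
    """gaze_samples: time-ordered list of (timestamp, item) pairs,
    item in {"left", "right"}. Returns the item fixated most often
    in the last `late_window` fraction of the trial."""
    if not gaze_samples:
        return None
    t_start = gaze_samples[0][0]
    t_end = gaze_samples[-1][0]
    # Only samples in the late part of the trial carry the cascade signal.
    cutoff = t_end - (t_end - t_start) * late_window
    late = [item for t, item in gaze_samples if t >= cutoff]
    counts = {"left": late.count("left"), "right": late.count("right")}
    return max(counts, key=counts.get)

# Gaze drifts toward the right-hand necktie late in the trial.
samples = [(0.0, "left"), (0.2, "right"), (0.4, "left"),
           (0.6, "right"), (0.8, "right"), (1.0, "right")]
print(predict_choice(samples))  # right
```

A real system such as the one described would additionally fuse physiological signals and handle calibration and noise in the eye-tracking data.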

53 citations

Journal Article
TL;DR: New client software that controls bots based on the Multimodal Presentation Markup Language 3D (MPML3D), a highly expressive XML-based scripting language for controlling the verbal and nonverbal behavior of interacting animated agents is implemented.
Abstract: The aim of this paper is two-fold. First, it describes a scripting language for specifying communicative behavior and interaction of computer-controlled agents ("bots") in the popular three-dimensional (3D) multiuser online world of "Second Life" and the emerging "OpenSimulator" project. While tools for designing avatars and in-world objects in Second Life exist, technology for nonprogrammer content creators of scenarios involving scripted agents is currently missing. Therefore, we have implemented new client software that controls bots based on the Multimodal Presentation Markup Language 3D (MPML3D), a highly expressive XML-based scripting language for controlling the verbal and nonverbal behavior of interacting animated agents. Second, the paper compares the Second Life and OpenSimulator platforms and discusses the merits and limitations of each from the perspective of agent control. Here, we also conducted a small study that compares the network performance of both platforms.

35 citations

Proceedings Article
04 Oct 2005
TL;DR: An approach to evaluating the utility of life-like interface agents that is based on human eye movements rather than questionnaires is motivated, and the investigation of users' eye movements reveals that agent behavior may trigger natural and social interaction behavior in human users.
Abstract: We motivate an approach to evaluating the utility of life-like interface agents that is based on human eye movements rather than questionnaires. An eye tracker is employed to obtain quantitative evidence of a user's focus of attention. The salient feature of our evaluation strategy is that it allows us to measure important properties of a user's interaction experience on a moment-by-moment basis in addition to a cumulative (spatial) analysis of the user's areas of interest. We describe an empirical study in which we compare attending behavior of subjects watching the presentation of an apartment by three types of media: an animated agent, a text box, and speech only. The investigation of users' eye movements reveals that agent behavior may trigger natural and social interaction behavior of human users.

34 citations

Journal Article
TL;DR: AstroSim, a Second Life based prototype application for synchronous collaborative visualization targeted at astronomers, is introduced.
Abstract: We introduce AstroSim, a Second Life based prototype application for synchronous collaborative visualization targeted at astronomers.

25 citations

01 Jan 2005
TL;DR: This paper describes the two building blocks of the approach to affective gaming and discusses the measurement of human physiological activity in game interactions and non-verbal agent behavior.
Abstract: Physiologically interactive (or affective) gaming refers to research on the evocation and detection of emotion during game play [21]. In this paper, we first describe the two building blocks of our approach to affective gaming. The building blocks correspond to two independently conducted research strands on affective human–computer interaction: one on an emotion simulation system for an expressive 3D humanoid agent called Max, which was designed at the University of Bielefeld [13, 2]; the other one on a real-time system for empathic (agent) feedback that is based on human emotional states derived from physiological information, and developed at the University of Tokyo and the National Institute of Informatics [19]. Then, the integration of both systems is motivated in the setting of a cards game called Skip-Bo that is played by a human game partner and Max. Physiological user information is used to enable empathic feedback through non-verbal behaviors of the humanoid agent Max. With regard to the new area of Conversational Informatics we discuss the measurement of human physiological activity in game interactions and non-verbal agent behavior.

25 citations


Cited by
Journal Article
TL;DR: This paper reviews studies on eye movements in decision making and compares their observations to theoretical predictions concerning the role of attention, finding that more accurate assumptions could have been made based on prior attention and eye-movement research.

636 citations

Proceedings Article
01 Oct 2014
TL;DR: Immersion provides benefits beyond the traditional “desktop” visualization tools: it leads to a demonstrably better perception of a datascape geometry, more intuitive data understanding, and a better retention of the perceived relationships in the data.
Abstract: Effective data visualization is a key part of the discovery process in the era of “big data”. It is the bridge between the quantitative content of the data and human intuition, and thus an essential component of the scientific path from data into knowledge and understanding. Visualization is also essential in the data mining process, directing the choice of the applicable algorithms, and in helping to identify and remove bad data from the analysis. However, a high complexity or a high dimensionality of modern data sets represents a critical obstacle. How do we visualize interesting structures and patterns that may exist in hyper-dimensional data spaces? A better understanding of how we can perceive and interact with multidimensional information poses some deep questions in the field of cognition technology and human-computer interaction. To this effect, we are exploring the use of immersive virtual reality platforms for scientific data visualization, both as software and inexpensive commodity hardware. These potentially powerful and innovative tools for multi-dimensional data visualization can also provide an easy and natural path to a collaborative data visualization and exploration, where scientists can interact with their data and their colleagues in the same visual space. Immersion provides benefits beyond the traditional “desktop” visualization tools: it leads to a demonstrably better perception of a datascape geometry, more intuitive data understanding, and a better retention of the perceived relationships in the data.

290 citations

Journal Article
TL;DR: This article proposes that self-serving justifications emerging before and after people engage in intentional ethical violations mitigate the threat to the moral self, enabling them to do wrong while feeling moral.
Abstract: Unethical behavior by “ordinary” people poses significant societal and personal challenges. We present a novel framework centered on the role of self-serving justification to build upon and advance the rapidly expanding research on intentional unethical behavior of people who value their morality highly. We propose that self-serving justifications emerging before and after people engage in intentional ethical violations mitigate the threat to the moral self, enabling them to do wrong while feeling moral. Pre-violation justifications lessen the anticipated threat to the moral self by redefining questionable behaviors as excusable. Post-violation justifications alleviate the experienced threat to the moral self through compensations that balance or lessen violations. We highlight the psychological mechanisms that prompt people to do wrong and feel moral, and suggest future research directions regarding the temporal dimension of self-serving justifications of ethical misconduct.

287 citations