
Showing papers in "Interaction Studies in 2007"


Journal ArticleDOI
TL;DR: It is argued that lacking humanlike mental models and a sense of self, robots may prove untrustworthy and will be rejected from human teams.
Abstract: The team has become a popular model to organize joint human–robot behavior. Robot teammates are designed with high levels of autonomy and well-developed coordination skills to aid humans in unpredictable environments. In this paper, we challenge the assumption that robots will succeed as teammates alongside humans. Drawing from the literature on human teams, we evaluate robots’ potential to meet the requirements of successful teammates. We argue that, lacking humanlike mental models and a sense of self, robots may prove untrustworthy and will be rejected from human teams. Benchmarks for evaluating human–robot teams are included, as are guidelines for defining alternative structures for human–robot groups.

248 citations


Journal ArticleDOI
TL;DR: The authors examine watershed moments in the history of human-machine interaction, focusing on the pertinence of relational artifacts to our collective perception of aliveness, life's purposes, and the implications of relational artifacts for relationships.
Abstract: The first generation of children to grow up with electronic toys and games saw computers as our “nearest neighbors.” They spoke of computers as rational machines and of people as emotional machines, a fragile formulation destined to be challenged. By the mid-1990s, computational creatures, including robots, were presenting themselves as “relational artifacts,” beings with feelings and needs. One consequence of this development is a crisis in authenticity in many quarters. In an increasing number of situations, people behave as though they no longer place value on living things and authentic emotion. This paper examines watershed moments in the history of human–machine interaction, focusing on the pertinence of relational artifacts to our collective perception of aliveness, life’s purposes, and the implications of relational artifacts for relationships. For now, the exploration of human–robot encounters leads us to questions about the morality of creating believable digital companions that are evocative but not authentic.

209 citations


Journal ArticleDOI
TL;DR: This paper presents a new interactive narrative architecture designed using a set of dramatic techniques that have been formulated based on several years of training in film and theatre.
Abstract: Interactive narratives have been used in a variety of applications, including video games, educational games, and training simulations. Maintaining engagement within such environments is an important problem, because it affects entertainment, motivation, and presence. Performance arts theorists have discussed and formalized many techniques that increase engagement and enhance the dramatic content of art productions. While constructing a narrative manually using these techniques is acceptable for linear media, using this approach for interactive environments results in inflexible experiences due to the unpredictability of users’ actions. A few researchers have attempted to develop adaptive interactive narrative experiences. However, developing a quality interactive experience is largely an art process, and many of these adaptive techniques do not encode artistic principles. This paper presents a new interactive narrative architecture designed using a set of dramatic techniques that have been formulated based on several years of training in film and theatre.

92 citations



Journal ArticleDOI
TL;DR: This approach adopts a bipartite model taken from narrative theory, in which narrative is composed of story and discourse; story elements are defined in terms of plans that drive the dynamics of a virtual environment.
Abstract: In this paper, we set out a basic approach to the modeling of narrative in interactive virtual worlds. This approach adopts a bipartite model taken from narrative theory, in which narrative is composed of story and discourse. In our approach, story elements — plot and character — are defined in terms of plans that drive the dynamics of a virtual environment. Discourse elements — the narrative’s communicative actions — are defined in terms of discourse plans whose communicative goals include conveying the story world plan’s structure. To ground the model in computational terms, we provide examples from research under way in the Liquid Narrative Group involving the design of the Mimesis system, an architecture for intelligent interactive narrative incorporating concepts from artificial intelligence, narrative theory, cognitive psychology and computational linguistics.

65 citations
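To make the story/discourse split concrete, here is a minimal Python sketch of how plot steps and the communicative actions that convey them might be represented as plans. The class names, fields, and example actions are illustrative assumptions, not the actual Mimesis data structures or planning formalism.

```python
from dataclasses import dataclass
from typing import List

# Illustrative, simplified plan representation; the real Mimesis
# architecture uses a richer planning formalism than this sketch.

@dataclass
class StoryAction:
    name: str                 # e.g., "unlock(door)"
    preconditions: List[str]  # world facts required before execution
    effects: List[str]        # world facts that hold afterwards

@dataclass
class DiscourseAction:
    goal: str                    # communicative goal, e.g., "convey(unlock(door))"
    conveys: List[StoryAction]   # story steps this narration/camera act exposes

@dataclass
class Narrative:
    story: List[StoryAction]        # plot/character plan driving the world
    discourse: List[DiscourseAction]  # plan for how the story is told to the user

# Example: a two-step plot and a discourse plan that reveals its structure.
find_key = StoryAction("find(key)", ["at(hero, cellar)"], ["has(hero, key)"])
unlock = StoryAction("unlock(door)", ["has(hero, key)"], ["open(door)"])
narrative = Narrative(
    story=[find_key, unlock],
    discourse=[DiscourseAction("convey(find(key))", [find_key]),
               DiscourseAction("convey(unlock(door))", [unlock])],
)
```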


Journal ArticleDOI
TL;DR: In this paper, the authors investigated human and robot personality traits as part of a human-robot interaction trial and found that participants did not tend to rate the robot's personality traits to match their own; for individual traits, participants rated themselves as having stronger personality characteristics than the robot.
Abstract: Identifying links between human personality and attributed robot personality is a relatively new area of human–robot interaction. In this paper we report on an exploratory study that investigates human and robot personality traits as part of a human–robot interaction trial. The trials took place in a simulated living-room scenario involving 28 participants and a human-sized robot of mechanical appearance. Participants interacted with the robot in two task scenarios relevant to a ‘robot in the home’ context. It was found that participants’ evaluations of their own personality traits are related to their evaluations of the robot’s personality traits. The statistical analysis of questionnaire data yields several statistically significant results: (a) Participants do not tend to assign their personality traits to match the robots’, (b) For individual personality traits, participants rated themselves as having stronger personality characteristics compared to the robot, (c) Specific significant correlations were found between participants’ and robot personality traits, and (d) Significant group differences for participant gender, age and technological background are highlighted. The results are discussed in light of developing personalized robot companions.

63 citations


Journal ArticleDOI
TL;DR: In this article, a distributed view of language is used to naturalize symbol grounding: learning to talk is traced not to categorizing speech sounds, but to events that shape the rise of human-style autonomy.
Abstract: Taking a distributed view of language, this paper naturalizes symbol grounding. Learning to talk is traced not to categorizing speech sounds, but to events that shape the rise of human-style autonomy. On the extended symbol hypothesis, this happens as babies integrate micro-activity with slow and deliberate adult action. As they discover social norms, intrinsic motive formation enables them to reshape co-action. Because infants link affect to contingencies, dyads develop norm-referenced routines. Over time, infant doings become amenable to analysis. The caregiver of a nine-month-old may, for example, prompt the baby to fetch objects. Once she concludes that the baby uses ‘words’ to understand what she says, the infant can use this belief in orienting to more abstract contingencies. New cognitive powers will develop as the baby learns to act in ways that are consistent with a caregiver’s false belief that her baby uses ‘words.’

60 citations


Journal ArticleDOI
TL;DR: In this paper, the authors discuss how to simulate language evolution in a relatively complex environment which has been developed in the context of the New Ties project and demonstrate how external (or social) symbol grounding can be studied in simulations with large populations.
Abstract: This paper illustrates how external (or social) symbol grounding can be studied in simulations with large populations. We discuss how we can simulate language evolution in a relatively complex environment which has been developed in the context of the New Ties project. This project has the objective of evolving a cultural society and, in doing so, the agents have to evolve a communication system that is grounded in their interactions with their virtual environment and with other individuals. A preliminary experiment is presented in which we investigate the effect of a number of learning mechanisms. The results show that the social symbol grounding problem is a particularly hard one; however, we provide an ideal platform to study this problem.

46 citations
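As a rough illustration of how social symbol grounding can be studied in population simulations, the sketch below implements a naming-game-style lexicon update, a standard simplified mechanism in the language-evolution literature. It is only an assumed stand-in: the New Ties agents learn in a far richer virtual environment, and the agent class, scoring rule, and update rate here are not taken from the project.

```python
import random
from collections import defaultdict

# Naming-game-style lexicon update: a standard, simplified mechanism used
# here only to illustrate social symbol grounding in a population; it is not
# the New Ties learning machinery.

class Agent:
    def __init__(self):
        self.lexicon = defaultdict(float)  # (meaning, word) -> association score

    def speak(self, meaning):
        # pick the strongest known word for this meaning, or invent a new one
        known = [(w, s) for (m, w), s in self.lexicon.items() if m == meaning and s > 0]
        if not known:
            word = f"w{random.randrange(10_000)}"
            self.lexicon[(meaning, word)] = 0.1
            return word
        return max(known, key=lambda kv: kv[1])[0]

    def interpret(self, word):
        # best-guess meaning for a heard word, or None if the word is unknown
        known = [(m, s) for (m, w), s in self.lexicon.items() if w == word and s > 0]
        return max(known, key=lambda kv: kv[1])[0] if known else None

    def update(self, meaning, word, success, rate=0.1):
        # reinforce the pairing after communicative success, weaken it otherwise
        self.lexicon[(meaning, word)] += rate if success else -rate

# One interaction: a speaker names a topic, a hearer interprets, both adapt.
speaker, hearer, topic = Agent(), Agent(), "red_ball"
word = speaker.speak(topic)
success = hearer.interpret(word) == topic
speaker.update(topic, word, success)
hearer.update(topic, word, success)
```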


Journal ArticleDOI
TL;DR: The results indicate that the robot’s representations are capable of incrementally evolving by correcting class descriptions based on instructor feedback on classification results; the results are comparable to those obtained by other authors.
Abstract: This paper addresses word learning for human–robot interaction. The focus is on making a robotic agent aware of its surroundings, by having it learn the names of the objects it can find. The human user, acting as instructor, can help the robotic agent ground the words used to refer to those objects. A lifelong learning system, based on one-class learning, was developed (OCLL). This system is incremental and evolves with the presentation of any new word, which acts as a class to the robot, relying on instructor feedback. A novel experimental evaluation methodology, that takes into account the open-ended nature of word learning, is proposed and applied. This methodology is based on the realization that a robot’s vocabulary will be limited by its discriminatory capacity which, in turn, depends on its sensors and perceptual capabilities. The results indicate that the robot’s representations are capable of incrementally evolving by correcting class descriptions, based on instructor feedback on classification results. In successive experiments, it was possible for the robot to learn between 6 and 12 names of real-world office objects. Although these results are comparable to those obtained by other authors, there is a need to scale up. The limitations of the method are discussed and potential directions for improvement are pointed out.

45 citations
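The abstract does not give the OCLL algorithm itself, but the open-ended, feedback-driven scheme it describes can be sketched as one class model per word that grows whenever the instructor corrects a misclassification. The nearest-mean rule, feature handling, and method names below are assumptions for illustration, not the published system.

```python
import numpy as np

# Illustrative sketch of open-ended, one-class-per-word learning with
# instructor feedback. The actual OCLL system uses different class models
# and visual features; the nearest-mean rule here is an assumption.

class WordLearner:
    def __init__(self):
        self.classes = {}  # word -> list of feature vectors seen so far

    def teach(self, word, features):
        # instructor names an object; store its features under that word
        self.classes.setdefault(word, []).append(np.asarray(features, dtype=float))

    def classify(self, features):
        # nearest class mean over the known words; None if vocabulary is empty
        if not self.classes:
            return None
        x = np.asarray(features, dtype=float)
        return min(self.classes,
                   key=lambda w: np.linalg.norm(x - np.mean(self.classes[w], axis=0)))

    def feedback(self, features, predicted, correct_word):
        # instructor corrects a misclassification: the class description of
        # the correct word is extended with the new example
        if predicted != correct_word:
            self.teach(correct_word, features)

# Usage: teach two object names, classify a new view, correct if needed.
learner = WordLearner()
learner.teach("mug", [0.9, 0.1])
learner.teach("stapler", [0.1, 0.8])
guess = learner.classify([0.2, 0.9])
learner.feedback([0.2, 0.9], guess, "stapler")
```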


Journal ArticleDOI
TL;DR: In this paper, the authors present an alternative benchmark that lies in the dyad and not in the agent alone: does the agent’s behavior evoke intersubjectivity from the user? That is, in both conscious and unconscious communication, do users react to behaviorally realistic agents in the same way they react to other humans, and do they appear to attribute similar thoughts and actions?
Abstract: What is the hallmark of success in human–agent interaction? In animation and robotics, many have concentrated on the looks of the agent — whether the appearance is realistic or lifelike. We present an alternative benchmark that lies in the dyad and not the agent alone: Does the agent’s behavior evoke intersubjectivity from the user? That is, in both conscious and unconscious communication, do users react to behaviorally realistic agents in the same way they react to other humans? Do users appear to attribute similar thoughts and actions? We discuss why we distinguish between appearance and behavior, why we use the benchmark of intersubjectivity, our methodology for applying this benchmark to embodied conversational agents (ECAs), and why we believe this benchmark should be applied to human–robot interaction.

43 citations


Journal ArticleDOI
TL;DR: An experimental protocol is developed to study the influence of the specific constraints of the learning areas (spatial planning versus dialogue understanding), as well as of human-computer interface modalities, on the executive functions of teenagers diagnosed with high functioning autism.
Abstract: Our exploratory research aims at suggesting design principles for educational software dedicated to people with high functioning autism. In order to explore the efficiency of educational games, we developed an experimental protocol to study the influence of the specific constraints of the learning areas (spatial planning versus dialogue understanding) as well as Human Computer Interface modalities. We designed computer games that were tested with 10 teenagers diagnosed with high functioning autism, during 13 sessions, at the rate of one session per week. Participants’ skills were assessed before and after a training period. A group of 10 typical children matched on academic level also took part in the experiment. A software platform was developed to manage interface modalities and log users’ actions. Moreover, we annotated video recordings of two sessions. Results underline the influence of the task and interface modalities on executive functions.

Journal ArticleDOI
TL;DR: Theory of Mind (ToM) is the ability to predict and understand the mental state of another; this paper shows that ToM plays a key role in communicating information effectively in conversation.
Abstract: Theory of Mind (ToM) is the ability to predict and understand the mental state of another. While ToM is theorized to play a role in language, we examined whether such a mentalizing ability plays an important role in establishing shared understanding in conversation. Pairs of participants engaged in a Lego model building task in which a director instructed a builder on how to create duplicate models from a prototype that only the director could see. We manipulated whether the director could see (visible condition) or could not see (hidden condition) the builder’s workspace. As predicted, the Mind in the Eyes test (a measure of ToM) predicted accuracy when the workspace was hidden. A high mentalizing ability was an advantage when instructing, resulting in fewer errors, but may be a disadvantage when following instructions. This research indicates that ToM plays a key role in communicating information effectively in conversation.

Journal ArticleDOI
TL;DR: In this paper, the authors examined whether a screen-based virtual pet, specifically Nintendogs, gives any form of companionship comparable to a real pet, and found that a Nintendog does give companionship, though significantly less than that given by a real dog or cat.
Abstract: The purpose of this short paper is to examine whether a screen-based virtual pet, specifically Nintendogs, gives any form of companionship comparable to a real pet. Nintendogs runs on a Nintendo DS, a mobile games console. The unit has a full colour screen showing an animated puppy which users must feed, water, walk, play with and train. An abundance of literature exists examining the benefits of owning a real pet, yet very little has been written about human attachment to virtual pets. Six hundred five Nintendog users were contacted by email with a request to complete a questionnaire about their interaction; 80 (13%) responded. Nine hundred requests were made to a similar group who were asked to respond about their real pet, if they had one. One hundred sixteen responses were received. Results indicate that a Nintendog does give companionship, but significantly less than that given by a real dog or cat.

Journal ArticleDOI
TL;DR: In this paper, the authors examine how children's beliefs can guide the way they interact with and learn about the robot and suggest that better collaboration might require that robots be designed to maximize their relationship potential with specific users.
Abstract: Research on human–robot interaction has often ignored the human cognitive changes that might occur when humans and robots work together to solve problems. Facilitating human–robot collaboration will require understanding how the collaboration functions system-wide. We present detailed examples drawn from a study of children and an autonomous rover, and examine how children’s beliefs can guide the way they interact with and learn about the robot. Our data suggest that better collaboration might require that robots be designed to maximize their relationship potential with specific users.

Journal ArticleDOI
TL;DR: In the human brain, symbol formation and grounding is an ongoing process of generalising constraints from particular contexts, selectively enlisting their use, and re-automating them.
Abstract: After reviewing the papers in this special issue, I must conclude that brains are not syntactic engines, but control systems that orient to biological, interindividual, and cultural norms. By themselves, syntactic constraints both underdetermine and overdetermine cognitive operations. So, rather than serving as the basis for general cognition, they are just another kind of empirically acquired constraint. In humans, symbols emerge from a particular sensorimotor activity through a process of contextual broadening that depends on the coordination of conscious and nonconscious processing. This process provides the representational freedom and stability that constitute the human brain’s solution to the frame problem and symbol grounding problem. Symbol formation and grounding is an ongoing process of generalising constraints from particular contexts, selectively enlisting their use, and re-automating them. This process is central to the self-creation of a language-using person with beliefs, agency, and identity.


Journal ArticleDOI
TL;DR: The pointing and vocalization of 16.5-month-old infants are reported as a function of the context in which they occurred, and manual–vocal signals appear to express the operation of an integrated system, arguably adaptive in the young from evolutionary times to the present.
Abstract: It has long been asserted that the evolutionary path to spoken language was paved by manual–gestural behaviors, a claim that has been revitalized in response to recent research on mirror neurons. Renewed interest in the relationship between manual and vocal behavior draws attention to its development. Here, the pointing and vocalization of 16.5-month-old infants are reported as a function of the context in which they occurred. When infants operated in a referential mode, the frequency of simultaneous vocalization and pointing exceeded the frequency of vocalization-only and pointing-only responses by a wide margin. In a non-communicative context, combinatorial effects persisted, but in weaker form. Manual–vocal signals thus appear to express the operation of an integrated system, arguably adaptive in the young from evolutionary times to the present. It was speculated, based on reported evidence, that manual behavior increases the frequency and complexity of vocal behaviors in modern infants. There may be merit in the claim that manual behavior facilitated the evolution of language because it helped make available, early in development, behaviors that under selection pressures in later ontogenetic stages elaborated into speech.

Journal ArticleDOI
TL;DR: Getting the right set of benchmarks is critical for the emerging field of human–robot interaction (HRI): benchmarks can help establish the questions the field asks in setting its research agenda, determine where funding is directed, and shape how graduate students are educated.
Abstract: The idea for this special issue took shape during discussions on the prospects for using technology to simulate nature and, in particular, the human form. Could it be possible to devise an artificial human being? The computer scientist and robotic engineer, with such ambitions, can reply: “Sure, just give us major funding, say a half billion dollars, and 30 years, and we’ll show you how.” The skeptic can reply, “You’re kidding, right?” Along these lines, debates have raged in the philosophy of mind and cognitive science on whether anything like present day computers could implement a conscious mind, one that could experience what it’s like to be human. Moreover, because it is unclear even what makes us conscious, this problem is likely to remain a hard one for years to come. Nevertheless, we can reframe what a human being is from the standpoint of human attribution. If the technologist insists that it will eventually be possible to build an artificial human being, it is important to determine what would count as one in our own estimation, taking a view from the outside. The question then becomes: What are the benchmarks — categories of interaction that capture fundamental aspects of human life — by which we could measure progress toward this goal? Getting the right set of benchmarks then becomes critical for the emerging field of human–robot interaction (HRI). The benchmarks can help establish the questions the field asks in setting its research agenda, determining where funding is directed, and shaping how graduate students are educated. The right set of benchmarks will also be important to other disciplines, such as comparative psychology, and to meeting the long-term needs of society in areas such as nursing, eldercare, and social work. To these ends, Peter H. Kahn, Jr. and his colleagues proposed six benchmarks in a paper he showed Kerstin Dautenhahn, who was then organizing the 15th IEEE International Symposium on Robot and Human Interactive Communication

Journal ArticleDOI
TL;DR: In this paper, an alternative model of symbol internalisation based on Vygotsky is put forward which goes further in showing how symbols can go from playing intersubjective communicative roles to intrasubjective cognitive ones.
Abstract: This paper compares the nascent theory of the ‘semiotic symbol’ in cognitive science with its computational relative. It finds that the semiotic symbol as it is understood in recent practical and theoretical work does not have the resources to explain the role of symbols in cognition. In light of this argument, an alternative model of symbol internalisation, based on Vygotsky, is put forward which goes further in showing how symbols can go from playing intersubjective communicative roles to intrasubjective cognitive ones. Such a formalisation restores the symbol’s cognitive and communicative dimensions to their proper roles.

Journal ArticleDOI
TL;DR: The symbol grounding problem is presented in the larger context of a materialist theory of content; two problems for causal, teleo-functional accounts of content lead to a distinction between two kinds of mental representations, presentations and symbols, of which only the latter are cognitive.
Abstract: I present the symbol grounding problem in the larger context of a materialist theory of content and then present two problems for causal, teleo-functional accounts of content. This leads to a distinction between two kinds of mental representations: presentations and symbols; only the latter are cognitive. Based on Milner and Goodale’s dual route model of vision, I posit the existence of precise interfaces between cognitive systems that are activated during object recognition. Interfaces are constructed as a child learns, and is taught, how to interact with its environment; hence, interface structure has a social determinant essential for symbol grounding. Symbols are encoded in the brain to exploit these interfaces, by having projections to the interfaces that are activated by what they symbolise. I conclude by situating my proposal in the context of Harnad’s (1990) solution to the symbol grounding problem and responding to three standard objections.

Journal ArticleDOI
TL;DR: Building on the earlier work of Oudeyer, this work extends his model to include a dispersive force intended to account broadly for a speaker’s motivation to increase auditory distinctiveness, and shows that the resulting vowel systems are more representative of the range seen in human languages.
Abstract: The traditional view of symbol grounding seeks to connect an a priori internal representation or ‘form’ to its external referent. But such a ‘form’ is usually itself systematically composed out of more primitive parts (i.e., it is ‘symbolic’), so this view ignores its grounding in the physics of the world. Some previous work simulating multiple talking/listening agents has effectively taken this stance, and shown how a shared discrete speech code (i.e., vowel system) can emerge. Taking the earlier work of Oudeyer, we have extended his model to include a dispersive force intended to account broadly for a speaker’s motivation to increase auditory distinctiveness. New simulations show that vowel systems result that are more representative of the range seen in human languages. These simulations make many profound abstractions and assumptions. Relaxing these by including more physically and physiologically realistic mechanisms for talking and listening is seen as the key to replicating more complex and dynamic aspects of speech, such as consonant-vowel patterning.
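The dispersive force described above can be illustrated with a toy update in which vowel prototypes repel one another in a two-dimensional, formant-like perceptual space. This is only a sketch of the general idea, assuming an inverse-square repulsion; it is not Oudeyer's model or the authors' extension of it.

```python
import numpy as np

# Toy dispersion of vowel prototypes in a formant-like unit square.
# Prototypes repel each other (inverse-square force), so they spread out
# and become more auditorily distinct; real models add imitation dynamics,
# articulatory constraints, and perceptual noise.

rng = np.random.default_rng(0)
vowels = rng.uniform(0.0, 1.0, size=(5, 2))  # 5 prototypes in [0, 1]^2

def disperse(prototypes, step=0.01, iterations=500):
    v = prototypes.copy()
    for _ in range(iterations):
        for i in range(len(v)):
            # sum of repulsions from all other prototypes
            diff = v[i] - np.delete(v, i, axis=0)
            dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-6
            force = (diff / dist**3).sum(axis=0)
            # move along the net force, staying inside the perceptual space
            v[i] = np.clip(v[i] + step * force, 0.0, 1.0)
    return v

dispersed = disperse(vowels)  # prototypes end up spread toward edges/corners
```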

Journal ArticleDOI
TL;DR: This work considers circumstances where, for this robotic embodiment, dynamic observation has both advantages and disadvantages when compared to static observation, and illustrates the differences and trade-offs that arise between static observational and reactive following learning methods.
Abstract: Research into robotic social learning, especially that concerned with imitation, often focuses at differing ends of a spectrum from observational learning at one end to following or matched-dependent behaviour at the other. We study the implications and differences that arise when carrying out experiments both at the extremes and within this spectrum. Physical Khepera robots with minimal sensory capabilities are used, and after training, experiments are carried out where an imitating robot perceives the dynamic movement behaviours of another model robot carrying a light source. It learns the movement behaviour of the model by either statically observing the model, dynamically observing the model or by following the model. It finally re-enacts the learnt behaviour. We compare the results of these re-enactments and illustrate the differences and trade-offs that arise between static observational and reactive following learning methods. We also consider circumstances where, for this robotic embodiment, dynamic observation has both advantages and disadvantages when compared to static observation. We conclude by discussing the implications that arise from using and combining these types of social learning.

Journal ArticleDOI
TL;DR: The first Young Researchers in Human–Robot Interaction Workshop, held on March 1, 2006 in Salt Lake City, Utah, provides insight into how to facilitate the establishment of the HRI community.
Abstract: The first Young Researchers in Human–Robot Interaction Workshop, held on March 1, 2006 in Salt Lake City, Utah, provides insight into how to facilitate the establishment of the HRI community. Organized in conjunction with the first annual ACM/IEEE Human Robot Interaction Conference, the NSF-sponsored workshop assembled 15 graduate students from 5 different countries in computer science, psychology, engineering, and the arts to build the HRI community. This report highlights recommendations from discussion sessions, a synopsis of the plenary address, and representative examples of the participants’ presentations. Participants emphasized that HRI is a unique field, requiring knowledge in computing, psychology, and communications despite the differences in the courses, methods, and philosophies across disciplines. The following are needed for future growth in HRI: (i) stable, canonical robotics platforms for research purposes, (ii) a multidisciplinary community infrastructure to connect researchers, and (iii) a “Berlitz phrasebook” and collected reference materials for helping understand the “other” disciplines.


Journal ArticleDOI
TL;DR: In this article, the authors explore the design and utility of an android counselor/psychotherapist whose body is equipped with semi-autonomous visceral and behavioral capacities for ‘doing intimacy.’
Abstract: Studies of human–human interactions indicate that relational dimensions, which are largely nonverbal, include intimacy/involvement, status/control, and emotional valence. This paper devises codes from a study of couples and strangers which may be behavior-mapped on to next generation android bodies. The codes provide act specifications for a possible benchmark of nonverbal intimacy in human–robot interaction. The appropriateness of emotionally intimate behaviors for androids is considered. The design and utility of the android counselor/psychotherapist is explored, whose body is equipped with semi-autonomous visceral and behavioral capacities for ‘doing intimacy.’