Author

Takayuki Kanda

Bio: Takayuki Kanda is an academic researcher from Kyoto University. The author has contributed to research in topics: Social robot & Robot. The author has an h-index of 67 and has co-authored 410 publications receiving 14,825 citations. Previous affiliations of Takayuki Kanda include Osaka University & Wakayama University.


Papers
Journal ArticleDOI
TL;DR: In this paper, two English-speaking "Robovie" robots interacted with first- and sixth-grade pupils at the perimeter of their respective classrooms, using wireless identification tags and sensors to identify and interact with children who came near them.
Abstract: Robots increasingly have the potential to interact with people in daily life. It is believed that, based on this ability, they will play an essential role in human society in the not-so-distant future. This article examined the proposition that robots could form relationships with children and that children might learn from robots as they learn from other children. In this article, this idea is studied in an 18-day field trial held at a Japanese elementary school. Two English-speaking "Robovie" robots interacted with first- and sixth-grade pupils at the perimeter of their respective classrooms. Using wireless identification tags and sensors, these robots identified and interacted with children who came near them. The robots gestured and spoke English with the children, using a vocabulary of about 300 sentences for speaking and 50 words for recognition. The children were given a brief picture-word matching English test at the start of the trial, after 1 week and after 2 weeks. Interactions were counted using the tags, and video and audio were recorded. In the majority of cases, a child's friends were present during the interactions. Interaction with the robot was frequent in the 1st week, and then it fell off sharply by the 2nd week. Nonetheless, some children continued to interact with the robot. Interaction time during the 2nd week predicted improvements in English skill at the posttest, controlling for pretest scores. Further analyses indicate that the robots may have been more successful in establishing common ground and influence when the children already had some initial proficiency or interest in English. These results suggest that interactive robots should be designed to have something in common with their users, providing a social as well as technical challenge.
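The central statistical claim here is that week-2 interaction time predicted posttest English scores after controlling for pretest scores. As an illustration only, not the authors' actual analysis code, a regression of that form might be run as in the sketch below; the CSV file and column names are hypothetical.

```python
# Illustrative sketch: OLS regression of posttest English score on week-2
# interaction time, controlling for pretest score. The data file and column
# names (pretest, posttest, week2_interaction_min) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

scores = pd.read_csv("english_scores.csv")  # hypothetical per-child data
model = smf.ols("posttest ~ pretest + week2_interaction_min", data=scores).fit()
print(model.summary())  # the coefficient on week2_interaction_min is the effect of interest
```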

774 citations

Proceedings ArticleDOI
09 Mar 2009
TL;DR: A set of gaze behaviors was designed for Robovie to signal three kinds of participant roles: addressee, bystander, and overhearer. Behavioral measures showed that subjects' participation behavior conformed to the roles that the robot communicated to them.
Abstract: During conversations, speakers establish their and others' participant roles (who participates in the conversation and in what capacity), or "footing" as termed by Goffman, using gaze cues. In this paper, we study how a robot can establish the participant roles of its conversational partners using these cues. We designed a set of gaze behaviors for Robovie to signal three kinds of participant roles: addressee, bystander, and overhearer. We evaluated our design in a controlled laboratory experiment with 72 subjects in 36 trials. In three conditions, the robot signaled to two subjects, only by means of gaze, the roles of (1) two addressees, (2) an addressee and a bystander, or (3) an addressee and an overhearer. Behavioral measures showed that subjects' participation behavior conformed to the roles that the robot communicated to them. In subjective evaluations, significant differences were observed in feelings of groupness between addressees and others and in liking between overhearers and others. Participation in the conversation did not affect task performance, measured by recall of information presented by the robot, but affected subjects' ratings of how much they attended to the task.
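For intuition about role-signaling gaze, a toy controller that distributes a robot's gaze time across partners according to their assigned participant roles might look like the sketch below. The role weights and function are invented for illustration and are not taken from the paper.

```python
# Toy sketch: distribute a robot's gaze time across partners by participant role.
# The role weights below are invented for illustration, not the paper's parameters.
from typing import Dict

ROLE_WEIGHTS = {"addressee": 1.0, "bystander": 0.3, "overhearer": 0.0}

def gaze_schedule(partners: Dict[str, str], total_time: float) -> Dict[str, float]:
    """Return seconds of gaze per partner, proportional to that partner's role weight."""
    weights = {name: ROLE_WEIGHTS[role] for name, role in partners.items()}
    total = sum(weights.values()) or 1.0
    return {name: total_time * w / total for name, w in weights.items()}

# Example: condition (2), an addressee and a bystander, over a 60-second turn.
print(gaze_schedule({"left_subject": "addressee", "right_subject": "bystander"}, 60.0))
```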

370 citations

Journal ArticleDOI
TL;DR: Results are reported from experiments in which subjects interacted with Robovie, a robot being developed as a platform for research on communication robots, and the influences of gender and prior experience with real robots on subjects' negative attitudes and behaviors toward robots are discussed.
Abstract: Negative attitudes toward robots are considered one of the psychological factors preventing humans from interacting with robots in daily life. To verify their influence on humans' behaviors toward robots, we designed and executed experiments where subjects interacted with Robovie, which is being developed as a platform for research on the possibility of communication robots. This paper reports and discusses the results of these experiments on the correlation between subjects' negative attitudes and their behaviors toward robots. Moreover, it discusses the influences of gender and experience with real robots on their negative attitudes and behaviors toward robots.
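The reported analysis is a correlation between questionnaire scores on negative attitudes and behavioral measures, optionally split by gender. A minimal sketch of that kind of analysis follows; the data file, column names, and behavioral measure are hypothetical placeholders, not the paper's materials.

```python
# Illustrative sketch: correlate negative-attitude questionnaire scores with a
# behavioral measure (e.g., delay before first utterance to the robot).
# The DataFrame columns are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("experiment_data.csv")  # hypothetical per-subject data
r, p = stats.pearsonr(df["negative_attitude_score"], df["seconds_until_first_utterance"])
print(f"overall: r = {r:.2f}, p = {p:.3f}")

# Compare the correlation between gender groups (again, illustrative only).
for gender, group in df.groupby("gender"):
    r_g, p_g = stats.pearsonr(group["negative_attitude_score"],
                              group["seconds_until_first_utterance"])
    print(gender, f"r = {r_g:.2f}, p = {p_g:.3f}")
```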

366 citations

Journal ArticleDOI
TL;DR: A mechanism is proposed for two social communication abilities, forming long-term relationships and estimating friendly relationships among people, and the results demonstrate the potential of current interactive robots to establish social relationships with humans in daily life.
Abstract: Interactive robots participating in our daily lives should have the fundamental ability to socially communicate with humans. In this paper, we propose a mechanism for two social communication abilities: forming long-term relationships and estimating friendly relationships among people. The mechanism for long-term relationships is based on three principles of behavior design. The robot we developed, Robovie, is able to interact with children in the same way as children do. Moreover, the mechanism is designed for long-term interaction along the following three design principles: (1) it calls children by name using radio frequency identification tags; (2) it adapts its interactive behaviors for each child based on a pseudo development mechanism; and (3) it confides its personal matters to the children who have interacted with the robot for an extended period of time. Regarding the estimation of friendly relationships, the robot assumes that people who spontaneously behave as a group together are friends. Then, by identifying each person in the interacting group around the robot, it estimates the relationships between them. We conducted a two-month field trial at an elementary school. An interactive humanoid robot, Robovie, was placed in a classroom at the school. The results of the field trial revealed that the robot successfully continued interacting with many children for two months, and seemed to have established friendly relationships with them. In addition, it demonstrated reasonable performance in identifying friendships among children. We believe that these results demonstrate the potential of current interactive robots to establish social relationships with humans in our daily lives.
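The friendship-estimation idea, that children who repeatedly appear around the robot at the same time are likely friends, can be pictured as a simple co-occurrence count over identification-tag logs. The sketch below is only an illustration of that idea; the log format, function, and threshold are assumptions, not the paper's implementation.

```python
# Sketch of the co-presence idea: children whose ID tags are repeatedly detected
# around the robot in the same time window are estimated to be friends.
# The log format and threshold are assumptions for illustration.
from collections import Counter
from itertools import combinations
from typing import Dict, List, Set, Tuple

def estimate_friendships(windows: List[Set[str]],
                         min_co_occurrences: int = 5) -> Dict[Tuple[str, str], int]:
    """windows: for each observation window, the set of child IDs detected together."""
    pair_counts: Counter = Counter()
    for present in windows:
        for pair in combinations(sorted(present), 2):
            pair_counts[pair] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_co_occurrences}

# Example with toy tag logs: each set is one window of detected children.
logs = [{"A", "B"}, {"A", "B", "C"}, {"B", "C"}, {"A", "B"}, {"A", "B"}, {"A", "B"}]
print(estimate_friendships(logs, min_co_occurrences=4))  # {('A', 'B'): 5}
```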

317 citations

Journal ArticleDOI
TL;DR: Discussion focuses on how (a) children's social and moral relationships with future personified robots may well be substantial and meaningful and (b) personified robots of the future may emerge as a unique ontological category.
Abstract: Children will increasingly come of age with personified robots and potentially form social and even moral relationships with them. What will such relationships look like? To address this question, 90 children (9-, 12-, and 15-year-olds) initially interacted with a humanoid robot, Robovie, in 15-min sessions. Each session ended when an experimenter interrupted Robovie's turn at a game and, against Robovie's stated objections, put Robovie into a closet. Each child was then engaged in a 50-min structural-developmental interview. Results showed that during the interaction sessions, all of the children engaged in physical and verbal social behaviors with Robovie. The interview data showed that the majority of children believed that Robovie had mental states (e.g., was intelligent and had feelings) and was a social being (e.g., could be a friend, offer comfort, and be trusted with secrets). In terms of Robovie's moral standing, children believed that Robovie deserved fair treatment and should not be harmed psychologically but did not believe that Robovie was entitled to its own liberty (Robovie could be bought and sold) or civil rights (in terms of voting rights and deserving compensation for work performed). Developmentally, while more than half the 15-year-olds conceptualized Robovie as a mental, social, and partly moral other, they did so to a lesser degree than the 9- and 12-year-olds. Discussion focuses on how (a) children's social and moral relationships with future personified robots may well be substantial and meaningful and (b) personified robots of the future may emerge as a unique ontological category.

300 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
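The fourth category in this abstract, per-user customization such as mail filtering, is easy to make concrete: the system learns from labeled examples which messages a user rejects. A minimal sketch using scikit-learn's naive Bayes on invented toy messages (not part of the cited article) follows.

```python
# Minimal sketch of a learned mail filter: the system learns which messages a
# user rejects from labeled examples. The messages and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "cheap loans click here",
    "meeting moved to 3pm", "lunch tomorrow?",
]
rejected = [1, 1, 0, 0]  # 1 = user rejected the message, 0 = kept it

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, rejected)
print(filter_model.predict(["free prize meeting"]))  # predicted label for a new message
```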

13,246 citations

Book
01 Jan 2001
TL;DR: This book discusses Decision-Theoretic Foundations; Game Theory, Rationality, and Intelligence; and the Decision-Analytic Approach to Games, aiming to clarify the role of rationality in decision-making.
Abstract: Preface 1. Decision-Theoretic Foundations 1.1 Game Theory, Rationality, and Intelligence 1.2 Basic Concepts of Decision Theory 1.3 Axioms 1.4 The Expected-Utility Maximization Theorem 1.5 Equivalent Representations 1.6 Bayesian Conditional-Probability Systems 1.7 Limitations of the Bayesian Model 1.8 Domination 1.9 Proofs of the Domination Theorems Exercises 2. Basic Models 2.1 Games in Extensive Form 2.2 Strategic Form and the Normal Representation 2.3 Equivalence of Strategic-Form Games 2.4 Reduced Normal Representations 2.5 Elimination of Dominated Strategies 2.6 Multiagent Representations 2.7 Common Knowledge 2.8 Bayesian Games 2.9 Modeling Games with Incomplete Information Exercises 3. Equilibria of Strategic-Form Games 3.1 Domination and Rationalizability 3.2 Nash Equilibrium 3.3 Computing Nash Equilibria 3.4 Significance of Nash Equilibria 3.5 The Focal-Point Effect 3.6 The Decision-Analytic Approach to Games 3.7 Evolution, Resistance, and Risk Dominance 3.8 Two-Person Zero-Sum Games 3.9 Bayesian Equilibria 3.10 Purification of Randomized Strategies in Equilibria 3.11 Auctions 3.12 Proof of Existence of Equilibrium 3.13 Infinite Strategy Sets Exercises 4. Sequential Equilibria of Extensive-Form Games 4.1 Mixed Strategies and Behavioral Strategies 4.2 Equilibria in Behavioral Strategies 4.3 Sequential Rationality at Information States with Positive Probability 4.4 Consistent Beliefs and Sequential Rationality at All Information States 4.5 Computing Sequential Equilibria 4.6 Subgame-Perfect Equilibria 4.7 Games with Perfect Information 4.8 Adding Chance Events with Small Probability 4.9 Forward Induction 4.10 Voting and Binary Agendas 4.11 Technical Proofs Exercises 5. Refinements of Equilibrium in Strategic Form 5.1 Introduction 5.2 Perfect Equilibria 5.3 Existence of Perfect and Sequential Equilibria 5.4 Proper Equilibria 5.5 Persistent Equilibria 5.6 Stable Sets of Equilibria 5.7 Generic Properties 5.8 Conclusions Exercises 6. Games with Communication 6.1 Contracts and Correlated Strategies 6.2 Correlated Equilibria 6.3 Bayesian Games with Communication 6.4 Bayesian Collective-Choice Problems and Bayesian Bargaining Problems 6.5 Trading Problems with Linear Utility 6.6 General Participation Constraints for Bayesian Games with Contracts 6.7 Sender-Receiver Games 6.8 Acceptable and Predominant Correlated Equilibria 6.9 Communication in Extensive-Form and Multistage Games Exercises Bibliographic Note 7. Repeated Games 7.1 The Repeated Prisoner's Dilemma 7.2 A General Model of Repeated Games 7.3 Stationary Equilibria of Repeated Games with Complete State Information and Discounting 7.4 Repeated Games with Standard Information: Examples 7.5 General Feasibility Theorems for Standard Repeated Games 7.6 Finitely Repeated Games and the Role of Initial Doubt 7.7 Imperfect Observability of Moves 7.8 Repeated Games in Large Decentralized Groups 7.9 Repeated Games with Incomplete Information 7.10 Continuous Time 7.11 Evolutionary Simulation of Repeated Games Exercises 8. Bargaining and Cooperation in Two-Person Games 8.1 Noncooperative Foundations of Cooperative Game Theory 8.2 Two-Person Bargaining Problems and the Nash Bargaining Solution 8.3 Interpersonal Comparisons of Weighted Utility 8.4 Transferable Utility 8.5 Rational Threats 8.6 Other Bargaining Solutions 8.7 An Alternating-Offer Bargaining Game 8.8 An Alternating-Offer Game with Incomplete Information 8.9 A Discrete Alternating-Offer Game 8.10 Renegotiation Exercises 9. Coalitions in Cooperative Games 9.1 Introduction to Coalitional Analysis 9.2 Characteristic Functions with Transferable Utility 9.3 The Core 9.4 The Shapley Value 9.5 Values with Cooperation Structures 9.6 Other Solution Concepts 9.7 Coalitional Games with Nontransferable Utility 9.8 Cores without Transferable Utility 9.9 Values without Transferable Utility Exercises Bibliographic Note 10. Cooperation under Uncertainty 10.1 Introduction 10.2 Concepts of Efficiency 10.3 An Example 10.4 Ex Post Inefficiency and Subsequent Offers 10.5 Computing Incentive-Efficient Mechanisms 10.6 Inscrutability and Durability 10.7 Mechanism Selection by an Informed Principal 10.8 Neutral Bargaining Solutions 10.9 Dynamic Matching Processes with Incomplete Information Exercises Bibliography Index

3,569 citations

Journal ArticleDOI
01 Jun 1959

3,442 citations

01 Mar 1999

3,234 citations

Journal ArticleDOI
TL;DR: The context for socially interactive robots is discussed, emphasizing the relationship to other research fields and the different forms of "social robots", and a taxonomy of design methods and system components used to build socially interactive robots is presented.

2,869 citations