Author

Dylan F. Glas

Bio: Dylan F. Glas is an academic researcher from Osaka University. The author has contributed to research on topics including social robots and robotics. The author has an h-index of 23 and has co-authored 48 publications receiving 2,078 citations. Previous affiliations of Dylan F. Glas include the Massachusetts Institute of Technology.

Papers
Proceedings ArticleDOI
24 Jul 1998
TL;DR: Small, electronically tagged wooden blocks that serve as physical icons (“phicons”) for the containment, transport, and manipulation of online media are presented, providing seamless gateways between tangible and graphical interfaces.
Abstract: We present a tangible user interface based upon mediaBlocks: small, electronically tagged wooden blocks that serve as physical icons (“phicons”) for the containment, transport, and manipulation of online media. MediaBlocks interface with media input and output devices such as video cameras and projectors, allowing digital media to be rapidly “copied” from a media source and “pasted” into a media display. MediaBlocks are also compatible with traditional GUIs, providing seamless gateways between tangible and graphical interfaces. Finally, mediaBlocks act as physical “controls” in tangible interfaces for tasks such as sequencing collections of media elements. CR Categories and Subject Descriptors: H.5.2 [User Interfaces] Input devices and strategies; H.5.1 [Multimedia Information Systems] Artificial, augmented, and virtual realities Additional Keywords: tangible user interface, tangible bits, phicons, physical constraints, ubiquitous computing
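The “copy/paste” idea above can be sketched as a registry that binds a block's tag ID to a reference to online media; “copying” binds the media to the block, and “pasting” dereferences it at a display. This is a minimal illustrative sketch, not the paper's implementation; all names (`MediaRegistry`, `copy_to_block`, the URLs) are assumptions.

```python
class MediaRegistry:
    """Maps physical block tag IDs to references to online media."""

    def __init__(self):
        self._bindings = {}

    def copy_to_block(self, tag_id, media_ref):
        # "Copying" from a media source binds the media to the block.
        self._bindings[tag_id] = media_ref

    def paste_from_block(self, tag_id):
        # "Pasting" into a media display dereferences the block's binding.
        return self._bindings.get(tag_id)


registry = MediaRegistry()
registry.copy_to_block("block-042", "rtsp://server/video/lecture-1")
print(registry.paste_from_block("block-042"))  # rtsp://server/video/lecture-1
```

The key design point is that the block itself carries no media, only an identity; the network holds the content, which is what lets the same block move media between otherwise unconnected devices.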

327 citations

Proceedings ArticleDOI
09 Mar 2009
TL;DR: A model of approach behavior that enables a robot to initiate conversation with people who are walking, shown in a field trial to significantly improve the robot's performance in initiating conversations.
Abstract: This paper proposes a model of approach behavior with which a robot can initiate conversation with people who are walking. We developed the model by learning from the failures in a simplistic approach behavior used in a real shopping mall. Sometimes people were unaware of the robot's presence, even when it spoke to them. Sometimes, people were not sure whether the robot was really trying to start a conversation, and they did not start talking with it even though they displayed interest. To prevent such failures, our model includes the following functions: predicting the walking behavior of people, choosing a target person, planning its approaching path, and nonverbally indicating its intention to initiate a conversation. The approach model was implemented and used in a real shopping mall. The field trial demonstrated that our model significantly improves the robot's performance in initiating conversations.
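Two of the functions listed above, predicting the walking behavior of people and choosing a target person, can be sketched with constant-velocity prediction: extrapolate each person's position a few seconds ahead and pick someone the robot can plausibly reach. This is an illustrative sketch under assumed data structures and thresholds, not the paper's model.

```python
import math

def predict_position(pos, vel, horizon_s):
    """Constant-velocity prediction of a pedestrian's (x, y) position."""
    return (pos[0] + vel[0] * horizon_s, pos[1] + vel[1] * horizon_s)

def choose_target(robot_pos, people, horizon_s=3.0, max_dist=5.0):
    """Pick the person whose predicted position is nearest and reachable.

    people maps an ID to ((x, y) position, (vx, vy) velocity).
    max_dist (meters) is an assumed reachability threshold.
    """
    best, best_d = None, float("inf")
    for pid, (pos, vel) in people.items():
        px, py = predict_position(pos, vel, horizon_s)
        d = math.hypot(px - robot_pos[0], py - robot_pos[1])
        if d < best_d and d <= max_dist:
            best, best_d = pid, d
    return best

people = {
    "p1": ((0.0, 0.0), (1.0, 0.0)),    # walking toward the robot's area
    "p2": ((10.0, 10.0), (0.0, 1.0)),  # walking away
}
print(choose_target((4.0, 0.0), people))  # p1 (predicted at (3, 0), 1 m away)
```

The remaining functions in the model, planning the approach path and nonverbally signaling intent, would build on this prediction step.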

291 citations

Proceedings ArticleDOI
21 Sep 2008
TL;DR: A series of techniques for anticipating people's behavior in a public space, mainly based on the analysis of accumulated trajectories, are presented, and the use of these techniques in a social robot is demonstrated.
Abstract: For a robot providing services to people in a public space such as a train station or a shopping mall, it is important to distinguish potential customers, such as window-shoppers, from other people, such as busy commuters. In this paper, we present a series of techniques for anticipating people's behavior in a public space, mainly based on the analysis of accumulated trajectories, and we demonstrate the use of these techniques in a social robot. We placed a ubiquitous sensor network consisting of six laser range finders in a shopping arcade. The system tracks people's positions as well as their local behaviors such as fast walking, idle walking, or stopping. We accumulated people's trajectories for a week, applying a clustering technique to the accumulated trajectories to extract information about the use of space and people's typical global behaviors. This information enables the robot to target its services to people who are walking idly or stopping. The robot anticipates both the areas in which people are likely to perform these behaviors, and also the probable local behaviors of individuals a few seconds in the future. In a field experiment we demonstrate that this system enables the robot to serve people efficiently.
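The local behaviors named above (fast walking, idle walking, stopping) can be sketched as a classification of average speed over a short window of tracked positions. The thresholds and window parameters here are illustrative assumptions, not values from the paper.

```python
import math

def average_speed(trajectory, dt):
    """Mean speed (m/s) over a list of (x, y) samples taken every dt seconds."""
    dists = [math.hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:])]
    return sum(dists) / (dt * len(dists))

def classify_local_behavior(trajectory, dt=0.5,
                            stop_thresh=0.2, fast_thresh=1.0):
    """Label a short trajectory as stopping, idle walking, or fast walking."""
    v = average_speed(trajectory, dt)
    if v < stop_thresh:
        return "stopping"
    if v < fast_thresh:
        return "idle walking"
    return "fast walking"

commuter = [(0, 0), (0.8, 0), (1.6, 0), (2.4, 0)]  # 1.6 m/s
print(classify_local_behavior(commuter))  # fast walking
```

In the system described, labels like these, accumulated over a week and clustered, are what let the robot target idle walkers rather than busy commuters.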

123 citations

Journal ArticleDOI
TL;DR: In this paper, a model of approach behavior with which a robot can initiate conversation with people who are walking is proposed, which includes predicting the walking behavior of people, choosing a target person, planning its approaching path and nonverbally indicating its intention to initiate a conversation.
Abstract: This paper proposes a model of approach behavior with which a robot can initiate conversation with people who are walking. We developed the model by learning from the failures in a simplistic approach behavior used in a real shopping mall. Sometimes people were unaware of the robot's presence, even when it spoke to them. Sometimes, people were not sure whether the robot was really trying to start a conversation, and they did not start talking with it even though they displayed interest. To prevent such failures, our model includes the following functions: predicting the walking behavior of people, choosing a target person, planning its approaching path, and nonverbally indicating its intention to initiate a conversation. The approach model was implemented and used in a real shopping mall. The field trial demonstrated that our model significantly improves the robot's performance in initiating conversations.

104 citations

Proceedings ArticleDOI
01 Aug 2016
TL;DR: An overview of the requirements and design of the platform is presented, along with the development process of an interactive application, a report on ERICA's first autonomous public demonstration, and a discussion of the main technical challenges that remain to be addressed in order to create humanlike, autonomous androids.
Abstract: The development of an android with convincingly lifelike appearance and behavior has been a long-standing goal in robotics, and recent years have seen great progress in many of the technologies needed to create such androids. However, it is necessary to actually integrate these technologies into a robot system in order to assess the progress that has been made towards this goal and to identify important areas for future work. To this end, we are developing ERICA, an autonomous android system capable of conversational interaction, featuring advanced sensing and speech synthesis technologies, and arguably the most humanlike android built to date. Although the project is ongoing, initial development of the basic android platform has been completed. In this paper we present an overview of the requirements and design of the platform, describe the development process of an interactive application, report on ERICA's first autonomous public demonstration, and discuss the main technical challenges that remain to be addressed in order to create humanlike, autonomous androids.

96 citations


Cited by
Journal ArticleDOI
TL;DR: Polanyi is at pains to expunge what he believes to be the false notion contained in the contemporary view of science which treats it as an objective and basically impersonal discipline.
Abstract: The Study of Man. By Michael Polanyi. Price, $1.75. Pp. 102. University of Chicago Press, 5750 Ellis Ave., Chicago 37, 1959. One subtitle to Polanyi's challenging and fascinating book might be The Evolution and Natural History of Error , for Polanyi is at pains to expunge what he believes to be the false notion contained in the contemporary view of science which treats it as an objective and basically impersonal discipline. According to Polanyi not only is this a radical and important error, but it is harmful to the objectives of science itself. Another subtitle could be Farewell to Detachment , for in place of cold objectivity he develops the idea that science is necessarily intensely personal. It is a human endeavor and human point of view which cannot be divorced from nor uprooted out of the human matrix from which it arises and in which it works. For a good while

2,248 citations

Journal ArticleDOI
TL;DR: The MCRpd interaction model for tangible interfaces as discussed by the authors is a conceptual framework for tangible user interfaces, which relates the role of physical and digital representations, physical control, and underlying digital models.
Abstract: We present steps toward a conceptual framework for tangible user interfaces. We introduce the MCRpd interaction model for tangible interfaces, which relates the role of physical and digital representations, physical control, and underlying digital models. This model serves as a foundation for identifying and discussing several key characteristics of tangible user interfaces. We identify a number of systems exhibiting these characteristics, and situate these within 12 application domains. Finally, we discuss tangible interfaces in the context of related research themes, both within and outside of the human-computer interaction domain.

1,200 citations

Proceedings ArticleDOI
01 May 1999
TL;DR: A computer augmented environment that allows users to smoothly interchange digital information among their portable computers, table and wall displays, and other physical objects, and provides a mechanism for attaching digital data to physical objects, such as a videotape or document folder, to link physical and digital spaces.
Abstract: This paper describes our design and implementation of a computer augmented environment that allows users to smoothly interchange digital information among their portable computers, table and wall displays, and other physical objects. Supported by a camera-based object recognition system, users can easily integrate their portable computers with the pre-installed ones in the environment. Users can use displays projected on tables and walls as a spatially continuous extension of their portable computers. Using an interaction technique called hyperdragging, users can transfer information from one computer to another, by only knowing the physical relationship between them. We also provide a mechanism for attaching digital data to physical objects, such as a videotape or a document folder, to link physical and digital spaces.

819 citations

Proceedings ArticleDOI
11 Nov 2001
TL;DR: Evaluation shows that everyday programmers using phidgets can rapidly develop physical interfaces and is shown to allow the programmer to develop, debug and test a physical interface even when no physical device is present.
Abstract: Physical widgets or phidgets are to physical user interfaces what widgets are to graphical user interfaces. Similar to widgets, phidgets abstract and package input and output devices: they hide implementation and construction details, they expose functionality through a well-defined API, and they have an (optional) on-screen interactive interface for displaying and controlling device state. Unlike widgets, phidgets also require: a connection manager to track how devices appear on-line; a way to link a software phidget with its physical counterpart; and a simulation mode to allow the programmer to develop, debug and test a physical interface even when no physical device is present. Our evaluation shows that everyday programmers using phidgets can rapidly develop physical interfaces.
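The phidget requirements listed above, a well-defined API over a device plus a simulation mode so code can be developed and tested with no hardware present, can be sketched as follows. The class and method names are illustrative, not the real Phidgets API.

```python
class ServoPhidget:
    """Abstracts a servo device: the same API is used whether a physical
    device or a simulation is behind it."""

    def __init__(self, simulated=False):
        self.simulated = simulated
        self.attached = simulated  # a simulated device is always "attached"
        self.position = 0.0

    def attach(self):
        # In a full system, a connection manager would call this when the
        # physical device appears online.
        self.attached = True

    def set_position(self, degrees):
        if not self.attached:
            raise RuntimeError("no device attached")
        self.position = degrees
        if self.simulated:
            print(f"[sim] servo -> {degrees} deg")
        # else: write the command to the physical device here


servo = ServoPhidget(simulated=True)
servo.set_position(90.0)  # developed and debugged with no hardware present
```

The simulation flag is the point: application code written against this API runs unchanged once the connection manager attaches a real device.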

686 citations

Patent
24 Jun 2005
TL;DR: A video camera within the interactive display table responds to infrared (IR) light reflected from objects to detect connected components, which correspond to portions of the object(s) that are either in contact with, or proximate to, the display surface.
Abstract: An interactive display table has a display surface for displaying images and upon or adjacent to which various objects, including a user's hand(s) and finger(s), can be detected. A video camera within the interactive display table responds to infrared (IR) light reflected from the objects to detect any connected components. Connected components correspond to portions of the object(s) that are either in contact with, or proximate to, the display surface. Using these connected components, the interactive display table senses and infers natural hand or finger positions, or movement of an object, to detect gestures. Specific gestures are used to execute applications, carry out functions in an application, create a virtual object, or perform other interactions, each of which is associated with a different gesture. A gesture can be a static pose, or a more complex configuration, and/or movement made with one or both hands or other objects.
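The connected-component step described above can be sketched as standard 4-connected labeling over a thresholded IR image: pixels above the reflection threshold are grouped into components, each of which would correspond to a hand, finger, or object on or near the surface. This is a generic sketch of the technique, not the patent's implementation; the grid values are illustrative.

```python
from collections import deque

def connected_components(mask):
    """Return a list of 4-connected components, each a list of (row, col)
    pixels, from a binary mask of thresholded IR reflections."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    comps = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:  # breadth-first flood fill
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                comps.append(comp)
    return comps

ir_mask = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]
print(len(connected_components(ir_mask)))  # 2 components
```

Gesture recognition then works on per-component features (area, centroid, motion over frames) rather than raw pixels.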

647 citations