scispace - formally typeset
Author

Vaibhav V. Unhelkar

Bio: Vaibhav V. Unhelkar is an academic researcher from Rice University. The author has contributed to research in topics: Human–robot interaction & Mobile robot. The author has an h-index of 9 and has co-authored 21 publications receiving 407 citations. Previous affiliations of Vaibhav V. Unhelkar include Massachusetts Institute of Technology & Indian Institute of Technology Bombay.

Papers
Proceedings ArticleDOI
06 Mar 2017
TL;DR: It is found that trust of entirety evolves and eventually stabilizes as an operator repeatedly interacts with a technology, and that a higher level of automation transparency may mitigate the "cry wolf" effect.
Abstract: Existing research assessing human operators' trust in automation and robots has primarily examined trust as a steady-state variable, with little emphasis on the evolution of trust over time. With the goal of addressing this research gap, we present a study exploring the dynamic nature of trust. We defined trust of entirety as a measure that accounts for trust across a human's entire interactive experience with automation, and first identified alternatives to quantify it using real-time measurements of trust. Second, we provided a novel model that attempts to explain how trust of entirety evolves as a user interacts repeatedly with automation. Lastly, we investigated the effects of automation transparency on momentary changes of trust. Our results indicated that trust of entirety is better quantified by the average measure of "area under the trust curve" than the traditional post-experiment trust measure. In addition, we found that trust of entirety evolves and eventually stabilizes as an operator repeatedly interacts with a technology. Finally, we observed that a higher level of automation transparency may mitigate the "cry wolf" effect -- wherein human operators begin to reject an automated system due to repeated false alarms.
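The paper's "area under the trust curve" measure can be sketched numerically; the ratings, scale, and sampling below are illustrative values, not data from the study:

```python
import numpy as np

# Hypothetical real-time trust ratings (0-7 scale), sampled once per
# interaction round; values are made up for illustration.
times = np.array([0, 1, 2, 3, 4, 5], dtype=float)   # interaction rounds
trust = np.array([4.0, 2.5, 3.0, 3.5, 3.6, 3.6])    # momentary trust ratings

# "Trust of entirety" as the duration-normalized area under the trust
# curve (trapezoidal rule), versus the traditional single
# post-experiment measurement.
auc_trust = np.sum((trust[1:] + trust[:-1]) / 2 * np.diff(times)) \
            / (times[-1] - times[0])
post_trust = trust[-1]

print(f"area-under-curve trust: {auc_trust:.2f}")   # reflects the early dip
print(f"post-experiment trust:  {post_trust:.2f}")  # sees only the endpoint
```

Note how the averaged measure is pulled down by the early loss of trust, which a single post-experiment rating cannot capture.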

134 citations

Journal ArticleDOI
07 Mar 2018
TL;DR: This work presents a human-aware robotic system with single-axis mobility that incorporates both predictions of human motion and planning in time to execute efficient and safe motions during automotive final assembly.
Abstract: Introducing mobile robots into the collaborative assembly process poses unique challenges for ensuring efficient and safe human–robot interaction. Current human–robot work cells require the robot to cease operating completely whenever a human enters a shared region of the given cell, and the robots do not explicitly model or adapt to the behavior of the human. In this work, we present a human-aware robotic system with single-axis mobility that incorporates both predictions of human motion and planning in time to execute efficient and safe motions during automotive final assembly. We evaluate our system in simulation against three alternative methods, including a baseline approach emulating the behavior of standard safety systems in factories today. We also assess the system within a factory test environment. Through both live demonstration and results from simulated experiments, we show that our approach produces statistically significant improvements in quantitative measures of safety and fluency of interaction.

117 citations

Proceedings ArticleDOI
03 Mar 2014
TL;DR: This paper compares the performance of a mobile robotic assistant to that of a human assistant to gain a better understanding of the factors that impact its effectiveness, and discusses how results from the experiment inform the design of a more effective assistant.
Abstract: There is an emerging desire across manufacturing industries to deploy robots that support people in their manual work, rather than replace human workers. This paper explores one such opportunity, which is to field a mobile robotic assistant that travels between part carts and the automotive final assembly line, delivering tools and materials to the human workers. We compare the performance of a mobile robotic assistant to that of a human assistant to gain a better understanding of the factors that impact its effectiveness. Statistically significant differences emerge based on type of assistant, human or robot. Interaction times and idle times are statistically significantly higher for the robotic assistant than the human assistant. We report additional differences in participants' subjective responses regarding team fluency, situational awareness, comfort and safety. Finally, we discuss how results from the experiment inform the design of a more effective assistant.

Categories and Subject Descriptors: H.1.2 [Models and Principles]: User/Machine Systems; I.2.9 [Artificial Intelligence]: Robotics. General Terms: Experimentation, Performance, Human Factors.

82 citations

Proceedings ArticleDOI
26 May 2015
TL;DR: A study is presented that supports the existence of statistically significant biomechanical turn indicators in human walking motion and demonstrates the effectiveness of these indicators as features for predicting human motion trajectories, in closed loop with an existing algorithm for motion planning within dynamic environments.
Abstract: Mobile, interactive robots that operate in human-centric environments need the capability to safely and efficiently navigate around humans. This requires the ability to sense and predict human motion trajectories and to plan around them. In this paper, we present a study that supports the existence of statistically significant biomechanical turn indicators of human walking motions. Further, we demonstrate the effectiveness of these turn indicators as features in the prediction of human motion trajectories. Human motion capture data is collected with predefined goals to train and test a prediction algorithm. Use of anticipatory features results in improved performance of the prediction algorithm. Lastly, we demonstrate the closed-loop performance of the prediction algorithm using an existing algorithm for motion planning within dynamic environments. The anticipatory indicators of human walking motion can be used with different prediction and/or planning algorithms for robotics; the chosen planning and prediction algorithm demonstrates one such implementation for human-robot co-navigation.
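The idea of an anticipatory feature can be illustrated with a toy sketch: a signal tied to the walker's body orientation changes before the position path itself curves. The heading signal, feature choice, and threshold below are assumptions for demonstration, not the paper's trained predictor:

```python
import numpy as np

# Synthetic heading profile: the walker goes straight for 2 s, then
# begins a left turn. (All values are illustrative.)
dt = 0.1  # sampling period, seconds
heading = np.concatenate([
    np.zeros(20),                    # walking straight
    np.linspace(0.0, np.pi / 2, 20)  # beginning a left turn
])

# Heading rate (rad/s) as a hypothetical anticipatory turn indicator.
heading_rate = np.gradient(heading, dt)

TURN_THRESHOLD = 0.3  # rad/s, hypothetical detection threshold
turn_flags = np.abs(heading_rate) > TURN_THRESHOLD
first_turn_idx = int(np.argmax(turn_flags))
print(f"turn predicted at t = {first_turn_idx * dt:.1f} s")
```

A real predictor would fuse several such biomechanical cues and feed them to a learned model, but the principle is the same: the feature fires at the onset of the turn rather than after the trajectory has visibly changed.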

81 citations

Proceedings ArticleDOI
09 Mar 2020
TL;DR: This work presents a computational framework that decides if, when, and what to communicate during human-robot collaboration, and implements CommPlan for a shared workspace task, in which the robot has multiple communication options and needs to reason within a short time.
Abstract: Communication is critical to collaboration; however, too much of it can degrade performance. Motivated by the need for effective use of a robot's communication modalities, in this work we present a computational framework that decides if, when, and what to communicate during human-robot collaboration. The framework, titled CommPlan, consists of a model specification process and an execution-time POMDP planner. To address the challenge of collecting interaction data, the model specification process is hybrid: part of the model is learned from data, while the remainder is manually specified. Given the model, the robot's decision-making is performed computationally during interaction and under partial observability of the human's mental state. We implement CommPlan for a shared workspace task, in which the robot has multiple communication options and needs to reason within a short time. Through experiments with human participants, we confirm that CommPlan results in the effective use of communication capabilities and improves human-robot collaboration.

ACM Reference Format: Vaibhav V. Unhelkar, Shen Li, and Julie A. Shah. 2020. Decision-Making for Bidirectional Communication in Sequential Human-Robot Collaborative Tasks. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI '20), March 23–26, 2020, Cambridge, United Kingdom. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3319502.3374779
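The flavor of the if/when/what decision can be caricatured as a one-step expected-utility choice under a belief over the human's unobserved intent. This is a toy sketch, not the CommPlan POMDP itself; the action names, belief, payoffs, and costs are all hypothetical:

```python
# Belief over the human's (unobserved) intent at the current step.
beliefs = {"intends_task_A": 0.7, "intends_task_B": 0.3}

# benefit[action][intent]: assumed payoff of each communication choice.
benefit = {
    "say_nothing":       {"intends_task_A": 0.0, "intends_task_B": 0.0},
    "announce_plan":     {"intends_task_A": 0.2, "intends_task_B": 0.9},
    "ask_clarification": {"intends_task_A": 0.1, "intends_task_B": 0.7},
}
# Interruption cost of each communication action (silence is free).
cost = {"say_nothing": 0.0, "announce_plan": 0.15, "ask_clarification": 0.25}

def expected_net(action):
    """Expected benefit under the belief, minus the communication cost."""
    return sum(beliefs[s] * benefit[action][s] for s in beliefs) - cost[action]

best = max(benefit, key=expected_net)
print(best, round(expected_net(best), 3))
```

Because silence has zero cost, the robot communicates only when the expected benefit of a message outweighs its interruption cost; the full framework extends this reasoning over sequential interaction rather than a single step.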

45 citations


Cited by
Journal ArticleDOI
TL;DR: An extensive review of human–robot collaboration in industrial environments is provided, with specific focus on issues related to physical and cognitive interaction, and the commercially available solutions are presented.

632 citations

Journal ArticleDOI
TL;DR: In this article, the authors discuss how, as the number of intelligent autonomous systems in human environments grows, the ability of such systems to perceive, understand, and anticipate human behavior becomes increasingly important.
Abstract: With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand, and anticipate human behavior becomes increasingly important. Spec...

547 citations

Book
22 May 2017
TL;DR: A Survey of Methods for Safe Human-Robot Interaction organizes and summarizes the large body of research related to facilitation of safe human-robot interaction: it groups existing methods into subcategories, characterizes relationships between the strategies, and identifies potential gaps in the existing knowledge that warrant further research.
Abstract: Ensuring human safety is one of the most important considerations within the field of human-robot interaction (HRI). This does not simply involve preventing collisions between humans and robots operating within a shared space; we must consider all possible ways in which harm could come to a person, ranging from physical contact to adverse psychological effects resulting from unpleasant or dangerous interaction. A Survey of Methods for Safe Human-Robot Interaction organizes and summarizes the large body of research related to facilitation of safe human-robot interaction. It describes the strategies and methods that have been developed thus far, organizes them into subcategories, characterizes relationships between the strategies, and identifies potential gaps in the existing knowledge that warrant further research. By creating an organized categorization of the field, A Survey of Methods for Safe Human-Robot Interaction is intended to support future research and the development of new technologies for safe HRI, as well as facilitate the use of these techniques by researchers within the HRI community.

287 citations

Journal ArticleDOI
Guy Hoffman
TL;DR: In this paper, the authors develop a number of metrics to evaluate the level of fluency in human–robot shared-location teamwork, provide an analytical model for four objective metrics, and assess their dynamics in a turn-taking framework.
Abstract: Collaborative fluency is the coordinated meshing of joint activities between members of a well-synchronized team. In recent years, researchers in human–robot collaboration have been developing robots to work alongside humans aiming not only at task efficiency, but also at human–robot fluency. As part of this effort, we have developed a number of metrics to evaluate the level of fluency in human–robot shared-location teamwork. While these metrics are being used in existing research, there has been no systematic discussion on how to measure fluency and how the commonly used metrics perform and compare. In this paper, we codify subjective and objective human–robot fluency metrics, provide an analytical model for four objective metrics, and assess their dynamics in a turn-taking framework. We also report on a user study linking objective and subjective fluency metrics and survey recent use of these metrics in the literature.
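Two objective fluency metrics of the kind this survey codifies, the percentage of concurrent activity and the human's idle time, can be sketched from activity intervals; the timeline below is made up for illustration:

```python
# Illustrative task timeline (seconds); intervals where each agent is active.
total = 10.0
human_busy = [(0.0, 3.0), (5.0, 9.0)]   # intervals where the human acts
robot_busy = [(2.0, 6.0), (8.0, 10.0)]  # intervals where the robot acts

def overlap(a, b):
    """Length of the overlap between two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

concurrent = sum(overlap(h, r) for h in human_busy for r in robot_busy)
human_active = sum(end - start for start, end in human_busy)

c_act = concurrent / total           # fraction of time both act at once
h_idle = 1.0 - human_active / total  # fraction of time the human is idle
print(f"concurrent activity: {c_act:.0%}, human idle time: {h_idle:.0%}")
```

Higher concurrent activity and lower human idle time are generally read as more fluent collaboration, which is what links these objective measures to the subjective fluency ratings discussed in the paper.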

286 citations

Journal ArticleDOI
TL;DR: An overview of symbiotic human-robot collaborative assembly is provided and future research directions for voice processing, gesture recognition, haptic interaction, and brainwave perception are highlighted.

273 citations