
Showing papers in "Frontiers in Robotics and AI in 2022"


Journal ArticleDOI
TL;DR: In this article, the challenges in robotic food handling are introduced, advances in robotic end-effectors, food recognition, and fundamental information of food products are highlighted, and future research directions and opportunities are discussed based on an analysis of the challenges and state-of-the-art developments.
Abstract: Despite developments in robotics and automation technologies, several challenges need to be addressed to fulfill the high demand for automating various manufacturing processes in the food industry. In our opinion, these challenges can be classified as: the development of robotic end-effectors to cope with large variations of food products with high practicality and low cost, recognition of food products and materials in 3D scenarios, and a better understanding of the fundamental information of food products, including food categorization and physical properties, from the viewpoint of robotic handling. In this review, we first introduce the challenges in robotic food handling and then highlight the advances in robotic end-effectors, food recognition, and fundamental information of food products related to robotic food handling. Finally, future research directions and opportunities are discussed based on an analysis of the challenges and state-of-the-art developments.

33 citations


Journal ArticleDOI
TL;DR: In this paper, the evaluation methods, scenarios, datasets, and metrics commonly used in previous socially-aware navigation research are reviewed, the limitations of existing evaluation protocols are discussed, and research opportunities for advancing socially-aware robot navigation are highlighted.
Abstract: As mobile robots are increasingly introduced into our daily lives, it grows ever more imperative that these robots navigate with and among people in a safe and socially acceptable manner, particularly in shared spaces. While research on enabling socially-aware robot navigation has expanded over the years, there are no agreed-upon evaluation protocols or benchmarks to allow for the systematic development and evaluation of socially-aware navigation. As an effort to aid more productive development and progress comparisons, in this paper we review the evaluation methods, scenarios, datasets, and metrics commonly used in previous socially-aware navigation research, discuss the limitations of existing evaluation protocols, and highlight research opportunities for advancing socially-aware robot navigation.

29 citations


Journal ArticleDOI
TL;DR: This review paper presents the state-of-the-art in aerial grasping and perching mechanisms and provides a comprehensive comparison of their characteristics, and analyzes these mechanisms by comparing the advantages and disadvantages of the proposed technologies.
Abstract: Over the last decade, there has been an increased interest in developing aerial robotic platforms that exhibit grasping and perching capabilities, not only within the research community but also in companies across different industry sectors. Aerial robots range from standard multicopter vehicles/drones to autonomous helicopters and fixed-wing or hybrid devices. Such devices rely on a range of different solutions for achieving grasping and perching. These solutions can be classified as: 1) simple gripper systems, 2) arm-gripper systems, 3) tethered gripping mechanisms, 4) reconfigurable robot frames, 5) adhesion solutions, and 6) embedment solutions. Grasping and perching are two crucial capabilities that allow aerial robots to interact with the environment and execute a plethora of complex tasks, facilitating new applications that range from autonomous package delivery and search and rescue to autonomous inspection of dangerous or remote environments. In this review paper, we present the state-of-the-art in aerial grasping and perching mechanisms and we provide a comprehensive comparison of their characteristics. Furthermore, we analyze these mechanisms by comparing the advantages and disadvantages of the proposed technologies and we summarize the significant achievements in these two research topics. Finally, we conclude the review by suggesting a series of potential future research directions that we believe are promising.

18 citations


Journal ArticleDOI
TL;DR: In this paper, a systematic literature review was performed to evaluate the most frequently addressed operator human factors states in shared space human-robot collaboration, the methods used to quantify these states, and the implications of the states on HRC.
Abstract: The degree of successful human-robot collaboration is dependent on the joint consideration of robot factors (RF) and human factors (HF). Depending on the state of the operator, a change in a robot factor, such as the behavior or level of autonomy, can be perceived differently and affect how the operator chooses to interact with and utilize the robot. This interaction can affect system performance and safety in dynamic ways. The theory of human factors in human-automation interaction has long been studied; however, the formal study of these HFs in shared space human-robot collaboration (HRC) and the potential interactive effects between covariate HFs (HF-HF) and HF-RF in shared space collaborative robotics require additional investigation. Furthermore, methodological applications to measure or manipulate these factors can provide insights into contextual effects and potential for improved measurement techniques. As such, a systematic literature review was performed to evaluate the most frequently addressed operator HF states in shared space HRC, the methods used to quantify these states, and the implications of the states on HRC. The three most frequently measured states are trust, cognitive workload, and anxiety, with subjective questionnaires the most common method to quantify operator states, except for fatigue, where electromyography is more common. Furthermore, the majority of included studies evaluate the effect of manipulating RFs on HFs, but few explain the effect of the HFs on system attributes or performance. For those that provided this information, HFs have been shown to impact system efficiency and response time, collaborative performance and quality of work, and operator utilization strategy.

17 citations


Journal ArticleDOI
TL;DR: This article aims to review the literature on MIS tactile sensing technologies in terms of working principles, design requirements, and specifications, and to highlight and discuss the promising potential of a few emerging technologies towards establishing low-cost, high-performance MIS force sensing.
Abstract: As opposed to open surgery procedures, minimally invasive surgery (MIS) utilizes small skin incisions to insert a camera and surgical instruments. MIS has numerous advantages such as reduced postoperative pain, shorter hospital stay, faster recovery time, and reduced learning curve for surgical trainees. MIS comprises surgical approaches, including laparoscopic surgery, endoscopic surgery, and robotic-assisted surgery. Despite the advantages that MIS provides to patients and surgeons, it remains limited by the lost sense of touch due to the indirect contact with tissues under operation, especially in robotic-assisted surgery. Surgeons, without haptic feedback, could unintentionally apply excessive forces that may cause tissue damage. Therefore, incorporating tactile sensation into MIS tools has become an interesting research topic. The design, fabrication, and integration of force sensors at different locations on surgical tools are currently under development by several companies and research groups. In this context, the electrical force sensing modality, including piezoelectric, resistive, and capacitive sensors, is the most conventionally considered approach to measure the grasping force, manipulation force, torque, and tissue compliance. For instance, piezoelectric sensors exhibit high sensitivity and accuracy, but the drawbacks of thermal sensitivity and the inability to detect static loads constrain their adoption in MIS tools. Optical-based tactile sensing is another conventional approach that facilitates electrically passive force sensing compatible with magnetic resonance imaging. Estimations of applied loadings are calculated from the induced changes in the intensity, wavelength, or phase of light transmitted through optical fibers. Nonetheless, new emerging technologies also show high potential to contribute to the field of smart surgical tools.
The recent development of flexible, highly sensitive tactile microfluidic-based sensors has become an emerging field in tactile sensing, which contributed to wearable electronics and smart-skin applications. Another emerging technology is imaging-based tactile sensing that achieved superior multi-axial force measurements by implementing image sensors with high pixel densities and frame rates to track visual changes on a sensing surface. This article aims to review the literature on MIS tactile sensing technologies in terms of working principles, design requirements, and specifications. Moreover, this work highlights and discusses the promising potential of a few emerging technologies towards establishing low-cost, high-performance MIS force sensing.

15 citations


Journal ArticleDOI
TL;DR: In this article, a new hierarchical framework is developed that uses the obtained environmental information to gradually solve navigation problems layer by layer; it consists of environmental mapping, path generation, CCPP, and dynamic obstacle avoidance.
Abstract: With the introduction of autonomy into the precision agriculture process, environmental exploration, disaster response, and other fields, one of the global demands is to navigate autonomous vehicles to completely cover entire unknown environments. In previous complete coverage path planning (CCPP) research, however, autonomous vehicles need to consider mapping, obstacle avoidance, and route planning simultaneously while operating in the workspace, which results in an extremely complicated and computationally expensive navigation system. In this study, a new framework is developed in a hierarchical manner, using the obtained environmental information to gradually solve navigation problems layer by layer; it consists of environmental mapping, path generation, CCPP, and dynamic obstacle avoidance. The first layer, based on satellite images, utilizes a deep learning method to generate the CCPP trajectory through the position of the autonomous vehicle. In the second layer, an obstacle fusion paradigm in the map is developed based on the unmanned aerial vehicle (UAV) onboard sensors. A nature-inspired algorithm is adopted for obstacle avoidance and CCPP re-joint. Equipped with onboard LIDAR equipment, autonomous vehicles in the third layer dynamically avoid moving obstacles. Simulated experiments validate the effectiveness and robustness of the proposed framework.
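The layered decomposition above separates global coverage planning from local obstacle handling. As a purely illustrative baseline (not the paper's deep-learning approach), a complete coverage path over a known grid map can be sketched with a simple boustrophedon (lawnmower) sweep; the grid and cell layout below are hypothetical:

```python
# Minimal boustrophedon (lawnmower) coverage sketch over a grid map.
# Free cells are True; obstacle cells are skipped. Purely illustrative --
# the reviewed framework generates trajectories with learned methods.

def coverage_path(grid):
    """Return a cell-visit order that sweeps each row, alternating direction."""
    path = []
    for r, row in enumerate(grid):
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        for c in cols:
            if row[c]:          # visit only free (non-obstacle) cells
                path.append((r, c))
    return path

grid = [
    [True, True, True],
    [True, False, True],   # one obstacle cell in the middle
    [True, True, True],
]
path = coverage_path(grid)
```

A real CCPP layer would additionally smooth the path and re-join it after avoidance maneuvers, as described in the abstract.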

15 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a framework for closing the loop between the design and robotic assembly of timber structures, and demonstrate an extended automation process that incorporates learning by demonstration to learn and execute a complex assembly of an interlocking wooden joint.
Abstract: The construction sector is investigating wood as a highly sustainable material for fabrication of architectural elements. Several researchers in the field of construction are currently designing novel timber structures as well as novel solutions for fabricating such structures, i.e., robot technologies which allow for automation of a domain dominated by skilled craftsmen. In this paper, we present a framework for closing the loop between the design and robotic assembly of timber structures. On one hand, we illustrate an extended automation process that incorporates learning by demonstration to learn and execute a complex assembly of an interlocking wooden joint. On the other hand, we describe a design case study that builds upon the specificity of this process to achieve new designs of construction elements, which could previously only be assembled by skilled craftsmen. The paper provides an overview of a process with different levels of focus, from the integration of a digital twin to timber joint design and the robotic assembly execution, to the development of a flexible robotic setup and novel assembly procedures for dealing with the complexity of the designed timber joints. We discuss synergistic results on both robotic and construction design innovation, with an outlook on future developments.

14 citations


Journal ArticleDOI
TL;DR: An electronic database search of published works from 2012 to mid-2021 that focus on human gait studies and apply machine learning techniques is performed to provide a single broad-based survey of the applications of machine learning technology in gait analysis and identify future areas of potential study and growth.
Abstract: We performed an electronic database search of published works from 2012 to mid-2021 that focus on human gait studies and apply machine learning techniques. We identified six key applications of machine learning using gait data: 1) Gait analysis, where analyzing techniques and certain biomechanical analysis factors are improved by utilizing artificial intelligence algorithms, 2) Health and Wellness, with applications in gait monitoring for abnormal gait detection, recognition of human activities, fall detection and sports performance, 3) Human Pose Tracking, using one-person or multi-person tracking and localization systems such as OpenPose, Simultaneous Localization and Mapping (SLAM), etc., 4) Gait-based biometrics, with applications in person identification, authentication, and re-identification as well as gender and age recognition, 5) “Smart gait” applications, ranging from smart socks, shoes, and other wearables to smart homes and smart retail stores that incorporate continuous monitoring and control systems, and 6) Animation, which reconstructs human motion utilizing gait data, simulation and machine learning techniques. Our goal is to provide a single broad-based survey of the applications of machine learning technology in gait analysis and identify future areas of potential study and growth. We discuss the machine learning techniques that have been used with a focus on the tasks they perform, the problems they attempt to solve, and the trade-offs they navigate.

14 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluate surface EMG and sonomyography features as inputs to Gaussian process regression models for the continuous estimation of hip, knee and ankle angle and velocity during level walking, stair ascent/descent and ramp ascent/descent ambulation.
Abstract: Research on robotic lower-limb assistive devices over the past decade has generated autonomous, multiple degree-of-freedom devices to augment human performance during a variety of scenarios. However, the increase in capabilities of these devices is met with an increase in the complexity of the overall control problem and requirement for an accurate and robust sensing modality for intent recognition. Due to its ability to precede changes in motion, surface electromyography (EMG) is widely studied as a peripheral sensing modality for capturing features of muscle activity as an input for control of powered assistive devices. In order to capture features that contribute to muscle contraction and joint motion beyond muscle activity of superficial muscles, researchers have introduced sonomyography, or real-time dynamic ultrasound imaging of skeletal muscle. However, the ability of these sonomyography features to continuously predict multiple lower-limb joint kinematics during widely varying ambulation tasks, and their potential as an input for powered multiple degree-of-freedom lower-limb assistive devices is unknown. The objective of this research is to evaluate surface EMG and sonomyography, as well as the fusion of features from both sensing modalities, as inputs to Gaussian process regression models for the continuous estimation of hip, knee and ankle angle and velocity during level walking, stair ascent/descent and ramp ascent/descent ambulation. Gaussian process regression is a Bayesian nonlinear regression model that has been introduced as an alternative to musculoskeletal model-based techniques. In this study, time-intensity features of sonomyography on both the anterior and posterior thigh along with time-domain features of surface EMG from eight muscles on the lower-limb were used to train and test subject-dependent and task-invariant Gaussian process regression models for the continuous estimation of hip, knee and ankle motion. 
Overall, anterior sonomyography sensor fusion with surface EMG significantly improved estimation of hip, knee and ankle motion for all ambulation tasks (level ground, stair and ramp ambulation) in comparison to surface EMG alone. Additionally, anterior sonomyography alone significantly reduced errors at the hip and knee for most tasks compared to surface EMG. These findings help inform the implementation and integration of volitional control strategies for robotic assistive technologies.
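To illustrate the regression model underlying this study, here is a minimal from-scratch Gaussian process regression sketch (NumPy only). The one-dimensional "sensor feature" and sine-shaped "joint angle" are synthetic stand-ins, not the paper's EMG/sonomyography data, and the kernel hyperparameters are arbitrary:

```python
# Minimal Gaussian process regression: posterior mean and uncertainty
# at test inputs, given noisy training pairs. Synthetic data only.
import numpy as np

def rbf(A, B, length_scale=0.5):
    """Squared-exponential kernel between two 1-D input arrays."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and std of a zero-mean GP at the test inputs."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_test, x_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    cov = rbf(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0, None))

x_train = np.linspace(0, 2 * np.pi, 40)        # stand-in sensor feature
y_train = np.sin(x_train)                      # stand-in joint angle (rad)
x_test = np.linspace(0.1, 2 * np.pi - 0.1, 25)
mean, std = gp_predict(x_train, y_train, x_test)
```

The predictive standard deviation is what distinguishes GP regression from plain least squares: it quantifies confidence in each joint-angle estimate, which is useful when gating a controller.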

12 citations


Journal ArticleDOI
TL;DR: This review summarizes and categorizes the methods used to control the level of exploration and exploitation carried out by multi-agent systems, and discusses metrics for assessing these levels as well as the overall performance of a system with a given cooperative control algorithm.
Abstract: Multi-agent systems and multi-robot systems have been recognized as unique solutions to complex dynamic tasks distributed in space. Their effectiveness in accomplishing these tasks rests upon the design of cooperative control strategies, which is acknowledged to be challenging and nontrivial. In particular, the effectiveness of these strategies has been shown to be related to the so-called exploration–exploitation dilemma: i.e., the existence of a distinct balance between exploitative actions and exploratory ones while the system is operating. Recent results point to the need for a dynamic exploration–exploitation balance to unlock high levels of flexibility, adaptivity, and swarm intelligence. This important point is especially apparent when dealing with fast-changing environments. Problems involving dynamic environments have been dealt with by different scientific communities using theory, simulations, as well as large-scale experiments. Such results spread across a range of disciplines can hinder one’s ability to understand and manage the intricacies of the exploration–exploitation challenge. In this review, we summarize and categorize the methods used to control the level of exploration and exploitation carried out by multi-agent systems. Lastly, we discuss the critical need for suitable metrics and benchmark problems to quantitatively assess and compare the levels of exploration and exploitation, as well as the overall performance of a system with a given cooperative control algorithm.
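As a toy illustration of a dynamic exploration–exploitation balance (a generic decaying epsilon-greedy bandit, not any specific method from the review), a single agent can gradually shift from exploratory to exploitative actions as its estimates improve:

```python
# Epsilon-greedy action selection with a decaying exploration rate.
# Arm reward means, decay schedule, and noise level are illustrative.
import random

def run_bandit(true_means, steps=2000, eps0=1.0, decay=0.995, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    eps = eps0
    for _ in range(steps):
        if rng.random() < eps:                       # explore: random arm
            arm = rng.randrange(len(true_means))
        else:                                        # exploit: best estimate
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = true_means[arm] + rng.gauss(0, 0.1)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        eps *= decay                                 # shift toward exploitation
    return estimates, counts

estimates, counts = run_bandit([0.2, 0.5, 0.9])
```

In multi-agent settings the same trade-off appears at the swarm level, where the balance may be tuned collectively rather than per agent, as the review discusses.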

12 citations


Journal ArticleDOI
TL;DR: Scientists defined statistical rates that summarize TP, FP, FN, and TN in one value, such as accuracy and the F1 score, the latter being the harmonic mean of positive predictive value and true positive rate.
Abstract: A binary classification is a computational procedure that labels data elements as members of one or another category. In machine learning and computational statistics, input data elements which are part of two classes are usually encoded as 0’s or –1’s (negatives) and 1’s (positives). During a binary classification, a method assigns each data element to one of the two categories, usually after a machine learning phase. A typical evaluation procedure then creates a 2 × 2 contingency table called confusion matrix, where the positive elements correctly predicted positive are called true positives (TP), the negative elements correctly predicted negative are called true negatives (TN), the positive elements wrongly labeled as negatives are called false negatives (FN), and the negative elements wrongly labeled as positives are called false positives (FP). Since it would be difficult to always analyze the four categories of the confusion matrix for each test, scientists defined statistical rates that summarize TP, FP, FN, and TN in one value. Accuracy (Eq. 1), for example, is a rate that indicates the ratio of correct positives and negatives (Zliobaite, 2015), while F1 score (Eq. 2), is the harmonic mean of positive predictive value and true positive rate (Lipton et al., 2014; Huang et al., 2015).
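The rates defined above can be computed directly from the four confusion-matrix counts; a small sketch following those definitions (the example counts are made up):

```python
# Accuracy and F1 from confusion-matrix counts, matching the abstract's
# definitions: F1 is the harmonic mean of precision (PPV) and recall (TPR).

def accuracy(tp, tn, fp, fn):
    """Fraction of all elements labeled correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def f1_score(tp, fp, fn):
    """Harmonic mean of positive predictive value and true positive rate."""
    precision = tp / (tp + fp)   # positive predictive value
    recall = tp / (tp + fn)      # true positive rate
    return 2 * precision * recall / (precision + recall)

# Hypothetical test set: 90 TP, 80 TN, 20 FP, 10 FN
acc = accuracy(90, 80, 20, 10)   # 170 / 200 = 0.85
f1 = f1_score(90, 20, 10)        # precision 9/11, recall 9/10 -> F1 = 6/7
```

Note that accuracy uses all four counts while F1 ignores true negatives, which is why the two can disagree sharply on imbalanced data.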

Journal ArticleDOI
TL;DR: In this article , a soft-pneumatic-actuator-driven exoskeleton for hip flexion rehabilitation is presented, where an array of soft pneumatic rotary actuators are used for torque generation.
Abstract: Leg motion is essential to everyday tasks, yet many face a daily struggle due to leg motion impairment. Traditional robotic solutions for lower limb rehabilitation have arisen, but they may bear some limitations due to their cost. Soft robotics utilizes soft, pliable materials which may afford a less costly robotic solution. This work presents a soft-pneumatic-actuator-driven exoskeleton for hip flexion rehabilitation. An array of soft pneumatic rotary actuators is used for torque generation. An analytical model of the actuators is validated and used to determine actuator parameters for the target application of hip flexion. The performance of the assembly is assessed, and it is found capable of the target torque for hip flexion, 19.8 Nm at 30°, requiring 86 kPa to reach that torque output. The assembly exhibits a maximum torque of 31 Nm under the conditions tested. The full exoskeleton assembly is then assessed with healthy human subjects as they perform a set of lower limb motions. For one motion, the Leg Raise, a muscle signal reduction of 43.5% is observed during device assistance, as compared to not wearing the device. This reduction in muscle effort indicates that the device is effective in providing hip flexion assistance and suggests that pneumatic-rotary-actuator-driven exoskeletons are a viable solution to realize more accessible options for those who suffer from lower limb immobility.

Journal ArticleDOI
TL;DR: In this paper, the authors present a short overview of the replicability crisis in behavioral sciences and its causes and propose some statistical, methodological and social reforms to improve the stability of future human-robot interaction research.
Abstract: There is a confidence crisis in many scientific disciplines, in particular disciplines researching human behavior, as many effects of original experiments have not been replicated successfully in large-scale replication studies. While human-robot interaction (HRI) is an interdisciplinary research field, the study of human behavior, cognition and emotion in HRI also plays a vital part. Are HRI user studies facing the same problems as other fields and, if so, what can be done to overcome them? In this article, we first give a short overview of the replicability crisis in behavioral sciences and its causes. In a second step, we estimate the replicability of HRI user studies mainly 1) by structural comparison of HRI research processes and practices with those of other disciplines with replicability issues, 2) by systematically reviewing meta-analyses of HRI user studies to identify parameters that are known to affect replicability, and 3) by summarizing first replication studies in HRI as direct evidence. Our findings suggest that HRI user studies often exhibit the same problems that caused the replicability crisis in many behavioral sciences, such as small sample sizes, lack of theory, or missing information in reported data. In order to improve the stability of future HRI research, we propose some statistical, methodological and social reforms. This article aims to provide a basis for further discussion and a potential outline for improvements in the field.

Journal ArticleDOI
TL;DR: In this article, the authors provide a comprehensive review of sim-to-real research for robotics, focusing on a technique named "domain randomization," which is a method for learning from randomized simulations.
Abstract: The rise of deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data. Unfortunately, it is prohibitively expensive to generate such data sets on a physical platform. Therefore, state-of-the-art approaches learn in simulation where data generation is fast as well as inexpensive and subsequently transfer the knowledge to the real robot (sim-to-real). Despite becoming increasingly realistic, all simulators are by construction based on models, hence inevitably imperfect. This raises the question of how simulators can be modified to facilitate learning robot control policies and overcome the mismatch between simulation and reality, often called the “reality gap.” We provide a comprehensive review of sim-to-real research for robotics, focusing on a technique named “domain randomization” which is a method for learning from randomized simulations.
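The core idea of domain randomization is to sample a new simulator configuration for each training episode, so the learned policy cannot overfit to any single imperfect model. A minimal sketch, with hypothetical parameter names and ranges:

```python
# Domain randomization sketch: draw fresh simulator physics parameters
# per training episode. Parameter names and ranges are illustrative.
import random

PARAM_RANGES = {
    "mass": (0.8, 1.2),        # kg, around a nominal value of 1.0
    "friction": (0.5, 1.5),    # surface friction coefficient
    "motor_gain": (0.9, 1.1),  # actuator gain multiplier
}

def sample_sim_params(rng):
    """Draw one randomized simulator configuration (uniform per parameter)."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

rng = random.Random(42)
episodes = [sample_sim_params(rng) for _ in range(100)]  # one config per episode
```

A policy trained across such a distribution of simulators tends to treat the real robot as just another sample from the distribution, which is the intuition behind closing the "reality gap."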

Journal ArticleDOI
TL;DR: In this article, a 3D printed modular gripper with highly conformal soft fingers that are composed of positive pressure soft pneumatic actuators along with a mechanical metamaterial was developed.
Abstract: A single universal robotic gripper with the capacity to fulfill a wide variety of gripping and grasping tasks has always been desirable. A three-dimensional (3D) printed modular soft gripper with highly conformal soft fingers that are composed of positive pressure soft pneumatic actuators along with a mechanical metamaterial was developed. The fingers of the soft gripper, along with the mechanical metamaterial, which integrates a soft auxetic structure and compliant ribs, were 3D printed in a single step, without requiring support material and postprocessing, using a low-cost and open-source fused deposition modeling (FDM) 3D printer that employs a commercially available thermoplastic polyurethane (TPU). The soft fingers of the gripper were optimized using finite element modeling (FEM). The FE simulations accurately predicted the behavior and performance of the fingers in terms of deformation and tip force. Also, FEM was used to predict the contact behavior of the mechanical metamaterial to prove that it substantially decreases the contact pressure by increasing the contact area between the soft fingers and the grasped objects, thus proving its effectiveness in enhancing the grasping performance of the gripper. The contact pressure can be decreased by up to 8.5 times with the implementation of the mechanical metamaterial. The configuration of the highly conformal gripper can be easily modulated by changing the number of fingers attached to its base to tailor it for specific manipulation tasks.
Two-dimensional (2D) and 3D grasping experiments were conducted to assess the grasping performance of the soft modular gripper and to show that the inclusion of the metamaterial increases its conformability and reduces the out-of-plane deformations of the soft monolithic fingers upon grasping different objects; consequently, the gripper, in two-, three-, and four-finger configurations, successfully grasped a wide variety of objects.

Journal ArticleDOI
TL;DR: In this paper, a scientometric review of the progressively synthesized network derived from 10,504 bibliographic records using a topic search on soft robotics from 2010 to 2021 based on the Web of Science (WoS) core database is conducted.
Abstract: Within the last decade, soft robotics has attracted increasing attention from both academia and industry. Although multiple literature reviews of the whole soft robotics field have been conducted, there still appears to be a lack of systematic investigation of the intellectual structure and evolution of this field considering the increasing amount of publications. This paper conducts a scientometric review of the progressively synthesized network derived from 10,504 bibliographic records using a topic search on soft robotics from 2010 to 2021 based on the Web of Science (WoS) core database. The results are presented from both the general data analysis of included papers (e.g., relevant journals, citation, h-index, year, institution, country, disciplines) and the specific data analysis corresponding to main disciplines and topics, and more importantly, emerging trends. CiteSpace, a data visualization software which can construct co-citation network maps and provide citation bursts, is used to explore the intellectual structures and emerging trends of the soft robotics field. In addition, this paper offers a demonstration of an effective analytical method for evaluating enormous publication citation and co-citation data. Findings of this review can be used as a reference for future research in soft robotics and relevant topics.

Journal ArticleDOI
TL;DR: In this paper, a 3D model of the old wage hall of the Zeche “Bonifacius” (Essen, Germany) with its simple building structure was generated using microdrone data.
Abstract: Post-industrial areas in Europe, such as the Rhine-Ruhr Metropolitan region in Germany, include cultural heritage sites fostering local and regional identities with the industrial past. Today, these landmarks are popular places of interest for visitors. In addition to portable camera devices, low-budget ultra-lightweight unmanned aerial vehicles, such as micro quadcopter drones, are on their way to being established as mass photography equipment. This low-cost hardware is not only useful for recreational usage but also supports individualized remote sensing with optical images and facilitates the acquisition of 3D point clouds of the targeted object(s). Both data sets are valuable and accurate geospatial data resources for further processing of textured 3D models. To experience these 3D models in a timely way, these 3D visualizations can directly be imported into game engines. They can be extended with modern interaction techniques and additional (semantic) information. The visualization of the data can be explored in immersive virtual environments, which allows, for instance, urban planners to use low-cost microdrones to 3D map the human impact on the environment and preserve this status in a 3D model that can be analyzed and explored in following steps. A case example of the old wage hall of the Zeche “Bonifacius” (Essen, Germany) with its simple building structure showed that it is possible to generate a detailed and accurate 3D model based on the microdrone data. The point cloud on which the 3D model of the old wage hall was based was in places more accurate than the point clouds derived from airborne laser scanning and offered by public agencies as open data. On average, the distance between the point clouds was 0.7 m, while the average distance between the airborne laser scanning point cloud and the 3D model was −0.02 m.
Matching high-quality textures of the building facades brings in a new aspect of 3D data quality which can be adopted when creating immersive virtual environments using the Unity engine. The example of the wage hall makes it clear that the use of low-cost drones and the subsequent data processing can result in valuable sources of point clouds and textured 3D models.

Journal ArticleDOI
TL;DR: The engineering feasibility and effectiveness of the proposed cable-driven robot in combination with the proposed BiEval software as a valuable tool to augment the conventional physiotherapy protocols and for providing reliable measurements of the patient’s rehabilitation performance and progress are demonstrated.
Abstract: Cable-driven robots can be an ideal fit for performing post-stroke rehabilitation due to their specific features. For example, they have small and lightweight moving parts and a relatively large workspace. They also allow safe human-robot interactions and can be easily adapted to different patients and training protocols. However, the existing cable-driven robots are mostly unilateral devices that allow only the rehabilitation of the most affected limb. This leaves unaddressed the rehabilitation of bimanual activities, which are predominant within the common Activities of Daily Living (ADL). Serious games can be integrated with cable-driven robots to further enhance their features by providing an interactive experience and generating a high level of engagement in patients, turning monotonous and repetitive therapy exercises into entertaining tasks. Additionally, serious game interfaces can collect detailed quantitative treatment information such as exercise time, velocities, and forces, which can be very useful for monitoring a patient’s progress and adjusting the treatment protocols. Given the above-mentioned strong advantages of cable-driven robots, bimanual rehabilitation, and serious games, this paper proposes and discusses a combination of them, in particular for performing bilateral/bimanual rehabilitation tasks. The main design characteristics are analyzed for implementing the design of both the hardware and software components. The hardware design consists of a specifically developed cable-driven robot. The software design consists of a specifically developed serious game for performing bimanual rehabilitation exercises. The developed software also includes BiEval, a dedicated tool that allows the effects of the rehabilitation therapy to be quantitatively measured and assessed. 
An experimental validation is reported with 15 healthy subjects, and an RCT (Randomized Controlled Trial) has been performed with 10 post-stroke patients at the Physiotherapy Clinic of the Federal University of Uberlândia (Minas Gerais, Brazil). The RCT results demonstrate the engineering feasibility and effectiveness of the proposed cable-driven robot in combination with the proposed BiEval software as a valuable tool to augment conventional physiotherapy protocols and to provide reliable measurements of the patient’s rehabilitation performance and progress. The clinical trial was approved by the Research Ethics Committee of the UFU (Brazil) under CAAE N° 00914818.5.0000.5152 via Plataforma Brasil (plataformabrasil@saude.gov.br).

Journal ArticleDOI
TL;DR: Describes the process used to verify a pre-existing autonomous grasping system intended for active debris removal in space, and how the modularity of this particular autonomous system simplified the usually complex task of verifying it post-development.
Abstract: Active debris removal in space has become a necessary activity to maintain and facilitate orbital operations. Current approaches tend to adopt autonomous robotic systems which are often furnished with a robotic arm to safely capture debris by identifying a suitable grasping point. These systems are controlled by mission-critical software, where a software failure can lead to mission failure which is difficult to recover from since the robotic systems are not easily accessible to humans. Therefore, verifying that these autonomous robotic systems function correctly is crucial. Formal verification methods enable us to analyse the software that is controlling these systems and to provide a proof of correctness that the software obeys its requirements. However, robotic systems tend not to be developed with verification in mind from the outset, which can often complicate the verification of the final algorithms and systems. In this paper, we describe the process that we used to verify a pre-existing system for autonomous grasping which is to be used for active debris removal in space. In particular, we formalise the requirements for this system using the Formal Requirements Elicitation Tool (FRET). We formally model specific software components of the system and formally verify that they adhere to their corresponding requirements using the Dafny program verifier. From the original FRET requirements, we synthesise runtime monitors using ROSMonitoring and show how these can provide runtime assurances for the system. We also describe our experimentation and analysis of the testbed and the associated simulation. We provide a detailed discussion of our approach and describe how the modularity of this particular autonomous system simplified the usually complex task of verifying a system post-development.
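The abstract mentions synthesising runtime monitors from FRET requirements via ROSMonitoring. As a hedged illustration only — this is not ROSMonitoring's actual API, and the requirement is hypothetical — a minimal event-stream monitor for a rule such as "the gripper may only close after a grasp point has been confirmed" could look like:

```python
class GraspMonitor:
    """Toy runtime monitor: tracks whether 'close_gripper' is ever
    issued before a 'grasp_point_confirmed' event has been seen.
    (Event names are illustrative, not from the verified system.)"""

    def __init__(self):
        self.confirmed = False
        self.violations = 0

    def observe(self, event):
        """Feed one event from the trace; return True while the
        trace still satisfies the requirement."""
        if event == "grasp_point_confirmed":
            self.confirmed = True
        elif event == "close_gripper" and not self.confirmed:
            self.violations += 1
        return self.violations == 0
```

In a real deployment such a monitor subscribes to the relevant ROS topics and raises a verdict at runtime, complementing the static Dafny proofs described above.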

Journal ArticleDOI
TL;DR: This review summarizes and categorizes the methods used to control the level of exploration and exploitation carried out by a multi-agent system, as well as the overall performance of a system with a given cooperative control algorithm.
Abstract: Multi-agent systems and multi-robot systems have been recognized as unique solutions to complex dynamic tasks distributed in space. Their effectiveness in accomplishing these tasks rests upon the design of cooperative control strategies, which is acknowledged to be challenging and nontrivial. In particular, the effectiveness of these strategies has been shown to be related to the so-called exploration-exploitation dilemma: i.e., the existence of a distinct balance between exploitative actions and exploratory ones while the system is operating. Recent results point to the need for a dynamic exploration-exploitation balance to unlock high levels of flexibility, adaptivity, and swarm intelligence. This important point is especially apparent when dealing with fast-changing environments. Problems involving dynamic environments have been dealt with by different scientific communities using theory, simulations, as well as large-scale experiments. Such results spread across a range of disciplines can hinder one's ability to understand and manage the intricacies of the exploration-exploitation challenge. In this review, we summarize and categorize the methods used to control the level of exploration and exploitation carried out by a multi-agent system. Lastly, we discuss the critical need for suitable metrics and benchmark problems to quantitatively assess and compare the levels of exploration and exploitation, as well as the overall performance of a system with a given cooperative control algorithm.
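The review's central theme is dynamically adjusting the exploration-exploitation balance. As a generic sketch of the idea — not a method taken from the review — an agent could raise its exploration rate when recent reward drops below a running baseline, a crude signal that the environment has changed:

```python
def update_exploration(epsilon, reward, baseline, alpha=0.1,
                       eps_min=0.05, eps_max=0.9):
    """Increase epsilon (more exploration) when reward falls below
    the running baseline, decay it otherwise; returns the new rate
    and the updated exponential-moving-average baseline.
    (Multipliers 1.5 / 0.95 are arbitrary illustrative choices.)"""
    if reward < baseline:
        epsilon = min(eps_max, epsilon * 1.5)
    else:
        epsilon = max(eps_min, epsilon * 0.95)
    baseline = (1.0 - alpha) * baseline + alpha * reward
    return epsilon, baseline
```

A swarm in a fast-changing environment would run such an update per agent (or per colony-level statistic), which is one concrete way the "dynamic balance" discussed above can be operationalised.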

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the case of children who have newly arrived from a foreign country and their peers at school, and identify two situations and trajectories in which children make eye contact: asking for or giving instructions, and sharing an emotional reaction.
Abstract: Our work is motivated by the idea that social robots can help inclusive processes in groups of children, focusing on the case of children who have newly arrived from a foreign country and their peers at school. Building on an initial study where we tested different robot behaviours and recorded children’s interactions mediated by a robot in a game, we present in this paper the findings from a subsequent analysis of the same video data drawing from ethnomethodology and conversation analysis. We describe how this approach differs from predominantly quantitative video analysis in HRI; how mutual gaze appeared as a challenging interactional accomplishment between unacquainted children, and why we focused on this phenomenon. We identify two situations and trajectories in which children make eye contact: asking for or giving instructions, and sharing an emotional reaction. Based on detailed analyses of a selection of extracts in the empirical section, we describe patterns and discuss the links between the different situations and trajectories, and relationship building. Our findings inform HRI and robot design by identifying complex interactional accomplishments between two children, as well as group dynamics which support these interactions. We argue that social robots should be able to perceive such phenomena in order to better support inclusion of outgroup children. Lastly, by explaining how we combined approaches and showing how they build on each other, we also hope to demonstrate the value of interdisciplinary research, and encourage it.

Journal ArticleDOI
TL;DR: In this paper, an educational robotics lab has been planned for undergraduate students in an Electronic Engineering degree, using the Project Based Learning (PBL) approach and the NAO robot, with the aim of making the functions of the robot as social and autonomous as possible, adopting in the design process the Wolfram Language (WL), from the Mathematica software.
Abstract: An educational robotics lab has been planned for undergraduate students in an Electronic Engineering degree, using the Project Based Learning (PBL) approach and the NAO robot. Students worked in a research context, with the aim of making the functions of the NAO robot as social and autonomous as possible, adopting the Wolfram Language (WL), from the Mathematica software, in the design process. By interfacing the programming environment of the NAO with Mathematica, they partly solved the problem of the NAO’s autonomy, realizing enhanced functions of autonomous movement and recognition of human faces and speech that improve the system’s social interaction. An external repository was created to streamline processes and store data that the robot can easily access. Self-assessment processes demonstrated that the course provided students with useful skills to cope with real-life problems. The students’ feedback also captured cognitive aspects of programming in the WL.

Journal ArticleDOI
TL;DR: In this paper, a gear-based differential mechanism is proposed to actuate the flexion/extension motion of the fingers and apply bidirectional forces, that is, it is able to both open and close the fingers.
Abstract: Exoskeletons, and more generally wearable mechatronic devices, represent a promising opportunity for rehabilitation and assistance to people presenting with temporary and/or permanent diseases. However, there are still some limits in the diffusion of robotic technologies for neuro-rehabilitation, notwithstanding their technological developments and evidence of clinical effectiveness. One of the main bottlenecks that constrain the complexity, weight, and costs of exoskeletons is represented by the actuators. This problem is particularly evident in devices designed for the upper limb, and in particular for the hand, where dimensional limits and kinematic complexity are particularly challenging. This study presents the design and prototyping of a hand finger exoskeleton. In particular, we focus on the design of a gear-based differential mechanism aimed at coupling the motion of two adjacent fingers while limiting the complexity and costs of the system. The exoskeleton is able to actuate the flexion/extension motion of the fingers and apply bidirectional forces, that is, it is able to both open and close the fingers. The kinematic structure of the finger actuation system has the peculiarity of presenting three DoFs when the exoskeleton is not worn and one DoF when it is worn, allowing better adaptability and higher wearability. The design of the gear-based differential is inspired by the mechanism widely used in the automotive field; it allows two fingers to be actuated with only one actuator while keeping their movements independent.
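An ideal bevel-gear differential like the one described constrains the actuator (carrier) angle to the mean of its two outputs while leaving their relative motion free. In sketch form — illustrative ideal kinematics only, not the paper's actual gear ratios:

```python
def differential_outputs(theta_carrier, theta_relative):
    """Ideal 1:1 bevel-gear differential: the carrier (actuator)
    angle equals the mean of the two finger angles, while
    theta_relative -- their free relative motion -- lets each
    finger adapt independently to the grasped object."""
    finger_a = theta_carrier + 0.5 * theta_relative
    finger_b = theta_carrier - 0.5 * theta_relative
    return finger_a, finger_b
```

One actuator thus drives both fingers, and (in the ideal case) the actuation torque splits equally between them as long as neither finger is blocked.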

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a system that allows the surgeon to perform a bimanual coordination and navigation task, while a robotic arm autonomously performs the endoscope positioning tasks.
Abstract: Many keyhole interventions rely on bimanual handling of surgical instruments, forcing the main surgeon to rely on a second surgeon to act as a camera assistant. In addition to the burden of excessively involving surgical staff, this may lead to reduced image stability, increased task completion time and sometimes errors due to the monotony of the task. Robotic endoscope holders, controlled by a set of basic instructions, have been proposed as an alternative, but their unnatural handling may increase the cognitive load of the (solo) surgeon, which hinders their clinical acceptance. More seamless integration in the surgical workflow would be achieved if robotic endoscope holders collaborated with the operating surgeon via semantically rich instructions that closely resemble instructions that would otherwise be issued to a human camera assistant, such as “focus on my right-hand instrument.” As a proof of concept, this paper presents a novel system that paves the way towards a synergistic interaction between surgeons and robotic endoscope holders. The proposed platform allows the surgeon to perform a bimanual coordination and navigation task, while a robotic arm autonomously performs the endoscope positioning tasks. Within our system, we propose a novel tooltip localization method based on surgical tool segmentation and a novel visual servoing approach that ensures smooth and appropriate motion of the endoscope camera. We validate our vision pipeline and run a user study of this system. The clinical relevance of the study is ensured through the use of a laparoscopic exercise validated by the European Academy of Gynaecological Surgery which involves bimanual coordination and navigation. Successful application of our proposed system provides a promising starting point towards broader clinical adoption of robotic endoscope holders.
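For context on the servoing component, the classical image-based visual servoing (IBVS) law — a textbook baseline, not the paper's "novel visual servoing approach" — commands a camera twist v = −λ L⁺ e from point-feature errors e via the interaction matrix L:

```python
import numpy as np

def ibvs_twist(features, desired, depths, lam=0.5):
    """Textbook IBVS for point features: `features`/`desired` are
    (N, 2) normalised image coordinates, `depths` the estimated Z of
    each feature. Returns the 6-vector camera twist
    [vx, vy, vz, wx, wy, wz] = -lam * pinv(L) @ e."""
    error = (features - desired).reshape(-1)
    rows = []
    for (x, y), Z in zip(features, depths):
        # Standard interaction-matrix rows for a point (x, y, Z).
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    L = np.asarray(rows)
    return -lam * np.linalg.pinv(L) @ error
```

Here the tracked feature would be the segmented tooltip; a scheme tailored to endoscope holders additionally smooths the commanded motion, as the abstract emphasises.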

Journal ArticleDOI
TL;DR: In this paper, realistic interactive suture simulation for training of suturing and knot-tying tasks commonly used in robotically-assisted surgery is presented, and the results are integrated into a powerful system-agnostic simulator and compared with equivalent tasks performed with the da Vinci Xi system.
Abstract: Current surgical robotic systems are teleoperated and do not have force feedback. Considerable practice is required to learn how to use visual input such as tissue deformation upon contact as a substitute for tactile sense. Thus, unnecessarily high forces are observed in novices, prior to specific robotic training, and visual force feedback studies demonstrated reduction of applied forces. Simulation exercises with realistic suturing tasks can provide training outside the operating room. This paper presents contributions to realistic interactive suture simulation for training of suturing and knot-tying tasks commonly used in robotically-assisted surgery. To improve the realism of the simulation, we developed a wire model in global coordinates with a new constraint formulation for elongation. We demonstrated that continuous modeling of the contacts avoids instabilities during knot tightening. Visual cues are additionally provided, based on the computation of mechanical forces or constraints, to support learning how to dose the forces. The results are integrated into a powerful system-agnostic simulator, and the comparison with equivalent tasks performed with the da Vinci Xi system confirms its realism.
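The paper's elongation constraint is specific to its wire model; a generic position-based stand-in, projecting a single wire segment back toward its rest length, conveys the flavour of such a constraint (illustrative only — the actual formulation is not reproduced here):

```python
import numpy as np

def project_segment(p1, p2, rest_length, stiffness=1.0):
    """PBD-style constraint projection: move both endpoints
    symmetrically so the segment length relaxes toward rest_length
    (stiffness in [0, 1]; 1.0 enforces the constraint exactly)."""
    d = p2 - p1
    length = np.linalg.norm(d)
    if length < 1e-12:          # degenerate segment: leave untouched
        return p1, p2
    corr = 0.5 * stiffness * (length - rest_length) * d / length
    return p1 + corr, p2 - corr
```

A full wire is a chain of such segments projected iteratively per time step; the instabilities the authors report during knot tightening arise when contacts between segments are handled discontinuously.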

Journal ArticleDOI
TL;DR: The results show that opting in to the robotic tutoring is beneficial for students, with significant subjective knowledge gains and increases in intrinsic motivation regarding the content of the course in general.
Abstract: Learning in higher education scenarios requires self-directed learning and the challenging task of self-motivation while individual support is rare. The integration of social robots to support learners has already shown promise to benefit the learning process in this area. In this paper, we focus on the applicability of an adaptive robotic tutor in a university setting. To this end, we conducted a long-term field study implementing an adaptive robotic tutor to support students with exam preparation over three sessions during one semester. In a mixed design, we compared the effect of an adaptive tutor to a control condition across all learning sessions. With the aim to benefit not only motivation but also academic success and the learning experience in general, we draw from research in adaptive tutoring, social robots in education, as well as our own prior work in this field. Our results show that opting in to the robotic tutoring is beneficial for students. We found significant subjective knowledge gains and increases in intrinsic motivation regarding the content of the course in general. Finally, participation resulted in a significantly better exam grade compared to students not participating. However, the extended adaptivity of the robotic tutor in the experimental condition did not seem to enhance learning, as we found no significant differences compared to a non-adaptive version of the robot.

Journal ArticleDOI
TL;DR: Inspired by existing mathematical tools for studying the symmetry structures of geometric spaces, geometric sensor registration, state estimation, and control methods provide indispensable insights into the problem formulations and generalization of robotics algorithms to challenging unknown environments.
Abstract: This article reports on recent progress in robot perception and control methods developed by taking the symmetry of the problem into account. Inspired by existing mathematical tools for studying the symmetry structures of geometric spaces, geometric sensor registration, state estimation, and control methods provide indispensable insights into the problem formulations and generalization of robotics algorithms to challenging unknown environments. When combined with computational methods for learning hard-to-measure quantities, symmetry-preserving methods deliver substantial performance gains. The article supports this claim by showcasing experimental results of robot perception, state estimation, and control in real-world scenarios.

Journal ArticleDOI
TL;DR: This paper presents Cora, a conversational system that recommends recipes aligned with its users’ eating habits and current preferences and evaluates the impact of Cora’s conversational skills andusers’ interaction mode on users' perception and intention to cook the recommended recipes.
Abstract: Unhealthy eating behavior is a major public health issue with serious repercussions on an individual’s health. One potential solution to overcome this problem, and help people change their eating behavior, is to develop conversational systems able to recommend healthy recipes. One challenge for such systems is to deliver personalized recommendations matching users’ needs and preferences. Beyond the intrinsic quality of the recommendation itself, various factors might also influence users’ perception of a recommendation. In this paper, we present Cora, a conversational system that recommends recipes aligned with its users’ eating habits and current preferences. Users can interact with Cora in two different ways. They can select pre-defined answers by clicking on buttons to talk to Cora or write text in natural language. Additionally, Cora can engage users through a social dialogue, or go straight to the point. Cora is also able to propose different alternatives and to justify its recipe recommendations by explaining the trade-off between them. We conduct two experiments. In the first, we evaluate the impact of Cora’s conversational skills and users’ interaction mode on users’ perception and intention to cook the recommended recipes. Our results show that a conversational recommendation system that engages its users through a rapport-building dialogue improves users’ perception of the interaction as well as their perception of the system. In the second, we evaluate the influence of Cora’s explanations and recommendation comparisons on users’ perception. Our results show that explanations positively influence users’ perception of a recommender system. However, comparing healthy recipes with a decoy is a double-edged sword. 
Although such a comparison is perceived as significantly more useful than a single healthy recommendation, explaining the difference between the decoy and the healthy recipe would actually make people less likely to use the system.

Journal ArticleDOI
TL;DR: This model is intended to serve as AI-enhanceable coordination software for future robotic court bee surrogates and as a hardware controller for generating nature-like behavior patterns for such a robotic ensemble; it is the first step towards a team of robots working in a bio-compatible way to study honey bees and to increase their pollination performance, thus achieving a stabilizing effect at the ecosystem level.
Abstract: Honey bees live in colonies of thousands of individuals, that not only need to collaborate with each other but also to interact intensively with their ecosystem. A small group of robots operating in a honey bee colony and interacting with the queen bee, a central colony element, has the potential to change the collective behavior of the entire colony and thus also improve its interaction with the surrounding ecosystem. Such a system can be used to study and understand many elements of bee behavior within hives that have not been adequately researched. We discuss here the applicability of this technology for ecosystem protection: A novel paradigm of a minimally invasive form of conservation through “Ecosystem Hacking”. We discuss the necessary requirements for such technology and show experimental data on the dynamics of the natural queen’s court, initial designs of biomimetic robotic surrogates of court bees, and a multi-agent model of the queen bee court system. Our model is intended to serve as an AI-enhanceable coordination software for future robotic court bee surrogates and as a hardware controller for generating nature-like behavior patterns for such a robotic ensemble. It is the first step towards a team of robots working in a bio-compatible way to study honey bees and to increase their pollination performance, thus achieving a stabilizing effect at the ecosystem level.

Journal ArticleDOI
TL;DR: In this article, an inverse distance weighting approach is used to interpolate gamma radiation observations into the configuration space of the robot, reducing the total accumulated dose to background levels in real-world deployment and by up to a factor of 10 in simulation.
Abstract: Humans in hazardous environments take actions to reduce unnecessary risk, including limiting exposure to radioactive materials where ionising radiation can be a threat to human health. Robots can adopt the same approach of risk avoidance to minimise exposure to radiation, therefore limiting damage to electronics and materials. Reducing a robot’s exposure to radiation results in longer operational lifetime and better return on investment for nuclear sector stakeholders. This work achieves radiation avoidance through the use of layered costmaps, to inform path planning algorithms of this additional risk. Interpolation of radiation observations into the configuration space of the robot is accomplished using an inverse distance weighting approach. This technique was successfully demonstrated using an unmanned ground vehicle running the Robot Operating System equipped with compatible gamma radiation sensors, both in simulation and in real-world mock inspection missions, where the vehicle was exposed to radioactive materials in Lancaster University’s Neutron Laboratory. The addition of radiation avoidance functionality was shown to reduce total accumulated dose to background levels in real-world deployment and by up to a factor of 10 in simulation.
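Inverse distance weighting (Shepard interpolation) estimates the dose rate at an unobserved cell from sparse gamma readings; a minimal sketch follows (parameters such as the power exponent are assumptions, not taken from the paper):

```python
import numpy as np

def idw_estimate(query, points, values, power=2.0, eps=1e-9):
    """Shepard inverse distance weighting: weight each observation
    by 1/d**power; if the query coincides with an observation,
    return that observation directly to avoid division by zero."""
    d = np.linalg.norm(points - query, axis=1)
    if d.min() < eps:
        return float(values[int(d.argmin())])
    w = d ** -power
    return float(np.dot(w, values) / w.sum())
```

Evaluating this at every cell of the grid yields the radiation layer of the layered costmap, which the planner then treats as additional traversal cost, trading path length against accumulated dose.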