
Showing papers on "Haptic technology" published in 2019


Journal ArticleDOI
21 Nov 2019-Nature
TL;DR: A wireless, battery-free platform of electronic systems and haptic interfaces capable of softly laminating onto the curved surfaces of the skin to communicate information via spatio-temporally programmable patterns of localized mechanical vibrations is presented.
Abstract: Traditional technologies for virtual reality (VR) and augmented reality (AR) create human experiences through visual and auditory stimuli that replicate sensations associated with the physical world. The most widespread VR and AR systems use head-mounted displays, accelerometers and loudspeakers as the basis for three-dimensional, computer-generated environments that can exist in isolation or as overlays on actual scenery. In comparison to the eyes and the ears, the skin is a relatively underexplored sensory interface for VR and AR technology that could, nevertheless, greatly enhance experiences at a qualitative level, with direct relevance in areas such as communications, entertainment and medicine [1,2]. Here we present a wireless, battery-free platform of electronic systems and haptic (that is, touch-based) interfaces capable of softly laminating onto the curved surfaces of the skin to communicate information via spatio-temporally programmable patterns of localized mechanical vibrations. We describe the materials, device structures, power delivery strategies and communication schemes that serve as the foundations for such platforms. The resulting technology creates many opportunities for use where the skin provides an electronically programmable communication and sensory input channel to the body, as demonstrated through applications in social media and personal engagement, prosthetic control and feedback, and gaming and entertainment. Interfaces for epidermal virtual reality technology are demonstrated that can communicate by programmable patterns of localized mechanical vibrations.

500 citations
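
As a rough software-level illustration of the "spatio-temporally programmable patterns of localized mechanical vibrations" described above, the sketch below plays a schedule of (onset time, actuator index, amplitude) events over an actuator array. The event format and the set_actuator callback are invented for illustration; they are not the paper's wireless protocol or hardware interface.

```python
# A hedged sketch of a spatio-temporal vibration pattern: a schedule of
# (onset time, actuator index, amplitude) events played over an array of
# skin-mounted vibrotactile actuators. All names here are illustrative.
import time

# Sweep a vibration across a 2x3 actuator array, one element every 100 ms.
pattern = [(0.1 * i, i, 1.0) for i in range(6)]

def play(pattern, set_actuator):
    """set_actuator(index, amplitude) drives one vibrotactile element."""
    t0 = time.monotonic()
    for t_on, idx, amp in sorted(pattern):
        time.sleep(max(0.0, t_on - (time.monotonic() - t0)))
        set_actuator(idx, amp)

play(pattern, set_actuator=lambda i, a: print(f"actuator {i} -> amplitude {a}"))
```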


Proceedings ArticleDOI
20 May 2019
TL;DR: This work uses self-supervision to learn a compact and multimodal representation of sensory inputs, which can then be used to improve the sample efficiency of the policy learning of deep reinforcement learning algorithms.
Abstract: Contact-rich manipulation tasks in unstructured environments often require both haptic and visual feedback. However, it is non-trivial to manually design a robot controller that combines modalities with very different characteristics. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally intractable to deploy on real robots due to sample complexity. We use self-supervision to learn a compact and multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning. We evaluate our method on a peg insertion task, generalizing over different geometry, configurations, and clearances, while being robust to external perturbations. We present results in simulation and on a real robot.

322 citations
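
The sketch below illustrates, under loose assumptions, the kind of architecture the abstract describes: separate encoders for camera images and force/torque readings fused into one latent vector, trained with a self-supervised head (here, next end-effector pose prediction; the paper's actual objectives differ). All layer sizes and input shapes are illustrative, not the authors' implementation.

```python
# A hedged sketch of a multimodal representation for contact-rich
# manipulation: vision and haptic branches fused into one latent vector.
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Vision branch: small CNN over 64x64 RGB frames.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(latent_dim),
        )
        # Haptic branch: MLP over a window of 32 six-axis F/T samples.
        self.haptic = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 6, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.fuse = nn.Linear(2 * latent_dim, latent_dim)
        # Self-supervised head: predict the next end-effector position
        # from the fused latent and the commanded action (3-D here).
        self.predict_next = nn.Linear(latent_dim + 3, 3)

    def forward(self, image, forces, action):
        z = self.fuse(torch.cat([self.vision(image), self.haptic(forces)], -1))
        return z, self.predict_next(torch.cat([z, action], -1))

enc = MultimodalEncoder()
img, ft, act = torch.randn(8, 3, 64, 64), torch.randn(8, 32, 6), torch.randn(8, 3)
z, pred = enc(img, ft, act)  # z feeds the RL policy; pred trains the encoder
```

Training minimizes a prediction loss on pred; the learned latent z then serves as the compact state on which a sample-efficient policy is trained.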


Journal ArticleDOI
03 Oct 2019
TL;DR: New developments in the handling of tactile data, energy autonomy, and large-area manufacturing are reviewed, along with the challenges they raise for tactile sensing and haptics in robotics and prosthetics.
Abstract: Sensory feedback from touch is critical for many tasks carried out by robots and humans, such as grasping objects or identifying materials. Electronic skin (e-skin) is a crucial technology for these purposes. Artificial tactile skin that can play the role of human skin remains a distant possibility because of difficult issues in resilience, manufacturing, mechanics, sensorics, electronics, energetics, information processing, and transport. Taken together, these issues make it difficult to bestow robots, or prosthetic devices, with effective tactile skins. Nonetheless, progress on these issues over the past few years has been encouraging, and with the advent of deformable sensors and flexible electronics we have come close to providing some of the abilities of biological skin. Naive imitation of skin morphology and sensing of an impoverished set of mechanical and thermal quantities are not sufficient, however; more efficient ways of extracting tactile information from mechanical contact are needed than those previously available. Renewed interest in neuromorphic tactile skin is expected to bring fresh ideas to this field. This article reviews these new developments, particularly those related to the handling of tactile data, energy autonomy, and large-area manufacturing. The challenges these advances raise for tactile sensing and haptics in robotics and prosthetics are discussed along with potential solutions.

191 citations


Journal ArticleDOI
TL;DR: A QoE-aware model for dynamic resource allocation for tactile applications in IIoT is presented, and empirical results are discussed to elaborate on the impact of such a model for QoE-aware resource allocation, which can be very important in the context of the Tactile Internet, especially IIoT.
Abstract: Fifth-generation mobile communication networks are currently being deployed, making the Tactile Internet possible. The Tactile Internet is the future advancement of the current Internet of Things (IoT) vision wherein haptics, or touch and senses, can be communicated from one geographical place to another, enabling near real-time control and navigation of remote objects. The Tactile Internet will have use cases in several application domains, with the industrial sector among the most prominent ones. With the Industrial Internet of Things (IIoT), the Tactile Internet will be used in healthcare, manufacturing, mining, education, autonomous driving, etc. The acceptable delay in most of these tactile applications will be under one millisecond. Since the Tactile Internet communicates haptics and gives visual feedback, quality of service (QoS) becomes an important issue. Similarly, users' satisfaction with the service quality [often measured as quality of experience (QoE)] becomes equally important. To reap the true potential of the Tactile Internet, sophisticated and intelligent mechanisms are required between the end nodes. A middleware such as fog computing can be vital in this context, since it can allocate resources based on the QoS/QoE requirements of each service. In this context, we present a QoE-aware model for dynamic resource allocation for tactile applications in IIoT. We implement the model in Java and discuss the empirical results to elaborate on the impact of such a model for QoE-aware resource allocation, which can be very important in the context of the Tactile Internet, especially IIoT. We also discuss some of the most prominent use cases of the Tactile IIoT.

111 citations
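
As a hedged illustration of QoE-aware resource allocation at a fog node, the sketch below serves latency-critical tactile flows first and maps the latency each flow obtains to a mean-opinion-score-like QoE value. The flow model, the greedy policy, and the QoE curve are all invented stand-ins, not the paper's model.

```python
# A simplified, illustrative QoE-aware allocator: tightest deadline first.
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    cycles: float        # CPU demand per task (megacycles)
    deadline_ms: float   # acceptable latency (tactile flows: ~1 ms)

def qoe(latency_ms: float, deadline_ms: float) -> float:
    """Map latency to a 1..5 mean-opinion-score-like value (illustrative)."""
    if latency_ms <= deadline_ms:
        return 5.0
    return max(1.0, 5.0 - 4.0 * (latency_ms / deadline_ms - 1.0))

def allocate(flows, capacity_mcycles_per_ms):
    """Greedy: give each flow just enough CPU rate to meet its deadline."""
    result, remaining = {}, capacity_mcycles_per_ms
    for f in sorted(flows, key=lambda f: f.deadline_ms):
        need = f.cycles / f.deadline_ms          # rate to just meet deadline
        share = min(need, remaining)
        remaining -= share
        latency = f.cycles / share if share > 0 else float("inf")
        result[f.name] = (share, qoe(latency, f.deadline_ms))
    return result

print(allocate([Flow("tactile", 2.0, 1.0), Flow("video", 30.0, 40.0)], 10.0))
```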


Journal ArticleDOI
TL;DR: In vitro results are presented, which demonstrate that catheter-based haptic force feedback has a benefit for improving catheterization skills as well as reducing the cognitive workload of operators.
Abstract: The robot-assisted endovascular catheterization system (RAECS) has the potential to address some of the procedural challenges of endovascular surgery and to separate interventionalists from X-ray radiation during the procedure. However, the employment of robotic systems partly changes the natural gestures and behavior of medical professionals. This paper presents a RAECS that augments the surgeon's motions using the conventional catheter and generates haptic force feedback to ensure surgical safety. A magnetorheological-fluid-based haptic interface is proposed to measure the actions of the operator and provide the haptic force. The slave catheter manipulator is presented to actuate the patient catheter in two degrees of freedom (radial and axial directions). In addition, a force model is provided to characterize the kinematics of the catheter intervention. Afterward, the notions of "pseudo-collision" and "real collision" are utilized to describe the catheter tip–vessel interaction. In vitro results are presented, which demonstrate that catheter-based haptic force feedback has a benefit for improving catheterization skills as well as reducing the cognitive workload of operators.

105 citations


Journal ArticleDOI
01 Feb 2019
TL;DR: In this article, the authors present the fundamentals and state of the art in haptic codec design for the Tactile Internet and discuss how limitations of the human haptic perception system can be exploited for efficient perceptual coding of kinesthetic and tactile information.
Abstract: The Tactile Internet will enable users to physically explore remote environments and to make their skills available across distances. An important technological aspect in this context is the acquisition, compression, transmission, and display of haptic information. In this paper, we present the fundamentals and state of the art in haptic codec design for the Tactile Internet. The discussion covers both kinesthetic data reduction and tactile signal compression approaches. We put a special focus on how limitations of the human haptic perception system can be exploited for efficient perceptual coding of kinesthetic and tactile information. Further aspects addressed in this paper are the multiplexing of audio and video with haptic information and the quality evaluation of haptic communication solutions. Finally, we describe the current status of the ongoing IEEE standardization activity P1918.1.1 which has the ambition to standardize the first set of codecs for kinesthetic and tactile information exchange across communication networks.

104 citations
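
A concrete instance of exploiting perceptual limits for kinesthetic data reduction is the classic deadband approach that the haptic codec literature builds on: a sample is transmitted only when it deviates from the last transmitted value by more than a Weber fraction k, since smaller changes are imperceptible. The sketch below uses an illustrative k = 0.1; actual thresholds depend on the signal and the user.

```python
# Perceptual deadband coding: send a sample only if it differs from the
# last transmitted value by more than a Weber fraction k (illustrative).
def deadband_encode(samples, k=0.1):
    sent = []
    last = None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > k * abs(last):
            sent.append((i, x))   # transmit (index, value)
            last = x              # receiver holds this value until update
    return sent

forces = [1.00, 1.02, 1.05, 1.15, 1.16, 0.90, 0.89]
print(deadband_encode(forces))    # -> [(0, 1.0), (3, 1.15), (5, 0.9)]
```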


Journal ArticleDOI
01 Apr 2019
TL;DR: This paper surveys the paradigm shift in haptic display that has occurred over the past 30 years, classified into three stages: desktop haptics, surface haptics, and wearable haptics; it also addresses the importance of understanding human haptic perception for designing effective haptic devices.
Abstract: Immersion, interaction, and imagination are three features of virtual reality (VR). Existing VR systems possess fairly realistic visual and auditory feedback but remain poor at haptic feedback, the channel through which humans perceive the abundant haptic properties of the physical world. A haptic display is an interface aiming to enable bilateral signal communication between human and computer, and thus to greatly enhance the immersion and interaction of VR systems. This paper surveys the paradigm shift in haptic display that has occurred over the past 30 years, classified into three stages: desktop haptics, surface haptics, and wearable haptics. The driving forces, key technologies, and typical applications in each stage are critically reviewed. Toward future high-fidelity VR interaction, research challenges are highlighted concerning handheld haptic devices, multimodal haptic devices, and high-fidelity haptic rendering. Finally, the importance of understanding human haptic perception for designing effective haptic devices is addressed.

98 citations


Journal ArticleDOI
01 Feb 2019
TL;DR: The tactile internet will enable a new range of capabilities for immersive remote operations and interactions with the physical world; 5G will play a fundamental part by providing the reliability and low latency demanded by wirelessly connected operators and teleoperated systems.
Abstract: The tactile internet will enable a new range of capabilities for immersive remote operations and interactions with the physical world. Tactile internet use cases span many fields, from remote operation of industrial applications in, e.g., hazardous environments, via remote-controlled driving in a fully automated intelligent transport system, to remote surgery where unique expert skills can be delivered to different locations in the world. Fifth-generation (5G) communication will play a fundamental part in this tactile internet vision, as it will provide the necessary capabilities, in terms of reliability and low latency, for the demanding communication needs of operators and teleoperated systems that are connected wirelessly. This paper provides an overview of tactile internet services and haptic interactions and communication. The 5G functionality for ultrareliable and low-latency services is described in depth, and it is shown how 5G new radio (NR) and the evolved long-term evolution (LTE) radio interface can achieve guaranteed low-latency wireless transmission. The costs of providing reliable and low-latency wireless transmission, in terms of reduced spectral efficiency and coverage, are discussed. The 5G system architecture, with a software-based network design built on a distributed cloud platform, is presented. It is shown how the 5G network is configured for tactile internet services via multidomain orchestration.

95 citations


Journal ArticleDOI
TL;DR: A robot intelligence framework is developed by merging robot learning technology with a perception mechanism; it is effective for tasks performed with repeatability and rapidity in a teleoperated mode.
Abstract: Due to the lack of a transparent and friendly human–robot interaction (HRI) interface, as well as various uncertainties, it is usually a challenge to remotely manipulate a robot to accomplish a complicated task. To improve teleoperation performance, we propose a new perception mechanism that integrates a novel learning method to operate robots at a distance. To enhance the perception of the teleoperation system, we utilize the surface electromyogram signal to extract the human operator's muscle activation. In response to changes in the external environment, as sensed through haptic and visual feedback, a human operator naturally reacts with various muscle activations. By imitating human behavior in task execution, capturing not only the motion trajectory but also the arm stiffness adjusted by muscle activation, the robot is expected to be able to carry out repetitive tasks autonomously, or uncertain tasks with improved intelligence. To this end, we develop a robot learning algorithm based on probability statistics under an integrated framework of the hidden semi-Markov model (HSMM) and the Gaussian mixture method. This method is employed to obtain a generative task model based on the robot's trajectory. Then, Gaussian mixture regression based on the HSMM is applied to correct the robot trajectory with the results reproduced from the learned task model. The execution procedure consists of a learning phase and a reproduction phase. To guarantee the stability, immersion, and maneuverability of the teleoperation system, a variable-gain control method that involves electromyography (EMG) is introduced. Experimental results have demonstrated the effectiveness of the proposed method. Note to Practitioners: This paper is inspired by the limitations of teleoperation systems with unfriendly HRI and a lack of intelligence. Human operators need to concentrate on the manipulation in the traditional setup of a teleoperation system, which is quite labor-intensive; this is a major challenge given the increasingly complicated and diverse tasks in teleoperation. Therefore, efficient ways of adding intelligence to telerobots urgently need to be developed. In this paper, we develop a robot intelligence framework by merging robot learning technology with a perception mechanism. The proposed framework is effective for tasks performed with repeatability and rapidity in a teleoperated mode. The proposed method rests on the following three ideas: 1) remote operation information can be actively sensed by infusing muscle activation into a haptics-EMG perception mechanism; 2) robot intelligence can be enhanced by employing a robot learning method, as verified by the experimental results; and 3) the proposed method can potentially be used for telemanufacturing, telerehabilitation, telemedicine, and so on. In future work, more interactive information between humans and telerobots should be taken into consideration in the telerobot perception system to enhance robot intelligence.

93 citations
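
To make the learning-from-demonstration step concrete, the sketch below shows plain Gaussian mixture regression (GMR): a joint GMM over (time, position) is conditioned on time to reproduce a trajectory. The HSMM duration modeling and the EMG-driven variable gains of the actual system are omitted, and the component parameters are illustrative rather than learned.

```python
# A minimal GMR sketch: condition a joint GMM over (t, x) on time t.
import numpy as np

# Each component: prior, mean over [t, x], 2x2 covariance over (t, x).
priors = np.array([0.5, 0.5])
means = np.array([[0.25, 0.0], [0.75, 1.0]])
covs = np.array([[[0.02, 0.01], [0.01, 0.02]],
                 [[0.02, 0.01], [0.01, 0.02]]])

def gmr(t):
    """Predict x given t by conditioning the joint GMM on t."""
    w = np.array([p * np.exp(-0.5 * (t - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                  for p, m, c in zip(priors, means, covs)])
    w /= w.sum()
    # Conditional mean per component: mu_x + (S_xt / S_tt) * (t - mu_t)
    cond = [m[1] + c[1, 0] / c[0, 0] * (t - m[0]) for m, c in zip(means, covs)]
    return float(np.dot(w, cond))

trajectory = [gmr(t) for t in np.linspace(0.0, 1.0, 11)]
print([round(x, 2) for x in trajectory])  # smooth blend between the two modes
```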


Proceedings ArticleDOI
Majed Samad, Elia Gatti, Anne Hermes, Hrvoje Benko, Cesare Parise
02 May 2019
TL;DR: These findings provide the first quantification of the range of C/D ratios that can be used to simulate weight in virtual reality; the findings are discussed in terms of the estimation of the physical work needed to lift an object.
Abstract: In virtual reality, the lack of kinesthetic feedback often prevents users from experiencing the weight of virtual objects. Control-to-display (C/D) ratio manipulation has been proposed as a method to induce weight perception without kinesthetic feedback. Based on the fact that lighter (heavier) objects are easier (harder) to move, this method induces an illusory perception of weight by manipulating the rendered position of users' hands, increasing or decreasing their displayed movements. In a series of experiments, we demonstrate that C/D-ratio manipulation induces a genuine perception of weight while preserving ownership over the virtual hand. This means that such a manipulation can easily be introduced into current VR experiences without disrupting the sense of presence. We discuss these findings in terms of the estimation of the physical work needed to lift an object. Our findings provide the first quantification of the range of C/D ratios that can be used to simulate weight in virtual reality.

90 citations
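
A minimal sketch of the C/D-ratio manipulation itself: the virtual hand's displacement from the grasp point is the real displacement scaled by the ratio, so a ratio below 1 makes the object feel heavier. The function and values below are illustrative; the paper's contribution is quantifying which ratios still feel like genuine weight.

```python
# Control-to-display ratio manipulation for illusory weight (illustrative).
def rendered_hand_position(origin, real_pos, cd_ratio):
    """cd_ratio < 1: displayed motion shrinks -> object feels heavier.
    cd_ratio > 1: displayed motion grows -> object feels lighter."""
    return tuple(o + cd_ratio * (r - o) for o, r in zip(origin, real_pos))

grab_point = (0.0, 1.0, 0.0)   # where the virtual object was grasped
real_hand = (0.0, 1.2, 0.0)    # tracked hand, 20 cm above the grab point
print(rendered_hand_position(grab_point, real_hand, cd_ratio=0.8))
# -> (0.0, 1.16, 0.0): the virtual hand lags behind, implying extra work
```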


Proceedings ArticleDOI
Jaeyeon Lee, Mike Sinclair, Mar Gonzalez-Franco, Eyal Ofek, Christian Holz
02 May 2019
TL;DR: TORC, as discussed by the authors, is a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance, allowing users to interact with virtual objects by sliding their thumb on TORC's trackpad.
Abstract: Recent hand-held controllers have explored a variety of haptic feedback sensations for users in virtual reality by producing both kinesthetic and cutaneous feedback from virtual objects. These controllers are grounded to the user's hand and can only manipulate objects through arm and wrist motions, not using the dexterity of their fingers as they would in real life. In this paper, we present TORC, a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance. Users hold and squeeze TORC using their thumb and two fingers and interact with virtual objects by sliding their thumb on TORC's trackpad. During the interaction, vibrotactile motors produce sensations to each finger that represent the haptic feel of squeezing, shearing or turning an object. Our evaluation showed that using TORC, participants could manipulate virtual objects more precisely (e.g., position and rotate objects in 3D) than when using a conventional VR controller.
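
A hedged sketch of a common vibrotactile texture-rendering rule that a controller like TORC could use (the paper's exact rendering model may differ): as the thumb slides across a virtual grating, the actuator is driven at a frequency equal to the slide speed divided by the texture's grain spacing.

```python
# Grain-based vibrotactile texture rendering (a stand-in, not TORC's model).
import numpy as np

def texture_waveform(slide_speed_m_s, grain_spacing_m, amp=1.0,
                     duration_s=0.05, sample_rate_hz=2000):
    freq = slide_speed_m_s / grain_spacing_m   # temporal frequency in Hz
    t = np.arange(0, duration_s, 1.0 / sample_rate_hz)
    return amp * np.sin(2 * np.pi * freq * t)  # actuator drive signal

# Sliding at 5 cm/s over 1 mm grains -> a 50 Hz vibration.
signal = texture_waveform(0.05, 0.001)
```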

Proceedings ArticleDOI
09 Jul 2019
TL;DR: Tasbi, a multisensory haptic wristband capable of delivering squeeze and vibrotactile feedback, is presented, along with early explorations into its utility as a sensory substitution device for hand interactions, employing squeeze, vibration, and pseudo-haptic effects to render a highly believable virtual button.
Abstract: Augmented and virtual reality are poised to deliver the next generation of computing interfaces. To fully immerse users, it will become increasingly important to couple visual information with tactile feedback for interactions with the virtual world. Small wearable devices which approximate or substitute for sensations in the hands offer an attractive path forward. In this work, we present Tasbi, a multisensory haptic wristband capable of delivering squeeze and vibrotactile feedback. The device features a novel mechanism for generating evenly distributed and purely normal squeeze forces around the wrist. Our approach ensures that Tasbi’s six radially spaced vibrotactors maintain position and exhibit consistent skin coupling. In addition to experimental device characterization, we present early explorations into Tasbi’s utility as a sensory substitution device for hand interactions, employing squeeze, vibration, and pseudo-haptic effects to render a highly believable virtual button.

Journal ArticleDOI
TL;DR: The challenges in the collaboration between human operators and industrial robots for assembly operations focusing on safety and simplified interaction are discussed, and a case study is presented, involving perception technologies for the robot in conjunction with wearable devices used by the operator.
Abstract: This paper discusses the challenges in the collaboration between human operators and industrial robots for assembly operations, focusing on safety and simplified interaction. A case study is presented, involving perception technologies for the robot in conjunction with wearable devices used by the operator. In terms of robot perception, a manual guidance module, an air-pressure-based contact sensor (namely, a skin), and a vision system for recognition and tracking of objects have been developed and integrated. Concerning the wearable devices, an advanced user interface including audio and haptic commands, accompanied by augmented reality technology, is used to support the operator and provide awareness by visualizing information related to production and safety aspects. In parallel, safety functionalities are implemented through collision detection technologies such as a safety skin and safety-monitored regions delimiting the area of the robot's activities. The complete system is coordinated under a common integration platform and is validated in a case study from the white goods industry.

Posted Content
TL;DR: Self-supervision is used to learn a compact and multimodal representation of the authors' sensory inputs, which can then be used to improve the sample efficiency of policy learning in deep reinforcement learning.
Abstract: Contact-rich manipulation tasks in unstructured environments often require both haptic and visual feedback. It is non-trivial to manually design a robot controller that combines these modalities which have very different characteristics. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally intractable to deploy on real robots due to sample complexity. In this work, we use self-supervision to learn a compact and multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning. Evaluating our method on a peg insertion task, we show that it generalizes over varying geometries, configurations, and clearances, while being robust to external perturbations. We also systematically study different self-supervised learning objectives and representation learning architectures. Results are presented in simulation and on a physical robot.

Proceedings ArticleDOI
02 May 2019
TL;DR: HoverHaptics is presented, an autonomous safe-to-touch quadcopter integrated with a virtual shopping experience, along with an approach for tackling the inherent challenges of hovering encountered-type haptic devices, including the use of display techniques, visuo-haptic illusions, and collision avoidance.
Abstract: Quadcopters have been used as hovering encountered-type haptic devices in virtual reality. We suggest that quadcopters can facilitate rich haptic interactions beyond force feedback by appropriating physical objects and the environment. We present HoverHaptics, an autonomous safe-to-touch quadcopter and its integration with a virtual shopping experience. HoverHaptics highlights three affordances of quadcopters that enable these rich haptic interactions: (1) dynamic positioning of passive haptics, (2) texture mapping, and (3) animating passive props. We identify inherent challenges of hovering encountered-type haptic devices, such as their limited speed, inadequate control accuracy, and safety concerns. We then detail our approach for tackling these challenges, including the use of display techniques, visuo-haptic illusions, and collision avoidance. We conclude by describing a preliminary study (n = 9) to better understand the subjective user experience when interacting with a quadcopter in virtual reality using these techniques.

Proceedings ArticleDOI
02 May 2019
TL;DR: The concept and implementation of Drag:on is presented, an ungrounded shape-changing VR controller that provides dynamic passive haptic feedback based on drag (i.e., air resistance) and weight shift; it increases haptic realism in VR compared to standard controllers and, when rotated or swung, improves the perception of virtual resistance.
Abstract: Standard controllers for virtual reality (VR) lack sophisticated means to convey a realistic, kinesthetic impression of size, resistance or inertia. We present the concept and implementation of Drag:on, an ungrounded shape-changing VR controller that provides dynamic passive haptic feedback based on drag, i.e. air resistance, and weight shift. Drag:on leverages the airflow occurring at the controller during interaction. By dynamically adjusting its surface area, the controller changes the drag and rotational inertia felt by the user. In a user study, we found that Drag:on can provide distinguishable levels of haptic feedback. Our prototype increases the haptic realism in VR compared to standard controllers and when rotated or swung improves the perception of virtual resistance. By this, Drag:on provides haptic feedback suitable for rendering different virtual mechanical resistances, virtual gas streams, and virtual objects differing in scale, material and fill state.
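
Why changing the surface area changes the felt resistance follows from the aerodynamic drag equation F_d = 0.5 * rho * C_d * A * v^2, which is linear in the frontal area A. The back-of-the-envelope sketch below uses illustrative numbers, not the paper's device characterization.

```python
# Drag force vs. frontal area: the physical basis of Drag:on's feedback.
RHO_AIR = 1.2          # air density, kg/m^3
CD_FLAT_DISK = 1.17    # typical drag coefficient of a flat disk

def drag_force(area_m2: float, speed_m_s: float) -> float:
    return 0.5 * RHO_AIR * CD_FLAT_DISK * area_m2 * speed_m_s ** 2

for area in (0.01, 0.04):                         # folded vs. unfolded fans
    print(area, drag_force(area, speed_m_s=2.0))  # swing at 2 m/s
# 0.01 m^2 -> ~0.03 N, 0.04 m^2 -> ~0.11 N: a 4x difference in felt drag
```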

Journal ArticleDOI
TL;DR: A method is presented that enables robots to intelligently and compliantly adapt their motions to the intention of a human during physical Human–Robot Interaction in a multi-task setting, employing a class of parameterized dynamical systems that allows for smooth and adaptive transitions between encoded tasks.
Abstract: The goal of this work is to enable robots to intelligently and compliantly adapt their motions to the intention of a human during physical Human–Robot Interaction in a multi-task setting. We employ a class of parameterized dynamical systems that allows for smooth and adaptive transitions between encoded tasks. To comply with human intention, we propose a mechanism that adapts generated motions (i.e., the desired velocity) to those intended by the human user (i.e., the real velocity) thereby switching to the most similar task. We provide a rigorous analytical evaluation of our method in terms of stability, convergence, and optimality yielding an interaction behavior which is safe and intuitive for the human. We investigate our method through experimental evaluations ranging in different setups: a 3-DoF haptic device, a 7-DoF manipulator and a mobile platform.
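
A hedged sketch of the adaptation idea: each task is encoded as a dynamical system (DS) producing a desired velocity toward its own attractor, and a belief over tasks shifts toward the DS whose prediction best matches the human-imposed real velocity. The linear DS, gains, and belief update below are illustrative, not the paper's exact formulation or its stability-preserving construction.

```python
# Task adaptation by matching DS predictions to the human's real velocity.
import numpy as np

attractors = np.array([[0.5, 0.0], [0.0, 0.5]])   # one target per task
belief = np.array([0.5, 0.5])

def ds_velocity(x, target, gain=1.0):
    return gain * (target - x)                    # simple linear DS

def adapt(x, real_velocity, eps=0.5):
    global belief
    preds = np.array([ds_velocity(x, a) for a in attractors])
    # Reward tasks whose predicted velocity aligns with the human's motion.
    scores = preds @ real_velocity
    belief = np.clip(belief + eps * (scores - scores.mean()), 0.0, None)
    belief /= belief.sum()
    return belief @ preds                         # blended desired velocity

x = np.array([0.25, 0.25])
print(adapt(x, real_velocity=np.array([0.2, -0.2])))
```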

Journal ArticleDOI
24 Apr 2019
TL;DR: A robotic catheter autonomously navigated a blood-filled heart using enhanced sensing and control, demonstrating that the performance of an autonomously controlled catheter rivals that of an experienced clinician.
Abstract: While all minimally invasive procedures involve navigating from a small incision in the skin to the site of the intervention, it has not been previously demonstrated how this can be done autonomously. To show that autonomous navigation is possible, we investigated it in the hardest place to do it: inside the beating heart. We created a robotic catheter that can navigate through the blood-filled heart using wall-following algorithms inspired by positively thigmotactic animals. The catheter employs haptic vision, a hybrid sense using imaging for both touch-based surface identification and force sensing, to accomplish wall following inside the blood-filled heart. Through in vivo animal experiments, we demonstrate that the performance of an autonomously controlled robotic catheter rivals that of an experienced clinician. Autonomous navigation is a fundamental capability on which more sophisticated levels of autonomy can be built, e.g., to perform a procedure. Similar to the role of automation in fighter aircraft, such capabilities can free the clinician to focus on the most critical aspects of the procedure while providing precise and repeatable tool motions independent of operator experience and fatigue.

Journal ArticleDOI
TL;DR: In this paper, a survey of emerging material technologies for the sense of touch is presented, including sensory considerations and requirements, materials, actuation principles, and design and fabrication methods.
Abstract: The sense of touch is essential to most physical activities that sustain and enrich our lives. Human touch sensing is facilitated by resources spanning the nervous system, the musculoskeletal system, and the skin; together, these comprise what is referred to as the haptic sense. The sensory organ of touch is the skin, the largest organ of the body, comprising about 15% of our body mass.[1] The sense of touch is involved in nearly all human activities, but information technologies for displaying tactile sensory information to the skin are rudimentary when compared to state-of-the-art video and audio displays, or to tactile perceptual capabilities. Realizing tactile displays with good perceptual fidelity will require major advances in engineering, design, and fabrication. Research over several decades has highlighted the difficulties of meeting the required performance benchmarks using conventional devices, processes, and techniques, and thus the important role that will be played by new material technologies that can bridge the electronic and mechanical domains. This must occur at the smallest scales, because of the great perceptual spatial and temporal acuity of the sense of touch. The requirements involved also furnish valuable performance benchmarks against which many emerging material technologies are being evaluated. This article highlights recent research and possibilities enabled through new material technologies, ranging from organic electronic materials, to carbon nanomaterials, and a variety of composites. Emerging material technologies are surveyed for the sense of touch, including sensory considerations and requirements, materials, actuation principles, and design and fabrication methods. A conclusion reflects on the main open challenges and future prospects for research in this area.

Journal ArticleDOI
TL;DR: A multi-modal pneumatic feedback system designed to provide tactile, kinesthetic, and vibrotactile feedback demonstrated a greater force reduction than single-modality feedback solutions and indicated that the system can help users achieve average grip forces closer to those normally possible with the human hand.
Abstract: Minimally invasive robotic surgery allows for many advantages over traditional surgical procedures, but the loss of force feedback combined with a potential for strong grasping forces can result in excessive tissue damage. Single-modality haptic feedback systems (HFS) have been designed and tested in an attempt to diminish grasping forces, but the results still fall short of natural performance. A multi-modal pneumatic feedback system was designed to allow for tactile, kinesthetic, and vibrotactile feedback, with the aim of more closely imitating natural touch and further improving the effectiveness of HFS in robotic surgical applications and tasks such as tissue grasping and manipulation. Testing of the multi-modal system yielded very promising results, with an average force reduction of nearly 50% between the no-feedback and hybrid (tactile and kinesthetic) trials (p < 1.0E-16). The multi-modal system demonstrated a greater reduction than single-modality feedback solutions and indicated that the system can help users achieve average grip forces closer to those normally possible with the human hand.

Journal ArticleDOI
03 Oct 2019
TL;DR: This special issue provides state-of-the-art coverage of the theoretical, scientific, and practical aspects related to flexible electronic skin.
Abstract: With robots entering our lives in a number of ways, their safe interaction, while operating autonomously, has gained significant attention. We no longer speak of robots as only the industrial tools needed for repetitive tasks such as picking and placing, or robots kept away from people. Not that such tasks are unimportant; rather, significant progress has been made in these application areas, and the focus is now gradually shifting toward robots handling real-world objects under arbitrary circumstances, working safely alongside humans, and assisting them. Physical interaction by touching is important during such tasks to obtain estimates of contact parameters (e.g., force, soft contact, hardness, texture, temperature, etc.), and this makes tactile/touch sensors and haptic feedback critical for robot technology, opening up more and more applications [1]. This trend will continue as we enter the era of smart factories, Industry 4.0, social robots, telesurgery, and so on, where robots are intended to work closely with humans. Robotics is set to become the driving technology underpinning a whole new generation of autonomous devices and cognitive artifacts, providing a link between the digital and physical worlds. In fact, we are looking at a profound evolution, where artificially intelligent (AI) systems could be extended into robots with new embodiments for applications such as brain-controlled robots and haptic avatars [2]. New automation concepts such as human–robot collaboration (HRC) and cyber–physical systems (CPSs) are recognized as having the potential to impact and revolutionize the production landscape. Rich sensorization, with a large number of different sensor types (touch, temperature, pain, electrochemical, gas, idiothetic sensing, etc.), will be critical to such advances. Equally critical are the ways in which these technological advances are managed to develop electronic skin (e-skin) and the ways e-skin is employed to understand the various dimensions of physical interaction in unconstrained environments [3].

Journal ArticleDOI
10 Jul 2019
TL;DR: In this paper, the authors present novel ultrareliable and low-latency communication (URLLC) techniques for URLLC services such as Tactile Internet services, whose typical use cases include teleoperation, immersive virtual reality, and cooperative automated driving.
Abstract: This paper presents novel ultrareliable and low-latency communication (URLLC) techniques for URLLC services, such as Tactile Internet services. Typical use cases of URLLC services include teleoperation, immersive virtual reality, cooperative automated driving, and so on. In such URLLC services, new kinds of traffic, such as haptic information including kinesthetic and tactile information, need to be delivered in addition to the high-quality video and audio traffic of traditional multimedia services. This traffic has varied characteristics in terms of packet sizes and data rates, together with a variety of latency and reliability requirements. Moreover, some traffic may occur sporadically yet require reliable delivery of packets of medium to large size within a low latency, which is not supported by current state-of-the-art wireless communication systems and is very challenging for future wireless communication systems. Thus, to meet such a variety of tight traffic requirements in a wireless communication system, novel technologies from the physical layer to the network layer need to be devised. In this paper, novel physical-layer technologies, such as waveform multiplexing, multiple-access schemes, channel code design, synchronization, and full-duplex transmission, are introduced for spectrally efficient URLLC. In addition, a novel performance evaluation approach, which combines a ray-tracing tool with system-level simulation, is suggested for evaluating the performance of the proposed schemes. Simulation results show the feasibility of the proposed schemes for providing realistic URLLC services in realistic geographical environments, which encourages further efforts to substantiate the proposed work.

Journal ArticleDOI
TL;DR: Haptic feedback is added to virtual reality simulators to increase fidelity and thereby improve the training effect; force parameters and force feedback in box trainers have been shown to improve training results.

Journal ArticleDOI
TL;DR: sEMG technologies involving multichannel sEMG electrode arrays and processing methods are presented, and the current state of the art in artificial sensation and haptic feedback is reviewed in order to outline challenging issues and future developments.
Abstract: With the ongoing trend toward ubiquitous computing, everything is going to be connected to the Internet, and its data will be used for various progressive purposes, creating not only information but also knowledge and even wisdom. The Internet of Things (IoT) is becoming important because the amount of data makes it possible to create more usefulness and develop smart applications for users. It mainly focuses on enabling general objects to see, hear, and smell the physical world for themselves, and on making them connected to share their observations. In this paper, we focus our attention on the integration of artificial sensory perception and haptic feedback in sEMG hands, which is an intelligent application of the IoT. Artificial sensory perception and haptic feedback are essential elements for amputees with myoelectric hands to restore the grasping function. They can provide information to users, such as the forces of interaction and surface properties at points of contact between hands and objects. Recent advancements in robot tactile sensing have led to the development of many computational techniques that exploit this important sensory channel. At the same time, surface electromyography (sEMG) is perhaps most useful for providing insight into how the neuromuscular system behaves. Therefore, the integration of sEMG technology, artificial sensation, and haptic feedback plays an important role in improving manipulation performance and enhancing perceptual embodiment for users. This paper presents sEMG technologies involving multichannel sEMG electrode arrays and processing methods, and then reviews the current state of the art in artificial sensation and haptic feedback. Drawing on these advancements, and taking into account the design considerations of each feedback modality and individual haptic technology, the paper outlines challenging issues and future developments.

Journal ArticleDOI
TL;DR: This paper describes the Tactile Internet's human-in-the-loop-centric design principles and haptic communication models, and elaborates on the development of decentralized cooperative dynamic bandwidth allocation algorithms for end-to-end resource coordination in fiber-wireless (FiWi) access networks.
Abstract: Historically, research efforts in optical networks have focused on the goal of continuously increasing capacity rather than on lowering end-to-end latency. This slowly started to change in the access environment with post-Next-Generation Passive Optical Network 2 research. The emphasis on latency grew in importance with the introduction of 5G ultra-reliable and low-latency communication requirements. In this paper, we focus on the emerging Tactile Internet as one of the most interesting 5G low-latency applications enabling novel immersive experiences. After describing the Tactile Internet's human-in-the-loop-centric design principles and haptic communications models, we elaborate on the development of decentralized cooperative dynamic bandwidth allocation algorithms for end-to-end resource coordination in fiber-wireless (FiWi) access networks. We then use machine learning in the context of FiWi enhanced heterogeneous networks to decouple haptic feedback from the impact of extensive propagation delays. This enables humans to perceive remote task environments in time at a 1-ms granularity.
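
A hedged sketch of the delay-decoupling idea mentioned above: if the round-trip delay is known, a locally fitted predictor can estimate the force sample that will arrive delay_steps from now, so haptic feedback need not wait for the network. The paper applies machine learning in FiWi networks; the simple least-squares autoregressive model below is only a stand-in for that idea.

```python
# Masking propagation delay by forecasting future force samples locally.
import numpy as np

def fit_ar(history, order=3):
    """Least-squares AR(order) coefficients from a 1-D force history."""
    X = np.column_stack([history[i:len(history) - order + i]
                         for i in range(order)])
    y = history[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast(history, coeffs, delay_steps):
    """Roll the AR model forward to the sample expected after the delay."""
    h = list(history)
    for _ in range(delay_steps):
        h.append(float(np.dot(coeffs, h[-len(coeffs):])))
    return h[-1]

force = np.sin(np.linspace(0, 6, 200))      # stand-in force trace
c = fit_ar(force)
print(forecast(force, c, delay_steps=10))   # estimate 10 samples ahead
```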

Journal ArticleDOI
TL;DR: The novel perceptual environment of VR may affect vision for action, by shifting users away from a dorsal mode of control, which may create a fundamental disparity between virtual and real-world skills that has important consequences for how the authors understand perception and action in the virtual world.
Abstract: Virtual reality (VR) is a promising tool for expanding the possibilities of psychological experimentation and implementing immersive training applications. Despite a recent surge in interest, there remains an inadequate understanding of how VR impacts basic cognitive processes. Due to the artificial presentation of egocentric distance cues in virtual environments, a number of cues to depth in the optic array are impaired or placed in conflict with each other. Moreover, realistic haptic information is all but absent from current VR systems. The resulting conflicts could impact not only the execution of motor skills in VR but also raise deeper concerns about basic visual processing, and the extent to which virtual objects elicit neural and behavioural responses representative of real objects. In this brief review, we outline how the novel perceptual environment of VR may affect vision for action, by shifting users away from a dorsal mode of control. Fewer binocular cues to depth, conflicting depth information and limited haptic feedback may all impair the specialised, efficient, online control of action characteristic of the dorsal stream. A shift from dorsal to ventral control of action may create a fundamental disparity between virtual and real-world skills that has important consequences for how we understand perception and action in the virtual world.

Journal ArticleDOI
TL;DR: By examining existing force feedback gloves, the authors discuss the pros and cons of existing design solutions for the major subsystems, including sensing, actuation, control, transmission, and structure.
Abstract: Force feedback gloves have found many applications in fields such as teleoperation and virtual reality. In order to enhance the immersive feeling of interaction with remote or virtual environments, glove-like haptic devices are used, which enable users to touch and manipulate virtual objects in a more intuitive and direct way via the dexterous manipulation and sensitive perception capabilities of human hands. In this survey, we aim to identify the gaps between existing force feedback gloves and the desired ones that can provide robust and realistic sensation of the interaction with diverse virtual environments. By examining existing force feedback gloves, the pros and cons of existing design solutions to the major sub-systems including sensing, actuation, control, transmission and structure are discussed. Future research topics are put forward with design challenges being elaborated. Innovative design solutions are needed to enable the utility of wearable haptic gloves in the upcoming virtual reality era.

Proceedings ArticleDOI
23 Mar 2019
TL;DR: The results indicate that, within a certain range, desktop-scale VR hand redirection can go unnoticed by the user, but that this range is narrow; the findings are of value for the development of VR applications that aim to redirect users in an undetectable manner.
Abstract: Virtual reality (VR) interaction techniques like haptic retargeting offset the user's rendered virtual hand from the real hand location to redirect the user's physical hand movement. This paper explores the order of magnitude of hand redirection that can be applied without the user noticing it. By deriving lower-bound estimates of detection thresholds, we quantify the range of unnoticeable redirection for the three basic redirection dimensions, horizontal, vertical and gain-based hand warping. In a two-alternative forced choice (2AFC) experiment, we individually explore these three hand warping dimensions each in three different scenarios: a very conservative scenario without any distraction and two conservative but more realistic scenarios that distract users from the redirection. Additionally, we combine the results of all scenarios to derive robust recommendations for each redirection technique. Our results indicate that within a certain range, desktop-scale VR hand redirection can go unnoticed by the user, but that this range is narrow. The findings show that the virtual hand can be unnoticeably displaced horizontally or vertically by up to 4.5° in either direction, respectively. This allows for a range of ca. 9°, in which users cannot reliably detect applied redirection. For our gain-based hand redirection technique, we found that gain factors between g = 0.88 and g = 1.07 can go unnoticed, which corresponds to a user grasping up to 13.75% further or up to 6.18% less far than in virtual space. Our findings are of value for the development of VR applications that aim to redirect users in an undetectable manner, such as for haptic retargeting.
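
A minimal sketch of the three warping dimensions studied above: horizontal and vertical offsets rotate the virtual hand about the user's origin, and gain-based warping scales reach distance. The geometry below is illustrative; the paper's detection thresholds (about 4.5 degrees of angular offset and gains between 0.88 and 1.07) bound how much such warping can go unnoticed.

```python
# Hand redirection: angular offsets plus a reach gain (illustrative frame).
import numpy as np

def warp_hand(real_pos, yaw_deg=0.0, pitch_deg=0.0, gain=1.0):
    """real_pos: hand position relative to the user's shoulder/origin."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    # Horizontal warp: rotate about the vertical (y) axis.
    ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    # Vertical warp: rotate about the lateral (x) axis.
    rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    return gain * (ry @ rx @ np.asarray(real_pos))

print(warp_hand([0.0, 0.0, 0.4], yaw_deg=4.5, gain=1.07))
```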

Proceedings ArticleDOI
05 Jun 2019
TL;DR: A novel evaluation is presented that examines the influence of different types of haptic feedback on presence and performance in manual tasks in VR; the results indicate that, regarding presence, vibrotactile feedback outperforms force feedback, which in turn performs better than visual feedback only.
Abstract: Haptic feedback may support immersion and presence in virtual reality (VR) environments. The emerging market of consumer devices offers first devices that are expected to increase the degree of feeling actually present in a virtual environment. In this paper we introduce a novel evaluation that examines the influence of different types of haptic feedback on presence and performance in manual tasks in VR. We conducted a comprehensive user study involving 14 subjects, who performed throwing, stacking and object identification tasks in VR with visual (i.e., sensory substitution), vibrotactile or force feedback. We measured the degree of presence and task-related performance metrics. Our results indicate that, regarding presence, vibrotactile feedback outperforms force feedback, which performs better than visual feedback only. In addition, force feedback significantly lowered the execution time for the throwing and stacking tasks. In object identification tasks, vibrotactile feedback increased the detection rates compared to visual and force feedback, but also increased the time required for identification. Despite the inadequacies of the still-young consumer technology, there were nevertheless strong indications of connections between presence, task fulfillment and the type of haptic feedback.

Proceedings ArticleDOI
20 May 2019
TL;DR: This paper explores learning a robust model that maps tactile sensor signals to force via neural networks for the SynTouch BioTac sensor; it proposes a voxelized input feature layer for spatial signals and leverages information about the sensor surface to regularize the loss function.
Abstract: Current methods for estimating force from tactile sensor signals are either inaccurate analytic models or task-specific learned models. In this paper, we explore learning a robust model that maps tactile sensor signals to force. We specifically explore learning a mapping for the SynTouch BioTac sensor via neural networks. We propose a voxelized input feature layer for spatial signals and leverage information about the sensor surface to regularize the loss function. To learn a robust tactile force model that transfers across tasks, we generate ground truth data from three different sources: (1) the BioTac rigidly mounted to a force torque (FT) sensor, (2) a robot interacting with a ball rigidly attached to the same FT sensor, and (3) through force inference on a planar pushing task by formalizing the mechanics as a system of particles and optimizing over the object motion. A total of 140k samples were collected from the three sources. We achieve a median angular accuracy of 3.5 degrees in predicting force direction (66% improvement over the current state of the art) and a median magnitude accuracy of 0.06 N (93% improvement) on a test dataset. Additionally, we evaluate the learned force model in a force feedback grasp controller performing object lifting and gentle placement. Our results can be found on https://sites.google.com/view/tactile-force.
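
A hedged sketch of the voxelized input idea: each electrode reading is scattered into a coarse 3D grid according to the electrode's position on the sensor surface, yielding a spatial tensor a network can convolve over. The BioTac does have 19 impedance-sensing electrodes, but the coordinates below are random stand-ins, not the real layout.

```python
# Voxelizing per-electrode tactile readings into a spatial feature volume.
import numpy as np

rng = np.random.default_rng(0)
electrode_xyz = rng.uniform(-1, 1, size=(19, 3))  # 19 electrodes (stand-in)

def voxelize(values, positions, grid=4):
    """Scatter per-electrode values into a grid x grid x grid volume."""
    vol = np.zeros((grid, grid, grid), dtype=np.float32)
    idx = np.clip(((positions + 1) / 2 * grid).astype(int), 0, grid - 1)
    for (i, j, k), v in zip(idx, values):
        vol[i, j, k] += v
    return vol

readings = rng.normal(size=19)                    # one tactile frame
print(voxelize(readings, electrode_xyz).shape)    # -> (4, 4, 4)
```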