
Showing papers on "Haptic technology published in 2014"


Journal ArticleDOI
19 Nov 2014
TL;DR: This paper outlines the algorithm for controlling the volumetric distribution of the acoustic radiation force field in the form of a three-dimensional shape, and demonstrates how this field is created and how users interact with it.
Abstract: We present a method for creating three-dimensional haptic shapes in mid-air using focused ultrasound. This approach applies the principles of acoustic radiation force, whereby the non-linear effects of sound produce forces on the skin which are strong enough to generate tactile sensations. This mid-air haptic feedback eliminates the need for any attachment of actuators or contact with physical devices. The user perceives a discernible haptic shape when the corresponding acoustic interference pattern is generated above a precisely controlled two-dimensional phased array of ultrasound transducers. In this paper, we outline our algorithm for controlling the volumetric distribution of the acoustic radiation force field in the form of a three-dimensional shape. We demonstrate how we create this acoustic radiation force field and how we interact with it. We then describe our implementation of the system and provide evidence from both visual and technical evaluations of its ability to render different shapes. We conclude with a subjective user evaluation to examine users' performance for different shapes.
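A minimal, illustrative sketch (not the authors' algorithm) of the phased-array focusing principle this method builds on: each transducer's phase is advanced by its propagation delay to a chosen focal point so all waves arrive there in phase. The frequency, array pitch, and focal position below are assumptions.

```python
import numpy as np

# Illustrative only: phase delays that make a 2D transducer array focus at a
# single 3D point. The paper's algorithm goes further and shapes a volumetric
# radiation-force field; frequency, pitch and focal point here are assumptions.

SPEED_OF_SOUND = 343.0                      # m/s in air
FREQ = 40e3                                 # typical 40 kHz mid-air haptics transducers
K = 2 * np.pi * FREQ / SPEED_OF_SOUND       # wavenumber

def focal_phases(transducer_xy, focal_point):
    """Phase (rad) for each transducer so all waves arrive in phase at focal_point."""
    positions = np.column_stack([transducer_xy, np.zeros(len(transducer_xy))])
    distances = np.linalg.norm(positions - np.asarray(focal_point), axis=1)
    return (K * distances) % (2 * np.pi)    # advance each element by its path delay

# Example: 16 x 16 array with 10 mm pitch, focal point 20 cm above the centre.
coords = (np.arange(16) - 7.5) * 0.01
grid = np.array([(x, y) for x in coords for y in coords])
phases = focal_phases(grid, (0.0, 0.0, 0.20))
```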

251 citations


Book
30 Jan 2014
TL;DR: This book surveys the sense of touch from the organization of the somatosensory system and tactile perception, through higher-order factors such as awareness, memory, attention, body representation, and social touch, to real-world applications ranging from tactile technologies and warning signals to touch in the marketplace, museum, bedroom, and restaurant.
Abstract: TOUCH IN THE LABORATORY 1: INTRODUCING THE SENSE OF TOUCH 1. Introduction 2. The fundamentals of touch: The organization of the somatosensory system 3. Tactile perceptual organization TOUCH IN THE LABORATORY 2: THE HIGHER ORDER FACTORS THAT AFFECT TACTILE PERCEPTION 4. The awareness of touch 5. A memory for touch 6. Tactile attention 7. Caressing the skin: The social side of touch 8. Outside the boundaries of our bodies: The relationship between touch and the representation of the body in our mind TOUCH IN THE REAL WORLD 1: OVERCOMING THE LIMITATIONS IN TACTILE INFORMATION PROCESSING 9. Technologies of touch 10. Tactile and multisensory warning signals TOUCH IN THE REAL WORLD 2: ENHANCING THE AFFECTIVE DESIGN OF TOUCH 11. Touch in the marketplace: Selling by means of touch 12. Touch in the museum: Sculpture, art, aesthetics, and visual impairment 13. Touch in the bedroom: The role of touch in sexual behavior 14. Touch in the restaurant: A touch of gastronomy CONCLUSIONS 15. Touching the future References

201 citations


Patent
10 Feb 2014
TL;DR: A system is described that produces a dynamic haptic effect and generates a drive signal that includes a gesture signal and a real or virtual device sensor signal; the haptic effect is modified dynamically based on both signals.
Abstract: A system that produces a dynamic haptic effect and generates a drive signal that includes a gesture signal and a real or virtual device sensor signal. The haptic effect is modified dynamically based on both the gesture signal and the real or virtual device sensor signal such as from an accelerometer or gyroscope, or by a signal created from processing data such as still images, video or sound. The haptic effect may optionally be modified dynamically by using the gesture signal and the real or virtual device sensor signal and a physical model, or may optionally be applied concurrently to multiple devices which are connected via a communication link. The haptic effect may optionally be encoded into a data file on a first device. The data file is then communicated to a second device and the haptic effect is read from the data file and applied to the second device.

195 citations


Patent
03 Sep 2014
TL;DR: An electronic device coupleable to a display screen includes a camera system that acquires optical data of a user comfortably gesturing in a user-customizable interaction zone having a z0 plane, while controlling operation of the device.
Abstract: An electronic device coupleable to a display screen includes a camera system that acquires optical data of a user comfortably gesturing in a user-customizable interaction zone having a z0 plane, while controlling operation of the device. Subtle gestures include hand movements commenced in a dynamically resizable and relocatable interaction zone. Preferably (x,y,z) locations in the interaction zone are mapped to two-dimensional display screen locations. Detected user hand movements can signal the device that an interaction is occurring in gesture mode. Device response includes presenting GUI on the display screen, creating user feedback including haptic feedback. User three-dimensional interaction can manipulate displayed virtual objects, including releasing such objects. User hand gesture trajectory clues enable the device to anticipate probable user intent and to appropriately update display screen renderings.
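A hypothetical sketch of the mapping step described above, where hand positions inside a relocatable interaction zone are mapped to two-dimensional display coordinates and crossing the zone's z0 plane is treated as a "touch". The zone geometry, screen resolution, and depth threshold are illustrative assumptions.

```python
# Hypothetical sketch of the mapping step described above; zone geometry,
# screen resolution and the touch-depth threshold are illustrative assumptions.

def map_hand_to_screen(hand_xyz, zone_origin, zone_size,
                       screen_px=(1920, 1080), touch_depth=0.02):
    """Map a hand position (metres) in the interaction zone to screen pixels.

    Returns ((px, py), touching), where 'touching' means the hand has moved
    past the zone's z0 plane by more than touch_depth."""
    x, y, z = (hand_xyz[i] - zone_origin[i] for i in range(3))
    px = int(max(0.0, min(1.0, x / zone_size[0])) * (screen_px[0] - 1))
    py = int(max(0.0, min(1.0, y / zone_size[1])) * (screen_px[1] - 1))
    return (px, py), z <= -touch_depth

# Example: a 40 cm x 25 cm zone whose near face (z0 plane) is at z = 0.
cursor, touching = map_hand_to_screen((0.10, 0.05, -0.03),
                                      zone_origin=(-0.20, -0.125, 0.0),
                                      zone_size=(0.40, 0.25))
```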

152 citations


Patent
08 Apr 2014
TL;DR: Methods are described for providing real-time feedback to an end user of a mobile device as they interact with or manipulate one or more virtual objects within an augmented reality environment.
Abstract: Methods for providing real-time feedback to an end user of a mobile device as they are interacting with or manipulating one or more virtual objects within an augmented reality environment are described. The real-time feedback may comprise visual feedback, audio feedback, and/or haptic feedback. In some embodiments, a mobile device, such as a head-mounted display device (HMD), may determine an object classification associated with a virtual object within an augmented reality environment, detect an object manipulation gesture performed by an end user of the mobile device, detect an interaction with the virtual object based on the object manipulation gesture, determine a magnitude of a virtual force associated with the interaction, and provide real-time feedback to the end user of the mobile device based on the interaction, the magnitude of the virtual force applied to the virtual object, and the object classification associated with the virtual object.
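A hypothetical sketch of the feedback-selection flow the patent describes, in which feedback is chosen from the object's classification and the magnitude of the virtual force applied to it. The classes, gains, and cue names below are made up for illustration.

```python
# Hypothetical sketch: feedback derived from the object classification and the
# magnitude of the virtual force. Classes, gains and cue names are made up.

def select_feedback(object_class: str, force_magnitude: float) -> dict:
    """Return real-time feedback cues for a manipulated virtual object."""
    # Stiffer "materials" respond with stronger haptic pulses for a given force.
    gain = {"rigid": 1.0, "deformable": 0.6, "fragile": 0.3}.get(object_class, 0.5)
    intensity = min(1.0, gain * force_magnitude)
    return {
        "haptic_intensity": intensity,                 # vibration amplitude, 0..1
        "audio_cue": "crack" if object_class == "fragile" and force_magnitude > 0.8 else "tap",
        "visual_highlight": intensity > 0.1,           # e.g. glow around the object
    }

feedback = select_feedback("fragile", force_magnitude=1.2)
```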

150 citations


Journal ArticleDOI
TL;DR: The results show that the best repartition of control in terms of cooperation between human and machine can be identified through an analysis of the steering wheel reversal rate, the steering effort and the mean lateral position of the vehicle.
Abstract: This study investigated human-machine cooperation when driving with different degrees of a shared control system. By means of a direct intervention on the steering wheel, shared control systems partially correct the vehicle's trajectory and, at the same time, provide continuous haptic guidance to the driver. A crucial point is to determine the optimal level of steering assistance for effective cooperation between the two agents. Five system settings were compared with a condition in which no assistance was present. In addition, road visibility was manipulated by means of additional fog or self-controlled visual occlusions. Several performance indicators and subjective assessments were analyzed. The results show that the best repartition of control in terms of cooperation between human and machine can be identified through an analysis of the steering wheel reversal rate, the steering effort and the mean lateral position of the vehicle. The best cooperation was achieved with systems of relatively low-level haptic authority, although more intervention may be preferable in poor visibility conditions. Increasing haptic authority did not yield higher benefits in terms of steering behavior, visual demand or subjective feeling.
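One of the study's key indicators is the steering wheel reversal rate. A hedged sketch of how such a metric is commonly computed follows; the gap threshold, units, and counting convention are assumptions rather than the study's exact parameters.

```python
import numpy as np

# Hedged sketch of the steering wheel reversal rate: the number of direction
# changes of the steering angle larger than a small "gap" threshold, per
# minute of driving. Threshold and units are assumptions.

def reversal_rate(steering_angle_deg, sample_rate_hz, gap_deg=2.0):
    """Steering reversals per minute for a sampled steering-angle trace."""
    angle = np.asarray(steering_angle_deg, dtype=float)
    reversals, direction, extreme = 0, 0, angle[0]
    for a in angle[1:]:
        if direction >= 0 and a <= extreme - gap_deg:        # was rising, now falls
            if direction != 0:
                reversals += 1
            direction, extreme = -1, a
        elif direction <= 0 and a >= extreme + gap_deg:      # was falling, now rises
            if direction != 0:
                reversals += 1
            direction, extreme = +1, a
        elif (direction >= 0 and a > extreme) or (direction < 0 and a < extreme):
            extreme = a                                       # still moving the same way
    return reversals / (len(angle) / sample_rate_hz / 60.0)
```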

136 citations


Journal ArticleDOI
TL;DR: Results suggest that incorporating the aforementioned haptic feedback strategies, together with user-driven compliance of the hand, facilitates execution of safe and stable grasps, and that a low-cost, robust hand employing hardware-based synergies might be a good alternative to traditional myoelectric prostheses.
Abstract: This paper proposes a teleimpedance controller with tactile feedback for more intuitive control of the Pisa/IIT SoftHand. With the aim to realize a robust, efficient and low-cost hand prosthesis design, the SoftHand is developed based on the motor control principle of synergies, through which the immense complexity of the hand is simplified into distinct motor patterns. Due to the built-in flexibility of the hand joints, as the SoftHand grasps, it follows a synergistic path while allowing grasping of objects of various shapes using only a single motor. The DC motor of the hand incorporates a novel teleimpedance control in which the user’s postural and stiffness synergy references are tracked in real-time. In addition, for intuitive control of the hand, two tactile interfaces are developed. The first interface (mechanotactile) exploits a disturbance observer which estimates the interaction forces in contact with the grasped object. Estimated interaction forces are then converted and applied to the upper arm of the user via a custom made pressure cuff. The second interface employs vibrotactile feedback based on surface irregularities and acceleration signals and is used to provide the user with information about the surface properties of the object as well as detection of object slippage while grasping. Grasp robustness and intuitiveness of hand control were evaluated in two sets of experiments. Results suggest that incorporating the aforementioned haptic feedback strategies, together with user-driven compliance of the hand, facilitate execution of safe and stable grasps, while suggesting that a low-cost, robust hand employing hardware-based synergies might be a good alternative to traditional myoelectric prostheses.

118 citations


Proceedings ArticleDOI
26 Apr 2014
TL;DR: Analysis of remote handshaking, in which a robot hand attached just under a videoconferencing terminal's display moved according to the opening and closing motion of a conversation partner's hand, revealed that the feeling of being close to the partner can be enhanced by mutual touch in which the partner's action needs to occur but should be invisible.
Abstract: Since past studies on haptic and visual communication have tended to be isolated from each other, it has remained unclear whether a touch channel can still enrich mediated communication where video and audio channels are already available. To clarify this, we analyzed remote handshaking in which a robot hand that was attached just under a videoconferencing terminal's display moved according to the opening and closing motion of a conversation partner's hand. Combining touch and video channels raises a question as to whether the partner's action of touching a haptic device should be visible to the user. If it can be invisible, the action may be unnecessary, and a unilaterally controlled device may be enough to establish an effective touch channel. Our analysis revealed that the feeling of being close to the partner can be enhanced by mutual touch in which the partner's action needs to occur but should be invisible.

112 citations


Proceedings ArticleDOI
01 May 2014
TL;DR: A framework for combining vision and haptic information in human-robot joint actions is proposed; it consists of a hybrid controller that uses both visual servoing and impedance controllers and allows for a more collaborative setup.
Abstract: We propose a framework for combining vision and haptic information in human-robot joint actions. It consists of a hybrid controller that uses both visual servoing and impedance controllers. This can be applied to tasks that cannot be done with vision or haptic information alone. In this framework, the state of the task can be obtained from visual information while haptic information is crucial for safe physical interaction with the human partner. The approach is validated on the task of jointly carrying a flat surface (e.g. a table) and then preventing an object (e.g. a ball) on top from falling off. The results show that this task can be successfully achieved. Furthermore, the framework presented allows for a more collaborative setup, by imparting task knowledge to the robot as opposed to a passive follower.
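A minimal sketch of the two ingredients such a framework combines: an impedance law that keeps physical interaction with the human compliant, plus a proportional visual term that corrects the visually observed task state (for example, the ball's offset on the carried surface). The gains and the way the visual error shifts the impedance set-point are illustrative assumptions, not the authors' controller.

```python
import numpy as np

# Sketch of a hybrid controller: an impedance law for compliant physical
# interaction plus a proportional visual term correcting the task state
# (the ball's offset from the table centre). All gains are made up.

M, D, K = 2.0, 40.0, 200.0        # virtual mass, damping, stiffness
K_VISUAL = 0.5                    # gain on the visually measured task error

def end_effector_accel(x, x_des, v, f_human, ball_offset):
    """Desired Cartesian acceleration of the robot's grasp point."""
    x_ref = x_des + K_VISUAL * ball_offset            # nudge set-point to re-centre the ball
    return (K * (x_ref - x) - D * v + f_human) / M    # classic impedance law

# One (assumed 1 kHz) control step with made-up values:
a = end_effector_accel(x=np.zeros(3), x_des=np.zeros(3), v=np.zeros(3),
                       f_human=np.array([1.0, 0.0, 0.0]),
                       ball_offset=np.array([0.0, 0.05, 0.0]))
```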

103 citations


Proceedings ArticleDOI
20 Mar 2014
TL;DR: The Penn Haptic Texture Toolkit (HaTT) is introduced, along with a method for resampling the texture models so they can be rendered at a sampling rate other than the 10 kHz used when recording data, increasing the adaptability and utility of HaTT.
Abstract: This paper introduces the Penn Haptic Texture Toolkit (HaTT), a publicly available repository of haptic texture models for use by the research community. HaTT includes 100 haptic texture and friction models, the recorded data from which the models were made, images of the textures, and the code and methods necessary to render these textures using an impedance-type haptic interface such as a SensAble Phantom Omni. This paper reviews our previously developed methods for modeling haptic virtual textures, describes our technique for modeling Coulomb friction between a tooltip and a surface, discusses the adaptation of our rendering methods for display using an impedance-type haptic device, and provides an overview of the information included in the toolkit. Each texture and friction model was based on a ten-second recording of the force, speed, and high-frequency acceleration experienced by a handheld tool moved by an experimenter against the surface in a natural manner. We modeled each texture's recorded acceleration signal as a piecewise autoregressive (AR) process and stored the individual AR models in a Delaunay triangulation as a function of the force and speed used when recording the data. To increase the adaptability and utility of HaTT, we developed a method for resampling the texture models so they can be rendered at a sampling rate other than the 10 kHz used when recording data. Measurements of the user's instantaneous normal force and tangential speed are used to synthesize texture vibrations in real time. These vibrations are transformed into a texture force vector that is added to the friction and normal force vectors for display to the user.
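A hedged sketch of the rendering idea: an autoregressive (AR) model of the texture's acceleration, chosen for the current normal force and tool speed, is excited by noise to synthesize vibrations at the haptic update rate. The real toolkit interpolates AR models stored in a Delaunay triangulation over (force, speed); here a single fixed model and a simple excitation scaling stand in for that lookup.

```python
import numpy as np

# Sketch: a single AR model excited by noise stands in for HaTT's lookup of
# AR models stored in a Delaunay triangulation over (force, speed).

class ARTextureSynth:
    def __init__(self, ar_coeffs, noise_std):
        self.a = np.asarray(ar_coeffs, dtype=float)   # AR coefficients a_1..a_p
        self.noise_std = noise_std                    # excitation level from the model
        self.history = np.zeros(len(self.a))          # last p output samples

    def step(self, force_n, speed_m_s):
        """One sample of synthetic texture vibration."""
        # Scaling the excitation with force and speed is an assumption made
        # for illustration, not the published scaling law.
        excitation = np.random.randn() * self.noise_std * force_n * speed_m_s
        sample = self.a @ self.history + excitation
        self.history = np.roll(self.history, 1)
        self.history[0] = sample
        return sample

synth = ARTextureSynth(ar_coeffs=[0.6, -0.2, 0.1], noise_std=0.05)
vibration = [synth.step(force_n=1.5, speed_m_s=0.12) for _ in range(1000)]
```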

101 citations



Journal ArticleDOI
29 Sep 2014
TL;DR: This article generates a foundational library of usable haptic vocabulary and does so with a methodology that allows ongoing additions to the library in a principled and effective way.
Abstract: Despite a long history of use in communication, haptic feedback is a relatively new addition to the toolbox of special effects. Unlike artists who use sound or vision, haptic designers cannot simply access libraries of effects that map cleanly to media content, and they lack even guiding principles for creating such effects. In this article, we make progress toward both capabilities: we generate a foundational library of usable haptic vocabulary and do so with a methodology that allows ongoing additions to the library in a principled and effective way. We define a feel effect as an explicit pairing between a meaningful linguistic phrase and a rendered haptic pattern. Our initial experiment demonstrates that users who have only their intrinsic language capacities, and no haptic expertise, can generate a core set of feel effects that lend themselves via semantic inference to the design of additional effects. The resulting collection of more than 40 effects covers a wide range of situations (including precipitation, animal locomotion, striking, and pulsating events) and is empirically shown to produce the named sensation for the majority of our test users in a second experiment. Our experiments demonstrate a unique and systematic approach to designing a vocabulary of haptic sensations that are related in both the semantic and parametric spaces.

Journal ArticleDOI
01 Apr 2014
TL;DR: A novel approach aims at improving the realism of the haptic rendering, while preserving its stability, by modulating cutaneous force to compensate for a lack of kinesthesia.
Abstract: A study on the role of cutaneous and kinesthetic force feedback in teleoperation is presented. Cutaneous cues provide less transparency than kinesthetic force, but they do not affect the stability of the teleoperation system. On the other hand, kinesthesia provides a compelling illusion of telepresence but affects the stability of the haptic loop. However, when employing common grounded haptic interfaces, it is not possible to independently control the cutaneous and kinesthetic components of the interaction. For this reason, many control techniques ensure a stable interaction by scaling down both kinesthetic and cutaneous force feedback, even though acting on the cutaneous channel is not necessary. We discuss here the feasibility of a novel approach. It aims at improving the realism of the haptic rendering, while preserving its stability, by modulating cutaneous force to compensate for a lack of kinesthesia. We carried out two teleoperation experiments, evaluating (1) the role of cutaneous stimuli when reducing kinesthesia and (2) the extent to which an overactuation of the cutaneous channel can fully compensate for a lack of kinesthetic force feedback. Results showed that, to some extent, it is possible to compensate for a lack of kinesthesia with the aforementioned technique, without significant performance degradation. Moreover, users showed a high comfort level in using the proposed system.
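A minimal sketch of the compensation idea under stated assumptions: when the kinesthetic channel is scaled down for stability, the cutaneous channel is boosted to make up part of the deficit. The linear rule and gains are illustrative, not the authors' controller.

```python
# Sketch: linear split of the environment force between a scaled-down
# kinesthetic channel and an over-actuated cutaneous channel (assumed rule).

def split_feedback(f_env, kinesthetic_scale, cutaneous_gain=1.0):
    """Return (kinesthetic, cutaneous) force commands for one axis (N)."""
    f_kin = kinesthetic_scale * f_env              # scaled down to keep the loop stable
    f_cut = cutaneous_gain * (f_env - f_kin)       # over-actuate the skin to compensate
    return f_kin, f_cut

# Example: render a 3 N contact with the kinesthetic channel halved.
f_kin, f_cut = split_feedback(3.0, kinesthetic_scale=0.5)
```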

Journal ArticleDOI
TL;DR: A surgical training simulator with virtual and haptic force feedback for maxillofacial surgery was developed, and its effect on the learning of bone-sawing skills was validated through empirical evaluation; the results indicate that the simulator produces a bone-sawing learning effect and could provide a training alternative for novices.

Proceedings ArticleDOI
26 Apr 2014
TL;DR: Two experiments are presented on two fundamental aspects of ultrasonic haptic perception, localisation of a static point and the perception of motion, providing insight into 1) the spatial resolution of an ultrasonic interface and 2) what forms of feedback give the most convincing illusion of movement.
Abstract: Ultrasonic haptic feedback is a promising means of providing tactile sensations in mid-air without encumbering the user with an actuator. However, controlled and rigorous HCI research is needed to understand the basic characteristics of perception of this new feedback medium, and so how best to utilise ultrasonic haptics in an interface. This paper describes two experiments conducted into two fundamental aspects of ultrasonic haptic perception: 1) localisation of a static point and 2) the perception of motion. Understanding these would provide insight into 1) the spatial resolution of an ultrasonic interface and 2) what forms of feedback give the most convincing illusion of movement. Results show an average localisation error of 8.5 mm, with higher error along the longitudinal axis. Convincing sensations of motion were produced when travelling longer distances, using longer stimulus durations and stimulating multiple points along the trajectory. Guidelines for feedback design are given.

Patent
25 Apr 2014
TL;DR: A user interface device includes a flexible layer comprising a touch surface configured to receive a touch by a user, a plurality of haptic cells covered by the flexible layer, each haptic cell comprising a haptic output device, a sensor configured to sense an amount and/or rate of deformation of the flexible layer when a user touches the touch surface, and a processor configured to receive an output signal from the sensor, generate a haptic control signal based on it, and output the haptic control signal to at least one haptic output device of the plurality of haptic cells.
Abstract: A user interface device includes a flexible layer comprising a touch surface configured to receive a touch by a user, a plurality of haptic cells covered by the flexible layer, each haptic cell comprising a haptic output device, a sensor configured to sense an amount and/or rate of deformation of the flexible layer when a user touches the touch surface, and a processor configured to receive an output signal from the sensor, generate a haptic control signal based on the output signal from the sensor, and output the haptic control signal to at least one haptic output device of the plurality of haptic cells to cause the haptic output device to deform an associated haptic cell in response to the sensed deformation of the flexible layer.

Journal ArticleDOI
TL;DR: A novel approach to force feedback in robot-assisted surgery, which substitutes haptic stimuli composed of a kinesthetic component and a skin deformation with cutaneous stimuli only, shows improved performance in terms of completion time, force exerted, and total displacement of the rings with respect to two popular sensory substitution techniques.
Abstract: This study presents a novel approach to force feedback in robot-assisted surgery. It consists of substituting haptic stimuli, composed of a kinesthetic component and a skin deformation, with cutaneous stimuli only. The force generated can then be thought as a subtraction between the complete haptic interaction, cutaneous, and kinesthetic, and the kinesthetic part of it. For this reason, we refer to this approach as sensory subtraction. Sensory subtraction aims at outperforming other nonkinesthetic feedback techniques in teleoperation (e.g., sensory substitution) while guaranteeing the stability and safety of the system. We tested the proposed approach in a challenging 7-DoF bimanual teleoperation task, similar to the Pegboard experiment of the da Vinci Skills Simulator. Sensory subtraction showed improved performance in terms of completion time, force exerted, and total displacement of the rings with respect to two popular sensory substitution techniques. Moreover, it guaranteed a stable interaction in the presence of a communication delay in the haptic loop.

Proceedings ArticleDOI
26 Apr 2014
TL;DR: Haptic turk is presented, a different approach to motion platforms that is light and mobile; the key idea is to replace motors and mechanical components with humans.
Abstract: Motion platforms are used to increase the realism of virtual interaction. Unfortunately, their size and weight is proportional to the size of what they actuate. We present haptic turk, a different approach to motion platforms that is light and mobile. The key idea is to replace motors and mechanical components with humans. All haptic turk setups consist of a player who is supported by one or more turkers. The player enjoys an interactive experience, such as a flight simulation. The motion in the player's experience is generated by the turkers who manually lift, tilt, and push the player's limbs or torso. To get the timing and force right, timed motion instructions in a format familiar from rhythm games are displayed on turkers' mobile devices, which they attach to the player's body. We demonstrate a range of installations based on mobile phones, projectors, and head-mounted displays. In our user study, participants rated not only the experience as player as enjoyable (6.1/7), but also the experience as a turker (4.4/7). The approach of leveraging humans allows us to deploy our approach anytime anywhere, as we demonstrate by experimentally deploying at an art festival in the Nevada desert.

Proceedings ArticleDOI
01 Sep 2014
TL;DR: This work designs actions involving use of tools such as forks and knives that obtain haptic data containing information about the physical properties of the object, and presents a method to compactly represent the robot's beliefs about the object's properties using a generative model.
Abstract: Manipulation of complex deformable semi-solids such as food objects is an important skill for personal robots to have. In this work, our goal is to model and learn the physical properties of such objects. We design actions involving use of tools such as forks and knives that obtain haptic data containing information about the physical properties of the object. We then design appropriate features and use supervised learning to map these features to certain physical properties (hardness, plasticity, elasticity, tensile strength, brittleness, adhesiveness). Additionally, we present a method to compactly represent the robot's beliefs about the object's properties using a generative model, which we use to plan appropriate manipulation actions. We extensively evaluate our approach on a dataset including haptic data from 12 categories of food (including categories not seen before by the robot) obtained in 941 experiments. Our robot prepared a salad during 60 sequential robotic experiments where it made a mistake in only 4 instances.
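A hedged sketch of the supervised-learning step: summary features of the haptic (force) signals recorded while probing a food item are mapped to physical-property scores. The features and the choice of regressor are assumptions for illustration; the property list follows the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Sketch: hand-designed features of probing-force traces mapped to physical
# properties with an off-the-shelf regressor (features/regressor assumed).

PROPERTIES = ["hardness", "plasticity", "elasticity",
              "tensile_strength", "brittleness", "adhesiveness"]

def haptic_features(force_trace):
    """A few simple summary features of a probing-force time series."""
    f = np.asarray(force_trace, dtype=float)
    return [f.max(), f.mean(), f.std(), np.abs(np.diff(f)).max()]

# X: one feature row per probing action; y: property scores for that sample.
X = np.array([haptic_features(np.random.rand(200)) for _ in range(50)])
y = np.random.rand(50, len(PROPERTIES))                 # placeholder labels
model = RandomForestRegressor(n_estimators=100).fit(X, y)
predicted_properties = model.predict(X[:1])             # estimates for a new object
```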

Proceedings ArticleDOI
20 Mar 2014
TL;DR: Modeling and experimental results on both ultrasonic and electrostatic surface haptic devices are presented, characterizing their dynamics and their bandwidth for rendering haptic effects.
Abstract: Surface haptic devices modulate the friction between the surface and the fingertip, and can thus be used to create a tactile perception of surface features or textures. We present modeling and experimental results on both ultrasonic and electrostatic surface haptic devices, characterizing their dynamics and their bandwidth for rendering haptic effects.
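An illustrative sketch of friction-modulation rendering on such devices: an ultrasonic display lowers the effective friction coefficient as its vibration amplitude rises, while an electrostatic display adds an attractive normal force (and hence friction) as its voltage rises; modulating either with finger position yields a virtual grating. All parameter values are assumptions.

```python
import numpy as np

# Sketch: position-modulated friction for a virtual grating on ultrasonic and
# electrostatic surface displays. Baseline friction and depths are assumed.

MU_BASELINE = 0.8                                   # assumed bare-surface friction coefficient

def ultrasonic_friction(x_mm, period_mm=2.0, depth=0.5):
    """Position-dependent friction coefficient under ultrasonic friction reduction."""
    reduction = depth * 0.5 * (1 + np.sin(2 * np.pi * x_mm / period_mm))
    return MU_BASELINE * (1 - reduction)

def electrostatic_friction_force(normal_force_n, x_mm, period_mm=2.0, f_attract_max=0.3):
    """Tangential friction force with a position-modulated electrostatic load."""
    f_e = f_attract_max * 0.5 * (1 + np.sin(2 * np.pi * x_mm / period_mm))
    return MU_BASELINE * (normal_force_n + f_e)
```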

Patent
12 Mar 2014
TL;DR: In this paper, a system and methods for parameter modification of one or more haptic effects are disclosed. But, in this paper, we focus on a single haptic effect.
Abstract: Systems and methods for parameter modification of one or more haptic effects are disclosed. In one embodiment an electronic device determines a haptic effect. The electronic device can receive an input signal indicating an environmental condition. The input signal may be generated by an environmental sensor. The environmental condition may be a temperature, vibration, noise, movement, trait of a user such as a user's weight, gender, age, height, another suitable trait of a user, another suitable environmental condition, or a combination thereof. The electronic device may modify the haptic effect based at least in part on the input signal. The electronic device can generate a haptic output signal based at least in part on the modified haptic effect. The haptic output signal may be configured to cause a haptic output device to output the modified haptic effect. The electronic device may output the haptic output signal.

Patent
08 Sep 2014
TL;DR: In this article, a system and methods for visual processing of spectrograms to generate haptic effects are disclosed, and a haptic effect can be determined based at least in part on the spectrogram.
Abstract: Systems and methods for visual processing of spectrograms to generate haptic effects are disclosed. In one embodiment, a signal comprising at least an audio signal is received (410). One or more spectrograms may be generated (420) based at least in part on the received signal. One or more haptic effects may be determined (430) based at least in part on the spectrogram. For example, a generated spectrogram may be a two-dimensional image and this image can be analyzed to determine one or more haptic effects. Once a haptic effect has been determined, one or more haptic output signals can be generated (440). A generated haptic output signal may be output (450) to one or more haptic output devices.
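A hedged sketch of such a pipeline, assuming a SciPy spectrogram and a simple rule that derives a haptic drive level from the low-frequency audio energy in each time slice; the band limits and normalisation are illustrative and are not taken from the patent.

```python
import numpy as np
from scipy.signal import spectrogram

# Sketch: haptic drive level per spectrogram frame from low-frequency energy.
# Band limits and normalisation are assumptions.

def haptic_envelope(audio, sample_rate, band_hz=(20, 250)):
    """Per-frame haptic drive level (0..1) from low-frequency audio energy."""
    freqs, times, sxx = spectrogram(audio, fs=sample_rate, nperseg=1024)
    band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    energy = sxx[band].sum(axis=0)
    return times, energy / (energy.max() + 1e-12)

# Example: a 100 Hz tone gated on and off at 2 Hz produces a pulsing drive level.
fs = 44100
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 100 * t) * (np.sin(2 * np.pi * 2 * t) > 0)
times, drive = haptic_envelope(audio, fs)
```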

Proceedings ArticleDOI
07 Mar 2014
TL;DR: The design space for haptic feedback is systematically investigated and differences between strengths of EMS and vibrotactile feedback are experimentally explored to provide a basis for the design of haptic feedback appropriate for the given type of interaction and the material.
Abstract: Free-hand interaction with large displays is getting more common, for example in public settings and exertion games. Adding haptic feedback offers the potential for more realistic and immersive experiences. While vibrotactile feedback is well known, electrical muscle stimulation (EMS) has not yet been explored in free-hand interaction with large displays. EMS offers a wide range of different strengths and qualities of haptic feedback. In this paper we first systematically investigate the design space for haptic feedback. Second, we experimentally explore differences between strengths of EMS and vibrotactile feedback. Third, based on the results, we evaluate EMS and vibrotactile feedback with regard to different virtual objects (soft, hard) and interaction with different gestures (touch, grasp, punch) in front of a large display. The results provide a basis for the design of haptic feedback that is appropriate for the given type of interaction and the material.

Proceedings ArticleDOI
29 Mar 2014
TL;DR: 3D direct selection of objects in the virtual 3D space as they might occur for 3D menus or floating objects in space is analyzed and some guidelines for the design of direct selection techniques in IVEs are suggested.
Abstract: The design of 3D user interfaces (3DUIs) for immersive head-mounted display (HMD) environments is an inherently difficult task. The fact that usually haptic feedback is absent and that visual body feedback is missing, hinders an efficient direct interaction with virtual objects. Moreover, the perceptual conflicts, such as double vision and space misperception, as well as the well-known vergence-accommodation mismatch further complicate the interaction, in particular with virtual objects floating in the virtual environment (VE). However, the potential benefits of direct and natural interaction offered by immersive virtual environments (IVEs) encourage the research in the field to create more efficient selection methods. Utilizing a Fitts' Law experiment, we analyzed the 3D direct selection of objects in the virtual 3D space as they might occur for 3D menus or floating objects in space. We examined the direct interaction space in front of the user and divided it into a set of interaction regions for which we compared different levels of selection difficulty. Our results indicate that the selection errors are highest along the view axis, less along the motion axis and marginal along the orthogonal plane. Based on these results we suggest some guidelines for the design of direct selection techniques in IVEs.
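For reference, a brief sketch of the Fitts' Law analysis such an experiment rests on: each target has an index of difficulty (ID), and mean movement time is modelled as a linear function of ID (Shannon formulation). The condition values and timings below are made up.

```python
import numpy as np

# Fitts' Law sketch: ID = log2(D/W + 1), MT = a + b * ID. Numbers are made up.

def index_of_difficulty(distance, width):
    """ID in bits for a target of a given width at a given distance."""
    return np.log2(distance / width + 1)

# Fit MT = a + b * ID from hypothetical per-condition means (distances/widths in m).
conditions = [(0.20, 0.04), (0.30, 0.04), (0.30, 0.02), (0.45, 0.02)]
ids = np.array([index_of_difficulty(d, w) for d, w in conditions])
movement_times = np.array([0.61, 0.72, 0.86, 0.98])     # seconds, illustrative
b, a = np.polyfit(ids, movement_times, 1)               # slope and intercept
throughput = np.mean(ids / movement_times)              # bits per second
```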

Book ChapterDOI
01 Jan 2014
TL;DR: A new robotic system designed to assist sonographers in performing ultrasound examinations by addressing common limitations of sonography, namely the physical fatigue that can result from performing the examination, and the difficulty in interpreting ultrasound data is presented.
Abstract: This paper presents a new robotic system designed to assist sonographers in performing ultrasound examinations by addressing common limitations of sonography, namely the physical fatigue that can result from performing the examination, and the difficulty in interpreting ultrasound data. The proposed system comprises a robot manipulator that operates the transducer, and an integrated user interface that offers 3D visualization and a haptic device as the main user interaction tool. The sonographer controls the slave robot movements either haptically (collaborative tele-operation mode), or by prior programming of a desired path (semi-automatic mode). A force controller maintains a constant contact force between the transducer and the patient’s skin while the robot drives the transducer to the desired anatomical locations. The ultrasound imaging system is connected to a 3D visualization application which registers in real time the streaming 2D images generated by the transducer and displays the resulting data as 3D volumetric representation which can be further examined off-line.
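A hedged sketch of the constant contact-force behaviour described above, implemented here as a simple PI position adjustment along the probe axis; the set-point, gains, loop rate, and safety clamp are assumptions rather than the system's actual values.

```python
# Sketch: PI regulation of transducer-skin contact force via small position
# increments along the probe axis. All values are assumptions.

F_TARGET_N = 5.0            # desired transducer-skin contact force
KP, KI = 0.002, 0.0005      # m per N, and m per (N*s)
DT = 0.01                   # 100 Hz control loop

class ContactForceController:
    def __init__(self):
        self.integral = 0.0

    def step(self, measured_force_n):
        """Return the position increment (m) to command along the probe axis."""
        error = F_TARGET_N - measured_force_n
        self.integral += error * DT
        dz = KP * error + KI * self.integral
        return max(-0.001, min(0.001, dz))    # clamp to 1 mm per cycle for safety
```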

Patent
21 Mar 2014
TL;DR: A flexible device includes a bendable-foldable display with bendable flaps connected by a hinge; its haptic system interprets sensor input to determine deformation characteristics of the display and generates haptic feedback based on them.
Abstract: A flexible device includes a bendable-foldable display that has bendable flaps connected by a hinge. The display has sensors for detecting a folding characteristic between the at least two flaps and for detecting a bending characteristic in at least one flap. The display has a haptic system with haptic output devices, where the haptic system receives input from the sensors indicating deformation of the bendable-foldable display device. A flexible device also includes bendable, foldable, or rollable displays that have sensors and actuators to augment user interaction with the device. Based on one or more measurements provided by the input, the haptic system interprets the input to determine deformation characteristics of the bendable-foldable display device. The haptic system generates haptic feedback based on the deformation characteristics.

Patent
31 Jan 2014
TL;DR: A system for managing a plurality of wearable haptic devices on a user receives information to be conveyed using haptic effects, determines the intent of the information, and maps the information as a haptic effect to one or more of the wearable devices based at least on their determined locations on the user and their haptic capabilities.
Abstract: A system for managing a plurality of wearable devices on a user receives information to be conveyed using haptic effects and determines an intent of the information. The system then determines, for each of the plurality of wearable haptic devices, a location of the wearable haptic device on the user and a haptic capability. The system then maps the information as a haptic effect to one or more of the wearable haptic devices based at least on the determined locations on the user and the haptic capabilities.

Journal ArticleDOI
TL;DR: Sophia-3 (string-operated planar haptic interface for arm rehabilitation) is a planar cable-driven device with a tilting working plane; its moving pulley-block, an application of the adaptive cable-driven design paradigm recently introduced by the authors, allows the robot to achieve excellent force capabilities despite the low number of cables.
Abstract: Sophia-3 (string-operated planar haptic interface for arm rehabilitation) is a planar cable-driven device with a tilting working plane. It represents the first application of the adaptive cable-driven design paradigm recently introduced by the authors, featuring a moving pulley-block that allows the robot to achieve excellent force capabilities, despite the low number of cables. This study presents the design, kinematics, and control of the device and results of experimental validation on healthy subjects.

Proceedings ArticleDOI
01 May 2014
TL;DR: The design of an FBG-based, multi-function instrument that is capable of measuring mN-level forces at the instrument tip located inside the eye, and also the sclera contact location on the instrument shaft and the corresponding contact force is presented.
Abstract: Robotic systems have the potential to assist vitreoretinal surgeons in extremely difficult surgical tasks inside the human eye. In addition to reducing hand tremor and improving tool positioning, a robotic assistant can provide assistive motion guidance using virtual fixtures, and incorporate real-time feedback from intraocular force sensing ophthalmic instruments to present tissue manipulation forces, that are otherwise physically imperceptible to the surgeon. This paper presents the design of an FBG-based, multi-function instrument that is capable of measuring mN-level forces at the instrument tip located inside the eye, and also the sclera contact location on the instrument shaft and the corresponding contact force. The given information is used to augment cooperatively controlled robot behavior with variable admittance control. This effectively creates an adaptive remote center-of-motion (RCM) constraint to minimize eye motion, but also allows the translation of the RCM location if the instrument is not near the retina. In addition, it provides force scaling for sclera force feedback. The calibration and validation of the multi-function force sensing instrument are presented, along with demonstration and performance assessment of the variable admittance robot control on an eye phantom.
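A minimal sketch of variable admittance control in this cooperative setting: the commanded velocity follows the surgeon's force on the tool handle, but the admittance gain shrinks as the measured sclera force grows, stiffening the behaviour near the contact point. The gains and scaling rule are illustrative assumptions.

```python
import numpy as np

# Sketch: admittance gain reduced with sclera force so the robot resists
# motion more near the contact point. Gains and scaling rule are assumed.

def commanded_velocity(handle_force_n, sclera_force_n, base_admittance=0.02, sclera_scale=5.0):
    """Cartesian velocity command (m/s) for the cooperatively controlled robot."""
    admittance = base_admittance / (1.0 + sclera_scale * sclera_force_n)
    return admittance * np.asarray(handle_force_n)

v = commanded_velocity(handle_force_n=[0.5, 0.0, -0.2], sclera_force_n=0.04)
```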

Journal ArticleDOI
TL;DR: Kissenger is an interactive device that provides a physical interface for transmitting a kiss between two remotely connected people: each device is paired with another, and the force and shape of the user's kiss are sensed, transmitted to the paired device, and replicated there using actuators.
Abstract: Intimate interactions between remotely located individuals are not well supported by conventional communication tools, mainly due to the lack of physical contact. Also, haptic research has not focused on the use of a kiss as a mode of interaction that maintains intimacy in long distance relationships. In this study, we designed and developed a haptic device called Kissenger (Kiss-Messenger) for this issue. Kissenger is an interactive device that provides a physical interface for transmitting a kiss between two remotely connected people. Each device is paired with another and the amount of force and shape of the kiss by the user is sensed and transmitted to another device that is replicated using actuators. Kissenger is designed to augment already existing remote communication technologies. Challenges in the design and development of the system are addressed through an iterative design process involving constant evaluation by users after each stage. The devices are evaluated through a short- and a long-term user study with remotely located couples. The results point to an initial acceptance of the device with feedback from the users on directions to improve the overall experience. This study discusses potential issues that designers should be aware of when prototyping for remote intimate interactions.