
Showing papers in "IEEE Transactions on Haptics in 2012"


Journal ArticleDOI
TL;DR: This paper employs a sensorized handheld tool to capture the feel of a given texture, reduces the three-dimensional acceleration signals to a perceptually equivalent one-dimensional signal, and uses linear predictive coding to distill this raw haptic information into a database of frequency-domain texture models.
Abstract: Modern haptic interfaces are adept at conveying the large-scale shape of virtual objects, but they often provide unrealistic or no feedback when it comes to the microscopic details of surface texture. Direct texture-rendering challenges the state of the art in haptics because it requires a finely detailed model of the surface's properties, real-time dynamic simulation of complex interactions, and high-bandwidth haptic output to enable the user to feel the resulting contacts. This paper presents a new, fully realized solution for creating realistic virtual textures. Our system employs a sensorized handheld tool to capture the feel of a given texture, recording three-dimensional tool acceleration, tool position, and contact force over time. We reduce the three-dimensional acceleration signals to a perceptually equivalent one-dimensional signal, and then we use linear predictive coding to distill this raw haptic information into a database of frequency-domain texture models. Finally, we render these texture models in real time on a Wacom tablet using a stylus augmented with small voice coil actuators. The resulting virtual textures provide a compelling simulation of contact with the real surfaces, which we verify through a human subject study.

199 citations
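As an illustration of the linear predictive coding step described in the abstract, here is a minimal pure-Python sketch of an LPC fit via the Levinson-Durbin recursion, applied to a synthetic one-dimensional acceleration trace. The signal and model order are invented for the example; this shows the generic technique, not the authors' implementation.

```python
import math

def autocorr(x, max_lag):
    # Biased autocorrelation estimates r[0..max_lag].
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) / n
            for k in range(max_lag + 1)]

def lpc(x, order):
    # Levinson-Durbin recursion: find coefficients a[1..order] so that
    # x[t] is approximated by sum_k a[k] * x[t - k]; also return the
    # final prediction-error power.
    r = autocorr(x, order)
    a = [0.0] * (order + 1)
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] - sum(a[k] * r[m - k] for k in range(1, m))
        refl = acc / err
        new_a = a[:]
        new_a[m] = refl
        for k in range(1, m):
            new_a[k] = a[k] - refl * a[m - k]
        a = new_a
        err *= (1.0 - refl * refl)
    return a[1:], err

# Synthetic one-dimensional "acceleration" trace: a decaying resonance,
# the kind of narrowband signal LPC represents compactly.
signal = [math.exp(-0.01 * t) * math.sin(0.3 * t) for t in range(400)]
coeffs, err = lpc(signal, order=8)
```

A low-order model captures such a narrowband trace almost perfectly; per-texture coefficient sets of this kind are what the paper stores in its database of frequency-domain texture models.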


Journal ArticleDOI
TL;DR: In this article, a novel sensory substitution technique is presented, where both kinesthetic and cutaneous force feedback are substituted by cutaneous feedback only, provided by two wearable devices able to apply forces to the index finger and the thumb, while holding a handle during teleoperation tasks.
Abstract: A novel sensory substitution technique is presented. Kinesthetic and cutaneous force feedback are substituted by cutaneous feedback (CF) only, provided by two wearable devices able to apply forces to the index finger and the thumb while holding a handle during a teleoperation task. The force pattern fed back to the user while using the cutaneous devices is similar, in terms of intensity and area of application, to the cutaneous force pattern applied to the finger pad while interacting with a haptic device providing both cutaneous and kinesthetic force feedback. The pattern generated using the cutaneous devices can be thought of as a subtraction between the complete haptic feedback (HF) and the kinesthetic part of it. For this reason, we refer to this approach as sensory subtraction instead of sensory substitution. A needle insertion scenario is considered to validate the approach. The haptic device is connected to a virtual environment simulating a needle insertion task. Experiments show that the perception of inserting a needle using the cutaneous-only force feedback is nearly indistinguishable from that felt by the user while using both cutaneous and kinesthetic feedback. Like most sensory substitution approaches, the proposed sensory subtraction technique has the advantage of not suffering from the stability issues of teleoperation systems due, for instance, to communication delays. Moreover, experiments show that the sensory subtraction technique outperforms sensory substitution with more conventional visual feedback (VF).

148 citations


Journal ArticleDOI
TL;DR: The results show that directional responses are fastest when direction is conveyed through the location of the tactile stimulus or steady lateral skin stretch, and feedback that clearly conveys movement direction enables subjects to reach target positions most quickly.
Abstract: Tactile motion guidance systems aim to direct the user's movement toward a target pose or trajectory by delivering tactile cues through lightweight wearable actuators. This study evaluates 10 forms of tactile feedback for guidance of wrist rotation to understand the traits that influence the effectiveness of such systems. We present five wearable actuators capable of tapping, dragging across, squeezing, twisting, or vibrating against the user's wrist; each actuator can be controlled via steady or pulsing drive algorithms. Ten subjects used each form of feedback to perform three unsighted movement tasks: directional response, position targeting, and trajectory following. The results show that directional responses are fastest when direction is conveyed through the location of the tactile stimulus or steady lateral skin stretch. Feedback that clearly conveys movement direction enables subjects to reach target positions most quickly, though tactile magnitude cues (steady intensity and especially pulsing frequency) can also be used when direction is difficult to discern. Subjects closely tracked arbitrary trajectories only when both movement direction and cue magnitude were subjectively rated as very easy to discern. The best overall performance was achieved by the actuator that repeatedly taps on the subject's wrist on the side toward which they should turn.

96 citations


Journal ArticleDOI
TL;DR: It is suggested that the guiding principle should be the design of interfaces that serve as a transparent medium for augmenting the authors' natural skills of interaction with the world, instead of requiring conscious attention to the interface as an opaque object in the world.
Abstract: The cognitive sciences are increasingly coming to terms with the embodied, embedded, extended, and experiential aspects of the mind. Exemplifying this shift, the enactive approach points to an essential role of goal-directed bodily activity in the generation of meaningful perceptual experience, i.e., sense-making. Here, building on recent insights into the transformative effects of practical tool-use, we make use of the enactive approach in order to provide a definition of an enactive interface in terms of augmented sense-making. We introduce such a custom-built interface, the Enactive Torch, and present a study of its experiential effects. The results demonstrate that the user experience is not adequately captured by any standardly assumed perceptual modality; rather, it is a new feeling that is mediated by the design of the device and shaped by the overall situation of the task. Taken together these findings show that there is much to be gained by synergies between engineering and the cognitive sciences in the creation of new experience-centered technology. We suggest that the guiding principle should be the design of interfaces that serve as a transparent medium for augmenting our natural skills of interaction with the world, instead of requiring conscious attention to the interface as an opaque object in the world.

85 citations


Journal ArticleDOI
TL;DR: A novel guidance paradigm taxonomy is proposed intended to help classify and compare the multitude of implementations in the literature, as well as a revised proxy rendering model to allow for the implementation of more complex guidance paradigms.
Abstract: Shared-control haptic guidance is a common form of robot-mediated training used to teach novice subjects to perform dynamic tasks. Shared-control guidance is distinct from more traditional guidance controllers, such as virtual fixtures, in that it provides novices with real-time visual and haptic feedback from a real or virtual expert. Previous studies have shown varying levels of training efficacy using shared-control guidance paradigms; it is hypothesized that these mixed results are due to interactions between specific guidance implementations (“paradigms”) and tasks. This work proposes a novel guidance paradigm taxonomy intended to help classify and compare the multitude of implementations in the literature, as well as a revised proxy rendering model to allow for the implementation of more complex guidance paradigms. The efficacies of four common paradigms are compared in a controlled study with 50 healthy subjects and two dynamic tasks. The results show that guidance paradigms must be matched to a task's dynamic characteristics to elicit effective training and low workload. Based on these results, we provide suggestions for the future development of improved haptic guidance paradigms.

84 citations


Journal ArticleDOI
TL;DR: This study constructed an electrotactile display using a 1.45 μs feedback loop, and real-time pulse width modulation was proposed, and the relationship between skin resistance and absolute threshold was measured to find a function for determining a suitable pulse width from skin resistance.
Abstract: An electrotactile display is a tactile interface composed of skin surface electrodes. The use of such a device is limited by the variability of the elicited sensation. One possible solution to this problem is to monitor skin electrical impedance. Previous studies revealed a correlation between impedance and threshold, but did not construct real-time feedback loops. In this study, an electrotactile display was constructed using a 1.45 μs feedback loop. Real-time pulse width modulation was proposed, and the relationship between skin resistance and absolute threshold was measured to find a function for determining a suitable pulse width from skin resistance. An evaluation experiment revealed that the proposed algorithm suppressed spatial variation and reduced temporal change.

78 citations
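The abstract's resistance-to-pulse-width feedback loop can be sketched as follows. The linear calibration and all of its constants are hypothetical placeholders, since the actual function in the paper is fit from measured thresholds.

```python
def pulse_width_us(r_kohm, slope=0.08, offset=15.0, lo=10.0, hi=120.0):
    # Map measured skin resistance (kOhm) to stimulation pulse width (us).
    # A hypothetical linear calibration with clamping; the real function
    # must be fit from threshold measurements, as done in the paper.
    return max(lo, min(hi, offset + slope * r_kohm))

def refresh(resistances_kohm):
    # One pass of the feedback loop: re-measure resistance at every
    # electrode and recompute its pulse width before the next pulse.
    return [pulse_width_us(r) for r in resistances_kohm]

widths = refresh([200.0, 450.0, 900.0])  # moist to dry skin conditions
```

Widening the pulse as resistance rises is what suppresses the spatial and temporal variation of the sensation; the loop simply repeats this measurement-and-update per electrode.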


Journal ArticleDOI
TL;DR: A novel implementation of conventional actuation principles achieves a compact design with superior performance compared to devices of a similar footprint, demonstrating an excellent combination of tactor spatiotemporal resolution, force, and amplitude.
Abstract: This paper presents the development of a compact tactile display and its integration in teleoperation. The system's operation is based on the display of surface shape to an area of the fingertip through a 4 × 4 array of tactors moving perpendicularly to the skin surface. The tactors are spring loaded and are actuated remotely by dc motors through a flexible tendon transmission. This novel implementation of conventional actuation principles achieves a compact design with superior performance compared to devices of a similar footprint, demonstrating an excellent combination of tactor spatiotemporal resolution, force, and amplitude. The display's ergonomic design and high performance make it suitable for integration on haptic devices for tactile feedback in virtual reality and teleoperation. This paper presents the design, control, and performance of the tactile display and of the transmission system. It also demonstrates the display's integration on an Omega7 force feedback device for the teleoperation of a KUKA LWR manipulator. An experiment is presented in which users teleoperated the stylus of the robot in a 3D contour following task with and without tactile feedback. In this experiment, force feedback from the slave is fused with model-based local tactile feedback. The subjects' performance indicates an improvement in teleoperation when both tactile and force feedback are present.

76 citations


Journal ArticleDOI
TL;DR: It is found that penetration between the tool and the teeth or cheek greatly decreases the fidelity of the simulation; it is therefore necessary to use a 6-DOF haptic device with both force and torque feedback in a dental simulator, and accordingly to extend point-based rendering to 6-DOF haptic rendering of multiregion contacts.
Abstract: Performance evaluation is indispensable for a surgical simulator to become acceptable. A haptics-based dental simulator (iDental) has been developed, and a preliminary user evaluation of its first-generation prototype has been carried out. Based on a detailed requirement analysis of periodontics procedures, a combined evaluation method including qualitative and quantitative analysis was designed. Construct validity was used to compare the performance difference between two groups of participants (faculty members and dental graduate students). These participants were required to perform three periodontal examination and treatment procedures: periodontal pocket probing, calculus detection, and calculus removal. From the evaluation results, we found that penetration between the tool and the teeth or cheek greatly decreases the fidelity of the simulation; it is therefore necessary to use a 6-DOF haptic device with both force and torque feedback in a dental simulator, and accordingly to extend point-based rendering to 6-DOF haptic rendering of multiregion contacts. Furthermore, several other key research topics that will enable haptic technology to be effective in a practical dental simulator were identified, including the simulation of deformable bodies such as the tongue and gingiva, and the simulation of the occlusion of the tongue and cheek on the teeth.

75 citations


Journal ArticleDOI
TL;DR: An analogous Turing-like handshake test is developed to determine if a machine can produce similarly indistinguishable movements and the Tit-for-Tat and the Machine Learning models generated handshakes that were perceived as the most human-like among the three models that were tested.
Abstract: In the Turing test, a computer model is deemed to “think intelligently” if it can generate answers that are indistinguishable from those of a human. We developed an analogous Turing-like handshake test to determine if a machine can produce similarly indistinguishable movements. The test is administered through a telerobotic system in which an interrogator holds a robotic stylus and interacts with another party, artificial or human, with varying levels of noise. The interrogator is asked which party seems more human. Here, we compare the human-likeness of three different handshake models: (1) a Tit-for-Tat model, (2) a λ model, and (3) a Machine Learning model. The Tit-for-Tat and Machine Learning models generated handshakes that were perceived as the most human-like among the three models tested. Combining the best aspects of each of the three models into a single robotic handshake algorithm might allow us to advance our understanding of the way the nervous system controls sensorimotor interactions and to further improve the human-likeness of robotic handshakes.

72 citations


Journal ArticleDOI
TL;DR: In these studies, participants rated the tendency of pictured objects to invite touch, or “touch-ability,” which varied reliably with structural attributes of objects, and the structural influences were distinct from those on other ratings such as attractiveness and apparent expense.
Abstract: Touch has received increasing interest in marketing, given research indicating that contact with products influences evaluation and the tendency to purchase. However, little is known from the marketing or psychophysical literature about visible attributes of objects that elicit touch for hedonic purposes. In these studies, participants rated the tendency of pictured objects to invite touch, or “touch-ability.” Rated touch-ability varied reliably with structural attributes of objects, and the structural influences were distinct from those on other ratings such as attractiveness and apparent expense. Although the trends varied across object sets, touch-ability generally declined as surface textures became markedly rough and shape complexity became extreme. Holding stimulus factors constant, touch-ability also varied with the specific hand movements that were anticipated. Finally, mean touch-ability ratings were correlated across participants with the “Need for Touch” scale, which measures an individual's tendency to touch products. The studies point to touch-ability as a potential factor that might be incorporated into product design.

68 citations


Journal ArticleDOI
TL;DR: A novel electrotactile display that can be integrated into current handheld devices with touch screens is presented; it conveys tactile information by transmitting small currents through electrodes, and experiments identify which stimulation parameters let simulated textures be perceived as equal to reference textures.
Abstract: We present a novel electrotactile display that can be integrated into current handheld devices with touch screens. In this display, tactile information is presented to the fingertip of the user by transmitting small currents through electrodes. Experiments were conducted to investigate the perception of simulated textures using this electrotactile display technique. One fundamental feature of texture, which is the focus of this study, is roughness. The aim of the first experiment was to investigate the relationship between electrotactile stimulation parameters, such as current and pulse frequency, and the perception of roughness. An increase in the current magnitude resulted in an increase in perceived roughness. The aim of the second experiment was to investigate which parameter combinations of electrotactile stimuli can be used to simulate textures. Subjects adjusted the intensity and frequency of the current stimuli until the simulated textures were perceived as equal to reference textures such as sandpapers of varying grit numbers and grooved woods of varying groove widths. Subjects tended to find an electrotactile stimulus with a high current magnitude and a low pulse frequency more suitable for representing rough surfaces. They tended to find just-perceptible current magnitudes suitable for very smooth surfaces and did not show a preference for any frequency.

Journal ArticleDOI
TL;DR: Results show that the haptic feedback of indenting a real silicone tumor with a rod can be approximated reasonably well with the underlying algorithm, and the feasibility of using this technology for medical training systems is explored.
Abstract: Haptic augmented reality (AR) is an emerging research area, which targets the modulation of haptic properties of real objects by means of virtual feedback. In our research, we explore the feasibility of using this technology for medical training systems. As a possible demonstration example, we currently examine the use of augmentation in the context of breast tumor palpation. The key idea in our prototype system is to augment the real feedback of a silicone breast mock-up with simulated forces stemming from virtual tumors. In this paper, we introduce and evaluate the underlying algorithm to provide these force augmentations. This includes a method for the identification of the contact dynamics model via measurements on real sample objects. The performance of our augmentation is examined quantitatively as well as in a user study. Initial results show that the haptic feedback of indenting a real silicone tumor with a rod can be approximated reasonably well with our algorithm. The advantage of such an augmentation approach over physical training models is the ability to create a nearly infinite variety of palpable findings.
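A minimal sketch of the force augmentation idea, assuming a Hunt-Crossley style contact model with placeholder parameters (the paper identifies its contact dynamics model from measurements on real sample objects, so these numbers are not the identified values):

```python
def tumor_force(depth_m, vel_mps, k=800.0, n=1.5, b=2.0):
    # Hunt-Crossley style nonlinear spring-damper for the virtual tumor.
    # k, n, b are illustrative placeholders, not identified parameters.
    if depth_m <= 0.0:
        return 0.0  # rod has not yet indented the virtual tumor region
    return k * depth_m ** n + b * (depth_m ** n) * vel_mps

def augmented_force(real_force_n, depth_m, vel_mps):
    # The user feels the real mock-up force plus the simulated tumor force.
    return real_force_n + tumor_force(depth_m, vel_mps)
```

Because the virtual component is purely additive, the same silicone mock-up can present many different palpable findings just by changing the virtual tumor's position and model parameters.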

Journal ArticleDOI
TL;DR: A new variable admittance control law is designed that guarantees the stability of the robot during constrained motion while providing very intuitive human-robot interaction.
Abstract: Safety and dependability are of the utmost importance for physical human-robot interaction due to the potential risks that a relatively powerful robot poses to human beings. From the control standpoint, it is possible to improve safety by guaranteeing that the robot will never exhibit any unstable behavior. However, stability is not the only concern in the design of a controller for such a robot. During human-robot interaction, the resulting cooperative motion should be truly intuitive and should not restrict the human performance in any way. For this purpose, we have designed a new variable admittance control law that guarantees the stability of the robot during constrained motion and also provides very intuitive human interaction. The former characteristic is provided by the design of a stability observer, while the latter is based on a variable admittance control scheme that uses the time derivative of the contact force to assess human intentions. The stability observer is based on a previously published stability investigation of cooperative motion, which requires knowledge of the interaction stiffness. A method to accurately estimate this stiffness online, using data from the encoders and from a multiaxis force sensor at the end effector, is also provided. The stability and intuitiveness of the control law are verified in a user study involving a cooperative drawing task with a 3-degree-of-freedom (dof) parallel robot, as well as in experiments performed with a prototype of an industrial Intelligent Assist Device.
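The variable admittance idea, with damping modulated by the time derivative of the contact force, can be sketched as follows. The admittance form and all gains are illustrative assumptions, not the published control law or its stability observer.

```python
def admittance_step(f, f_prev, v, dt, m=10.0, b_min=5.0, b_max=60.0, c=0.05):
    # One step of a variable admittance law m*a + b*v = f. The damping b
    # drops when |df/dt| is large (the human intends to change speed) and
    # stays high when the force is steady. All gains are illustrative.
    df = (f - f_prev) / dt
    b = max(b_min, min(b_max, b_max - c * abs(df)))
    a = (f - b * v) / m
    return v + a * dt, b

# With a steady 10 N push, the velocity settles toward f / b_max.
v = 0.0
for _ in range(200):
    v, b = admittance_step(10.0, 10.0, v, 0.01)
```

A sudden change in force lowers the damping, letting the robot accelerate easily with the human; a steady force keeps the motion well damped, which is the intuition behind using the force derivative to read human intent.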

Journal ArticleDOI
TL;DR: It is concluded that force measurements in a box trainer can be used to classify the level of performance of trainees and can contribute to objective assessment of suture skills.
Abstract: When equipped with motion and force sensors, box trainers can be good alternatives to relatively expensive Virtual Reality (VR) trainers. As in VR trainers, the sensors in a box trainer can provide the trainee with objective information about his or her performance. Recently, multiple tracking systems were developed for the classification of participants based on motion and time parameters. The aim of this study is the development of force parameters that reflect the trainee's performance in a suture task. Our second goal is to investigate whether a participant's skill can be classified as expert or novice level. In the experiment, experts (n = 11) and novices (n = 21) performed a two-handed needle driving and knot tying task on artificial tissue inside a box trainer. The tissue was mounted on the force platform that was used to measure the force that the subject applied to the tissue in three directions. We evaluated the potential of 16 different performance parameters, related to the magnitude, direction, and variability of the applied forces, to distinguish between different levels of surgical expertise. Nine of the parameters showed significant differences between experts and novices. Principal Component Analysis was used to convert these nine partly correlated parameters, such as peak force, mean force, and main direction of force, into two uncorrelated variables. By performing a Leave-One-Out Cross-Validation with Linear Discriminant Analysis on each participant's scores on these two variables, it was possible to correctly classify 84 percent of all participants as an expert or a novice. We conclude that force measurements in a box trainer can be used to classify the level of performance of trainees and can contribute to the objective assessment of suture skills.
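The classification pipeline can be illustrated with a simplified stand-in: leave-one-out cross-validation with a nearest-class-mean classifier on synthetic two-dimensional force features. The paper itself uses Principal Component Analysis followed by Linear Discriminant Analysis on real measurements; the data and classifier below are invented for the sketch.

```python
def mean(rows):
    # Component-wise mean of a list of feature vectors.
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def loocv_accuracy(samples):
    # samples: list of (feature_vector, label). Leave-one-out: classify
    # each held-out sample by the nearer class mean of the remainder.
    hits = 0
    for i, (x, label) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]
        means = {lab: mean([f for f, l in rest if l == lab])
                 for lab in {l for _, l in rest}}
        pred = min(means, key=lambda lab: dist2(x, means[lab]))
        hits += (pred == label)
    return hits / len(samples)

# Synthetic (peak force, mean force) style scores: experts apply
# smaller, steadier forces than novices in this invented example.
experts = [([1.0 + 0.1 * i, 0.5 + 0.05 * i], "expert") for i in range(6)]
novices = [([3.0 + 0.1 * i, 1.5 + 0.05 * i], "novice") for i in range(6)]
acc = loocv_accuracy(experts + novices)
```

On such cleanly separated synthetic data the accuracy is perfect; the paper's 84 percent on real participants reflects the genuine overlap between expert and novice force signatures.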

Journal ArticleDOI
TL;DR: Haptic interaction between a human leader and a robot follower in waltz is studied and two robot controllers, namely, admittance with virtual force controller and inverted pendulum controller, are proposed and evaluated in experiments.
Abstract: Haptic interaction between a human leader and a robot follower in waltz is studied in this paper. An inverted pendulum model is used to approximate the human's body dynamics. Using feedback from the force sensor and laser range finders, the robot estimates the human leader's state with an extended Kalman filter (EKF). To reduce the interaction force, two robot controllers, namely an admittance-with-virtual-force controller and an inverted pendulum controller, are proposed and evaluated in experiments. The former controller failed in the experiments, and the reasons for the failure are explained; the latter controller is validated by the experimental results.
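The state-estimation step can be illustrated with a simplified linear Kalman filter on the linearized inverted pendulum (the paper uses an extended Kalman filter; the leg length, noise covariances, and angle-only measurement model below are invented for the sketch).

```python
G_OVER_L = 9.81 / 1.0  # gravity over pendulum length; 1 m is an assumption

def kf_step(x, P, z, dt, q=1e-3, r=1e-2):
    # One predict/update cycle of a linear Kalman filter on the linearized
    # inverted pendulum x = [lean angle, angular velocity], with a range-
    # finder-style measurement z of the lean angle (H = [1, 0]).
    F = [[1.0, dt], [G_OVER_L * dt, 1.0]]
    # Predict: x <- F x,  P <- F P F^T + Q
    xp = [F[0][0] * x[0] + F[0][1] * x[1],
          F[1][0] * x[0] + F[1][1] * x[1]]
    FP = [[sum(F[i][k] * P[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    Pp = [[sum(FP[i][k] * F[j][k] for k in range(2)) + (q if i == j else 0.0)
           for j in range(2)] for i in range(2)]
    # Update: innovation on the measured angle only
    s = Pp[0][0] + r
    k0, k1 = Pp[0][0] / s, Pp[1][0] / s
    y = z - xp[0]
    xn = [xp[0] + k0 * y, xp[1] + k1 * y]
    Pn = [[(1.0 - k0) * Pp[0][0], (1.0 - k0) * Pp[0][1]],
          [Pp[1][0] - k1 * Pp[0][0], Pp[1][1] - k1 * Pp[0][1]]]
    return xn, Pn

# Track a simulated lean from angle measurements alone.
dt = 0.01
true = [0.02, 0.0]                       # initial lean angle, at rest
est, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for _ in range(50):
    true = [true[0] + dt * true[1], G_OVER_L * dt * true[0] + true[1]]
    est, P = kf_step(est, P, true[0], dt)
```

The filter recovers the unmeasured angular velocity from successive angle measurements, which is what lets the robot follower anticipate the leader's motion rather than merely react to contact force.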

Journal ArticleDOI
TL;DR: Results show users are capable of interpreting all stimuli with high accuracy and can use the direction cues for mobile navigation; the experiments also characterize the user response to all stimulus modes.
Abstract: This paper reports on a series of user experiments evaluating the design of a multimodal test platform capable of rendering visual, audio, vibrotactile, and directional skin-stretch stimuli. The test platform is a handheld, wirelessly controlled device that will facilitate experiments with mobile users in realistic environments. Stimuli rendered by the device are fully characterized, and have little variance in stimulus onset timing. A series of user experiments utilizing navigational cues validates the function of the device and investigates the user response to all stimulus modes. Results show users are capable of interpreting all stimuli with high accuracy and can use the direction cues for mobile navigation. Tests included both stationary (seated) and mobile (walking a simple obstacle course) tasks. Accuracy and response time patterns are similar in both seated and mobile conditions. This device provides a means of designing and evaluating multimodal communication methods for handheld devices and will facilitate experiments investigating the effects of stimulus mode on device usability and situation awareness.

Journal ArticleDOI
TL;DR: The proposed haptic interface, together with the virtual reality software, forms a highly realistic training simulator for endoscopic surgeons, applicable not only to colonoscopy but also to similar interventions.
Abstract: Inspection of the colon with an endoscope for early signs of cancer (colonoscopy) has become an extremely widespread procedure, since early treatment radically improves the outlook of patients. The procedure requires close coordination between the sense of touch and vision to navigate the endoscope along the colon. This raises the need to develop efficient training methods for physicians. Training simulators based on virtual reality, where realistic graphics are combined with a mechatronic system providing haptic feedback, are an alternative to traditional training methods. To provide physicians with realistic haptic sensations of an endoscopic procedure, we have designed a haptic interface, instrumented a clinical endoscope, and combined them with simulation software for colonoscopy. In this contribution, we present the mechatronic components of the simulator. The haptic interface is able to generate high forces using a combination of electrical motors and brakes in a compact design. Experiments were performed to determine the characteristics of the device. A model-based controller has been implemented, and the results show that the control successfully compensates for the device nonlinearities, such as friction. The proposed haptic interface, together with the virtual reality software, forms a highly realistic training simulator for endoscopic surgeons, applicable not only to colonoscopy but also to similar interventions.

Journal ArticleDOI
TL;DR: Insight is presented on how human-computer interaction can be enriched by endowing computers with behavioral patterns that naturally appear in human-human negotiation scenarios, using a two-party negotiation game specifically built for studying the effectiveness of haptic and audio-visual cues in conveying negotiation-related behaviors.
Abstract: An active research goal for human-computer interaction is to allow humans to communicate with computers in an intuitive and natural fashion, especially in real-life interaction scenarios. One approach that has been advocated to achieve this is to build computer systems with human-like qualities and capabilities. In this paper, we present insight on how human-computer interaction can be enriched by endowing computers with behavioral patterns that naturally appear in human-human negotiation scenarios. For this purpose, we introduce a two-party negotiation game specifically built for studying the effectiveness of haptic and audio-visual cues in conveying negotiation-related behaviors. The game is centered around a real-time continuous two-party negotiation scenario based on the existing game-theory and negotiation literature. During the game, humans are confronted with a computer opponent, which can display different behaviors, such as concession, competition, and negotiation. Through a user study, we show that the behaviors associated with human negotiation can be incorporated into human-computer interaction, and that the addition of haptic cues provides a statistically significant increase in the human recognition accuracy of machine-displayed behaviors. In addition to aspects of conveying these negotiation-related behaviors, we also report on game-theoretical aspects of the overall interaction experience. In particular, we show that, as reported in the game-theory literature, certain negotiation strategies such as tit-for-tat may generate maximum combined utility for the negotiating parties, providing an excellent balance between the energy spent by the user and the combined utility of the negotiating parties.

Journal ArticleDOI
TL;DR: It is expected that the proposed vibrotactile flow generation method and vibration isolation design may be useful in applications including generating directional information in navigation maps or for identifying callers in mobile devices.
Abstract: This paper proposes a method for generating a smooth directional vibrotactile flow on a thin plate. While actuating two piezoelectric actuators spatially across the plate, temporal sweeping of the input excitation frequency from zero to the first mode of the resonance frequency can smooth the perceived directional vibrotactile flow, as compared to a vibrotactile flow generated by conventional apparent tactile movement and phantom sensation methods. In order to ascertain important factors in the excitation pattern, a user study was conducted for three factors (amplitude (constant versus modulated), frequency (constant versus swept), and ending shape (sharp versus smooth)). The results showed that frequency sweeping in addition to amplitude modulation and smooth ending were the most important factors in smoothing vibrotactile flows. Moreover, an excitation signal with a smooth ending shape was important for generating nonspiky flows at the midpoint. In this study, a vibration isolation design is also proposed in order to substantially decrease the transmission of the actuator vibration to the mockup housing. As such, it is expected that the proposed vibrotactile flow generation method and vibration isolation design may be useful in applications including generating directional information in navigation maps or for identifying callers in mobile devices.
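The excitation described above, with frequency swept from zero toward the first resonance, amplitude modulation, and a smooth ending, can be sketched as a single-channel drive signal. The Hann envelope and the specific sweep target are illustrative choices, not the paper's exact waveform, and the second actuator would be driven with a complementary, spatially offset pattern.

```python
import math

def excitation(t, T, f1):
    # One actuator's drive signal over 0..T: the instantaneous frequency
    # sweeps linearly from 0 Hz to f1 (the plate's first resonance,
    # assumed known), while a Hann envelope modulates the amplitude up
    # and back down, giving the smooth, non-spiky ending.
    if not 0.0 <= t <= T:
        return 0.0
    phase = 2.0 * math.pi * (f1 * t * t) / (2.0 * T)   # linear sweep 0 -> f1
    envelope = 0.5 * (1.0 - math.cos(2.0 * math.pi * t / T))  # Hann window
    return envelope * math.sin(phase)
```

Because both the envelope and its derivative vanish at t = 0 and t = T, consecutive sweeps on the two actuators can be overlapped without the spiky mid-flow transients the paper attributes to sharp endings.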

Journal ArticleDOI
TL;DR: A novel interactive haptic bone-burring model based on impulse-based dynamics to simulate the contact forces, including resistant and frictional forces is proposed.
Abstract: Bone-burring is a common procedure in orthopedic, dental, and otologic surgeries. Virtual reality (VR)-based surgical simulations with both visual and haptic feedbacks provide novice surgeons with a feasible and safe way to practice their burring skill. However, creating realistic haptic interactions between a high-speed rotary burr and stiff bone is a challenging task. In this paper, we propose a novel interactive haptic bone-burring model based on impulse-based dynamics to simulate the contact forces, including resistant and frictional forces. In order to mimic the lateral and axial burring vibration forces, a 3D vibration model has been developed. A prototype haptic simulation system for the bone-burring procedure has been implemented to evaluate the proposed haptic rendering methods. Several experiments of force evaluations and task-oriented tests were conducted on the prototype system. The results demonstrate the validity and feasibility of the proposed methods.

Journal ArticleDOI
TL;DR: This study develops a haptic processor that can control multiple motors and discusses how to create traveling vibrotactile waves in mobile devices.
Abstract: Mobile device users can now experience diverse graphical content ranging from a simple static object to an object having complex dynamic behavior. A user who manipulates and plays with such “objects” wants to haptically “feel” the presence of a static object or the motion of a dynamic object. To satisfy this demand, we previously proposed a vibrotactile rendering method based on a vibrotactile traveling wave. Although the proposed method can haptically simulate the dynamic behavior of a target object, generating the traveling vibrotactile wave with precision is difficult, because the sampling rate of the system's haptic loop limits the quality of the wave. In this study, we develop a haptic processor that can control multiple motors, and furthermore we discuss how traveling vibrotactile waves can be created in mobile devices.
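A traveling vibrotactile wave over a motor array can be sketched by delaying each motor's drive envelope by its distance from the wave front. This is an illustrative construction, not the paper's processor design; the motor count, spacing, and wave speed are invented values.

```python
import math

def motor_amplitudes(t, n_motors=4, wave_speed=0.2, spacing=0.01, freq=5.0):
    """Drive amplitude (0..1) for each of n_motors arranged in a line at
    time t. Motor i's envelope is delayed by its distance from the first
    motor divided by the desired wave speed, so the peak of vibration
    travels across the device."""
    amps = []
    for i in range(n_motors):
        delay = i * spacing / wave_speed          # seconds until the wave reaches motor i
        phase = 2.0 * math.pi * freq * (t - delay)
        amps.append(max(0.0, math.sin(phase)))    # half-rectified traveling envelope
    return amps
```

The quality constraint the abstract mentions shows up here directly: the haptic loop must sample this function fast enough that each motor's delayed envelope is rendered without visible steps.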

Journal ArticleDOI
TL;DR: An algorithm and training method using a force-feedback joystick with an “assist-as-needed” paradigm for driving training, which shows results with a group study on typically developing toddlers that such a haptic guidance algorithm is superior to training with a conventional joystick.
Abstract: The broader goal of our research is to train infants with special needs to safely and purposefully drive a mobile robot to explore the environment. The hypothesis is that these impaired infants will benefit from mobility in their early years and attain childhood milestones, similar to their healthy peers. In this paper, we present an algorithm and training method using a force-feedback joystick with an “assist-as-needed” paradigm for driving training. In this “assist-as-needed” approach, if the child steers the joystick outside a force tunnel centered on the desired direction, the driver experiences a bias force on the hand. Through a group study on typically developing toddlers, we show that such a haptic guidance algorithm is superior to training with a conventional joystick. We also provide a case study on two special needs children, under three years old, who learn to make sharp turns during driving, when trained over a five-day period with the force-feedback joystick using the algorithm.
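The "assist-as-needed" force tunnel can be illustrated with a 2D sketch: no force while the joystick deflection stays inside the tunnel around the desired direction, and a spring-like bias force on the hand once it strays outside. The gain and tunnel width here are hypothetical, not the paper's tuned values.

```python
def tunnel_force(pos, desired_dir, tunnel_width=0.2, gain=5.0):
    """pos: joystick deflection (x, y); desired_dir: unit vector of the
    desired steering direction. Returns the (x, y) bias force: zero while
    the deflection stays inside the tunnel, spring-like and pointing back
    toward the tunnel once it strays outside."""
    # signed lateral distance from the line along desired_dir
    lateral = pos[0] * desired_dir[1] - pos[1] * desired_dir[0]
    overshoot = abs(lateral) - tunnel_width / 2.0
    if overshoot <= 0.0:
        return (0.0, 0.0)                 # inside the tunnel: no assistance
    sign = -1.0 if lateral > 0.0 else 1.0
    # force is perpendicular to desired_dir, opposing the deviation
    return (sign * gain * overshoot * desired_dir[1],
            -sign * gain * overshoot * desired_dir[0])
```

Because the force is zero inside the tunnel, the child drives unassisted whenever steering is close enough to the desired direction, which is the defining property of assist-as-needed schemes.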

Journal ArticleDOI
TL;DR: The existence of a singularity property is pointed out, at which interaction force estimation is impossible, and close to which it may be infeasible, and the Force Sensor Free (FSF) transformation for linear teleoperation systems is reported.
Abstract: Measuring interaction forces in bilateral teleoperation systems may be difficult, due to size and cost restrictions on the force sensors. Obtaining the interaction forces by estimation can be a viable alternative. The primary contribution of this paper is the study of the effect of interaction force estimation on performance in bilateral teleoperation. A distinction is made between the obvious effect as a result of inaccurate estimation, and the less obvious effect as a result of the inherent theoretical properties of a system that has two points of interaction with its surroundings (a teleoperator) as opposed to one point of interaction (single robot). Specifically, the existence of a singularity property is pointed out, at which interaction force estimation is impossible, and close to which it may be infeasible. The secondary contribution of the paper is the Force Sensor Free (FSF) transformation for linear teleoperation systems, which is an automated procedure that turns a teleoperation controller with force sensing into an equivalent controller with force estimation. An experiment is reported whose objective is to validate the operation of the FSF transformation on a real teleoperator.
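For a single-DOF robot, sensorless interaction force estimation of the kind discussed above can be sketched with a disturbance observer. This is a generic textbook construction, not the paper's FSF transformation; the model mass * dv/dt = u - damping * v + f_ext, the gain, and all values are assumptions.

```python
def force_observer(u_seq, v_seq, mass, damping, dt, l_gain=50.0):
    """Single-DOF disturbance-observer sketch for the model
    mass * dv/dt = u - damping * v + f_ext: recover f_ext from the
    commanded input u and the measured velocity v, with no force sensor
    and no explicit differentiation of v."""
    f_hat, v_hat, estimates = 0.0, v_seq[0], []
    for u, v in zip(u_seq, v_seq):
        # predict velocity using the current force estimate...
        v_hat += (u - damping * v_hat + f_hat) / mass * dt
        # ...and correct the force estimate from the velocity error
        f_hat += l_gain * (v - v_hat) * dt
        estimates.append(f_hat)
    return estimates
```

In a teleoperator both the master and the slave would run such an observer, and the paper's singularity concerns arise precisely because the two estimates are coupled through the controller.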

Journal ArticleDOI
TL;DR: The ability of laypeople to detect artificial tumors of various hardness values embedded in silicone gels was assessed in a simulated MIS environment and participants were significantly more accurate and more efficient at tumor detection with the finger as compared to the other methods of exploration.
Abstract: Minimally invasive surgery uses optical cameras and special surgical tools in order to operate from an environment one step removed from the body cavity of interest to the surgeon. It has been suggested that constraints posed by this arrangement, in particular the lack of direct haptic feedback to the surgeon, may affect the surgeon's ability to identify tissues and accurately maneuver inside the body cavity. In the present study, the ability of laypeople to detect artificial tumors of various hardness values embedded in silicone gels was assessed in a simulated MIS environment. Participants explored the gels under three conditions, all with remote viewing: using the bare finger unrestricted, using a stick-like surgical tool also unrestricted, and using the surgical tool restricted by insertion through an operating port as in MIS. Participants were significantly more accurate and more efficient at tumor detection with the finger as compared to the other methods of exploration, and they were also better at detecting harder tumors as compared to softer ones. The potential implications of these results for the role of haptic perception in minimally invasive surgery are discussed.

Journal ArticleDOI
TL;DR: In this article, the authors present a prototype interface that provides negative vibrotactile feedback, which is generated when an inactive or ambiguous part of the screen, such as the area between two buttons, is touched.
Abstract: Touchscreen technology has become pervasive in the consumer product arena over the last decade, offering some distinct advantages such as software reconfigurable interfaces and the removal of space consuming mice and keyboards. However, there are significant drawbacks to these devices that have limited their adoption by some users. Most notably, standard touchscreens demand the user's visual attention and require them to look at the input device to avoid pressing the wrong button. This issue is particularly important for mobile, capacitive sensing, nonstylus devices, such as the iPhone, where small button sizes can generate high error rates. While previous work has shown the benefits of augmenting such interfaces with audio or vibrotactile feedback, only positive feedback (confirmation of button presses) has been considered. In this paper, we present a simple prototype interface that provides negative vibrotactile feedback. By negative, we mean feedback is generated when an inactive or ambiguous part of the screen, such as the area between two buttons, is touched. First, we present a usability study comparing positive and negative vibrotactile feedback for a benchmark numerical data entry task. The difference in performance is not statistically significant, implying negative feedback provides comparable benefits. Next, based on the experimenter's observations and the users' comments, we introduce a multimodal feedback strategy, combining complementary positive audio and negative vibrotactile signals. User tests on a text entry experiment show that, with multimodal feedback, users exhibit a (statistically significant) 24 percent reduction in corrective key presses, as compared to positive audio feedback alone. Exit survey comments indicate that users favor multimodal feedback.
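The negative-feedback logic reduces to hit-testing each touch against the active button regions and cueing the user when the touch lands in a dead zone. A minimal sketch, with a hypothetical button layout:

```python
def classify_touch(point, buttons):
    """buttons: dict mapping a name to its (x, y, width, height) rectangle.
    Returns ('positive', name) when the touch lands on a button, and
    ('negative', None) when it lands on an inactive or ambiguous region,
    which is when the negative vibrotactile cue would fire."""
    x, y = point
    for name, (bx, by, bw, bh) in buttons.items():
        if bx <= x <= bx + bw and by <= y <= by + bh:
            return ('positive', name)
    return ('negative', None)
```

In the multimodal strategy the 'positive' branch would trigger the audio confirmation and the 'negative' branch the vibrotactile pulse.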

Journal ArticleDOI
TL;DR: The interaction evaluated in this paper separates the task and guidance forces by guiding one hand so the user can actively recreate the motion with their other hand that receives task-related forces.
Abstract: When teaching physical skills, experts or robotic assistants commonly move a novice through a task. However, this guiding motion is only partially effective at portraying the full experience because the guided person is only performing the task passively and the guidance and task forces can become ambiguously intertwined. The interaction evaluated in this paper separates the task and guidance forces by guiding one hand so the user can actively recreate the motion with their other hand, which receives task-related forces. This method is based on the ability of humans to easily move their hands through similar paths, such as drawing circles, compared to the difficulty of simultaneously drawing a square with one hand and a circle with the other. Several experiments were first performed to characterize the reference frames, interaction stiffnesses, and trajectories that humans can recreate. Visual Symmetry and Joint-Space Symmetry proved to be easier than Point Mirror Symmetry, and participants' recreated motions typically lagged by approximately 50-100 ms. Based on these results, participants used bimanual guidance to identify the orientation of a hard rod embedded in a soft material. The results show that participants could identify the orientation of the rod equally well when working independently compared to being bimanually guided through a desired motion.

Journal ArticleDOI
TL;DR: This work wants to make the SBVI aware of the deictic gestures performed by the teacher over the graphic in conjunction with speech, and employs a haptic glove interface to facilitate this embodiment awareness.
Abstract: Mathematics instruction and discourse typically involve two modes of communication: speech and graphical presentation. For the communication to remain situated, dynamic synchrony must be maintained between the speech and dynamic focus in the graphics. In sighted people, vision is used for two purposes: access to graphical material and awareness of embodied behavior. This embodiment awareness keeps communication situated with visual material and speech. Our goal is to assist students who are blind or visually impaired (SBVI) in the access to such instruction/communication. We employ the typical approach of sensory replacement for the missing visual sense. Haptic fingertip reading can replace visual material. We want to make the SBVI aware of the deictic gestures performed by the teacher over the graphic in conjunction with speech. We employ a haptic glove interface to facilitate this embodiment awareness. We address issues from the conception through the design and implementation to the effective and successful use of our Haptic Deictic System (HDS) in inclusive classrooms.

Journal ArticleDOI
TL;DR: The perception of curvature and change in curvature improve the performance of humans in perception of the whole shape, whereas edges, when not directly contributing to the task, disrupt performance.
Abstract: In previous studies, the effects of individual features, such as curvature and edges, on perception have been studied with specifically designed stimuli. However, the effect of local properties on the perception of the global object has so far received little attention. In this study, cylinders with an elliptical cross section and rectangular blocks were used to investigate the effect and relative importance of curvature, change in curvature and edges, as local properties, on the ability of subjects to determine the orientation of the stimuli, which is a global property. We found that when curvature was present the threshold to determine the orientation was 43 percent lower than when curvature was absent. When, in addition, the change in curvature could be felt, the threshold was 37 percent lower than when only curvature could be felt. Finally, when edges were felt during exploration, the threshold increased by 46 percent compared to when the subjects were instructed to avoid the edges in the blocks. We conclude that the perception of curvature and change in curvature improve the performance of humans in perception of the whole shape, whereas edges, when not directly contributing to the task, disrupt performance.

Journal ArticleDOI
TL;DR: Two new haptic-assistive techniques are presented that utilize the 3 DOF capabilities of the Phantom Omni to produce assistance that is designed specifically for motion-impaired computer users and include haptic cones and V-shaped funnels.
Abstract: Haptic assistance is the process of using force feedback to aid the operator in human-computer interaction (HCI). This may take the form of guiding the operator toward a target or assisting them in its selection. Haptic feedback has previously been investigated to assist motion-impaired computer users; however, limitations of previous 2 DOF haptic target acquisition techniques such as gravity wells and high-friction-targets have hampered progress. In this paper, two new haptic-assistive techniques are presented that utilize the 3 DOF capabilities of the Phantom Omni to produce assistance that is designed specifically for motion-impaired computer users. These include haptic cones and V-shaped funnels. To evaluate the effectiveness of the new haptic techniques, a series of point-and-click experiments were undertaken in parallel with cursor analysis to compare the levels of performance. The task required the operator to produce a predefined sentence on the Windows On-Screen Keyboard. The results of the study show that higher performance levels can be achieved using techniques that are less constricting than traditional assistance and without many of the drawbacks. Haptic cones produced the most significant results when compared to an unassisted interface, with a mean improvement of 53 percent in the number of missed clicks and a 145 percent improvement in throughput.
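A haptic cone can be sketched as a conical constraint with its apex on the target: the stylus moves freely inside the cone and is pushed laterally back toward its surface when outside, funneling the cursor onto the target as the stylus approaches the screen plane. The cone angle and gain below are hypothetical, not the paper's values.

```python
import math

def cone_force(pos, target, cone_angle_deg=30.0, gain=2.0):
    """pos: stylus position (x, y, z), z = height above the screen plane;
    target: (x, y) of the on-screen target (at z = 0). Inside the cone
    whose apex sits on the target there is no force; outside it, a
    lateral spring force pushes the stylus back toward the cone
    surface."""
    dx, dy = pos[0] - target[0], pos[1] - target[1]
    radial = math.hypot(dx, dy)
    # the allowed lateral radius shrinks to zero as the stylus descends
    allowed = max(pos[2], 0.0) * math.tan(math.radians(cone_angle_deg))
    overshoot = radial - allowed
    if overshoot <= 0.0 or radial == 0.0:
        return (0.0, 0.0, 0.0)
    scale = -gain * overshoot / radial   # spring force toward the cone axis
    return (scale * dx, scale * dy, 0.0)
```

Unlike a gravity well, this constraint exerts no force inside the cone, which is one way such a technique can be less constricting than traditional assistance.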

Journal ArticleDOI
TL;DR: The development of a haptic device to be used in a simulator aiming to train the skills of gastroenterology assistants in abdominal palpation during colonoscopy, as well as to train team interaction skills for the Colonoscopy team.
Abstract: In this paper, we describe the development of a haptic device to be used in a simulator aiming to train the skills of gastroenterology assistants in abdominal palpation during colonoscopy, as well as to train team interaction skills for the colonoscopy team. To understand the haptic feedback forces to be simulated by the haptic device, we conducted an experiment with five participants of varying BMI. The applied forces and displacements were measured and hysteresis modeling was used to characterize the experimental data. These models were used to determine the haptic feedback forces required to simulate a specific BMI case in response to the real-time user interactions. The pneumatic haptic device consisted of a sphygmomanometer bladder as the haptic interface and a fuzzy controller to regulate the bladder pressure. The haptic device showed good steady-state and dynamic response, adequate for simulating haptic interactions. Tracking accuracy averaged 94.2 percent within 300 ms of the reference input while the user was actively applying abdominal palpation and minor repositioning.
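The abstract does not specify which hysteresis model was fitted; as a generic illustration, a rate-independent play (backlash) operator, a common building block of such models, already reproduces a force-displacement loop over a press-release cycle like those measured during palpation.

```python
def play_operator(displacements, width, slope):
    """Rate-independent 'play' (backlash) operator: force follows
    displacement with the given slope but lags through a dead band of
    half-width `width`, so loading and unloading trace different paths
    (a hysteresis loop)."""
    force, forces = 0.0, []
    for d in displacements:
        # force is clamped to the band [slope*(d - width), slope*(d + width)]
        force = min(max(force, slope * (d - width)), slope * (d + width))
        forces.append(force)
    return forces
```

Superposing several such operators with different widths and slopes (a Prandtl-Ishlinskii model) is a standard way to fit measured force-displacement loops; whether the authors used this family is not stated in the abstract.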