
Showing papers on "Haptic technology published in 1995"


Proceedings ArticleDOI
05 Aug 1995
TL;DR: In this article, a haptic rendering algorithm is proposed for generating convincing interaction forces for objects modeled as rigid polyhedra; a virtual model of the haptic interface, called the god-object, conforms to the virtual environment and the interface is servoed to it.
Abstract: Haptic display is the process of applying forces to a human "observer" giving the sensation of touching and interacting with real physical objects. Touch is unique among the senses because it allows simultaneous exploration and manipulation of an environment. A haptic display system has three main components. The first is the haptic interface, or display device, generally some type of electro-mechanical system able to exert controllable forces on the user with one or more degrees of freedom. The second is the object model, a mathematical representation of the object containing its shape and other properties related to the way it feels. The third component, the haptic rendering algorithm, joins the first two components to compute, in real time, the model-based forces to give the user the sensation of touching the simulated objects. This paper focuses on a new haptic rendering algorithm for generating convincing interaction forces for objects modeled as rigid polyhedra. We create a virtual model of the haptic interface, called the god-object, which conforms to the virtual environment. The haptic interface can then be servoed to this virtual model. This algorithm is extensible to other functional descriptions and lays the groundwork for displaying not only shape information, but surface properties such as friction and compliance.

838 citations
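
The single-facet case of this algorithm reduces to a simple rule: keep a proxy point (the god-object) on the constraint surface and servo the device toward it. A minimal Python sketch of that case follows; the stiffness value and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

K_SPRING = 500.0  # servo stiffness in N/m (assumed value)

def god_object_force(device_pos, plane_point, plane_normal):
    """Keep the god-object on a single planar facet and servo the device toward it."""
    n = plane_normal / np.linalg.norm(plane_normal)
    depth = np.dot(device_pos - plane_point, n)
    if depth >= 0.0:                        # device outside the object: free motion,
        god_object = device_pos             # god-object coincides with the device
    else:                                   # device has penetrated the facet:
        god_object = device_pos - depth * n # god-object stays on the surface
    return K_SPRING * (god_object - device_pos)  # force rendered to the user
```

In the full algorithm the god-object's position is found by minimizing its distance to the device position subject to the active surface constraints, which handles edges and corners of the polyhedron.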


04 May 1995
TL;DR: Before the authors can create virtual world solutions to real world problems, they must learn how to interact with information and controls distributed about a user instead of concentrated in a window in front of him.
Abstract: Virtual environments have shown considerable promise as a natural (and thus it is hoped more effective) form of human-computer interaction. In a virtual world you can use your eyes, ears, and hands much as you do in the real world: move your head to set your viewpoint, listen to sounds that have direction, reach out your hands to grab and manipulate virtual objects. Virtual worlds technologies (such as head-tracking and stereo, head-mounted displays) provide a better understanding of three-dimensional shapes and spaces through perceptual phenomena such as head-motion parallax, the kinetic depth effect, and stereopsis. Precise interaction, however, is difficult in a virtual world. Virtual environments suffer from a lack of haptic feedback (which helps us to control our interaction in the real world) and current alphanumeric input techniques for the virtual world (which we use for precise interaction in the computer world) are ineffective. We are unfamiliar with this new medium we work in; we do not fully understand how to immerse a user within an application. Before we can create virtual world solutions to real world problems we must learn how to interact with information and controls distributed about a user instead of concentrated in a window in front of him. We must identify natural forms of interaction and extend them in ways not possible in the real world.

483 citations


Proceedings ArticleDOI
05 Aug 1995
TL;DR: In this paper, the authors describe some of the challenges of creating realistic haptic perceptions of tool use, including real-time collision detection and collision response.
Abstract: Our group is interested in using haptic display for training tool use. Applications include training doctors to use tools during surgery, and training astronauts to use tools during EVA. This paper describes some of the challenges of creating realistic haptic perceptions of tool use. Many of these challenges stem from the importance of unilateral constraints during tool use. Unilateral constraints occur whenever rigid bodies collide, resisting the interpenetration of the bodies, but not holding the bodies together. To identify unilateral constraints, a tool/environment simulation must perform collision detection. To respond properly to a collision, the simulation must estimate the forces that ensue, and integrate the equations of motion. All of these computations must occur in real time, and the simulation as a whole must be stable (to ensure the user's safety). Approaches to these problems are described.

419 citations
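
The loop the abstract outlines, collision detection, force estimation, and integration of the equations of motion, can be sketched for a point tool meeting a single unilateral constraint (a floor at z = 0). The penalty spring-damper and all gains below are assumptions for illustration, standing in for whatever force estimator a real simulation uses.

```python
G = 9.81  # gravitational acceleration, m/s^2

def step(pos, vel, mass, dt, k=2000.0, b=5.0):
    """One explicit-Euler step for a point tool above a floor at z = 0."""
    f = [0.0, 0.0, -G * mass]                    # gravity on the tool
    penetration = -pos[2]                        # collision detection: below floor?
    if penetration > 0.0:
        f_normal = k * penetration - b * vel[2]  # estimated contact force
        f[2] += max(f_normal, 0.0)               # unilateral: pushes, never pulls
    vel = [v + (fi / mass) * dt for v, fi in zip(vel, f)]  # integrate motion
    pos = [p + v * dt for p, v in zip(pos, vel)]
    return pos, vel
```

Clamping the normal force at zero is what encodes the one-sided nature of the constraint: the floor can resist interpenetration but cannot hold the tool down.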


Patent
16 Oct 1995
TL;DR: In this article, the position and orientation of the user are utilized to generate a virtual reality force field, and forces are in turn generated on the user as a function of this force field.
Abstract: A system and method for providing a tactile virtual reality to a user is presented. The position and orientation of the user are utilized to generate a virtual reality force field. Forces are in turn generated on the user as a function of this force field. A six-axis manipulator is presented for providing a user interface to such a system. This manipulator provides a unique kinematic structure with two constant-force springs which provide gravity compensation so that the manipulator effectively floats.

412 citations
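
The core idea, forces generated as a function of a force field evaluated at the user's tracked pose, fits in a few lines. The repulsive-sphere field below is purely illustrative; the patent does not prescribe this particular field or gain.

```python
import numpy as np

def field_force(user_pos, center, radius, gain=100.0):
    """Sample a repulsive virtual force field around a spherical obstacle."""
    d = np.asarray(user_pos, float) - np.asarray(center, float)
    dist = np.linalg.norm(d)
    if dist >= radius or dist == 0.0:
        return np.zeros(3)                     # outside the field: no force
    return gain * (radius - dist) * d / dist   # push outward, growing with depth
```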


Journal ArticleDOI
TL;DR: The authors are working to develop new technology that rectifies this sensory deficit by relaying tactile information from the surgical site to the surgeon, and have developed a variety of tactile sensors that can be mounted in a probe or surgical instrument.
Abstract: One of a surgeon's most important tools is a highly developed sense of touch. Surgeons rely on sensations from the fingertips to guide manipulation and to perceive a wide variety of anatomical structures and pathologies. Unfortunately, new surgical techniques separate the surgeon's hands from the surgical site. These techniques include minimally invasive procedures such as laparoscopy and thoracoscopy, and new techniques involving robotic manipulators. In these situations the surgeon's perception is limited to visual feedback from a video camera, or gross motion and force feedback through the handles of long instruments. The authors are working to develop new technology to rectify this sensory deficit by relaying tactile information from the surgical site to the surgeon. They have developed a variety of tactile sensors that can be mounted in a probe or surgical instrument. The tactile information provided by these sensors may then be conveyed through the tactile display devices the authors have developed to recreate the tactile stimulus directly on the surgeon's fingertip. By using these remote palpation devices, the surgeon may regain some of the perceptual and manipulative skills present in conventional open-incision surgery. Among the tactile feedback parameters the authors are investigating are force reflection, vibration, and small-scale shape.

305 citations


Proceedings ArticleDOI
15 Apr 1995
TL;DR: The software techniques needed to generate sensations of contact interaction and material properties are described; the techniques are appropriate for use with the Phantom haptic interface, a force-generating display device developed in the authors' laboratory.
Abstract: Haptic rendering is the process of computing and generating forces in response to user interactions with virtual objects. Recent efforts by our team at MIT's AI laboratory have resulted in the development of haptic interface devices and algorithms for generating the forces of interaction with virtual objects. This paper focuses on the software techniques needed to generate sensations of contact interaction and material properties. In particular, the techniques we describe are appropriate for use with the Phantom haptic interface, a force generating display device developed in our laboratory. We also briefly describe a technique for representing and rendering the feel of arbitrary polyhedral shapes and address issues related to rendering the feel of non-homogeneous materials. A number of demonstrations of simple haptic tasks which combine our rendering techniques are also described.

302 citations
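
Rendering of contact interaction and material properties typically composes a normal force law for shape with tangential terms for surface feel. The sketch below shows one plausible composition; the spring-damper form, the capped Coulomb friction, and all parameter values are assumptions, not the paper's specific techniques.

```python
import math

def contact_force(depth, v_normal, v_tangent, k=800.0, b=2.0, mu=0.3, b_t=50.0):
    """Return (normal, tangential) force magnitudes for one point contact."""
    if depth <= 0.0:
        return 0.0, 0.0                           # not touching the surface
    f_n = max(k * depth - b * v_normal, 0.0)      # stiffness + damping: shape feel
    f_t = -math.copysign(                         # friction: material feel, a
        min(mu * f_n, b_t * abs(v_tangent)),      # Coulomb cone capped by a
        v_tangent)                                # viscous term near zero slip
    return f_n, f_t
```

Making k, b, and mu functions of surface position is one way to render the non-homogeneous materials the paper mentions.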


Patent
05 Jun 1995
TL;DR: In this article, a multi-processor system architecture is described for providing commands to a computer through tracked manual gestures and for providing feedback to the user through forces applied to the interface.
Abstract: A method and apparatus for use with a computer for providing commands to a computer through tracked manual gestures and for providing feedback to the user through forces applied to the interface. A user manipulatable object is coupled to a mechanical linkage which is, in turn, supportable on a fixed surface. The mechanical linkage or the user manipulatable object is tracked by sensors for sensing the location and/or orientation of the object. A multi-processor system architecture is disclosed wherein a host computer system is interfaced with a dedicated microprocessor which is responsive to the output of the sensors and provides the host computer with information derived from the sensors. The host computer has an application program which responds to the information provided via the microprocessor and which can provide force-feedback commands back to the microprocessor. The force feedback is felt by a user via the user manipulatable object.

280 citations
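
The architecture's division of labor, a dedicated microprocessor handling sensors and forces at a high rate while the host runs the application program, can be sketched as a message loop. The queue-based link, message tags, and stub functions below are illustrative assumptions standing in for the patent's actual interface.

```python
import queue

def read_sensors():
    return (0.0, 0.0, 0.0)        # stub for real position/orientation sensing

def apply_force(force):
    pass                          # stub for real actuator output

def device_loop(to_host, from_host, cycles=1000):
    """High-rate loop on the dedicated-microprocessor side of the link."""
    for _ in range(cycles):
        to_host.put(("POSE", read_sensors()))      # derived sensor data to host
        try:
            tag, payload = from_host.get_nowait()  # non-blocking command check
            if tag == "FORCE":
                apply_force(payload)               # host-commanded force feedback
        except queue.Empty:
            pass                                   # no new command this cycle
```

Two queue.Queue() instances can stand in for the physical link when trying the loop out.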


Patent
29 Mar 1995
TL;DR: This paper describes the development of a virtual surgery system, called VQSplat, which is capable of displaying multiple volumetric models and of manipulating and editing (cutting) them, using a hierarchical multi-resolution point rendering system derived from an existing point rendering system, QSplat.
Abstract: A virtual surgery system or virtual testing system provides a simulation or test based on image data. A simulator combined with a real exam requires a test taker to perform simulation tasks. Additionally, a surgical procedure may be simulated using image data of a patient in devices simulating the physical instruments a surgeon uses in performing the actual procedure, for example. The user input device, such as a mouse, three-dimensional mouse, joystick, seven-dimensional joystick, full-size simulator, etc., can be used in a virtual simulation to move through the image data while the user views the data and the interaction of the input device with the image data on a display screen. Force feedback can be provided based on physical constraint models (of the anatomy, for example), or based on edge and collision detection between the virtual scope or virtual tool used by the operator and walls or edges of the image data in the image space. The virtual simulator may be used as a teaching, training, testing, demonstration, or remote telesurgery device, for example.

257 citations


01 Dec 1995
TL;DR: Proceedings of Haptic Interfaces for Virtual Environment and Teleoperator Systems.
Abstract: Proceedings volume collecting papers on haptic interfaces for virtual environment and teleoperator systems.

227 citations



Journal ArticleDOI
01 Aug 1995
TL;DR: In this article, a low-cost and compact force feedback joystick was developed as a new user interface to communicate with a computer, which can simulate haptic effects such as hitting a wall, pushing a spring, reaction force of firing a hand gun, and crash of a vehicle.
Abstract: The authors have developed a low-cost and compact force-feedback joystick as a new user interface for communicating with a computer. They adopted a DC-motor-powered joystick to create the illusion of force to a human hand. For entertainment applications, without modifying the source code of the PC video games, they intercept the interrupts reserved for mouse, colormap and keyboard input and replace them with a new interrupt service routine in which force effects are added to the games. Effects such as hitting a wall, pushing a spring, the reaction force of firing a handgun, and the crash of a vehicle can be easily simulated, thus making haptic simulation readily available in virtual environments.
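
The listed effects reduce to simple force laws on the stick position. Below is a hedged sketch for one joystick axis x in [-1, 1]; every gain and time constant is an assumed value, not taken from the paper.

```python
import math

def spring_effect(x, k=3.0):
    return -k * x                                  # pull back toward center

def wall_effect(x, wall=0.5, k=20.0):
    return -k * (x - wall) if x > wall else 0.0    # stiff one-sided wall

def recoil_effect(t, f0=5.0, tau=0.05):
    return f0 * math.exp(-t / tau)                 # decaying kick after firing
```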

Proceedings ArticleDOI
11 Mar 1995
TL;DR: Experiments with the haptic interface SPICE revealed that an operator could smoothly touch and trace the curved surface of a stiff virtual object, even when the update rate of the virtual plane was relatively low.
Abstract: A method of intermediate space for controlling haptic interfaces is characterized by updating a virtual plane at a low frequency while maintaining a high update rate in the force control loop of the interface. By using the virtual plane, the detection of collisions between the fingertip and virtual objects becomes independent of the control of the haptic interface. This enables the haptic interface to display increasingly complex surfaces while keeping the same sampling frequency for impedance control. Experiments with the haptic interface SPICE revealed that an operator could smoothly touch and trace the curved surface of a stiff virtual object, even when the update rate of the virtual plane was relatively low.
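
The method's two-rate structure can be sketched directly: a slow loop re-fits the virtual plane from the full object model, while the fast loop renders force against the current plane. The rates, stiffness, and callback interface below are illustrative assumptions.

```python
import numpy as np

def haptic_loop(get_device_pos, fit_virtual_plane, steps=10000,
                slow_every=100, k=1000.0):
    """Yield one force per fast cycle; re-fit the plane every slow_every cycles."""
    point, normal = fit_virtual_plane()           # initial virtual plane
    for i in range(steps):
        if i % slow_every == 0:                   # slow loop: collision detection
            point, normal = fit_virtual_plane()   # against the full object model
        depth = np.dot(point - get_device_pos(), normal)
        yield k * max(depth, 0.0) * normal        # fast loop: plane-penalty force
```

With slow_every = 100 and a 1 kHz fast loop, the plane updates at roughly 10 Hz, the kind of low plane-update rate the paper reports can still feel smooth on stiff curved surfaces.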

Patent
26 May 1995
TL;DR: In this article, a method for communicating graphic data such as plotted two-dimensional curves to a user such as a sight impaired person is disclosed, which uses haptic or tactile stimulation of a user's extremity such as is relied upon in the Braille code fingertip communication already known to many sight impaired persons.
Abstract: A method for communicating graphic data such as plotted two-dimensional curves to a user such as a sight-impaired person is disclosed. The disclosed arrangement uses haptic or tactile stimulation of a user's extremity, such as is relied upon in the Braille-code fingertip communication already known to many sight-impaired persons. Computerized control over a Braille-character-like display and use of the computer mouse as a data selection input device are included in the invention. The invention also includes provision of data enhancement and data interpretation aids, including axis names, multiple-curve identifications, grid-line identifications and the addition of audio information such as tick sounds and spoken utterances to supplement the tactile communication. A computer-based embodiment of the invention is disclosed in the form of hardware block diagrams, software flow diagrams and computer code listings, the latter being primarily in microfiche appendix form.

Journal ArticleDOI
TL;DR: The operating room and hospital of the future will first be designed and tested in virtual reality, bringing together the full power of the digital physician.
Abstract: Medical applications for virtual reality (VR) are just beginning to emerge. These include VR surgical simulators, telepresence surgery, complex medical database visualization, and rehabilitation. These applications are mediated through the computer interface and as such are the embodiment of VR as an integral part of the paradigm shift in the field of medicine. The Green Telepresence Surgery System consists of two components, the surgical workstation and remote worksite. At the remote site there is a 3-D camera system and responsive manipulators with sensory input. At the workstation there is a 3-D monitor and dexterous handles with force feedback. The VR surgical simulator is a stylized recreation of the human abdomen with several essential organs. Using a helmet mounted display and DataGlove, a person can learn anatomy from a new perspective by 'flying' inside and around the organs, or can practice surgical procedures with a scalpel and clamps. Database visualization creates 3-D images of complex medical data for new perspectives in analysis. Rehabilitation medicine permits impaired individuals to explore worlds not otherwise available to them, allows accurate assessment and therapy for their disabilities, and helps architects understand their critical needs in public or personal space. And to support these advanced technologies, the operating room and hospital of the future will be first designed and tested in virtual reality, bringing together the full power of the digital physician.

Patent
27 Jan 1995
TL;DR: In this paper, a system for sensing and displaying sensory information to a human operator comprising a world modeler adapted to receive sensory input from the operator over a communications medium is described.
Abstract: A system for sensing and displaying sensory information to a human operator comprising a world modeler adapted to receive sensory input from the operator over a communications medium. The world modeler modifies either a virtual world which is a mathematical model of an environment, or modifies a real world which is a remote real environment. The world modeler then sends sensory information from the real or virtual environments over the communication medium to the haptic system. The haptic system displays this sensory information to the human operator. The result is that the operator is able to manipulate a real or virtual environment remotely through the information sensed and displayed by the haptic system. The method provides for compression of the haptic information, reducing the bandwidth required to transmit sensory information between the haptic system and the world modeler.

Journal ArticleDOI
TL;DR: The data suggest that object recognition can occur when global volumetric primitives cannot directly be extracted, and that the effect of exposure duration was observed primarily with minimal cuing, indicating compensatory effects of top-down processing.
Abstract: Subjects identified common objects under conditions of a “haptic glance,” a brief haptic exposure that placed severe spatial and temporal constraints on stimulus processing. They received no advance cue, a superordinate-level name as cue, or a superordinate and basic-level name as cue. The objects varied in size relative to the fingertip and in the most diagnostic attribute, either texture or shape. The data suggest that object recognition can occur when global volumetric primitives cannot directly be extracted. Even with no cue, confusion errors resembled the target object and indicated extraction of material and local shape information, which was sufficient to provide accuracy above 20%. Performance improved with cuing, and the effect of exposure duration was observed primarily with minimal cuing, indicating compensatory effects of top-down processing.

Proceedings ArticleDOI
21 Dec 1995
TL;DR: An empirical study is described in which human subjects executed a peg-insertion task through a telepresence link with force feedback.
Abstract: An empirical study was performed in which human subjects were asked to execute a peg-insertion task through a telepresence link with force feedback. Subjects controlled a remote manipulator through natural hand motions by using an anthropomorphic upper-body exoskeleton. The force-reflecting exoskeleton could present haptic sensations in six degrees of freedom. Subjects viewed the remote site through a high-fidelity stereo vision system. Subjects performed the peg-insertion task under three different conditions: (1) in person (direct manipulation), (2) through the telepresence link (telemanipulation), and (3) through the telepresence link while using abstract virtual haptic overlays known as `virtual fixtures' (telemanipulation with virtual fixturing). Five different haptic overlays were tested, including virtual surfaces, virtual damping fields, virtual snap-to-planes, and virtual snap-to-lines. Results of subject testing confirmed that human performance was significantly degraded when comparing telepresence manipulation to direct in-person manipulation. Results also confirmed that by introducing abstract haptic overlays into the telepresence link, operator performance could be restored closer to natural in-person capacity. The use of 3D haptic overlays was found to as much as double manual performance in the standard peg-insertion task.
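
One of the tested overlays, the snap-to-line fixture, can be sketched as a spring pulling the tool tip onto a guide line. The gain and interface below are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def snap_to_line_force(tip, line_point, line_dir, k=200.0):
    """Spring force attracting the tool tip to its nearest point on a guide line."""
    d = np.asarray(line_dir, float)
    d /= np.linalg.norm(d)
    offset = np.asarray(tip, float) - np.asarray(line_point, float)
    nearest = np.asarray(line_point, float) + np.dot(offset, d) * d
    return k * (nearest - np.asarray(tip, float))   # pull the tip onto the line
```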

Proceedings ArticleDOI
21 May 1995
TL;DR: A new approach to motion constraint is presented: the control law is established by simulating a virtual ideal mechanism that acts as a jig and is connected to the master and slave arms via springs and dampers, making it possible to impose any motion constraint on the system, including non-linear constraints involving coupling between translations and rotations.
Abstract: In a teleoperation system, assistance can be given to the operator by constraining the telerobot position to remain within a restricted subspace of its workspace. A new approach to motion constraint is presented in this paper. The control law is established simulating a virtual ideal mechanism acting as a jig, and connected to the master and slave arms via springs and dampers. Using this approach, it is possible to impose any (sufficiently smooth) motion constraint to the system, including non-linear constraints (complex surfaces) involving coupling between translations and rotations. Physical equivalence ensures that the controller is passive. Experimental results obtained with a 6-DOF teleoperation system are given. Other applications of the virtual mechanism concept include hybrid position-force control and haptic interfaces.
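
A one-degree-of-freedom instance of the virtual mechanism makes the idea concrete: an ideal particle constrained to a curve p(s) is coupled by a spring to the master, and the spring's reaction is what the operator feels. The dynamics, gains, and curve interface below are assumptions sketching the concept, not the paper's control law.

```python
import numpy as np

def virtual_mechanism_step(s, s_dot, x_master, curve,
                           k=400.0, b=5.0, m=0.1, dt=0.001):
    """Advance the 1-DOF virtual mechanism; return its state and the master force."""
    p = curve(s)                             # mechanism position on the curve
    t = (curve(s + 1e-4) - p) * 1e4          # numerical tangent direction
    t /= np.linalg.norm(t)
    f_c = k * (x_master - p)                 # spring coupling to the master
    a = (np.dot(f_c, t) - b * s_dot) / m     # damped motion along the curve
    s_dot += a * dt
    s += s_dot * dt
    return s, s_dot, -f_c                    # reaction force felt at the master
```

Because the coupling is a physical spring-damper acting on an ideal mechanism, the energy argument for passivity carries over directly, which is the property the paper emphasizes.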

Journal ArticleDOI
TL;DR: The Green Telepresence Surgery System consists of two components, the surgical workstation and the remote worksite, and is the building blocks for the digital physician of the 21st century.
Abstract: We are seeing the emergence of medical applications for virtual reality (VR). These include telepresence surgery, three-dimensional (3-D) visualization of anatomy for medical education, VR surgical simulators, and virtual prototyping of surgical equipment and operating rooms. Today, approximately 90% of the knowledge a physician requires can be obtained through electronic means, such as diagnostic sensors and imaging modalities, directly seeing the patient with a video camera for medical consultation, or using electronic medical records. In addition, with telepresence, a therapy can be effected electronically, regardless of the physical location of the patient. Therefore, it makes sense to send the electronic information or manipulation, rather than sending the patient or blood samples, to obtain tests or to produce a cure. In that these applications are mediated through the computer interface, they are the embodiment of VR as the major force for change in the field of medicine. The Green Telepresence Surgery System consists of two components, the surgical workstation and the remote worksite. At the remote site are a 3-D camera system and responsive manipulators with sensory input. At the workstation are a 3-D monitor and dexterous handles with force feedback. The next generation in medical education can learn anatomy from a new perspective by "flying" inside and around the organs, using sophisticated computer systems and 3-D visualization. The VR surgical simulator is a stylized recreation of the human abdomen with several essential organs. Using this, students and surgeons can practice surgical procedures with virtual scalpels and clamps. To support these advanced technologies, the operating room and hospital of the future will first be designed and tested in virtual reality, allowing multiple iterations of equipment and surgical rooms before they are actually built. Insofar as all these technologies are based on digital information, they are the building blocks for the digital physician of the 21st century.

Journal ArticleDOI
01 Dec 1995
TL;DR: A new teleoperation system, consisting of a conventional manipulator and two identical magnetically levitated wrists, has been developed using a combination of position and rate control and is described in this paper.
Abstract: A new approach to the design of teleoperation systems is presented. It is proposed that the teleoperation slave be a coarse-fine manipulator with a fine-motion wrist identical to the teleoperation master. By using a combination of position and rate control, such a system would require only small operator hand motions but would provide low mechanical impedance, high motion resolution and force feedback over a substantial volume. A new teleoperation system, consisting of a conventional manipulator and two identical magnetically levitated wrists, has been developed using this approach and is described in this paper. Aspects of mechanical, system and computational design are discussed. It is shown that the best way to position the slave is by decoupling position and rate control, with the conventional robot controlled in rate mode and its wrist in position mode. Kinesthetic feedback is achieved through wrist-level coordinated force control. Transparency is improved through feedforward of sensed hand forces to the master and environment forces to the slave. To maintain stability, the amount of damping in the system is controlled by the sensed environment forces. Experimental results demonstrating excellent performance are presented.
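
The decoupling described, conventional robot in rate mode and wrist in position mode, can be sketched per axis as below. The deadband and gain are illustrative assumptions.

```python
def coarse_fine_commands(master_offset, deadband=0.005, rate_gain=2.0):
    """Split one axis of master motion into a wrist position and a base rate."""
    wrist_position_cmd = master_offset                 # fine wrist tracks directly
    excess = max(abs(master_offset) - deadband, 0.0)   # beyond the wrist's range
    sign = 1.0 if master_offset >= 0.0 else -1.0
    base_rate_cmd = rate_gain * excess * sign          # coarse robot creeps along
    return wrist_position_cmd, base_rate_cmd
```

Small hand motions then map to fine, force-reflecting wrist motion, while sustained offsets slew the coarse robot to cover a substantial volume.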

Proceedings ArticleDOI
30 Mar 1995
TL;DR: In this article, the authors describe a virtual computer/personal digital assistant that appears as a hand-held flat panel display and allows the user to interact with it using a virtual finger or stylus to touch the screen.
Abstract: Recent advances in both rendering algorithms and hardware have brought virtual reality to the threshold of being able to model realistically complex environments, e.g., the mockup of a large structure. As impressive as these advances have been, there is still little that a user can do within a VR system other than look and react. If VR is to be usable in a design setting, users will need to be able to interact, to control and modify their virtual environments using virtual tools inside that environment. In this paper we describe a realistic virtual computer/personal digital assistant that we have built. To the user this virtual computer appears as a hand-held flat panel display. Input can be provided to this virtual computer using a virtual finger or stylus to `touch' the screen. We migrate applications developed for a flat-screen environment into the virtual environment without modifying the application code. A major strength of this approach is that we meld the naturally 3D interaction metaphor of a hand-held virtual tool with the software support provided by some 2D user interface toolkits. Our approach has provided us great flexibility in both designing and implementing user interface components for VR environments, and it has enabled us to represent the familiar flat-screen human-computer interaction metaphor within the VR context. We will describe some applications which made use of our capability.

Proceedings ArticleDOI
11 Mar 1995
TL;DR: The paper first discusses the kinematics and calibration followed by the integration of the device into a single-user, ethernet-distributed, virtual reality (VR) environment.
Abstract: A novel compact hand master device with force feedback is presented. The Second Generation Rutgers Master (RM-II) integrates position-sensing and force-feedback to multiple fingers in a single structure, without the use of sensing gloves. The paper first discusses the kinematics and calibration followed by the integration of the device into a single-user, ethernet-distributed, virtual reality (VR) environment. The VR simulation features: visual feedback, force feedback, interactive sound and object interaction.

Book ChapterDOI
01 Jan 1995
TL;DR: Work in progress that demonstrates the utility of a voxel-based data format for modeling physical interactions between virtual objects is discussed, data structures that help to optimize storage requirements and preserve object integrity during object movement are presented, and prototype systems are described.
Abstract: This paper proposes the use of a voxel-based data representation not only for visualization, but also for physical modeling of objects and structures derived from volumetric data. Work in progress that demonstrates the utility of a voxel-based data format for modeling physical interactions between virtual objects is discussed, data structures that help to optimize storage requirements and preserve object integrity during object movement are presented, and prototype systems are described. These prototypes include 2D and 3D systems that illustrate voxel-based collision detection and avoidance, a force-feedback system that enables haptic (or tactile) exploration of virtual objects, and a 2D system that illustrates interactive modeling of deformable voxel-based objects.
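
Voxel-based collision detection reduces to testing an object's occupied cells against the scene's occupancy grid before committing a move. A minimal sketch under that assumption (integer voxel coordinates, in-bounds moves) follows; the API shape is illustrative.

```python
import numpy as np

def move_if_free(scene, obj_voxels, offset):
    """Translate obj_voxels by offset unless any destination cell is occupied."""
    moved = obj_voxels + np.asarray(offset, int)   # candidate voxel positions
    for v in moved:
        if scene[tuple(v)]:                        # destination occupied: block
            return obj_voxels, False               # move refused (avoidance)
    return moved, True                             # move committed

# Example: an empty 64^3 scene and a 2-voxel object moved one cell in +x.
scene = np.zeros((64, 64, 64), dtype=bool)
obj = np.array([[10, 10, 10], [11, 10, 10]])
obj, ok = move_if_free(scene, obj, (1, 0, 0))
```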

01 Dec 1995
TL;DR: In this paper, the authors describe the development of custom-built tactile feedback hardware and its integration with an available force-reflecting haptic interface, and select a small solenoid actuator for application in a closed-loop force control tactile feedback system.
Abstract: This thesis describes the development of custom-built tactile feedback hardware and its integration with an available force-reflecting haptic interface. Design requirements were motivated strongly by the characteristics of the human tactile sense as well as the biomechanical characteristics of the human finger. The work explores the feasibility of various actuators, and selects a small solenoid actuator for application in a closed-loop force control tactile feedback system. An adaptive PI algorithm using continuously variable gain scheduling helps to compensate for nonlinearities in the solenoid actuator. The system demonstrates adequate closed-loop control, but the mass added to the force-reflecting haptic interface proves less than optimal. Design suggestions for future prototypes may reduce the mass added by the tactile feedback hardware by over 30%. The work concludes with recommendations for psychophysical research that will increase understanding of human performance in tasks using haptic feedback devices.
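
A continuously gain-scheduled PI force loop of the kind described can be sketched as below; the schedule shape and all gains are assumptions standing in for whatever mapping the thesis fit to the solenoid's nonlinearity.

```python
class GainScheduledPI:
    """PI force controller with gains scheduled continuously on measured force."""

    def __init__(self, kp0=1.0, ki0=10.0, dt=0.001):
        self.kp0, self.ki0, self.dt = kp0, ki0, dt
        self.integral = 0.0

    def update(self, f_desired, f_measured):
        err = f_desired - f_measured
        sched = 1.0 / (0.2 + abs(f_measured))   # assumed schedule: more gain
        self.integral += err * self.dt          # where the solenoid is weakest
        return sched * (self.kp0 * err + self.ki0 * self.integral)
```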

Journal ArticleDOI
01 Oct 1995
TL;DR: Data from a study of disabled patients and normals are presented to illustrate the efficacy of "virtual force" in terms of a performance measure related to transinformation (capacity or bits/second).
Abstract: Present virtual reality (VR) systems deliver powerful graphical information with human interaction capability but may be missing corresponding motion or force sensing inputs to the human operator. The concept of "virtual force" is introduced herein as a way to provide some of this missing sensing information. The "virtual force" environment is created in this paper using a displacement stick controller (joystick), with the reflective force on the controller being coordinated, in a spatial sense, with a visual scene displayed on a monitor. Force-type fields are built about visual objects to provide proprioceptive feedback to the operator and improve his telepresence in the virtual scene. Since this study involves both information and control theory, certain equivalences are derived in the appendixes which extend to different plant dynamics, not excluding small time delays which may appear anywhere in the system. Data from a study of disabled patients and normals are presented to illustrate the efficacy of "virtual force" in terms of a performance measure related to transinformation (capacity or bits/second).

Proceedings ArticleDOI
22 Oct 1995
TL;DR: This experimental procedure was able to decouple the effects on overall telemanipulator performance introduced by the individual components of the system: master manipulator, display, slave manipulator and bilateral controller.
Abstract: In this paper we describe a novel experimental procedure for the evaluation of telemanipulator performance. A group of subjects performed the same set of tasks directly on a physical setup, on a virtual implementation capable of providing visual and force feedback through a haptic display, and remotely on the real setup using a telemanipulation system. Using this experimental procedure we were able to decouple the effects on overall telemanipulator performance introduced by the individual components of the system: master manipulator, display, slave manipulator and bilateral controller.

Book ChapterDOI
01 Jan 1995
TL;DR: Dynamic touch is the kind of touching that occurs when a utensil, a tool, or any medium-sized object is grasped and lifted, turned, carried, and so on as mentioned in this paper.
Abstract: Publisher Summary: A kind of touch that is prominent in everyday manipulatory activity involves the hand supporting the manipulated object and in contact with only a part of it. Both statically and in motion, the manipulated object affects the tensile states of the muscles and tendons of the hand and arm and thereby the patterning of activity in the ensembles of muscle and tendon receptors. Muscle and tendon deformations characterize the haptic subsystem of dynamic touch, more so than deformations of the skin and changes in joint angles. Dynamic touching is the kind of touching that occurs when a utensil, a tool, or any medium-sized object is grasped and lifted, turned, carried, and so on. Dynamic touch is implied whenever an object is grasped and wielded, whenever a solid surface is palpated or vibrated, and whenever a hand-held implement is brought into contact with other objects.

Journal ArticleDOI
TL;DR: At the University of Tokyo, a prototype of a device that allows users to directly manipulate an object in a virtual environment by touching a physical model of the object's surface as presented by an intermediate device outside the computer is created.
Abstract: An integral part of successfully manipulating objects is the sensation of touch or force. Experiments with telerobots (robots controlled at a distance) show that the sensation of force and contact improves the efficiency and accuracy of such tasks. Many believe that the same can be said of tasks in a virtual environment. Unfortunately, it is not possible to actually grasp a virtual object in the same manner as you would a real object because virtual objects are defined in the computer, while the user exists in the real world. Thus, there must be some intermediate device that provides the user with the effects of touch, either through the virtual environment itself or in a physical model of the object, which then communicates information to the virtual environment and displays the virtual object to the user. At the University of Tokyo, we are experimenting with surface display, a method that allows users to directly manipulate an object in a virtual environment by touching a physical model of the object's surface as presented by an intermediate device outside the computer. We have created a prototype of such a device that measures the force exerted on the surface. We have also implemented control and calculation methods, and have evaluated both the device and the methods in experiments with users. These experiments have validated the concepts underlying our work, and we are continuing to investigate implementation issues.

Journal ArticleDOI
TL;DR: In this article, the authors describe the dynamic behavior of a hand controller when it is maneuvered by a human, and a general control architecture is developed that guarantees various impedances on the hand controller.
Abstract: This article describes the dynamic behavior of a hand controller when it is maneuvered by a human. A general control architecture is developed that guarantees various impedances on the hand controller. We show that some compliance, either in the hand controller or in the human arm, is necessary to achieve stability of the hand controller and the human arm taken as a whole. The actuators' backdrivability, the dynamics of the hand controller mechanisms, and the computer sampling time are discussed as they relate to system stability. A set of experiments that were performed on a prototype hand controller are presented to show the system performance.
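
Guaranteeing a target impedance on the hand controller means making the device respond to measured hand force like a chosen mass-damper-spring. A sketch of the reference-generation step follows; parameter values are illustrative assumptions.

```python
def impedance_reference(f_hand, x, v, m=0.5, b=4.0, k=50.0, dt=0.001):
    """Integrate the target dynamics m*a + b*v + k*x = f_hand one time step."""
    a = (f_hand - b * v - k * x) / m   # acceleration of the virtual impedance
    v += a * dt
    x += v * dt
    return x, v                        # reference motion for the position servo
```

This echoes the article's stability point: with no compliance anywhere in the loop (very large k against a stiff arm), the sampled-data servo around this reference loses its margin.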

Proceedings ArticleDOI
05 Aug 1995
TL;DR: The response of the levitated device has been made to successfully emulate virtual devices such as gimbals and bearings, as well as different dynamic interactions such as hard solid contacts, dry and viscous friction, and textured surfaces.
Abstract: A high-performance magnetic levitation haptic interface has been developed to enable the user to interact dynamically with simulated environments by holding a levitated structure and directly feeling its computed force and motion responses. The haptic device consists of a levitated body with six degrees of freedom and motion ranges of ±5 mm and ±3.5 degrees in all directions. The current device can support weights of up to 20 N and can generate a torque of 1.7 Nm. Control bandwidths of up to 50 Hz and stiffnesses from 0.01 to 23 N/mm have been achieved by the device using a digital velocity estimator and 1 kHz control on each axis. The response of the levitated device has been successfully made to emulate virtual devices such as gimbals and bearings, as well as different dynamic interactions such as hard solid contacts, dry and viscous friction, and textured surfaces.
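
The digital velocity estimator mentioned can be sketched as a filtered finite difference at the 1 kHz control rate; the filter form and constants below are assumptions for illustration.

```python
def make_velocity_estimator(dt=0.001, alpha=0.1):
    """Return a closure estimating velocity from sampled positions."""
    state = {"x_prev": None, "v": 0.0}

    def estimate(x):
        if state["x_prev"] is not None:
            raw = (x - state["x_prev"]) / dt          # finite difference
            state["v"] += alpha * (raw - state["v"])  # first-order smoothing
        state["x_prev"] = x
        return state["v"]

    return estimate
```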