
Showing papers in "Presence: Teleoperators & Virtual Environments in 2002"


Journal ArticleDOI
TL;DR: An experiment to assess the anxiety responses of people giving 5 min. presentations to virtual audiences consisting of eight male avatars shows that post-talk PRCS is significantly and positively correlated with PRCS measured prior to the experiment only in the case of the positive and static audiences.
Abstract: This paper describes an experiment to assess the anxiety responses of people giving 5 min. presentations to virtual audiences consisting of eight male avatars. There were three different types of audience behavior: an emotionally neutral audience that remained static throughout the talk, a positive audience that exhibited friendly and appreciative behavior towards the speaker, and a negative audience that exhibited hostile and bored expressions throughout the talk. A second factor was immersion: half of the forty subjects experienced the virtual seminar room through a head-tracked, head-mounted display and the remainder on a desktop system. Responses were measured using the standard Personal Report of Confidence as a Public Speaker (PRCS), which was elicited prior to the experiment and after each talk. Several other standard psychological measures, such as the SCL-90-R (for screening for psychological disorder), the SAD, and the FNE, were also administered prior to the experiment. Other response variables included subjectively assessed somaticization and a subject self-rating scale on performance during the talk. Each subject gave the talk twice, each time to a different audience, but only the results of the first talk are presented in the analysis, making this a between-groups design. The results show that post-talk PRCS is significantly and positively correlated with PRCS measured prior to the experiment only in the case of the positive and static audiences. For the negative audience, prior PRCS was not a predictor of post-talk PRCS, which was higher than for the other two audiences and constant. The negative audience clearly provoked an anxiety response irrespective of the subject's normal level of public speaking confidence. The somatic response also showed a higher level of anxiety for the negative audience than for the other two, but self-rating was generally higher only for the static audience, each of these results taking prior PRCS into account.

489 citations


Journal ArticleDOI
TL;DR: This paper reviews the user interface of the initial Studierstube system, in particular the implementation of collaborative augmented reality, and the Personal Interaction Panel, a two-handed interface for interaction with the system.
Abstract: Our starting point for developing the Studierstube system was the belief that augmented reality, the less obtrusive cousin of virtual reality, has a better chance of becoming a viable user interface for applications requiring manipulation of complex three-dimensional information as a daily routine. In essence, we are searching for a 3-D user interface metaphor as powerful as the desktop metaphor for 2-D. At the heart of the Studierstube system, collaborative augmented reality is used to embed computer-generated images into the real work environment. In the first part of this paper, we review the user interface of the initial Studierstube system, in particular the implementation of collaborative augmented reality, and the Personal Interaction Panel, a two-handed interface for interaction with the system. In the second part, an extended Studierstube system based on a heterogeneous distributed architecture is presented. This system allows the user to combine multiple approaches (augmented reality, projection displays, and ubiquitous computing) to the interface as needed. The environment is controlled by the Personal Interaction Panel, a two-handed, pen-and-pad interface that has versatile uses for interacting with the virtual environment. Studierstube also borrows elements from the desktop, such as multi-tasking and multi-windowing. The resulting software architecture is a user interface management system for complex augmented reality applications. The presentation is complemented by selected application examples.

471 citations


Journal ArticleDOI
TL;DR: An overview of VE usability evaluation is presented to organize and critically analyze diverse work from this field, and a simple classification space for VE usability evaluation methods is discussed.
Abstract: Virtual environments (VEs) are a relatively new type of human-computer interface in which users perceive and act in a three-dimensional world. The designers of such systems cannot rely solely on design guidelines for traditional two-dimensional interfaces, so usability evaluation is crucial for VEs. This paper presents an overview of VE usability evaluation to organize and critically analyze diverse work from this field. First, we discuss some of the issues that differentiate VE usability evaluation from evaluation of traditional user interfaces such as GUIs. We also present a review of some VE evaluation methods currently in use, and discuss a simple classification space for VE usability evaluation methods. This classification space provides a structured means for comparing evaluation methods according to three key characteristics: involvement of representative users, context of evaluation, and types of results produced. Finally, to illustrate these concepts, we compare two existing evaluation approaches: testbed evaluation (Bowman, Johnson, & Hodges, 1999) and sequential evaluation (Gabbard, Hix, & Swan, 1999).

345 citations


Journal ArticleDOI
TL;DR: The details of how the unknown subsystems of the stock PHANToM can be replaced with known, high-performance systems and how additional measurement electronics can be interfaced to compensate for some of the PHANToM's shortcomings are presented.
Abstract: This paper presents a critical study of the mechanical and electrical properties of the PHANToM haptic interface and improvements to overcome its limitations for applications requiring high-performance control. Target applications share the common requirements of low-noise/granularity/latency measurements, an accurate system model, high bandwidth, the need for an open architecture, and the ability to operate for long periods without interruption while exerting significant forces. To satisfy these requirements, the kinematics, dynamics, high-frequency dynamic response, and velocity estimation of the PHANToM system are studied. Furthermore, this paper presents the details of how the unknown subsystems of the stock PHANToM can be replaced with known, high-performance systems and how additional measurement electronics can be interfaced to compensate for some of the PHANToM's shortcomings. With these modifications, it is possible to increase the maximum achievable virtual wall stiffness by 35%, active viscous damping by 120%, and teleoperation loop gain by 50% over the original system. With the modified system, it is also possible to maintain higher forces for longer periods without causing motor overheating.
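
The stiffness and damping figures above refer to the spring-damper law commonly used to render a stiff virtual wall. As a rough illustration of what is being tuned, here is a minimal one-axis virtual-wall servo routine; the gains and function shape are generic haptics practice, not values or code from the paper.

```python
# Minimal virtual-wall rendering law (illustrative; gains are hypothetical,
# not values from the paper). A wall at x = 0 pushes back with a
# spring-damper force whenever the device proxy penetrates it.
K = 2000.0   # wall stiffness, N/m (hypothetical)
B = 5.0      # damping, N*s/m (hypothetical)

def wall_force(x, v):
    """Force commanded to the haptic device at one servo tick.

    x: penetration depth into the wall (m), positive when inside
    v: penetration velocity (m/s)
    """
    if x <= 0.0:
        return 0.0           # outside the wall: no force
    return -(K * x + B * v)  # spring-damper pushback inside the wall
```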

248 citations


Journal ArticleDOI
TL;DR: An evaluation of the effects of automation level and decision-aid fidelity on the number of simulated remotely operated vehicles that could be successfully controlled by a single operator during a target acquisition task indicates that an automation level incorporating management-by-consent had some clear performance advantages over the more autonomous and less autonomous levels of automation.
Abstract: Remotely operated vehicles (ROVs) are vehicular robotic systems that are teleoperated by a geographically separated user. Advances in computing technology have enabled ROV operators to manage multiple ROVs by means of supervisory control techniques. The challenge of incorporating telepresence in any one vehicle is replaced by the need to keep the human "in the loop" of the activities of all vehicles. An evaluation was conducted to compare the effects of automation level and decision-aid fidelity on the number of simulated remotely operated vehicles that could be successfully controlled by a single operator during a target acquisition task. The specific ROVs instantiated for the study were unmanned air vehicles (UAVs). Levels of automation (LOAs) included manual control, management-by-consent, and management-by-exception. Levels of decision-aid fidelity (100% correct and 95% correct) were achieved by intentionally injecting error into the decision-aiding capabilities of the simulation. Additionally, the number of UAVs to be controlled varied (one, two, and four vehicles). Twelve participants acted as UAV operators. A mixed-subjects design was utilized (with decision-aid fidelity as the between-subjects factor), and participants were not informed of decision-aid fidelity prior to data collection. Dependent variables included mission efficiency, percentage of correct detections of incorrect decision aids, workload and situation awareness ratings, and trust in automation ratings. Results indicate that an automation level incorporating management-by-consent had some clear performance advantages over the more autonomous (management-by-exception) and less autonomous (manual control) levels of automation. However, automation level interacted with the other factors for subjective measures of workload, situation awareness, and trust. Additionally, although a 3D perspective view of the mission scene was always available, it was used only during low-workload periods and did not appear to improve the operator's sense of presence. The implications for ROV interface design are discussed, and future research directions are proposed.
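
For readers unfamiliar with the LOA terminology, the three levels differ in who authorizes the decision aid's recommendation. The sketch below reflects the standard definitions (manual: the operator decides alone; management-by-consent: automation proposes and waits for approval; management-by-exception: automation acts unless vetoed); the names and the five-second veto window are hypothetical, not details from the study.

```python
# Sketch of the three levels of automation (LOAs) compared in the study,
# based on standard definitions; interfaces here are hypothetical.
def act(loa, recommendation, operator):
    if loa == "manual":
        # Operator selects the target with no machine recommendation.
        return operator.choose()
    elif loa == "by_consent":
        # Automation recommends; nothing happens until the operator approves.
        if operator.approves(recommendation):
            return recommendation
        return operator.choose()
    elif loa == "by_exception":
        # Automation will execute unless the operator vetoes within a window.
        if operator.vetoes(recommendation, timeout_s=5.0):
            return operator.choose()
        return recommendation
```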

220 citations


Journal ArticleDOI
TL;DR: It is argued that the mental representation of possible actions should especially enhance spatial presence, and to a lesser extent the involvement and realness of a VE.
Abstract: It has long been argued that the possibility to interact in and with a virtual environment (VE) enhances the sense of presence. On the basis of a three-component model of presence, we specify this hypothesis and argue that the mental representation of possible actions should especially enhance spatial presence, and to a lesser extent the involvement and realness of a VE. We support this hypothesis in three studies. A correlative study showed that self-reported interaction possibilities correlated significantly with spatial presence, but not with the other two factors. A first experimental study showed that possible self-movement significantly increased spatial presence and realness. A second experimental study showed that even the illusion of interaction, with no actual interaction taking place, significantly increased spatial presence.

219 citations


Journal ArticleDOI
TL;DR: Visual path integration without any vestibular or kinesthetic cues can be sufficient for elementary navigation tasks like rotations, translations, and triangle completion.
Abstract: The literature often suggests that proprioceptive and especially vestibular cues are required for navigation and spatial orientation tasks involving rotations of the observer. To test this notion, we conducted a set of experiments in virtual environments in which only visual cues were provided. Participants had to execute turns, reproduce distances, or perform triangle completion tasks. Most experiments were performed in a simulated 3D field of blobs, thus restricting navigation strategies to path integration based on optic flow. For our experimental set-up (half-cylindrical 180 deg. projection screen), optic flow information alone proved to be sufficient for untrained participants to perform turns and reproduce distances with negligible systematic errors, irrespective of movement velocity. Path integration by optic flow was sufficient for homing by triangle completion, but homing distances were biased towards the mean response. Additional landmarks that were only temporarily available did not improve homing performance. However, navigation by stable, reliable landmarks led to almost perfect homing performance. Mental spatial ability test scores correlated positively with homing performance, especially for the more complex triangle completion tasks, suggesting that mental spatial abilities might be a determining factor for navigation performance. In summary, visual path integration without any vestibular or kinesthetic cues can be sufficient for elementary navigation tasks like rotations, translations, and triangle completion.
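
Triangle completion can be grounded with a small worked example of path integration: accumulate the outbound displacement, then home along its negation. The geometry below is an illustrative 3-4-5 triangle, not a stimulus from the experiments.

```python
import math

# Path integration for a triangle-completion trial (illustrative numbers).
# Walk leg 1, turn, walk leg 2; the correct homing response is minus the
# accumulated displacement.
leg1, turn_deg, leg2 = 4.0, 90.0, 3.0   # hypothetical trial geometry

heading = 0.0
x, y = leg1 * math.cos(heading), leg1 * math.sin(heading)
heading += math.radians(turn_deg)
x += leg2 * math.cos(heading)
y += leg2 * math.sin(heading)

home_dist = math.hypot(x, y)                  # 5.0 m for this 3-4-5 triangle
bearing = math.atan2(-y, -x) - heading        # turn needed to face home
bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
print(home_dist, math.degrees(bearing))       # 5.0 m, ~126.9 deg (left turn)
```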

184 citations


Journal ArticleDOI
TL;DR: This paper proposes an interaction method, called Just Follow Me (JFM), that uses an intuitive ghost metaphor and a first-person viewpoint for effective motion training, and evaluation results show that JFM produces training and transfer effects as good as, and in certain situations better than, those in the real world.
Abstract: Training is usually regarded as one of the most natural application areas of virtual reality (VR). To date, most VR-based training systems have been situation based, but this paper examines the utility of VR for a different class of training: learning to execute exact motions, which are often required in sports and the arts. In this paper, we propose an interaction method, called Just Follow Me (JFM), that uses an intuitive "ghost" metaphor and a first-person viewpoint for effective motion training. Using the ghost metaphor (GM), JFM visualizes the motion of the trainer in real time as a ghost (initially superimposed on the trainee) that emerges from one's own body. The trainee, who observes the motion from the first-person viewpoint, "follows" the ghostly master as closely as possible to learn the motion. Our basic hypothesis is that such a VR system can help a student learn motion effectively and quickly, comparably to indirect real-world teaching methods. Our evaluation results show that JFM produces training and transfer effects as good as, and in certain situations better than, those in the real world. We believe that this is due to the more direct and correct transfer of proprioceptive information from the trainer to the trainee.

183 citations


Journal ArticleDOI
TL;DR: The notion that presence may be considered as a selection mechanism that organizes the stream of sensory data into an environmental gestalt or perceptual hypothesis about the current environment is discussed, and physiological measures indicating breaks in presence are studied.
Abstract: This paper discusses the notion that presence may be considered as a selection mechanism that organizes the stream of sensory data into an environmental gestalt or perceptual hypothesis about the current environment. A particular environmental gestalt results in scan-sensing of the world in a particular pattern reminiscent of saccades and fixations in eye scan paths. The environment hypothesis is continually reverified, or else a break in presence occurs. Presence is therefore compared to visual hypothesis selection in the work of Richard Gregory and Lawrence Stark. The implications for measurement are discussed, and it is concluded that physiological measures indicating breaks in presence are worthy of study, and that the study of presence is also the study of what maintains an environmental gestalt.

161 citations


Journal ArticleDOI
TL;DR: A first pass at a conceptual framework is described and it is used to inform the design of different kinds of activities for children to experiment with to investigate how different MRE setups affected children's exploratory behavior and their understanding of them.
Abstract: How do we conceptualize and design mixed reality environments (MREs)? Here we describe a first pass at a conceptual framework and use it to inform the design of different kinds of activities for children to experiment with. Our aim was to investigate how different MRE setups affected children's exploratory behavior and their understanding of them. The familiar activity of color mixing was used: different setups were provided, where paint or light colors could be mixed by using either physical tools, digital tools, or a combination of these. The findings of our study showed that novel mixes of physical and digital "transforms" engendered much exploration and reflection.
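
The pedagogical contrast the setups exploit is that light mixes additively while paint mixes (roughly) subtractively. A minimal sketch of the two rules on RGB triples follows; the subtractive rule is a crude idealization, and real pigment mixing is considerably more complex.

```python
# Additive mixing (lights) vs. a crude subtractive approximation (paints).
def mix_light(c1, c2):
    # Additive: combine emitted intensities, clamped to the displayable range.
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

def mix_paint(c1, c2):
    # Subtractive (approximate): each pigment absorbs what the other reflects.
    return tuple(a * b // 255 for a, b in zip(c1, c2))

red, green = (255, 0, 0), (0, 255, 0)
print(mix_light(red, green))  # (255, 255, 0): yellow light
print(mix_paint(red, green))  # (0, 0, 0): dark, as ideal pigments absorb all
```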

151 citations


Journal ArticleDOI
TL;DR: Surprisingly, providing different combinations of visual, auditory, and vibrotactile feedback to the operator did not significantly change performance, however, there was an interaction between spatial ability and feedback condition that affected teleoperation performance.
Abstract: Teleoperation requires a complex combination of the operator's cognitive, perceptual, and motor skills. Our experiment tested the ability of subjects to teleoperate a remote robot under different conditions of increasing sensory feedback. We also evaluated each operator's spatial perception skills using a battery of tests to understand the effect of spatial perception on the operator's ability to perform the teleoperation task. The experiment showed that the spatial ability of an operator, as reflected by a test battery of two spatial recognition and two spatial manipulation tests, was significantly correlated with the ability to teleoperate the robot through a maze. Surprisingly, providing different combinations of visual, auditory, and vibrotactile feedback to the operator did not significantly change performance. However, there was an interaction between spatial ability and feedback condition that affected teleoperation performance.

Journal ArticleDOI
TL;DR: This first interface combines three technologies: augmented reality (AR), immersive virtual reality (VR), and computer vision-based hand and object tracking and explores alternative interface techniques, including a zoomable user interface, paddle interactions, and pen annotations.
Abstract: In this paper, we describe two explorations in the use of hybrid user interfaces for collaborative geographic data visualization. Our first interface combines three technologies: augmented reality (AR), immersive virtual reality (VR), and computer vision-based hand and object tracking. Wearing a lightweight display with an attached camera, users can look at a real map and see three-dimensional virtual terrain models overlaid on the map. From this AR interface, they can fly in and experience the model immersively, or use free hand gestures or physical markers to change the data representation. Building on this work, our second interface explores alternative interface techniques, including a zoomable user interface, paddle interactions, and pen annotations. We describe the system hardware and software and the implications for GIS and spatial science applications.

Journal ArticleDOI
TL;DR: This paper introduces a method for calibrating monocular optical see-through displays and extends it to stereo optical see-through displays in which the displays for both eyes are calibrated in a single procedure.
Abstract: Augmented reality (AR) is a technology in which a user's view of the real world is enhanced or augmented with additional information generated from a computer model. To have a working AR system, the see-through display system must be calibrated so that the graphics are properly rendered. The optical see-through systems present an additional challenge because, unlike the video see-through systems, we do not have direct access to the image data to be used in various calibration procedures. This paper reports on a calibration method we developed for optical see-through head-mounted displays. We first introduce a method for calibrating monocular optical see-through displays (that is, a display for one eye only) and then extend it to stereo optical see-through displays in which the displays for both eyes are calibrated in a single procedure. The method integrates the measurements for the camera and a six-degrees-of-freedom tracker that is attached to the camera to do the calibration. We have used both an off-the-shelf magnetic tracker as well as a vision-based infrared tracker we have built. In the monocular case, the calibration is based on the alignment of image points with a single 3D point in the world coordinate system from various viewpoints. In this method, the user interaction to perform the calibration is extremely easy compared to prior methods, and there is no requirement for keeping the head immobile while performing the calibration. In the stereo calibration case, the user aligns a stereoscopically fused 2D marker, which is perceived in depth, with a single target point in the world whose coordinates are known. As in the monocular case, there is no requirement that the user keep his or her head fixed.
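
The monocular procedure, as described, collects 2D-3D correspondences by having the user align an image point with one known world point from many head poses; from enough such pairs, a 3x4 projection can be fit linearly. The sketch below is a generic direct linear transform (DLT) in that spirit, not the authors' exact formulation.

```python
import numpy as np

def estimate_projection(world_pts, image_pts):
    """Direct linear transform: fit a 3x4 matrix P with x ~ P X (sketch).

    world_pts: (N, 3) points in tracker/world coordinates (N >= 6)
    image_pts: (N, 2) aligned display/pixel coordinates
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        Xh = [X, Y, Z, 1.0]
        rows.append([*Xh, 0, 0, 0, 0, *(-u * c for c in Xh)])
        rows.append([0, 0, 0, 0, *Xh, *(-v * c for c in Xh)])
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)       # least-squares null vector of A
    return Vt[-1].reshape(3, 4)       # projection, defined up to scale
```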

Journal ArticleDOI
TL;DR: Experiments demonstrated that human subjects could identify tissues with similar accuracy when performing a real or simulated cutting task, and the use of haptic recordings to generate the simulations was simple and efficient, but it lacked flexibility because only the information obtained during data acquisition could be displayed.
Abstract: The forces experienced while surgically cutting anatomical tissues from a sheep and two rats were investigated for three scissor types. Data were collected in situ using instrumented Mayo, Metzenbaum, and Iris scissors immediately after death to minimize postmortem effects. The force-position relationship, the frequency components present in the signal, the significance of the cutting rate, and other invariant properties were investigated after segmentation of the data into distinct task phases. Measurements were found to be independent of the cutting speed for Mayo and Metzenbaum scissors, but the results for Iris scissors were inconclusive. Sensitivity to cutting tissues longitudinally or transversely depended on both the tissue and the scissor type. Data from cutting three tissues (rat skin, liver, and tendon) with Metzenbaum scissors, as well as blank runs, were processed and displayed as haptic recordings through a custom-designed haptic interface. Experiments demonstrated that human subjects could identify tissues with similar accuracy when performing a real or simulated cutting task. The use of haptic recordings to generate the simulations was simple and efficient, but it lacked flexibility because only the information obtained during data acquisition could be displayed. Future experiments should account for the user grip, tissue thickness, tissue moisture content, hand orientation, and innate scissor dynamics. A database of the collected signals has been created on the Internet for public use at www.cim.mcgill.ca/~haptic/tissue/data.html.
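
A haptic recording, in this sense, is essentially a lookup of measured force against tool position. A minimal playback sketch follows; the arrays are hypothetical stand-ins for the published force-position data.

```python
import numpy as np

# Playback of a haptic recording (sketch). The recorded force-position
# curve for one tissue is resampled each servo tick at the current scissor
# angle; these arrays are placeholders, not the published data.
rec_angle = np.linspace(0.0, 0.6, 200)       # scissor opening angle (rad)
rec_force = np.abs(np.sin(rec_angle * 5.0))  # placeholder recorded force (N)

def playback_force(current_angle):
    # Interpolate the recorded force at the current position; interpolation
    # hides the finite sampling of the recording.
    return float(np.interp(current_angle, rec_angle, rec_force))
```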

Journal ArticleDOI
TL;DR: A new method for registration in augmented reality (AR) was developed that simultaneously tracks the position, orientation, and motion of the user's head, as well as estimating the three-dimensional (3D) structure of the scene.
Abstract: A new method for registration in augmented reality (AR) was developed that simultaneously tracks the position, orientation, and motion of the user's head, as well as estimating the three-dimensional (3D) structure of the scene. The method fuses data from head-mounted cameras and head-mounted inertial sensors. Two extended Kalman filters (EKFs) are used: one estimates the motion of the user's head and the other estimates the 3D locations of points in the scene. A recursive loop is used between the two EKFs. The algorithm was tested using a combination of synthetic and real data, and in general was found to perform well. A further test showed that a system using two cameras performed much better than a system using a single camera, although improving the accuracy of the inertial sensors can partially compensate for the loss of one camera. The method is suitable for use in completely unstructured and unprepared environments. Unlike previous work in this area, this method requires no a priori knowledge about the scene, and can work in environments in which the objects of interest are close to the user.
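
Both filters in the loop follow the usual EKF predict/update cycle. As a generic sketch of one such cycle (the motion and measurement models here are placeholders, not the paper's):

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One generic EKF predict/update cycle (sketch; models are placeholders).

    f, h: process and measurement functions; F, H: their Jacobians,
    evaluated at the current and predicted state respectively.
    """
    # Predict: propagate the state and covariance through the motion model.
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the measurement (e.g., tracked image features).
    y = z - h(x_pred)                        # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```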

Journal ArticleDOI
TL;DR: The results suggest that the additional idiothetic information afforded in the real-world and HMD conditions is useful for orientation purposes in the presented task domain.
Abstract: Two experiments examined perceived spatial orientation in a small environment as a function of experiencing that environment under three conditions: real-world, desktop-display (DD), and head-mounted display (HMD). Across the three conditions, participants acquired two targets located on a perimeter surrounding them, and attempted to remember the relative locations of the targets. Subsequently, participants were tested on how accurately and consistently they could point in the remembered direction of a previously seen target. Results showed that participants were significantly more consistent in the real-world and HMD conditions than in the DD condition. Further, it is shown that the advantages observed in the HMD and real-world conditions were not simply due to nonspatial response strategies. These results suggest that the additional idiothetic information afforded in the real-world and HMD conditions is useful for orientation purposes in our presented task domain. Our results are relevant to interface design issues concerning tasks that require spatial search, navigation, and visualization.

Journal ArticleDOI
TL;DR: An accurate perception of the distance between an object and a nearby surface can increase a viewer's sense of presence in an immersive environment, particularly when a user is performing actions that affect or are affected by this distance.
Abstract: An accurate perception of the distance between an object and a nearby surface can increase a viewer's sense of presence in an immersive environment, particularly when a user is performing actions that affect or are affected by this distance. Two experiments were conducted examining the effectiveness of stereoscopic viewing, shadows, and interreflections at conveying this distance information. Subjects performed simple tasks based on the perception of the distance between a fixed virtual table and an approaching block in a virtual environment. In the first experiment, subjects lowered a virtual block to a virtual table. For this task both stereoscopic viewing and shadows had statistically significant effects on subject performance. In the second experiment, subjects mechanically reported the perceived distance between a virtual block and virtual table. For this task, viewing condition, shadows, and interreflections were shown to be statistically significant distance cues.

Journal ArticleDOI
TL;DR: An attitude-measurement system (TISS-5-40) has been developed as a wearable sensor for individuals, comprising three fiber-optic gyroscopes and three accelerometers.
Abstract: An attitude-measurement system (TISS-5-40) has been developed as a wearable sensor for individuals. The equipment is an inertial sensor system comprising three fiber-optic gyroscopes and three accelerometers. Heading stability of 1 deg./hr. (1 σ) and attitude accuracy of ±0.5 deg. have been demonstrated. At present, some of these attitude-measurement systems have been applied in the field of mixed-reality technology, and users confirm and report their effectiveness (Hara, Anabuki, Satoh, Yamamoto, & Tamura, 2000).
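
Fusing rate gyroscopes (accurate short-term, drifting long-term) with accelerometers (noisy but drift-free gravity reference) is commonly done with a complementary filter. The one-axis sketch below illustrates the idea only; it is not the TISS-5-40's actual algorithm, and the blend factor is a hypothetical tuning.

```python
import math

# One-axis complementary filter (illustrative only, not the TISS-5-40's
# algorithm). Gyro integration tracks fast motion but drifts; the
# accelerometer's gravity direction corrects the drift slowly.
ALPHA = 0.98  # trust in the integrated gyro (hypothetical tuning)

def update_pitch(pitch, gyro_rate, ax, az, dt):
    pitch_gyro = pitch + gyro_rate * dt   # short-term: integrate angular rate
    pitch_accel = math.atan2(ax, az)      # long-term: gravity reference
                                          # (assumes this sensor mounting)
    return ALPHA * pitch_gyro + (1 - ALPHA) * pitch_accel
```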

Journal ArticleDOI
TL;DR: Accuracy and precision of rendered depth for near-field visualization were measured in a custom-designed bench prototype HMD and experimental results compared to theoretical predictions established from a computational model for rendering and presenting virtual images by Robinett and Rolland (1992).
Abstract: The utilization of head-mounted displays (HMDs) in high-end applications such as medical, engineering, and scientific visualization necessitates that the position of objects be rendered accurately and precisely. Accuracy and precision of rendered depth for near-field visualization were measured in a custom-designed bench prototype HMD. Experimental results were compared to theoretical predictions established from a computational model for rendering and presenting virtual images by Robinett and Rolland (1992). Such a theoretical model provided the graphics transformations necessary for rendered virtual objects to be perceived at the rendered depth in binocular HMDs. Three object shapes of various sizes were investigated under two methodologies: the method of constant stimuli modified for random size presentation, and the method of adjustments. Results show an accuracy of 2 mm and a precision of 8 mm for rendered depth in HMDs. Results of the assessment of rendered depth in HMDs for near-field visualization support employing the method of adjustments over the method of constant stimuli, whether or not the method of constant stimuli is modified for random size presentation.
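
The underlying geometry of rendered depth in a binocular HMD reduces, in the simplest pinhole view, to triangulating the two eyes' rays. The sketch below shows that textbook relation; it is a simplification in the spirit of the Robinett and Rolland model, not its full transformation chain.

```python
def rendered_depth(ipd, screen_dist, disparity):
    """Perceived depth of a fused stereo point (simple pinhole geometry).

    ipd:         interpupillary distance (m)
    screen_dist: distance from the eyes to the virtual image plane (m)
    disparity:   x_left - x_right on that plane (m); positive = crossed,
                 i.e. the point appears in front of the image plane
    """
    return ipd * screen_dist / (ipd + disparity)

# Example: 64 mm IPD, image plane at 0.8 m, 16 mm crossed disparity.
print(rendered_depth(0.064, 0.8, 0.016))  # 0.64 m, nearer than the plane
```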

Journal ArticleDOI
TL;DR: The current state of research in the localization of nearby sound sources is summarized and the technical challenges involved in the creation of a near-field virtual audio display are outlined.
Abstract: Although virtual audio displays are capable of realistically simulating relatively distant sound sources, they are not yet able to accurately reproduce the spatial auditory cues that occur when sound sources are located near the listener's head. Researchers have long recognized that the binaural difference cues that dominate auditory localization are independent of distance beyond 1 m but change systematically with distance when the source approaches within 1 m of the listener's head. Recent research has shown that listeners are able to use these binaural cues to determine the distances of nearby sound sources. However, technical challenges in the collection and processing of near-field head-related transfer functions (HRTFs) have thus far prevented the construction of a fully functional near-field audio display. This paper summarizes the current state of research in the localization of nearby sound sources and outlines the technical challenges involved in the creation of a near-field virtual audio display. The potential applications of near-field displays in immersive virtual environments and multimodal interfaces are also discussed.
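
The distance dependence of binaural cues inside 1 m can be illustrated with a crude point-source model that derives interaural time and level differences from path lengths alone (ignoring head shadowing and the measured HRTFs):

```python
import math

def binaural_cues(src_x, src_y, ear_half_span=0.09, c=343.0):
    """Crude point-source model of binaural distance cues (no head shadow).

    Ears at (+/-ear_half_span, 0) m; source at (src_x, src_y) m.
    Returns (itd_seconds, ild_db) from path lengths alone.
    """
    d_left = math.hypot(src_x + ear_half_span, src_y)
    d_right = math.hypot(src_x - ear_half_span, src_y)
    itd = (d_left - d_right) / c                # arrival-time difference
    ild = 20.0 * math.log10(d_left / d_right)   # 1/r level difference
    return itd, ild

# Same direction at ~1 m vs. ~0.25 m: the ILD grows as the source nears.
print(binaural_cues(0.71, 0.71))   # distant source, 45 deg
print(binaural_cues(0.18, 0.18))   # nearby source, same direction
```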

Journal ArticleDOI
TL;DR: A correlation study was conducted to demonstrate the reliability and validity of object-presence as a construct; the results exhibited a pattern evident in previous studies of presence, suggesting that object-presence and presence could be gender biased by the task to be completed or by the presence measure.
Abstract: A projection-augmented model is a type of nonimmersive, coincident haptic and visual display that uses a physical model as a three-dimensional screen for projected visual information. Supporting all physiological depth cues and two sensory modalities should create a strong sense of the object's existence. However, conventional measures of presence have been defined only for displays that surround and isolate a user from the real world. The idea of object-presence is thus suggested to measure "the subjective experience that a particular object exists in a user's environment, even when that object does not" (Stevens & Jerrams-Smith, 2000). A correlation study was conducted to demonstrate the reliability and validity of object-presence as a construct. The results of a modified Singer and Witmer Presence Questionnaire suggest the existence of a reliable construct that exhibits face validity. However, the Presence Questionnaire did not correlate significantly with a user's tendency to become immersed in traditional media, which would have supported the assertion that this construct was object-presence. Considering previous work, the results of the current correlation study exhibited a pattern evident in previous studies of presence, suggesting that object-presence and presence could be gender biased by the task to be completed or by the presence measure.

Journal ArticleDOI
TL;DR: This work describes how the tracking algorithm allows an EyeTap to alter the light from a particular portion of the scene to give rise to a computer-controlled, selectively mediated reality.
Abstract: Diminished reality is as important as augmented reality, and both are possible with a device called the Reality Mediator. Over the past two decades, we have designed, built, worn, and tested many different embodiments of this device in the context of wearable computing. Incorporated into the Reality Mediator is an "EyeTap" system, which is a device that quantifies and resynthesizes light that would otherwise pass through one or both lenses of the eye(s) of a wearer. The functional principles of EyeTap devices are discussed in detail. The EyeTap diverts into a spatial measurement system at least a portion of light that would otherwise pass through the center of projection of at least one lens of an eye of a wearer. The Reality Mediator has at least one mode of operation in which it reconstructs these rays of light, under the control of a wearable computer system. The computer system then uses new results in algebraic projective geometry and comparametric equations to perform head tracking, as well as to track motion of rigid planar patches present in the scene. We describe how our tracking algorithm allows an EyeTap to alter the light from a particular portion of the scene to give rise to a computer-controlled, selectively mediated reality. An important difference between mediated reality and augmented reality is the ability not just to augment but also to deliberately diminish or otherwise alter the visual perception of reality. For example, diminished reality allows additional information to be inserted without causing the user to experience information overload. Our tracking algorithm also takes into account the effects of automatic gain control, by performing motion estimation in both spatial and tonal motion coordinates.

Journal ArticleDOI
TL;DR: Early lessons in accommodating the needs of several interconnected user groups are presented as the use of the Active Worlds client/server technology for implementation of a 3-D multiuser virtual science museum, SciCentr, that incorporates interactive simulation-based exhibits is explored.
Abstract: As a focus of its exploration of desktop 3-D environments for science outreach, the Cornell Theory Center (CTC), Cornell University's high-performance computing center, has been exploring the use of the Active Worlds client/server technology for implementation of a 3-D multiuser virtual science museum, SciCentr, that incorporates interactive simulation-based exhibits. We present here early lessons in accommodating the needs of several interconnected user groups as we move forward with establishing the SciCentr community within the greater educational community of the Active Worlds Educational Universe (AWEDU) and the Contact Consortium's VLearn3D initiative. We learned that we must provide the user communities with both social and spatial frameworks within which to work and play. Social support ranges from one-on-one, over-the-shoulder help, to guidance and training within the environment, to coordination of "inworld" activities and in-person pizza parties. Spatial design requirements depend on the activities of the user group and benefit from study of real- and virtual-world examples. Our experience to date with a pilot group of teenaged participants is encouraging, and we believe that this medium has potential as a resource for constructivist informal science and technology education.

Journal ArticleDOI
TL;DR: In this article, the effects on postural stability of varying field of view (FOV), image resolution, and scene content in an immersive visual display have been examined for evaluating presence and simulator sickness in virtual environments.
Abstract: Image quality issues such as field of view (FOV) and resolution are important for evaluating "presence" and simulator sickness (SS) in virtual environments (VEs). This research examined effects on postural stability of varying FOV, image resolution, and scene content in an immersive visual display. Two different scenes (a photograph of a fountain and a simple radial pattern) at two different resolutions were tested using six FOVs (30, 60, 90, 120, 150, and 180 deg.). Both postural stability, recorded by force plates, and subjective difficulty ratings varied as a function of FOV, scene content, and image resolution. Subjects exhibited more balance disturbance and reported more difficulty in maintaining posture in the wide-FOV, high-resolution, and natural scene conditions.

Journal ArticleDOI
TL;DR: The concept and merits of Share-Z, a client/server depth-sensing scheme in which multiple clients can share the 3-D information of the server, are discussed, and an experimental system developed to demonstrate its feasibility is described.
Abstract: In mixed reality, occlusions and shadows are important to realize a natural fusion between the real and virtual worlds. In order to achieve this, it is necessary to acquire dense depth information of the real world from the observer's viewing position. The depth sensor must be attached to the see-through HMD of the observer because he/she moves around. The sensor should be small and light enough to be attached to the HMD and should be able to produce a reliable dense depth map at video rate. Unfortunately, however, no such depth sensors are available. We propose a client/server depth-sensing scheme to solve this problem. A server sensor located at a fixed position in the real world acquires the 3-D information of the world, and a client sensor attached to each observer produces the depth map from his/her viewing position using the 3-D information supplied from the server. Multiple clients can share the 3-D information of the server: we call it Share-Z. In this paper, the concept and merits of Share-Z are discussed. An experimental system developed to demonstrate the feasibility of Share-Z is also described.
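
The client's job in Share-Z amounts to reprojecting the server's 3-D points into its own camera to synthesize a depth map. A minimal pinhole sketch of that step follows (hole filling and sensor details are omitted; the interfaces are assumptions, not the system's actual code):

```python
import numpy as np

def depth_from_shared_points(pts_world, T_world_to_client, K, w, h):
    """Render a client-view depth map from the server's shared 3-D points.

    pts_world: (N, 3) points from the server sensor
    T_world_to_client: 4x4 rigid transform into the client camera frame
    K: 3x3 client camera intrinsics; (w, h): depth-map size in pixels
    """
    depth = np.full((h, w), np.inf)
    pts_h = np.hstack([pts_world, np.ones((len(pts_world), 1))])
    cam = (T_world_to_client @ pts_h.T).T[:, :3]
    cam = cam[cam[:, 2] > 0]                # keep points in front of camera
    uv = (K @ cam.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    ok = (0 <= u) & (u < w) & (0 <= v) & (v < h)
    for ui, vi, z in zip(u[ok], v[ok], cam[ok, 2]):
        depth[vi, ui] = min(depth[vi, ui], z)   # z-buffer: keep the nearest
    return depth
```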

Journal ArticleDOI
TL;DR: This paper describes how stereoscopic camera feedback from a remote vehicle, together with three-dimensional virtual cursors that the human operator can use to interactively measure and model real features and objects in the remote environment, is being used to improve the inspection of underground sewer pipes.
Abstract: The three-dimensional characterization and mapping of remote environments is an important task that generates a good deal of attention both by end users and by researchers across several fields of interest. In the mobile robotics community, a great deal of work has been done in equipping vehicles with sensors that can acquire three-dimensional and even multimodal information about the location and nature of features and objects in remote environments. However, the interpretation of such data using fully autonomous methods, such as computer vision, is usually a highly complex problem that, we believe, is much better suited to a human-oriented solution. In this paper, we describe our work in the development of augmented reality (AR) techniques for the telerobotic inspection and characterization of remote environments. We describe how we are using stereoscopic camera feedback from a remote vehicle and equipping the human operator with three-dimensional virtual cursors that can be used to interactively measure and model real features and objects in the remote environment. We include a description of the calibration techniques used to correctly align the real and virtual images, both statically and under vehicle and camera motion. We also describe how we are using our system to demonstrate the potential of AR for improving the inspection of underground sewer pipes.
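
Measuring a feature with a stereoscopic virtual cursor ultimately rests on triangulating the matched point in the two camera images. For a rectified, parallel-axis rig the relation is the standard one below; the focal length and baseline are hypothetical, not the vehicle's actual camera parameters.

```python
def triangulate(xl, xr, y, f=800.0, baseline=0.12):
    """Position of a feature from rectified stereo (parallel-axis sketch).

    xl, xr: horizontal pixel coordinates of the same feature in the left
            and right images (principal point subtracted); y: its row.
    f: focal length in pixels (hypothetical); baseline: camera gap in m.
    """
    disparity = xl - xr              # pixels; must be > 0 in this geometry
    Z = f * baseline / disparity     # depth in meters
    X = xl * Z / f
    Y = y * Z / f
    return X, Y, Z                   # feature position in the left frame
```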

Journal ArticleDOI
TL;DR: Different coordinated control aids are proposed to cope with collisions arising from delayed visual feedback from the remote location, and extensive simulations of various planar rearrangement tasks are performed, employing local and remote graphics simulators over an Ethernet LAN subject to a simulated communication delay.
Abstract: In this paper, various coordinated control schemes are explored in Multioperator-Multirobot (MOMR) teleoperation through a communication network with time delay. Over the past decades, problems and several notable results have been reported mainly for the Single-Operator-Single-Robot (SOSR) teleoperation system. Recently, the need for cooperation has rapidly emerged in many possible applications, such as plant maintenance, construction, and surgery, because multirobot cooperation would have a significant advantage over a single robot in such cases. Thus, there is a growing interest in the control of multirobot systems in remote teleoperation, too. However, the time delay over the network poses a more difficult problem for MOMR teleoperation systems and seriously affects their performance. In this work, our recent efforts devoted to the coordinated control of MOMR teleoperation are described. First, we build a virtual experimental test bed to investigate the cooperation between two telerobots in remote environments. Then, different coordinated control aids are proposed to cope with collisions arising from delayed visual feedback from the remote location. To verify the validity of the proposed schemes, we perform extensive simulations of various planar rearrangement tasks employing local and remote graphics simulators over an Ethernet LAN subject to a simulated communication delay.

Journal ArticleDOI
TL;DR: This paper uses a tracked probe to sample the objects' geometries and footage from the head-mounted cameras to capture textures, providing visual feedback during modeling by overlaying the model onto the real object in the user's field of view.
Abstract: This paper presents an interactive "what-you-see-is-what-you-get" (WYSIWYG) method for creating textured 3-D models of real objects using video see-through augmented reality. We use a tracked probe to sample the objects' geometries, and we acquire video images from the head-mounted cameras to capture textures. Our system provides visual feedback during modeling by overlaying the model onto the real object in the user's field of view. This visual feedback makes the modeling process interactive and intuitive.
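
Texture capture of this kind amounts to projecting each probed surface point into a tracked video frame to obtain its texture coordinates. Below is a minimal sketch under a pinhole model, with visibility checks omitted; the interfaces are assumptions, not the system's actual code.

```python
import numpy as np

def uv_for_vertex(p_world, T_world_to_cam, K, img_w, img_h):
    """Texture coordinate of a probed vertex in one captured video frame.

    Sketch: project the vertex with the tracked camera pose, then
    normalize to [0, 1] UV space. Occlusion handling is omitted.
    """
    p = T_world_to_cam @ np.append(p_world, 1.0)
    if p[2] <= 0:
        return None                      # behind the camera: not visible
    uvw = K @ p[:3]
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    return u / img_w, v / img_h          # normalized texture coordinates
```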

Journal ArticleDOI
TL;DR: Three VE navigation training aids were developed: local and global orientation cues, aerial views, and a themed environment enhanced with sights and sounds and divided into four distinct sectors; the effectiveness of the aids seemed to depend on how they were used during training.
Abstract: Virtual environments (VEs) have been used successfully to train wayfinders to navigate through buildings and learn their layout. At the same time, however, VE deficiencies have, for many, reduced the effectiveness of VEs for training spatial tasks. In an effort to improve VE effectiveness, we conducted research to determine if certain unique capabilities of VEs could compensate for their deficiencies. Research participants were required to learn the layout or configuration of one floor of an office building as portrayed in a VE. To improve spatial learning, we developed three VE navigation training aids: local and global orientation cues, aerial views, and a themed environment enhanced with sights and sounds and divided into four distinct sectors. The navigation aids were provided during the training but were not available during testing of survey knowledge. Of the three training aids investigated, only the aerial views were effective in improving performance on the survey knowledge tests. The effectiveness of the navigation aids seemed to depend on how they were used during training. A retention test given one week after training indicated that spatial knowledge acquired in a VE diminished little over the one-week retention interval.

Journal ArticleDOI
TL;DR: A comprehensive structured methodology for building VR systems, called CLEVR (Concurrent and LEvel by Level Development of VR System), which combines several conventional and new concepts, such as the simultaneous consideration of form, function, and behavior, hierarchical modeling and top-down creation of LODs.
Abstract: The development and maintenance of a virtual reality (VR) system requires in-depth knowledge and understanding in many different disciplines. Three major features that distinguish VR systems are real-time performance while maintaining acceptable realism and presence, objects with two clearly distinct yet inter-related aspects like geometry/structure and function/behavior, and the still experimental nature of multi-modal interaction design. Unti, now, little attention has been paid to methods and tools for the structured development of VR software that addresses these features. Many VR application development projects proceed by modeling needed objects or conventional CAD systems then programming the system using simulation packages. Usually, these activities are carried out without much planning, which may be acceptable for only small-scale or noncritical demonstration systems. However, for VR to be taken seriously as a media technology, a structural approach to developing VR applications is required for the construction of large-scale VR worlds, and this will undoubtedly involve and require complex resource management, abstractions for basic system/object functionalities and interaction tasks, and integration and easy plug-ins of different input and output methods. In this paper, we assembled a comprehensive structured methodology for building VR systems. called CLEVR (Concurrent and LEvel by Level Development of VR System), which combines several conventional and new concepts. For instance, we employ concepts such as the simultaneous consideration of form. function and behavior, hierarchical modeling and top-down creation of LODs (levels of detail), incremental execution and performance tuning, user task and interaction modeling, and compositional re-use of VR objects. The basic underlying modeling approach is to design VR objects (and the scenes they compose) hierarchically and incrementally, considering their realism, presence, behavioral correctness, performance, and even usability in a spiral manner. To support this modeling strategy, we developed a collection of computer-aided tools called P-VoT (POSTECH-Virtual reality system development Tool). We demonstrate our approach by illustrating a step-by-step design of a virtual ship simulator using the CLEVR/P-VoT, and demonstrate the effectiveness of our method in terms of the quality (performance and correctness) of the resulting software and reduced effort in its development and maintenance.