
Showing papers in "Presence: Teleoperators & Virtual Environments in 2000"


Journal ArticleDOI
TL;DR: It is concluded that, although such questionnaires may be useful when all subjects experience the same type of environment, their utility is doubtful for the comparison of experiences across environments, such as immersive virtual compared to real, or desktop compared to immersive virtual.
Abstract: A between-group experiment was carried out to assess whether two different presence questionnaires can distinguish between real and virtual experiences. One group of ten subjects searched for a box in a real office environment. A second group of ten subjects carried out the same task in a virtual environment that simulated the same office. Immediately after their experience, subjects were given two different presence questionnaires in randomized order: the Witmer and Singer Presence Questionnaire (WS), and the questionnaire developed by Slater, Usoh, and Steed (SUS). The paper argues that questionnaires should be able to pass a “reality test” whereby under current conditions the presence scores should be higher for real experiences than for virtual ones. Nevertheless, only the SUS had a marginally higher mean score for the real compared to the virtual, and there was no significant difference at all between the WS mean scores. It is concluded that, although such questionnaires may be useful when all subjects experience the same type of environment, their utility is doubtful for the comparison of experiences across environments, such as immersive virtual compared to real, or desktop compared to immersive virtual.

735 citations


Journal ArticleDOI
TL;DR: A new measure for presence in immersive virtual environments (VEs) that is based on data that can be unobtrusively obtained during the course of a VE experience is described and lends support to interaction paradigms that are based on maximizing the match between sensory data and proprioception.
Abstract: This paper describes a new measure for presence in immersive virtual environments (VEs) that is based on data that can be unobtrusively obtained during the course of a VE experience. At different times during an experience, a participant will occasionally switch between interpreting the totality of sensory inputs as forming the VE or the real world. The number of transitions from virtual to real is counted, and, using some simplifying assumptions, a probabilistic Markov chain model can be constructed to model these transitions. This model can be used to estimate the equilibrium probability of being “present” in the VE. This technique was applied in the context of an experiment to assess the relationship between presence and body movement in an immersive VE. The movement was that required by subjects to reach out and touch successive pieces on a three-dimensional chess board. The experiment included twenty subjects, ten of whom had to reach out to touch the chess pieces (the active group) and ten of whom only had to click a handheld mouse button (the control group). The results revealed a significant positive association in the active group between body movement and presence. The results lend support to interaction paradigms that are based on maximizing the match between sensory data and proprioception.

585 citations
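The two-state Markov model sketched in the abstract above can be written down in a few lines: count a participant's reported transitions between "present in the VE" (V) and "attending to the real world" (R), row-normalise into a transition matrix, and solve for the stationary probability of the V state. The transition counts below are invented for illustration, not taken from the study.

```python
# Hypothetical per-interval transition counts reported by one participant.
# V = interprets sensory input as the VE, R = as the real world.
counts = {("V", "V"): 40, ("V", "R"): 5, ("R", "V"): 4, ("R", "R"): 6}

# Row-normalise the counts into transition probabilities.
p_VR = counts[("V", "R")] / (counts[("V", "V")] + counts[("V", "R")])
p_RV = counts[("R", "V")] / (counts[("R", "V")] + counts[("R", "R")])

# For a two-state Markov chain the equilibrium (stationary) probability
# of being in state V has the closed form pi_V = p_RV / (p_VR + p_RV).
pi_V = p_RV / (p_VR + p_RV)
print(f"estimated equilibrium probability of presence: {pi_V:.3f}")
```

With these made-up counts the participant breaks presence rarely and returns to it quickly, so the estimated equilibrium probability of being present comes out high.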


Journal ArticleDOI
TL;DR: The results suggest that the immersed person tended to emerge as the leader in the virtual group, but not in the real meeting; group accord tended to be higher in the real meeting than in the virtual meeting, while accord in the group increased with presence, the performance of the group, and the presence of women.
Abstract: This paper describes an experiment that compares behavior in small groups when its members carry out a task in a virtual environment (VE) and then continue the same task in a similar, real-world environment. The purpose of the experiment was not to examine task performance, but to compare various aspects of the social relations among the group members in the two environments. Ten groups of three people each, who had never met before, met first in a shared VE and carried out a task that required the identification and solution of puzzles that were presented on pieces of paper displayed around the walls of a room. The puzzle involved identifying that the same-numbered words across all the pieces of paper formed a riddle or saying. The group continued this task for fifteen minutes, and then stopped to answer a questionnaire. The group then reconvened in the real world and continued the same task. The experiment also required one of the group members to continually monitor a particular one of the others in order to examine whether social discomfort could be generated within a VE. In each group, there was one immersed person with a head-mounted display and head-tracking and two non-immersed people who experienced the environment on a workstation display. The results suggest that the immersed person tended to emerge as the leader in the virtual group, but not in the real meeting. Group accord tended to be higher in the real meeting than in the virtual meeting. Socially conditioned responses such as embarrassment could be generated in the virtual meeting, even though the individuals were presented to one another by very simple avatars. The study also found a positive relationship between presence (the sense of being in a place) and copresence (the sense of being with the other people). Accord in the group increased with presence, the performance of the group, and the presence of women in the group. The study is seen as part of a much larger planned study, for which this experiment was used to begin to understand the issues involved in comparing real and virtual meetings.

384 citations


Journal ArticleDOI
TL;DR: This work compares two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices, as well as hybrid optical/video technology.
Abstract: We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.

333 citations


Journal ArticleDOI
TL;DR: Postural responses to a video sequence filmed from the hood of a car traversing a rally track, presented stereoscopically and monoscopically, demonstrated a positive effect of stereoscopic presentation on the magnitude of the postural responses elicited.
Abstract: We recently reported that direct subjective ratings of the sense of presence are potentially unstable and can be biased by previous judgments of the same stimuli (Freeman et al., 1999). Objective measures of the behavioral realism elicited by a display offer an alternative to subjective ratings. Behavioral measures and presence are linked by the premise that, when observers experience a mediated environment (VE or broadcast) that makes them feel present, they will respond to stimuli within the environment as they would to stimuli in the real world. The experiment presented here measured postural responses to a video sequence filmed from the hood of a car traversing a rally track, using stereoscopic and monoscopic presentation. Results demonstrated a positive effect of stereoscopic presentation on the magnitude of postural responses elicited. Posttest subjective ratings of presence, vection, and involvement were also higher for stereoscopically presented stimuli. The postural and subjective measures were not significantly correlated, indicating that nonproprioceptive postural responses are unlikely to provide accurate estimates of presence. Such postural responses may prove useful for the evaluation of displays for specific applications and in the corroboration of group subjective ratings of presence, but cannot be taken in place of subjective ratings.

265 citations


Journal ArticleDOI
TL;DR: It is confirmed that longer exposures produce more symptoms and that total sickness subsides over repeated exposures, and the generalizability of the relationships among sickness, exposure duration, and repeated exposures was verified.
Abstract: Although simulator sickness is known to increase with protracted exposure and to diminish with repeated sessions, limited systematic research has been performed in these areas. This study reviewed the few studies with sufficient information available to determine the effect that exposure duration and repeated exposure have on motion sickness. This evaluation confirmed that longer exposures produce more symptoms and that total sickness subsides over repeated exposures. Additional evaluation was performed to investigate the precise form of this relationship and to determine whether the same form was generalizable across varied simulator environments. The results indicated that exposure duration and repeated exposures are significantly linearly related to sickness outcomes (duration being positively related and repetition negatively related to total sickness). This was true over diverse systems and large subject pools. This result verified the generalizability of the relationships among sickness, exposure duration, and repeated exposures. Additional research is indicated to determine the optimal length of a single exposure and the optimal intersession interval to facilitate adaptation.

248 citations


Journal ArticleDOI
TL;DR: This paper investigates the role of global and local landmarks in virtual environment navigation in Hexatown, a regular hexagonal grid of streets and junctions, and shows that both local and global landmarks are used in wayfinding decisions.
Abstract: In visual navigation, landmarks can be used in a number of different ways. In this paper, we investigate the role of global and local landmarks in virtual environment navigation. We performed an experiment in a virtual environment called “Hexatown”, consisting of a regular hexagonal grid of streets and junctions. Each junction was identified by the presence of distinct local landmarks (buildings, phone box, and so on). Additionally, compass information or a global frame of reference was provided by global landmarks (hilltop, television tower, and city skyline). According to participants' movement decisions, egomotion was simulated, and displayed on a 180 deg. projection screen. Participants learned the route back and forth between two local landmarks. In the test phase, individual junctions were approached and the participant's movement decision was recorded. We performed two experiments involving landmark changes after learning. In the first, we used conflicting cues by transposing landmarks. In the second experiment, we reduced either local or global landmark information. Results show that both local and global landmarks are used in wayfinding decisions. However, different participants rely on different strategies. In the first experiment (cue conflict) for example, some of the participants used only local landmarks while others relied exclusively on global landmarks. Other participants used local landmarks at one location and global landmarks at the other. When removing one landmark type in the second experiment, the other type could be used by almost all participants, indicating that information about the neglected landmark type was present in memory.

248 citations


Journal ArticleDOI
TL;DR: A testbed developed at the San Francisco, Berkeley, and Santa Barbara campuses of the University of California for research in understanding, assessing, and training surgical skills is described, including virtual environments for training perceptual motor skills, spatial skills, and critical steps of surgical procedures.
Abstract: With the introduction of minimally invasive techniques, surgeons must learn skills and procedures that are radically different from traditional open surgery. Traditional methods of surgical training that were adequate when techniques and instrumentation changed relatively slowly may not be as efficient or effective in training substantially new procedures. Virtual environments are a promising new medium for training. This paper describes a testbed developed at the San Francisco, Berkeley, and Santa Barbara campuses of the University of California for research in understanding, assessing, and training surgical skills. The testbed includes virtual environments for training perceptual motor skills, spatial skills, and critical steps of surgical procedures. Novel technical elements of the testbed include a four-DOF haptic interface, a fast collision detection algorithm for detecting contact between rigid and deformable objects, and parallel processing of physical modeling and rendering. The major technical challenge in surgical simulation to be investigated using the testbed is the development of accurate, real-time methods for modeling deformable tissue behavior. Several simulations have been implemented in the testbed, including environments for assessing performance of basic perceptual motor skills, training the use of an angled laparoscope, and teaching critical steps of the cholecystectomy, a common laparoscopic procedure. The major challenges of extending and integrating these tools for training are discussed.

198 citations


Journal ArticleDOI
TL;DR: The results support the use of a simplified model of material in virtual auditory environments by demonstrating that similarity judgments in the first two studies were specific to instructions to judge material, and by confirming the greater importance of decay.
Abstract: Contact sounds can provide important perceptual cues in virtual environments. We investigated the relation between material perception and variables that govern the synthesis of contact sounds. A shape-invariant, auditory-decay parameter was a powerful determinant of the perceived material of an object. Subjects judged the similarity of synthesized sounds with respect to material (Experiment 1 and 2) or length (Experiment 3). The sounds corresponded to modal frequencies of clamped bars struck at an intermediate point, and they varied in fundamental frequency and frequency-dependent rate of decay. The latter parameter has been proposed as reflecting a shape-invariant material property: damping. Differences between sounds in both decay and frequency affected similarity judgments (magnitude of similarity and judgment duration), with decay playing a substantially larger role. Experiment 2, which varied the initial sound amplitude, showed that decay rate---rather than total energy or sound duration---was the critical factor in determining similarity. Experiment 3 demonstrated that similarity judgments in the first two studies were specific to instructions to judge material. Experiment 4, in which subjects assigned the sounds to one of four material categories, showed an influence of frequency and decay, but confirmed the greater importance of decay. Decay parameters associated with each category were estimated and found to correlate with physical measures of damping. The results support the use of a simplified model of material in virtual auditory environments.

190 citations
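The decay-based material model described in the abstract above can be sketched as additive modal synthesis: each partial of a struck clamped bar decays at a rate proportional to its frequency through a single damping parameter, which acts as the shape-invariant material cue. The modal ratios, frequencies, and damping values below are illustrative only, not the study's stimulus parameters.

```python
import math

def struck_bar(f0, damping, duration=1.0, sr=8000):
    """Synthesise a struck-bar contact sound as a sum of decaying modes.

    Each mode n at frequency f_n decays at rate damping * f_n, so a single
    'damping' value plays the role of the material parameter: large values
    give fast, wood-like decay; small values give ringing, metal-like decay.
    """
    ratios = [1.0, 6.27, 17.55]  # approximate clamped-bar partial ratios
    n = int(duration * sr)
    samples = []
    for i in range(n):
        t = i / sr
        s = 0.0
        for r in ratios:
            fn = f0 * r
            s += math.exp(-damping * fn * t) * math.sin(2 * math.pi * fn * t)
        samples.append(s / len(ratios))
    return samples

wood = struck_bar(f0=200.0, damping=0.02)    # fast decay, wood-like
metal = struck_bar(f0=200.0, damping=0.001)  # slow decay, metal-like
```

Comparing the energy remaining in the tail of each signal makes the perceptual point of the study concrete: the heavily damped "wood" sound dies away long before the lightly damped "metal" one.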


Journal ArticleDOI
TL;DR: The notion of togetherness, the sense of people being together in a shared space, is introduced, which is the counterpart for shared VEs to the presence of an individual in a VE.
Abstract: This Forum article discusses the relationships among people, their avatars, and their virtual environment workstations in a shared virtual environment. It introduces the notion of togetherness, the sense of people being together in a shared space, which is the counterpart for shared VEs to the presence of an individual in a VE. The role of tactual communication is emphasized as being fundamental to togetherness.

170 citations


Journal ArticleDOI
TL;DR: The SVE library provides more-comprehensive support for developing new VE applications and better supports the various device configurations of VE applications than current systems for 3-D graphical applications.
Abstract: As virtual environment (VE) technology becomes accessible to (and affordable for) an ever-widening audience of users, the demand for VE applications will increase. Tools that assist and facilitate the development of these applications, therefore, will also be in demand. To support our efforts in quickly designing and implementing VE applications, we have developed the Simple Virtual Environment (SVE) library. In this article, we describe the characteristics of the library that support the development of both simple and complex VE applications. Simple applications are created by novice programmers or for rapid prototyping. More-complex applications incorporate new user input and output devices, as well as new techniques for user interaction, rendering, or animation. The SVE library provides more-comprehensive support for developing new VE applications and better supports the various device configurations of VE applications than current systems for 3-D graphical applications. The development of simple VE applications is supported through provided default interaction, rendering, and user input and output device handling. The library's framework includes an execution framework that provides structure for incrementally adding complexity to selected tasks of an application, and an environment model that provides a layer of abstraction between the application and the device configuration actually used at runtime. This design supports rapid development of VE applications through incremental development, code reuse, and independence from hardware resources during the development.
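The layered design the abstract describes, an environment model that insulates the application from the device configuration chosen at runtime, can be sketched with a simple abstraction pattern. The class and method names below are illustrative, not SVE's actual API.

```python
from abc import ABC, abstractmethod

class TrackerDevice(ABC):
    """Abstract input device; concrete hardware is bound at runtime."""
    @abstractmethod
    def head_pose(self):
        """Return the (x, y, z) head position in world coordinates."""

class KeyboardTracker(TrackerDevice):
    """Desktop fallback: pose driven by keys instead of a real tracker,
    letting the same application run without VE hardware."""
    def __init__(self):
        self.pos = [0.0, 1.7, 0.0]  # default standing eye height
    def head_pose(self):
        return tuple(self.pos)

class Environment:
    """The layer the application talks to; it never sees the device."""
    def __init__(self, tracker: TrackerDevice):
        self.tracker = tracker
    def viewpoint(self):
        return self.tracker.head_pose()

# Swap KeyboardTracker for a head-tracker class without touching app code.
env = Environment(KeyboardTracker())
print(env.viewpoint())  # (0.0, 1.7, 0.0)
```

The design choice this illustrates is the one the abstract credits for hardware independence: application code depends only on the abstract layer, so device handling can change between development and deployment.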

Journal ArticleDOI
TL;DR: The current limits for realism and the approaches to reaching and surpassing those limits are explored by describing and analyzing the most important components of VR-based endoscopic simulators.
Abstract: Virtual reality (VR)-based surgical simulator systems offer a very elegant approach to enriching and enhancing traditional training in endoscopic surgery. However, while a number of VR simulator systems have been proposed and realized in the past few years, most of these systems are far from being able to provide a reasonably realistic surgical environment. We explore the current limits for realism and the approaches to reaching and surpassing those limits by describing and analyzing the most important components of VR-based endoscopic simulators. The feasibility of the proposed techniques is demonstrated on a modular prototype system that implements the basic algorithms for VR training in gynaecologic laparoscopy.

Journal ArticleDOI
TL;DR: Psychophysical experiments showed that subjects preferred inertial-force feedback to a spring-feedback force proportional to position or to position control, where the force feedback maintained a force of zero on the subject.
Abstract: The inertial force due to the acceleration of a locomotion interface is identified as a difference between virtual and real-world locomotion. To counter the inertial force, inertial-force feedback was implemented for the Treadport, a locomotion interface. A force controller was designed for a mechanical tether to apply the feedback force to the user. For the case of the user accelerating forward from rest, psychophysical experiments showed that subjects preferred inertial-force feedback to a spring-feedback force proportional to position or to position control, where the force feedback maintained a force of zero on the subject.

Journal ArticleDOI
TL;DR: This work presents a model to support the initial steps in the design process of multiplayer games, defined in terms of the characteristics that are both inherent and special to multiplayer games but also related to the relevant elements of a game in general.
Abstract: Extensive research has shown that the act of play is extremely important in the lives of human beings. It is thus not surprising that games have a long and continuing history in the development of almost every culture and society. The advent of computers and technology in general has also been akin to the need for entertainment that every human being seeks. However, a curious dichotomy exists in the nature of electronic games: the vast majority of electronic games are individual in nature whereas the nonelectronic ones are collective by nature. On the other hand, recent technological breakthroughs are finally allowing for the implementation of electronic multiplayer games. Because of the limited experience in electronic, multiplayer game design, it becomes necessary to adapt existing expertise in the area of single-player game design to the realm of multiplayer games. This work presents a model to support the initial steps in the design process of multiplayer games. The model is defined in terms of the characteristics that are both inherent and special to multiplayer games but also related to the relevant elements of a game in general. Additionally, the model is used to assist in the design of two multiplayer games. “One of the most difficult tasks people can perform, however much others may despise it, is the invention of good games ...”

Journal ArticleDOI
TL;DR: The design and implementation of a distributed virtual reality platform that was developed to support the training of multiple users who must perform complex tasks in which situation assessment and critical thinking are the primary components of success are presented.
Abstract: This paper presents the design and implementation of a distributed virtual reality (VR) platform that was developed to support the training of multiple users who must perform complex tasks in which situation assessment and critical thinking are the primary components of success. The system is fully immersive and multimodal, and users are represented as tracked, full-body figures. The system supports the manipulation of virtual objects, allowing users to act upon the environment in a natural manner. The underlying intelligent simulation component creates an interactive, responsive world in which the consequences of such actions are presented within a realistic, time-critical scenario. The focus of this work has been on the training of medical emergency-response personnel. BioSimMER, an application of the system to training first responders to an act of bio-terrorism, has been implemented and is presented throughout the paper as a concrete example of how the underlying platform architecture supports complex training tasks. Finally, a preliminary field study was performed at the Texas Engineering Extension Service Fire Protection Training Division. The study focused on individual, rather than team, interaction with the system and was designed to gauge user acceptance of VR as a training tool. The results of this study are presented.

Journal ArticleDOI
TL;DR: The first taxonomy for large-scale distributed simulations is presented, describing the purpose of the system, its scope, and the salient characteristics of its interest management scheme and classify the ten systems according to the taxonomy.
Abstract: Large-scale distributed simulations model the activities of thousands of entities interacting in a virtual environment simulated over wide-area networks. Originally these systems used protocols that dictated that all entities broadcast messages about all activities, including remaining immobile or inactive, to all other entities, resulting in an explosion of incoming messages for all entities, most of which were of no interest. Using a filtering mechanism called interest management, some of these systems now allow entities to express interest in only the subset of information that is relevant to them. This paper surveys ten such systems, describing the purpose of the system, its scope, and the salient characteristics of its interest management scheme. We present the first taxonomy for such systems and classify the ten systems according to the taxonomy. The analysis of the classification reveals the fundamental nature of interest management and points to potential areas of research.
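The core idea of interest management surveyed above can be sketched as a grid-based publish/subscribe filter: entities subscribe to the spatial cells near them and receive only updates published in those cells, instead of every broadcast. The scheme, cell size, and names below are a generic illustration, not any particular surveyed system.

```python
from collections import defaultdict

CELL = 10.0  # width of a square grid cell in world units (illustrative)

def cell_of(pos):
    """Map a 2-D world position to its grid cell."""
    return (int(pos[0] // CELL), int(pos[1] // CELL))

class InterestManager:
    def __init__(self):
        self.subs = defaultdict(set)  # cell -> set of subscriber ids

    def subscribe(self, entity_id, pos, radius=1):
        """Express interest in the cell containing pos and its neighbours."""
        cx, cy = cell_of(pos)
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                self.subs[(cx + dx, cy + dy)].add(entity_id)

    def recipients(self, pos):
        """Entities whose interest region covers an update at pos."""
        return self.subs.get(cell_of(pos), set())

im = InterestManager()
im.subscribe("tank", (12.0, 12.0))
im.subscribe("jet", (95.0, 95.0))
# The nearby tank receives the update; the distant jet is filtered out.
print(im.recipients((14.0, 14.0)))  # {'tank'}
```

This is the message-explosion fix the paper describes: an update is delivered only to the subset of entities that declared interest in its region, rather than broadcast to all.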

Journal ArticleDOI
TL;DR: This Forum paper focuses on research concerned with the use of VE technology for training spatial behavior in the real world, and presents an overview of issues and problems relevant to conducting research in this area.
Abstract: There is currently much research activity involving virtual environments (VEs) and spatial behavior (spatial perception, cognition, and performance). After some initial remarks describing and categorizing the different types of research being conducted on VEs and spatial behavior, discussion in this Forum paper focuses on one specific type, namely, research concerned with the use of VE technology for training spatial behavior in the real world. We initially present an overview of issues and problems relevant to conducting research in this area, and then, in the latter portion of the paper, present an overview of the research that we believe needs to be done in this area. We have written this paper for the forum section of Presence because, despite its length, it is essentially an opinion piece. Our aim here is not to report the results of research in our own laboratory nor to review the literature, as other available papers already serve these goals. Rather, the primary purpose of this paper is to stimulate open discussion about needed future research. In general, we believe that such a discussion can serve the research establishment as much as reports of completed work.

Journal ArticleDOI
TL;DR: Findings strongly suggest that the quality of realism in VEs must be a function of both auditory and visual display fidelities inclusive of each other.
Abstract: The quality of realism in virtual environments (VEs) is typically considered to be a function of visual and audio fidelity mutually exclusive of each other. However, the VE participant, being human, is multimodal by nature. Therefore, in order to validate more accurately the levels of auditory and visual fidelity that are required in a virtual environment, a better understanding is needed of the intersensory or crossmodal effects between the auditory and visual sense modalities. To identify whether any pertinent auditory-visual cross-modal perception phenomena exist, 108 subjects participated in three experiments which were completely automated using HTML, Java, and JavaScript programming languages. Visual and auditory display quality perceptions were measured intra- and intermodally by manipulating the pixel resolution of the visual display and Gaussian white noise level, and by manipulating the sampling frequency of the auditory display and Gaussian white noise level. Statistically significant results indicate that high-quality auditory displays coupled with high-quality visual displays increase the quality perception of the visual displays relative to the evaluation of the visual display alone, and that low-quality auditory displays coupled with high-quality visual displays decrease the quality perception of the auditory displays relative to the evaluation of the auditory display alone. These findings strongly suggest that the quality of realism in VEs must be a function of both auditory and visual display fidelities inclusive of each other.

Journal ArticleDOI
TL;DR: The reported findings indicate that object manipulation times are superior when IOs are employed as the interaction device, and that IO devices could therefore be adopted in VEs to provide haptic feedback for diverse applications and, in particular, for assembly task planning.
Abstract: This paper reports on an investigation into the proposed usability of virtual reality for a manufacturing application such as the assembly of a number of component parts into a final product. Before the assembly task itself is considered, the investigation explores the use of VR for the training of human assembly operators and compares the findings to conventionally adopted techniques for parts assembly. The investigation highlighted several limitations of using VR technology. Most significant was the lack of haptic feedback provided by current input devices for virtual environments. To address this, an instrumented object (IO) was employed that enabled the user to pick up and manipulate the IO as the representation of a component from a product to be assembled. The reported findings indicate that object manipulation times are superior when IOs are employed as the interaction device, and that IO devices could therefore be adopted in VEs to provide haptic feedback for diverse applications and, in particular, for assembly task planning.

Journal ArticleDOI
TL;DR: The apparent stability of the virtual object was impaired by a time delay between the observers' head motions and the corresponding change in the object position on the display.
Abstract: Observers adjusted a pointer to match the depicted distance of a monocular virtual object viewed in a see-through, head-mounted display. Distance information was available through motion parallax produced as the observers rocked side to side. The apparent stability of the virtual object was impaired by a time delay between the observers' head motions and the corresponding change in the object position on the display. Localizations were made for four time delays (31 ms, 64 ms, 131 ms, and 197 ms) and three depicted distances (75 cm, 95 cm, and 113 cm). The errors in localizations increased systematically with time delay and depicted distance. A model of the results shows that the judgment error and lateral projected position of the virtual object are each linearly related to time delay.

Journal ArticleDOI
TL;DR: The MAGI (microscope-assisted guided interventions) augmented-reality system, which allows surgeons to view virtual features segmented from preoperative radiological images accurately overlaid in stereo in the optical path of a surgical microscope, has a theoretical overlay accuracy of better than 1 mm at the focal plane of the microscope.
Abstract: This paper describes the MAGI (microscope-assisted guided interventions) augmented-reality system, which allows surgeons to view virtual features segmented from preoperative radiological images accurately overlaid in stereo in the optical path of a surgical microscope. The aim of the system is to enable the surgeon to see in the correct 3-D position the structures that are beneath the physical surface. The technical challenges involved are calibration, segmentation, registration, tracking, and visualization. This paper details our solutions to these problems. As it is difficult to make reliable quantitative assessments of the accuracy of augmented-reality systems, results are presented from a numerical simulation, and these show that the system has a theoretical overlay accuracy of better than 1 mm at the focal plane of the microscope. Implementations of the system have been tested on volunteers, phantoms, and seven patients in the operating room. Observations are consistent with this accuracy prediction.

Journal ArticleDOI
TL;DR: This work found that participants have difficulty in expressing their experience within the constraints of more-traditional research methods, and investigated different forms of presence experience, including, in this terminology, social, environmentally anchored, and self-presence.
Abstract: Gilkey and Weisenberger (1995) discussed the experience of sound and its importance for a sense of presence within an encompassing virtual environment. In this paper, we develop Gilkey and Weisenberger's work in three ways. Firstly, we review theoretical work regarding the role of auditory information in perceptual experience. Secondly, we report on previous empirical studies of induced hearing loss that have implicitly addressed issues pertinent to an understanding of presence in virtual environments. We draw on this work to further inform the theoretical contribution made to the study of presence with regards to auditory experience. Thirdly, we report our empirical work on induced hearing loss, addressing issues associated with presence using both qualitative and quantitative methodologies. We report our findings and discuss methodological issues surrounding the investigation of presence. This work found that participants have difficulty in expressing their experience within the constraints of more-traditional research methods. Evidence emerged of different forms of presence experience, including, in our terminology, social, environmentally anchored, and self-presence. Finally, we discuss the implications of this work for the development of immersive virtual environments.

Journal ArticleDOI
TL;DR: Four alternative control designs of a teleoperated device were compared in a simulated endoscopic task and task completion time under normal visual-motor mapping was found to be significantly shorter than under reversed visual-Motor mapping, emphasizing the potential advantage of ateleoperated endoscopic system.
Abstract: Endoscopic surgery, while offering considerable gains for the patient, has created new difficulties for the surgeon. One problem is the fulcrum effect, which causes the movement of a surgical instrument, as seen on the monitor, to be in the opposite direction to the movement of the surgeon's hand. The problem has been shown to impede the acquisition of endoscopic skills. Teleoperated robotic arms may circumvent this problem by allowing different control-response relations. Four alternative control designs of a teleoperated device were compared in a simulated endoscopic task. A rigid teleoperated robotic arm with two degrees of freedom representing a surgical tool was coupled to a joystick in a position control mode. Feedback was provided through a video display. Participants without prior experience in endoscopy performed a target acquisition task, first by pointing the robotic arm at the targets, and later by maneuvering an object. Performance was measured under four different combinations of visual-motor mapping (normal/reversed) and joystick orientation (upwards/downwards). Task completion time under normal visual-motor mapping was found to be significantly shorter than under reversed visual-motor mapping, emphasizing the potential advantage of a teleoperated endoscopic system. The joystick's orientation affected the maneuvering of an object only under the reversed visual-motor mapping, implying that the positioning of a surgical tool and the manipulation of tissues or objects with the tool may be differentially affected by the control design.
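The fulcrum effect and its software correction reduce to a sign flip in the mapping from hand motion to on-screen tool motion. The function below is an illustrative sketch (names and axes are assumptions): "reversed" mimics a rigid tool pivoting at the entry port, while "normal" is the mapping a teleoperated arm can restore by inverting the axes in software.

```python
def tool_tip_motion(hand_dx, hand_dy, mapping="normal"):
    """Map a hand displacement to the on-screen tool-tip displacement.
    'reversed' models the fulcrum effect of a rigid endoscopic tool;
    'normal' is the direct mapping a teleoperated arm can restore."""
    if mapping == "reversed":
        return (-hand_dx, -hand_dy)
    return (hand_dx, hand_dy)
```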

Journal ArticleDOI
TL;DR: A qualitative study of navigation, wayfinding, and place experience within a virtual city and the implications for the construction of virtual environments modeled on real-world forms are considered.
Abstract: We report a qualitative study of navigation, wayfinding, and place experience within a virtual city. “Cityscape” is a virtual environment (VE), partially algorithmically generated and intended to be redolent of the aggregate forms of real cities. In the present study, we observed and interviewed participants during and following exploration of a desktop implementation of Cityscape. A number of emergent themes were identified and are presented and discussed. Observing the interaction with the virtual city suggested a continuous relationship between real and virtual worlds. Participants were seen to attribute real-world properties and expectations to the contents of the virtual world. The implications of these themes for the construction of virtual environments modeled on real-world forms are considered.

Journal ArticleDOI
TL;DR: The results of the research project include the development of the novel FSC algorithm, data on how time delays degrade performance of surgical tasks, and recommendations on how telesurgery should be performed to accommodate telecommunication time delays.
Abstract: This paper describes the testbed telesurgery system that was developed in MIT's Human Machine Systems Laboratory. This system was used to investigate the effects of communication time delays on controller stability and on the performance of surgical tasks. The system includes a bilateral force-reflecting teleoperator system, interchangeable surgical tools, audio and video communication between the master and slave sites, and methods to generate time delays between the sites. To compensate for the time delays, various control schemes were investigated, leading to the development and selection of fuzzy sliding control (FSC). With a stable teleoperator system, experiments in performing a variety of surgical exercises were conducted. These examined the performance of a telesurgeon working with a local assistant under a number of different time-delay scenarios, including synchronous and asynchronous force and audio/video feedback. The results of the research project include the development of the novel FSC algorithm, data on how time delays degrade the performance of surgical tasks, and recommendations on how telesurgery should be performed to accommodate telecommunication time delays.
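The FSC algorithm itself is not given in the abstract. As a hedged sketch of the controller family it belongs to, here is a conventional (non-fuzzy) sliding-mode law for a 1-DOF unit mass, with a saturation boundary layer where a fuzzy sliding controller would substitute fuzzy rules; all gains and the plant model are illustrative assumptions.

```python
def sliding_controller(e, e_dot, lam=2.0, K=10.0, phi=0.05):
    """Generic sliding-mode law: drive s = e_dot + lam*e toward zero.
    The saturation boundary layer of width phi limits chattering;
    a fuzzy sliding controller replaces it with fuzzy rules."""
    s = e_dot + lam * e
    sat = max(-1.0, min(1.0, s / phi))
    return -K * sat

def simulate(x0=1.0, dt=0.001, steps=5000):
    """1-DOF unit mass regulated to the origin (a = u), Euler integration."""
    x, v = x0, 0.0
    for _ in range(steps):
        u = sliding_controller(x, v)
        v += u * dt
        x += v * dt
    return x, v
```

Running `simulate()` drives position and velocity to (numerically) zero; inserting a round-trip delay between controller and plant can destabilize the same loop, which is the problem FSC addresses.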

Journal ArticleDOI
Michael Cohen1
TL;DR: A taxonomy of modal narrowcasting functions is proposed, and an audibility protocol is described, comprising revoke, renounce, grant, and claim methods, invocable by these narrowcasting commands to control superposition of soundscapes.
Abstract: Non-immersive perspectives in virtual environments enable flexible paradigms of perception, especially in the context of frames of reference for conferencing and musical audition. Traditional mixing idioms for enabling and disabling various audio sources employ mute and solo functions that, along with cue, selectively disable or focus on respective channels. Exocentric interfaces, which explicitly model not only sources but also sinks, motivate the generalization of mute and solo (or cue) to exclude and include, manifested for sinks as deafen and attend (confide and harken). Such functions, which narrow stimuli by explicitly blocking out and/or concentrating on selected entities, can be applied not only to other users' sinks for privacy, but also to one's own sinks for selective attendance or presence. Multiple sinks are useful in groupware, where a common environment implies social inhibitions against rearranging shared sources like musical voices or conferees, as well as in individual sessions in which the spatial arrangement of sources, like the configuration of a concert orchestra, has mnemonic value. A taxonomy of modal narrowcasting functions is proposed, and an audibility protocol is described, comprising revoke, renounce, grant, and claim methods, invocable by these narrowcasting commands to control the superposition of soundscapes.
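The abstract does not fix a precedence between exclusion and inclusion; assuming the conventional one (exclusion always wins, and any non-empty include set implicitly excludes its complement), the audibility decision for a source-sink pair can be sketched as:

```python
def audible(source, sink, muted, soloed, deafened, attended):
    """Narrowcasting audibility for a (source, sink) pair.
    muted/soloed are sets of source ids (exclude/include);
    deafened/attended are sets of sink ids (deafen/attend).
    Assumed precedence: exclusion wins, and a non-empty include
    set implicitly excludes every peer not in it."""
    if source in muted or sink in deafened:
        return False
    if soloed and source not in soloed:
        return False
    if attended and sink not in attended:
        return False
    return True
```

For example, soloing one musical voice silences its peers without touching their mute state, mirroring how a mixing console's solo bus behaves.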

Journal ArticleDOI
TL;DR: Methods for displaying complex, texture-mapped environments with four or more spatial dimensions that allow for real-time interaction and can be used in conjunction with either 2-D or 3-D texture mapping are described.
Abstract: We describe methods for displaying complex, texture-mapped environments with four or more spatial dimensions that allow for real-time interaction. At any one moment in time, a three-dimensional cross section of the high-dimensional environment is rendered using techniques that have been implemented in OpenGL. The position and orientation of the user within the environment determine the 3-D cross section. A variety of interfaces can be used to control position and orientation in 4-D, including a mouse “freelook” interface for use with a computer monitor display, and an interface that uses a head-tracking system with three degrees of freedom and PINCH gloves in combination with a head-mounted display. The methods avoid the use of projections that require depth buffering in greater than three dimensions and can be used in conjunction with either 2-D or 3-D texture mapping. A computer graphic engine that displays 4-D virtual environments interactively uses these methods, as does a level editor and modeling program that can be used to create 4-D environments.
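Whatever the rendering path, the core geometric operation in a cross-section renderer is intersecting 4-D primitives with the viewer's hyperplane. A minimal sketch for a single edge, assuming its endpoints have already been transformed into the viewer's frame so the slice is simply w = 0 (an illustration, not the authors' OpenGL implementation):

```python
import numpy as np

def slice_edge(p, q):
    """Intersect the 4-D segment p-q with the viewer's hyperplane w = 0.
    p and q are 4-D points already expressed in the viewer's frame,
    fourth coordinate last. Returns the 3-D intersection point, or
    None if the segment does not cross the hyperplane."""
    w0, w1 = p[3], q[3]
    if w0 == w1 or w0 * w1 > 0:   # parallel, in-plane, or same side
        return None
    t = w0 / (w0 - w1)            # parameter where w crosses zero
    point = np.asarray(p, float) + t * (np.asarray(q, float) - np.asarray(p, float))
    return point[:3]
```

Slicing every edge (and face) of the 4-D geometry this way yields the 3-D cross section, which ordinary 3-D rasterization can then draw, avoiding any depth buffering beyond three dimensions.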

Journal ArticleDOI
TL;DR: An integrated model system on human immunology is created to demonstrate the application of virtual reality to education, and a modular software framework is developed to facilitate the further extension of the Virtual Explorer model to other fields.
Abstract: The Virtual Explorer project of the Senses Bureau at the University of California, San Diego, focuses on creating immersive, highly interactive environments for education and scientific visualization, designed to be not only educational but also exciting, playful, and enjoyable. We have created an integrated model system on human immunology to demonstrate the application of virtual reality to education, and we have also developed a modular software framework to facilitate the further extension of the Virtual Explorer model to other fields. The system has been installed in numerous science museums internationally, and more than 7,000 individuals have participated in demonstrations. The complete source code, which runs on a variety of Silicon Graphics computers, is available on CD-ROM from the authors.

Journal ArticleDOI
TL;DR: Results showed that the initial level of encoding affects the construction of spatial knowledge, whose exploration is then constrained mostly by the imagery perspective that has been adopted, and the spatial arrangement of the environment.
Abstract: Cognitive repositioning is crucial for anticipating the content of the visual scene from new vantage points in virtual environments (VEs). This repositioning may be performed using either a first-person (immersive-like) or a third-person imagery perspective (via an imaginary avatar). A three-phase study examined the effect of mental representation richness and imagery perspective on the anticipation of new vantage points and their associated objects inside an unfamiliar but meaningfully organized VE. Results showed that the initial level of encoding affects the construction of spatial knowledge, whose exploration is then constrained mostly by the imagery perspective that has been adopted, and the spatial arrangement of the environment. A third-person perspective involves mental extrapolation of directions with the help of a scanning process whose rate of processing is faster than the process used to generate the missing 3-D representation of first-person perspectives. Finally, anticipation of a new vantage point precedes access to its associated object mainly when adopting a first-person perspective for exploring the environment. These findings may prove to be of potential interest when defining cognitively valid rules for real-time automatic camera control in VEs.

Journal ArticleDOI
TL;DR: This paper presents a method and algorithms for automatic modeling of anatomical joint motion that relies on collision detection to achieve stable positions and orientations of the knee joint by evaluating the relative motion of the tibia with respect to the femur.
Abstract: This paper presents a method and algorithms for automatic modeling of anatomical joint motion. The method relies on collision detection to achieve stable positions and orientations of the knee joint by evaluating the relative motion of the tibia with respect to the femur (for example, flexion-extension). The stable positions then become the basis for a look-up table employed in the animation of the joint. The strength of this method lies in its robustness to animate any normal anatomical joint. It is also expandable to other anatomical joints given a set of kinematic constraints for the joint type as well as a high-resolution, static, 3-D model of the joint. The demonstration could be patient-specific if a person's real anatomical data could be obtained from a medical imaging modality such as computed tomography or magnetic resonance imaging. Otherwise, the demonstration requires the scaling of a generic joint based on patient characteristics. Compared with current teaching strategies, this Virtual Reality Dynamic Anatomy (VRDA) tool aims to greatly enhance students' understanding of 3-D human anatomy and joint motions. A preliminary demonstration of the optical superimposition of a generic knee joint on a leg model is shown.
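The look-up-table scheme described above can be sketched as follows: sample the flexion range, let a collision-driven search settle the joint at each sample, and interpolate into the table at animation time. In this sketch, `find_stable_offset` is a hypothetical stand-in for the collision-detection step, and the scalar offset is a simplification of a full tibia pose.

```python
import numpy as np

def build_pose_table(flexion_deg, find_stable_offset, step=5):
    """Precompute stable tibia offsets at sampled flexion angles and
    return an interpolating pose function for animation.
    find_stable_offset(angle) stands in for the collision-detection
    search that settles the joint at each sampled angle."""
    angles = np.arange(0, flexion_deg + step, step, dtype=float)
    offsets = np.array([find_stable_offset(a) for a in angles])
    def pose(angle):
        # animation-time lookup: linear interpolation into the table
        return float(np.interp(angle, angles, offsets))
    return pose
```

With a toy search such as `lambda a: 0.1 * a`, `build_pose_table(90, ...)` answers intermediate angles by interpolation, so the expensive collision queries run only once, offline.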