Showing papers in "Presence: Teleoperators & Virtual Environments in 2007"


Journal ArticleDOI
TL;DR: The results suggest that to have an almost perfectly realistic human appearance is a necessary but not a sufficient condition for the uncanny valley, which emerges only when there is also an abnormal feature.
Abstract: Roboticists believe that people will have an unpleasant impression of a humanoid robot that has an almost, but not perfectly, realistic human appearance. This is called the uncanny valley, and is not limited to robots, but is also applicable to any type of human-like object, such as dolls, masks, facial caricatures, avatars in virtual reality, and characters in computer graphics movies. The present study investigated the uncanny valley by measuring observers' impressions of facial images whose degree of realism was manipulated by morphing between artificial and real human faces. Facial images yielded the most unpleasant impressions when they were highly realistic, supporting the hypothesis of the uncanny valley. However, the uncanny valley was confirmed only when morphed faces had abnormal features such as bizarre eyes. These results suggest that to have an almost perfectly realistic human appearance is a necessary but not a sufficient condition for the uncanny valley. The uncanny valley emerges only when there is also an abnormal feature.

476 citations
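
The realism manipulation above is a morph continuum between an artificial and a real face. Below is a minimal sketch of that continuum, assuming pre-aligned grayscale images stored as NumPy arrays; a full morph would also warp facial geometry between corresponding landmarks, which is omitted here.

```python
import numpy as np

def cross_dissolve(artificial: np.ndarray, real: np.ndarray, alpha: float) -> np.ndarray:
    """Blend two aligned face images: alpha = 0 is fully artificial,
    alpha = 1 is fully real. The intensity blend alone illustrates the
    realism continuum manipulated in the study; geometric warping between
    facial landmarks is omitted in this sketch."""
    assert artificial.shape == real.shape
    return (1.0 - alpha) * artificial + alpha * real

# A realism continuum in 10% steps (art_img and real_img are hypothetical arrays):
# continuum = [cross_dissolve(art_img, real_img, a) for a in np.linspace(0, 1, 11)]
```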


Journal ArticleDOI
TL;DR: It is hypothesized that force feedback is helpful in this blunt dissection task because the artery is stiffer than the surrounding tissue, which serves to constrain the subject's hand from commanding inappropriate motions that generate large forces.
Abstract: Force feedback is widely assumed to enhance performance in robotic surgery, but its benefits have not yet been systematically assessed. In this study we examine the effects of force feedback on a blunt dissection task. Twenty subjects used a telerobotic system to expose an artery in a synthetic model while viewing the operative site with a video laparoscope. Subjects were drawn from a range of surgical backgrounds, from inexperienced to attending surgeons. Performance was compared between three force feedback gains: 0% (no force feedback), 37%, and 75%. The absence of force feedback increased the average force magnitude applied to the tissue by at least 50%, and increased the peak force magnitude by at least 100%. The number of errors that damage tissue increased by over a factor of 3. The rate and precision of dissection were not significantly enhanced with force feedback. These results hold across all levels of previous surgical experience. We hypothesize that force feedback is helpful in this blunt dissection task because the artery is stiffer than the surrounding tissue. This mechanical contrast serves to constrain the subject's hand from commanding inappropriate motions that generate large forces.

173 citations
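
The force feedback gain conditions (0%, 37%, and 75%) amount to scaling the slave-side contact force before displaying it at the master. A minimal, hypothetical sketch of that scaling:

```python
def master_display_force(slave_force: float, gain: float = 0.37) -> float:
    """Scale the contact force measured at the slave before rendering it to
    the operator's hand. gain = 0.0 reproduces the no-feedback condition;
    0.37 and 0.75 correspond to the study's 37% and 75% gain conditions."""
    return gain * slave_force
```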


Journal ArticleDOI
TL;DR: This evaluation suggests that involving users and designers from the beginning improves the effectiveness of the VE in the context of the real-world urban planning project, and demonstrates that appropriate levels of realism are significant for the design process and for communicating about designs.
Abstract: In this paper we present a user-centered design approach to the development of a Virtual Environment (VE), by utilizing an iterative, user-informed process throughout the entire design and development cycle. A preliminary survey was first undertaken with end users, that is, architects, chief engineers, and decision makers of a real-world architectural and urban planning project, followed by a study of the traditional workflow employed. We then determined the elements required to make the VE useful in the real-world setting, choosing appropriate graphical and auditory techniques to develop audiovisual VEs with a high level of realism. Our user-centered design approach guided the development of an appropriate interface and an evaluation methodology to test the overall usability of the system. The VE was evaluated both in the laboratory and, most importantly, in the users' natural work environments. In this study we present the choices we made as part of the design and evaluation methodologies employed, which successfully combined research goals with those of a real-world project. Among other results, this evaluation suggests that involving users and designers from the beginning improves the effectiveness of the VE in the context of the real-world urban planning project. Furthermore, it demonstrates that appropriate levels of realism, in particular spatialized 3D sound, high-detail vegetation, and shadows, as well as the presence of rendered crowds, are significant for the design process and for communicating about designs; they enable better appreciation of the overall ambience of the VE, perception of space and physical objects, as well as the sense of scale. We believe this study is of interest to VE researchers, designers, and practitioners, as well as professionals interested in using VR in their workplace.

125 citations


Journal ArticleDOI
TL;DR: This literature review provides an overview of studies that have attempted to use vibrotactile interfaces to convey information to human operators and the results obtained are described, and their implications for haptic/tactile interface design elucidated.
Abstract: The suggestion that the body surface might be used as an additional means of presenting information to human-machine operators has been around in the literature for nearly 50 years. Although recent technological advances have made the possibility of using the body as a receptive surface much more realistic, the fundamental limitations on the human information processing of tactile stimuli presented across the body surface are, however, still largely unknown. This literature review provides an overview of studies that have attempted to use vibrotactile interfaces to convey information to human operators. The importance of investigating any possible central cognitive limitations (i.e., rather than the peripheral limitations, such as those related to sensory masking, that were typically addressed in earlier research) on tactile processing for the most effective design of body interfaces is highlighted. The applicability of the constraints emerging from studies of tactile processing under conditions of unisensory (i.e., purely tactile) stimulus presentation to more ecologically valid conditions of multisensory stimulation is also discussed. Finally, the results obtained from recent studies of tactile information processing under conditions of multisensory stimulation are described, and their implications for haptic/tactile interface design elucidated.

120 citations


Journal ArticleDOI
TL;DR: A brain-computer interface is set up to be used as an input device to a highly immersive virtual reality CAVE-like system and the interrelations between BCI and presence are studied.
Abstract: We have set up a brain-computer interface (BCI) to be used as an input device to a highly immersive virtual reality CAVE-like system. We have carried out two navigation experiments: three subjects were required to rotate in a virtual bar room by imagining left or right hand movement, and to walk along a single axis in a virtual street by imagining foot or hand movement. In this paper we focus on the subjective experience of navigating virtual reality "by thought," and on the interrelations between BCI and presence.

79 citations


Journal ArticleDOI
TL;DR: An ad hoc method is suggested to deal with lumpy Likert scaled data which can involve a further lumping of the results followed by the application of nonparametric statistics.
Abstract: Likert scaled data, which are frequently collected in studies of interaction in virtual environments, demand specialized statistical tools for analysis. The routine use of statistical methods appropriate for continuous data in this context can lead to significant inferential flaws. Likert scaled data are ordinal rather than interval scaled and need to be analyzed using rank based statistical procedures that are widely available. Likert scores are “lumpy” in the sense that they cluster around a small number of fixed values. This lumpiness is made worse by the tendency for subjects to cluster towards either the middle or the extremes of the scale. We suggest an ad hoc method to deal with such data which can involve a further lumping of the results followed by the application of nonparametric statistics. Averaging Likert scores over several different survey questions, which is sometimes done in studies of interaction in virtual environments, results in a different sort of lumpiness. The lumped variables which are obtained in this manner can be quite murky and should be used with great caution, if at all, particularly if the number of questions over which such averaging is carried out is small.

64 citations
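
As one concrete instance of the rank-based procedures the authors recommend, here is a hypothetical two-condition comparison of Likert responses using SciPy's Mann-Whitney U test; the data are invented for illustration.

```python
# Rank-based comparison of Likert responses between two conditions, using
# SciPy's Mann-Whitney U test as one widely available nonparametric procedure.
from scipy.stats import mannwhitneyu

cond_a = [4, 5, 4, 3, 5, 5, 4, 2, 5, 4]  # 1-5 Likert scores (hypothetical)
cond_b = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]

stat, p = mannwhitneyu(cond_a, cond_b, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # ordinal-safe alternative to a t-test
```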


Journal ArticleDOI
TL;DR: A scalable network framework for DVEs, ATLAS, is proposed, which achieves scalability for the system as a whole; integration experiences of ATLAS with several virtual reality systems demonstrate the versatility of the proposed solution.
Abstract: A distributed virtual environment (DVE) is a software system that allows users in a network to interact with each other by sharing a common view of their states. As users are geographically distributed over large networks like the Internet and the number of users increases, scalability is a key aspect to consider for real-time interaction. Various solutions have been proposed to improve the scalability of DVE systems, but they are either focused only on specific aspects or customized to a target application. In this paper, we classify the approaches for improving the scalability of DVEs into five categories: communication architecture, interest management, concurrency control, data replication, and load distribution. We then propose a scalable network framework for DVEs, ATLAS. Incorporating our various scalable schemes, ATLAS achieves scalability for the system as a whole. Integration experiences of ATLAS with several virtual reality systems demonstrate the versatility of the proposed solution.

61 citations
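
Of the five categories listed, interest management is perhaps the easiest to illustrate. Below is a minimal sketch of circle-based area-of-interest filtering; it is purely illustrative and not ATLAS's actual scheme.

```python
import math

def within_aoi(pos_a, pos_b, radius=50.0):
    """Circle-based area-of-interest test: two users are 'interested' in
    each other only when within the radius (units are illustrative)."""
    return math.dist(pos_a, pos_b) <= radius

def update_recipients(sender, users, radius=50.0):
    """Forward the sender's state update only to users inside its AOI,
    instead of broadcasting to every connected participant."""
    return [u for u in users if u is not sender
            and within_aoi(sender["pos"], u["pos"], radius)]

users = [{"id": 1, "pos": (0, 0)}, {"id": 2, "pos": (30, 40)}, {"id": 3, "pos": (300, 0)}]
print([u["id"] for u in update_recipients(users[0], users)])  # -> [2]
```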


Journal ArticleDOI
TL;DR: It is argued that in general questionnaire data is treated far too seriously, and that a different paradigm is needed for presence research, one where multivariate physiological and behavioral data is used alongside subjective and questionnaire data, with the latter not having any specially privileged role.
Abstract: The problems of valid design of questionnaires and analysis of ordinal response data from questionnaires have had a long history in the psychological and social sciences. Gardner and Martin (2007, this issue) illustrate some of these problems with reference to an earlier paper (Garau, Slater, Pertaub, & Razzaque, 2005) that studied copresence with virtual characters within an immersive virtual environment. Here we review the critique of Gardner and Martin supporting their main arguments. However, we show that their critique could not take into account the historical circumstances of the experiment described in the paper, and moreover that a reanalysis using more appropriate statistical methods does not result in conclusions that are different from those reported in the original paper. We go on to argue that in general such questionnaire data is treated far too seriously, and that a different paradigm is needed for presence research---one where multivariate physiological and behavioral data is used alongside subjective and questionnaire data, with the latter not having any specially privileged role.

56 citations


Journal ArticleDOI
TL;DR: A novel deadband control approach is presented to reduce the network traffic in haptic telepresence systems with constant (unknown) time delay, and a well-known time delay approach, the scattering transformation, is extended to guarantee stability.
Abstract: Two of the major challenges in networked haptic telepresence and teleaction systems are the time delay associated with the data transmission over the network and the limited communication resources. Sophisticated control methods are available for stabilization in the presence of time delay. The reduction of haptic network traffic, however, is only poorly treated in the known literature. Data reduction approaches for time-delayed haptic telepresence are not available at all. This article presents a novel approach to reduce the network traffic in haptic telepresence systems with constant (unknown) time delay. With the proposed deadband control approach, data are sent only if the signal to transmit changes by more than a given threshold value. In order to guarantee stability with time delay and data reduction, a well-known time delay approach, the scattering transformation, is extended. Experimental user studies show that an average network traffic reduction of up to 96% is achieved without significantly impairing the perception of the remote environment compared to the standard approach with time delay.

54 citations
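
The scattering transformation the authors extend encodes velocity and force into wave variables so that a constant-delay channel stays passive. A textbook-style sketch follows; b is the wave impedance, and this shows the standard transformation only, not the authors' deadband extension.

```python
import math

def to_wave(velocity, force, b=1.0):
    """Encode velocity/force into wave variables. Only wave variables travel
    over the network; this renders a constant-delay channel passive.
    b is the wave impedance (an illustrative tuning parameter)."""
    s = math.sqrt(2.0 * b)
    u = (b * velocity + force) / s   # forward-traveling wave
    w = (b * velocity - force) / s   # backward-traveling wave
    return u, w

def from_wave(u, w, b=1.0):
    """Decode wave variables back into velocity and force."""
    s = math.sqrt(2.0 * b)
    velocity = (u + w) / s
    force = b * (u - w) / s
    return velocity, force
```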


Journal ArticleDOI
TL;DR: This article introduces a novel approach to reduce network traffic in haptic telepresence systems exploiting limits in human haptic perception with the proposed deadband control approach.
Abstract: Limited communication resources represent a major challenge in networked telepresence and teleaction systems. Video and audio compression schemes are well advanced, employing models of human perception. In contrast, haptic data reduction schemes are rather poorly treated in the known literature. This article introduces a novel approach to reduce network traffic in haptic telepresence systems by exploiting limits in human haptic perception. With the proposed deadband control approach, data packets are transmitted only if the signal change exceeds a signal-amplitude-dependent perception threshold. Experimental user studies show that an average network traffic reduction of up to 85% can be achieved without significantly impairing the perception of the remote environment. The assumption throughout this article is that there is no communication time delay.

52 citations
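
A minimal sketch of the deadband principle with an amplitude-dependent (Weber-law-style) threshold, as described above; the threshold fraction k and the hold-last-value receiver behavior are illustrative assumptions.

```python
class RelativeDeadband:
    """Transmit a haptic sample only when it deviates from the last
    transmitted value by more than a fixed fraction of that value,
    i.e., an amplitude-dependent perception threshold. k is illustrative."""

    def __init__(self, k=0.1):
        self.k = k
        self.last_sent = None

    def update(self, sample):
        """Return the sample if it should be transmitted, else None
        (the receiver holds the last received value in the meantime)."""
        if self.last_sent is None or abs(sample - self.last_sent) > self.k * abs(self.last_sent):
            self.last_sent = sample
            return sample
        return None
```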


Journal ArticleDOI
TL;DR: An experimental telemanipulator for endoscopic surgery is built that provides both force-feedback and Cartesian control as a prerequisite for automation and the hypothesis that haptic feedback in the form of sensory substitution facilitates performance of surgical tasks was evaluated.
Abstract: The implementation of telemanipulator systems for cardiac surgery enabled heart surgeons to perform delicate minimally invasive procedures with high precision under stereoscopic view. At present, commercially available systems do not provide force feedback or Cartesian control for the operating surgeon. The lack of haptic feedback may cause damage to tissue and can cause breaks of suture material. In addition, minimally invasive procedures are very tiring for the surgeon due to the need for visual compensation for the missing force feedback. While a lack of Cartesian control of the end effectors is acceptable for surgeons (because every movement is visually supervised), it prevents research on partial automation. In order to improve this situation, we have built an experimental telemanipulator for endoscopic surgery that provides both force feedback (in order to improve the feeling of immersion) and Cartesian control as a prerequisite for automation. In this article, we focus on the inclusion of force feedback and its evaluation. We completed our first bimanual system in early 2003 (EndoPAR, Endoscopic Partial Autonomous Robot). Each robot arm consists of a standard robot and a surgical instrument, hence providing eight DOF that enable free manipulation via trocar kinematics. Based on the experience with this system, we introduced an improved version in early 2005. The new ARAMIS system (Autonomous Robot Assisted Minimally Invasive Surgery) has four multi-purpose robotic arms mounted on a gantry above the working space. Again, the arms are controlled by two force-feedback devices, and 3D vision is provided. In addition, all surgical instruments have been equipped with strain gauge force sensors that can measure forces along all translational directions of the instrument's shaft. Force feedback of this system was evaluated in a scenario of robotic heart surgery, which offers an impression very similar to the standard, open procedures with high immersion. It enables the surgeon to palpate arteriosclerosis, to tie surgical knots with real suture material, and to feel the rupture of suture material. Therefore, the hypothesis that haptic feedback in the form of sensory substitution facilitates performance of surgical tasks was evaluated on the experimental platform described in the article (the EndoPAR version). In addition, a further hypothesis was explored: the high fatigue of surgeons during and after robotic operations may be caused by visual compensation due to the lack of force feedback (Thompson, J., Ottensmeier, M., & Sheridan, T. (1999). Human factors in telesurgery. Telemedicine Journal, 5(2), 129-137.).

Journal ArticleDOI
TL;DR: It is suggested that it is necessary to provide surrounding objects to aid in the determination of an object's depth and to elicit size-constancy in VE.
Abstract: The use of virtual environments (VE) for many research and commercial purposes relies on its ability to generate environments that faithfully reproduce the physical world. However, due to its limitations the VE can have a number of flaws that adversely affect its use and believability. One of the more important aspects of this problem is whether the size of an object in the VE is perceived as it would be in the physical world. One of the fundamental phenomena for correct size is size-constancy, that is, an object is perceived to be the same size regardless of its distance from the observer. This is in spite of the fact that the retinal size of the object shrinks with increasing distance from the observer. We examined size-constancy in the CAVE and found that size-constancy is a strong and dominant perception in our subject population when the test object is accompanied by surrounding environmental objects. Furthermore, size-constancy changes to a visual angle performance (i.e., object size changed with distance from the subject) when these surrounding objects are removed from the scene. As previously described for the physical world, our results suggest that it is necessary to provide surrounding objects to aid in the determination of an object's depth and to elicit size-constancy in VE. These results are discussed regarding their implications for viewing objects in projection-based VE and the environments that play a role in the perception of object size in the CAVE.
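
The retinal-size shrinkage the study builds on follows from simple geometry: the visual angle subtended by an object falls roughly in proportion to distance. A quick worked computation (a standard formula, not the authors' code):

```python
import math

def visual_angle_deg(object_size, distance):
    """Visual (retinal) angle subtended by an object at a given distance,
    both in the same units (e.g., meters)."""
    return math.degrees(2.0 * math.atan(object_size / (2.0 * distance)))

# The same 0.5 m object at 2 m vs. 4 m: its visual angle roughly halves,
# yet under size-constancy observers still report the same physical size.
print(visual_angle_deg(0.5, 2.0))  # ~14.25 degrees
print(visual_angle_deg(0.5, 4.0))  # ~7.15 degrees
```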

Journal ArticleDOI
TL;DR: Two strategies are suggested for solving the motion mapping problem for the single user case and the resulting solutions are extended to the multiuser case where several local users share a local environment to control different remote agents.
Abstract: A telepresence system enables a user in a local environment to maneuver in a remote or virtual space through a robotic operator (agent). In order to ensure a high degree of telepresence realism, it is critical that the local user has the ability to control the remote agent's movement through the user's own locomotion. The required motion of the remote agent is determined according to its environment and the specific task it is to perform. The local user's environment is usually different from that of the remote agent in terms of the shapes and dimensions. A motion mapping is needed from the remote agent to the local user to ensure the similarity of the paths in the two environments. In particular, the terminal position of the local user after a segment of movement is also an important portion in such a motion mapping. This paper progressively addresses these issues from the optimization point of view. Two strategies are suggested for solving the motion mapping problem for the single user case. The resulting solutions are then extended to the multiuser case where several local users share a local environment to control different remote agents. Extensive simulations and comparisons show the feasibility and effectiveness of the proposed approaches.

Journal ArticleDOI
TL;DR: Viewing a moving hand results in a stronger desynchronization of the central beta rhythm than viewing a moving cube, which provides further evidence for some extent of motor processing related to visual presentation of objects and implies a greater involvement of motor areas in the brain with the observation of action of different body parts.
Abstract: We studied the impact of different visual objects such as a moving hand and a moving cube on the bioelectrical brain activity (i.e., electroencephalogram; EEG). The moving objects were presented in a virtual reality (VR) system via a head mounted display (HMD). Nine healthy volunteers were confronted with 3D visual stimulus presentations in four experimental conditions: (i) static hand, (ii) dynamic hand, (iii) static cube, and (iv) dynamic cube. The results reveal that the processing of moving visual stimuli depends on the type of object: viewing a moving hand results in a stronger desynchronization of the central beta rhythm than viewing a moving cube. This provides further evidence for some extent of motor processing related to visual presentation of objects and implies a greater involvement of motor areas in the brain with the observation of action of different body parts than with the observation of non-body part movements.
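
The beta-rhythm desynchronization reported here is conventionally quantified as percent event-related desynchronization (ERD) of band power relative to a baseline. A sketch of that standard computation with SciPy follows; the filter order and band edges are illustrative choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def beta_erd_percent(eeg, fs, baseline, event, band=(13.0, 30.0)):
    """Event-related desynchronization of the beta band, in percent.

    eeg: 1-D signal from a central electrode; baseline/event: (start, stop)
    sample-index pairs. ERD% > 0 means band power dropped during the event,
    the effect reported to be stronger for the moving hand than the cube."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    beta = filtfilt(b, a, eeg)
    power = beta ** 2
    p_base = power[baseline[0]:baseline[1]].mean()
    p_event = power[event[0]:event[1]].mean()
    return 100.0 * (p_base - p_event) / p_base
```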

Journal ArticleDOI
TL;DR: A physically-based virtual hair salon system that simulates and renders hair at accelerated rates, enabling users to interactively style virtual hair and create hairstyles more intuitively than previous techniques.
Abstract: User interaction with animated hair is desirable for various applications but difficult because it requires real-time animation and rendering of hair. Hair modeling, including styling, simulation, and rendering, is computationally challenging due to the enormous number of deformable hair strands on a human head, elevating the computational complexity of many essential steps, such as collision detection and self-shadowing for hair. Using simulation localization techniques, multi-resolution representations, and graphics hardware rendering acceleration, we have developed a physically-based virtual hair salon system that simulates and renders hair at accelerated rates, enabling users to interactively style virtual hair. With a 3D haptic interface, users can directly manipulate and position hair strands, as well as employ real-world styling applications (cutting, blow-drying, etc.) to create hairstyles more intuitively than previous techniques.

Journal ArticleDOI
TL;DR: A novel binaural sound source localizer is built using generic Head Related Transfer Functions (HRTFs) and the comparison with existing interpolation methods reveals that the new method offers superior performance and is capable of achieving high-fidelity reconstructions of HRTFs.
Abstract: Telepresence is generally described as the feeling of being immersed in a remote environment, be it virtual or real. A multimodal telepresence environment, equipped with modalities such as vision, audition, and haptics, improves immersion and augments the overall perceptual presence. The present work focuses on acoustic telepresence at both the teleoperator and operator sites. On the teleoperator side, we build a novel binaural sound source localizer using generic Head Related Transfer Functions (HRTFs). This new localizer provides estimates for the direction of a single sound source, given in terms of azimuth and elevation angles in free space, by using only two microphones. It also uses an algorithm that is efficient compared to those currently used in similar localization processes. On the operator side, the paper addresses the problem of spatially interpolating HRTFs for densely sampled high-fidelity 3D sound synthesis. In our telepresence application scenario the synthesized 3D sound is presented to the operator over headphones and shall achieve a high-fidelity acoustic immersion. Using measured HRTF data, we create interpolated HRTFs between the existing functions using a matrix-valued interpolation function. The comparison with existing interpolation methods reveals that our new method offers superior performance and is capable of achieving high-fidelity reconstructions of HRTFs.
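
The paper's localizer matches generic HRTF cues to recover both azimuth and elevation from two microphones. For contrast, a classical and much simpler two-microphone baseline estimates azimuth alone from the interaural time difference found by cross-correlation; the microphone spacing and speed of sound below are assumed values.

```python
import numpy as np

def itd_azimuth(left, right, fs, mic_distance=0.18, c=343.0):
    """Classical baseline: estimate source azimuth (degrees) from the
    interaural time difference between two microphone signals.

    This is NOT the paper's HRTF-based method, which also recovers
    elevation; it only illustrates two-microphone direction finding."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # best-aligning lag in samples;
    itd = lag / fs                            # its sign shows which mic hears first
    sin_az = np.clip(itd * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_az))
```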

Journal ArticleDOI
TL;DR: A parameter-fitting algorithm is presented that recovers the parameters from a large set of sound recordings around objects and creates a continuous timbre field by interpolation that is rendered in a real-time simulation with integrated haptic, graphic, and audio display.
Abstract: We describe a methodology for virtual reality designers to capture and resynthesize the variations in sound made by objects when we interact with them through contact such as touch. The timbre of contact sounds can vary greatly, depending on both the listener's location relative to the object, and the interaction point on the object itself. We believe that an accurate rendering of this variation greatly enhances the feeling of immersion in a simulation. To do this, we model the variation with an efficient algorithm based on modal synthesis. This model contains a vector field that is defined on the product space of contact locations and listening positions around the object. The modal data are sampled on this high dimensional space using an automated measuring platform. A parameter-fitting algorithm is presented that recovers the parameters from a large set of sound recordings around objects and creates a continuous timbre field by interpolation. The model is subsequently rendered in a real-time simulation with integrated haptic, graphic, and audio display. We describe our experience with an implementation of this system and an informal evaluation of the results.
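
Modal synthesis, the basis of the model above, represents a contact sound as a sum of exponentially decaying sinusoids; in the paper, the modal gains vary with contact point and listener position (the interpolated timbre field). A minimal sketch with hypothetical mode parameters:

```python
import numpy as np

def modal_impulse_response(freqs, dampings, gains, fs=44100, duration=1.0):
    """Classical modal synthesis: a contact sound as a sum of exponentially
    decaying sinusoids. In the paper, the per-mode gains are looked up from
    the measured, interpolated timbre field for the current contact point
    and listener position."""
    t = np.arange(int(fs * duration)) / fs
    y = np.zeros_like(t)
    for f, d, a in zip(freqs, dampings, gains):
        y += a * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    return y

# Hypothetical three-mode object struck at one point:
sound = modal_impulse_response([523.0, 1310.0, 2217.0], [6.0, 9.0, 14.0], [1.0, 0.5, 0.25])
```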

Journal ArticleDOI
TL;DR: This paper presents a hybrid approach to the simulation of surgical cutting procedures by combining a node-snapping technique with a physically based meshfree computational scheme, the point-associated finite field (PAFF) approach, and empirical data obtained from controlled cutting experiments.
Abstract: In this paper, we present some recent advances in realistic surgery simulation including novel algorithms for simulating surgical cutting and techniques of improving visual realism of the simulated scenarios using images. Simulation of surgical cutting is one of the most challenging tasks in the development of a surgery simulator. Changes in topology during simulation render precomputed data unusable. Moreover, the process is nonlinear and the underlying physics is complex. Therefore, fully realistic simulation of surgical cutting at real-time rates on single processor machines is possibly out of reach at the present time. In this paper, we present a hybrid approach to the simulation of surgical cutting procedures by combining a node-snapping technique with a physically based meshfree computational scheme, the point-associated finite field (PAFF) approach, and empirical data obtained from controlled cutting experiments. To enhance the realism of the rendered scenarios, we propose an innovative way of using images obtained from videos acquired during actual surgical processes. Using a combination of techniques such as image mosaicing and view-dependent texture-mapping, we have been able to achieve excellent realistic effects with desired tissue glistening as the camera position is changed. Realistic examples are presented to showcase the results.

Journal ArticleDOI
TL;DR: Two experimental studies on interface design and a meta-analytical study of the relationship between presence and performance are presented, and a computer-based design guide is suggested at the end to provide guidelines for the design of telepresence systems from a human factors point of view.
Abstract: The overall aim of this work is to provide some guidelines for the design of telepresence systems from a human factors point of view. Developers of such human-machine systems face at least two major problems: there are hardly any standard input devices, and guiding design principles are largely missing. Further, telepresence systems should most often enable both a high degree of performance and a high sensation of presence, and yet the relationship between these two variables is still a subject of research. To cope with some of these problems, two experimental studies are presented. Each focuses on a different aspect of interface design that is of widespread interest in the field of telepresence systems. The first is related to the control of multiple degrees of freedom and the second refers to bimanual input control. Beyond this work, a meta-analytical study is presented to describe the relationship between presence and performance more precisely. Certainly there are more issues that have to be studied (e.g., perceptual aspects) to guide the design of telepresence systems. To provide a framework for these and further human factors aspects, a computer-based design guide is suggested at the end. This tool addresses system developers and assists in realizing new interfaces more effectively.

Journal ArticleDOI
TL;DR: It is found that update rates much higher than the conventional 1 kHz are needed in order to achieve a stable rendering of clean and hard textured surfaces and that the ability to distinguish textures rendered with different update rates depends on whether the virtual textures contain perceived instability.
Abstract: This study investigates the effect of update rate on the quality of haptic virtual textures, with the goal to develop a guideline for choosing an optimal update rate for haptic texture rendering. Two metrics, control stability and perceived quality of the virtual haptic texture, were used. For control stability, we examined the effect of update rate on the “buzzing” of virtual haptic textures. For perceived quality, we measured the discriminability of virtual haptic textures rendered at different update rates. Our study indicates that update rates much higher than the conventional 1 kHz are needed in order to achieve a stable rendering of “clean and hard” textured surfaces. We also found that our ability to distinguish textures rendered with different update rates depends on whether the virtual textures contain perceived instability. Based on these results, we provide a general guideline for selecting an optimal update rate for rendering virtual textured surfaces.
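
One way to see why stiff, fine textures demand high update rates: between two haptic updates the commanded force can jump, and the worst-case jump shrinks linearly with the update rate. A back-of-the-envelope sketch for a penalty-rendered sinusoidal texture; all parameter values are illustrative.

```python
import math

def worst_force_jump(stiffness, amplitude, wavelength, speed, update_rate):
    """Upper bound on the force discontinuity between two haptic updates while
    stroking a sinusoidal texture: |dF| <= k * A * (2*pi/L) * v / rate.

    Larger jumps are felt as 'buzzing' instability; raising the update rate
    shrinks them proportionally."""
    return stiffness * amplitude * (2.0 * math.pi / wavelength) * speed / update_rate

# A stiff, fine texture stroked briskly: 1 kHz vs. 10 kHz updates.
print(worst_force_jump(1000.0, 0.0005, 0.002, 0.1, 1e3))   # ~0.157 N per update
print(worst_force_jump(1000.0, 0.0005, 0.002, 0.1, 1e4))   # ~0.016 N per update
```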

Journal ArticleDOI
TL;DR: This project employed fuzzy set theory to design a classifier for the performance of a subject training on a surgical simulator, using three categories: novice, intermediate, and expert; the resulting classifier was able to classify the users of the system.
Abstract: Increasing interest in computer-based surgical simulators as time- and cost-efficient training tools has introduced a new problem: objective evaluation of surgical performance based on scoring metrics provided by surgical simulators. This project employed fuzzy set theory to design a classifier for performance of a subject training on a surgical simulator, using three categories: novice, intermediate, and expert. The MIST-VR simulator was used in a user study of 26 subjects with three different surgical skill levels: 8 experienced laparoscopic surgeons (experts), 8 surgical assistants (intermediates), and 10 nurses (novices). Subjects were required to perform four trials of a suturing task and a knot-tying task on the simulator. The performance data were then used to train and test two fuzzy classifiers for each task. The fuzzy classifier was able to classify the users of the system. The models presented a highly nonlinear relationship between the inputs (performance metrics) and output (fuzzy score) of the system, which may not be effectively captured with classical classification approaches. Fuzzy classifiers, however, can offer effective tools to handle the complexity and fuzziness of objective evaluation of surgical performances.
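
As a sketch of the fuzzy-classification idea, triangular membership functions can grade a normalized performance score into the three skill categories; the boundaries below are invented for illustration, not the study's trained parameters.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def skill_memberships(score):
    """Degree of membership in each skill category for a normalized
    performance score in [0, 1] (category boundaries are hypothetical)."""
    return {
        "novice":       tri_membership(score, -0.01, 0.0, 0.5),
        "intermediate": tri_membership(score, 0.25, 0.5, 0.75),
        "expert":       tri_membership(score, 0.5, 1.0, 1.01),
    }

print(skill_memberships(0.62))  # partly intermediate, partly expert
```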

Journal ArticleDOI
TL;DR: A method for interpolating rotation corrections that has not previously been used in this context, and that is rooted in the geometry of the space of rotations, is described; combined with Delaunay tetrahedralization, it enables correction based on scattered data samples.
Abstract: We describe a method for calibrating an electromagnetic motion tracking device. Algorithms for correcting both location and orientation data are presented. In particular, we use a method for interpolating rotation corrections that has not previously been used in this context. This method, unlike previous methods, is rooted in the geometry of the space of rotations. This interpolation method is used in conjunction with Delaunay tetrahedralization to enable correction based on scattered data samples. We present measurements that support the assumption that neither location nor orientation errors are dependent on sensor orientation. We give results showing large improvements in both location and orientation errors. The methods are shown to impose a minimal computational burden.
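
For the location part of such a calibration, SciPy's LinearNDInterpolator performs exactly this kind of Delaunay-based barycentric interpolation of scattered correction vectors. Below is a sketch with synthetic data; the paper's rotation-correction interpolation is a separate geometric method not reproduced here.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Hypothetical calibration set: raw tracker readings and ground-truth
# positions measured on a scattered grid of sample points.
measured_positions = np.random.rand(200, 3) * 2.0
true_positions = measured_positions + 0.05 * np.sin(measured_positions)

# Delaunay-tetrahedralization-based barycentric interpolation of the
# location-correction vectors at the scattered calibration samples.
correction = LinearNDInterpolator(measured_positions, true_positions - measured_positions)

def corrected(p):
    """Apply the interpolated location correction to a raw 3-vector reading.
    Points outside the convex hull of the samples interpolate to NaN."""
    return p + correction(np.atleast_2d(p))[0]
```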

Journal ArticleDOI
TL;DR: It is concluded that immersive VR is a suitable tool to investigate perception-action coupling during walking, allowing for a systematic manipulation of optic flow parameters.
Abstract: The present study focused on the impact of immersive virtual reality (VR) technology on the coordination dynamics of walking, because of VR-induced symptoms and effects (e.g., motion sickness, postural instability, and disorientation) reported in the literature. Subjects were instructed to walk on a treadmill in a virtual and a real environment, while walking speeds were systematically varied. The virtual laboratory environment closely resembled the real laboratory environment. A third experimental condition was included controlling for the restricted view of a head mounted display (HMD) of the VR system. Movement of arms and legs were recorded with an Optotrak system. The main finding was that, for all speed conditions, there was an increased stride frequency in the VR environment compared to the other conditions. At the lower walking speeds, this coincided with a stronger locking of the arm movements on the stride frequency, and an increased mean relative phase between left arm and right arm movements as well as between ipsilateral arm and leg movements. No significant differences in the stability of the walking patterns were observed. Most importantly though, the impact of VR immersion was not large, was primarily limited to the lower walking velocity range, and could be further reduced by correcting for the effects of increased stride frequency by applying dimensionless analysis. The restricted view of the HMD did not significantly influence walking coordination. On the basis of these findings, it is concluded that immersive VR is a suitable tool to investigate perception-action coupling during walking, allowing for a systematic manipulation of optic flow parameters.
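
The arm-arm and arm-leg coordination measures rest on computing a relative phase between limb-movement time series. One standard way to obtain a continuous relative phase is via the analytic (Hilbert) signal; a sketch follows, with detrending and preprocessing choices simplified.

```python
import numpy as np
from scipy.signal import hilbert

def mean_relative_phase_deg(limb_a, limb_b):
    """Circular mean of the continuous relative phase (degrees) between two
    limb-movement time series, via the analytic signal -- one standard way
    to quantify arm-arm and arm-leg coordination."""
    phase_a = np.angle(hilbert(limb_a - np.mean(limb_a)))
    phase_b = np.angle(hilbert(limb_b - np.mean(limb_b)))
    dphi = phase_a - phase_b
    return np.degrees(np.angle(np.mean(np.exp(1j * dphi))))
```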

Journal ArticleDOI
TL;DR: A mobile agent based framework for large-scale CVE, MACVE, is proposed to support a large number of concurrent participants in a CVE with a large amount of evolving virtual entities.
Abstract: The Collaborative Virtual Environment (CVE) is a promising technology which provides an online shared virtual world for geographically dispersed people to interact with each other. However, the scalability of existing CVE systems is limited due to the constraints in processing power and network speed of each participating host. In this paper, a mobile agent based framework for large-scale CVE, MACVE, is proposed to support a large number of concurrent participants in a CVE with a large amount of evolving virtual entities. In MACVE, the CVE system is decomposed into a group of collaborative mobile agents, each of which is responsible for an independent system task. These agents can migrate or clone dynamically at any suitable participating host including traditional servers and qualified user hosts to avoid the potential bottleneck, which can improve the scalability of CVE. Our prototype system has demonstrated the feasibility of the proposed framework.

Journal ArticleDOI
TL;DR: In a CAVE-like virtual environment where spatial audio is reproduced with amplitude panning on loudspeakers behind the screens, a localization experiment was arranged in which the subjects' task was to point to the perceived location of a sound source.
Abstract: In a CAVE-like virtual environment spatial audio is typically reproduced with amplitude panning on loudspeakers behind the screens. We arranged a localization experiment where the subjects' task was to point to the perceived location of a sound source. Measured accuracy for a static source was as good as the accuracy in previous headphone experiments using head-related transfer functions. We also measured the localization accuracy of a moving auditory stimulus. The accuracy was decreased by an amount comparable to the minimum audible movement angle.
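
Pairwise amplitude panning between adjacent loudspeakers classically follows the tangent law. A minimal sketch for one loudspeaker pair; a CAVE setup applies this between whichever speakers bracket the source, and the aperture and sign conventions here are assumptions.

```python
import math

def pairwise_pan_gains(source_deg, half_aperture_deg=45.0):
    """Tangent-law amplitude panning between two loudspeakers at
    +/- half_aperture_deg (positive azimuth toward the left speaker).
    Returns power-normalized (left_gain, right_gain)."""
    ratio = math.tan(math.radians(source_deg)) / math.tan(math.radians(half_aperture_deg))
    # Solve (gL - gR) / (gL + gR) = ratio with gL^2 + gR^2 = 1.
    gl, gr = 1.0 + ratio, 1.0 - ratio
    norm = math.hypot(gl, gr)
    return gl / norm, gr / norm

print(pairwise_pan_gains(0.0))    # (0.707, 0.707): phantom source centered
print(pairwise_pan_gains(30.0))   # source pulled toward the left speaker
```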

Journal ArticleDOI
TL;DR: Two integrated multiuser multiperspective stereographic browsers are developed, featuring IBR-generated egocentric and CG exocentric perspectives, respectively.
Abstract: To support multiperspective and stereographic image display systems intended for multiuser applications, we have developed two integrated multiuser multiperspective stereographic browsers, respectively featuring IBR-generated egocentric and CG exocentric perspectives. The first one described, “VR4U2C” ('virtual reality for you to see'), uses Apple's QuickTime VR technology and the Java programming language together with the support of the QuickTime for Java library. This unique QTVR browser allows coordinated display of multiple views of a scene or object, limited only by the size and number of monitors or projectors assembled around or among users (for panoramas or turnoramas) in various viewing locations. The browser also provides a novel solution to limitations associated with display of QTVR imagery: its multinode feature provides interactive stereographic QTVR (dubbed SQTVR) to display dynamically selected pairs of images exhibiting binocular parallax, the stereoscopic depth percept enhanced by motion parallax from displacement of the viewpoint through space coupled with rotation of the view through a 360° horizontal panorama. This navigable approach to SQTVR allows proper occlusion/disocclusion as the virtual standpoint shifts, as well as natural looming of closer objects compared to more distant ones. We have integrated this stereographic panoramic browsing application in a client/server architecture with a sibling client, named “Just Look at Yourself!” which is built with Java3D and allows realtime visualization of the dollying and viewpoint adjustment as well as juxtaposition and combination of stereographic CG and IBR displays. “Just Look at Yourself!” visualizes and emulates VR4U2C, embedding avatars associated with cylinder pairs wrapped around the stereo standpoints texture-mapped with a set of panoramic scenes into a 3D CG model of the same space as that captured by the set of panoramas. The transparency of the 3D CG polygon space and the photorealistic stereographic 360° scenes, as well as the size of the stereo goggles through which the CG space is conceptually viewed and upon which the 360° scenes are texture-mapped, can be adjusted at runtime to understand the relationship of the spaces.

Journal ArticleDOI
TL;DR: The development of a virtual environment supporting experiments with causal perception is described; causal perception can provide an interesting experimental setting for some presence determinants, and the elicitation of causal impressions can become part of VR technologies to provide new forms of VR experiences.
Abstract: Causality is an important aspect of how we construct reality. Yet, while many psychological phenomena have been studied in their relation to virtual reality (VR), very little work has been dedicated specifically to causal perception, despite its potential relevance for user interaction and presence. In this paper, we describe the development of a virtual environment supporting experiments with causal perception. The system, inspired from psychological data, operates by intercepting events in the virtual world, so as to create artificial co-occurrences between events and their subsequent effects. After recognizing high-level events and formalizing them with a symbolic representation inspired from robotics planning, it modifies the events' effects using knowledge-based operators. The re-activation of the modified events creates co-occurrences inducing causal impressions in the user. We conducted experiments with fifty-three subjects who had to interact with virtual world objects and were presented with alternative consequences for their actions, generated by the system using various levels of plausibility. At the same time, these subjects had to answer ten items from the Presence Questionnaire corresponding mainly to control and realism factors: causal perception appears to have a positive impact on these items. The implications of this work are twofold: first, causal perception can provide an interesting experimental setting for some presence determinants, and second, the elicitation of causal impressions can become part of VR technologies to provide new forms of VR experiences.

Journal ArticleDOI
TL;DR: An interactive algorithm for continuous collision detection between a moving avatar and its surrounding virtual environment is presented; it computes the first time of contact between the avatar and the environment interactively, and guarantees, within a user-provided error threshold, that no collision ever happens before the first contact occurs.
Abstract: We present an interactive algorithm for continuous collision detection between a moving avatar and its surrounding virtual environment. Our algorithm is able to compute the first time of contact between the avatar and the environment interactively, and also guarantees within a user-provided error threshold that no collision ever happens before the first contact occurs. We model the avatar as an articulated body using line skeletons with constant offsets and the virtual environment as a collection of polygonized objects. Given the position and orientation of the avatar at discrete time steps, we use an arbitrary in-between motion to interpolate the path for each link between discrete instances. We bound the swept space of each link using interval arithmetic and dynamically compute a bounding volume hierarchy (BVH) to cull links that are not in close proximity to the objects in the virtual environment. The swept volumes (SVs) of the remaining links are used to check for possible interference and estimate the time of collision between the surface of the SV and the rest of the objects. Furthermore, we use graphics hardware to accelerate collision queries on the dynamically generated swept surfaces. Our approach requires no precomputation and is applicable to general articulated bodies that do not contain a loop. We have implemented the algorithm on a 2.8 GHz Pentium IV PC with an NVIDIA GeForce 6800 Ultra graphics card and applied it to an avatar with 16 links, moving in a virtual environment composed of hundreds of thousands of polygons. Our prototype system is able to detect all contacts between the moving avatar and the environment in 10--30 ms.
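
The no-missed-collision guarantee can be conveyed with a much simpler cousin of the paper's method: conservative advancement for a single moving sphere, where each time step is bounded by clearance divided by maximum speed so the object can never tunnel. The paper achieves the analogous guarantee for articulated avatars via swept volumes and interval arithmetic; the sketch below is only the single-body idea.

```python
def first_time_of_contact(center_at, radius, distance_to_env, max_speed,
                          t0=0.0, t1=1.0, eps=1e-4):
    """Conservative advancement for a sphere moving over [t0, t1].

    center_at(t) gives the sphere center; distance_to_env(p) the distance
    from point p to the nearest environment geometry. Because the surface
    cannot move faster than max_speed, stepping time forward by
    clearance / max_speed can never skip a collision. Returns the first
    contact time (within tolerance eps), or None if no contact before t1."""
    t = t0
    while t < t1:
        clearance = distance_to_env(center_at(t)) - radius
        if clearance <= eps:
            return t                    # contact found
        t += clearance / max_speed      # safe step: cannot tunnel
    return None
```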

Journal ArticleDOI
TL;DR: A new isometric input device for multi-fingered grasping in virtual environments, designed to simultaneously assess forces applied by the thumb, index, and middle finger, is presented.
Abstract: In this article we present a new isometric input device for multi-fingered grasping in virtual environments. The device was designed to simultaneously assess forces applied by the thumb, index, and middle finger. A mathematical model of grasping, adopted from the analysis of multi-fingered robot hands, was applied to achieve multi-fingered interaction with virtual objects. We used the concept of visual haptic feedback where the user was presented with visual cues to acquire haptic information from the virtual environment. The virtual object corresponded dynamically to the forces and torques applied by the three fingers. The application of the isometric finger device for multi-fingered interaction is demonstrated in four tasks aimed at the rehabilitation of hand function in stroke patients. The tasks include opening the combination lock on a safe, filling and pouring water from a glass, muscle strength training with an elastic torus, and a force tracking task. The training tasks were designed to train patients' grip force coordination and increase muscle strength through repetitive exercises. The presented virtual reality system was evaluated in a group of healthy subjects and two post-stroke patients (early post-stroke and chronic) to obtain overall performance results. The healthy subjects demonstrated consistent performance with the finger device after the first few trials. The two post-stroke patients completed all four tasks, however, with much lower performance scores as compared to healthy subjects. The results of the preliminary assessment suggest that the patients could further improve their performance through virtual reality training.
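
At the core of grasp models adopted from robot-hand analysis is mapping the three fingertip forces to a net wrench (force and torque) on the object, which then drives the virtual object's response. A simplified sketch, with contact kinematics omitted:

```python
import numpy as np

def object_wrench(contact_points, finger_forces):
    """Net force and torque that the thumb, index, and middle finger exert
    on the grasped virtual object (torque about the object's origin).

    contact_points: 3x3 array, one contact position per finger.
    finger_forces:  3x3 array, one force vector per finger.
    This resultant-wrench step is the core of multi-fingered grasp models;
    rolling/sliding contact kinematics are simplified away here."""
    contact_points = np.asarray(contact_points, dtype=float)
    finger_forces = np.asarray(finger_forces, dtype=float)
    net_force = finger_forces.sum(axis=0)
    net_torque = np.cross(contact_points, finger_forces).sum(axis=0)
    return net_force, net_torque
```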