
Showing papers in "Presence: Teleoperators & Virtual Environments in 2010"


Journal ArticleDOI
TL;DR: The OpenViBE software platform, which enables researchers to design, test, and use brain-computer interfaces (BCIs), is described, and its suitability for the design of VR applications controlled with a BCI is shown.
Abstract: This paper describes the OpenViBE software platform which enables researchers to design, test, and use brain-computer interfaces (BCIs). BCIs are communication systems that enable users to send commands to computers solely by means of brain activity. BCIs are gaining interest among the virtual reality (VR) community since they have appeared as promising interaction devices for virtual environments (VEs). The key features of the platform are (1) high modularity, (2) embedded tools for visualization and feedback based on VR and 3D displays, (3) BCI design made available to non-programmers thanks to visual programming, and (4) various tools offered to the different types of users. The platform features are illustrated in this paper with two entertaining VR applications based on a BCI. In the first one, users can move a virtual ball by imagining hand movements, while in the second one, they can control a virtual spaceship using real or imagined foot movements. Online experiments with these applications, together with an evaluation of the platform's computational performance, showed its suitability for the design of VR applications controlled with a BCI. OpenViBE is free software distributed under an open-source license.
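
To make concrete the kind of processing chain such a platform lets non-programmers assemble visually, here is a minimal sketch of a motor-imagery pipeline (band-pass filter, band-power features, linear classifier). It does not use OpenViBE's actual API; the sampling rate, band limits, and toy data are illustrative assumptions.

```python
# Minimal sketch of a motor-imagery processing chain of the kind OpenViBE
# assembles visually (band-pass filter -> band power -> LDA classifier).
# This is NOT OpenViBE's API; names and parameters are illustrative only.
import numpy as np
from scipy.signal import butter, lfilter
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 512  # assumed EEG sampling rate (Hz)

def band_power(eeg, lo=8.0, hi=30.0, fs=FS):
    """Band-pass the signal (mu/beta band) and return log band power per channel."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = lfilter(b, a, eeg, axis=-1)
    return np.log(np.var(filtered, axis=-1))

# Toy training data: trials of shape (n_trials, n_channels, n_samples),
# labels 0/1 for "left hand" vs. "right hand" imagined movement.
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 8, 2 * FS))
labels = np.repeat([0, 1], 20)

features = np.array([band_power(t) for t in trials])
clf = LinearDiscriminantAnalysis().fit(features, labels)
print("predicted class:", clf.predict(features[:1]))  # would drive e.g. the virtual ball
```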

687 citations


Journal ArticleDOI
TL;DR: Current approaches to immersive journalism and the theoretical background supporting claims regarding avatar experience in immersive systems are surveyed and a specific demonstration is provided: giving participants the experience of being in an interrogation room in an offshore prison.
Abstract: This paper introduces the concept and discusses the implications of immersive journalism, which is the production of news in a form in which people can gain first-person experiences of the events or situations described in news stories. The fundamental idea of immersive journalism is to allow the participant, typically represented as a digital avatar, to actually enter a virtually recreated scenario representing the news story. The sense of presence obtained through an immersive system (whether a Cave, head-tracked head-mounted displays [HMDs], or online virtual worlds such as video games) affords the participant unprecedented access to the sights and sounds, and possibly feelings and emotions, that accompany the news. This paper surveys current approaches to immersive journalism and the theoretical background supporting claims regarding avatar experience in immersive systems. We also provide a specific demonstration: giving participants the experience of being in an interrogation room in an offshore prison. By both describing current approaches and demonstrating an immersive journalism experience, we open a new avenue for research into how presence can be utilized in the field of news and nonfiction.

350 citations


Journal ArticleDOI
TL;DR: Using an action-based response measure, it is found that participants who explored near space while seeing a fully-articulated and tracked visual representation of themselves subsequently made more accurate judgments of absolute egocentric distance to locations ranging from 4 m to 6 m away from where they were standing than did participants who saw no avatar.
Abstract: Few HMD-based virtual environment systems display a rendering of the user's own body. Subjectively, this often leads to a sense of disembodiment in the virtual world. We explore the effect of being able to see one's own body in such systems on an objective measure of the accuracy of one form of space perception. Using an action-based response measure, we found that participants who explored near space while seeing a fully-articulated and tracked visual representation of themselves subsequently made more accurate judgments of absolute egocentric distance to locations ranging from 4 m to 6 m away from where they were standing than did participants who saw no avatar. A nonanimated avatar also improved distance judgments, but by a lesser amount. Participants who viewed either animated or static avatars positioned 3 m in front of their own position made subsequent distance judgments with similar accuracy to the participants who viewed the equivalent animated or static avatar positioned at their own location. We discuss the implications of these results on theories of embodied perception in virtual environments.

194 citations


Journal ArticleDOI
Eric D. Ragan
TL;DR: The results suggest that, for procedure memorization tasks, increasing the level of immersion even to moderate levels, such as those found in head-mounted displays (HMDs) and display walls, can improve performance significantly compared to lower levels of immersion.
Abstract: Researchers have proposed that immersion could have advantages for tasks involving abstract mental activities, such as conceptual learning; however, there are few empirical results that support this idea. We hypothesized that higher levels of immersion would benefit such tasks if the mental activity could be mapped to objects or locations in a 3D environment. To investigate this hypothesis, we performed an experiment in which participants memorized procedures in a virtual environment and then attempted to recall those procedures. We aimed to understand the effects of three components of immersion on performance. The results demonstrate that a matched software field of view (SFOV), a higher physical field of view (FOV), and a higher field of regard (FOR) all contributed to more effective memorization. The best performance was achieved with a matched SFOV and either a high FOV or a high FOR, or both. In addition, our experiment demonstrated that memorization in a virtual environment could be transferred to the real world. The results suggest that, for procedure memorization tasks, increasing the level of immersion even to moderate levels, such as those found in head-mounted displays (HMDs) and display walls, can improve performance significantly compared to lower levels of immersion. Hypothesizing that the performance improvements provided by higher levels of immersion can be attributed to enhanced spatial cues, we discuss the values and limitations of supplementing conceptual information with spatial information in educational VR.

127 citations


Journal ArticleDOI
TL;DR: This paper presents results and experiences coming from 10 years of development and use of XVR, a flexible, general-purpose framework for virtual reality (VR) development, showing how inhomogeneous needs and technologies can be effectively covered by using a single, rather simple, system organization.
Abstract: This paper presents results and experiences coming from 10 years of development and use of XVR, a flexible, general-purpose framework for virtual reality (VR) development. The resulting architecture, which takes the form of a self-sufficient integrated development environment (IDE) organized around a dedicated scripting language and a virtual machine, is able to accommodate a wide range of application needs, ranging from simple Web3D applications to motion-based simulators and complex cluster-based immersive visualization systems. Within the framework, a common, archetypical structure is used for every application, showing how inhomogeneous needs and technologies can be effectively covered by using a single, rather simple, system organization. We also show how the framework's flexibility allows for innovative development techniques, such as multiple frameworks coexisting within a single, tightly integrated VR application.

89 citations


Journal ArticleDOI
TL;DR: A reusable, highly configurable application framework is presented that seamlessly integrates SSVEP stimuli within a desktop-based virtual environment (VE) on standard PC equipment and could lead to vastly improved immersive VEs that allow both disabled and healthy users to seamlessly communicate or interact through an intuitive, natural, and friendly interface.
Abstract: This paper presents a reusable, highly configurable application framework that seamlessly integrates SSVEP stimuli within a desktop-based virtual environment (VE) on standard PC equipment. Steady-state visual evoked potentials (SSVEPs) are brain signals that offer excellent information transfer rates (ITR) within brain-computer interface (BCI) systems while requiring only minimal training. Generating SSVEP stimuli in a VE allows for an easier implementation of motivating training paradigms and more realistic simulations of real-world applications. EEG measurements on seven healthy subjects within three scenarios (Button, Slalom, and Apartment) showed that moving and static software-generated SSVEP stimuli flickering at frequencies of up to 29 Hz are suitable to elicit SSVEPs. This research direction could lead to vastly improved immersive VEs that allow both disabled and healthy users to seamlessly communicate or interact through an intuitive, natural, and friendly interface.
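
A minimal sketch of how software-generated flicker at arbitrary target frequencies can be scheduled in a render loop; the frequencies, the 60 Hz refresh assumption, and the half-period duty cycle are illustrative, not taken from the paper.

```python
# Sketch: software-generated SSVEP flicker. Each stimulus derives its
# visibility from the elapsed time so that it flickers at its own target
# frequency. Frequencies and the 60 Hz refresh rate are illustrative.
TARGET_FREQS_HZ = [12.0, 15.0, 20.0]  # one flicker frequency per selectable item
REFRESH_HZ = 60.0

def stimulus_states(frame_index):
    """Return True (lit) / False (dark) for each stimulus on this frame."""
    t = frame_index / REFRESH_HZ
    # A stimulus is lit during the first half of each of its flicker periods.
    return [(t * f) % 1.0 < 0.5 for f in TARGET_FREQS_HZ]

for frame in range(6):  # what a render loop would query every frame
    print(frame, stimulus_states(frame))
```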

77 citations


Journal ArticleDOI
TL;DR: An overview of the state of the art in character engines is given, and a taxonomy of the features that are commonly found in them is proposed, which can be used as a tool for comparison and evaluation of different engines.
Abstract: As animated characters increasingly become vital parts of virtual environments, the engines that drive these characters increasingly become vital parts of virtual environment software. This paper gives an overview of the state of the art in character engines and proposes a taxonomy of the features that are commonly found in them. This taxonomy can be used as a tool for comparison and evaluation of different engines. To demonstrate this, we use it to compare three engines. The first is Cal3D, the most commonly used open source engine. We also introduce two engines created by the authors, Piavca and HALCA. The paper ends with a brief discussion of some other popular engines.

72 citations


Journal ArticleDOI
TL;DR: The results collected with oscillatory movements performed at different frequencies indicate that for some VR systems, the end-to-end delay might not be constant but could vary as a function of the oscillation frequency.
Abstract: A virtual reality (VR) system tracks one or more objects to generate the depiction of a virtual environment from the user's vantage point. No system achieves this instantaneously: changes in the depicted virtual environment lag behind changes in the position of the objects being tracked. In this paper, a method is proposed to quantify this time difference, the end-to-end delay of the VR system. Two light-sensing devices and two luminance gradients are used to simultaneously encode the position of one tracked object and its virtual counterpart. One light-sensing device is attached to the tracked object and captures light from the gradient in the physical environment. The other device captures light from the gradient in the virtual environment. A measurement is obtained by moving the tracked object repetitively (by hand) across the gradient. The end-to-end delay is the asynchrony between the signals generated by the two light-sensing devices. The results collected with oscillatory movements performed at different frequencies indicate that for some VR systems, the end-to-end delay might not be constant but could vary as a function of the oscillation frequency.
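
A minimal sketch of the asynchrony estimate described above: cross-correlating the two light-sensor signals and reading off the lag of the strongest peak. The sampling rate, signal names, and toy test signal are assumptions.

```python
# Sketch: the end-to-end delay is the lag that best aligns the
# physical-gradient sensor signal with the virtual-gradient sensor signal.
import numpy as np

FS = 1000.0  # sensor sampling rate (Hz), illustrative

def end_to_end_delay(physical, virtual, fs=FS):
    """Return the delay (s) of `virtual` relative to `physical` via cross-correlation."""
    p = physical - physical.mean()
    v = virtual - virtual.mean()
    xcorr = np.correlate(v, p, mode="full")
    lag = np.argmax(xcorr) - (len(p) - 1)  # samples by which `virtual` trails `physical`
    return lag / fs

# Toy check: an oscillatory hand movement and a copy delayed by 50 ms.
t = np.arange(0, 2.0, 1 / FS)
physical = np.sin(2 * np.pi * 1.5 * t)        # object moved at 1.5 Hz
virtual = np.roll(physical, int(0.050 * FS))  # 50 ms end-to-end delay
print(f"estimated delay: {end_to_end_delay(physical, virtual) * 1000:.1f} ms")
```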

72 citations


Journal ArticleDOI
TL;DR: Comparison of the results of Experiments II and III confirmed that the psychophysical magnitude function can reliably predict changing trends in the perceived intensity of mobile device vibration.
Abstract: Vibrotactile rendering is one of the most popular means for improving the user interface of a mobile device, but the availability of related perceptual data that can aid vibrotactile effect design is not currently sufficient. The present paper reports data from a series of psychophysical studies designed to fill this gap. In Experiment I, we measured the absolute detection thresholds of sinusoidal vibrotactile stimuli transmitted to the hand through a mobile phone. Stimuli were generated by a mechanical shaker system that can produce vibrations over a broad frequency and amplitude range. The detection thresholds reported here are a new addition to the literature, and can serve as a baseline for vibrotactile stimulus design. In Experiment II, we estimated the perceived intensities of mobile device vibrations for various frequencies and amplitudes using the same shaker system. We also determined a form of parametric nonlinear function based on Stevens' power law and fit the function to the measured data. This psychophysical magnitude function, which maps vibration frequency and amplitude to a resulting perceived intensity, can be used to predict the perceived intensity of a mobile device vibration from its physical parameter values. In Experiment III, we measured another set of perceived intensities using two commercial miniature vibration actuators (vibration motor and voice-coil actuator) in place of the mechanical shaker. The purpose of this experiment was to evaluate the utility of the psychophysical magnitude function obtained in Experiment II, as vibrotactile stimuli produced by miniature actuators may have different physical characteristics, such as vibration direction and ground condition. Comparison of the results of Experiments II and III confirmed that the psychophysical magnitude function can reliably predict changing trends in the perceived intensity of mobile device vibration. We also discuss further research issues encountered during the investigation. The results presented in this paper may be instrumental in the design of effective vibrotactile actuators and perceptually-salient rendering algorithms for mobile devices.
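
A minimal sketch of fitting a Stevens-style power function to magnitude-estimation data, as in Experiment II; the exact parametric form of the paper's frequency-and-amplitude function is not reproduced, and the data here are toy values.

```python
# Sketch: fit a Stevens power function mapping vibration amplitude (at a
# fixed frequency) to perceived intensity. Data and names are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def stevens(amplitude, k, exponent):
    """Perceived intensity = k * amplitude^exponent (Stevens' power law)."""
    return k * amplitude ** exponent

# Toy magnitude-estimation data: amplitudes and mean intensity ratings.
amplitudes = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
ratings = np.array([3.1, 5.2, 8.8, 14.6, 24.9])

(k, exponent), _ = curve_fit(stevens, amplitudes, ratings, p0=(1.0, 1.0))
print(f"fit: intensity = {k:.2f} * A^{exponent:.2f}")
print("predicted intensity at A=3.0:", stevens(3.0, k, exponent))
```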

63 citations


Journal ArticleDOI
TL;DR: A new interaction technique to enable users to perform complex interaction tasks and to navigate within large virtual environments (VE) by using only a BCI based on imagined movements (motor imagery) is proposed.
Abstract: Brain-computer interfaces (BCIs) are interaction devices that enable users to send commands to a computer by using brain activity only. In this paper, we propose a new interaction technique to enable users to perform complex interaction tasks and to navigate within large virtual environments (VEs) by using only a BCI based on imagined movements (motor imagery). This technique enables the user to send high-level mental commands, leaving the application in charge of most of the complex and tedious details of the interaction task. More precisely, it is based on points of interest and enables subjects to send only a few commands to the application in order to navigate from one point of interest to another. Interestingly, the points of interest for a given VE can be generated automatically by processing the VE's geometry. As the navigation between two points of interest is also automatic, the proposed technique can be used to navigate efficiently by thought within any VE. The input of this interaction technique is a newly designed self-paced BCI which enables the user to send three different commands based on motor imagery. This BCI is based on a fuzzy inference system with reject options. In order to evaluate the efficiency of the proposed interaction technique, we compared it with the state-of-the-art method during a virtual museum exploration task. The state-of-the-art method uses low-level commands, meaning that each mental state of the user is associated with a simple command such as turning left or moving forward in the VE. In contrast, our method based on high-level commands enables the user to simply select a destination, leaving the application to perform the movements needed to reach it. Our results showed that with our interaction technique, users can navigate within a virtual museum almost twice as fast as with low-level commands, and with nearly half the commands, meaning less stress and more comfort for the user. This suggests that our technique makes efficient use of the limited capacity of current motor imagery-based BCIs to perform complex interaction tasks in VEs, opening the way to promising new applications.
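
A minimal sketch of the high-level navigation idea: the BCI merely selects a destination among points of interest (POIs), and the application computes and follows the path automatically. The POI graph and names are illustrative assumptions, not the paper's automatic POI-generation method.

```python
# Sketch: BCI commands only pick a destination POI; path following between
# POIs is handled by the application. Graph and names are illustrative.
from collections import deque

POI_GRAPH = {  # adjacency between points of interest in a virtual museum
    "entrance": ["hall"],
    "hall": ["entrance", "painting_A", "sculpture"],
    "painting_A": ["hall"],
    "sculpture": ["hall"],
}

def auto_path(start, goal, graph=POI_GRAPH):
    """Breadth-first path between two POIs; the application walks it for the user."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

# One high-level mental command selects the destination; the rest is automatic.
print(auto_path("entrance", "sculpture"))  # ['entrance', 'hall', 'sculpture']
```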

63 citations


Journal ArticleDOI
TL;DR: This article presents a method for passivating the communication channel of a symmetric position-position teleoperation architecture in the time domain by means of so-called passivity observers (POs) and passivity controllers (PCs), and identifies which networks are susceptible to becoming nonpassive due to the channel characteristics.
Abstract: This article presents a method for passivating the communication channel of a symmetric position-position teleoperation architecture in the time domain. The time domain passivity control approach has recently gained appeal in the context of time-delayed teleoperation because passivity is not established as a design constraint, which often forces conservative rules, but rather as a property which the system must preserve during operation. Since passivity is a network property, the first design rule within this framework is to construct consistent and comprehensible circuit (i.e., network) representations of the mechanical teleoperation system. In particular, the energetic behavior of these networks is interesting because it allows straightforward conclusions about system stability. By means of so-called passivity observers (POs) and passivity controllers (PCs) (Hannaford & Ryu, 2001), the energetic response of a delayed communication channel is captured and modulated over time so that the network in question never becomes nonpassive. The case analyzed in this paper tackles a communication channel that conveys position data back and forth. This type of channel does not offer an intuitive network representation since only flows are actually being transmitted. Although energy clearly travels from one side to the other, port power identification, as defined by the correlated pair of flow and effort, is not evident. This work first investigates how this kind of channel can be represented by means of circuit networks despite the lack of physical effort being transmitted through the channel, and identifies which networks are susceptible to becoming nonpassive due to the channel characteristics (i.e., time delay, discretization, or packet loss). Once this is achieved, a distributed control structure is presented, based on a series of PCs, that keeps the system on the verge of passivity (and therefore stability) independently of the channel properties. The results obtained in simulation and experiment support the presented approach.
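
A minimal sketch of the time-domain passivity observer/controller idea (after Hannaford & Ryu, 2001): the observer integrates the net energy entering the network port, and when that energy would go negative, a variable damper dissipates the deficit. The sampling period, variable names, and sign conventions are illustrative.

```python
# Sketch of a time-domain passivity observer (PO) and controller (PC):
# the PO accumulates net energy at the port; when it would go negative,
# the PC applies just enough damping to dissipate the excess.
DT = 0.001  # control period (s), illustrative

class PassivityObserverController:
    def __init__(self):
        self.energy = 0.0  # accumulated net energy entering the network port

    def step(self, force, velocity):
        """Observe one sample; return the extra damping force (the PC action)."""
        self.energy += force * velocity * DT
        if self.energy >= 0.0 or abs(velocity) < 1e-9:
            return 0.0  # network still passive; no correction needed
        # Dissipate exactly the observed energy deficit with a variable damper.
        alpha = -self.energy / (DT * velocity ** 2)
        self.energy = 0.0
        return -alpha * velocity

poc = PassivityObserverController()
print(poc.step(force=2.0, velocity=0.1))   # passive sample: no correction
print(poc.step(force=-5.0, velocity=0.1))  # energy deficit: damping applied
```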

Journal ArticleDOI
Seung-A Annie Jin1
TL;DR: The results revealed that people with high interdependent self-construals experience closer PSI with a recommendation avatar and feel stronger social presence in SL than people with low interdependent self-construals.
Abstract: 3D virtual environments (VEs) can induce parasocial interaction (PSI) and strong feelings of social presence through interactive communication among avatars. Throughout this research, PSI was operationally defined as the extent of VE users' interpersonal involvement with other avatars and perception of themselves as interacting with the other virtual actors in the environment. Self-construal refers to an individual's view of self. Self-construals play an important role in shaping PSI in interactive media environments. After proposing a typology of the self, the experiment in this study empirically examined the influence of users' interdependent self-construals on their feelings of social presence and PSI with a recommendation avatar in avatar-based communication within the 3D VE of Second Life (SL). The results revealed that people with high interdependent self-construals experience closer PSI with a recommendation avatar and feel stronger social presence in SL than people with low interdependent self-construals. A path analysis also demonstrated that social presence mediates the effects of users' self-construals on their PSI with a recommendation avatar in VEs.

Journal ArticleDOI
TL;DR: The results suggest that the detection of delay in force feedback depends on the movement frequency and amplitude, while variation of the absolute feedback force level does not influence the detection threshold.
Abstract: Time delay is recognized as an important issue in haptic telepresence systems, as it is inherent to long-distance data transmission. The factors that influence haptic delay perception in a time-delayed environment are, however, largely unknown. In this article, we examine the impact of manual movement frequency and amplitude in a sinusoidal exploratory movement, as well as the stiffness of the haptic environment, on the detection threshold for delay in haptic feedback. The results suggest that the detection of delay in force feedback depends on the movement frequency and amplitude, while variation of the absolute feedback force level does not influence the detection threshold. A model based on the exploration movement is proposed, and guidelines for system design with respect to time delay in haptic feedback are provided.

Journal ArticleDOI
TL;DR: It is suggested that passive haptic feedback improves both presence and task performance; however, small but significant differences related to the interaction metaphor were only apparent when haptic feedback was not provided.
Abstract: This paper explores the influence of passive haptic feedback on presence and task performance using two important interaction metaphors. We compared direct interaction with the user's hand with interaction using a stylus. Twenty-four participants performed a simple selection task consisting of pressing buttons while playing a memory game, with haptic feedback and interaction metaphor as the independent variables. We measured task performance by computing errors and time between button presses. We measured presence with questionnaires and through a new method based on users' involuntary movements. Our results suggest that passive haptic feedback improves both presence and task performance. However, small but significant differences related to the interaction metaphor were only apparent when haptic feedback was not provided.

Journal ArticleDOI
TL;DR: The results indicate that the stimulus condition had no significant effect on female participants, while male participants were significantly more likely to rule against the character when her visual appearance was computer generated and her movements were jerky.
Abstract: Simulated humans in computer interfaces are increasingly taking on roles that were once reserved for real humans. The presentation of simulated humans is affected by their appearance, motion quality, and interactivity. These presentational factors can influence the decisions of those who interact with them. This is of concern to interface designers and users alike, because these decisions often have moral and ethical consequences. However, the impact of presentational factors on decisions in ethical dilemmas has not been explored. This study is intended as a first effort toward filling this gap. In a between-groups experiment, a female character presented participants with an ethical dilemma. The character's human photorealism and motion quality were varied to generate four stimulus conditions: real human versus computer-generated character × fluid versus jerky movement. The results indicate that the stimulus condition had no significant effect on female participants, while male participants were significantly more likely to rule against the character when her visual appearance was computer generated and her movements were jerky.

Journal ArticleDOI
TL;DR: This paper presents a system for the evaluation of the shape of aesthetic products based on a haptic strip that conforms to a curve that the designer wishes to feel, explore, and analyze by physically touching it.
Abstract: This paper presents a system for the evaluation of the shape of aesthetic products. The evaluation of shapes is based on characteristic curves, a typical practice in the industrial design domain. The system, inspired by characteristic curves, is based on a haptic strip that conforms to a curve that the designer wishes to feel, explore, and analyze by physically touching it. The haptic strip is an innovative solution in the haptics domain, although it has some limitations concerning the domain of curves that can actually be represented. In order to extend this domain and make users feel the various curve features, for example curvature discontinuities, sound has been exploited as an additional information modality.

Journal ArticleDOI
TL;DR: A candid reflection on the issues surrounding virtual environment design and implementation (VEDI) is presented in order to motivate the topic as a research-worthy undertaking and to attempt a comprehensive listing of impeding VEDI issues so they can be addressed.
Abstract: We present a candid reflection on the issues surrounding virtual environment design and implementation (VEDI) in order to: (1) motivate the topic as a research-worthy undertaking, and (2) attempt a comprehensive listing of impeding VEDI issues so they can be addressed. In order to structure this reflection, an idealized model of VEDI is presented. Investigating this model using mixed methods yielded 67 distinct issues along the model's transitions and pathways. These were clustered into 11 themes and used to support five VEDI research challenges.

Journal ArticleDOI
TL;DR: The underlying perceptual model of the deadband approach for haptic signals is extended by incorporating psychophysical findings on human force-feedback discrimination during operators' relative hand movements, yielding further improvement in efficiency and performance due to improved adaptation to human haptic perception thresholds.
Abstract: In telepresence and teleaction (TPTA) systems, the transmission of haptic signals puts high demands on the applied signal processing and communication procedures. When running a TPTA session across a packet-based communication network (e.g., the Internet), minimizing the end-to-end delay results in packet rates of up to the applied sampling rate of the local control loops at the human system interface and the teleoperator. The perceptual deadband data reduction approach for haptic signals successfully addresses the challenge of high packet rates in networked TPTA systems and satisfies the strict delay constraints. In this paper, we extend the underlying perceptual model of the deadband approach by incorporating psychophysical findings on human force-feedback discrimination during operators' relative hand movements. By applying velocity-dependent perception thresholds to the deadband approach, we observe further improvement in efficiency and performance due to improved adaptation to human haptic perception thresholds. The psychophysical experiments conducted reveal improved data reduction performance of our proposed haptic perceptual coding scheme without impairing the user experience. Our results show a high data reduction ability of up to 96% without affecting system transparency or the operator's task performance.
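
A minimal sketch of deadband transmission with a velocity-dependent threshold: a force sample is transmitted only when it leaves a perceptual zone around the last transmitted value, and the zone widens with hand speed. The parameter values are illustrative assumptions, not the paper's fitted thresholds.

```python
# Sketch: perceptual deadband transmission. A new force sample is sent only
# when it differs from the last transmitted value by more than a Weber-style
# relative threshold, which here grows with hand speed (velocity-dependent).
def make_deadband_sender(base_deadband=0.10, velocity_gain=0.05):
    last_sent = {"value": None}

    def send_if_needed(force, hand_speed):
        """Return the force to transmit, or None to drop the sample."""
        threshold = base_deadband * (1.0 + velocity_gain * hand_speed)
        if last_sent["value"] is None or \
           abs(force - last_sent["value"]) > threshold * abs(last_sent["value"]):
            last_sent["value"] = force
            return force
        return None  # imperceptible change: packet suppressed

    return send_if_needed

send = make_deadband_sender()
for force, speed in [(1.00, 0.0), (1.05, 0.0), (1.20, 0.0), (1.25, 2.0)]:
    print(force, "->", send(force, speed))
```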

Journal ArticleDOI
TL;DR: An asynchronous brain-computer interface is presented that enables the control of a wheelchair in virtual environments using only one motor imagery task; users can voluntarily switch from this interface to a noncontrol interface when they do not want to generate any command.
Abstract: In this paper, an asynchronous brain-computer interface is presented that enables the control of a wheelchair in virtual environments using only one motor imagery task. The control is achieved through a graphical intentional control interface with three navigation commands (move forward, turn right, and turn left) which are displayed surrounding a circle. A bar rotates in the center of the circle, so it points successively to the three possible commands. The user can, by motor imagery, extend the length of this bar to select the command at which the bar is pointing. Once a command is selected, the virtual wheelchair moves in a continuous way, so the user controls the length of the advance or the amplitude of the turns. Users can voluntarily switch from this interface to a noncontrol interface (and vice versa) when they do not want to generate any command. After performing cue-based feedback training, three subjects carried out an experiment in which they had to navigate through the same fixed path to reach an objective. The results obtained support the viability of the system.
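
A minimal sketch of the rotating-bar selection logic: sustained motor-imagery activity extends the bar, and a selection fires for whichever command the bar is pointing at when the length limit is reached. Dwell time, thresholds, and growth rate are illustrative assumptions.

```python
# Sketch: rotating-bar intentional control interface. The bar points at each
# command in turn; sustained motor imagery extends it until a selection fires.
COMMANDS = ["forward", "turn_right", "turn_left"]
DWELL_FRAMES = 90     # frames the bar spends pointing at each command
SELECT_LENGTH = 1.0   # bar length at which a selection fires

def run_interface(imagery_scores):
    """imagery_scores: per-frame classifier output in [0, 1]."""
    length = 0.0
    for frame, score in enumerate(imagery_scores):
        pointed_at = COMMANDS[(frame // DWELL_FRAMES) % len(COMMANDS)]
        length = length + 0.05 if score > 0.5 else max(0.0, length - 0.05)
        if length >= SELECT_LENGTH:
            return pointed_at
    return None  # noncontrol: no command generated

# Weak activity while "forward" is highlighted, then strong imagery.
scores = [0.2] * 95 + [0.9] * 40
print(run_interface(scores))  # -> 'turn_right'
```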

Journal ArticleDOI
TL;DR: The results show that judgments made by goalkeepers in the L4 condition are significantly less accurate than in all the other conditions, which means that the goalkeepers' perception of the movement is influenced more by the size of the ball during the judgment task than by the graphical LOD of the throwing action.
Abstract: Virtual reality has a number of advantages for analyzing sports interactions, such as the standardization of experimental conditions, stereoscopic vision, and complete control of animated humanoid movement. Nevertheless, in order to be useful for sports applications, accurate perception of simulated movement in the virtual sports environment is essential. This perception depends on parameters of the synthetic character, such as the number of degrees of freedom of its skeleton or the level of detail (LOD) of its graphical representation. This study focuses on the influence of this latter parameter on the perception of the movement. In order to evaluate it, this study analyzes the judgments of immersed handball goalkeepers who play against a graphically modified virtual thrower. Five graphical representations of the throwing action were defined: a textured reference level (L0), a nontextured level (L1), a wire-frame level (L2), a moving point light display (MLD) level with a normal-sized ball (L3), and an MLD level where the ball is represented by a point of light (L4). The results show that judgments made by goalkeepers in the L4 condition are significantly less accurate than in all the other conditions (p < .001). This finding means that the goalkeepers' perception of the movement is influenced more by the size of the ball during the judgment task than by the graphical LOD of the throwing action. The MLD representation of the movement thus appears to be sufficient for sports duel analysis in virtual environments.

Journal ArticleDOI
TL;DR: It is argued that application modularity is a key concept for helping the developer handle the complexity of these applications; this insight led to the development of the FlowVR middleware, which associates a data-flow model with a hierarchical component model.
Abstract: This paper focuses on the design of high performance VR applications. These applications usually involve various I/O devices and complex simulations. A parallel architecture or grid infrastructure is required to provide the necessary I/O and processing capabilities. Developing such applications faces several difficulties, two important ones being software engineering and performance issues. We argue that application modularity is a key concept to help the developer handle the complexity of these applications. We discuss how various approaches borrowed from other existing works can be combined to significantly improve the modularity of VR applications. This led to the development of the FlowVR middleware, which associates a data-flow model with a hierarchical component model. Several case studies are presented to discuss the benefits of the proposed approach.
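
A toy sketch of the data-flow idea underlying such middleware: modules expose processing functions, and links forward each module's output downstream every iteration. This is an illustrative model, not FlowVR's actual API.

```python
# Toy data-flow graph: modules connected by links, with messages propagated
# along the links once per frame. Illustrative only; not FlowVR's API.
class Module:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

class Graph:
    def __init__(self):
        self.links = []  # (source, destination), assumed topologically ordered

    def connect(self, src, dst):
        self.links.append((src, dst))

    def iterate(self, injected):
        """One frame: propagate data along every link in order."""
        values = dict(injected)
        for src, dst in self.links:
            values[dst.name] = dst.fn(values[src.name])
        return values

g = Graph()
tracker = Module("tracker", lambda _: None)  # source module; data injected below
renderer = Module("renderer", lambda msg: f"render from head at {msg['head']}")
g.connect(tracker, renderer)
print(g.iterate({"tracker": {"head": (0.0, 1.7, 0.0)}})["renderer"])
```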

Journal ArticleDOI
TL;DR: It is proposed that future P300-based BCIs in VR be set up so as to require users to make some inference about the virtual space, so that they become aware of it, which is likely to lead to higher reported presence.
Abstract: Brain-computer interfaces (BCIs) are becoming more and more popular as an input device for virtual worlds and computer games. Depending on their function, a major drawback is the mental workload associated with their use, and significant effort and training are required to control them effectively. In this paper, we present two studies assessing how the mental workload of a P300-based BCI affects participants' reported sense of presence in a virtual environment (VE). In the first study, we employ a BCI exploiting the P300 event-related potential (ERP) that allows control of over 200 items in a virtual apartment. In the second study, the BCI is replaced by a gaze-based selection method coupled with wand navigation. In both studies, overall performance is measured and individual presence scores are assessed by means of a short questionnaire. The results suggest that there is no immediate benefit to visualizing events in the VE triggered by the BCI and that no learning about the layout of the virtual space takes place. In order to alleviate this, we propose that future P300-based BCIs in VR be set up so as to require users to make some inference about the virtual space so that they become aware of it, which is likely to lead to higher reported presence.

Journal ArticleDOI
TL;DR: The results suggest that P300 BCIs can be used successfully in a 3D environment, pointing to some novel ways of using BCIs in real-world environments.
Abstract: Brain-computer interfaces (BCIs) provide a novel form of human-computer interaction. The purpose of these systems is to aid disabled people by affording them the possibility of communication and environment control. In this study, we present experiments using a P300-based BCI in a fully immersive virtual environment (IVE). P300 BCIs depend on presenting several stimuli to the user. We propose two ways of embedding the stimuli in the virtual environment: one that uses 3D objects as targets, and a second that uses a virtual overlay. Both have been shown to work effectively with no significant difference in selection accuracy. The results suggest that P300 BCIs can be used successfully in a 3D environment, which suggests some novel ways of using BCIs in real-world environments.
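
A minimal sketch of how a P300-based selection can be scored: epochs time-locked to each item's flashes are averaged, and the item with the largest positive deflection in the P300 window is chosen. The sampling rate, window, and toy data are assumptions, not the paper's classifier.

```python
# Sketch: P300 target identification by epoch averaging. The item whose
# averaged post-stimulus response shows the largest positive deflection
# around 300 ms is selected. Window and rate are illustrative.
import numpy as np

FS = 256  # EEG sampling rate (Hz), assumed
P300_WIN = slice(int(0.25 * FS), int(0.45 * FS))  # ~250-450 ms post-stimulus

def pick_target(epochs_by_item):
    """epochs_by_item: {item: array (n_flashes, n_samples)} for one channel."""
    scores = {item: ep.mean(axis=0)[P300_WIN].mean()
              for item, ep in epochs_by_item.items()}
    return max(scores, key=scores.get)

# Toy data: the 'lamp' item carries a small positive bump in the P300 window.
rng = np.random.default_rng(1)
n_flashes, n_samples = 10, int(0.8 * FS)
epochs = {"door": rng.standard_normal((n_flashes, n_samples)),
          "lamp": rng.standard_normal((n_flashes, n_samples))}
epochs["lamp"][:, P300_WIN] += 1.0
print(pick_target(epochs))  # -> 'lamp'
```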

Journal ArticleDOI
TL;DR: Results indicate that providing the full range of movement, even though this range is not necessary to accomplish a task, has a beneficial effect on the feeling of telepresence and task performance in terms of measured interaction forces.
Abstract: One of the main objectives in telerobotics is the development of a telemanipulation system that allows a high task performance to be achieved by simultaneously providing a high degree of telepresence. Specific mechatronic design guidelines and appropriate control algorithms as well as augmented visual, auditory, and haptic feedback systems are typical approaches adopted in this context. This work aims at formulating new design guidelines by incorporating human factors in the development process and analyzing the effects of varied human movement control on task performance and on the feeling of telepresence. While it is well known that humans are able to coordinate and integrate multiple degrees of freedom (DOF), the focus of this work is on how humans utilize rotational degrees of freedom provided by a human-system interface and if and how varied human movement control affects task performance and the feeling of telepresence. For this analysis, a telemanipulation experiment with varying degrees of freedom has been conducted. The results indicate that providing the full range of movement, even though this range is not necessary to accomplish a task, has a beneficial effect on the feeling of telepresence and task performance in terms of measured interaction forces. Further, increasing visual depth cues provided to the human operator also had a positive effect.

Journal ArticleDOI
M. Wellner, Robert Riener
TL;DR: A rowing simulator with a CAVE setup was used to test the influence of virtual competitors on 10 experienced rowers; the results indicated a high degree of presence for most participants, but the paradigm was compromised by perceptive and subjective factors.
Abstract: Highly immersive environments for sports simulation can help elucidate if and how athletes perform in high-pressure situations. We used a rowing simulator with a CAVE setup to test the influence of virtual competitors on 10 experienced rowers. All participants were using the simulator for the first time. The objective was to assess the degree of presence by quantifying how the actions of the virtual competitors triggered behavioral changes in the experienced rowers. The participants completed a virtual 2000 m race with two competing boats, one behind and one ahead of the participant. In two trials, each boat would come closer to the participant without overtaking, resulting in four experimental conditions. The behavior of the participants was assessed with biomechanical variables, questionnaires, and an interview after the race. Behavioral changes were detected as statistically significant differences in the extracted variables of oar angles, timing variables, velocities, and work. The results for the biomechanical variables indicate individual response patterns depending on perception of competitors and self-confidence. Self-reporting indicated a high degree of presence for most participants. Overall, the experimental paradigm worked but was compromised by perceptive and subjective factors. In the future, the setup will be used to investigate rowing performance further, with a focus on motor learning and training for high-pressure situations.

Journal ArticleDOI
TL;DR: Kinematic analysis was combined with virtual reality to determine why such differences occur depending on the ball landing zone and, consequently, to clarify the role of different sources of visual information on the motor behavior of an athlete immersed in a virtual environment.
Abstract: In order to use virtual reality as a sport analysis tool, we need to be sure that an immersed athlete reacts realistically in a virtual environment. This has been validated for a real handball goalkeeper facing a virtual thrower. However, it is currently unknown which visual variables induce a realistic motor behavior in the immersed handball goalkeeper. In this study, we used virtual reality to dissociate the visual information related to the movements of the player from the visual information related to the trajectory of the ball. The aim is thus to evaluate the relative influence of these different sources of visual information on the goalkeeper's motor behavior. We tested 10 handball goalkeepers who had to predict the final position of the virtual ball in the goal when facing the following: only the throwing action of the attacking player (TA condition), only the resulting ball trajectory (BA condition), and both the throwing action of the attacking player and the resulting ball trajectory (TB condition). We show that performance was better in the BA and TB conditions, but contrary to expectations, performance was substantially worse in the TA condition. A significant effect of ball landing zone does, however, suggest that the relative importance of visual information from the player and the ball depends on the targeted zone in the goal. In some cases, body-based cues embedded in the throwing action may have a minor influence on the ball trajectory and vice versa. Kinematic analysis was then combined with these results to determine why such differences occur depending on the ball landing zone and, consequently, to clarify the role of different sources of visual information on the motor behavior of an athlete immersed in a virtual environment.

Journal ArticleDOI
TL;DR: Topics are organized under the major headings of 3D space management, supporting display hardware, interaction, event management, time management, computation, portability, and the observation that less can be better.
Abstract: What are desirable and undesirable features of virtual environment (VE) software architectures? What should be present (and absent) from such systems if they are to be optimally useful? How should they be structured? In order to help answer these questions, we present experience from application designers, toolkit designers, and VE system architects along with examples of useful features from existing systems. Topics are organized under the major headings of 3D space management, supporting display hardware, interaction, event management, time management, computation, portability, and the observation that less can be better. Lessons learned are presented as discussion of the issues, field experiences, nuggets of knowledge, and case studies.

Journal ArticleDOI
TL;DR: MiroSurge's new user interaction modalities are introduced, including haptic feedback with software-based preservation of the fulcrum point, an ultrasound-based approach to the quasi-tactile detection of pulsating vessels, and a contact-free interface between surgeon and telesurgery system, where stereo vision is augmented with force vectors at the tool tip.
Abstract: This paper presents MiroSurge, a telepresence system for minimally invasive surgery developed at the German Aerospace Center (DLR), and introduces MiroSurge's new user interaction modalities: (1) haptic feedback with software-based preservation of the fulcrum point, (2) an ultrasound-based approach to the quasi-tactile detection of pulsating vessels, and (3) a contact-free interface between surgeon and telesurgery system, where stereo vision is augmented with force vectors at the tool tip. All interaction modalities aim to increase the user's perception beyond stereo imaging by either augmenting the images or by using haptic interfaces. MiroSurge currently provides surgeons with two different interfaces. The first option, bimanual haptic interaction with force and partial tactile feedback, allows for direct perception of the remote environment. Alternatively, users can choose to control the surgical instruments by optically tracked forceps held in their hands. Force feedback is then provided in augmented stereo images by constantly updated force vectors displayed at the centers of the teleoperated instruments, regardless of the instruments' position within the video image. To determine the centerpoints of the instruments, artificial markers are attached and optically tracked. A new approach to detecting pulsating vessels beneath covering tissue with an omnidirectional ultrasound Doppler sensor is presented. The measurement results are computed and can be provided acoustically (by displaying the typical Doppler sound), optically (by augmenting the endoscopic video stream), or kinesthetically (by a gentle twitching of the haptic input devices). The control structure preserves the fulcrum point in minimally invasive surgery and user commands are followed by the surgical instrument. Haptic feedback allows the user to distinguish between interaction with soft and hard environments. The paper includes technical evaluations of the features presented, as well as an overview of the system integration of MiroSurge.

Journal ArticleDOI
TL;DR: This work highlights several issues involved in deploying industrial telepresence systems to manipulate and assemble microparts as well as heavy objects.
Abstract: In contrast to automated production, human intelligence is deemed necessary for successful execution of assembly tasks that are difficult or expensive to automate in small and medium lots. However, human ability is hindered in some cases by physical barriers such as miniaturization or in contrast, very heavy components. Telepresence technology can be considered a solution for performing a wide variety of assembly tasks where human intelligence and haptic sense are needed. This work highlights several issues involved in deploying industrial telepresence systems to manipulate and assemble microparts as well as heavy objects. Two sets of experiments are conducted to investigate telepresence related aspects in an industrial setting. The first experiment evaluates the usefulness of haptic feedback for a human operator in a standard pick-and-place task. Three operation modes were considered: visual feedback, force feedback, and force assistance (realized as vibration). In the second experiment, two different guidance strategies for the teleoperator were tested. The comparison between a position and a velocity scheme in terms of task completion time and subjective preferences is presented.

Journal ArticleDOI
TL;DR: Olfactory adaptation can be alleviated by modulating the odor concentration randomly to mimic the random fluctuations of turbulent flow fields in real environments, and discrepancies between real and simulated olfactory sensation can be attributed to convection caused by human body temperature.
Abstract: This paper describes some fluid dynamic considerations for attaining realistic odor presentation using an olfactory display. Molecular diffusion is an extremely slow process, and odor molecules released from their source are spread by being carried off by airflow. Therefore, we propose to use a computational fluid dynamics (CFD) simulation in conjunction with the olfactory display. The CFD solver is employed to calculate the turbulent airflow field in the given environment and the dispersal of odor molecules from their source. The simulation result is used to reproduce at the nose the realistic change in the odor concentration over time and space. However, our initial sensory test for evaluating the proposed method was not completely successful, and we also found some discrepancies between our real-life olfactory sensation and the experience of the CFD-based olfactory display. Here we report some insights to overcome these problems. In the initial sensory test, a nontrivial portion of the subjects did not properly recognize the spatial variation in the odor intensity. The result of our recent sensory test is presented in this paper to show that better contrast in the perceived odor intensity can be provided when the concentration range of the released odor is adjusted for the variation in the olfactory sensitivity of individual subjects. We noted that olfactory adaptation occurred more quickly in the initial sensory test of the CFD-based olfactory display than in real environments. In this paper, we show that olfactory adaptation can be alleviated by modulating the odor concentration randomly to mimic the random fluctuations of the turbulent flow fields in real environments. We also noted in our initial sensory test that there were sometimes discrepancies between our olfactory sensation in real environments and the simulated odor distribution. We show in this paper that this discrepancy can be attributed to the convection caused by the human body temperature, which brings odor vapor drifting around our feet up to our noses.
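
A minimal sketch of the adaptation countermeasure described above: the commanded concentration is randomly modulated around the CFD-simulated value to mimic turbulent fluctuation. The fluctuation model and its magnitude are illustrative assumptions, not the paper's parameters.

```python
# Sketch: instead of presenting a steady concentration, randomly modulate the
# commanded odor concentration around the CFD value to mimic turbulent
# fluctuation and so delay olfactory adaptation. Parameters are illustrative.
import random

def modulated_concentration(cfd_value, fluctuation=0.4, floor=0.0):
    """Return one time step's commanded concentration around the CFD estimate."""
    factor = 1.0 + random.uniform(-fluctuation, fluctuation)
    return max(floor, cfd_value * factor)

random.seed(0)
steady = 2.0  # CFD-simulated concentration at the nose (arbitrary units)
print([round(modulated_concentration(steady), 2) for _ in range(5)])
```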