
Showing papers in "Presence: Teleoperators & Virtual Environments in 2014"


Journal ArticleDOI
TL;DR: The integration of a handshake within the HRI system illustrates the effectiveness of the proposed online generation method, and the experimental results demonstrate that the approach can recognize the upper body gestures with high accuracy in real time.
Abstract: In this paper, a human-robot interaction system based on a novel combination of sensors is proposed. It allows one person to interact with a humanoid social robot using natural body language. The robot understands the meaning of human upper body gestures and expresses itself by using a combination of body movements, facial expressions, and verbal language. A set of 12 upper body gestures is used for communication. This set also includes gestures with human-object interactions. The gestures are characterized by head, arm, and hand posture information. The wearable Immersion CyberGlove II is employed to capture the hand posture. This information is combined with the head and arm posture captured from Microsoft Kinect. This is a new sensor solution for human-gesture capture. Based on the posture data from the CyberGlove II and Kinect, an effective and real-time human gesture recognition method is proposed. The gesture understanding approach based on an innovative combination of sensors is the main contribution of this paper. To verify the effectiveness of the proposed gesture recognition method, a human body gesture data set is built. The experimental results demonstrate that our approach can recognize the upper body gestures with high accuracy in real time. In addition, for robot motion generation and control, a novel online motion planning method is proposed. In order to generate appropriate dynamic motion, a quadratic programming (QP)-based dual-arm kinematic motion generation scheme is proposed, and a simplified recurrent neural network is employed to solve the QP problem. The integration of a handshake within the HRI system illustrates the effectiveness of the proposed online generation method.
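For context, the abstract does not detail the QP formulation; the following is a minimal sketch of the standard resolved-rate kinematic control such schemes build on, using damped least squares plus velocity clipping as a stand-in for the paper's recurrent-neural-network QP solver (the Jacobian, damping, and limits are illustrative).

```python
import numpy as np

# Illustrative sketch of resolved-rate arm control posed as a QP
# (the paper solves its QP with a recurrent neural network; here we
# use damped least squares plus clipping as a simple stand-in).

def dls_step(J, v_desired, damping=0.05, qdot_max=1.0):
    """One kinematic step: find joint velocities qdot so that
    J @ qdot ~= v_desired, i.e. argmin ||J qdot - v||^2 + damping^2 ||qdot||^2.
    Joint-velocity limits are enforced by clipping (a simplification
    of the QP's box constraints)."""
    n = J.shape[1]
    qdot = np.linalg.solve(J.T @ J + (damping**2) * np.eye(n), J.T @ v_desired)
    return np.clip(qdot, -qdot_max, qdot_max)

# Toy example: a 2-DOF planar arm Jacobian at some configuration.
J = np.array([[-0.5, -0.2],
              [ 0.8,  0.3]])
v = np.array([0.1, 0.0])   # desired end-effector velocity (m/s)
print(dls_step(J, v))
```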

64 citations


Journal ArticleDOI
TL;DR: It is shown that it is possible to induce the full-body ownership illusion over a remote robotic body with a highly robotic appearance, and that both methods are tractable for immersive control of a humanoid robot in a social telepresence setting.
Abstract: Recent advances in humanoid robot technologies have made it possible to inhabit a humanlike form located at a remote place. This allows the participant to interact with others in that space and experience the illusion that the participant is actually present in the remote space. Moreover, with these humanlike forms, it may be possible to induce a full-body ownership illusion, where the robot body is perceived to be one's own. We show that it is possible to induce the full-body ownership illusion over a remote robotic body with a highly robotic appearance. Additionally, our results indicate that even with nonmanual control of a remote robotic body, it is possible to induce feelings of agency and illusions of body ownership. Two established control methods, a steady-state visually evoked potential (SSVEP)-based brain-computer interface (BCI) and eye tracking, were tested as a means of controlling the robot's gesturing. Our experience and the results indicate that both methods are tractable for immersive control of a humanoid robot in a social telepresence setting.
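As background on the first control method, SSVEP-based BCIs typically identify which flickering target a user attends to by comparing EEG spectral power at the candidate flicker frequencies. The sketch below illustrates that generic detection step; it is not the authors' pipeline, and the signal parameters are illustrative.

```python
import numpy as np

# Minimal SSVEP detection sketch: pick the flicker frequency whose
# spectral power (including harmonics) is highest in the EEG.

def ssvep_classify(eeg, fs, target_freqs, harmonics=2):
    """eeg: 1-D signal from an occipital channel; fs: sample rate (Hz).
    Returns the index of the attended target frequency."""
    spectrum = np.abs(np.fft.rfft(eeg))**2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0/fs)
    scores = []
    for f0 in target_freqs:
        s = 0.0
        for h in range(1, harmonics + 1):
            s += spectrum[np.argmin(np.abs(freqs - h * f0))]
        scores.append(s)
    return int(np.argmax(scores))

# Toy usage: a noisy 12-Hz oscillation should select the 12-Hz target.
fs, t = 256, np.arange(0, 4, 1/256)
eeg = np.sin(2*np.pi*12*t) + 0.5*np.random.randn(t.size)
print(ssvep_classify(eeg, fs, target_freqs=[10, 12, 15]))  # -> 1
```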

54 citations


Journal ArticleDOI
TL;DR: This paper presents a framework to interactively control avatars in remote environments, and demonstrates effectiveness by describing an instantiation of AMITIES, called TeachLivE, that is widely used by colleges of education to prepare new teachers and provide continuing professional development to existing teachers.
Abstract: This paper presents a framework to interactively control avatars in remote environments. The system, called AMITIES, serves as the central component that connects people controlling avatars (inhabiters), various manifestations of these avatars (surrogates), and people interacting with these avatars (participants). A multiserver-client architecture, based on a low-demand network protocol, connects the participant environment(s), the inhabiter station(s), and the avatars. A human-in-the-loop metaphor provides an interface for remote operation, with support for multiple inhabiters, multiple avatars, and multiple participants. Custom animation blending routines and a gesture-based interface provide inhabiters with an intuitive avatar control paradigm. This gesture control is enhanced by genres of program-controlled behaviors that can be triggered by events or inhabiter choices for individual or groups of avatars. This mixed (agency and gesture-based) control paradigm reduces the cognitive and physical loads on the inhabiter while supporting natural bidirectional conversation between participants and the virtual characters or avatar counterparts, including ones with physical manifestations, for example, robotic surrogates. The associated system affords the delivery of personalized experiences that adapt to the actions and interactions of individual users, while staying true to each virtual character's personality and backstory. In addition to its avatar control paradigm, AMITIES provides processes for character and scenario development, testing, and refinement. It also has integrated capabilities for session recording and event tagging, along with automated tools for reflection and after-action review. We demonstrate effectiveness by describing an instantiation of AMITIES, called TeachLivE, that is widely used by colleges of education to prepare new teachers and provide continuing professional development to existing teachers. Finally, we show the system's flexibility by describing a number of other diverse applications, and presenting plans to enhance capabilities and application areas.

48 citations


Journal ArticleDOI
TL;DR: Control interface, therefore, matters greatly to the route by which cognitive processing of games takes place and how enjoyment is produced.
Abstract: In three experiments with U.S. undergraduates, effects of three levels of naturally mapped control interfaces were compared on a player's sense of presence, interactivity, realism, and enjoyment in video games. The three levels of naturally mapped control interfaces were: kinesic natural mapping (using the player's body as a game controller), incomplete tangible mapping (using a controller in a way similar to a real object), and realistic tangible mapping (using a controller or an object that directly relates to the real-life activity the game simulates). The results show that levels of interactivity, realism, spatial presence, and enjoyment were consistent across all conditions. However, when performing activities such as table tennis or lightsaber dueling with objects in-hand (incomplete tangible or realistic tangible conditions), perceived reality was a more important predictor of spatial presence. When performing the same activities with empty hands, interactivity emerged as the more important direct predictor of spatial presence. Control interface, therefore, matters greatly to the route by which cognitive processing of games takes place and how enjoyment is produced.

39 citations


Journal ArticleDOI
TL;DR: This study investigates how the sense of embodiment in virtual environments can be enhanced by multisensory feedback related to body movements and suggests that vestibular and proprioceptive feedback can improve the participant's sense of embodiment in the virtual experience.
Abstract: This study investigates how the sense of embodiment in virtual environments can be enhanced by multisensory feedback related to body movements. In particular, we analyze the effect of combined vestibular and proprioceptive afferent signals on the perceived embodiment within an immersive walking scenario. These feedback signals were applied by means of a motion platform and by tendon vibration of lower limbs, evoking illusory leg movements. Vestibular and proprioceptive feedback were provided congruently with a rich virtual scenario reconstructing a real city, rendered on a head-mounted display (HMD). The sense of embodiment was evaluated through both self-reported questionnaires and physiological measurements in two experimental conditions: with all active sensory feedback (highly embodied condition), and with visual feedback only. Participants' self-reports show that the addition of both vestibular and proprioceptive feedback increases the sense of embodiment and the individual's feeling of presence associated with the walking experience. Furthermore, the embodiment condition significantly increased the measured galvanic skin response and respiration rate. The obtained results suggest that vestibular and proprioceptive feedback can improve the participant's sense of embodiment in the virtual experience.

37 citations


Journal ArticleDOI
TL;DR: Results showed that an induced angry state can degrade driver situation awareness as well as driving performance as compared to a neutral state, but the angry state did not have an impact on participants' subjective judgment or perceived workload.
Abstract: Research has suggested that emotional states have critical effects on various cognitive processes, which are important components of situation awareness (Endsley, 1995b). Evidence from driving studies has also emphasized the importance of driver situation awareness for performance and safety. However, to date, little research has investigated the relationship between emotional effects and driver situation awareness. In our experiment, 30 undergraduates drove in a simulator after induction of either anger or neutral affect. Results showed that an induced angry state can degrade driver situation awareness as well as driving performance as compared to a neutral state. However, the angry state did not have an impact on participants' subjective judgment or perceived workload, which might imply that the effects of anger occurred below their level of conscious awareness. One reason participants failed to compensate for their performance deficits might be that they were not aware of the severe impact of emotional effects on driving performance.

31 citations


Journal ArticleDOI
TL;DR: The challenges in autonomous multi-party interaction among virtual characters, human-like robots, and real participants are discussed, and a prototype system to study these challenges is described.
Abstract: 3D virtual humans and physical human-like robots can be used to interact with people in a remote location in order to increase the feeling of presence. In a telepresence setup, their behaviors are driven by real participants. We envision that in the absence of the real users, when they have to leave or they do not want to do a repetitive task, the control of the robots can be handed to an artificial intelligence component to sustain the ongoing interaction. At the point when human-mediated interaction is required again, control can be returned to the real users. One of the main challenges in telepresence research is the adaptation of 3D position and orientation of the remote participants to the actual physical environment to have appropriate eye contact and gesture awareness in a group conversation. In case the human behind the robot and/or virtual human leaves, multi-party interaction should be handed to an artificial intelligence component. In this paper, we discuss the challenges in autonomous multi-party interaction among virtual characters, human-like robots, and real participants, and describe a prototype system to study these challenges.

26 citations


Journal ArticleDOI
TL;DR: It is shown that multiple iterations of masked objects within a trial, as well as the speeding of selection choices, can substantially reinforce the impact of subliminal cues, consistent with previous findings suggesting that the effect of subliminal stimuli fades rapidly.
Abstract: The performance of current graphics engines makes it possible to incorporate subliminal cues within virtual environments (VEs), providing an additional way of communication, fully integrated with the exploration of a virtual scene. In order to advance the application of subliminal information in this area, it is necessary to explore in the psychological literature how techniques previously reported as rendering information subliminal can be successfully implemented in VEs. Previous literature has also described the effects of subliminal cues as quantitatively modest, which raises the issue of their inclusion in practical tasks. We used a 3D rendering engine (Unity3D) to implement a masking paradigm within the context of a realistic scene (a familiar kitchen environment). We report significant effects of subliminal cueing on the selection of objects in a virtual scene, demonstrating the feasibility of subliminal cueing in VEs. Furthermore, we show that multiple iterations of masked objects within a trial, as well as the speeding of selection choices, can substantially reinforce the impact of subliminal cues. This is consistent with previous findings suggesting that the effect of subliminal stimuli fades rapidly. We conclude by proposing, as part of further work, possible mechanisms for the inclusion of subliminal cueing in intelligent interfaces to maximize their effects.
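As background, the masking paradigm mentioned above renders a prime for only one or two display frames and immediately covers it with a mask. A minimal sketch of that frame-based timing logic follows; the durations are illustrative, not those used in the study.

```python
# Sketch of the timing logic behind a backward-masking paradigm
# (frame-based, as a rendering engine such as Unity3D would schedule it;
# durations are illustrative assumptions).

REFRESH_HZ = 60                      # display refresh rate
FRAME_MS = 1000 / REFRESH_HZ         # ~16.7 ms per frame

def masking_schedule(prime_frames=1, mask_frames=6):
    """Return (event, duration_ms) pairs: a briefly flashed prime
    immediately followed by a masking stimulus that blocks awareness."""
    return [("prime", prime_frames * FRAME_MS),
            ("mask",  mask_frames  * FRAME_MS)]

for event, ms in masking_schedule():
    print(f"{event}: {ms:.1f} ms")   # prime ~16.7 ms, mask ~100 ms
```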

23 citations


Journal ArticleDOI
TL;DR: In this study, real-time functional magnetic resonance imaging is used as an input device to identify a subject's intentions and convert them into actions performed by a humanoid robot.
Abstract: We present a robotic embodiment experiment based on real-time functional magnetic resonance imaging (rt-fMRI). In this study, fMRI is used as an input device to identify a subject's intentions and convert them into actions performed by a humanoid robot. The process, based on motor imagery, has allowed four subjects located in Israel to control a HOAP3 humanoid robot in France, in a relatively natural manner, experiencing the whole experiment through the eyes of the robot. Motor imagery or movement of the left hand, the right hand, or the legs were used to control the robotic motions of left, right, or walk forward, respectively.
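The control mapping the abstract describes (left hand, right hand, and legs mapped to turning left, turning right, and walking forward) can be pictured as a simple dispatch from decoder output to robot command. The sketch below assumes hypothetical class labels and a send_command hook, since the actual decoding pipeline is not given here.

```python
# Illustrative mapping from decoded motor-imagery classes to robot commands,
# mirroring the scheme in the abstract. The class labels and send_command
# function are assumptions for this sketch.

IMAGERY_TO_COMMAND = {
    "left_hand":  "TURN_LEFT",
    "right_hand": "TURN_RIGHT",
    "legs":       "WALK_FORWARD",
}

def dispatch(decoded_class, send_command=print):
    """Forward one rt-fMRI decoder output to the humanoid robot."""
    command = IMAGERY_TO_COMMAND.get(decoded_class)
    if command is not None:   # ignore rest/unknown states
        send_command(command)

dispatch("legs")              # -> WALK_FORWARD
```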

23 citations


Journal ArticleDOI
TL;DR: Four design choices that can be used to classify mid-air manipulation techniques are gathered and three manipulation techniques selected for studying the implications of the design choices are developed, adapted, and compared.
Abstract: Manipulation is one of the most important tasks required in virtual environments, and thus it has been thoroughly studied for widespread input devices such as mice or multi-touch screens. Nowadays, the Kinect sensor has turned mid-air interaction into another affordable and popular way of interacting. Mid-air interaction enables the possibility of interacting remotely, without any physical contact, and in a more natural manner. Nonetheless, although some scattered manipulation techniques have been proposed for mid-air interaction, there is a lack of evaluations and comparisons that hinders the selection and development of these techniques. To address this issue, we gathered four design choices that can be used to classify mid-air manipulation techniques. Namely, the choices are based on the required number of hands, the separation of translation and rotation, the decomposition of rotation, and the interaction metaphor. Furthermore, we developed, adapted, and compared three manipulation techniques selected for studying the implications of the design choices. These implications are useful for selecting among already existing techniques as well as for informing technique developers.
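The four design choices can be pictured as a small classification record per technique; the sketch below is illustrative, and the example technique entries are not from the paper.

```python
from dataclasses import dataclass

# Sketch of the four design choices used to classify mid-air manipulation
# techniques; the example entries are illustrative assumptions.

@dataclass
class MidAirTechnique:
    name: str
    hands: int                          # required number of hands (1 or 2)
    separates_translation_rotation: bool
    decomposes_rotation: bool           # rotation split into per-axis control?
    metaphor: str                       # e.g., "grasping", "handle-bar"

techniques = [
    MidAirTechnique("one-hand grasp", 1, False, False, "grasping"),
    MidAirTechnique("handle-bar",     2, True,  True,  "handle-bar"),
]
for t in techniques:
    print(t)
```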

22 citations


Journal ArticleDOI
TL;DR: This work analyzes the influence of four different physical modalities (vision, hearing, haptics, and olfaction) on the sense of presence on a virtual journey through the sea and the Laurissilva Forest of Funchal, Portugal.
Abstract: Outdoor virtual environments (OVEs) are becoming increasingly popular, as they allow a sense of presence in places that are inaccessible or protected from human intervention. These virtual environments (VEs) need to address physical modalities other than vision and hearing. We analyze the influence of four different physical modalities (vision, hearing, haptics, and olfaction) on the sense of presence on a virtual journey through the sea and the Laurissilva Forest of Funchal, Portugal. We applied Slater et al.'s (2010) method together with data gathered by the Emotiv EPOC EEG in an OVE setting. In such a setting, the combination of haptics and hearing is more important for place and plausibility illusions than the vision and hearing typical of virtual environments. Our analysis is particularly important for designers interested in crafting similar VEs because we classify different physical modalities according to their importance in enhancing presence.

Journal ArticleDOI
TL;DR: The results show that a virtual physician can conduct a very simple interview to evaluate EDS with very similar results to those obtained by a questionnaire administered by a real physician.
Abstract: Excessive daytime somnolence (EDS) is defined as the inability to stay awake in daily life activities. Several scales have been used to diagnose excessive daytime sleepiness, the most widely used being the Epworth Sleepiness Scale (ESS). Sleep disorders and EDS are very common in the general population. It is therefore important to be able to screen patients for this symptom in order to obtain an accurate diagnosis of sleep disorders. Embodied Conversational Agents (ECAs) have been used in the field of affective computing and human interactions, but up to now no software has been specifically designed to investigate sleep disorders. We created an ECA able to conduct an interview based on the ESS and compared it to an interview conducted by a sleep specialist. We recruited 32 consecutive patients and a group of 30 healthy volunteers free of any sleep complaints. The ESS is a self-administered questionnaire that asks the subject to rate, with a pen-and-paper paradigm, his or her probability of falling asleep. For the purpose of our study, the ECA or real-doctor questionnaire was modified as follows: instead of the "I" formulation, questions were asked as "Do you." Our software is based on a common 3D game engine and several commercial software libraries. It can run on standard and affordable hardware products. The sensitivity and specificity of the interview conducted by the ECA were measured. The best results (sensitivity and specificity >98%) were obtained in discriminating the sleepiest patients (ESS ≥ 16), but very good scores (sensitivity and specificity >80%) were also obtained for alert subjects (ESS < 8). ESS scores obtained in the interview conducted by the physician were significantly correlated with ESS scores obtained in the interview the ECA conducted. Most of the subjects had a positive perception of the virtual physician and considered the interview with the ECA to be a good experience. Sixty-five percent of the participants felt that the virtual doctor could significantly help real physicians. Our results show that a virtual physician can conduct a very simple interview to evaluate EDS with results very similar to those obtained by a questionnaire administered by a real physician. The expected massive increase in sleep complaints in the near future likely means that more and more physicians will be looking for computerized systems to help them diagnose their patients.
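For readers unfamiliar with the reported metrics, the sensitivity and specificity of the ECA interview can be computed against the physician-administered ESS at a given cutoff, as in the sketch below (the scores shown are fabricated purely to demonstrate the arithmetic).

```python
# Sketch of the sensitivity/specificity arithmetic behind the reported
# results, with made-up scores (not the study's data).

def sens_spec(eca_scores, physician_scores, cutoff):
    """Treat physician ESS >= cutoff as ground truth 'sleepy'."""
    tp = fp = tn = fn = 0
    for eca, ref in zip(eca_scores, physician_scores):
        truth, pred = ref >= cutoff, eca >= cutoff
        tp += truth and pred
        fn += truth and not pred
        fp += (not truth) and pred
        tn += (not truth) and not pred
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec([17, 5, 16, 7], [16, 4, 18, 9], cutoff=16)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```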

Journal ArticleDOI
TL;DR: A study involving participants who were paired up for a collaborative assessment task, interacting via voice only, videoconference, or a visual representation of the physiological measurements shows that the visual representation significantly increases the positive affect score during remote collaboration.
Abstract: Empathic communication allows individuals to perceive and understand the feelings and emotions of the person with whom they are interacting. This could be particularly important during remote collaboration, such as remote assistance or distance learning, to enhance the social and emotional understanding of geographically distributed partners. However, supporting awareness in remote collaboration is very challenging, especially when the interaction with the remote parties conveys less information than a physical interaction. We explore the effect of visualization using physiological cues that allow users to interpret the emotional behaviors of remote parties with whom they are interacting in real time. The proposed visual representation allows users to infer emotional patterns from physiological cues that can potentially influence their communication approach toward a more aggressive style or maintain passive and peaceful interaction. We conducted a study involving participants who were paired up for a collaborative assessment task, interacting via voice only, videoconference, or a visual representation of the physiological measurements. Participants perceived the usage of our visual representation with higher group cohesiveness than voice-only interaction. Further analysis shows that the visual representation significantly increases the positive affect score (i.e., participants are perceived to be more alert and demonstrate less distress) during remote collaboration. We discuss the possibilities of the proposed visual representation to support empathic communication during remote collaboration, and the benefits to the remote partners of having positive affect and group cohesiveness.

Journal ArticleDOI
TL;DR: The impact of subliminal cues in an immersive navigation task using the so-called eXperience Induction Machine (XIM), a human-accessible mixed-reality system, indicates that a subliminal channel of interaction exists between the user and the XIM, and is relevant to understanding the bandwidth of communication that can be established between humans and their physical and social environment.
Abstract: Subliminal stimuli can affect perception, decision-making, and action without being accessible to conscious awareness. Most evidence supporting this notion has been obtained in highly controlled laboratory conditions. Hence, its generalization to more realistic and ecologically valid contexts is unclear. Here, we investigate the impact of subliminal cues in an immersive navigation task using the so-called eXperience Induction Machine (XIM), a human-accessible mixed-reality system. Subjects were asked to navigate through a maze at high speed. At irregular intervals, one group of subjects was exposed to subliminal aversive stimuli using the masking paradigm. We hypothesized that these stimuli would bias decision-making. Indeed, our results confirm this hypothesis and indicate that a subliminal channel of interaction exists between the user and the XIM. These results are relevant to our understanding of the bandwidth of communication that can be established between humans and their physical and social environment, thus opening up new and powerful methods to interface humans and artefacts.

Journal ArticleDOI
TL;DR: Findings suggest that the multisensory feedback associated with a subject's own actions and the physical plausibility of the environment both act as determinant factors, influencing and modulating the vividness of the VHI.
Abstract: It is well known from the virtual hand illusion (VHI) that simultaneous and synchronous visuotactile sensory feedback within a virtual environment elicits the feeling of ownership of a virtual hand: observing, for some seconds, a virtual hand in a scene being touched while at the same time receiving tactile stimulation on the real hand at the corresponding positions. In this paper, we investigate possible modulations in the feeling of ownership (the sensation of owning a virtual hand) and of agency (the sensation of owning virtual movements and actions) according to whether or not the participant's own motor acts (1) induce coherent self-activated visuotactile sensory stimulation; and (2) generate plausible consequences in the simulated environment. For this purpose, we elicited the VHI within a group of participants through a cross-modal integration of visuotactile sensory stimulations within a dynamic and physically plausible immersive virtual environment, where they were able to perform natural tasks in both passive and active agency conditions. Our results indicate that both feelings of ownership and agency can be achieved in immersive virtual environments when the subject is realistically interacting and performing natural upper limb movements. We did not observe any significant difference in the VHI in terms of ownership and agency between the active and passive conditions, but we observed that a physically incongruent simulated interaction with the virtual world can lead to a significant disruption of ownership. Moreover, in the passive agency condition, a plausible physical behavior of the virtual hand was sufficient to elicit a partially complete sense of ownership, as measured in terms of proprioceptive drift, even in the presence of asynchronous visuotactile sensory feedback. All these findings suggest that the multisensory feedback associated with a subject's own actions and the physical plausibility of the environment both act as determinant factors, influencing and modulating the vividness of the VHI.

Journal ArticleDOI
TL;DR: Responses to display format were moderated by frequency of game play, with stereoscopic 3D presentation eliciting reduced presence and increased arousal among weekly game players, but the reverse pattern among non-weekly game players.
Abstract: Recent advances in commercial gaming technology include stereoscopic 3D presentation. This experiment employed a mixed factorial design to explore the effects of game display format (2D; 3D), frequency of game play (weekly; non-weekly), and participant gender (male; female) on feelings of presence and arousal among participants playing a handheld racing video game. Responses to display format were moderated by frequency of game play, with stereoscopic 3D presentation eliciting reduced presence and increased arousal among weekly game players, but the reverse pattern among non-weekly game players. Theoretical and practical implications of the moderating role of game play frequency in effects of 3D presentation are discussed.

Journal ArticleDOI
TL;DR: This study experimentally created special interaction situations to examine the perceptual conflicts generated by the dual-presence of the real and virtual visual and audio stimuli and studied the influence of such perceptual conflicts on participants' choice of collaborator and on their task efficiency.
Abstract: With multi-stereoscopy technology, novel projection-based immersive systems can now support multiple users by providing each one with an independent stereoscopic view of the virtual scene. When users work face-to-face, they may have an incorrect view if objects are located between them. In this case, avatars can be introduced to enable face-to-face interaction in the virtual world, whereas the users are side-by-side in the real device. As a consequence, such multi-user systems provide the users with a new kind of perceptual immersion and related cognitive experiences, because users must handle information from the real world (i.e., other users' bodies) and from the virtual scene (i.e., other users' avatars) at the same time. In this study, we experimentally created special interaction situations to examine the perceptual conflicts generated by the dual presence of real and virtual visual and audio stimuli. In a two-user scenario, participants performed an object-picking task according to three types of instructions (verbal, gestural, or multimodal) given by an experimenter. This co-located experimenter was also virtually present, via an avatar, in the virtual world to enable face-to-face interactions with the participants. Our goal was to observe to what extent the perceptual conflicts induced by the dual presence of the experimenter can be integrated without significantly altering the performance of the participants. For that, we studied the influence of such perceptual conflicts on participants' choice of collaborator (whether they interacted with the avatar or the real experimenter) and on their task efficiency. The results showed that, first, users had an a priori choice of collaborator (avatar or real person), and this choice did not change under different experimental conditions. Second, perceptual conflicts had an impact on users' performance in terms of task completion time. We discuss the implications of these results for designing a better immersive system for co-located collaboration between multiple users.

Journal ArticleDOI
TL;DR: The results show that reasonably physically similar avatars can be expected to be perceived as similar by participants, and are perceived to be similar to the self, rated at 7.5/10.
Abstract: An experiment was carried out to examine the extent to which an avatar can be perceived by people as similar to themselves, including their face and body. The avatar was judged by the participants themselves rather than by third parties. The experiment was organized in two phases. The initial phase consisted of a forced-choice, paired comparison method used to create a ranking of 10 virtual faces in order of preference. This set of faces included a facial mesh, created by a custom software pipeline to rapidly generate avatars that resembled the experimental participants. Six more faces, derived from the participants' own face, were also shown in order to gain insight into the acceptance of a variety of facial similarities. In the second phase, full-body avatars with the most and least preferred faces were presented along with the direct pipeline output. Participants rated their level of satisfaction with those avatars as virtual self-representations and provided the level of perceived resemblance to themselves. The results show that our avatars are perceived to be similar to the self, rated at 7.5/10. Those avatars with faces derived from the participants' face mixed with an ethnically similar face were also rated with high scores. These results differ significantly from how arbitrary avatars are perceived. Therefore, reasonably physically similar avatars can also be expected to be perceived as similar by participants.
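As background, a forced-choice paired comparison presents every pair of faces and ranks each face by how often it is preferred. The sketch below shows that ranking step, with a toy preference function standing in for a participant's judgments.

```python
# Sketch of the forced-choice paired-comparison ranking used in phase one:
# every pair of faces is judged, and faces are ranked by win count.
from itertools import combinations
from collections import Counter

def rank_by_preference(items, prefer):
    """prefer(a, b) returns the preferred item of each pair (here a toy
    judge standing in for the participant)."""
    wins = Counter({item: 0 for item in items})
    for a, b in combinations(items, 2):
        wins[prefer(a, b)] += 1
    return [item for item, _ in wins.most_common()]

faces = ["face_A", "face_B", "face_C", "face_D"]
print(rank_by_preference(faces, prefer=lambda a, b: min(a, b)))
```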

Journal ArticleDOI
TL;DR: The findings expanded current research on virtual social influence by considering the effects of the clothing color of virtual characters, along with how cognitive load and avatar appearance can modify perceived avatar trustworthiness when combined.
Abstract: This study investigated how avatar appearance and cognitive load affect virtual interactions. Avatar salespeople dressed in black were perceived as unpersuasive and untrustworthy, and were offered less money compared to avatars in white clothes. Moreover, participants stood closer to avatars in white clothes compared to avatars dressed in black. Contrary to the traditional prediction (i.e., that cognitively busy participants would trust avatars in white clothes the most but avatars in dark clothes the least), cognitively nonbusy participants expressed less trust toward avatar salespeople dressed in black instead of white clothes, while cognitively busy participants trusted both characters equally. The findings expand current research on virtual social influence by considering the effects of the clothing color of virtual characters, along with how cognitive load and avatar appearance, when combined, can modify perceived avatar trustworthiness.

Journal ArticleDOI
TL;DR: This experiment exposes the benefits of conveying microexpressions in computer graphics characters, as they may visually enhance a character's emotional depth through subliminal microexpression cues, and consequently increase the perceived social complexity and believability.
Abstract: Due to varied personal, social, or even cultural situations, people sometimes conceal or mask their true emotions. These suppressed emotions can be expressed in a very subtle way by brief movements called microexpressions. We investigate human subjects' perception of hidden emotions in virtual faces, inspired by recent psychological experiments. We created animations with virtual faces showing some facial expressions and inserted brief secondary expressions in some sequences, in order to try to convey a subtle second emotion in the character. Our evaluation methodology consists of two sets of experiments, with three different sets of questions. The first experiment verifies that the accuracy and concordance of the participants' responses with synthetic faces match the empirical results obtained with photos of real people in the paper by X.-b. Shen, Q. Wu, and X.-l. Fu, 2012, "Effects of the duration of expressions on the recognition of microexpressions," Journal of Zhejiang University Science B, 13(3), 221-230. The second experiment verifies whether participants could perceive and identify primary and secondary emotions in virtual faces. The third experiment evaluates the participants' perception of realism, deceit, and valence of the emotions. Our results show that most of the participants recognized the foreground (macro) emotion and most of the time they perceived the presence of the second (micro) emotion in the animations, although they did not identify it correctly in some samples. This experiment exposes the benefits of conveying microexpressions in computer graphics characters, as they may visually enhance a character's emotional depth through subliminal microexpression cues, and consequently increase the perceived social complexity and believability.

Journal ArticleDOI
TL;DR: A framework enabling navigational autonomy for a mobile platform with application scenarios specifically requiring a humanoid telepresence system is presented, promising a reduced operator workload and safety during robot motion and enabling the inhabitor to provide inputs for head and arm gesticulation.
Abstract: This paper presents a framework enabling navigational autonomy for a mobile platform, with application scenarios specifically requiring a humanoid telepresence system. The proposed framework promises reduced operator workload and safety during robot motion. In addition, the framework enables the inhabitor (the human controlling the platform) to provide inputs for head and arm gesticulation. This allows the inhabitor to focus on interactions in the remote environment, rather than being engrossed in controlling robot navigation. This paper discusses the development of higher-level, human-like navigational behaviors such as following, accompanying, and guiding a person autonomously. A color histogram comparison and position matching algorithm has been developed to track the person using the Kinect sensors. In addition to providing a safe and easy-to-use system, the high-level behaviors are also required to be human-like, in that the mobile platform obeys the laws of proxemics and other human interaction norms such as walking speed. This facilitates a better experience for other humans interacting with the robotic platform. An obstacle avoidance function has also been implemented using the virtual potential field method. A preliminary evaluation was conducted to validate the algorithm and to support the claim of reducing the operator's cognitive load due to navigation. In general, it was shown that navigation over a given route was accomplished at a faster pace, with no instances of collision with the environment.
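The color histogram comparison step can be pictured as matching a candidate detection's hue distribution against a stored reference for the followed person. The sketch below uses histogram correlation as the similarity measure, which is an assumption, since the abstract does not name the paper's exact metric.

```python
import numpy as np

# Sketch of the color-histogram matching step in person tracking:
# a candidate detection is accepted as the followed person when its
# hue histogram correlates strongly with a stored reference.

def color_histogram(pixels_hue, bins=16):
    hist, _ = np.histogram(pixels_hue, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def same_person(ref_hist, cand_hist, threshold=0.8):
    corr = np.corrcoef(ref_hist, cand_hist)[0, 1]
    return corr > threshold

ref  = color_histogram(np.random.default_rng(0).uniform(20, 40, 500))
cand = color_histogram(np.random.default_rng(1).uniform(20, 40, 500))
print(same_person(ref, cand))   # similar hue distributions -> True
```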

Journal ArticleDOI
TL;DR: The development of a CNVSS implementing a hybrid client-server architecture is described, along with two statistical designs of experiments (DOEs), a fractional factorial DOE and a central composite DOE, used to determine the most influential factors and how these factors affect collaboration in a CNVSS.
Abstract: Currently, surgical skills teaching in medical schools and hospitals is changing, requiring the development of new tools to focus on (i) the importance of the mentor's role, (ii) teamwork skills training, and (iii) remote training support. Collaborative Networked Virtual Surgical Simulators (CNVSSs) allow collaborative training of surgical procedures where remotely located users with different surgical roles can take part in the training session. To provide successful training involving good collaborative performance, a CNVSS should guarantee synchronicity in time of the surgical scene viewed by each user and a quick response time, both of which are affected by factors such as users' machine capabilities and network conditions. To the best of our knowledge, the impact of these factors on the performance of CNVSSs implementing a hybrid client-server architecture has not been evaluated. In this paper, the development of a CNVSS implementing a hybrid client-server architecture is described, along with two statistical designs of experiments (DOEs), (i) a fractional factorial DOE and (ii) a central composite DOE, used to determine the most influential factors and how these factors affect collaboration in a CNVSS. From the results obtained, it was concluded that packet loss, bandwidth, and delay have a larger effect on the consistency of the shared virtual environment, whereas bandwidth, server machine capabilities, delay, and the interaction between the factors bandwidth and packet loss have a larger effect on the time difference and number of errors of the collaborative task.
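As background on the experimental design, a fractional factorial DOE samples a carefully chosen subset of factor-level combinations instead of the full grid. The sketch below builds a 2^(4-1) design over the kinds of factors the paper studies, with the fourth factor aliased to the three-way interaction (the factor list and coding are illustrative).

```python
from itertools import product

# Sketch of a 2^(4-1) fractional factorial design: 8 runs instead of the
# full 16, with the fourth factor aliased via the defining relation D = ABC.
# Factor names and the -1/+1 level coding are illustrative.

factors = ["bandwidth", "delay", "packet_loss", "server_capability"]

runs = []
for a, b, c in product((-1, 1), repeat=3):
    d = a * b * c                 # defining relation D = ABC
    runs.append(dict(zip(factors, (a, b, c, d))))

for run in runs:
    print(run)
```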

Journal ArticleDOI
TL;DR: Results showed that the technological features of stereoscopic 3D cannot predict enjoyment, however, the feeling of presence, the appeal of the special effects, and fanship are predictors of enjoyment.
Abstract: As 3D movie screenings have recently seen an increase in popularity, it would appear that 3D is finally ready to stand the test of time. To examine the effect of 3D on the experience of enjoyment, we refer to the model of entertainment by Vorderer, Klimmt, and Ritterfeld (2004), according to which both technological and personal prerequisites can induce enjoyment. The model was further adapted for the cinema context by including the appeal of special effects, fanship, age, and gender. To ascertain the impact of the suggested prerequisites, we conducted a field study comparing the enjoyment experiences of 2D and 3D audiences watching the same fantasy movies in a between-subjects design (N = 289). Results showed that the technological features of stereoscopic 3D cannot predict enjoyment. However, the feeling of presence, the appeal of the special effects, and fanship are predictors of enjoyment.

Journal ArticleDOI
TL;DR: The results suggest that subliminal approaches are indeed feasible to provide drivers with added driving support without dissipating attention resources and firmly believe that such interfaces are valuable since they may eventually prevent accidents, save lives, and even reduce fuel costs and CO2 emissions for some drivers.
Abstract: In the long history of subliminal messages and perception, many contradictory results have been presented. One group of researchers suggests that subliminal interaction techniques improve human-computer interaction by reducing sensory workload, whereas others have found that subliminal perception does not work. In this paper, we want to challenge this prejudice by first defining a terminology and introducing a theoretical taxonomy of mental processing states, then reviewing and discussing the potential of subliminal approaches for different sensory channels, and finally recapitulating the findings from our studies on subliminally triggered behavior change. Our objective is to mitigate driving problems caused by excessive information. Therefore, this work focuses on subliminal techniques applied to driver-vehicle interaction to induce a nonconscious change in driver behavior. Based on a survey of related work which identified the potential of subliminal cues in driving, we conducted two user studies assessing their applicability in real-world situations. The first study evaluated whether subtle subliminal vibrations could promote economical driving, and the second exposed drivers to very briefly flashed visual stimuli to assess their potential to improve steering behavior. Our results suggest that subliminal approaches are indeed feasible to provide drivers with added driving support without dissipating attention resources. Despite the lack of general evidence for uniform effectiveness of such interfaces in all driving circumstances, we firmly believe that such interfaces are valuable since they may eventually prevent accidents, save lives, and even reduce fuel costs and CO2 emissions for some drivers. For all these reasons, we are confident that subliminally driven interfaces will find their way into cars of the near future.

Journal ArticleDOI
TL;DR: A perception-based traffic control scheme to reduce the number of object- state update packets by allowing a variable but not perceivable object-state error at the client, which outperforms well-known dead reckoning, commonly used in visual-only distributed applications.
Abstract: Shared Haptic Virtual Environments (SHVEs) are often realized using a client-server communication architecture. In this case, a centralized physics engine, running on the server, is used to simulate the object-states in the virtual environment (VE). At the clients, a copy of the VE is maintained and used to render the interaction forces locally, which are then displayed to the human through a haptic device. While this architecture ensures stability in the coupling between the haptic device and the virtual environment, it necessitates a high number of object-state update packets transmitted from the server to the clients to achieve satisfactory force feedback quality. In this paper, we propose a perception-based traffic control scheme to reduce the number of object-state update packets by allowing a variable but not perceivable object-state error at the client. To find a balance between packet rate reduction and force rendering fidelity, our approach uses different error thresholds for the visual and haptic modality, where the haptic thresholds are determined by psychophysical experiments in this paper. Force feedback quality is evaluated with subjective tests for a variety of different traffic control parameter settings. The results show that the proposed scheme reduces the packet rate by up to 97%, compared to communication approaches that work without data reduction. At the same time, the proposed scheme does not degrade the haptic feedback quality significantly. Finally, it outperforms well-known dead reckoning, commonly used in visual-only distributed applications.
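The core idea of the perception-based traffic control scheme, transmitting an object-state update only when the deviation from the last sent state exceeds a perceptual threshold, can be sketched as a deadband sender. The threshold values below are illustrative; the paper derives its haptic thresholds psychophysically.

```python
import numpy as np

# Sketch of perception-based update throttling: the server transmits a new
# object state only when its deviation from the last sent state exceeds a
# modality-specific perceptual threshold (values here are illustrative).

class DeadbandSender:
    def __init__(self, haptic_eps=0.05, visual_eps=0.01):
        self.eps = {"haptic": haptic_eps, "visual": visual_eps}
        self.last_sent = None

    def maybe_send(self, state, modality="haptic"):
        """Return the state to transmit, or None to skip this packet."""
        if (self.last_sent is None or
                np.linalg.norm(state - self.last_sent) > self.eps[modality]):
            self.last_sent = state.copy()
            return state
        return None

sender = DeadbandSender()
for s in [np.array([0.0]), np.array([0.02]), np.array([0.1])]:
    print(sender.maybe_send(s))    # sends 1st and 3rd, drops the 2nd
```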

Journal ArticleDOI
TL;DR: Users of this adaptive interface demonstrated better performance than users of a baseline interface on several movement metrics, indicating that the adaptive interface helped users manage the demands of concurrent spatial tasks in a virtual environment.
Abstract: Many interfaces exist for locomotion in virtual reality, although they are rarely considered fully natural. Past research has found that using such interfaces places cognitive demands on the user, with unnatural actions and concurrent tasks competing for finite cognitive resources. Notably, using semi-natural interfaces leads to poor performance on concurrent tasks requiring spatial working memory. This paper presents an adaptive system designed to track a user's concurrent cognitive task load and adjust interface parameters accordingly, varying the extent to which movement is fully natural. A fuzzy inference system is described and the results of an initial validation study are presented. Users of this adaptive interface demonstrated better performance than users of a baseline interface on several movement metrics, indicating that the adaptive interface helped users manage the demands of concurrent spatial tasks in a virtual environment. However, participants experienced some unexpected difficulties when faced with a concurrent verbal task.
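A fuzzy inference system of the kind described maps an estimated cognitive-load level through fuzzy memberships to an interface parameter. The sketch below is a minimal illustration with made-up membership functions and a single "naturalness" output; it is not the paper's actual rule base.

```python
# Minimal fuzzy-inference sketch of the adaptation idea: estimated cognitive
# load is fuzzified into "low"/"high" memberships, which blend how natural
# the locomotion interface behaves (memberships and outputs are assumptions).

def mu_low(load):                  # triangular membership, load in [0, 1]
    return max(0.0, 1.0 - 2.0 * load)

def mu_high(load):
    return max(0.0, 2.0 * load - 1.0)

def naturalness(load):
    """Defuzzify by weighted average: under this illustrative rule base,
    high load -> more natural movement (freeing spatial working memory),
    low load -> more amplified/virtual movement."""
    w_low, w_high = mu_low(load), mu_high(load)
    if w_low + w_high == 0:        # load falls between both memberships
        return 0.5
    return (w_low * 0.2 + w_high * 1.0) / (w_low + w_high)

for load in (0.1, 0.5, 0.9):
    print(load, round(naturalness(load), 2))
```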

Journal ArticleDOI
TL;DR: This issue features a collection of current, state-of-the-art research results discussing the effects of subliminal stimuli on behavior in virtual/augmented reality, teleoperation, and automotive contexts.
Abstract: The term "subliminal perception" has been around for many years and is generally understood as perception that can occur without conscious awareness. The idea that subliminal perception provokes a significant impact on thoughts or behaviors strikes many people as counterintuitive. Recent findings support the notion of perception without conscious awareness, and oppose the intuitive notion that consciousness is necessary for perception (Ramsoy & Overgaard, 2004). The discussion of whether subliminal perception actually changes thoughts or behavior traces back to the question of whether a stimulus is perceived even when there is no awareness of it. The rapid advance of new technologies, such as EEG and fMRI, provides a way to directly measure the effects of subliminal stimuli, which has led to a revival of studies on the link between subliminal stimuli and changes in behavior, thought, and emotion. The central objective of this special issue is to provoke an active debate on the impact, role, and adequacy of using information below threshold in virtual environments, teleoperation, or augmented reality. The papers in this issue deal with the concept of subliminal perception, its basic characteristics, and appropriate research methodologies using neural, physiological, cognitive, or behavioral responses to subliminal cues in virtual/augmented reality, teleoperation, and automotive contexts. It features a collection of current, state-of-the-art research results discussing the effects of subliminal stimuli on behavior.

Journal ArticleDOI
TL;DR: This special section of Presence devoted to robots, virtual reality, and brain–computer interfaces in telepresence presents three papers that address these two themes, separately or in conjunction.
Abstract: Some recent science fiction movies, most notably James Cameron's Avatar (2009) or Jonathan Mostow's Surrogates (2009), portray what could possibly be regarded as the ultimate telepresence system, whereby people are remotely embodied in surrogate robotic bodies. The users of these systems control these representations as naturally as they control their own bodies, and experience the world through sensors mounted on these remote representations. At the same time that Hollywood was producing such fantasy movies (and with a fraction of the budget!), the scientific and technological communities have been making impressive progress in realizing such scenarios in practice, with the goals of helping disabled people and advancing the possibilities of telepresence. Such tele-embodiment scenarios pose two major challenges, each of which is both scientific and technological. The first is the science and technology of embodiment. The scientific question is: how does our brain represent our own body? Research in the last few years has shown that this representation is rich yet flexible; for example, there has been a wealth of neuroscientific studies on the rubber arm illusion (Botvinick & Cohen, 1998; Ehrsson, Spence, & Passingham, 2004) and many variations were reconstructed in virtual reality (Banakou, Groten, & Slater, 2013; Kilteni, Normand, Sanchez-Vives, & Slater, 2012; Slater, Perez-Marcos, Ehrsson, & Sanchez-Vives, 2008). Moreover, there has been research extending this arm illusion to the illusion of controlling a whole remote body (Ehrsson, 2007; Slater, Marcos, Ehrsson, & Sanchez, 2009). The second breakthrough required is in brain-computer interfaces (BCIs): the science and technology of controlling remote devices "by thought." Early work with BCIs focused largely on fundamental issues such as developing and validating simple BCI spellers in field settings, especially with patients (Wolpaw, Birbaumer, McFarland, Pfurtscheller, & Vaughan, 2002). However, the last several years have seen huge improvements in BCI performance and flexibility, including an emerging interest in combining BCIs with virtual reality (VR) and robotics (Allison et al., 2012; Pfurtscheller et al., 2006; J. Wolpaw & E. W. Wolpaw, 2012) and even controlling avatar and humanoid robotic representations using BCI (Bell, Shenoy, Chalodhorn, & Rao, 2008; Cohen, Koppel, Malach, & Friedman, 2014; Friedman et al., 2007; Kapeller et al., 2013). These developments have created a myriad of new research questions. Which types of brain imaging methods can be used for such BCI control? Which mental activities can people use to direct an avatar or robot with a BCI? How can these mental activities best be mapped to control different actions, such as movements or gestures? Do users experience a strong sense of immersion, presence, and/or ownership of a surrogate (virtual or robotic) body? How does BCI performance when controlling such surrogate bodies compare with other control methods? This special section of Presence devoted to robots, virtual reality, and brain-computer interfaces in telepresence presents three papers that address these two themes, separately or in conjunction. Notably, these articles assessed both objective measures, such as task completion time, and subjective measures (via questionnaires) that evaluated subjects' feelings of ownership, embodiment, presence, or control with these novel interaction environments.
Subjects were generally able to effect real-time control, and felt a strong sense of immersion in different tasks, even when controlling a surrogate body in another country. Indeed, these three articles also show the breadth of this emerging research area. Authors of these three articles come from Austria, Italy, Israel, Spain, France, and Japan. The authors include top experts from commercial and academic sectors and from different disciplines, including neurobiology, VR programming,