
Showing papers in "Presence: Teleoperators & Virtual Environments in 2011"



Journal ArticleDOI
TL;DR: The results demonstrate that Astrojumper effectively motivates both children and adults to exercise through immersive virtual reality technology and a simple, yet engaging, game design.
Abstract: We present the design and evaluation of Astrojumper, an immersive virtual reality exergame developed to motivate players to engage in rigorous, full-body exercise. We performed a user study with 30 people between the ages of 6 and 50 who played the game for 15 min. Regardless of differences in age, gender, activity level, and video game experience, participants rated Astrojumper extremely positively and experienced a significant increase in heart rate after gameplay. Additionally, we found that participants' ratings of perceived workout intensity positively correlated with their level of motivation. Overall, our results demonstrate that Astrojumper effectively motivates both children and adults to exercise through immersive virtual reality technology and a simple, yet engaging, game design.

128 citations


Journal ArticleDOI
TL;DR: The results showed that perceived video game realism is a predictor of spatial presence and enjoyment, and supported predictions that controller naturalness would influence perceived video game realism of graphics and sound.
Abstract: The introduction and popularity of the Nintendo Wii home console has brought attention to the natural mapping motion capturing controller. Using a sample that identified sports as their most frequently played video games, a mental models approach was used to test the impact that perceived controller naturalness (traditional controller vs. natural mapping motion capturing controller) had on perceptions of spatial presence, realism, and enjoyment. The results showed that perceived video game realism is a predictor of spatial presence and enjoyment. Furthermore, the results supported predictions that controller naturalness would influence perceived video game realism of graphics and sound. Future research should investigate whether or not these controllers lead to greater presence and enjoyment in different genres of games (e.g., first-person shooters). In addition, future research should consider whether or not these controllers have the ability to prime violent mental models.

94 citations


Journal ArticleDOI
TL;DR: The results indicate that, in the context of golf, racing, and boxing games, the higher technological interactivity of motion-based systems (particularly Kinect) increases feelings of spatial presence, perceived reality, and enjoyment.
Abstract: This study investigated the impact of new motion-based video game control systems on spatial presence, perceived reality, and enjoyment of video games. In two experiments, university students played video games on either new motion-based (Sony's Move, Microsoft's Kinect, and Nintendo's Wii), or standard video game systems (PS3 and XBOX 360 with gamepads). The results indicate that, in the context of golf, racing, and boxing games, the higher technological interactivity of motion-based systems (particularly Kinect) increases feelings of spatial presence, perceived reality, and enjoyment. Perceived reality predicted spatial presence; and spatial presence, in turn, was a significant predictor of enjoyment. Moving toward a more natural user interface (NUI) between the player and the game world can create a more immersive, realistic, and fun experience for the player. A new model for enjoyment of motion-based video games is proposed.

89 citations


Journal ArticleDOI
TL;DR: Testing the effects of sensory input (music and video feedback) during physical training on performance, enjoyment, and attentional focus by means of a computerized ergometer coupled with VR software suggests that gaze analysis is one promising way to access attention allocation and its relationships with performance.
Abstract: The present study aimed at testing the general assumption that virtual reality can enhance the experience of exercising. More specifically, we tested the effects of sensory input (music and video feedback) during physical training on performance, enjoyment, and attentional focus by means of a computerized ergometer coupled with VR software. Twelve university students participated in the study. The experimental procedure consisted of a 2 × 3 × 4 mixed design, with two types of feedback (video feedback vs. video feedback and music), three course phases (flat, uphill, and downhill) and four sessions (task repetition). The virtual feedback was a video of the course that participants had to complete. Video display speed was proportional to the participant's pedaling speed. Force feedback, applied to the real bicycle wheel, was proportional to the instantaneous course slope. The results showed a positive effect of task repetition on participants' performance only when video feedback was associated with listening to music. In an attempt to objectively assess attentional focus, we analyzed participants' gaze orientation. Gaze analysis showed a reduction in the time spent gazing at video feedback across sessions. Associating video feedback with freely chosen music led to a differential use of video feedback as a function of exercise intensity. Finally, sensory stimulation appeared to have a dissociative role on participants' attentional focus during exercise, but adding music listening to video feedback appears to be necessary to maintain (long term) the participants' commitment to the task. The results are discussed in terms of the functional status of sensory stimulation during exercise, and its interactions with exercise intensity, participants' performance, and attentional focus. They also suggest that gaze analysis is one promising way to access attention allocation and its relationships with performance.
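The ergometer–VR coupling described above can be sketched as two simple mappings; the function names, gains, and constants here are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of the ergometer-VR coupling: video speed tracks pedaling,
# wheel resistance tracks course slope. Gains are hypothetical.

def video_playback_speed(pedal_rpm, gain=0.05):
    """Video display speed proportional to the participant's pedaling speed."""
    return gain * pedal_rpm

def brake_force(slope, rider_weight_n=700.0, friction_n=10.0):
    """Resistive force applied to the real bicycle wheel, proportional to the
    instantaneous course slope (grade as a fraction, e.g., 0.05 for 5% uphill)."""
    return friction_n + rider_weight_n * slope
```

A flat phase thus yields only baseline friction, while an uphill phase adds a slope-proportional term, mirroring the design in the abstract.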

84 citations


Journal ArticleDOI
TL;DR: The results of this study show that appropriate emotions lead to higher perceived believability, the notion of believable is closely correlated with the two major socio-cognitive variables, namely competence and warmth, and considering an agent as believable can be different from having a human-like attitude toward it.
Abstract: The term "believability" is often used to describe expectations concerning virtual agents. In this paper, we analyze which factors influence the believability of the agent acting as the software assistant. We consider several factors such as embodiment, communicative behavior, and emotional capabilities. We conduct a perceptive study where we analyze the role of plausible and/or appropriate emotional displays in relation to believability. We also investigate how people judge the believability of the agent, and whether it provokes social reactions of humans toward it. Finally, we evaluate the respective impact of embodiment and emotion over believability judgments. The results of our study show that (a) appropriate emotions lead to higher perceived believability, (b) the notion of believability is closely correlated with the two major socio-cognitive variables, namely competence and warmth, and (c) considering an agent as believable can be different from having a human-like attitude toward it. Finally, a primacy of emotion behavior over embodiment while judging believability is also hypothesized from free responses given by the participants of this experiment.

69 citations


Journal ArticleDOI
TL;DR: The authors argue that it is important to make a close comparison of task behavior in VR with that outside of VR, but conclude with great expectations for the role of VR in perception-action research.
Abstract: Virtual reality (VR) holds great promise for the study of perception-action. The case of studying the outfielder problem is presented as an example of how VR has contributed to our understanding of perception-action, and of the potential and pitfalls of using VR in such a task. The outfielder problem refers to the situation in a baseball game (and analogous situations) in which an outfielder has to run to get to the right location at the right time to make a catch. Several experimental studies are discussed in which participants had to intercept real or virtual balls. The biggest added value of using VR is the fact that the virtual world is completely in the hands of the experimenter, which allows studying situations that do not exist outside of VR, thus enabling strong hypothesis testing. A number of factors related to the success of the VR experiments are identified, such as the lack of haptic feedback in VR setups used in this paradigm until now, the specifics of the optics presented to the participants, and the available space for locomotion. We argue that it is important to make a close comparison of task behavior in VR with that outside of VR, but conclude with great expectations for the role of VR in perception-action research.

59 citations


Journal ArticleDOI
TL;DR: The findings indicate that VR can be used to provide a useful platform for teaching real-world motor skills, and that this may be achieved by its ability to direct the learner's attention to the key anatomical features of a to-be-learned action.
Abstract: Does virtual reality (VR) represent a useful platform for teaching real-world motor skills? In domains such as sport and dance, this question has not yet been fully explored. The aim of this study was to determine the effects of two variations of real-time VR feedback on the learning of a complex dance movement. Novice participants (n = 30) attempted to learn the action by both observing a video of an expert's movement demonstration and physically practicing under one of three conditions. These conditions were: full feedback (FULL-FB), which presented learners with real-time VR feedback on the difference between 12 of their joint center locations and the expert's movement during learning; reduced feedback (REDUCED-FB), which provided feedback on only four distal joint center locations (end-effectors); and no feedback (NO-FB), which presented no real-time VR feedback during learning. Participants' kinematic data were gathered before, immediately after, and 24 hr after a motor learning session. Movement error was calculated as the difference in the range of movement at specific joints between each learner's movement and the expert's demonstrated movement. Principal component analysis was also used to examine dimensional change across time. The results showed that the REDUCED-FB condition provided an advantage in motor learning over the other conditions: it achieved a significantly greater reduction in error across five separate error measures. These findings indicate that VR can be used to provide a useful platform for teaching real-world motor skills, and that this may be achieved by its ability to direct the learner's attention to the key anatomical features of a to-be-learned action.

52 citations


Journal ArticleDOI
TL;DR: The results suggest that the quality of virtual environments has an impact on distance estimation within reaching space and confirm the use of vergence as an absolute distance cue in virtual environments within the arm's reaching space.
Abstract: In this paper, we address depth perception in the peripersonal space within three virtual environments: poor environment (dark room), reduced cues environment (wireframe room), and rich cues environment (a lit textured room). Observers binocularly viewed virtual scenes through a head-mounted display and evaluated the egocentric distance to spheres using visually open-loop pointing tasks. We conducted two different experiments within all three virtual environments. The apparent size of the sphere was held constant in the first experiment and covaried with distance in the second one. The results of the first experiment revealed that observers more accurately estimated depth in the rich virtual environment compared to the visually poor and the wireframe environments. Specifically, observers' pointing errors were small in distances up to 55 cm, and increased with distance once the sphere was further than 55 cm. Individual differences were found in the second experiment. Our results suggest that the quality of virtual environments has an impact on distance estimation within reaching space. Also, manipulating the targets' size cue led to individual differences in depth judgments. Finally, our findings confirm the use of vergence as an absolute distance cue in virtual environments within the arm's reaching space.

45 citations


Journal ArticleDOI
TL;DR: Analyzing feedback in terms of information exchange, this work discusses different feedback combinations and their application to virtual reality training of rowing skills.
Abstract: The use of virtual environments (VE) for training sports is quite natural when considering strategic or cognitive aspects. Using VE for sensorimotor training is more challenging, in particular with the difficulty of transferring the task learned in the virtual world to the real world. Of special concern for the successful transfer is the adequate combination of training experience protocols and the delivery modes of multimodal feedback. Analyzing feedback in terms of information exchange, this work discusses different feedback combinations and their application to virtual reality training of rowing skills.

43 citations


Journal ArticleDOI
TL;DR: A complete set of algorithms for contact detection, deformation estimation, force rendering, and force control is developed, demonstrating that the system can provide accurate stiffness modulation with perceptually insignificant errors.
Abstract: Haptic augmented reality (AR) mixes a real environment with computer-generated virtual haptic stimuli, enabling the system to modulate the haptic attributes of a real object to desired values. This paper reports our second study on this functionality, with stiffness as a goal modulation property. Our first study explored the potential of haptic AR by presenting an effective stiffness modulation system for simple 1D interaction. This paper extends the system so that a user can interact with a real object in any 3D exploratory pattern while perceiving its augmented stiffness. We develop a complete set of algorithms for contact detection, deformation estimation, force rendering, and force control. The core part is the deformation estimation where the magnitude and direction of real object deformation are estimated using a contact dynamics model identified in a preprocessing step. All algorithms are designed in a way that maximizes the efficiency and usability of the system while maintaining convincing perceptual quality. In particular, the need for a large amount of preprocessing such as geometry modeling is avoided to improve the usability. The physical performance of each algorithm is thoroughly evaluated with real samples. Each algorithm is experimentally verified to satisfy the physical performance requirements that need to be satisfied to achieve convincing rendering quality. The final perceptual quality of stiffness rendering is assessed in a psychophysical experiment where the difference in the perceived stiffness between augmented and virtual objects is measured. The error is less than the human discriminability of stiffness, demonstrating that our system can provide accurate stiffness modulation with perceptually insignificant errors. The limitations of our AR system are also discussed along with a plan for future work.
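The core idea of stiffness modulation in haptic AR can be sketched with a linear contact model: the device adds a virtual force on top of the real object's own response so that their sum feels like the target stiffness. This is a simplified illustration under a Hookean assumption, not the paper's full 3D algorithm:

```python
def augmentation_force(deformation_m, k_real, k_target):
    """Virtual force (N) the haptic device must add so that the combined
    real + virtual response feels like stiffness k_target (N/m).
    Assumes a linear (Hookean) contact: F_total = k_target * d
    = k_real * d (real object) + (k_target - k_real) * d (device)."""
    return (k_target - k_real) * deformation_m
```

For example, to make a 500 N/m object feel like 800 N/m at 1 cm of deformation, the device would add 3 N along the deformation direction; the paper's deformation-estimation step supplies that direction and magnitude in 3D.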

Journal ArticleDOI
TL;DR: This research proposes a novel SSVEP-based BCI controller for navigation within a 3D virtual environment and finds that a controller integrated within the virtual scene, together with real-time feedback, seems to improve subjective preference and feeling of presence, despite reduced performance in terms of speed.
Abstract: An open question in research nowadays is the usability of brain-computer interfaces (BCI) conceived to extend human capabilities of interaction within a virtual environment. Several paradigms are used for BCI, but the steady-state visual-evoked potential (SSVEP) stands out as it provides a higher information transfer rate while requiring less training. It is an electroencephalographic response detectable when the user looks at a flickering visual stimulus. This research proposes a novel approach for SSVEP-based BCI controller used here for navigation within a 3D virtual environment. For the first time, the flickering stimuli were integrated into virtual objects as a part of the virtual scene in a more transparent and ecological way. As an example, when navigating inside a virtual natural outdoor scene, we could embed the SSVEP flashes in the wings of virtual butterflies surrounding the user. We could also introduce the use of animated and moving stimulations when using SSVEP-based BCI, as the virtual butterflies were left with the possibility of moving and flying in front of the user. Moreover, users received real-time feedback of their mental activity and were thus aware of their detected SSVEP directly and continuously. An experiment has been conducted to assess the influence of both the feedback and the integrated controller on navigation performance and subjective preference. We found that the usage of a controller integrated within the virtual scene along with the feedback seems to improve subjective preference and feeling of presence, despite reduced performance in terms of speed. This suggests that SSVEP-based BCI interfaces for virtual environments could move on from static targets and use integrated and animated stimuli presented in an ecological way for controls in systems where performance demands could be relaxed to benefit an improvement in interaction naturalness.
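A common way to detect which flickering stimulus the user is attending is to compare spectral power at the candidate flicker frequencies; the toy detector below stands in for the paper's (unspecified) SSVEP classifier and uses a synthetic signal rather than real EEG:

```python
import numpy as np

def detect_ssvep(signal, fs, candidate_freqs):
    """Pick the flicker frequency with the highest spectral power.
    A toy stand-in for an SSVEP classifier: real systems typically use
    more robust methods (e.g., canonical correlation analysis)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    # score each candidate by the power in its nearest frequency bin
    scores = [power[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# synthetic "EEG": a 12 Hz oscillation buried in noise
fs = 256
t = np.arange(fs * 4) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
```

In the butterfly scenario above, each butterfly's wings would flicker at one of the candidate frequencies, and the detected frequency selects the navigation command.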

Journal ArticleDOI
TL;DR: The PhyNNeSS method distinguishes itself from previous efforts in that a systematic physics-based precomputational step allows training of neural networks which may be used in real-time simulations, and is scalable, with the accuracy being controlled by the number of neurons used in the simulation.
Abstract: While an update rate of 30 Hz is considered adequate for real-time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real-time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. In this work we present PhyNNeSS-a Physics-driven Neural Networks-based Simulation System-to address this long-standing technical challenge. The first step is an offline precomputation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function Network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. We present realistic simulation examples from interactive surgical simulation with real-time force feedback. As an example, we have developed a deformable human stomach model and a Penrose drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based precomputational step allows training of neural networks which may be used in real-time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal Interactive Simulation) for general use.
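The real-time reconstruction step of such a system reduces to evaluating a radial basis function network; the sketch below shows that evaluation with Gaussian kernels, where the centers, widths, and weights stand in for coefficients that would be learned offline from the FEM displacement database (the actual PhyNNeSS parameterization is not specified here):

```python
import numpy as np

def rbf_reconstruct(x, centers, widths, weights):
    """Evaluate a trained RBFN at query point x:
    output = sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2)).
    centers: (n, d) array, widths: (n,) array, weights: (n, m) array."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distances to centers
    phi = np.exp(-d2 / (2.0 * widths ** 2))   # Gaussian activations
    return phi @ weights                      # weighted sum -> m-dim output
```

Because this is a fixed-size sum of exponentials, it evaluates in microseconds, which is what makes the 1 kHz haptic update rate attainable; accuracy scales with the number of neurons, as the abstract notes.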

Journal ArticleDOI
TL;DR: It is proposed that people are able to identify the cooperator by inferring the agent's goals from its patterns of facial displays, a process termed reverse appraisal because it reverses the usual process in which appraising relevant events with respect to one's goals leads to specific emotion displays.
Abstract: Acknowledging the social functions of emotion in people, there has been growing interest in the interpersonal effect of emotion on cooperation in social dilemmas. This paper explores whether and how facial displays of emotion in embodied agents impact cooperation with human users. The paper describes an experiment where participants play the iterated prisoner's dilemma against two different agents that play the same strategy (tit-for-tat), but communicate different goal orientations (cooperative vs. individualistic) through their patterns of facial displays. The results show that participants are sensitive to differences in the displays of emotion and cooperate significantly more with the cooperative agent. The results also reveal that cooperation rates are only significantly different when people play first with the individualistic agent. This is in line with the well-known black-hat/white-hat effect from the negotiation literature. However, this study emphasizes that people can discern a cooperator (white-hat) from a noncooperator (black-hat) based only on emotion displays. We propose that people are able to identify the cooperator by inferring, from the emotion displays, the agent's goals. We refer to this as reverse appraisal, as it reverses the usual process in which appraising relevant events with respect to one's goals leads to specific emotion displays. We discuss implications for designing human-computer interfaces and understanding human-human interaction.
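Both agents in the experiment play tit-for-tat, the standard iterated prisoner's dilemma strategy, and differ only in their emotion displays. The strategy and game loop are easy to state; payoff values below are the conventional T=5, R=3, P=1, S=0:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

# Payoff to the first-listed player for (my_move, their_move).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strategy_a, strategy_b, rounds=10):
    """Iterate the prisoner's dilemma; each strategy sees the other's history."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

The experimental point above is that with identical behavior, only the facial displays differentiate the "cooperative" agent from the "individualistic" one.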

Journal ArticleDOI
TL;DR: Evaluated auditory feedback designs supporting a three-dimensional rowing-type movement indicated that the practicability of the auditory designs depends on the polarity of the mapping functions, and both visual and auditory concurrent feedback designs were practical to immediately support multidimensional movement.
Abstract: In general, concurrent augmented feedback has been shown to effectively enhance learning in complex motor tasks. However, to optimize technical systems that are intended to reinforce motor learning, a systematic evaluation of different augmented feedback designs is required. Until now, mainly visual augmented feedback has been applied to enhance learning of complex motor tasks. Since most complex motor tasks are mastered in response to information visually perceived, providing augmented concurrent feedback in a visual manner may overload the capacities of visual perception and cognitive processing. Thus, the aim of this work was to evaluate the practicability of auditory feedback designs supporting a three-dimensional rowing-type movement in comparison with visual feedback designs. We term a feedback design practical if the provided information can easily be perceived and interpreted, and immediately be used to support the movement. In a first experiment, it became evident that participants could interpret three-dimensional auditory feedback designs based on stereo balance, pitch, timbre, and/or volume. Eleven of 12 participants were able to follow the different target movements using auditory feedback designs as accurately as with a very abstract visual feedback design. Visual designs based on superposition of actual and target oar orientation led to the most accurate performance. Considering the first experimental results, the feedback designs were further developed and again evaluated. It became evident that a permanent visual display of the target trajectories could further enhance movement accuracy. Moreover, results indicated that the practicability of the auditory designs depends on the polarity of the mapping functions. In general, both visual and auditory concurrent feedback designs were practical to immediately support multidimensional movement. In a next step, the effectiveness to enhance motor learning will be systematically evaluated.
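A three-dimensional auditory design of the kind evaluated above maps each movement dimension onto one audio dimension. The mapping below is an illustrative sketch; the specific ranges, polarities, and parameter names are assumptions, and the abstract notes that the choice of polarity matters for practicability:

```python
def movement_to_audio(azimuth_deg, elevation_deg, depth_norm):
    """Map three dimensions of a rowing-type oar movement onto stereo
    balance, pitch, and volume (hypothetical ranges and polarities):
      azimuth_deg in [-90, 90] -> pan in [-1, 1]
      elevation_deg in [0, 90] -> pitch in [220, 880] Hz
      depth_norm in [0, 1]     -> volume in [0, 1]
    """
    pan = max(-1.0, min(1.0, azimuth_deg / 90.0))
    pitch_hz = 220.0 + (880.0 - 220.0) * max(0.0, min(1.0, elevation_deg / 90.0))
    volume = max(0.0, min(1.0, depth_norm))
    return pan, pitch_hz, volume
```

Reversing any of these mapping functions (e.g., pitch falling as the oar rises) changes the polarity, which is the design variable the second experiment found to affect how easily participants could use the feedback.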

Journal ArticleDOI
TL;DR: The development of the haptic rendering algorithm for the handshaking system, its integration with visual and haptic cues, and reports about the results of subjective evaluation experiments that were carried out are discussed.
Abstract: This paper focuses on the development and evaluation of a haptic enhanced virtual reality system which allows a human user to make physical handshakes with a virtual partner through a haptic interface. Multimodal feedback signals are designed to generate the illusion that a handshake with a robotic arm is a handshake with another human. Advanced controllers of the haptic interface are developed to respond to user behaviors online. Techniques to achieve online behavior generation are presented, such as a hidden-Markov-model approach to human interaction strategy estimation. Human-robot handshake experiments were carried out to evaluate the performance of the system. Two different approaches to haptic rendering were compared in experiments: a controller in basic mode with an embedded curve in the robot that disregards the human partner, and an interactive robot controller for online behavior generation. The two approaches were compared with the ground truth of another human driving the robot via teleoperation instead of the controller implementing a virtual partner. In the evaluation results, the human approach is rated to be most human-like, with the interactive controller following closely behind, followed by the controller in basic mode. This paper mainly concentrates on discussing the development of the haptic rendering algorithm for the handshaking system, its integration with visual and haptic cues, and reports about the results of subjective evaluation experiments that were carried out.

Journal ArticleDOI
Seung-A Annie Jin
TL;DR: Results from a between-subjects full-factorial experiment demonstrated that the regulatory fit between regulatory focus state and means for goal pursuit in computer-mediated communication within 3D VEs increases users' enjoyment, feelings of presence, and postexperimental healthy eating intentions.
Abstract: This research examined the effects of regulatory fit on media users' enjoyment of interactions with a virtual interlocutor and feelings of social presence and self-presence in the 3D virtual environments (VEs) of Second Life. Results from a 2 (regulatory focus state: promotion vs. prevention) × 2 (regulatory strategy: eagerness means vs. vigilance means) between-subjects full-factorial experiment demonstrated that the regulatory fit between regulatory focus state and means for goal pursuit in computer-mediated communication (CMC) within 3D VEs increases users' enjoyment, feelings of presence, and postexperimental healthy eating intentions. A path analysis further revealed the mediating roles of social presence and self-presence. Theoretical and methodological contributions as well as practical implications are discussed.

Journal ArticleDOI
TL;DR: The results of these studies provide evidence that the AMB can be used to manipulate beliefs and perceptions and alter the reported experience of pain and it is concluded that the system has potential for use in experimental and in clinical settings.
Abstract: Video mediated and augmented reality technologies can challenge our sense of what we perceive and believe to be real. Applied appropriately, the technology presents new opportunities for understanding and treating a range of human functional impairments as well as studying the underlying psychological bases of these phenomena. This paper describes our augmented mirror box (AMB) technology which builds on the potential of optical mirror boxes by adding functions that can be applied in therapeutic and scientific settings. Here we test hypotheses about limb presence and perception, belief, and pain using laboratory studies to demonstrate proof of concept. The results of these studies provide evidence that the AMB can be used to manipulate beliefs and perceptions and alter the reported experience of pain. We conclude that the system has potential for use in experimental and in clinical settings.

Journal ArticleDOI
TL;DR: An implementation of an embodied conversational agent as a companion is presented, showing the development of individual modules that attempt to address challenges in generating appropriate affective responses, selecting the overall shape of the dialogue, providing prompt system response times, and handling interruptions.
Abstract: The development of embodied conversational agents (ECA) as companions brings several challenges for both affective and conversational dialogue. These include challenges in generating appropriate affective responses, selecting the overall shape of the dialogue, providing prompt system response times, and handling interruptions. We present an implementation of such a companion showing the development of individual modules that attempt to address these challenges. Further, to resolve resulting conflicts, we present encompassing interaction strategies that attempt to balance the competing requirements along with dialogues from our working prototype to illustrate these interaction strategies in operation. Finally, we provide the results of an evaluation of the companion using an evaluation methodology created for conversational dialogue and including analysis using appropriateness annotation.

Journal ArticleDOI
TL;DR: It is concluded that low-cost nonvestibular motion cueing may be a welcome alternative for improving in-simulator performance so that it better matches real-world driving performance.
Abstract: Motion platforms can be used to provide vestibular cues in a driving simulator, and have been shown to reduce driving speed and acceleration. However, motion platforms are expensive devices, and alternatives for providing motion cues need to be investigated. In independent experiments, the following eight low-cost nonvestibular motion cueing systems were tested by comparing driver performance to control groups driving with the cueing system disengaged: (1) seat belt tensioning system, (2) vibrating steering wheel, (3) motion seat, (4) screeching tire sound, (5) beeping sound, (6) road noise, (7) vibrating seat, and (8) pressure seat. The results showed that these systems are beneficial in reducing speed and acceleration and that they improve lane-keeping and/or stopping accuracy. The seat belt tensioning system had a particularly large influence on driver braking performance. This system reduced driving speed, increased stopping distance, reduced maximum deceleration, and increased stopping accuracy. It is concluded that low-cost nonvestibular motion cueing may be a welcome alternative for improving in-simulator performance so that it better matches real-world driving performance.

Journal ArticleDOI
TL;DR: Investigation of the role of spatial abilities on uninhabited ground vehicle (UGV) performance under two different viewing conditions showed that participants with higher spatial abilities exhibited superior performance in both direct line of sight and teleoperation.
Abstract: Two experiments investigated the role of spatial abilities on uninhabited ground vehicle (UGV) performance under two different viewing conditions: direct line of sight and teleoperation. The ability to operate a mobile robot was indexed by task completion time and total number of course collisions. Results showed that participants with higher spatial abilities exhibited superior performance in both direct line of sight and teleoperation. Performance under direct line of sight was correlated with both spatial relations and spatial visualization, whereas performance during teleoperation was only correlated with spatial relations ability. Understanding the roles of spatial abilities under different viewing conditions will aid in the advancement of selection criteria and training paradigms for robot operators.

Journal ArticleDOI
TL;DR: Examination of the influence of affective state on cue utilization in novel virtual environments indicates that low relative to high arousal states promote global cue utilization during navigation through novel environments.
Abstract: Humans navigate complex environments effectively by identifying and monitoring environmental spatial cues (i.e., landmarks). Previous research has shown that affective states modulate cue utilization, attentional focus, and memory. Like other human behaviors, navigation is performed within an affective context and thus may fall under its influence. The present study examines the influence of affective state on cue utilization in novel virtual environments. Employing a within-participants factorial design, we manipulated participants' affect, crossing valence (happy, sad) and arousal (high, low), with available cue type (global cues: present, absent; and local cues: present, absent) within a desktop virtual environment. Results indicated that low relative to high arousal states promote global cue utilization during navigation through novel environments; there were no effects of affective valence. Arousal effects decreased with environmental familiarity, indicating its influence on cue utilization during the initial learning of novel environments. The results are discussed with regard to theories of affect, spatial cognition, and navigation.

Journal ArticleDOI
TL;DR: It is found that different turn-taking strategies indeed influence the user's perception and also influence the subjects' speaking behavior.
Abstract: Different turn-taking strategies of an agent influence the impression that people have of it and the behaviors that they display in response. To study these influences, we carried out several studies. In the first study, subjects listened as bystanders to computer-generated, unintelligible conversations between two speakers. In the second study, subjects talked to an artificial interviewer that was controlled by a human in a Wizard of Oz setting. Questionnaires with semantic differential scales concerning personality, emotion, social skill, and interviewing skills were used in both studies to assess the impressions that the subjects had of the agents that carried out different turn-taking strategies. In addition, in order to assess the effects of these strategies on the subjects' behavior, we measured several aspects of the subjects' speech, such as speaking rate and turn length. We found that different turn-taking strategies indeed influence the user's perception. Starting too early (interrupting the user) is mostly associated with negative and strong personality attributes and is perceived as less agreeable and more assertive. Leaving pauses between turns is perceived as more agreeable, less assertive, and creates the feeling of having more rapport. Finally, we found that turn-taking strategies also influence the subjects' speaking behavior.

Journal ArticleDOI
TL;DR: In this research a passive haptic interface is explored as a surgical aid for dental implant surgery and the innovative new MR-brake actuators, inherent safety of the system, and simplicity of control make it a viable option for further exploration.
Abstract: In this research a passive haptic interface is explored as a surgical aid for dental implant surgery. The placement of a dental implant is critical since positioning mistakes can lead to permanent damage in the nerves controlling the lips, long-lasting numbness, and failure of the implant and the crown on it. Haptic feedback to the surgeon in real time can decrease dependence on the surgeon's skill and experience for accurate implant positioning and increase the overall safety of the procedure. The developed device is a lightweight mechanism with weight compensation. Rotary magnetorheological (MR) brakes were custom designed for this application using the serpentine flux path concept. The resulting MR-brakes are 33% smaller in diameter than the only commercially available brakes of this type, yet produce 2.7 times more torque at 10.9 Nm. Another contribution of this research was a ferro-fluidic sealing technique which decreased the off-state torque. The control system implemented the passive force manipulability ellipsoid algorithm for force rendering of rigid wall-following tasks. Usability experiments were conducted to drill holes with haptic feedback. The maximum average positioning error was 2.88 mm along the x axis. The errors along the y and z axes were 1.9 mm and 1.16 mm, respectively. The results are on the same order of magnitude as other dental robotic systems. The innovative new MR-brake actuators, inherent safety of the system, and simplicity of control make this passive haptic interface a viable option for further exploration.

Journal ArticleDOI
TL;DR: It is hypothesized that driving behavior in a simulator changes when motion cues are present in extreme maneuvers, and a comparison between No-Motion and Motion car driving simulation was done, demonstrating a significant change in driving behavior.
Abstract: In advanced driving maneuvers, such as a slalom maneuver, it is assumed that drivers use all the available cues to optimize their driving performance. For example, in curve driving, drivers use lateral acceleration to adjust car velocity. The same result can be found in driving simulation. However, for comparable curves, drivers drove faster in fixed-base simulators than when actually driving a car. This difference in driving behavior decreases with the use of inertial motion feedback in simulators. The literature suggests that the beneficial effect of inertial cues on driving behavior increases with the difficulty of the maneuver. Therefore, for an extreme maneuver such as a fast slalom, a change in driving behavior is expected when a fixed-base condition is compared to a condition with inertial motion. It is hypothesized that driving behavior in a simulator changes when motion cues are present in extreme maneuvers. To test the hypothesis, a comparison between No-Motion and Motion car driving simulation was made by measuring driving behavior in a fast slalom. A within-subjects design was used, with 20 subjects driving the fast slalom in both conditions. The average speed during the Motion condition was significantly lower than the average speed during the No-Motion condition. The same was found for the peak lateral acceleration generated by the car model. A power spectral density analysis performed on the steering wheel angle signal showed different control input behavior between the two experimental conditions. In addition, the results from a paired comparison showed that subjects preferred driving with motion feedback. From the lower driving speed and different control input on the steering wheel, we concluded that motion feedback led to a significant change in driving behavior.
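The kind of power spectral density analysis applied to the steering wheel angle signal can be sketched with Welch's method. The sampling rate, slalom frequency, and signal below are illustrative assumptions, not the authors' data:

```python
import numpy as np
from scipy.signal import welch

np.random.seed(0)
fs = 100.0  # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)  # 60 s of simulated driving

# Synthetic steering wheel angle (deg): a slalom component at 0.5 Hz
# plus small high-frequency corrective inputs modeled as noise
angle = 20 * np.sin(2 * np.pi * 0.5 * t) + 2 * np.random.randn(t.size)

# Welch's method averages periodograms over overlapping segments,
# trading frequency resolution for a lower-variance PSD estimate
freqs, psd = welch(angle, fs=fs, nperseg=1024)

# The dominant peak should sit near the slalom frequency; energy at
# higher frequencies reflects corrective steering activity
peak_freq = freqs[np.argmax(psd)]
print(f"dominant steering frequency: {peak_freq:.2f} Hz")
```

Comparing the high-frequency portion of such spectra between conditions is one way to quantify differences in corrective control input.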

Journal ArticleDOI
TL;DR: Methods that allow measuring the human-likeness of haptic interaction partners on a continuous scale are introduced and two subjective rating methods are proposed and correlated with a task performance measure.
Abstract: In the past, working spaces of humans and robots were strictly separated, but recent developments have sought to bring robots into closer interaction with humans. In this context, physical human-robot interaction represents a major challenge, as it is based on continuous bilateral information and energy exchanges which result in a mutual adaptation of the partners. To address the challenge of designing robot collaboration partners, a commonly adopted approach is to make them as human-like as possible. In order to compare different implementations with each other, a measure of their human-likeness on a continuous scale is required. So far, the human-likeness of haptic interaction partners has only been studied in the form of binary choices. In this paper, we first introduce methods that allow measuring the human-likeness of haptic interaction partners on a continuous scale. In doing so, two subjective rating methods are proposed and correlated with a task performance measure. To demonstrate the applicability and validity of the proposed measures, they are applied to a joint kinesthetic manipulation task and used to compare two different implementations of a haptic interaction partner: a feedforward model based on force replay, and a feedback model. This experiment demonstrates the use of the proposed measures in building a continuous human-likeness scale and the interpretation of the scale values achieved for formulating guidelines for future robot implementations.

Journal ArticleDOI
TL;DR: A validation study of a race car simulator concluded that the racing simulator is a valuable tool for driver assessment and for testing adaptations to the human–machine interface.
Abstract: Car racing is a mentally and physically demanding sport. The track time available to train drivers and test car setups is limited. Race car simulators offer the possibility of safe, efficient, and standardized human-in-the-loop training and testing. We conducted a validation study of a race car simulator by correlating the fastest lap times of 13 drivers during training events in the simulator with their fastest lap times during real-world race events. The results showed that the overall correlation was .57 (p = .044). Next, the effect of brake pedal stiffness (soft: 5.8 N/mm vs. hard: 53.0 N/mm) on racing performance was investigated in the simulator. Brake pedal stiffness may have an important effect on drivers' lap times, but it is impractical to manipulate this variable on a race car during a real-world test session. Two independent experiments were conducted using different cars and tracks. In each experiment, participants (N = 6 in Experiment 1 and N = 9 in Experiment 2) drove alternately with the soft and hard pedal in eight 20-min sessions (Experiment 1) or six 15-min sessions (Experiment 2). Two hypotheses were tested: (1) the soft pedal yields faster cornering times for corners that include a long brake zone, and (2) the hard pedal yields more high-frequency brake forces. Experiments 1 and 2 confirmed the second hypothesis but not the first. Drivers were highly adaptable to brake pedal stiffness, and the stiff pedal elicited higher pedal forces and more high-frequency brake pedal inputs. It is concluded that the racing simulator is a valuable tool for driver assessment and for testing adaptations to the human–machine interface.
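The validation step described above rests on a Pearson product-moment correlation between simulator and real-world fastest lap times. A minimal sketch of that computation follows; the lap times are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical fastest lap times (s) for seven drivers; illustrative
# values only, not the 13-driver data set from the study
simulator = np.array([92.1, 90.4, 93.8, 91.0, 94.5, 89.9, 92.7])
real_world = np.array([95.3, 94.1, 97.2, 94.0, 98.8, 93.5, 96.0])

# Pearson product-moment correlation between the two sets of lap
# times, with its two-tailed p value
r, p = stats.pearsonr(simulator, real_world)
print(f"r = {r:.2f}, p = {p:.3f}")
```

A significant positive r indicates that drivers who are fast in the simulator tend to be fast on the real track, which is the sense in which the simulator is "validated" here.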

Journal ArticleDOI
TL;DR: This study investigates the time taken to fuse a pair of stereoscopic images displayed on an HMD when the accommodative demand is matched to the vergence demand, and evaluates the potential benefits of using a dynamically adjustable lens focus in future designs of HMDs.
Abstract: Current head-mounted displays (HMDs) provide only a fixed lens focus. Viewers have to decouple their accommodation and vergence responses when viewing stereoscopic images presented on an HMD. This study investigates the time taken to fuse a pair of stereoscopic images displayed on an HMD when the accommodative demand is matched to the vergence demand. Four testing conditions exhausting the factorial combinations of accommodative demands (2.5 D and 0.5 D) and vergence demands (2.5 MA and 0.5 MA) were investigated. The results indicate that viewers take a significantly shorter amount of time to fuse a pair of stereoscopic images (i.e., fusion time) when the accommodative demand and the stereoscopic depth cues match. Further analysis suggests that an unnatural demand for the eyes to verge toward stereoscopic images whose stereo depth is farther than the accommodative demand is associated with significantly longer fusion time. This study evaluates the potential benefits of using a dynamically adjustable lens focus in future designs of HMDs.

Journal ArticleDOI
TL;DR: Blood flow velocity (BFV) in middle cerebral arteries (MCAs) has been monitored using transcranial Doppler ultrasound during the exposure to a virtual environment and an increasing trend was observed during the recovery periods.
Abstract: One of the techniques used to monitor variations in presence during a virtual reality experience is the analysis of breaks in presence (BIPs). Previous studies have monitored peripheral physiological responses during BIPs in order to find a characteristic physiological response. In this work, blood flow velocity (BFV) in middle cerebral arteries (MCAs) has been monitored using transcranial Doppler ultrasound during the exposure to a virtual environment. Two BIPs of different intensity were forced during the virtual reality experience. Variations in BFV during each BIP and during the recovery periods that followed them have been analyzed. A decreasing trend was observed in the BFV signal during the most intense BIP in most subjects. However, during the less intense BIP an oscillating behavior was observed. Significant differences have been found between the maximum percentage variations observed in each BIP. During the recovery periods, an increasing trend was observed. The mean response times (time elapsed from the beginning of the period until the maximum percentage variation in the period occurred) ranged between 10.116 s and 12.774 s during the BIPs, and between 11.025 s and 13.345 s during the recovery periods, depending on the vessel and on the kind of BIP.

Journal ArticleDOI
TL;DR: This paper deals with the haptic rendering of the catching and throwing of objects by means of this type of interface, and presents a working system, showing the possibility of effectively performing basic juggling patterns with two balls.
Abstract: Haptic interaction in a virtual world can be tool-mediated or direct; and, among direct interactions, encountered-type haptic interfaces provide physical contact only when there is contact with a virtual object. This paper deals with the haptic rendering of the catching and throwing of objects by means of this type of interface. A general model for the rendering of the impact is discussed with the associated formalism for managing multiple objects and multiple devices. Next, a key parameter for simulating the impact is selected by means of a psychophysical test. Finally, a working system is presented with the application of the rendering strategy to the case of haptic juggling, showing the possibility of effectively performing basic juggling patterns with two balls.