Open Access Book Chapter

Combining BCI with Virtual Reality: Towards New Applications and Improved BCI

Abstract
Brain-Computer Interfaces (BCIs) are communication systems that can convey messages through brain activity alone. Recently, BCIs have been gaining interest in the virtual reality (VR) community, since they have emerged as promising interaction devices for virtual environments (VEs). In particular, such implicit interaction techniques are of great interest for the VR community: for example, imagining a hand movement makes a virtual hand move, or a user can navigate through houses or museums by thought alone or simply by looking at highlighted objects. Furthermore, VEs can provide an excellent testing ground for procedures that could later be adapted to real-world scenarios; in particular, patients with disabilities can learn to control their movements or perform specific tasks in a VE. Several studies highlighting these interactions are presented.


Combining BCI with Virtual Reality:
Towards New Applications and Improved BCI
Fabien Lotte, Josef Faller, Christoph Guger, Yann Renard, Gert Pfurtscheller,
Anatole Lécuyer, Robert Leeb
1 Introduction
Historically, the main goal of Brain-Computer Interface (BCI) research was, and
still is, to design communication, control and motor substitution applications for
patients with severe disabilities [75]. These last years have indeed seen tremendous
advances in these areas with a number of groups having achieved BCI control of
prosthetics, wheelchairs and spellers, among others [49]. More recently, new applications of BCI have emerged that can benefit both patients and healthy users
alike, notably in the areas of multimedia and entertainment [51]. In this context,
Fabien Lotte
INRIA Bordeaux Sud-Ouest, 351 cours de la libération, F-33405, Talence, France. e-mail: fabien.lotte@inria.fr
Josef Faller
Institute for Knowledge Discovery, Laboratory of Brain-Computer Interfaces, Graz University of Technology, Krenngasse 37, A-8010 Graz, Austria. e-mail: josef.faller@tugraz.at
Christoph Guger
g.tec medical engineering, Sierningstrasse 14, A-4521 Schiedlberg, Austria. e-mail: guger@gtec.at
Yann Renard
Independent Brain-Computer Interfaces Consultant, France. e-mail: yann.renard@aliceadsl.fr
Gert Pfurtscheller
Institute for Knowledge Discovery, Laboratory of Brain-Computer Interfaces, Graz University of Technology, Krenngasse 37, A-8010 Graz, Austria. e-mail: pfurtscheller@tugraz.at
Anatole Lécuyer
INRIA Rennes Bretagne-Atlantique, Campus Universitaire de Beaulieu, F-35042 Rennes Cedex, France. e-mail: anatole.lecuyer@inria.fr
Robert Leeb
Chair in Non-Invasive Brain-Machine Interface, École Polytechnique Fédérale de Lausanne, Station 11, CH-1015 Lausanne, Switzerland. e-mail: robert.leeb@epfl.ch

combining BCI with Virtual Reality (VR) technologies has rapidly been envisioned
as very promising [37, 40]. Such a combination is generally achieved by designing a system that provides the user with immersive 3D graphics and feedback with which they can interact in real time using the BCI. The promising potential of this
BCI-VR combination is visible at two levels. On one hand, BCI is seen by the VR
community as a new input device that may completely change the way to interact
with Virtual Environments (VE) [37]. Moreover, BCI might also be more intuitive
to use than traditional devices. In this sense, BCI can be seen as following a path
similar to that of haptic devices a few years ago [7], that led to new ways of con-
ceiving VR interaction. On the other hand, VR technologies also appear as useful
tools for BCI research. A VE can indeed provide richer and more motivating feedback for BCI users than traditional feedback, which usually takes the form of a simple 2D bar displayed on screen. Therefore, VR feedback could enhance the learnability of
the system, i.e., reduce the amount of time needed to learn the BCI skill as well as
increase the mental state classification performance [40, 64]. VE can also be used as
a safe, cost-effective and flexible training and testing ground for prototypes of BCI
applications. For instance, it could be used to train a patient to control a wheelchair
with a BCI [39] and to test various designs for the wheelchair control, all without any physical risk and at very limited cost. As such, VR can be used as an intermediary step before using BCI applications in real life. Finally, VR could be
the basis of new applications of BCI, such as 3D video games and artistic creation
for both patients and healthy users, as well as virtual visits (cities, museums, ...) and virtual online communities for patients, in order to address their social needs¹.
Designing a system combining BCI and VR comes with several important challenges. First, since the BCI is used as an input device, it should ideally be as convenient and intuitive to use as other VR input devices. This means that (1) the BCI should provide the user with several commands for the application; (2) the user should be able to send these commands at any time, at will, i.e., the BCI should be self-paced (a.k.a. asynchronous); (3) the mapping between the mental states used
and the commands (i.e., the interaction technique) should be intuitive, efficient, and
not lead to too much fatigue for the user. This last point is particularly challenging
since current BCIs are usually based on a very small number of mental states, typically only 2 or 3, whereas the number of interaction tasks that can be performed
on a typical VE is very large, usually much larger than 3. From the point of view
of the VE design and rendering, the challenges include (1) providing meaningful VR feedback to the user, in order to enable them to control the BCI; (2) integrating the stimuli needed for BCIs based on evoked potentials as tightly and seamlessly as possible, so as not to deteriorate the credibility and thus the immersiveness of the VE; and (3) designing a VR application that is useful and usable despite the huge
differences between a typical VE and the standard BCI training protocols.
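The mismatch between a handful of mental states and a much larger set of interaction tasks is commonly handled with a mode-switching scheme, where one mental state cycles through interaction modes while the others trigger actions within the current mode. The following Python sketch illustrates the idea; all class names, mental-state labels and commands are hypothetical, not taken from any system described in this chapter:

```python
# Hypothetical sketch: mapping three classified mental states onto many
# VE commands via mode switching. Illustrative only.

MODES = ["navigate", "select", "manipulate"]

# Per-mode meaning of the two "action" mental states.
COMMANDS = {
    "navigate":   {"left_hand": "turn_left",  "right_hand": "turn_right"},
    "select":     {"left_hand": "prev_item",  "right_hand": "next_item"},
    "manipulate": {"left_hand": "rotate_ccw", "right_hand": "rotate_cw"},
}

class ModeSwitchingMapper:
    """Turns a stream of classified mental states into VE commands."""

    def __init__(self):
        self.mode_index = 0

    @property
    def mode(self):
        return MODES[self.mode_index]

    def map_state(self, mental_state):
        # The third class ("feet" imagery here) cycles through the modes;
        # the other two classes are reinterpreted according to the mode.
        if mental_state == "feet":
            self.mode_index = (self.mode_index + 1) % len(MODES)
            return f"mode:{self.mode}"
        return COMMANDS[self.mode].get(mental_state, "no_op")
```

With this scheme, two action classes and one switch class give access to six distinct commands, at the cost of extra switching steps and some added mental workload for the user.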
This chapter presents an overview of the research works that have combined BCI
and VR and addressed these challenges. As such, (1) it surveys recent works that use
BCI to interact with VE, (2) it highlights the critical aspects and solutions for the
¹ See, for instance, the work achieved as part of the BrainAble project: http://www.brainable.org/

design of BCI-based VR applications and (3) it discusses the related perspectives. It
is organized as follows: Section 2 provides some introductory material on VR and
the way to interact with VE using a BCI. Then, Section 3 reviews existing BCI-
based VR applications according to the different neurophysiological signals used to
drive the BCI. More particularly, Section 3.1 discusses VR applications controlled
with a motor imagery (MI)-based BCI, Section 3.2 those based on Steady State
Visual Evoked Potentials (SSVEP) and Section 3.3 those exploiting a P300-based
BCI. Then, Section 4 elaborates on the impact of VR on BCI use, notably in terms
of BCI performance and user experience. Finally, Section 5 concludes the chapter.
2 Basic principles behind VR and BCI control
This section gives some insights about how VE can be controlled with a BCI. In
the first subsection, VR is defined and the typical interaction tasks are described.
The suitability of the different BCI neurophysiological signals (MI, P300, SSVEP)
for each interaction task is also briefly mentioned. In the second subsection, a gen-
eral architecture for BCI-based VR applications is proposed. This architecture is
illustrated with examples of existing VR applications using BCI as input device.
2.1 Definition of Virtual Reality
A VR environment can be defined as an immersive system that provides the user
with a sense of presence (the feeling of “being there” in the virtual world [8]) by
means of plausible interactions with a real-time simulated synthetic world [36].
Such plausible interaction is made possible thanks to two categories of devices:
input and output devices. First, the user must be able to interact with the virtual
world in real time. This is achieved by using input devices such as game pads, data
gloves, motion tracking systems or, as described in this chapter, BCI. Second, the
user must be provided with real time feedback about the virtual world state. To this
end, various output devices are generally used to render the virtual world content,
such as visual displays, spatial sound systems or haptic devices.
According to Bowman et al. [6], typical interaction tasks in a 3D VE can be described as belonging to one of the following categories:
- Object selection: it consists in selecting an object among those available in the virtual world, typically in order to subsequently manipulate it.
- Object manipulation: it consists in changing attributes of an object in the virtual world, typically its position and orientation, or other properties such as appearance and size.
- Navigation: it consists in modifying the user's own position and orientation in the virtual world in order to explore it. In other words, navigation can be defined as moving around the VE and changing the current point of view.

- Application control: it consists in issuing commands to the application, for instance to change the system mode or to activate various functionalities.
All these categories of interaction tasks can be performed with a BCI. However,
each BCI paradigm is more or less suitable for each category of interaction task.
For instance, MI- and SSVEP-based BCIs are more suitable for navigation tasks and
possibly object manipulation because they can issue commands continuously and
potentially in a self-paced way. On the other hand, P300-based BCIs let the user pick one item from a list of usually at least four, such commands being issued in a discrete and synchronous way. For this reason, they are more suitable for object selection tasks. The suitability of each BCI paradigm is discussed in more detail and illustrated in Sections 3.1, 3.2 and 3.3, respectively.
2.2 General architecture of BCI-based VR applications
Implementing a BCI control for a VR system can be seen as using the BCI as an
input device to interact with the VE. Therefore, it consists in providing the user
with a way to act on the virtual world only by means of brain activity, and using the
available output devices to provide meaningful feedback to the user. So far, only visual feedback has been thoroughly investigated in the context of BCI-based VR applications, but other modalities, in particular audio and haptics, would also be worth
studying in the future. A BCI-based VR setup typically involves two independent software programs: 1) a BCI program that records brain signals, processes them to extract relevant features and classifies mental states in real time in order to generate commands, and 2) a VR program that simulates and renders the virtual world, provides feedback to the user and processes the received commands. Therefore, these two programs must be able to communicate in order to exchange information and commands. Figure 1 provides a
schematic representation of BCI control of a VR application.
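One iteration of this control loop, from a window of recorded signals to a command for the VR application, can be sketched in a few lines of Python. The band-power feature and threshold classifier below are toy stand-ins for illustration, not the actual processing chain of any system cited in this chapter:

```python
# Toy sketch of the BCI-to-VR control loop: EEG window -> features
# -> classified mental state -> VR command. All components are
# hypothetical stand-ins for real signal processing and classification.

def extract_features(eeg_window):
    """Stand-in feature extraction: mean power per channel."""
    return [sum(s * s for s in channel) / len(channel) for channel in eeg_window]

def classify(features, threshold=0.5):
    """Stand-in two-class classifier comparing the power of two channels."""
    return "left_hand" if features[0] > features[1] + threshold else "right_hand"

def bci_to_vr_command(eeg_window, command_map):
    """One loop iteration: signal window in, VR command out."""
    mental_state = classify(extract_features(eeg_window))
    return command_map[mental_state]

# Interaction technique: mapping mental states to navigation commands.
command_map = {"left_hand": "turn_left", "right_hand": "turn_right"}

print(bci_to_vr_command([[2.0, 2.0], [0.1, 0.1]], command_map))  # -> turn_left
```

In a real setup, each stage would run in the BCI program, and only the resulting command string would be transmitted to the VR program, which renders the corresponding feedback.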
In addition to these software considerations, there are also several hardware re-
lated issues that must be considered when using a BCI system in a VE: (1) the
biosignal amplifiers must be able to work in such a noisy environment, (2) the
recordings should ideally be done without wires to avoid collisions and irritations
within the environment, (3) the BCI system must be coupled with the VR system to
exchange information fast enough for real-time experiments and (4) in the case of
CAVE systems, users mostly want to move around and therefore active EEG elec-
trodes should be used to avoid movement artifacts.
In order to illustrate an implementation of this general architecture and propose a complete setup, we can mention two software packages devoted to BCI and VR as an example: OpenViBE and Ogre3D. OpenViBE² is a free software platform to design, test and use BCIs [63]. OpenViBE has been successfully used for the three major families of BCI: Motor Imagery [46], P300 [10] and SSVEP [44]. Ogre3D³
² http://openvibe.inria.fr/
³ http://www.ogre3d.org

Fig. 1 General architecture of a BCI-based VR application: the user generates specific brain activity patterns that are processed by the BCI system and sent as commands to the VR application. In return, the VR application provides meaningful feedback to the user; this feedback can potentially be any combination of visual, audio or haptic forms. This combination of "control of the VE" and "feedback from the VE" can elicit the sense of presence.
is a scene-oriented, flexible 3D engine that is capable of producing realistic repre-
sentations of virtual worlds in real time. Ogre3D also includes extensions for spatial
sound, physics simulation, etc. Moreover, it has been successfully used to simulate VEs on equipment ranging from basic laptops to fully immersive systems such as CAVE systems [11]. These two software packages can communicate and exchange information, commands and responses using the Virtual Reality Peripheral Network (VRPN), a widely used library providing an abstraction of VR devices [69]. Since both OpenViBE and Ogre3D have VRPN support, either natively or through contributions, they are able to communicate efficiently in order to design BCI-based VR applications, such as those described in [46, 44, 47]. More generally, not only VRPN but any other interface (such as proprietary TCP or UDP connections) can be used to communicate with existing VR systems.
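As a concrete illustration of the plain-UDP option, the two programs can exchange command strings over a local socket. The sketch below is a minimal Python example on localhost (the port number and command names are arbitrary); it only illustrates the transport idea and offers none of the device abstraction that VRPN provides:

```python
# Minimal sketch of BCI -> VR command transport over UDP on localhost.
# Port and message format are hypothetical; VRPN offers a richer,
# standardized device abstraction on top of such transports.
import socket

VR_ADDR = ("127.0.0.1", 50007)  # address where the VR program listens

def send_command(sock, command):
    """BCI side: push one classified command to the VR program."""
    sock.sendto(command.encode("utf-8"), VR_ADDR)

def receive_command(sock):
    """VR side: pull the next command (blocking) and decode it."""
    data, _ = sock.recvfrom(1024)
    return data.decode("utf-8")

# Example: one command round-trip within a single process.
vr_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
vr_sock.bind(VR_ADDR)
bci_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_command(bci_sock, "turn_left")
print(receive_command(vr_sock))  # -> turn_left
```

In practice the two sockets would live in separate processes (or machines), and the VR program would poll for commands once per rendered frame to keep the interaction real-time.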
Naturally, various other software and hardware can also be used to design BCI-
based VR applications, such as Matlab/Simulink for real-time EEG signal pro-
cessing and XVR (eXtremeVR 3D software, VRMedia, Italy) for VE design and
applications [28, 27]. Furthermore, simple projection walls with the Qt⁴ application framework (Nokia Corporation, Finland), stereoscopic presentation techniques such as a head-mounted display (HMD) with VRjuggler, and even fully immersive multi-projection, stereo-based and head-tracked VE systems (commonly known as a “CAVE” [11] using DIVE software, or “DAVE” [21]) with the scene graph library OpenSG have already been used and combined with a MATLAB-based BCI [38]. On the
⁴ http://qt.nokia.com/
