
Journal ArticleDOI

A study of developments and applications of mixed reality cubicles and their impact on learning

16 Sep 2019-Vol. 37, pp 15-31

TL;DR: This paper investigates and presents the cost-effective application of augmented reality (AR) as a mixed reality technology delivered via mobile devices such as head-mounted devices, smartphones and tablets.

Abstract: The purpose of this paper is to report on developments and applications of mixed reality cubicles and their impacts on learning in higher education. It investigates and presents the cost-effective application of augmented reality (AR) as a mixed reality technology delivered via mobile devices such as head-mounted devices, smartphones and tablets, and discusses the development of mixed reality applications for mobile devices (smartphones and tablets) leading up to the implementation of a mixed reality cubicle for immersive three-dimensional (3D) visualizations.

The approach adopted was to limit considerations to the application of AR via mobile platforms, including head-mounted devices, with a focus on smartphones and tablets, which contain basic feedback-to-user channels such as speakers and display screens. An AR visualization cubicle was jointly developed and applied by three collaborating institutions. The markers, acting as placeholders, serve as identifiable reference points for objects being inserted into the mixed reality world. Hundreds of participants comprising academics and students from seven different countries took part in the studies and gave feedback on the impact on their learning experience.

Results from the current study show that less than 30 percent of participants had used mixed reality environments, which is lower than expected: about 70 percent were first-time users of mixed reality technologies. This indicates a relatively low use of mixed reality technologies in education, and is consistent with reported research findings that educational use of and research on AR are still not common despite their categorization as emerging technologies with great promise for educational use.

Current research has focused mainly on cubicles, which provide an immersive experience if used with head-mounted devices (goggles and smartphones) that are limited by their display/screen sizes. There are issues with limited battery lifetime, hence the need for rechargeable batteries. The standard dimensions of cubicles also do not allow for group visualizations, and the current cubicle has limitations with complex gestures and movements involving two hands, as one hand is needed to hold the mobile phone.

The use of mixed reality cubicles would allow and enhance information visualization for big data in real time and without restrictions. There is potential to extend this for use in exploring and studying otherwise inaccessible locations such as sea beds and underground caves. Following on from this study, further work could be done on the development and application of mixed reality cubicles that would impact business, health and entertainment.

The originality of this paper lies in the unique approach used in the study of developments and applications of mixed reality cubicles and their impacts on learning. The diverse composition and location of participants, drawn from many countries and comprising both tutors and students, adds value to the present study. The value of this research includes, amongst others, the useful results obtained and the scope for future developments.

Topics: Mixed reality (68%), Augmented reality (61%), Mobile device (51%)

Summary (4 min read)

1. Introduction

  • Humans typically perceive and relate with their surrounding environment using the five physiological senses of sight, smell, touch, sound and taste, although sight, sound and touch are more readily used.
  • Augmented reality, as the leading technology, has the capability to engage the user in an enhanced perception of the surroundings as well as the possibility to act as a bridge towards different types of content encompassing text, audio and video.
  • This opposing relationship between reality on the one hand and virtuality on the other hand is illustrated in Figure 1, where reality is at one extreme of a continuum while virtuality, better known as Virtual Reality (VR), is at the opposite extreme and in-between them is the mixed-reality environment (Onime et. al., 2016).
  • In AR, the goal is not to exclude the real objects (as in VR) but to blend additional or computer generated information into the real world.
  • In practice, the solution of Equations 3 and 1 is simplified during the creation of mixed reality environments by introducing special placeholders known as markers to indicate the relative entry-points (or positions) and/or orientation of other (to be introduced) objects within the environment.
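The marker-based simplification in the last point can be illustrated with a minimal sketch: real markers act as keyed entry points, and virtual objects are inserted at the pose registered for their marker. All marker ids, object names and coordinates below are hypothetical, not from the paper.

```python
# Minimal sketch of marker-based composition of a mixed reality scene.
# Marker ids, object names and poses are invented illustrations.

# Detected real-world markers: id -> (x, y, z) entry point in the room.
detected_markers = {
    "wall_front": (0.0, 1.5, 2.0),
    "floor": (0.0, 0.0, 0.0),
}

# Virtual objects registered against marker ids.
virtual_objects = {
    "terrain_model": "floor",
    "info_panel": "wall_front",
    "hidden_extra": "ceiling",  # marker not visible, so not inserted
}

def compose_scene(markers, objects):
    """Insert each virtual object at the pose of its (visible) marker."""
    scene = []
    for name, marker_id in objects.items():
        if marker_id in markers:
            scene.append((name, markers[marker_id]))
    return sorted(scene)

scene = compose_scene(detected_markers, virtual_objects)
print(scene)  # only objects whose markers were detected appear
```

The point of the sketch is that the hard pose-estimation problem is reduced to a lookup once markers are recognised.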

1.1 Virtual Reality (VR)

  • Spatialized sound may be used to provide direction such as sound growing louder as the user approaches (Zahorik, 2002).
  • Haptic devices allow users (within a VR environment) to touch surfaces, grasp and move virtual objects, possibly obtaining feedback/reactions from them (Basdogan et. al., 2000; Tan and Pentland, 1997).
  • In such virtual world(s), everything is possible as typical laws of physics such as gravity and time may be modified or eliminated completely and the users can (within its confines) overcome limitations that were previously imposed by the physical world (Loscos et. al. 2003).
  • In non-immersive VR systems, users do not have a stereo view and/or experience of the virtual environment.
  • Fully-immersive VR systems provide a total (3D) view of the computer generated environment obtained using multiple large screen devices or special eye-wear along with special input devices such as touch-screens, wands, gloves and controllers.

2. Literature Review

  • It has also been used as a platform for teaching specialized procedures to pilots (Pausch et. al., 1992) and doctors (O’Toole et. al, 1998) without the associated risks involved in a real environment.
  • They are used for information visualization, remote collaboration, human-machine interfaces, design tools as well as education and training (Scholz and Smith, 2016; Bacca et al, 2015).
  • Virtual reality is used to provide the interactive display of 3D objects in the gaming industry (massive on-line role playing games) and scientific research work especially those involving modeling and simulation.
  • Mixed reality technology provides the opportunities to combine learning and entertainment in new ways especially suited for laboratory and classroom (Davidsson et. al. 2012).

2.1 Cave Automatic Virtual Environment (CAVE)

  • In most implementations of CAVEs, the walls (including floor and ceiling) are replaced by large (wall-sized) displays or projection screens arranged such that the computer generated environment is projected all around the user.
  • Within CAVEs, VR systems also have to track and respond to, the user’s physical orientation, movements and gestures.
  • Sometimes, this may involve the use of special hand-gloves or body suits suitable for tracking movements in very fine detail.
  • Another example is the Wall-sized Interaction with Large Datasets (WILD) room (Beaudouin-Lafon et. al., 2012).
  • In WILD, the CAVE room could be used by a group of microbiologists (co-located inside the CAVE) to study how one molecule docks with another and to interactively and seamlessly switch between several 3D representations, different molecular models, online databases, websites and research articles, along with the ability to collaborate with colleagues in remote locations (Beaudouin-Lafon et. al., 2012).

2.2 Immersive Mixed Reality Environments

  • Immersive mixed reality environments offer a different approach to reproducing reality or embodied presence (Nakevska, 2012).
  • The user is exposed to a multi-dimensional environment developed from a heterogeneous composition of technologies including sensors, augmented reality and augmented virtuality, supported by processing applications and components that manage the use of contextual information without exclusion of the real-physical environment.
  • Similar to CAVEs which focus on virtual worlds, the immersive mixed reality environment aims to create a ”fantasy” world where the user is engaged using multisensory augmentations of the surrounding environment.
  • They may be expanded or moved along with the user thanks to the use of multiple geographically displaced markers.
  • Museums may associate unique markers to a sequence of displayed exhibits spread out across several rooms and corridors that would provide an immersive mixed reality environment useful for providing more information.

2.3 Mixed reality and mobile devices

  • The Augmented Reality (AR) form of mixed-reality is already present in many every-day applications that are location or context aware, including the live-television broadcast of sports events (Azuma et. al, 2001), as it provides new ways of showing relationships and connections in the real world.
  • (Uhomoibhi et. al., 2011) and (Andujar et. al., 2011) show the use of augmented reality in education and (FitzGerald, 2012) reported examples of AR applications from specific domains such as architecture and tourism, that engage the user in an exploratory role (like in games) aimed at the discovery of additional material or content.
  • Mobile devices also contain one or more of the following sensors: microphone, multi-touch input, camera, location (GPS), accelerometer (for acceleration, rotation or orientation) and ambient light level, which may be used to aid the augmentation process.
  • Many of the existing examples of mixed-reality on mobile platforms focus on using AR in providing passive information (text, audio and video overlays) to users based on input from sensors about physical location, movement and gestures.
  • Other works such as (Onime, Uhomoibhi and Pietrosemoli, 2015) document the use of Augmented Virtuality (AV) on mobile devices for estimating power output of solar panels.
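A common pattern behind such location-aware passive augmentation is choosing overlay content from the device's GPS fix. A minimal sketch, with invented points of interest and coordinates:

```python
import math

# Hypothetical points of interest: name -> (latitude, longitude).
POIS = {
    "museum_entrance": (45.6495, 13.7768),
    "harbour_view": (45.6500, 13.7600),
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pick_overlay(lat, lon, pois, max_range_m=200.0):
    """Return the nearest POI within range, or None (no overlay shown)."""
    best = min(pois, key=lambda n: haversine_m(lat, lon, *pois[n]))
    if haversine_m(lat, lon, *pois[best]) <= max_range_m:
        return best
    return None

print(pick_overlay(45.6496, 13.7770, POIS))  # a fix near the museum entrance
```

A real application would attach text, audio or video overlays to the selected POI; the selection logic itself is this simple nearest-within-range test.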

3.1 Mixed Reality Visualization Cubicles

  • A mixed reality visualization cubicle may be created using a spatial arrangement of multiple markers.
  • One or more AR markers are placed on each wall of the cubicle and each one provides a windowed view of the virtual environment.
  • Figure 3 shows the AR visualization cubicle jointly developed by Santa’s Co (a software development company from Reggio Emilia, Italy), the Ulster University (UU) and the International Centre for Theoretical Physics (ICTP).
  • The semi-immersive AR environment is composed of four large A3 markers: three were positioned vertically, each on a separate wall (left, right and front from the perspective of a user) to cover a 180◦ horizontal angle, while the fourth was placed horizontally on the floor to cover a 90◦ vertical angle.
  • That is, using this configuration, the cubicle may be used to provide a seamless wide-angle 180◦ view of the virtual world in the horizontal direction combined with a 90◦ angle in the vertical direction.
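The wide-angle coverage can be sanity-checked with basic geometry: a flat surface of width w viewed face-on from distance d subtends an angle of 2·atan(w/2d). The sketch below assumes illustrative dimensions, since the paper does not give the cubicle's measurements:

```python
import math

def subtended_angle_deg(width_m, distance_m):
    """Horizontal angle (degrees) subtended by a flat surface of the
    given width viewed face-on from the given distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# Illustration (dimensions assumed, not from the paper): for three wall
# views to tile a 180-degree horizontal field, each must cover 60 degrees.
viewing_distance = 0.5  # metres from each wall
wall_width = 2 * viewing_distance * math.tan(math.radians(30))  # about 0.58 m

per_wall = subtended_angle_deg(wall_width, viewing_distance)
print(round(per_wall), "degrees per wall;", round(3 * per_wall), "degrees total")
```

The same formula explains the floor marker's 90◦ vertical coverage: a surface whose width equals twice the viewing distance subtends exactly 90◦.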

3.2 Mixed-reality Augmentation Markers

  • The marker is a type of place holder located within the environment that acts as an identifiable reference point for insertion of objects in the mixed reality world.
  • In its basic form, scanning a two-dimensional QR code with a suitable application (QR reader) causes the opening of a pre-determined Uniform Resource Locator (URL).
  • Shapefile definitions of represented objects are used to facilitate the 3D rendering by a suitable graphics library or engine that also provides the ability to scale them.
  • In a technique used in mobile AR, the marker image is decomposed into unique set(s) of simple shapes and angles, which is then registered or encoded within the AR application as the marker (Onime, Uhomoibhi and Radicella, 2015).
  • The markers employed for the mixed-reality cubicle discussed in this paper were computer generated abstract patterns composed of random polygons in greyscale colour.
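Marker registration and matching of this kind can be approximated by reducing each marker to a small binary grid and comparing a detected pattern against the registered set under the four possible in-plane rotations. A minimal sketch; the 4x4 patterns are invented, not the paper's greyscale polygon markers:

```python
# Sketch of marker registration/matching: each marker is reduced to a
# binary grid; a detected pattern matches if any 90-degree rotation of it
# equals a registered marker. Patterns here are invented illustrations.

def rotate90(grid):
    """Rotate a square binary grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def match_marker(pattern, registry):
    """Return the id of the registered marker matching the detected
    pattern under any of its four rotations, or None."""
    for _ in range(4):
        for marker_id, grid in registry.items():
            if pattern == grid:
                return marker_id
        pattern = rotate90(pattern)
    return None

REGISTRY = {
    "floor": [[1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 1, 0, 0]],
}

# A camera sees the floor marker rotated by 90 degrees.
seen = rotate90(REGISTRY["floor"])
print(match_marker(seen, REGISTRY))  # still identified as "floor"
```

Production AR libraries perform the image-to-grid step with thresholding and perspective correction; the registration/lookup logic is essentially the above.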

3.3 Creating mobile Augmented Reality (mAR) software

  • See-through augmented reality on mobile devices: Figure 4 shows the technical flow-chart for the sequence of steps implemented in a typical mixed reality (AR) application software.
  • Consider the 3-axis accelerometer device shown in Figure 5a, which is composed of elastic elements and a suspended mass.
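When the device is static, a 3-axis accelerometer of this kind measures only gravity, so the device's tilt can be recovered from the three axis readings. A minimal sketch of the standard roll/pitch computation, assuming a common phone axis convention (not specified in the paper):

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate roll and pitch (degrees) from a static 3-axis
    accelerometer reading, assuming only gravity acts on the suspended
    mass and a typical x-right, y-up, z-out-of-screen phone convention."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

# Device lying flat on a table: gravity entirely on the z axis.
print(tilt_from_accel(0.0, 0.0, 9.81))

# Device tilted so gravity is shared equally between y and z.
roll, pitch = tilt_from_accel(0.0, 6.94, 6.94)
print(round(roll), round(pitch))  # roll near 45 degrees
```

AR applications combine such tilt estimates with the camera and other sensors to keep overlays anchored as the device moves.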

4.1 Applications

  • VR and mixed reality technologies are already widely applied in sectors ranging from medicine and entertainment to education, interactive guides, and nature and earth science.
  • Within the cubicle described in this paper, the spatial arrangement of markers provided a wide-angle (180◦ horizontal + 90◦ vertical) exploration that was used in an educational context for the interactive visualisation of geospatial data representing landforms.
  • This form of application is equally useful for conducting interactive visits to cities or other remote sites/locations in a manner that allows a user to travel along streets and also experience the sights, sounds and smells.
  • The latter would be possible with coordinated use of specialized sensors that release precaptured scents.
  • Mixed-reality based tools are gaining ground as a new class of ”Big Data” visualisation tools capable of providing interactive exploration for growing research outputs/data, large or big datasets resulting from simulations and physical experiments such as the LHC (CERN, Geneva) or Genome related sequencing (Onime and Uhomobhi, 2016).

4.2 Familiarity with mixed reality technology

  • Anonymous feedback was obtained from 174 academicians (researchers and students).
  • Demographically, participants were from 7 different countries although primarily from two institutions.
  • The consenting adult volunteers, who participated in the international study without incentives, risks or disadvantages, were informed of the purpose and confidentiality of the study and the intended use of the collected data.
  • The collected data show less than 30% had used mixed reality environments which is lower than expected.
  • It is possible that mixed reality cubicles as discussed in this paper would improve knowledge about both AR and VR technologies (Onime, Uhomoibhi and Wang, 2016).

4.3 Limitations

  • The cubicle provides a fully immersive experience if used with suitable AR goggles or head-mounted devices.
  • Tablets and normal smart-phones alone provide a windowed semi-immersive view limited by their display/screen sizes.
  • The mobile technology based viewing devices used within the cubicle may experience issues related to poor visibility in the presence of strong ambient light.
  • Within the current version of the cubicle, complex gestures or movements involving two hands are not yet possible, as users hold the mobile device in one hand and can only perform gestures with the other hand.
  • The standard dimensions of a typical cubicle do not allow for group visualizations or use.


A Study of Developments and Applications of Mixed Reality Cubicles and Their Impact on Learning
J. Uhomoibhi
Artificial Intelligence Research Group, Ulster University, Northern Ireland, UK
C. Onime
International Centre for Theoretical Physics, Trieste, Italy
H. Wang
Artificial Intelligence Research Group, Ulster University, Northern Ireland, UK
Keywords Mixed Reality, Cubicles, CAVE, Mobile Computing, Learning impacts
Paper type Research paper

1. Introduction
Humans typically perceive and relate with their surrounding environment using the five
physiological senses of sight, smell, touch, sound and taste, although sight, sound and
touch are more readily used. Mixed Reality technology has the potential to offer richer
information, increase learner engagement and to improve the educational offering for different
categories of learners. Augmented reality as the leading technology has the capability to engage
the user in an enhanced perception of the surroundings as well as the possibility to act as a bridge
towards different types of content encompassing text, audio and video. AR is characterized by the
combination of real and virtual components and by interaction in real time (Azuma, 1997;
Milgram and Kishino, 1994).
The portability of technology over the years has seen a shift from the use of heavy backpacks
and associated displays to the use of light glasses connected to mobile devices, such as Google
Glass (Google Glass, 2013), and the futuristic AR contact lens. Most recently, the use of
Microsoft's HoloLens platform (Microsoft Hololens) has resulted in new levels of immersion in a
holographic AR experience, with the help of a head-mounted display embedding all the
hardware (Cheok et al, 2004; M. Ostanin, A. Klimchik, 2018; Lang et al, 2019).
Reality may be considered as a state of having existence, substance or objects that may be
actually experienced and/or seen (Onime and Abiona, 2016), while virtuality may be considered
as having a non-realistic (or abstract) view of objects, that is, the opposite of a realistic or
notional view. This opposing relationship between reality on the one hand and virtuality on the
other is illustrated in Figure 1, where reality is at one extreme of a continuum while virtuality,
better known as Virtual Reality (VR), is at the opposite extreme and in-between them is the
mixed-reality environment (Onime et. al., 2016).
Fig. 1 Reality-Virtuality Continuum. Adapted from (Milgram et. al. 1994)
Traveling along the continuum from left to right represents diminishing reality (or reduction in real objects) and increasing virtuality (increase in virtual objects), resulting in the complete absence of real objects at the virtual end. In other words, at the VR end, the environment is completely made up of virtual objects. Two kinds of mixed reality environments are present in the continuum: Augmented Reality (AR), where the environment is predominantly composed of real objects, and Augmented Virtuality (AV), where it is predominantly made up of virtual objects. In AR, the goal is not to exclude the real objects (as in VR) but to blend additional or computer generated information into the real world. In AV, the goal is to blend real objects (data or information from the real world) into a computer generated environment (Onime et. al., 2016). From Figure 1, it is not difficult to imagine a centroid point of the continuum where it is no longer possible to distinguish the real world from the virtual world (Milgram et. al. 1994), located hypothetically between AR and AV, that represents a situation of balance, or an equal number of real and virtual objects.

In general, the environment described by the continuum may be simplified as the integration of real and virtual objects as shown in Equation 1.
E = ∫(R + V)    (1)

Where E represents the environment, R the set of real objects and V the set of virtual objects. As earlier discussed, E may be conditionally grouped into distinct environments as follows:

E = { E_R    when V = ∅
      E_AR   when |R| > |V| > 0
      E_c    when |R| = |V|
      E_AV   when 0 < |R| < |V|
      E_VR   when R = ∅ }    (2)

Where E_R, E_AR, E_c, E_AV and E_VR represent the Real, AR, centroid, AV and VR environments respectively, each of which may be individually expanded from Equation 1. Eliminating the extremities from Equation 2 then results in the mixed reality environment as shown in Equation 3.

E_MR = { E_AR, E_c, E_AV }    (3)

Where E_MR represents the mixed reality environment.

In practice, the solution of Equations 3 and 1 is simplified during the creation of mixed reality environments by introducing special place holders known as markers to indicate the relative entry-points (or positions) and/or orientation of other (to be introduced) objects within the environment. For example, in the visual form of AR, the marker is a graphically visible image that should be recognised at run-time from different distances, resolutions and angles. That is,

E_AR = ∫(R + V(R_p))    (4)

Where R_p is the set of real place-holders used for insertion of virtual objects, and

E_AV = ∫(V + R(V_p))    (5)

Where V_p is the set of virtual place-holders used for insertion of real objects. E_c is simply the special case of either Equation 4 or 5 when the cardinality of both sets (under the integral sign) are equal.
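The conditional grouping along the continuum can be expressed as a small classifier over the counts of real and virtual objects. This sketch follows the grouping into E_R, E_AR, E_c, E_AV and E_VR; the exact boundary conditions are an illustrative reconstruction, since the original equations are not fully legible:

```python
def classify_environment(n_real, n_virtual):
    """Classify an environment on the reality-virtuality continuum from
    the number of real and virtual objects it contains. The boundary
    conditions are an illustrative reconstruction of Equation 2."""
    if n_virtual == 0:
        return "E_R"    # purely real environment
    if n_real == 0:
        return "E_VR"   # purely virtual environment (VR)
    if n_real > n_virtual:
        return "E_AR"   # predominantly real: augmented reality
    if n_real < n_virtual:
        return "E_AV"   # predominantly virtual: augmented virtuality
    return "E_c"        # centroid: balance of real and virtual objects

for counts in [(5, 0), (5, 2), (3, 3), (2, 5), (0, 5)]:
    print(counts, "->", classify_environment(*counts))
```

Eliminating the two extremes (E_R and E_VR) from the classifier's outputs leaves exactly the mixed reality environments of Equation 3.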
1.1 Virtual Reality (VR)
A broad definition of VR portrays it as a technology that attempts to provide 3D interactions with a computer in new ways with emphasis on the heightened use of the human senses of sight, sound and touch. For example, spatialized sound may be used to provide direction such as sound growing louder as the user approaches (Zahorik, 2002). A narrower definition describes VR as a 3D computer-generated simulation oriented environment that allows users to interact at various levels in a more natural manner using interface devices and peripherals such as 3D eye-wear and trackers [9]. For example, haptic devices allow users (within a VR environment) to touch surfaces, grasp and move virtual objects, possibly obtaining feedback/reactions from them (Basdogan et. al., 2000; Tan and Pentland, 1997).
In VR, the user undergoes an immersion or the psychological experience of losing himself in the computer (digitally) generated environment (virtual space or world) that may sometimes be modeled after or based on an existing (real) environment. In such virtual world(s), everything is possible as typical laws of physics such as gravity and time may be modified or eliminated completely and the users can (within its confines) overcome limitations that were previously imposed by the physical world (Loscos et. al. 2003).
VR has been classified into non, semi and fully immersive systems, according to the degree of immersion experienced by the users (Fox et. al., 2009). In non-immersive VR systems, users do not have a stereo view and/or experience of the virtual environment. Semi-immersive VR systems provide a bigger view of the computer generated environment mainly through use of a large screen device or special eye-wear (or goggles), commonly combined with special input devices such as wands, gloves or controllers. Fully-immersive VR systems provide a total (3D) view of the computer generated environment obtained using multiple large screen devices or special eye-wear along with special input devices such as touch-screens, wands, gloves and controllers.
Figure 2 shows two different examples of VR environments: the first represents an indoor environment with various bits of furniture including chairs, a sofa and a painting, while the second is an outdoor view of a well developed water-front.
In many VR systems as discussed in the literature review section 2, full immersion occurs when all references to the real world environment are completely removed by housing the user in specially designed CAVE environment(s) or using special head-mounted displays (HMD) (helmet devices with mounted displays) for mobility.
This paper discusses obtaining a similar heightened (fully) immersive experience using mixed-reality technology. Section 3 presents the development and limitations of a fully immersive mixed-reality cubicle and results of a study on familiarity with mixed reality technologies at two different academic institutions, while Section 4 concludes the paper.

Fig. 2 Examples of VR environments. (Santa’s Company, 2013)
2. Literature Review
VR and mixed-reality are two technologies that are changing the future directions of
ubiquitous computing, and there are already many diverse applications of VR technology in
various sectors. For example, VR has been used as a platform to study differences in human
behaviour within a controlled environment and the real physical world (Santa’s Company, 2013).
It has also been used as a platform for teaching specialized procedures to pilots (Pausch et. al.,
1992) and doctors (O’Toole et. al, 1998) without the associated risks involved in a real
environment.
Mixed reality applications now surround us everywhere: in education, at home and in industry.
They are most obvious in video games and entertainment, but also appear in live events, retail,
education, healthcare and engineering (Quint, 2015; Bellini et al, 2016; The Ford Motor
Company, 2017). They are used for information visualization, remote collaboration, human-
machine interfaces and design tools, as well as education and training (Scholz and Smith, 2016;
Bacca et al, 2015).
Mixed Reality combines the real world and the virtual world into one user experience, which
significantly helps to extend opportunities for enhanced real learning (Lee, 2012; Guo, 2015). In
the face of rapid technological developments, with increasing student numbers and diversity of
needs, there is a search for new ways to teach; it could be argued that AR has potential
pedagogical applications to meet some of these needs.
In the education sector, there are on-line resources that use non-immersive VR related
techniques to provide several chemistry laboratory experiments/exercises, as well as simulation
of a chemistry laboratory through use of rich media powered by JavaScript (Georgiou et. al,
2007). In civil engineering, building technology and architecture, VR based prototyping is also
commonly used to provide a 3D view (or 3D printed model) of objects with varying levels of
abstraction (Cecil and Huber, 2010).
Virtual reality is used to provide the interactive display of 3D objects in the gaming industry (massive on-line role playing games) and in scientific research work, especially work involving modeling and simulation.

Citations

29 Oct 2012
Abstract: Researchers in the field of virtual environments (VE), or virtual reality, surround a participant with synthetic stimuli. The flight simulator community, primarily in the U.S. military, has a great deal of experience with aircraft simulations, and VE researchers should be aware of the major results in this field. In this survey of the literature, we have especially focused on military literature that may be hard for traditional academics to locate via the standard journals. One of the authors of this paper is a military helicopter pilot himself, which was quite useful in obtaining access to many of our references. We concentrate on research that produces specific, measured results that apply to VE research. We assume no background other than basic knowledge of computer graphics, and explain simulator terms and concepts as necessary. This paper ends with an annotated bibliography of some harder to find research results in the field of flight simulators: the effects of display parameters, including field-of-view and scene complexity; the effect of lag in system response; the effect of refresh rate in graphics update; the existing theories on causes of simulator sickness; and the after-effects of simulator use. Many of the results we cite are contradictory. Our global observation is that with flight simulator research, like most human-computer interaction research, there are very few correct answers. Almost always, the answer to a specific question depends on the task the user was attempting to perform with the simulator.

17 citations


Journal ArticleDOI
03 Jul 2020
TL;DR: The paper will add to the existing literature on emerging technologies as a unique environment to improve co-creation/co-design of the visuals created during the fuzzy front end of the design process and offer a potential framework for future empirical work.
Abstract: The purpose of this paper is to explore the benefits of co-creation/co-design using extended reality (XR) technologies during the initial stages of the design process. A review of the emerging co-creation tools within XR will be examined along with whether they offer the potential to improve the design process; this will also highlight the gaps where further research is required.,The paper draws on professional and academic experiences of the authors in creative practices within the realm of XR technology, co-creation and co-design. In addition, a review of the current literature on emerging technologies and work-based learning will offer further insight on the themes covered.,To design, collaborate, iterate and amend with colleagues and peers in a virtual space gives a wide range of obvious benefits. Creative practitioners both in education and employment are working more collaboratively with the advancement of technology. However, there is a need to find a space where collaboration can also offer the opportunity for co-creation that improves the initial stages of the design process. This technology also offers solutions on the constraints of distance and ameliorates creative expression.,There is an opportunity to test the ideas expressed in this paper empirically; this can be done through testing co-creation tools with professionals, work-based learners and students.,The paper will add to the existing literature on emerging technologies as a unique environment to improve co-creation/co-design of the visuals created during the fuzzy front end of the design process and offer a potential framework for future empirical work.

6 citations


Book Chapter
23 Dec 2015
TL;DR: This chapter introduces mobile augmented reality (semi-immersive 3D virtual reality) as a vehicle for the delivery of practical laboratory experiments in science, technology and engineering.
Abstract: The average learner today, being quite exposed to information and communication technology tools, is less inclined to read books or manuals and prefers to carry out most communications on-line using new/modern electronic devices or gadgets. The traditional teaching styles built around using only face-to-face classroom-based lessons no longer suit the learning styles of the average learner; introducing multimedia or other on-line content into teaching results in improved performance by the learners. Blended e-learning and other on-line teaching strategies tend to focus on the delivery of theoretical material; however, the pedagogy/training of engineers, technologists and scientists involves a strong hands-on practical/laboratory training component, as they are expected to create new things/technologies and not just repeat what previous generations did. The benefits of this hands-on or practical component include stimulating deep and reflective learning, thereby improving creative problem-solving capabilities while also providing exposure/insight into real-world problems and challenges. This chapter introduces mobile augmented reality (semi-immersive 3D virtual reality) as a vehicle for the delivery of practical laboratory experiments in science, technology and engineering. Mobile augmented reality delivers multi-sensorial interactions with a computing platform over commodity hardware technology that is already widely accepted. Two illustrated examples in the fields of micro-electronics and communications engineering are presented to highlight innovative features such as the ability to closely replicate an existing laboratory-based hands-on experiment, and the use of the mobile augmented reality experiment as a blended learning aid for laboratory experiments or as a stand-alone off-line experiment for distance learning.

4 citations


Journal ArticleDOI
TL;DR: This holistic research offers a thorough analysis of e-learning platforms, as seen through the lens of engineering students, and proves to be an all-encompassing one, potent enough to surface critical issues marring the e-learning experience.
Abstract: As educational institutes began to address the challenges posed by COVID-19, e-learning came to the foreground as the best bet left. This study is in quest of revealing engineering students' perceptions of the available e-learning platforms, thus surfacing the underlying bottlenecks. Further, it aims at providing solutions that would help enhance the e-learning experience, not only in pandemic times but also in the long run. This holistic research begins with a comprehensive comparative study of the available e-learning platforms, followed by a primary data analysis through an online survey of 364 engineering students from various colleges and branches. The collected data was analyzed to detect bottlenecks in online learning, and suggestions are given for solving some challenges. On a five-point Likert scale, the available e-learning platforms garnered ratings ranging from 2.81 to 3.46. Google Meet was the most preferred platform. However, with a net promoter score (NPS) of 30.36, Microsoft Teams emerged as the most satisfying platform. Technical shortcomings, together with psychological and biological factors, were found to be taking a toll on e-learning. This research is based on the perceptions of engineering students hailing mostly from Indian cities, and hence it may have educational-stream and geographical biases. The research could be further extended to cover rural areas and global trends in e-learning. The research offers a thorough analysis of e-learning platforms, as seen through the lens of engineering students. Furthermore, the analysis does not constrain itself to the technicalities and thus proves to be an all-encompassing one, potent enough to surface critical issues marring the e-learning experience.
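The abstract above reports a net promoter score (NPS) of 30.36 for Microsoft Teams without showing the computation. Under the standard NPS definition (respondents rate likelihood to recommend on a 0–10 scale; 9–10 are promoters, 0–6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors), the score can be sketched as follows. The function name and sample data are hypothetical, not taken from the study:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 likelihood-to-recommend ratings.

    Promoters score 9-10, detractors score 0-6 (7-8 are passives);
    NPS = %promoters - %detractors, ranging from -100 to +100.
    """
    if not ratings:
        raise ValueError("ratings must be non-empty")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical sample: 5 promoters, 3 passives, 2 detractors
sample = [10, 9, 9, 10, 9, 8, 7, 8, 5, 3]
print(net_promoter_score(sample))  # -> 30.0
```

Note that passives (scores of 7–8) count toward the denominator but neither add to nor subtract from the score, which is why NPS can differ sharply from a simple mean rating.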

4 citations


Journal ArticleDOI
06 Jan 2021
Abstract: This study examines students' emotional responses to augmented reality (AR) applications and their willingness to share them on social media. It also compares user experiences of AR and virtual reality (VR). In line with expectation disconfirmation theory, the study focuses on students' experiences in the post-adoption situation, where they had gained actual experience of AR applications. The participants in this case study included 100 undergraduate students from higher educational institutes. Augmentation as a value-creating mechanism seems to create surprising emotional reactions, as it created completely new and unexpected experiences for first-time users. This study also shows that positive user experiences increased the students' willingness to share AR content on social media channels. In addition, AR seems to be easier to adopt than VR with "cardboard-style" VR headsets. More research is needed to determine which specific features of AR applications and pedagogical methods create positively surprising emotional experiences that lead to rewarding learning experiences and social media sharing. The results of this study allow designers and educators to select educational technologies that emotionally engage students to use and share them. Positively surprising emotional experiences are important for rewarding learning experiences. The findings also provide hints on the future preferences of new AR users. This study created a new understanding of the emotional determinants of AR adoption and sharing on social media.

3 citations


Cites background from "A study of developments and applica..."

  • ...Although AR is a promising technology in the educational context, its educational use and research are in its infancy (Uhomoibhi et al., 2020)....



References

Journal ArticleDOI
Ronald Azuma
TL;DR: The characteristics of augmented reality systems are described, including a detailed discussion of the tradeoffs between optical and video blending approaches, and current efforts to overcome these problems are summarized.
Abstract: This paper surveys the field of augmented reality (AR), in which 3D virtual objects are integrated into a 3D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment, and military applications that have been explored. This paper describes the characteristics of augmented reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective augmented reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using augmented reality.

6,895 citations


Journal Article
TL;DR: Paul Milgram's research interests include display and control issues in telerobotics and virtual environments, stereoscopic video and computer graphics, cognitive engineering, and human factors issues in medicine.
Abstract: Paul Milgram received the BASc degree from the University of Toronto in 1970, the MSEE degree from the Technion (Israel) in 1973 and the PhD degree from the University of Toronto in 1980. From 1980 to 1982 he was a ZWO Visiting Scientist and a NATO Postdoctoral Fellow in the Netherlands, researching automobile driving behaviour. From 1982 to 1984 he was a Senior Research Engineer in Human Engineering at the National Aerospace Laboratory (NLR) in Amsterdam, where his work involved the modelling of aircraft flight crew activity, advanced display concepts and control loops with human operators in space teleoperation. Since 1986 he has worked at the Industrial Engineering Department of the University of Toronto, where he is currently an Associate Professor and Coordinator of the Human Factors Engineering group. He is also cross-appointed to the Department of Psychology. In 1993-94 he was an invited researcher at the ATR Communication Systems Research Laboratories in Kyoto, Japan. His research interests include display and control issues in telerobotics and virtual environments, stereoscopic video and computer graphics, cognitive engineering, and human factors issues in medicine. He is also President of Translucent Technologies, a company which produces "Plato" liquid crystal visual occlusion spectacles (of which he is the inventor) for visual and psychomotor research.

3,605 citations


"A study of developments and applica..." refers background in this paper

  • ...AR is characterized by the combination of real and virtual components and by interaction in real time (Azuma, 1997; Milgram and Kishino, 1994)....



Journal ArticleDOI
TL;DR: This work refers the reader to the original survey for descriptions of potential applications, summaries of AR system characteristics, and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies.
Abstract: In 1997, Azuma published a survey on augmented reality (AR). Our goal is to complement, rather than replace, the original survey by presenting representative examples of the new advances. We refer the reader to the original survey for descriptions of potential applications (such as medical visualization, maintenance and repair of complex equipment, annotation, and path planning); summaries of AR system characteristics (such as the advantages and disadvantages of optical and video approaches to blending virtual and real, problems in display focus and contrast, and system portability); and an introduction to the crucial problem of registration, including sources of registration error and error-reduction strategies.

3,269 citations


Journal ArticleDOI
TL;DR: The MagicBook project is an early attempt to explore how the authors can use a physical object to smoothly transport users between reality and virtuality.
Abstract: The MagicBook project is an early attempt to explore how we can use a physical object to smoothly transport users between reality and virtuality. Young children often fantasize about flying into the pages of a fairy tale and becoming part of the story. The MagicBook project makes this fantasy a reality using a normal book as the main interface object. People can turn the pages of the book, look at the pictures, and read the text without any additional technology. However, if a person looks at the pages through an augmented reality display, they see 3D virtual models appearing out of the pages. The models appear attached to the real page so users can see the augmented reality scene from any perspective by moving themselves or the book. The virtual content can be any size and is animated, so the augmented reality view is an enhanced version of a traditional 3D pop-up book.

651 citations


Journal ArticleDOI
TL;DR: This literature review research describes Augmented Reality (AR), how it applies to education and training, and the potential impact on the future of education.
Abstract: There are many different ways for people to be educated and trained with regard to specific information and the skills they need. These methods include classroom lectures with textbooks, computers, handheld devices, and other electronic appliances. The choice of learning innovation depends on an individual's access to various technologies and the infrastructure of a person's surroundings. In a rapidly changing society where there is a great deal of available information and knowledge, adopting and applying information at the right time and right place is needed to maintain efficiency in both school and business settings. Augmented Reality (AR) is one technology that dramatically shifts the location and timing of education and training. This literature review describes Augmented Reality (AR), how it applies to education and training, and its potential impact on the future of education.

546 citations


Frequently Asked Questions (2)
Q1. What have the authors contributed in "A study of developments and applications of mixed reality cubicles and their impact on learning" ?

This paper reports on developments and applications of mixed reality cubicles and their impacts on learning in higher education. It investigates and presents the cost-effective application of augmented reality (AR) as a mixed reality technology via or to mobile devices such as head-mounted devices, smartphones and tablets. This is consistent with reported research findings that educational use of and research on augmented reality are still not common, despite its categorization as an emerging technology with great promise for educational use. There is potential to extend this work to exploring and studying otherwise inaccessible locations such as sea beds and underground caves. Following on from this study, further work could be done on the development and application of mixed reality cubicles that would impact businesses, health, and entertainment. The originality of this paper lies in the unique approach used in the study of developments and applications of mixed reality cubicles and their impacts on learning. The diverse composition, in nature and location, of participants drawn from many countries, comprising both tutors and students, adds value to the present study. The value of this research includes, amongst others, the useful results obtained and the scope for future developments.

Future work includes the creation of an Experience Lab by the Artificial Intelligence and Applications Research Group at Ulster University, involving the deployment of several enhanced mixed-reality visualization cubicles at Ulster University and the ICTP. In line with the IVIS4BigData reference model, the mixed reality cubicle would need to be extended for multidisciplinary Computer-Supported Cooperative Work (CSCW) based on open standards, over various infrastructures including local wireless or mesh networks, the internet and clouds.