
Showing papers on "Mixed reality" published in 1999



Book ChapterDOI
01 Jan 1999
TL;DR: This work reviews MR techniques for developing CSCW interfaces, describes lessons learned from developing a variety of collaborative Mixed Reality interfaces, and outlines recent work on computer vision techniques for accurate MR registration.
Abstract: Virtual Reality (VR) appears a natural medium for computer supported collaborative work (CSCW). However immersive Virtual Reality separates the user from the real world and their traditional tools. An alternative approach is through Mixed Reality (MR), the overlaying of virtual objects on the real world. This allows users to see each other and the real world at the same time as the virtual images, facilitating a high bandwidth of communication between users and intuitive manipulation of the virtual information. We review MR techniques for developing CSCW interfaces and describe lessons learned from developing a variety of collaborative Mixed Reality interfaces. Our recent work involves the use of computer vision techniques for accurate MR registration. We describe this and identify areas for future research.

358 citations


Journal ArticleDOI
TL;DR: A general model is presented that describes how virtual reality's features and other important factors work together to shape the learning process and learning outcomes for complex, abstract material.
Abstract: Designers and evaluators of immersive virtual reality systems have many ideas concerning how virtual reality can facilitate learning. However, we have little information concerning which of virtual reality's features provide the most leverage for enhancing understanding or how to customize those affordances for different learning environments. In part, this reflects the truly complex nature of learning. Features of a learning environment do not act in isolation; other factors such as the concepts or skills to be learned, individual characteristics, the learning experience, and the interaction experience all play a role in shaping the learning process and its outcomes. Through Project Science Space, we have been trying to identify, use, and evaluate immersive virtual reality's affordances as a means to facilitate the mastery of complex, abstract concepts. In doing so, we are beginning to understand the interplay between virtual reality's features and other important factors in shaping the learning process and learning outcomes for this type of material. In this paper, we present a general model that describes how we think these factors work together and discuss some of the lessons we are learning about virtual reality's affordances in the context of this model for complex conceptual learning.

298 citations


Journal ArticleDOI
TL;DR: The proceedings of the first International Symposium on Mixed Reality provide an in-depth look at the current state of mixed reality technology and the scope of its use in entertainment and interactive arts, as well as in engineering and medical applications.
Abstract: This volume is the first book describing the new concept of "Mixed Reality", which is a kind of virtual reality in a broader sense. Published as the proceedings of the first International Symposium on Mixed Reality and written by an interdisciplinary group of experts from all over the world in both industry and academia, this book provides an in-depth look at the current state of mixed reality technology and the scope of its use in entertainment and interactive arts, as well as in engineering and medical applications. Because of the inherently interdisciplinary applications of mixed reality technology, this book will be useful for computer scientists in computer graphics, computer vision, human-computer interaction, and multimedia technologies, and for people involved in cinema/movies, architecture/civil engineering, medical informatics, and interactive entertainment.

231 citations


10 Nov 1999
TL;DR: This paper introduces a new paradigm, Spatially Augmented Reality (SAR), in which virtual objects are rendered directly within or on the user's physical space, and discusses the fundamentally different visible artifacts that arise as a result of errors in tracker measurements.
Abstract: To create an effective illusion of virtual objects coexisting with the real world, see-through HMD-based Augmented Reality techniques supplement the user's view with images of virtual objects. We introduce here a new paradigm, Spatially Augmented Reality (SAR), where virtual objects are rendered directly within or on the user's physical space. A key benefit of SAR is that the user does not need to wear a head-mounted display. Instead, with the use of spatial displays, wide field of view and possibly high-resolution images of virtual objects can be integrated directly into the environment. For example, the virtual objects can be realized by using digital light projectors to "paint" 2D/3D imagery onto real surfaces, or by using built-in flat panel displays. In this paper we present the rendering method used in our implementation and discuss the fundamentally different visible artifacts that arise as a result of errors in tracker measurements. Finally, we speculate about how SAR techniques might be combined with see-through AR to provide an even more compelling AR experience.
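As an illustration of the projector-based approach described above, the following minimal sketch projects a virtual 3D point into the image plane of a calibrated projector, the basic geometric operation behind "painting" imagery onto real surfaces. It is not the paper's renderer, and the intrinsic matrix and pose values are assumptions chosen for the example.

```python
# Minimal sketch (not the paper's renderer): projecting a virtual 3D point into the
# image plane of a calibrated projector, the basic operation behind "painting"
# imagery onto real surfaces in projector-based SAR. The intrinsics and pose below
# are illustrative values, not calibration data from the paper.
import numpy as np

def project_point(point_world, K, R, t):
    """Map a 3D world point into projector pixel coordinates.

    K: 3x3 projector intrinsic matrix (assumed known from calibration)
    R, t: rotation (3x3) and translation (3,) from world to projector frame
    """
    p_proj = R @ point_world + t      # world -> projector coordinates
    p_img = K @ p_proj                # perspective projection
    return p_img[:2] / p_img[2]       # homogeneous -> pixel coordinates

# Example: a virtual object vertex 2 m in front of the projector
K = np.array([[1400.0,    0.0, 512.0],
              [   0.0, 1400.0, 384.0],
              [   0.0,    0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
print(project_point(np.array([0.1, -0.05, 2.0]), K, R, t))
```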

202 citations


Proceedings ArticleDOI
14 Jul 1999
TL;DR: This paper investigates whether virtual reality (VR) and augmented reality (AR) offer potential for the training of manual skills, such as assembly tasks, in comparison with conventional media.
Abstract: In this paper we investigate whether virtual reality (VR) and augmented reality (AR) offer potential for the training of manual skills, such as for assembly tasks, in comparison to conventional media. We present results from experiments that compare assembly completion times for a number of different conditions. We firstly investigate completion times for a task where participants can study an engineering drawing and an assembly plan and then conduct the task. We then investigate the task under various VR conditions and context-free AR. We discuss the relative advantages and limitations of using VR and AR as training media for investigating assembly operations, and we present the results of our experimental work.

197 citations


Ronald Azuma
01 Jan 1999
TL;DR: Despite its potential, Augmented Reality has not received nearly the amount of attention paid to its sibling, Virtual Environments (or Virtual Reality), even though both fields share a common ancestor.
Abstract: Despite its potential, Augmented Reality has not received nearly the amount of attention paid to its sibling, Virtual Environments (or Virtual Reality), despite the fact that both fields share a common ancestor. It is often forgotten that Sutherland’s original HMD system was an optical see-through display. While the creators of that system did not explicitly attempt to register virtual 3-D objects with real-world objects, they did have an example of combining virtual and real. The motivation was to allow the user to issue commands. The problem was that the graphics engine did not have sufficient power to draw the menus and commands virtually at interactive rates. Therefore, they physically put large signs with the command names on a real wall, and allowed the user to virtually select one of the real signs by pointing at one with the hand controller. Despite this common root, most efforts following Sutherland’s focused on Virtual Environments. It wasn’t until the late 1980’s and early 1990’s that research in Augmented Reality began again in earnest.

126 citations


Proceedings ArticleDOI
13 Mar 1999
TL;DR: The current state of the art in virtual reality is surveyed, addressing the perennial questions of technology and applications.
Abstract: Ivan Sutherland first proposed virtual reality in 1965, and in the next few years built a working system. Twenty years later, line-drawing hardware, the Polhemus tracker, and LCD tiny-TV displays made VR feasible, if costly and inadequate, for several explorers. In 1990, journalists jumped on the idea, and hype levels went out of sight. As usual with infant technologies, the realization of the early dreams and the harnessing to real work has taken longer than the wild prognostications, but it is now happening. I survey the current state-of-the-art, addressing the perennial questions of technology and applications.

120 citations


Book ChapterDOI
23 Jun 1999
TL;DR: The Eigen-Texture method is practical because it does not require any detailed reflectance analysis of the object surface, and it benefits from the accuracy of the underlying 3D geometric models.
Abstract: Image-based and model-based methods are two representative rendering methods for generating virtual images of objects from their real images. Extensive research on these two methods has been made in CV and CG communities. However, both methods still have several drawbacks when it comes to applying them to the mixed reality where we integrate such virtual images with real background images. To overcome these difficulties, we propose a new method which we refer to as the Eigen-Texture method. The proposed method samples appearances of a real object under various illumination and viewing conditions, and compresses them in the 2D coordinate system defined on the 3D model surface. The 3D model is generated from a sequence of range images. The Eigen-Texture method is practical because it does not require any detailed reflectance analysis of the object surface, and has great advantages due to the accurate 3D geometric models. This paper describes the method, and reports on its implementation.
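The compression step can be pictured as an eigenspace (principal component) decomposition of the sampled texture maps, which is what the name Eigen-Texture suggests. The sketch below only illustrates that idea under assumed data shapes and synthetic data; it is not the authors' implementation.

```python
# Illustrative sketch of eigenspace compression of sampled textures (assumed data
# shapes, not the authors' code): each row is one texture map of a surface patch,
# sampled under one illumination/viewing condition; a truncated SVD keeps only a
# few "eigen-textures" plus per-view coefficients.
import numpy as np

def compress_textures(textures, k):
    """textures: (n_views, n_texels) array; k: number of eigen-textures to keep."""
    mean = textures.mean(axis=0)
    U, S, Vt = np.linalg.svd(textures - mean, full_matrices=False)
    eigen_textures = Vt[:k]           # (k, n_texels) basis images
    coeffs = U[:, :k] * S[:k]         # (n_views, k) per-view coefficients
    return mean, eigen_textures, coeffs

def reconstruct(mean, eigen_textures, coeffs, view):
    """Synthesize the texture of one sampled view from the compressed representation."""
    return mean + coeffs[view] @ eigen_textures

# Toy example: 20 sampled views of a 1024-texel patch, kept with 5 eigen-textures
views = np.random.rand(20, 1024)
mean, basis, coeffs = compress_textures(views, k=5)
print(np.abs(reconstruct(mean, basis, coeffs, view=0) - views[0]).mean())
```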

108 citations



Journal ArticleDOI
TL;DR: The efforts to apply multimodal and spoken language interfaces to a number of ER applications are described, with the goal of creating an even more ‘realistic’ or natural experience for the end user.
Abstract: We use the term ‘electronic reality’ (ER) to encompass a broad class of concepts that mix real-world metaphors and computer interfaces. In our definition, ‘electronic reality’ includes notions such as virtual reality, augmented reality, computer interactions with physical devices, interfaces that enhance 2D media such as paper or maps, and social interfaces where computer avatars engage humans in various forms of dialogue. One reason for bringing real-world metaphors to computer interfaces is that people already know how to navigate and interact with the world around them. Every day, people interact with each other, with pets, and sometimes with physical objects by using a combination of expressive modalities, such as spoken words, tone of voice, pointing and gesturing, facial expressions, and body language. In contrast, when people typically interact with computers or appliances, interactions are unimodal, with a single method of communication such as the click of a mouse or a set of keystrokes serving to express intent. In this article, we describe our efforts to apply multimodal and spoken language interfaces to a number of ER applications, with the goal of creating an even more ‘realistic’ or natural experience for the end user.

Book
01 Jan 1999
TL;DR: Overview and perspective; registration and rendering (image-based approach); multi-sensory augmentation; communication and collaboration; systems, design issues, and the future.
Abstract: Contents: overview and perspective; registration and rendering (image-based approach); multi-sensory augmentation; communication and collaboration; systems, design issues, and the future.

Journal ArticleDOI
01 Sep 1999-Ethos
TL;DR: In this paper, a phenomenological perspective is adopted to explore user embodiment in VR applications both with and without a visual body (re)presentation (virtual body).
Abstract: This paper considers the experience of embodiment in current and (possible) future virtual reality (VR) applications. A phenomenological perspective is adopted to explore user embodiment both in those VR applications which include a visual body (re)presentation (virtual body) and in those which do not. Embodiment is viewed from the perspective of sensorial immersion, where issues of gender, race and culture are all implicated. Accounts of ‘disrupted’ bodies (for example, phantom limbs, dissociation of the ‘self’ from the body, paralysis and objectified bodies) are advanced in order to provide a context for understanding the ways in which embodiment in virtual reality environments may be instantiated. The explicit claim that VR is an embodied experience and can facilitate the radical transfiguration of the body and its sensorial architecture is explored and evaluated.

Journal Article
TL;DR: The HMD using a Lippmann hologram is shown to have the potential for miniaturization, light weight, and a wide field of view of the outside world, and a Mixed Reality system using the authors' new HMD has been developed.
Abstract: We propose a stereoscopic Head Mounted Display (HMD) using Holographic Optical Elements (HOE). This display uses an HOE as the holographic combiner instead of the half mirror used by conventional HMDs. A single HOE can record multiple optical images thanks to its inherent angular selectivity and wavelength selectivity; the HOE combiner can therefore separate the stereoscopic images onto the left and right eyes, and it offers both high transparency for the outside world and high reflectance of the virtual images shown by an electronic display such as a small LCD or CRT. In this paper we show that an HMD using a Lippmann hologram has the potential for miniaturization, light weight, and a wide field of view of the outside world. In addition, we have developed a Mixed Reality system using our new HMD. This HMD is well suited to the Virtual Reality field, and especially to Mixed Reality, where we have to observe both the real outside world and the virtual world at the same time.

01 Jan 1999
TL;DR: Over the past decade, there has been a groundswell of activity in two fields of user interface research: augmented reality and wearable computing.
Abstract: Over the past decade, there has been a groundswell of activity in two fields of user interface research: augmented reality and wearable computing. Augmented reality [1] refers to the creation of virtual environments that supplement, rather than replace, the real world with additional information. This is accomplished through the use of "see-through" displays that enrich the user's view of the world by overlaying visual, auditory, and even haptic material on what she experiences. Visual augmented reality systems typically, but not exclusively, employ head-tracked, head-worn displays. These either use half-silvered mirror beam splitters to reflect small computer displays, optically combining them with a view of the real world, or use opaque displays fed by electronics that merge imagery captured by head-worn cameras with synthesized graphics. Wearable computing moves computers off the desktop and onto the user's body, made possible through the miniaturization of computers, peripherals, and networking technology. (While we prefer this general definition implied by the

Journal ArticleDOI
TL;DR: The authors consider how StarWalker's design illustrates the potential of unifying spatial models, semantic structures, and social navigation metaphors in the development of multiuser virtual environments.
Abstract: In the StarWalker virtual environment, users explore a shared semantic space and collaborate with concurrent visitors. The authors consider how StarWalker's design illustrates the potential of unifying spatial models, semantic structures, and social navigation metaphors in the development of multiuser virtual environments.

Proceedings ArticleDOI
27 Sep 1999
TL;DR: This paper addresses the development of AR applications in the cultural heritage field, including the use of haptic interfaces in AR systems designed at PERCRO.
Abstract: Augmented reality (AR) systems allow the user to see a mixed scenario, generated by a computer, in which virtual objects are merged with the real environment. The calibration between the two frames, the real world and the virtual environment, and the real-time tracking of the user are the most important problems for AR application implementations. Augmented reality systems have been proposed as solutions in many application domains. In this paper we address aspects related to the development of AR applications in the cultural heritage field. Possible future applications are described, including the use of haptic interfaces in AR systems designed at PERCRO.

Proceedings ArticleDOI
30 Oct 1999
TL;DR: The proposed method automatically organizes video objects to describe video data efficiently by collating real-world video data with map information using DP matching; experiments demonstrate the reliability of the method.
Abstract: Mixed reality (MR) systems which integrate the virtual world and the real world have become a major topic in the research area of multimedia. As a practical application of these MR systems, we propose an efficient method for making a 3D map from real-world video data. The proposed method is an automatic organization method focusing on video objects to describe video data in an efficient way, i.e., by collating the real-world video data with map information using DP matching. To demonstrate the reliability of this method, we describe successful experiments that we performed using 3D information obtained from the real-world video data.
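The abstract does not spell out the DP matching itself, so the sketch below shows only the generic dynamic-programming alignment pattern, here between an observed sequence and a sequence of map landmarks; the 1-D data and the cost function are illustrative assumptions, not the paper's features.

```python
# Hedged sketch of generic DP matching between an observed sequence and a sequence
# of map landmarks. The 1-D data and cost function are illustrative assumptions;
# the paper's actual features and costs are not reproduced here.
import numpy as np

def dp_match(observed, landmarks, cost):
    """Return the minimal cumulative alignment cost between the two sequences."""
    n, m = len(observed), len(landmarks)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cost(observed[i - 1], landmarks[j - 1])
            # allow a match, skipping an observation, or skipping a landmark
            D[i, j] = c + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

# Toy example: positions extracted along a street versus landmark positions on a map
observed = [0.0, 1.1, 2.2, 2.9, 4.2]
landmarks = [0.0, 1.0, 2.0, 3.0, 4.0]
print(dp_match(observed, landmarks, cost=lambda a, b: abs(a - b)))
```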

Proceedings ArticleDOI
01 Jan 1999
TL;DR: The paper describes how the alignment problem between the 3D model and the color images is solved using the method originally designed by P. Viola (1995), and reports on the implementation.
Abstract: Rendering photorealistic virtual objects from their real images is one of the main research issues in mixed reality systems. We previously proposed the Eigen-Texture method (K. Nishino et al., 1999), a new rendering method for generating virtual images of objects from their real images, to deal with the problems posed by past work on image-based and model-based methods. The Eigen-Texture method samples appearances of a real object under various illumination and viewing conditions, and compresses them in the 2D coordinate system defined on the 3D model surface. However, our system had a serious limitation due to the alignment problem between the 3D model and the color images. We address this limitation by solving the alignment problem using the method originally designed by P. Viola (1995). The paper describes the method and reports on how we implement it.
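Viola's technique aligns two signals by maximizing their mutual information. The sketch below computes only that criterion from a joint histogram of two images; the pose optimizer and the model renderer that a full alignment would need are omitted, and the bin count is an arbitrary illustrative choice.

```python
# Sketch of the alignment criterion only: mutual information between two images,
# estimated from a joint histogram. Viola's method maximizes this quantity over the
# alignment parameters; the pose optimizer and the model renderer are omitted here,
# and the bin count is an arbitrary choice for illustration.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)             # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)             # marginal of img_b
    nz = pxy > 0                                    # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy check: an image shares much information with a slightly noisy copy of itself,
# and almost none with unrelated noise.
a = np.random.rand(64, 64)
print(mutual_information(a, a + 0.05 * np.random.rand(64, 64)))
print(mutual_information(a, np.random.rand(64, 64)))
```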

Book ChapterDOI
12 Sep 1999
TL;DR: A set of properties that allow mixed reality boundaries to be configured to support different styles of co-operative activity are introduced and two contrasting demonstrations, a performance and an office-door, that rely on different property configurations are described.
Abstract: Mixed reality boundaries establish transparent windows between physical and virtual spaces. We introduce a set of properties that allow such boundaries to be configured to support different styles of co-operative activity. These properties are grouped into three categories: permeability (properties of visibility, audibility and solidity); situation (properties of location, alignment, mobility and segmentation); and dynamics (properties of lifetime and configurability). We discuss how each of these properties can be technically realised. We also introduce the meta-properties of symmetry and representation. We then describe and compare two contrasting demonstrations, a performance and an office-door, that rely on different property configurations.


01 Jan 1999
TL;DR: This paper describes the context and history of VR in Eindhoven, presents the current set-up of the studio, discusses the impact of the technology on the design process, and outlines pedagogical issues in the studio work.
Abstract: Since 1991, Virtual Reality has been used in student projects in the Building Information Technology group. It started as an experimental tool to assess the impact of VR technology in design, using the environment of the associated Calibre Institute. The technology was further developed in Calibre to become an important presentation tool for assessing design variants and final design solutions. However, it was only sporadically used in student projects. A major shift occurred in 1997 with a number of student projects in which various computer technologies, including VR, were used throughout the design process. In 1998, the new Design Systems group started a design studio with the explicit aim of integrating VR into the whole design process. The teaching effort was combined with the research program that investigates VR as a design support environment. This has led to an increasing number of innovative student projects. The paper describes the context and history of VR in Eindhoven and presents the current set-up of the studio. It discusses the impact of the technology on the design process and outlines pedagogical issues in the studio work.

Book ChapterDOI
01 Jan 1999
TL;DR: This paper corrects the early taxonomy of interaction devices and actions introduced by Foley for screen-based interactive systems by adapting it to the real world and to virtual reality systems.
Abstract: The growing power of computing devices allows the representation of three-dimensional interactive virtual worlds. Interfaces with such a world must profit from our experience in interacting with the real world. This paper corrects the early taxonomy of interaction devices and actions introduced by Foley for screen-based interactive systems by adapting it to the real world and to virtual reality systems. Based on the derived taxonomy, the paper presents a model for a Virtual Reality system grounded in Systems Theory. The model can include both traditional event-based interaction input devices and continuous input devices. It is strongly device oriented, and it allows all currently possible input devices for Virtual Reality to be modeled mathematically. The model has been used for the implementation of a general input device library serving as an abstraction layer to a Virtual Reality system.
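To make the abstraction-layer idea concrete, here is a minimal sketch in which an event-based device and a continuous device are polled through the same interface; the class and method names are illustrative assumptions and are not the API of the library described in the paper.

```python
# Minimal sketch of a device abstraction layer that treats event-based and
# continuous input devices uniformly, in the spirit of the model described above.
# Class and method names are illustrative assumptions, not the paper's library API.
from abc import ABC, abstractmethod

class InputDevice(ABC):
    @abstractmethod
    def sample(self) -> dict:
        """Return the device's current state as a dictionary of channels."""

class ButtonDevice(InputDevice):
    """Event-based device: discrete state changes, e.g. a wand button."""
    def __init__(self):
        self.pressed = False
    def sample(self):
        return {"pressed": self.pressed}

class TrackerDevice(InputDevice):
    """Continuous device: a 6-DOF tracker sampled every frame."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.orientation = (0.0, 0.0, 0.0, 1.0)    # quaternion
    def sample(self):
        return {"position": self.position, "orientation": self.orientation}

def poll(devices):
    """The VR system reads every device through the same interface each frame."""
    return {name: dev.sample() for name, dev in devices.items()}

print(poll({"wand_button": ButtonDevice(), "head_tracker": TrackerDevice()}))
```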


Book
01 Jan 1999
TL;DR: Virtual Worlds: contents include Virtual Worlds (Jean-Claude Heudin), An Evolutionary Approach to Synthetic Biology: Zen and the Art of Creating Life (Thomas S. Ray), and Animated Artificial Life (Jeffrey Ventrella).
Abstract: Virtual Worlds. Contents: Virtual Worlds (Jean-Claude Heudin); An Evolutionary Approach to Synthetic Biology: Zen and the Art of Creating Life (Thomas S. Ray); Animated Artificial Life (Jeffrey Ventrella); Virtual Humans on Stage (Nadia Magnenat-Thalmann and Laurent Moccozet); Inhabited Virtual Worlds in Cyberspace (Bruce Damer, Stuart Gold, Karen Marcelo, Frank Revi); Virtual Great Barrier Reef (S. Refsland, T. Ojika, C. Loeffler, T. DeFanti); Virtual Worlds and Complex Systems (Yaneer Bar-Yam); Investigating the Complex with Virtual Soccer (Itsuki Noda, Ian Frank); Feeping Creatures (Rodney Berry); Art in Virtual Worlds: Cyberart (Olga Kisseleva).

Proceedings ArticleDOI
J. Bowskill, Mark Billinghurst, B. Crabtree, N. Dyer, A. Loffler
18 Oct 1999
TL;DR: The ideal of contextual communications is described, where contextual cues collected by the wearable computer are used to establish and enhance communication links.
Abstract: Wearable computers provide constant access to computing and communications resources. We describe how the computing power of wearables can enhance computer mediated communications, with a focus upon collaborative working. In particular we describe the ideal of contextual communications, where contextual cues collected by the wearable computer are used to establish and enhance communication links. We describe the hardware and software technology we have developed to support contextual communication and two experimental contextual communication systems. The first, a wearable conferencing tool, uses the user's physical location to select the online conference to which they connect. The second, MetaPark, is a mixed reality educational experiment which explores communications, data retrieval and recording, and navigation techniques within and across real and virtual environments.

Proceedings ArticleDOI
23 Feb 1999
TL;DR: The e-MUSE system (electronic multi user stage environment) is the underlying platform for networked communication, interface, rendering and display organisation that uses VRML to implement mixed reality environments in which visitors' exploration and experience of virtual space are connected to real space as well as to other participants' experiences.
Abstract: This paper presents our work and research findings on developing the concept of a multi-user shared environment for culture, performance, art and entertainment. It introduces artistic concepts of multi-user spaces, focusing on the notion of virtual space as a stage setting and on the behaviours and interactions of people within it. The VRML based demonstrator “Murmuring Fields” presents a mixed reality shared environment installation for several users based on a decentralised network architecture, supporting external participation across the internet and in shared physical space. The notion of user representation is replaced by the notion of user enactment, treating the concept of the avatar as an extended body of communication. “Murmuring Fields” presents a prototype of an information space where real space becomes the interface to the virtual, enabled by an invisible and intuitive full-body interface environment. Following our goals for user embodiment and group interaction, connecting real and virtual environments as a mixed reality, we have developed the e-MUSE system (electronic multi user stage environment). Derived from an artistic point of departure, the installation “Murmuring Fields”, e-MUSE is the underlying platform for networked communication, interface, rendering and display organisation. It uses VRML to implement mixed reality environments in which visitors' exploration and experience of virtual space are connected to real space as well as to other participants' experiences.


Journal ArticleDOI
01 Sep 1999
TL;DR: The Head Mounted Display (HMD) is discussed as a subset of Mixed Reality (MR) displays and a taxonomic framework for classifying MR systems is presented, in terms of not only the RV continuum but also the degree of centricity of the observer relative to a nominal viewpoint and the extent of control-display congruence.
Abstract: The Head Mounted Display (HMD) is discussed as a subset of Mixed Reality (MR) displays. A definition of MR is given, in terms of image mixtures along a Reality-Virtuality (RV) Continuum, including the subclasses of augmented reality (AR) and augmented virtuality (AV). In relation to actual task execution, the relative need for local guidance information versus more global planning and navigational information is discussed. A taxonomic framework for classifying MR systems is presented, in terms of not only the RV continuum, but also the degree of centricity of the observer relative to a nominal viewpoint and the extent of control-display congruence. Several practical examples of MR systems are presented, all from the domain of surgery, and for each a volume within the MR design space is proposed.

10 Nov 1999
TL;DR: This paper introduces two case studies of augmented reality (AR) systems that use see-through HMDs; the system configurations of both systems and the details of the registration algorithms implemented are described.
Abstract: This paper introduces two case studies of augmented reality (AR) systems which use see-through HMDs. The first case is a collaborative AR system called ARHockey that requires real-time interactive operations, moderate registration accuracy, and a relatively small registration area. The players can hit a virtual puck with physical mallets in a shared physical game field. The other case study is the MR Living Room, where participants can visually simulate the location of virtual furniture and articles in the physically half-equipped living room. The registration is more crucial in this case because of the requirement of visual simulation. As well as the system configurations of both systems, the details of the registration algorithms implemented are described.
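The registration algorithms themselves are only referred to here, not described, so the following is a generic illustration rather than the paper's method: a Kabsch/Procrustes fit that estimates the rigid transform between two sets of corresponding 3D points, one common primitive in vision- and tracker-based registration pipelines.

```python
# Generic illustration only, not the registration algorithms described in the paper:
# estimating the rigid transform between two sets of corresponding 3D points (a
# Kabsch/Procrustes fit), one common primitive in vision-based registration.
import numpy as np

def rigid_transform(src, dst):
    """Find R (3x3) and t (3,) minimizing ||R @ src_i + t - dst_i|| over all points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known rotation about the z-axis and a known translation
rng = np.random.default_rng(0)
src = rng.random((10, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.3])
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```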