
Showing papers on "Augmented reality published in 2010"


Journal ArticleDOI
TL;DR: The field of AR is described, including a brief definition and development history, the enabling technologies and their characteristics, and some known limitations regarding human factors in the use of AR systems that developers will need to overcome.
Abstract: We are on the verge of ubiquitously adopting Augmented Reality (AR) technologies to enhance our perception and help us see, hear, and feel our environments in new and enriched ways. AR will support us in fields such as education, maintenance, design and reconnaissance, to name but a few. This paper describes the field of AR, including a brief definition and development history, the enabling technologies and their characteristics. It surveys the state of the art by reviewing some recent applications of AR technology as well as some known limitations regarding human factors in the use of AR systems that developers will need to overcome.

1,526 citations


Proceedings ArticleDOI
22 Nov 2010
TL;DR: This paper provides a classification of perceptual issues in augmented reality into ones related to the environment, capturing, augmentation, display, and individual user differences, and describes current approaches to addressing these problems.
Abstract: This paper provides a classification of perceptual issues in augmented reality, created with a visual processing and interpretation pipeline in mind. We organize issues into ones related to the environment, capturing, augmentation, display, and individual user differences. We also illuminate issues associated with more recent platforms such as handhelds or projector-camera systems. Throughout, we describe current approaches to addressing these problems, and suggest directions for future research.

426 citations


Patent
05 Jan 2010
TL;DR: In this paper, a device can receive live video of a real-world, physical environment on a touch-sensitive surface; one or more objects can be identified in the live video, and an information layer related to those objects can be generated.
Abstract: A device can receive live video of a real-world, physical environment on a touch-sensitive surface. One or more objects can be identified in the live video. An information layer can be generated related to the objects. In some implementations, the information layer can include annotations made by a user through the touch-sensitive surface. The information layer and live video can be combined in a display of the device. Data can be received from one or more onboard sensors indicating that the device is in motion. The sensor data can be used to synchronize the live video and the information layer as the perspective of the video camera view changes due to the motion. The live video and information layer can be shared with other devices over a communication link.

425 citations


Proceedings ArticleDOI
03 Oct 2010
TL;DR: The interactions and algorithms unique to LightSpace are detailed, some initial observations of use are discussed, and future directions are suggested.
Abstract: Instrumented with multiple depth cameras and projectors, LightSpace is a small room installation designed to explore a variety of interactions and computational strategies related to interactive displays and the space that they inhabit. LightSpace cameras and projectors are calibrated to 3D real world coordinates, allowing for projection of graphics correctly onto any surface visible by both camera and projector. Selective projection of the depth camera data enables emulation of interactive displays on un-instrumented surfaces (such as a standard table or office desk), as well as facilitates mid-air interactions between and around these displays. For example, after performing multi-touch interactions on a virtual object on the tabletop, the user may transfer the object to another display by simultaneously touching the object and the destination display. Or the user may "pick up" the object by sweeping it into their hand, see it sitting in their hand as they walk over to an interactive wall display, and "drop" the object onto the wall by touching it with their other hand. We detail the interactions and algorithms unique to LightSpace, discuss some initial observations of use and suggest future directions.
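
A minimal sketch of the enabling step this abstract relies on, mapping 3D room coordinates through a projector's calibration so graphics land correctly on a visible surface. The intrinsics and pose below are illustrative assumptions, not values from LightSpace:

```python
import numpy as np

# Project a 3D world-space point into projector pixel coordinates given
# a calibrated intrinsic matrix K and extrinsic pose [R | t]. All values
# here are illustrative assumptions, not LightSpace's calibration.
K = np.array([[1400.0, 0.0, 640.0],   # focal lengths and principal point
              [0.0, 1400.0, 400.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                         # rotation: world -> projector frame
t = np.array([0.0, 0.0, 2.5])         # projector position offset, meters

def world_to_projector_pixel(p_world):
    """Map a 3D point in room coordinates to a 2D projector pixel."""
    p_proj = R @ p_world + t          # into the projector's frame
    u, v, w = K @ p_proj              # perspective projection
    return u / w, v / w               # normalize by depth

# A point on a tabletop 1.2 m in front of the projector:
print(world_to_projector_pixel(np.array([0.1, -0.3, 1.2])))
```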

398 citations


Journal ArticleDOI
TL;DR: This paper achieves interactive frame rates of up to 30 Hz for natural feature tracking from textured planar targets on current generation phones using an approach based on heavily modified state-of-the-art feature descriptors, namely SIFT and Ferns plus a template-matching-based tracker.
Abstract: In this paper, we present three techniques for 6DOF natural feature tracking in real time on mobile phones. We achieve interactive frame rates of up to 30 Hz for natural feature tracking from textured planar targets on current generation phones. We use an approach based on heavily modified state-of-the-art feature descriptors, namely SIFT and Ferns, plus a template-matching-based tracker. While SIFT is known to be a strong but computationally expensive feature descriptor, Ferns classification is fast but requires large amounts of memory. This renders both original designs unsuitable for mobile phones. We give detailed descriptions of how we modified both approaches to make them suitable for mobile phones. The template-based tracker further increases the performance and robustness of the SIFT- and Ferns-based approaches. We present evaluations on robustness and performance and discuss their appropriateness for Augmented Reality applications.
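
The paper's phone-optimized SIFT and Ferns variants are not reproduced here. As a structural stand-in, a sketch of planar-target detection with stock OpenCV ORB features and a RANSAC-estimated homography; the file name and thresholds are assumptions:

```python
import cv2
import numpy as np

# Planar-target detection with off-the-shelf primitives (ORB + RANSAC
# homography). This mirrors only the pipeline's shape; the paper's
# modified SIFT/Ferns descriptors are not reproduced.
orb = cv2.ORB_create(nfeatures=500)
target = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
kp_t, des_t = orb.detectAndCompute(target, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def locate_target(frame_gray):
    """Return the 3x3 homography mapping target -> frame, or None."""
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    if des_f is None:
        return None
    matches = matcher.match(des_t, des_f)
    if len(matches) < 12:                        # too few correspondences
        return None
    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None or int(inliers.sum()) < 10:     # reject weak estimates
        return None
    return H
```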

376 citations


Patent
18 Nov 2010
TL;DR: In this paper, a see-through head-mounted display (HMD) device, e.g., in the form of augmented reality glasses, allows a user to view a video display device and an associated augmented reality image.
Abstract: A see-through head-mounted display (HMD) device, e.g., in the form of augmented reality glasses, allows a user to view a video display device and an associated augmented reality image. In one approach, the augmented reality image is aligned with edges of the video display device to provide a larger, augmented viewing region. The HMD can include a camera which identifies the edges. The augmented reality image can be synchronized in time with content of the video display device. In another approach, the augmented reality image provides a virtual audience which accompanies a user in watching the video display device. In another approach, the augmented reality image includes a 3-D object which appears to emerge from the video display device, and which is rendered from the perspective of the user's location. In another approach, the augmented reality image can be rendered on a vertical or horizontal surface in a static location.

339 citations


Journal ArticleDOI
TL;DR: An augmented book called AR-Dehaes has been designed to provide 3D virtual models that help students to perform visualization tasks to promote the development of their spatial ability during a short remedial course.

315 citations


Patent
05 Nov 2010
TL;DR: In this article, the authors describe a direct user-interaction method for 3D augmented reality, which comprises displaying a 3D AR environment having a virtual object and real first and second objects controlled by a user, tracking the position of the objects in 3D using camera images, displaying the virtual object on the first object from the user's viewpoint, and enabling interaction between the second object and the virtual object when the first and second objects are touching.
Abstract: Techniques for user-interaction in augmented reality are described. In one example, a direct user-interaction method comprises displaying a 3D augmented reality environment having a virtual object and real first and second objects controlled by a user, tracking the position of the objects in 3D using camera images, displaying the virtual object on the first object from the user's viewpoint, and enabling interaction between the second object and the virtual object when the first and second objects are touching. In another example, an augmented reality system comprises a display device that shows an augmented reality environment having a virtual object and a real user's hand, a depth camera that captures depth images of the hand, and a processor. The processor receives the images, tracks the hand pose in six degrees of freedom, and enables interaction between the hand and the virtual object.

305 citations


Patent
15 Nov 2010
TL;DR: In this paper, the authors proposed a method for providing an augmented reality operations tool to a mobile client positioned in a building, which includes, with a server (660), receiving (720) from the client (642) an augmented reality request for building system equipment (612) managed by an energy management system (EMS) (620).
Abstract: A method (700) for providing an augmented reality operations tool to a mobile client (642) positioned in a building (604). The method (700) includes, with a server (660), receiving (720) from the client (642) an augmented reality request for building system equipment (612) managed by an energy management system (EMS) (620). The method (700) includes transmitting (740) a data request for the equipment (612) to the EMS (620) and receiving (750) building management data (634) for the equipment (612). The method (700) includes generating (760) an overlay (656) with an object created based on the building management data (634), which may be sensor data, diagnostic procedures, or the like. The overlay (656) is configured for concurrent display on a display screen (652) of the client (642) with a real-time image of the building equipment (612). The method (700) includes transmitting (770) the overlay (656) to the client (642).

277 citations


Journal ArticleDOI
TL;DR: A new procedure for static head-pose estimation and a new algorithm for visual 3-D tracking are presented and integrated into a novel real-time system for measuring the position and orientation of a driver's head.
Abstract: Driver distraction and inattention are prominent causes of automotive collisions. To enable driver-assistance systems to address these problems, we require new sensing approaches to infer a driver's focus of attention. In this paper, we present a new procedure for static head-pose estimation and a new algorithm for visual 3-D tracking. They are integrated into a novel real-time (30 fps) system for measuring the position and orientation of a driver's head. This system consists of three interconnected modules that detect the driver's head, provide initial estimates of the head's pose, and continuously track its position and orientation in six degrees of freedom. The head-detection module consists of an array of Haar-wavelet Adaboost cascades. The initial pose estimation module employs localized gradient orientation (LGO) histograms as input to support vector regressors (SVRs). The tracking module provides a fine estimate of the 3-D motion of the head using a new appearance-based particle filter for 3-D model tracking in an augmented reality environment. We describe our implementation that utilizes OpenGL-optimized graphics hardware to efficiently compute particle samples in real time. To demonstrate the suitability of this system for real driving situations, we provide a comprehensive evaluation with drivers of varying ages, races, and sexes spanning daytime and nighttime conditions. To quantitatively measure the accuracy of the system, we compare its estimation results to a marker-based cinematic motion-capture system installed in the automotive testbed.
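
Only the first of the three modules lends itself to a short sketch: head detection with OpenCV's bundled frontal-face Haar cascade, as a stand-in for the paper's custom array of Haar-wavelet Adaboost cascades. The LGO+SVR pose regression and the particle-filter tracker are not reproduced:

```python
import cv2

# Head-detection stage only, using OpenCV's stock frontal-face Haar
# cascade in place of the paper's custom cascade array.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_heads(frame_bgr):
    """Return (x, y, w, h) head candidates found in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)     # normalize illumination (e.g., night)
    return cascade.detectMultiScale(gray, scaleFactor=1.1,
                                    minNeighbors=5, minSize=(60, 60))
```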

273 citations


Proceedings ArticleDOI
10 Apr 2010
TL;DR: Touch Projector, a system that enables users to interact with remote screens through a live video image on their mobile device, is presented; participants achieved the highest performance with automatic zooming and temporary image freezing.
Abstract: In 1992, Tani et al. proposed remotely operating machines in a factory by manipulating a live video image on a computer screen. In this paper we revisit this metaphor and investigate its suitability for mobile use. We present Touch Projector, a system that enables users to interact with remote screens through a live video image on their mobile device. The handheld device tracks itself with respect to the surrounding displays. Touch on the video image is "projected" onto the target display in view, as if it had occurred there. This literal adaptation of Tani's idea, however, fails because handheld video does not offer enough stability and control to enable precise manipulation. We address this with a series of improvements, including zooming and freezing the video image. In a user study, participants selected targets and dragged targets between displays using the literal and three improved versions. We found that participants achieved the highest performance with automatic zooming and temporary image freezing.
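
The core "projection" step can be read as a homography transfer: a touch at camera-image pixel (u, v) maps onto the tracked display's own pixel grid. A sketch with a placeholder matrix; in the actual system the homography would come from the device's self-tracking:

```python
import numpy as np

# Transfer a touch point from the handheld's video image to the remote
# display via a homography H (phone view -> display pixels). H is a
# placeholder here; the real system derives it from tracking.
H = np.array([[1.8, 0.02, -120.0],
              [0.01, 1.8, -60.0],
              [1e-5, 2e-5, 1.0]])

def project_touch(u, v):
    """Map a touch in camera-image pixels to target-display pixels."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w               # perspective division

print(project_touch(320, 240))        # where the touch lands on the display
```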

Proceedings ArticleDOI
23 Sep 2010
TL;DR: An application framework under development to help others implement their own Augmented Reality applications is presented, along with future work highlighting the capabilities of the AR API.
Abstract: Virtual Reality is becoming more than a part of our everyday life, helping us to quickly identify elements of the environment or to better entertain us. The purpose of this paper is to present an application framework that we are developing in order to help others implement their own applications. The first section focuses on basic Augmented Reality concepts and the necessity of developing such a framework. The prototype presented in the second part of this paper demonstrates how our framework can be used to achieve our targeted application for Augmented Reality. The paper also contains some future work to highlight the capabilities of the AR API.

Patent
04 Oct 2010
TL;DR: In this article, a controller modulates the wavefront curvature of light emitted for each pixel so that the user can see the virtual imagery at a predetermined position relative to the surface regardless of changes in position of the user's eyes with respect to the display device.
Abstract: An augmented reality device for inserting virtual imagery into a user's view of their physical environment. The device comprises: a see-through display device including a wavefront modulator; a camera for imaging a surface in the physical environment; and a controller. The controller is configured for capturing an image of the surface; determining the virtual imagery to be displayed at a predetermined position relative to the surface; determining a position of the surface relative to the augmented reality device; generating an image based on the virtual imagery and on the position of the surface relative to the augmented reality device; and displaying the generated image via the display device. Based on pixel depth information, the controller modulates the wavefront curvature of light emitted for each pixel so that the user sees the virtual imagery at the predetermined position relative to the surface regardless of changes in position of the user's eyes with respect to the display device.

Patent
Mingoo Kang, Haengjoon Kang, Jongsoon Park, Jinyung Park, Jongchul Kim, Junho Park
03 Dec 2010
TL;DR: In this paper, an image display may be presented by augmented reality on a remote controller; a user authentication input may be received, and a determination may be made whether the received user authentication input matches previously stored user authentication information.
Abstract: An image display may be presented by augmented reality on a remote controller. This may include identifying an electronic device having playable content, receiving information regarding a locked status of the playable content of the identified electronic device, and displaying, on a screen, an object indicating a locked status when the playable content of the identified electronic device requires user authentication for playing the content. A user authentication input may be received and a determination may be made whether the received user authentication input matches previously stored user authentication information. The playable content may be released from the locked status when it is determined that the received user authentication input matches the previously stored user authentication information, and information relating to the released playable content may be displayed.

Journal ArticleDOI
TL;DR: A formal usability study is presented in order to explore participants' perceived 'sense of being there' and enjoyment while exposed to a virtual museum exhibition in relation to real-world visits and to determine whether a high level of presence results in enhanced enjoyment.
Abstract: The Augmented Representation of Cultural Objects (ARCO) system, developed as a part of an EU ICT project, provides museum curators with software and interface tools to develop web-based virtual museum exhibitions by integrating augmented reality (AR) and 3D computer graphics. ARCO technologies could also be deployed in order to implement educational kiosks placed in real-world museums. The main purpose of the system is to offer an entertaining, informative and enjoyable experience to virtual museum visitors. This paper presents a formal usability study that has been undertaken in order to explore participants' perceived 'sense of being there' and enjoyment while exposed to a virtual museum exhibition in relation to real-world visits. The virtual museum implemented was based on an existing gallery in the Victoria and Albert Museum, London, UK. It is of interest to determine whether a high level of presence results in enhanced enjoyment. After exposure to the system, participants completed standardized presence questionnaires related to the perceived realism of cultural artifacts referred to as AR objects' presence, as well as to participants' generic perceived presence in the virtual museum referred to as VR presence. The studies conducted indicate that previous experience with ICTs (Information and Communication Technologies) did not correlate with perceived AR objects' presence or VR presence while exposed to a virtual heritage environment. Enjoyment and both AR objects' presence and VR presence were found to be positively correlated. Therefore, a high level of perceived presence could be closely associated with satisfaction and gratification which contribute towards an appealing experience while interacting with a museum simulation system.

Patent
16 Dec 2010
TL;DR: In this article, a method and system that enhances a user's experience when using a near eye display device, such as a see-through display device or a head mounted display device is provided.
Abstract: A method and system that enhances a user's experience when using a near eye display device, such as a see-through display device or a head mounted display device is provided. The user's intent to interact with one or more objects in the scene is determined. An optimized image is generated for the user based on the user's intent. The optimized image is displayed to the user, via the see-through display device. The optimized image visually enhances the appearance of objects that the user intends to interact with in the scene and diminishes the appearance of objects that the user does not intend to interact with in the scene. The optimized image can visually enhance the appearance of the objects that increase the user's comprehension. The optimized image is displayed to the user, via the see-through display device.

Journal ArticleDOI
TL;DR: The feasibility studies showed that the augmented reality image overlay system could increase surgical instrument placement accuracy and reduce procedure time as a result of intuitive 3-D viewing.
Abstract: A 3-D augmented reality navigation system using autostereoscopic images was developed for MRI-guided surgery. The 3-D images are created by employing an animated autostereoscopic image, integral videography (IV), which provides geometrically accurate 3-D spatial images and reproduces motion parallax without using any supplementary eyeglasses or tracking devices. The spatially projected 3-D images are superimposed onto the surgical area and viewed via a half-silvered mirror. A fast and accurate spatial image registration method was developed for intraoperative IV image-guided therapy. Preliminary experiments showed that the total system error in patient-to-image registration was 0.90 ± 0.21 mm, and the procedure time for guiding a needle toward a target was shortened by 75%. An animal experiment was also conducted to evaluate the performance of the system. The feasibility studies showed that the augmented reality image overlay system could increase surgical instrument placement accuracy and reduce procedure time as a result of intuitive 3-D viewing.
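
The paper's fast IV-specific registration method is not detailed in the abstract. For orientation, a generic sketch of point-based rigid registration (the Kabsch/Procrustes solution) between corresponding patient-space and image-space fiducials:

```python
import numpy as np

# Standard point-based rigid registration (Kabsch): find rotation R and
# translation t minimizing ||R @ p_i + t - q_i|| over fiducial pairs.
# This is a generic sketch, not the paper's IV registration method.
def rigid_register(patient_pts, image_pts):
    """Both inputs: (N, 3) arrays of corresponding fiducial positions."""
    cp, cq = patient_pts.mean(axis=0), image_pts.mean(axis=0)
    H = (patient_pts - cp).T @ (image_pts - cq)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```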

Patent
Erick Tseng
02 Nov 2010
TL;DR: In this paper, a computer-implemented augmented reality method includes receiving one or more indications, entered on a mobile computing device by a user of the mobile computing device, of a distance range for determining items to display with an augmented reality application, the distance range representing geographic distance from a base point where the mobile computing device is located.
Abstract: A computer-implemented augmented reality method includes receiving one or more indications, entered on a mobile computing device by a user of the mobile computing device, of a distance range for determining items to display with an augmented reality application, the distance range representing geographic distance from a base point where the mobile computing device is located. The method also includes selecting, from items in a computer database, one or more items that are located within the distance range from the mobile computing device entered by the user, and providing data for representing labels for the selected one or more items on a visual display of the mobile computing device, the labels corresponding to the selected items, and the items corresponding to geographical features that are within the distance range as measured from the mobile computing device.
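
The selection step the claim describes reduces to filtering database items by great-circle distance from the device's base point. A sketch using the haversine formula; the item records are hypothetical:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def items_in_range(items, base_lat, base_lon, max_dist_m):
    """Keep items (hypothetical dicts with 'lat'/'lon') within the range."""
    return [it for it in items
            if haversine_m(base_lat, base_lon, it["lat"], it["lon"]) <= max_dist_m]
```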

Journal ArticleDOI
TL;DR: Focusing students on developing their own AR games provides the best of both virtual and physical worlds: a more portable solution that deeply connects young people to their own surroundings.
Abstract: Augmented Reality (AR) simulations superimpose a virtual overlay of data and interactions onto a real-world context. The simulation engine at the heart of this technology is built to afford elements of game play that support explorations and learning in students' natural context--their own community and surroundings. In one of the more recent games, TimeLab 2100, players role-play citizens of the early 22nd century when global climate change is out of control. Through AR, they see their community as it might be nearly one hundred years in the future. TimeLab and other similar AR games balance location specificity and portability--they are games that are tied to a location and games that are movable from place to place. Focusing students on developing their own AR games provides the best of both virtual and physical worlds: a more portable solution that deeply connects young people to their own surroundings. A series of initiatives has focused on technical and pedagogical solutions to supporting students authoring their own games.

Patent
23 Jun 2010
TL;DR: In this article, a mobile device is operative to change from a first operational mode to a second or third operational mode based on a user's natural motion gesture, and a determination is made as to whether the user's motion gesture places the mobile device in the second or three operational modes.
Abstract: A mobile device is operative to change from a first operational mode to a second or third operational mode based on a user's natural motion gesture. The first operational mode may include a voice input mode in which a user provides a voice input to the mobile device. After providing the voice input to the mobile device, the user then makes a natural motion gesture and a determination is made as to whether the natural motion gesture places the mobile device in the second or third operational mode. The second operational mode includes an augmented reality display mode in which the mobile device displays images recorded from a camera overlaid with computer-generated images corresponding to results output in response to the voice input. The third operational mode includes a reading display mode in which the mobile device displays, without augmented reality, results output in response to the voice input.

Patent
13 Oct 2010
TL;DR: In this paper, a 3D audio signal is generated based on sensor data collected from the actual room in which the listener is located and the actual position of the listener in the room.
Abstract: Techniques are provided for providing 3D audio, which may be used in augmented reality. A 3D audio signal may be generated based on sensor data collected from the actual room in which the listener is located and the actual position of the listener in the room. The 3D audio signal may include a number of components that are determined based on the collected sensor data and the listener's location. For example, a number of (virtual) sound paths between a virtual sound source and the listener may be determined. The sensor data may be used to estimate materials in the room, such that the effect that those materials would have on sound as it travels along the paths can be determined. In some embodiments, sensor data may be used to collect physical characteristics of the listener such that a suitable HRTF may be determined from a library of HRTFs.
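
One ingredient of such a signal, turning a virtual sound path into a delay and a gain, can be sketched as follows. The per-bounce absorption factor is an assumed stand-in for the material estimation the patent describes, and HRTF selection is omitted:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def path_delay_and_gain(path_points, absorption_per_bounce=0.7):
    """path_points: [(x, y, z), ...] from source, via reflections, to listener.
    The absorption factor is an assumed stand-in for estimated materials."""
    length = sum(math.dist(a, b) for a, b in zip(path_points, path_points[1:]))
    bounces = max(len(path_points) - 2, 0)        # interior points reflect
    delay_s = length / SPEED_OF_SOUND
    gain = (1.0 / max(length, 1.0)) * absorption_per_bounce ** bounces
    return delay_s, gain

# Source at origin, one wall bounce, listener at (4, 3, 0):
print(path_delay_and_gain([(0, 0, 0), (4, 0, 0), (4, 3, 0)]))
```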

Patent
08 Dec 2010
TL;DR: In this paper, a portable device held by a user is utilized to capture an image stream of a real environment and generate an augmented reality image stream that includes a virtual character, which is configured to demonstrate awareness of the user by adjusting its gaze so as to look in the direction of the portable device.
Abstract: Methods and systems for enabling an augmented reality character to maintain and exhibit awareness of an observer are provided. A portable device held by a user is utilized to capture an image stream of a real environment, and generate an augmented reality image stream which includes a virtual character. The augmented reality image stream is displayed on the portable device to the user. As the user maneuvers the portable device, its position and movement are continuously tracked. The virtual character is configured to demonstrate awareness of the user by, for example, adjusting its gaze so as to look in the direction of the portable device.
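
The awareness behavior amounts to a look-at computation: orient the character's head toward the tracked device position. A sketch under assumed coordinate conventions (y-up, -z forward), which the patent does not specify:

```python
import math

def look_at_angles(head_pos, device_pos):
    """Yaw/pitch (degrees) that aim a character's gaze at the device.
    Assumes a y-up, -z-forward convention (not from the patent)."""
    dx = device_pos[0] - head_pos[0]
    dy = device_pos[1] - head_pos[1]
    dz = device_pos[2] - head_pos[2]
    yaw = math.atan2(dx, -dz)                     # turn about the y-axis
    pitch = math.atan2(dy, math.hypot(dx, dz))    # tilt up or down
    return math.degrees(yaw), math.degrees(pitch)

# Device held up and to the right of the character's head:
print(look_at_angles((0.0, 1.6, 0.0), (0.5, 1.8, -2.0)))
```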

Journal ArticleDOI
TL;DR: The design of such a system is described, its technical accuracy is quantified, and a qualitative proof of its efficiency is provided through cadaver studies conducted by trauma surgeons, which examine the relevance of the system for surgical navigation in pedicle screw placement, vertebroplasty, and intramedullary nail locking procedures.
Abstract: The mobile C-arm is an essential tool in everyday trauma and orthopedics surgery. Minimally invasive solutions, based on X-ray imaging and coregistered external navigation, created a lot of interest within the surgical community and started to replace traditional open surgery for many procedures. These solutions usually increase accuracy and reduce trauma. In general, they introduce new hardware into the OR and add the line-of-sight constraints imposed by optical tracking systems. They thus impose radical changes to the surgical setup and overall procedure. We augment a commonly used mobile C-arm with a standard video camera and a double-mirror system allowing real-time fusion of optical and X-ray images. The video camera is mounted such that its optical center virtually coincides with the C-arm's X-ray source. After a one-time calibration routine, the acquired X-ray and optical images are coregistered. This paper describes the design of such a system, quantifies its technical accuracy, and provides a qualitative proof of its efficiency through cadaver studies conducted by trauma surgeons. In particular, it studies the relevance of this system for surgical navigation within pedicle screw placement, vertebroplasty, and intramedullary nail locking procedures. The image overlay provides an intuitive interface for surgical guidance with an accuracy of <1 mm, ideally with the use of only one single X-ray image. The new system is smoothly integrated into the clinical application with no additional hardware, especially for down-the-beam instrument guidance based on the anteroposterior oblique view, where the instrument axis is aligned with the X-ray source. Throughout all experiments, the camera-augmented mobile C-arm system proved to be an intuitive and robust guidance solution for selected clinical routines.

Proceedings ArticleDOI
20 Mar 2010
TL;DR: The maps generated with this technique are visually appealing, very accurate, and allow drift-free rotation tracking; they have applications in the creation of panoramic images for offline browsing, in visual enhancements through environment mapping, and in outdoor Augmented Reality on mobile phones.
Abstract: We present a novel method for the real-time creation and tracking of panoramic maps on mobile phones. The maps generated with this technique are visually appealing, very accurate, and allow drift-free rotation tracking. The method runs on mobile phones at 30 Hz and has applications in the creation of panoramic images for offline browsing, in visual enhancements through environment mapping, and in outdoor Augmented Reality on mobile phones.
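
For a pure-rotation camera, the core mapping step writes each frame pixel onto a cylindrical panorama using the current rotation estimate. A sketch with assumed intrinsics and map size; the paper's incremental tracking and refinement are omitted:

```python
import numpy as np

# Forward-map a camera pixel onto a cylindrical panorama, given the
# current rotation R. Intrinsics and map dimensions are assumptions.
K_inv = np.linalg.inv(np.array([[520.0, 0.0, 320.0],
                                [0.0, 520.0, 240.0],
                                [0.0, 0.0, 1.0]]))
MAP_W, MAP_H = 2048, 512

def pixel_to_map(u, v, R):
    """R: camera rotation (camera -> panorama frame). Returns (col, row)."""
    ray = R @ (K_inv @ np.array([u, v, 1.0]))     # viewing ray, world frame
    theta = np.arctan2(ray[0], ray[2])            # azimuth around cylinder
    h = ray[1] / np.hypot(ray[0], ray[2])         # height on unit cylinder
    col = (theta + np.pi) / (2.0 * np.pi) * MAP_W # wrap azimuth to [0, W)
    row = (h + 1.0) * 0.5 * MAP_H                 # +/-1 height band -> [0, H)
    return col, row
```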

Journal ArticleDOI
26 Jul 2010
TL;DR: It is found that while wearing the HMD can cause some degree of distance underestimation, this effect depends on the measurement protocol used, and photo-based presentation did not help to improve distance perception in a large-screen immersive display system.
Abstract: We conducted two experiments that compared distance perception in real and virtual environments in six visual presentation methods using either timed imagined walking or direct blindfolded walking, while controlling for several other factors that could potentially impact distance perception. Our presentation conditions included unencumbered real world, real world seen through an HMD, virtual world seen through an HMD, augmented reality seen through an HMD, virtual world seen on multiple, large immersive screens, and photo-based presentation of the real world seen on multiple, large immersive screens. We found that there was a similar degree of underestimation of distance in the HMD and large-screen presentations of virtual environments. We also found that while wearing the HMD can cause some degree of distance underestimation, this effect depends on the measurement protocol used. Finally, we found that photo-based presentation did not help to improve distance perception in a large-screen immersive display system. The discussion focuses on points of similarity and difference with previous work on distance estimation in real and virtual environments.

Patent
30 Nov 2010
TL;DR: In this paper, a system for continuously monitoring a user's motion and for continuously providing realtime visual physical performance information to the user while the user is moving to enable the user to detect physical performance constructs that expose a user to increased risk of injury or that reduce the user's physical performance.
Abstract: A system for continuously monitoring a user's motion and for continuously providing realtime visual physical performance information to the user while the user is moving to enable the user to detect physical performance constructs that expose the user to increased risk of injury or that reduce the user's physical performance. The system includes multiple passive controllers 100 A-F for measuring the user's motion, a computing device 102 for communicating with wearable display glasses 120 and the passive controllers 100 A-F to provide realtime physical performance feedback to the user. The computing device 102 also transmits physical performance constructs to the wearable display glasses 120 to enable the user to determine if his or her movement can cause injury or reduce physical performance.

Proceedings ArticleDOI
Stephan Gammeter, Alexander Gassmann, Lukas Bossard, Till Quack, Luc Van Gool
13 Jun 2010
TL;DR: This is the first system that demonstrates a complete pipeline for augmented reality on mobile devices, with visual object recognition scaled to millions of objects combined with real-time object tracking; a method to speed up geometric verification of feature matches is also introduced.
Abstract: In this paper we present a system for mobile augmented reality (AR) based on visual recognition. We split the tasks of recognizing an object and tracking it on the user's screen into a server-side and a client-side task, respectively. The capabilities of this hybrid client-server approach are demonstrated with a prototype application on the Android platform, which is able to augment both stationary (landmarks) and non-stationary (media covers) objects. The database on the server side consists of hundreds of thousands of landmarks, crawled using a state-of-the-art mining method for community photo collections. In addition to the landmark images, we also integrate a database of media covers with millions of items. Retrieval from these databases is done using vocabularies of local visual features. In order to fulfill the real-time constraints for AR applications, we introduce a method to speed up geometric verification of feature matches. The client-side tracking of recognized objects builds on a multi-modal combination of visual features and sensor measurements. Here, we also introduce a motion estimation method, which is more efficient and precise than similar approaches. To the best of our knowledge, this is the first system that demonstrates a complete pipeline for augmented reality on mobile devices with visual object recognition scaled to millions of objects combined with real-time object tracking.
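
The baseline form of the geometric verification mentioned above is a RANSAC model fit with an inlier-count test; the paper's specific speed-up is not reproduced. A sketch with assumed thresholds:

```python
import cv2
import numpy as np

# Accept a candidate database image only if enough tentative feature
# matches agree on a single homography. Thresholds are assumptions.
def verify_candidate(pts_query, pts_candidate, min_inliers=15):
    """Inputs: (N, 2) float32 arrays of matched keypoint coordinates."""
    if len(pts_query) < 4:                        # homography needs 4 points
        return False
    H, inliers = cv2.findHomography(pts_query.reshape(-1, 1, 2),
                                    pts_candidate.reshape(-1, 1, 2),
                                    cv2.RANSAC, 3.0)
    return H is not None and int(inliers.sum()) >= min_inliers
```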

Journal ArticleDOI
01 Dec 2010
TL;DR: In this paper, the authors present and explain the usage of AR technology in what can be named the Augmented Reality Student Card (ARSC) for serving the education field, using single static markers combined in one card for assigning different objects, leaving the choice to the computer application and minimizing the tracking process.
Abstract: Augmented Reality (AR) is the technology of adding virtual objects to real scenes, enabling the addition of missing information to real life. As the lack of resources is a problem that AR can address, this paper presents and explains the usage of AR technology in what can be named the Augmented Reality Student Card (ARSC) for serving the education field. ARSC uses single static markers combined in one card for assigning different objects, leaving the choice to the computer application and minimizing the tracking process. ARSC is designed to be a useful, low-cost solution for serving the education field. ARSC represents any lesson in a 3D format that helps students visualize the facts, interact with theories and deal with the information in a totally new, effective and interactive way. ARSC can be used in offline, online and game applications with seven markers, four of which are used as a joystick game controller. One of the novelties of this paper is that full experimental tests were made on the ARTag marker set, sorting the markers according to their efficiency. The results of the tests are used in this research to choose the most efficient markers for ARSC and can be used for further research. The experimental work presented in this paper also shows the constraints on marker creation for an AR application. Due to the need to support both online and offline applications, toolkits and libraries were merged, as presented in this paper. ARSC was examined by a number of students of both genders, aged between 10 and 17 years, and was found to have great acceptance among them.
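
ARTag itself is not bundled with common libraries; as a close relative, a sketch of square-marker detection with OpenCV's ArUco module (assuming the OpenCV 4.7+ API), returning the marker IDs visible in a frame:

```python
import cv2

# Detect square fiducial markers using OpenCV's ArUco module, a close
# relative of the ARTag markers evaluated in the paper (requires the
# OpenCV >= 4.7 ArucoDetector API).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_50)
detector = cv2.aruco.ArucoDetector(dictionary)

def detect_marker_ids(frame_bgr):
    """Return the list of marker IDs found in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    return [] if ids is None else [int(i) for i in ids.flatten()]
```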

Patent
21 Jan 2010
TL;DR: In this paper, a method and a corresponding system are described for designing interior and exterior environments and for selling real world goods that appear as virtual objects within an augmented reality-generated design.
Abstract: Described is a method and a corresponding system for designing interior and exterior environments and for selling real world goods that appear as virtual objects within an augmented reality-generated design. The method includes the steps of generating a digitized still or moving image of a real world environment; providing in a programmable computer a database of virtual objects; parsing the image with a programmable computer to determine if the image contains any real world markers corresponding to the virtual objects in the database; retrieving corresponding virtual objects from the database and superimposing the images contained in the virtual objects in registration upon their corresponding real world markers in the image; and enabling users to retrieve the attributes of the real world objects depicted in the augmented reality image.

Proceedings ArticleDOI
25 Mar 2010
TL;DR: In this article, the authors present early work in experimenting with desktop augmented reality (AR) for rehabilitation and discuss the development of rehabilitation prototypes using available AR libraries and express their thoughts on the potential of AR technology.
Abstract: Stroke is the number one cause of severe physical disability in the UK. Recent studies have shown that technologies such as virtual reality and imaging can provide an engaging and motivating tool for physical rehabilitation. In this paper we summarize previous work in our group using virtual reality technology and webcam-based games. We then present early work we are conducting in experimenting with desktop augmented reality (AR) for rehabilitation. AR allows the user to use real objects to interact with computer-generated environments. Markers attached to the real objects enable the system (via a webcam) to track the position and orientation of each object as it is moved. The system can then augment the captured image of the real environment with computer-generated graphics to present a variety of game or task-driven scenarios to the user. We discuss the development of rehabilitation prototypes using available AR libraries and express our thoughts on the potential of AR technology.