
Showing papers presented at "Virtual Environments, Human-Computer Interfaces and Measurement Systems in 2011"


Proceedings ArticleDOI
20 Oct 2011
TL;DR: This paper presents a robust and intelligent scheme for driver drowsiness detection employing the fusion of eye closure and yawning detection methods; experiments prove the high efficiency of the proposed idea.
Abstract: Driver drowsiness is a major factor in most driving accidents. In this paper we present a robust and intelligent scheme for driver drowsiness detection employing the fusion of eye closure and yawning detection methods. In this approach, the driver's facial appearance is captured via a camera installed in the car. In the first step, the face region is detected and tracked in the captured video sequence utilizing computer vision techniques. Next, the eye and mouth areas are extracted from the face and studied to find signs of driver fatigue. Finally, in a fusion phase the driver state is determined, and a warning message is sent to the driver if drowsiness is detected. Our experiments prove the high efficiency of the proposed idea.
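The fusion phase can be sketched as a weighted combination of the two cues over a sliding window of frames. All constants below (window size, weights, threshold) are illustrative assumptions, not values from the paper:

```python
# Hypothetical fusion of eye-closure and yawning cues into a drowsiness
# decision. Window size, weights, and threshold are illustrative only.

def drowsiness_score(eye_closed, yawning, window=30):
    """eye_closed, yawning: per-frame 0/1 observations (most recent last)."""
    perclos = sum(eye_closed[-window:]) / window   # eye-closure ratio
    yawn_rate = sum(yawning[-window:]) / window    # yawning ratio
    # Eye closure is weighted as the stronger cue.
    return 0.7 * perclos + 0.3 * yawn_rate

def is_drowsy(eye_closed, yawning, threshold=0.4):
    """True would trigger the warning message to the driver."""
    return drowsiness_score(eye_closed, yawning) >= threshold
```

A decision rule of this shape lets each detector fail occasionally without immediately triggering a false alarm.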

76 citations


Proceedings ArticleDOI
20 Oct 2011
TL;DR: An experimental comparison of 3D rotation techniques shows no significant performance or accuracy difference among Bell's Trackball, Shoemake's Arcball, and the Two-axis Valuator method.
Abstract: In this paper, we present an experimental comparison of 3D rotation techniques. In all techniques, the third degree of freedom is controlled by the mouse wheel. The investigated techniques are Bell's Trackball, Shoemake's Arcball, and the Two-axis Valuator method. The results from a pilot study showed no performance or accuracy difference among these three techniques. We did observe minor differences in an experiment with more participants and trials, though these differences were not significant. Also, from questionnaires, we found that most users considered the mouse wheel helpful for completing the tasks.

21 citations


Proceedings ArticleDOI
20 Oct 2011
TL;DR: A software tool that provides an easy way to create augmented reality presentations, allowing users to author their own presentations without the help of a programmer, is presented.
Abstract: The article presents a software tool that provides an easy way to create augmented reality presentations. This tool allows users to author their own presentations without the help of a programmer. The tool is supported by a web portal that provides account creation and a database to store the presentations created by the users.

21 citations


Proceedings ArticleDOI
20 Oct 2011
TL;DR: This work proposes a framework for the development of exergames based on an adaptive, context-aware game engine that demands the appropriate amount of exertion from the user.
Abstract: In this work we propose a framework for the development of an adaptive game engine for exergames. In our approach, exergames are based on an adaptive, context-aware game engine to demand the appropriate amount of exertion from the user. We identify the player's contextual and physiological sensory variables and how they can be used to affect the game dynamics to prompt a physiological response from the player. We present an initial implementation of a bike-based game designed using the pattern proposed on our framework.

18 citations


Proceedings ArticleDOI
20 Oct 2011
TL;DR: This paper builds a human model for micro-sensor motion capture (MMocap) and reconstructs 3D human motion from MMocap data in real time, resulting in real-time 3D motion animation.
Abstract: With the rapid advancement of micro-sensor motion capture, human modeling and motion reconstruction in real time have become more and more important. The human model must be accurate enough to represent motions but also simple enough for real-time applications. In this paper, we build a human model for micro-sensor motion capture (MMocap) and reconstruct 3D human motion using data from MMocap in real time. The human model is composed of bones and joints, driven by motion parameters from micro-sensor motion capture, resulting in real-time 3D motion animation. The motion parameters include the quaternion and position of each bone. Quaternions are used to represent the orientation of bones. Positions are calculated from forward kinematics. The experimental results show that the proposed human model offers good fidelity and low delay for real-time micro-sensor data-driven motion capture.
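The position computation (orientation quaternions per bone, positions by forward kinematics) can be sketched as below. The sketch assumes each sensor reports a global unit quaternion per bone and that each bone extends along its local x-axis; the bone lengths and orientations in the example are illustrative:

```python
# Sketch of quaternion-driven forward kinematics: joint positions are
# accumulated from the root by rotating each bone vector by its
# orientation quaternion (w, x, y, z). Assumes global per-bone
# orientations, as micro-sensor capture typically delivers.

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q, via v' = v + 2u x (u x v + w v)."""
    w, x, y, z = q
    u = (x, y, z)
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    t = tuple(c + w * vi for c, vi in zip(cross(u, v), v))
    return tuple(vi + 2 * c for vi, c in zip(v, cross(u, t)))

def forward_kinematics(bones, root=(0.0, 0.0, 0.0)):
    """bones: list of (quaternion, length) pairs; each bone extends along
    +x in its own frame. Returns the chain of joint positions."""
    positions = [root]
    for q, length in bones:
        offset = quat_rotate(q, (length, 0.0, 0.0))
        positions.append(tuple(p + o for p, o in zip(positions[-1], offset)))
    return positions
```

With an identity quaternion the bone points along +x; a 90-degree rotation about z bends the next bone along +y.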

17 citations


Proceedings ArticleDOI
20 Oct 2011
TL;DR: A virtual environment for the creation of synthetic wildfire smoke frame sequences, able to simulate a distant smoke plume and to integrate it with an existing frame sequence, is proposed; the results are shown to be comparable to real situations.
Abstract: In this paper we propose a virtual environment for the creation of synthetic wildfire smoke frame sequences, able to simulate a distant smoke plume and to integrate it with an existing frame sequence. This work provides a virtual tool to measure the accuracy of existing image-based wildfire smoke detection systems without the need to produce real smoke and fires in the environments. The proposed algorithm uses a cellular model driven by the rules of propagation and collision to simulate the basic physical principles of advection, diffusion, buoyancy, and the response to external forces (such as the wind). Adverse environmental conditions like fog and low-light are also simulated, together with the introduction of noise in order to reproduce acquisition defects. The resulting frame sequences are then evaluated by using a smoke detection system, which shows that our method for virtual smoke simulation gives results comparable to real situations. The extracted data can then be used to increase the performance of smoke detection systems when few real data are available.
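The propagation rules can be illustrated with a toy cellular update in which each cell's smoke density rises under buoyancy and partly diffuses to its neighbours. The coefficients below are invented for illustration; the paper's full model additionally handles wind, collision, fog, low light, and acquisition noise:

```python
# Toy cellular smoke update: per step, most of a cell's density moves one
# row up (buoyancy) and the remainder diffuses to the 4-neighbourhood.
# Density leaving the grid boundary is simply discarded.

def propagate(grid, diffusion=0.1):
    rows, cols = len(grid), len(grid[0])
    new = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            d = grid[r][c]
            # buoyancy: the retained fraction rises one cell (row 0 = top)
            new[max(r - 1, 0)][c] += d * (1 - diffusion)
            # diffusion: the rest is shared with the 4 neighbours
            share = d * diffusion / 4
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    new[rr][cc] += share
    return new
```

Iterating this update on a seeded grid produces a rising, spreading plume that can then be composited onto a background frame.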

12 citations


Proceedings ArticleDOI
20 Oct 2011
TL;DR: A real-time and automatic method to analyze a vehicle operator's drowsiness using a CCD camera, together with an online progressive haptic alerting scheme to warn the drowsy operator and help prevent major road accidents.
Abstract: Sleepiness and fatigued driving are among the major causes of roadway accidents. Eye closure and blink frequency are two of the principal indicators of driver fatigue. In this research, we monitored the operator's eye closure over a period of time with the purpose of alerting her/him in a non-obtrusive manner. We propose a real-time and automatic method to analyze a vehicle operator's drowsiness using a CCD camera. Our system delivered an accuracy rate of 96% in eye-state recognition, which we leveraged to deduce multi-level driver drowsiness states. We considered an online progressive haptic alerting scheme (similar to a silent mobile vibration alert) to warn the drowsy operator in order to prevent major road accidents.

8 citations


Proceedings ArticleDOI
20 Oct 2011
TL;DR: The implementation of a portable, easy-to-install and easy-to-use HCI tool to track the ecological footprint of the institution, provide indications of potential fields of ecological improvement, suggest actions for achieving these improvements, and permit following their effects.
Abstract: This work deals with a feedback system based on the ecological footprint calculation for improving the sustainability-related behavior of universities. It is based on the potential for improvement that can be unleashed by a detailed knowledge of the environmental performance of the institution. The system integrates data managed by different administrative units at the university (economic data, energy consumption, built area, …) and collected by different sensors. In particular, this work concentrates on the implementation of a portable, easy-to-install and easy-to-use HCI tool. The objectives of the tool are to track the ecological footprint of the institution, provide indications of potential fields of ecological improvement, suggest actions for achieving these improvements, and permit following their effects. It calculates the environmental impact in terms of a single number (the ecological footprint) to make things simpler for users, but it also provides graphical insights into more detailed data, which are quite valuable for decision makers. The interactivity of the graphical information permits quantifying the effect of hypothetical changes and offers a list of possible actions to reduce the impact of a selected factor. Finally, the tool allows the comparison of results obtained by the same organization over different time periods as well as comparisons among universities.

7 citations


Proceedings ArticleDOI
20 Oct 2011
TL;DR: A human-computer interface is presented that allows users to manipulate 3D objects within a virtual space by simultaneously using one hand to perform gestures and the other hand to command a physical controller.
Abstract: The control of virtual video game environments through body motion has recently attracted great interest from academic and industry research groups, since it enables many new interactive experiences. With the recent growth in the availability of affordable 3D camera technology, researchers have increasingly investigated the control of games through body and hand gestures. In addition, the dropping cost of MEMS technology has increased the popularity of physical controllers incorporating accelerometers, gyroscopes, and other sensors. Existing work, however, has yet to combine the strengths of a 3D camera with those of a physical game controller to provide six degrees of freedom and one-to-one correspondence between the real-world 3D space and the virtual environment. In this paper, a human-computer interface is presented that allows users to manipulate 3D objects within a virtual space by simultaneously using one hand to perform gestures and the other hand to command a physical controller. This is accomplished by processing the data returned from a custom 3D depth camera to obtain hand gestures along with the absolute position of the controller-wielding hand. Through the use of a composite transformation matrix, this position data is fused with the orientation data measured by the instruments within the controller. The matrix is then applied to a 3D object within a virtual environment in real time. Two prototype environments that combine hand gestures and a physical controller are used to evaluate this new method of interactive gaming.
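The composite transformation can be sketched as a homogeneous 4x4 matrix assembled from the controller's orientation (a 3x3 rotation) and the camera-tracked hand position, then applied to each vertex of the virtual object. The rotation and translation values in the example are illustrative:

```python
# Sketch of the sensor-fusion step: a homogeneous transform built from
# the controller's orientation matrix (row-major 3x3) and the absolute
# hand position from the depth camera, applied to a 3D point.

def compose(rotation, position):
    """Assemble a 4x4 matrix [R | t; 0 0 0 1] from rotation and position."""
    m = [row[:] + [p] for row, p in zip(rotation, position)]
    m.append([0.0, 0.0, 0.0, 1.0])
    return m

def transform(m, point):
    """Apply the homogeneous transform to a 3D point (column-vector convention)."""
    v = (point[0], point[1], point[2], 1.0)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))
```

Rebuilding this matrix every frame from the latest camera and inertial readings is what yields the one-to-one correspondence between real and virtual space.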

6 citations


Proceedings ArticleDOI
20 Oct 2011
TL;DR: A virtual 3D home in Second Life that mimics the look of a physical home; initial experimentation shows that the proposed Second Life based home automation system is appealing and useful to the user.
Abstract: In this paper we propose the design and development of a prototype system to monitor and control a physical home and to bridge the interaction gap between the virtual and real-world device control mechanisms. We created a virtual 3D home in Second Life that mimics the look of a physical home. In the physical home environment, different devices and sensors are connected in order to ensure a safe and automated home. Any event that occurs in the physical space of the smart home is then synchronized with the virtual environment. More importantly, the virtual home interface provides the option to control the physical smart devices. By using the Second Life virtual interface, the homeowner has a better view for monitoring and controlling the home appliances. In our initial experimentation, we found that the proposed approach of Second Life based home automation is appealing and useful to the user.

5 citations


Proceedings ArticleDOI
20 Oct 2011
TL;DR: Several popular corner detectors were evaluated on imagery containing corners with a variety of internal angles, finding that some of these performance differences are statistically significant, allowing recommendations to be made regarding which detectors should be used when a problem has corners of known internal angles.
Abstract: Several popular corner detectors were evaluated on imagery containing corners with a variety of internal angles. Even in a noise-free environment, differences in performance were found. A null hypothesis approach was taken in evaluating whether these performance differences were significant, taking into account correctly the size of the dataset and the number of discrepancies. It was found that some of these performance differences are statistically significant, allowing recommendations to be made regarding which detectors should be used when a problem has corners of known internal angles.
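The null-hypothesis evaluation can be sketched with a one-sided binomial tail test: given n test corners and k discrepancies, how likely is at least that many discrepancies under an assumed chance error rate? The error rate p0 and significance level alpha below are illustrative, not the paper's values:

```python
# Sketch of a null-hypothesis test on detector discrepancies. We assume
# each corner is misdetected independently with probability p0 under the
# null, and reject the null when the observed tail probability is small.
from math import comb

def p_at_least(k, n, p0):
    """P(X >= k) for X ~ Binomial(n, p0)."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

def significant(k, n, p0=0.05, alpha=0.05):
    """Reject the null 'errors occur at chance rate p0' if the tail is small."""
    return p_at_least(k, n, p0) < alpha
```

Accounting for both the dataset size n and the discrepancy count k, as the abstract stresses, is exactly what keeps small performance gaps from being over-interpreted.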

Proceedings ArticleDOI
20 Oct 2011
TL;DR: An informed virtual environment (an environment including knowledge-based models and providing action/perception coupling) for fluvial navigation training, which adds an automatic guide to a ship-driving simulator by displaying multimodal aids adapted to the trainees' perception.
Abstract: This paper presents an informed virtual environment (an environment including knowledge-based models and providing action/perception coupling) for fluvial navigation training. We add an automatic guide to a ship-driving simulator by displaying multimodal aids adapted to human perception for trainees. To this end, a decision-making module determines the most appropriate aids according to heterogeneous data coming from observations of the learner (his/her mistakes, the risks taken, his/her state determined using physiological sensors, etc.). The Dempster-Shafer theory is used to merge these uncertain data. The purpose of the whole system is to manage the training almost autonomously in order to relieve trainers from controlling the whole training simulation. We intend to demonstrate the relevance of taking the learner's state into account, and of fusing heterogeneous data with the Dempster-Shafer theory, when deciding how best to guide the learner. First results, obtained with a predefined set of data, show that our decision-making module is able to propose guidance well adapted to the trainees, even in complex situations with uncertain data.
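Dempster's rule of combination, which the decision-making module uses to merge uncertain observations, can be sketched as follows. The mass functions and the learner-state hypotheses ("stressed", "calm") are invented for illustration:

```python
# Sketch of Dempster's rule of combination. A mass function maps
# frozenset-valued hypotheses to belief mass; two sources are merged by
# multiplying masses on intersecting sets and renormalizing by the
# non-conflicting mass.

def combine(m1, m2):
    raw = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to incompatible sets
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    norm = 1.0 - conflict
    return {s: mass / norm for s, mass in raw.items()}
```

Two weakly committed sources that agree reinforce each other: combining them concentrates mass on the shared hypothesis.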

Proceedings ArticleDOI
20 Oct 2011
TL;DR: An enhanced version of Mean-Shift face tracking using a Local Binary Pattern (LBP) histogram is proposed that outperforms both the Mean-Shift and LBP histogram methods.
Abstract: Face tracking is widely used in many applications. The Mean-Shift method is one of the most popular face tracking algorithms. In this article, we propose an enhanced version of Mean-Shift face tracking using a Local Binary Pattern (LBP) histogram. Simulation results demonstrate that the proposed joint method outperforms both the Mean-Shift and LBP histogram methods.
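The LBP histogram that augments the Mean-Shift tracker can be sketched as below. A nested list of grayscale values stands in for the face region, and the 8-neighbour, 256-bin variant shown is one common formulation rather than necessarily the paper's exact one:

```python
# Sketch of an LBP histogram: each interior pixel is encoded by comparing
# its 8 neighbours against it (bit set when neighbour >= centre), and the
# resulting 8-bit codes are accumulated into a 256-bin histogram.

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, r, c):
    centre = img[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(OFFSETS):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

Such a histogram captures local texture, which is what gives the joint method robustness where a plain colour histogram drifts.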

Proceedings ArticleDOI
20 Oct 2011
TL;DR: It is argued that by using the mobile Second Life virtual interface the homeowner has a better view for monitoring and controlling home appliances, and can change the states of real objects through virtual object interaction.
Abstract: In this paper we propose the development details of a mobile client that allows virtual 3D avatar interaction and virtual 3D annotation control in Second Life. We established adaptation-based virtual rendering of the Second Life client and encoded the real-time frames into a video stream suitable for mobile client rendering. Additionally, we re-mapped the touch-based interaction of the user and fed it to the Second Life client in the form of keyboard and mouse interactions. As a proof of concept, we annotated a virtual environment object in Second Life and linked it with a real object by using X10 controllers. Further, we captured the mobile interaction of the user and provided a controller interface to change the states of the real object through virtual object interaction. We argue that by using the mobile Second Life virtual interface the homeowner has a better view for monitoring and controlling the home appliances. We present an illustration of the prototype system and show its application in a smart environment setup.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: By superimposing a position-based force field over traditional haptic feedback, a new impedance tuning (IT) simulation strategy is proposed that aims to improve the haptic system's stability and performance.
Abstract: Haptic rendering provides users with the sense of touch while interacting with simulated objects in a virtual environment. The maximum achievable object stiffness is critical for realistic haptic rendering of rigid objects, and is constrained by several inevitable "non-idealities" in the haptic system as well as by the behavior of the human operator, i.e. the mechanical impedance of the human arm. By superimposing a position-based force field over traditional haptic feedback, a new impedance tuning (IT) simulation strategy is proposed, which aims to improve the haptic system's stability and performance. A comparison experiment carried out with six individual subjects on the Delta Haptic Device validates the proposed method.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: An operational setup for carrying out robotics experiments using a mixed-reality approach, which aims to be both a useful educational tool for teaching robotics and a tool to help in the development and study of embodied evolution.
Abstract: This work presents an operational setup for carrying out experiments in robotics using a mixed-reality approach. The objective of this setup is twofold: on one hand it aims to be a useful educational tool for teaching robotics, and on the other it is a tool to help in the development and study of embodied evolution. The design represents an intermediate system between a real and a simulated scenario, so as to work with real robots while making the configuration and modification of the remaining elements of the experiments easier and simpler. The present operational setup consists mainly of a video projector that represents the virtual elements by projecting them over the arena of the experiment, a zenithal camera that captures the state of real elements moving on the arena, and a computer that monitors and controls the scenario. As a result, both real and virtual elements interact on the arena, and their state is updated based on the world rules of the experiment and on the state and actions of other elements. The scheme implemented in the main computer to control and structure this environment is based on one proposed in a previous work for the definition of scenarios in a simulation environment named Waspbed, which was designed for the study of simulated coevolution processes in multiagent systems. Apart from the advantages it provides in robotics research when large testing periods are required, this setup is used to familiarize engineering students with robotics and its associated fields, such as artificial vision, evolutionary robotics, and communication protocols, in a very simple, quick and cheap way that otherwise would not be feasible given the academic constraints on time and resources.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: This work aims to provide a mechanism for facilitating users in determining their own preferred information/interaction profiles across environments with many smart objects, through cross-platform applications on generally accepted smart mobile devices.
Abstract: In dynamic environments, interactions with smart objects can be complex when accomplishing particular activities or procedures. Ideally, interaction paradigms for smart objects should naturally adhere to the ethos of ubiquitous computing; however, this has not yet been fully achieved due to the lack of general and intuitive mechanisms for dealing with heterogeneous objects. The major challenge in ensuring that proper actions are carried out on different smart objects or entities lies in how to apply reasonable and intuitive monitoring and customization operations. In this paper, we present work that aims to provide a mechanism for facilitating users in determining their own preferred information/interaction profiles across environments with many smart objects, through cross-platform applications on widely accepted smart mobile devices. This mechanism is validated with case studies demonstrating the management of real-time profiling of sensed and assigned information about objects, including specifying information thresholds for profile models and reviewing how well the observed status satisfies those models.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: This paper evaluates two different interfaces for navigation in a 3D environment that involves using a mouse and a gaming style interface that uses both mouse and keyboard (i.e. the ‘WASD' keys) for view direction and movement respectively.
Abstract: This paper evaluates two different interfaces for navigation in a 3D environment. The first is a ‘Click-to-Move’ style interface that involves using a mouse. The second is a gaming style interface that uses both mouse and keyboard (i.e. the ‘WASD’ keys) for view direction and movement respectively. In the user study, participants were asked to navigate a supermarket environment and collect a specific subset of items. Results revealed significant differences only for some of the sub-measures. Yet, some revealing observations were made regarding user behavior and interaction with the user interface.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: A model-based control design strategy for trajectory tracking in internet-based bilateral telehaptic systems with symmetric and unsymmetric time-varying communication delays.
Abstract: In this paper, we propose a model-based control design strategy for trajectory tracking in internet-based bilateral telehaptic systems. The communication time delay between the local master and the remote slave platform is assumed to be of either symmetric or unsymmetric nature. The design combines delayed position signals with the local velocity signals and the known structure of the master and slave system dynamics. Using a Lyapunov-Krasovskii-like functional, the asymptotic tracking property of the position and velocity of the master-slave closed-loop system is established for both symmetric and unsymmetric time-varying communication delays. Finally, simulations are conducted to demonstrate the validity of the proposed design for real-time telehaptic operation.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: Simulating a laser-based tracking system that was originally developed for a six-sided spatially immersive system in environments with one and five walls shows that performance degrades with the number of walls, but it is shown that tracking with even one wall is still very feasible.
Abstract: Virtual reality tracking systems are used to detect the position and orientation of a user inside spatially immersive systems. In this paper, we simulate a laser-based tracking system that was originally developed for a six-sided spatially immersive system in environments with one and five walls to evaluate its performance for other installations. As expected, the results show that performance degrades with the number of walls, but they also show that tracking with even one wall is still very feasible.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: The Scenario Framework is presented that includes the Scenario Markup Language (SML), a simple yet expressive language for authoring realistic traffic situations and capabilities for driver behavioral data collection.
Abstract: We present the Scenario Framework that includes (1) the Scenario Markup Language (SML), a simple yet expressive language for authoring realistic traffic situations, and (2) capabilities for driver behavioral data collection. The framework facilitates large scale driver behavioral studies for data collection in multi-user online three-dimensional (3D) virtual environments.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: E-dumbbell consists of a regular dumbbell fitted with an accelerometer and a pair of vibro-tactile actuators that allow the dumbbell interface to interact with a two-dimensional Ping-Pong game.
Abstract: In this paper, the design and implementation of a cheap and simple wrist rehabilitation system is presented. E-dumbbell consists of a regular dumbbell fitted with an accelerometer and a pair of vibro-tactile actuators that allow the dumbbell interface to interact with a two-dimensional Ping-Pong game. The main purpose of the system is to enable patients to perform their daily wrist training in a convenient and entertaining manner. The system can be adapted to fit the needs of patients at different rehabilitation levels by allowing them to customize the difficulty level of the game. The training sensory data are stored in a database that can be used by the therapist to track his/her patient's progress.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: It is illustrated that by minimizing an error value with respect to the training data set and reconstructing the trajectories, the low and high-energy variants can be separated from the main gait and hence extracted.
Abstract: This paper presents a new approach based on temporal minimization for separation and extraction of high/low-energy variants embedded in human motion. A data set of over 6500 frames is used for training the proposed algorithm. Spatiotemporal cubic splines are employed for approximating the trajectories associated with walking sequences. The optimal numbers of control points required for synthesizing the neutral movements are calculated. We illustrate that by minimizing an error value with respect to the training data set and reconstructing the trajectories, the low and high-energy variants can be separated from the main gait and hence extracted.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: Tactile elements crucial to texture perception are extracted from the haptic signal in the time domain and encoded so that overall storage requirements are minimized; the approach significantly improves texture assessment quality during playback while reducing storage space requirements by up to 97%.
Abstract: Recently proposed haptic offline compression algorithms remove perceptually irrelevant haptic samples to achieve data reduction. At display-time, the irregularly subsampled haptic signal is resampled at a higher constant sampling rate using interpolation. Such algorithms, however, have an important drawback. Although they are well suited for large-amplitude quasi-static feedback forces, low-amplitude high-frequency texture information is adversely affected. This informative tactile high-frequency component, critical to convey convincing realistic haptic impressions, needs to be treated separately for compression. To this end, we extract important tactile elements crucial to texture perception from the haptic signal in the time domain and propose a method to encode them so that the overall storage requirements are minimized. We then synthesize and superimpose them onto the reconstructed signal at display-time. Psychophysical tests confirm that the proposed approach significantly improves the texture assessment quality during playback, while reducing storage space requirements by up to 97%.
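The display-time reconstruction described above can be sketched as linear interpolation of the kept low-frequency samples, with the separately decoded high-frequency texture component superimposed. The sample values and signal lengths are illustrative:

```python
# Sketch of display-time reconstruction: the irregularly subsampled
# low-frequency force signal is linearly interpolated back to the full
# display rate, then the decoded texture component is added back.

def interpolate(kept, length):
    """kept: sorted (sample_index, value) pairs; returns `length` samples.
    Assumes kept covers indices 0 .. length-1."""
    out = [0.0] * length
    for (i0, v0), (i1, v1) in zip(kept, kept[1:]):
        for i in range(i0, i1 + 1):
            t = (i - i0) / (i1 - i0)
            out[i] = v0 + t * (v1 - v0)
    return out

def reconstruct(kept, texture):
    """Superimpose the high-frequency texture onto the interpolated base."""
    base = interpolate(kept, len(texture))
    return [b + t for b, t in zip(base, texture)]
```

Interpolation alone would smooth away exactly the small, fast variations that convey texture, which is why the texture component is coded and added back separately.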

Proceedings ArticleDOI
20 Oct 2011
TL;DR: The TrendTV architecture, a set of software layers that links TV show viewers to produce a personalized recommendation system; the result is a dynamic database containing the classification of several TV programs, and an application that changes automatically to the best channel at the moment.
Abstract: In this work, we present the TrendTV architecture, a set of software layers that links TV show viewers, producing a personalized recommendation system. In this architecture it is possible to indicate the quality of a TV show in real time through interaction between viewers using a social network that can be accessed in several different ways. Associated with a personalized Electronic Programming Guide, this social network allows viewers to filter on a particular subject based on indications made by other viewers over the Web, interactive TV, or a mobile device. The result is a dynamic database containing the classification of several TV programs built from that architecture, and an application that changes automatically to the best channel at the moment.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: A novel mobile augmented reality system for artificial optical radiation (AOR) identification and measurement, designed to help experts deal with AOR-related problems in their fields according to the European Directive 2006/25/EC (occupational health and safety).
Abstract: Augmented reality has received considerable interest in recent years with the advent of specialized applications and data fusion from different visual sources. In this paper, we present a novel mobile augmented reality system for artificial optical radiation (AOR) identification and measurement. The system has been designed to help experts (both medical doctors and engineers) deal with AOR-related problems in their fields according to the European Directive 2006/25/EC (occupational health and safety). An innovative tracking system, based on image retrieval paradigms, is used for image registration.

Proceedings ArticleDOI
20 Oct 2011
TL;DR: The specification of an API called MultiArt, whose role is to support the production and presentation of telematic art in its many forms of expression, providing communication and interactivity to viewers through a 3D virtual environment.
Abstract: This paper describes the specification of an API called MultiArt, and its role is to support the production and presentation of telematic art in its many forms of expressions. This API abstracts specific art's applications that can be used together. We present an initial architecture and a case study demonstrating the use of MultiArt. Our case study makes use of a predefined functions that allow us to program robots choreography in an easy way, manage video streams captured during the art presentations and provide communication and interactivity to viewers through a 3D virtual environment.