Author

Francisco R. Ortega

Bio: Francisco R. Ortega is an academic researcher from Colorado State University. The author has contributed to research in topics: Augmented reality & Gesture. The author has an h-index of 10 and has co-authored 69 publications receiving 477 citations. Previous affiliations of Francisco R. Ortega include University of Miami & Florida International University.


Papers
Journal ArticleDOI
Authors: Anthony Steed (University College London), Francisco R. Ortega (Colorado State University), Adam S. Williams (Colorado State University), Ernst Kruijff (Bonn-Rhein-Sieg University of Applied Sciences), Wolfgang Stuerzlinger (Simon Fraser University), Anil Ufuk Batmaz (Simon Fraser University), Andrea Stevenson Won (Cornell University), Evan Suma Rosenberg (University of Minnesota), Adalberto L. Simeone (KU Leuven), Aleshia Hayes (University of North Texas)

76 citations

Book ChapterDOI
01 Jan 2015
TL;DR: A sensor fusion algorithm is developed and implemented for detecting orientation in three dimensions by combining accelerometer and gyroscope data, and a Kalman filter is designed to compensate for the inertial sensors' errors.
Abstract: In this paper a sensor fusion algorithm is developed and implemented for detecting orientation in three dimensions. Tri-axis MEMS inertial sensor and tri-axis magnetometer outputs are used as input to the fusion system. A Kalman filter is designed to compensate for the inertial sensors' errors by combining accelerometer and gyroscope data. A tilt compensation unit is designed to calculate the heading of the system.
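As a rough illustration of this kind of fusion (a sketch, not the chapter's actual filter), the code below estimates pitch with a two-state Kalman filter that integrates a bias-corrected gyroscope rate and corrects it with the accelerometer-derived tilt, then computes a tilt-compensated heading from a magnetometer vector. All names, noise parameters, and the baseline structure are illustrative assumptions.

```python
import numpy as np

class PitchKalman:
    """Two-state Kalman filter: estimates pitch angle and gyro bias."""

    def __init__(self, q_angle=0.001, q_bias=0.003, r_acc=0.03):
        self.x = np.zeros(2)                 # state: [pitch, gyro bias]
        self.P = np.eye(2)                   # state covariance
        self.Q = np.diag([q_angle, q_bias])  # process noise (assumed)
        self.R = r_acc                       # accel. measurement noise

    def update(self, gyro_rate, acc_pitch, dt):
        # Predict: integrate the bias-corrected gyroscope rate.
        self.x[0] += dt * (gyro_rate - self.x[1])
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q * dt
        # Correct with the accelerometer-derived pitch (gravity tilt).
        H = np.array([1.0, 0.0])
        S = H @ self.P @ H + self.R
        K = self.P @ H / S                   # Kalman gain
        self.x += K * (acc_pitch - self.x[0])
        self.P = (np.eye(2) - np.outer(K, H)) @ self.P
        return self.x[0]

def tilt_compensated_heading(mag, pitch, roll):
    """Rotate the magnetometer vector into the horizontal plane before
    taking the heading, as a tilt-compensation unit would."""
    mx, my, mz = mag
    xh = mx * np.cos(pitch) + mz * np.sin(pitch)
    yh = (mx * np.sin(roll) * np.sin(pitch) + my * np.cos(roll)
          - mz * np.sin(roll) * np.cos(pitch))
    return np.arctan2(-yh, xh)
```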

71 citations

Journal ArticleDOI
TL;DR: The overall results show that the PD signal is more effective and robust for differentiating “relaxation” vs. “stress,” in comparison with the traditionally used GSR signal.
Abstract: The pupil diameter (PD), controlled by the autonomic nervous system, appears to provide a strong indication of affective arousal, as found by previous research, but it has not yet been investigated fully. In this study, new approaches based on monitoring and processing the PD signal for off-line and on-line "relaxation" vs. "stress" differentiation are proposed. For the off-line approach, wavelet denoising, Kalman filtering, data normalization, and feature extraction are applied sequentially. For the on-line approach, a hard threshold, a moving-average window, and three stress-detection steps are implemented. In order to use only the most reliable data, two data-selection methods (a paired t-test based on galvanic skin response (GSR) data, and subject self-evaluation) are applied, achieving average classification accuracies of up to 86.43% and 87.20% for the off-line algorithm and 72.30% and 73.55% for the on-line algorithm, for the respective selection methods. The GSR was also monitored and processed in our experiments for comparison, with the highest classification rate achieved being only 63.57% (with the off-line processing algorithm). The overall results show that the PD signal is more effective and robust for differentiating "relaxation" vs. "stress" than the traditionally used GSR signal.
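A minimal sketch of an on-line scheme in the spirit described above (hard threshold plus moving-average window); the sampling rate, window length, threshold rule, and relaxed-baseline assumption are illustrative choices, not the authors' actual parameters.

```python
import numpy as np

def online_stress_flags(pd_signal, fs=60, window_s=2.0, k=1.5):
    """Flag samples whose smoothed pupil diameter (PD) exceeds a hard
    threshold derived from an assumed relaxed-baseline segment."""
    pd_signal = np.asarray(pd_signal, dtype=float)
    win = max(1, int(window_s * fs))
    # Moving-average window to suppress blink spikes and sensor noise.
    smoothed = np.convolve(pd_signal, np.ones(win) / win, mode="same")
    # Hard threshold: baseline mean + k standard deviations, estimated
    # from the first five seconds (assumed to be relaxation).
    baseline = smoothed[: 5 * fs]
    threshold = baseline.mean() + k * baseline.std()
    return smoothed > threshold
```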

59 citations

Journal ArticleDOI
TL;DR: Findings include the types of gestures performed, the timing between co-occurring gestures and speech (130 milliseconds), perceived workload by modality (using NASA TLX), and design guidelines arising from this study.
Abstract: The primary objective of this research is to understand how users manipulate virtual objects in augmented reality using multimodal interaction (gesture and speech) and unimodal interaction (gesture). Through this understanding, natural-feeling interactions can be designed for this technology. These findings are derived from an elicitation study employing a Wizard of Oz design, aimed at developing user-defined multimodal interaction sets for building tasks in 3D environments using optical see-through augmented reality headsets. The modalities tested are gesture and speech combined, gesture only, and speech only. The study was conducted with 24 participants. The canonical referents for translation, rotation, and scale were used along with some abstract referents (create, destroy, and select). A consensus set of gestures for interactions is provided. Findings include the types of gestures performed, the timing between co-occurring gestures and speech (130 milliseconds), perceived workload by modality (using NASA TLX), and design guidelines arising from this study. Multimodal interaction, in particular gesture and speech interaction for augmented reality headsets, is essential as this technology becomes the future of interactive computing. It is possible that in the near future, augmented reality glasses will become pervasive.
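For readers unfamiliar with elicitation studies, a consensus set is typically derived from an agreement score over participants' proposals for each referent. The sketch below uses the agreement rate AR(r) of Vatavu and Wobbrock; the proposal labels are made-up examples, not data from this study.

```python
from collections import Counter

def agreement_rate(proposals):
    """AR(r) = sum over groups of identical proposals P_i of
    |P_i|(|P_i| - 1), divided by |P|(|P| - 1), for one referent r."""
    n = len(proposals)
    if n < 2:
        return 1.0
    groups = Counter(proposals)
    return sum(c * (c - 1) for c in groups.values()) / (n * (n - 1))

# Hypothetical proposals from 24 participants for the "rotate" referent.
rotate = ["twist"] * 14 + ["circle"] * 6 + ["flick"] * 4
print(f"AR(rotate) = {agreement_rate(rotate):.2f}")  # 0.41
```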

27 citations


Cited by

Book ChapterDOI
01 Jan 2008
TL;DR: The challenge of qualitative spatial reasoning (QSR) is to provide calculi that allow a machine to represent and reason with spatial entities without resort to the traditional quantitative techniques prevalent in, for example, computer graphics or computer vision communities.
Abstract: Early attempts at qualitative spatial reasoning within the qualitative reasoning (QR) community led to the poverty conjecture. The need for spatial representations and spatial reasoning is ubiquitous in artificial intelligence (AI), from robot planning and navigation to interpreting visual inputs to understanding natural language. In all these cases, the need to represent and reason about spatial aspects of the world is of key importance. Related fields of research such as geographic information science (GIScience) have also driven the spatial representation and reasoning community to produce efficient, expressive, and useful calculi. There has been considerable research in spatial representations that are based on metric measurements, in particular within the vision and robotics communities, and also on raster and vector representations in GIScience. This chapter focuses on symbolic and, in particular, qualitative representations. The challenge of qualitative spatial reasoning (QSR) is to provide calculi that allow a machine to represent and reason with spatial entities without resort to the traditional quantitative techniques prevalent in, for example, computer graphics or computer vision communities.
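As a toy illustration of such a calculus (not taken from the chapter), the sketch below encodes a small fragment of the RCC-8 composition table and uses it to constrain the relation between two regions given their relations to a third. The entries shown are standard RCC-8 compositions, but the fragment and the example scenario are illustrative.

```python
# Relations: DC (disconnected), EC (externally connected), PO (partial
# overlap), TPP/NTPP ((non-)tangential proper part), their inverses
# TPPi/NTPPi, and EQ (equal).
ALL = {"DC", "EC", "PO", "TPP", "NTPP", "TPPi", "NTPPi", "EQ"}

# A few standard entries of the 8x8 RCC-8 composition table:
# (a R1 b) and (b R2 c) constrain the relation between a and c.
COMPOSE = {
    ("NTPP", "NTPP"): {"NTPP"},        # nested parts stay strictly inside
    ("TPP",  "NTPP"): {"NTPP"},
    ("TPP",  "TPP"):  {"TPP", "NTPP"},
    ("TPP",  "DC"):   {"DC"},          # a part of b, b apart from c
    ("NTPP", "DC"):   {"DC"},
    ("DC",   "DC"):   ALL,             # uninformative: anything possible
}

def compose(r1, r2):
    # Entries omitted from this fragment impose no constraint.
    return COMPOSE.get((r1, r2), ALL)

# A park that is a tangential part of a district strictly inside a city
# must itself be strictly inside the city:
print(compose("TPP", "NTPP"))  # {'NTPP'}
```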

420 citations

Journal ArticleDOI
TL;DR: The purpose of this paper is to provide a holistic conceptualisation of Service Engineering and an up-to-date review of the literature, with a specific focus on its adoption in the product-service systems (PSS) context.

368 citations

Journal ArticleDOI
TL;DR: This work reviews and brings together recent work on automatic stress detection, surveying the measurements used across the three main modalities, namely psychological, physiological, and behavioural, in order to give hints about the most appropriate techniques to use and thereby facilitate the development of such a holistic system.

329 citations

Dissertation
01 Jan 2014
TL;DR: This thesis explores flat displays, deformable displays, flexible materials, and static and mobile projection displays in dynamic environments; a soft gripper serves as an example application for merging the digital and the physical through flexible materials, embodied computation, and actuation.
Abstract: In 1965 Ivan E. Sutherland envisioned the Ultimate Display, a room in which a computer can directly control the existence of matter. This type of display would merge the digital and the physical world, dramatically changing how people interact with computers. This thesis explores flat displays, deformable displays, flexible materials, and static and mobile projection displays in dynamic environments. Two aspects of the dynamic environment are considered. One is mobile human nature: a person moving through or inside an environment. The other is the change or movement of the environment itself. The initial study consisted of a mixed reality application based on recent motor-learning research. It tested whether a performer's attentional focus on markers external to the body improves the accuracy and duration of acquiring a motor skill, compared with the performer focusing on their own body accompanied by verbal instructions. This experiment showed the need for displays that resemble physical reality. Deformable displays and Organic User Interfaces (OUIs) leverage shape, material, and the inherent properties of matter in order to create natural, intuitive forms of interaction. We suggested designing OUIs employing depth sensors as 3D input and deformable displays as 3D output, and identifying attributes that couple matter to human perception and motor skills. Flexible materials were explored by developing a soft gripper able to hold everyday objects of various shapes and sizes. It did not use complex hardware or control algorithms, but rather combined sheets of flexible plastic materials with a single servo motor. The gripper showed how a simple design with a minimal control mechanism can solve a complex problem in a dynamic environment, and it serves as an example application for merging the digital and the physical through flexible materials, embodied computation, and actuation. The next two experiments merge digital information with the physical dynamic environment by using mobile and static projectors. The mobile-projector experiment consisted of GPS navigation using a bike-mounted projector displaying a map on the pavement in front of the bike. We found that, compared with a bike-mounted smartphone, the mobile projector yields a lower cognitive load for the map-navigation task. A dynamic space emerges from the navigation task's requirements, and the projected display becomes a part of the physical environment. In the final experiment, a person interacts with a changing, growing environment onto which digital information is projected from above using a static projector. The interactive space consists of cardboard building blocks, the arrangement of which is limited by the area of projection. The user adds cardboard blocks to the cluster based upon feedback projected from above. Concepts from artificial intelligence and architecture were applied to understand the interaction between the environment, the user, the morphology, and the material of the physical building system.

319 citations