
Showing papers by "Paweł W. Woźniak published in 2017"


Proceedings ArticleDOI
04 Sep 2017
TL;DR: A study to derive the ergonomic constraints for using finger orientation as an effective input source and shows that the finger yaw input space can be divided into the comfort and non-comfort zones.
Abstract: While most current interactive surfaces use only the position of the finger on the surface as the input source, previous work suggests using the finger orientation for enriching the input space. Thus, an understanding of the physiological restrictions of the hand is required to build effective interaction techniques that use finger orientation. We conducted a study to derive the ergonomic constraints for using finger orientation as an effective input source. In a controlled experiment, we systematically manipulated finger pitch and yaw while performing a touch action. Participants were asked to rate the feasibility of the touch action. We found that finger pitch and yaw significantly affect perceived feasibility, and 21.1% of the touch actions were perceived as impossible to perform. Our results show that the finger yaw input space can be divided into comfort and non-comfort zones. We further present design considerations for future interfaces using finger orientation.
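The comfort/non-comfort division described above can be expressed as a simple range check on the yaw angle. A minimal sketch follows; the zone boundaries used here are hypothetical placeholders for illustration, not the values reported in the paper.

```python
def yaw_zone(yaw_deg: float, comfort_range=(-30.0, 90.0)) -> str:
    """Classify a finger-yaw angle (in degrees) as 'comfort' or 'non-comfort'.

    comfort_range is an illustrative boundary pair, not taken from the study.
    """
    lo, hi = comfort_range
    return "comfort" if lo <= yaw_deg <= hi else "non-comfort"
```

An interface using finger orientation could, for example, reserve yaw values in the non-comfort zone for rarely used commands, or avoid mapping them at all.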

18 citations


Proceedings ArticleDOI
17 Oct 2017
TL;DR: This work designed, built and evaluated a remote input device using a VMRS that facilitates choosing a number on a discrete scale and determined that for conditions where users are not looking at the slider, VMRS can offer significantly better performance and accuracy.
Abstract: Despite the proliferation of screens in everyday environments, providing values to remote displays for exploring complex data sets is still challenging. Enhanced input for remote screens can increase their utility and enable the construction of rich data-driven environments. Here, we investigate the opportunities provided by a variable movement resistance slider (VMRS), based on a motorized slide potentiometer. These devices are often used in professional soundboards as an effective way to provide discrete input. We designed, built and evaluated a remote input device using a VMRS that facilitates choosing a number on a discrete scale. By comparing our prototype to a traditional slide potentiometer and a software slider, we determined that for conditions where users are not looking at the slider, VMRS can offer significantly better performance and accuracy. Our findings contribute to the understanding of discrete input and enable building new interaction scenarios for large display environments.
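The core software task for such a device is mapping the potentiometer's continuous reading onto a discrete scale (the motor then adds resistance at step boundaries). A minimal sketch of that quantization step, assuming a reading normalized to 0.0–1.0; the function name and step count are illustrative, not from the paper:

```python
def slider_to_discrete(raw: float, n_steps: int = 10) -> int:
    """Quantize a normalized slider reading (0.0-1.0) to one of n_steps values.

    Readings outside the valid range are clamped before quantization.
    """
    raw = min(max(raw, 0.0), 1.0)
    return round(raw * (n_steps - 1))
```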

16 citations


DOI
01 Jan 2017
TL;DR: This work presents four dimensions to consider when transferring knowledge through personalized educational assistance using augmented reality and aims to foster context-aware assistive systems that enhance the learning experience.
Abstract: Knowledge transfer in educational establishments has experienced a shift from printed media to digitized assistance. Creating adaptive content for educating individuals efficiently has become a major challenge within the design and development of computer-supported learning systems in schools, universities, and vocational schools. Assistance through augmented reality enables the storage of vast amounts of learning materials and annotation of content. Augmenting educational content adaptively provides a personalized experience, thus fostering the motivation of the individual. We present four dimensions to consider when transferring knowledge through personalized educational assistance using augmented reality. This is complemented by the presentation of four research questions based on the previously identified dimensions and related research. We aim to foster context-aware assistive systems that enhance the learning experience.

10 citations


Proceedings ArticleDOI
26 Nov 2017
TL;DR: This paper investigates how users can effectively manage application windows on LHRDs using four window alignment techniques: curved zooming, window grouping, window spinning and side pane navigation.
Abstract: Large high-resolution displays (LHRDs) present new opportunities for interaction design in areas such as interactive visualization and data analytics. Design processes for graphical interfaces for LHRDs are still challenging. In this paper, we explore the design space of graphical interfaces for LHRDs by engaging in the creation of four prototypes for supporting office work. Specifically, we investigate how users can effectively manage application windows on LHRDs using four window alignment techniques: curved zooming, window grouping, window spinning and side pane navigation. We present the design and implementation of these window alignment techniques in a sample office application. Based on a mixed-methods user study of our prototypes, we contribute insights on designing future graphical interfaces for LHRDs. We show that potential users appreciate techniques that enhance focus switching without changing the spatial relation between related windows.

8 citations


Proceedings ArticleDOI
26 Nov 2017
TL;DR: Physiological and behavioral measurements are explored as tools to implicitly detect users' uncertainty, and a method to integrate input variability in interactive systems is provided.
Abstract: Interactive systems, such as online search interfaces, require appropriate input to produce accurate information; when the user is uncertain about the right keywords, the results suffer. Current systems do not have the means to detect uncertainty, which may lead to a negative user experience. We explore physiological and behavioral measurements as tools to implicitly detect users' uncertainty, and provide a method to integrate input variability in interactive systems. We conducted a laboratory study where participants answered questions of varying difficulty, recording behavioral and physiological data via a key logger, an eye tracker, and a heart rate sensor. Our results show that participants spent significantly more time on difficult questions and looked longer at their answers before submitting them. Based on our results, we provide initial insights on how data from physiological sensors and logged user behavior can be utilized to enrich interactive systems and evaluate a user's uncertainty level.
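The two behavioral signals the study found discriminative (time spent on a question, and gaze dwell on the answer before submitting) lend themselves to a simple feature-extraction step. A minimal sketch follows; the data layout, thresholds, and function names are hypothetical illustrations, not the study's actual analysis pipeline.

```python
from dataclasses import dataclass


@dataclass
class QuestionLog:
    """One question's logged events (timestamps in seconds)."""
    t_shown: float           # when the question appeared
    t_submitted: float       # when the answer was submitted
    gaze_on_answer_s: float  # total gaze dwell on the answer field


def uncertainty_features(log: QuestionLog) -> dict:
    """Derive the two behavioral features associated with difficulty."""
    return {
        "answer_time_s": log.t_submitted - log.t_shown,
        "gaze_on_answer_s": log.gaze_on_answer_s,
    }


def flag_uncertain(feats: dict, time_thresh=20.0, gaze_thresh=3.0) -> bool:
    # Thresholds are illustrative placeholders, not values from the paper.
    return (feats["answer_time_s"] > time_thresh
            or feats["gaze_on_answer_s"] > gaze_thresh)
```

A search interface could use such a flag to, for instance, surface query suggestions only when the user appears uncertain.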

6 citations


Journal ArticleDOI
28 Aug 2017
TL;DR: This paper reflects on past studies in the nascent field of designing interactive technologies for advanced amateur sportsmen, and discusses issues of qualitative experience, community, motivation and temporality to highlight where current sports technologies are insufficient.
Abstract: While new technologies supporting the experience of sports appear every day, we still do not have a full understanding of how to design technology that augments the experience of physical exercise. As more and more users practice sports in Western societies, interaction design must learn to readdress the practical, social, physical and psychological aspects of sports. In this paper, I reflect on my past studies in the nascent field of designing interactive technologies for advanced amateur sportsmen. I share the practical challenges involved in augmenting experiences of training and race day performance. I discuss issues of qualitative experience, community, motivation and temporality to highlight where current sports technologies are insufficient. In particular, I focus on the experiences of those already involved in training routines and place less emphasis on beginners or those who need to be convinced to practice sports. I then discuss reasons for the difficulties involved in developing sports technologies and propose potential solutions to those difficulties to identify ways to move interaction design for sports forward.

1 citation


Journal ArticleDOI
TL;DR: TomoTable presents a possibility of easily deployable invisible positional sensing that uses electrical capacitance tomography (ECT) and creates opportunities for in-the-wild studies using multi-device systems.
Abstract: We present TomoTable—a research prototype of a position sensing device hidden inside an ordinary table. While the Human-Computer Interaction (HCI) field has extensively explored possibilities for spatially-aware multi-device interactions, the sensing methods that would enable such systems are still complex and hard to deploy in the wild. TomoTable presents a possibility of easily deployable invisible positional sensing that uses electrical capacitance tomography (ECT). Electrodes are embedded inside the table structure and provide accurate imaging of what is placed on the table. The entire system is invisible to the user. Objects can be identified based on their electrical properties. Our work creates opportunities for in-the-wild studies using multi-device systems. In this paper, we share the technical concept of TomoTable, preliminary insights on its use and perspectives for future studies.