
Showing papers presented at "Advanced Visual Interfaces in 2018"


Proceedings ArticleDOI
29 May 2018
TL;DR: It is found that Pursuits is robust against different sizes of virtual 3D targets; however, performance improves when the trajectory size is larger, particularly if the user is walking while interacting. The potential of smooth pursuits for interaction in VR is discussed.
Abstract: Gaze-based interaction using smooth pursuit eye movements (Pursuits) is attractive given that it is intuitive and overcomes the Midas touch problem. At the same time, eye tracking is becoming increasingly popular for VR applications. While Pursuits was shown to be effective in several interaction contexts, it had not previously been explored in depth for VR. In a user study (N=26), we investigated how parameters that are specific to VR settings influence the performance of Pursuits. For example, we found that Pursuits is robust against different sizes of virtual 3D targets. However, performance improves when the trajectory size (e.g., radius) is larger, particularly if the user is walking while interacting. While walking, selecting moving targets via Pursuits is generally feasible, albeit less accurate than when stationary. Finally, we discuss the implications of these findings and the potential of smooth pursuits for interaction in VR by demonstrating two sample use cases: 1) gaze-based authentication in VR, and 2) a space meteors shooting game.

70 citations


Proceedings ArticleDOI
29 May 2018
TL;DR: A scale-based questionnaire covering five factors of viewer engagement, identified from application domains such as game design and marketing, is validated through a crowdsourcing study on Amazon's Mechanical Turk; the results reveal that each technique affects viewer engagement, impacting different factors.
Abstract: Pictographic representations and animation techniques are commonly incorporated into narrative visualizations such as data videos. The general belief is that these techniques may enhance the viewer experience, thus appealing to a broad audience and enticing the viewer to consume the entire video. However, no study has formally assessed the effect of these techniques on data insight communication and viewer engagement. In this paper, we first propose a scale-based questionnaire covering five factors of viewer engagement that we identified from multiple application domains such as game design and marketing. We then validate this questionnaire through a crowdsourcing study on Amazon's Mechanical Turk to assess the effect of animation and pictographs in data videos. Our results reveal that each technique has an effect on viewer engagement, impacting different factors. In addition, insights from these studies lead to design considerations for authoring engaging data videos.

50 citations


Proceedings ArticleDOI
29 May 2018
TL;DR: A playful digital activity for primary school classrooms that promotes sustainable and active mobility by turning the daily journey to school into a collaborative educational experience; the evaluation shows a positive effect in terms of children's behavioural change as well as educational value.
Abstract: In this paper, we present a playful digital activity for primary school classrooms that promotes sustainable and active mobility by turning the daily journey to school into a collaborative educational experience. In the class game, the stretches of distance travelled in a sustainable way by each child contribute to the advancement of the whole school on a collective virtual trip. During the trip, several virtual stops are associated with the discovery of playful learning material. The approach has been evaluated in a primary school with 87 pupils and 6 teachers actively involved in the learning activity for 12 continuous weeks. The findings from the questionnaires with parents and the interviews with teachers show a positive effect in terms of children's behavioural change as well as educational value. Indications on the use of class and school collaborative gamification activities for supporting sustainable behavioural change are discussed.

29 citations


Proceedings ArticleDOI
29 May 2018
TL;DR: This work proposes a VR shop concept where products are not organized on shelves but placed spatially in appropriate locations in an apartment environment; the results indicate that product interaction using pointing in combination with the abstract cart concept performs best with regard to error rate, user experience, and workload.
Abstract: In contrast to conventional retail stores, online shopping comes with many advantages, such as unrestricted opening hours, and is more focused on functionality. However, these advantages often come at the cost of complex search and limited product visualization. Virtual Reality (VR) has the potential to create novel shopping experiences that combine the advantages of e-commerce sites and conventional stores. In this work, we propose a VR shop concept where products are not organized on shelves but placed spatially in appropriate locations in an apartment environment. We thus investigated how the spatial arrangement of products in a non-retail environment affects the user, and how the actual shopping task can be supported in VR. In order to answer these questions, we designed two product selection and manipulation techniques (grabbing and pointing) and two VR shopping cart concepts (a realistic basket and an abstract one) and evaluated them in a user study. The results indicate that product interaction using pointing in combination with the abstract cart concept performs best with regard to error rate, user experience, and workload. Overall, the proposed apartment metaphor provides excellent customer satisfaction, as well as a particularly high level of immersion and user experience, and it opens up new possibilities for VR shopping experiences that go far beyond mimicking real shop environments in VR.

28 citations


Proceedings ArticleDOI
29 May 2018
TL;DR: Choreomorphy is a system for a whole-body interactive experience, using Motion Capture and 3D technologies, that allows the users to experiment with different body and movement visualisations in real-time.
Abstract: Choreomorphy is inspired by the Greek words "choros" (dance) and "morphe" (shape). Visual metaphors, such as the notion of transformation, and visual imagery are widely used in various movement and dance practices, education, and artistic creation. Motion capture and comprehensive movement representation technologies, if appropriately employed, can become valuable tools in this field. Choreomorphy is a system for a whole-body interactive experience, using Motion Capture and 3D technologies, that allows users to experiment with different body and movement visualisations in real time. The system offers a variety of avatars, visualizations of movement, and environments, which can be easily selected through a simple GUI. The motivation for designing this system is the exploration of different avatars as "digital selves" and reflection on the impact of seeing one's own body as an avatar that can vary in shape, size, gender, and human vs. non-human characteristics, while dancing and improvising. Choreomorphy is interoperable with different motion capture systems, including, but not limited to, inertial, optical, and Kinect. The 3D representations and interactions are constantly updated through an explorative co-design process with dance artists and professionals in different sessions and venues.

24 citations


Proceedings Article
01 Jan 2018
TL;DR: This paper explores new chatbot-based conversational interfaces that allow users to exploit the interaction strategy most used by human beings: natural language.
Abstract: Natural language, in its oral or written form, represents the main medium of communication between human beings. Over time, technological advances have provided new means and tools through which human beings can express themselves and communicate with each other, without altering the original mode of interaction. In recent years, the search for new forms of interaction between users and systems has led to the diffusion of a new communication method that exploits a conversational approach based on natural language, referred to as a chatbot. This paper aims to explore new chatbot-based conversational interfaces that allow users to exploit the interaction strategy most used by human beings: natural language. Among the many possible application domains, this paper focuses on the introduction of a chatbot for supporting users in interacting with services for public administration (PA), health and wellbeing, and home automation.

23 citations


Proceedings ArticleDOI
29 May 2018
TL;DR: The paper articulates a conceptual framework for human-centered design focused on a design trade-off perspective, inspired by a brief analysis of design trade-offs in large-scale developments (self-driving cars, the sharing economy, and big data).
Abstract: Human-centered design should not only be grounded in understanding new media and technologies in terms of productivity, efficiency, reliability, and from economic perspectives; it also needs to explore innovative socio-technical environments contributing to human creativity, gratification, enjoyment, and quality of life. It represents a wicked problem with no "correct" solutions or "right" answers; the quality and success of design solutions are not only a question of fact, but a question of the values and interests of the involved stakeholders. Design trade-offs are the most basic characteristics of design. They are universal, and they make us aware that there are "no decontextualized sweet spots". In contrast to design guidelines, they widen rather than narrow design spaces by (1) avoiding simple solutions to complex problems and (2) identifying and exploring interesting new approaches with the objective of synthesizing the strengths and reducing the weaknesses of the binary choices defining the trade-offs. The paper articulates a conceptual framework for human-centered design focused on a design trade-off perspective. The framework is inspired by a brief analysis of design trade-offs in large-scale developments (self-driving cars, the sharing economy, and big data). Based on our own research activities, it is elaborated with specific design trade-offs (context-aware information delivery, meta-design, and cultures of participation) and further illustrated with a description of the Envisionment and Discovery Collaboratory, a socio-technical environment to frame and solve wicked problems in urban planning.

20 citations


Proceedings ArticleDOI
29 May 2018
TL;DR: ABBOT combines a smart tangible object for outdoor play with a mobile app for accessing new content related to the discovered natural elements; the tangible object helps children capture images of the elements they find interesting in the physical environment.
Abstract: This article illustrates ABBOT, a pervasive interactive game for children in the early years of primary school that aims to stimulate exploration of outdoor environments. ABBOT combines a smart tangible object for outdoor play with a mobile app for accessing new content related to the discovered natural elements. The tangible object helps children capture images of the elements they find interesting in the physical environment. Through simple interactive games on a tablet at home, children can continue to interact with the collected digital materials and can also access new related content. The article illustrates the design of ABBOT; it also reports on an exploratory study with 160 children from a preschool and a primary school that helped us assess children's attitude towards the game.

18 citations


Proceedings ArticleDOI
29 May 2018
TL;DR: This workshop will bring together researchers with expertise in visualization, interaction design, and natural user interfaces to build a community of researchers focusing on multimodal interaction for data visualization, explore opportunities and challenges in the research, and establish an agenda for multimodal interaction research specifically for data visualization.
Abstract: Multimodal interaction offers many potential benefits for data visualization. It can help people stay in the flow of their visual analysis and presentation, with the strengths of one interaction modality offsetting the weaknesses of others. Furthermore, multimodal interaction offers strong promise for leveraging data visualization on diverse display hardware including mobile, AR/VR, and large displays. However, prior research on visualization and interaction techniques has mostly explored a single input modality such as mouse, touch, pen, or more recently, natural language. The unique challenges and opportunities of synergistic multimodal interaction for data visualization have yet to be investigated. This workshop will bring together researchers with expertise in visualization, interaction design, and natural user interfaces. We aim to build a community of researchers focusing on multimodal interaction for data visualization, explore opportunities and challenges in our research, and establish an agenda for multimodal interaction research specifically for data visualization.

17 citations


Proceedings ArticleDOI
29 May 2018
TL;DR: The aim of this satellite event was to bring together a variety of different stakeholders, including local food producers, chefs, designers, engineers, data scientists, and sensory scientists, to discuss the interwoven future of computing technology and food.
Abstract: The excitement around computing technology in all aspects of life requires that we tackle fundamental issues of healthcare, leisure, labor, education, and food to create the society we want. The aim of this satellite event was to bring together a variety of different stakeholders, including local food producers, chefs, designers, engineers, data scientists, and sensory scientists, to discuss the interwoven future of computing technology and food. This event was co-located with the AVI 2018 conference and supported by the ACM Future of Computing Academy (ACM-FCA). The event followed a co-creation approach that encourages conjoined creative and critical thinking, feeding into the formulation of a manifesto on the future of computing and food. We hope this will inspire future discussions on the transformative role of computing technology on food.

17 citations


Proceedings ArticleDOI
29 May 2018
TL;DR: This study compares an on-body interaction technique (BodyLoci) to mid-air Marking menus in a virtual reality context and suggests that inducing users to leverage simple learning techniques, such as story-making, can substantially improve recall, and thus make it easier to master gestural techniques.
Abstract: Previous studies have shown that spatial memory and semantic aids can help users learn and remember gestural commands. Using the body as a support to combine both dimensions has therefore been proposed, but no formal evaluations have yet been reported. In this paper, we compare an on-body interaction technique (BodyLoci) to mid-air Marking menus in a virtual reality context. We consider three levels of semantic aids: no aid, story-making, and story-making with background images. Our results show important improvement when story-making is used, especially for Marking menus (28.5% better retention). Both techniques performed similarly without semantic aids, but Marking menus outperformed BodyLoci when using them (17.3% better retention). While our study does not show a benefit in using body support, it suggests that inducing users to leverage simple learning techniques, such as story-making, can substantially improve recall, and thus make it easier to master gestural techniques. We also analyze the strategies used by the participants for creating mnemonics to provide guidelines for future work.

Proceedings ArticleDOI
29 May 2018
TL;DR: Results show that superimposing components and instructions through AR reduces the number of errors, allows users to easily troubleshoot them and reduces users' mental workload.
Abstract: Building an electronic circuit is an error-prone activity for novice users; many errors can occur, such as incorrect wirings or wrong component values. This work explores the use of Augmented Reality (AR) as a technology to mitigate the issues that arise when users construct circuits. We present a study that investigates the effectiveness, usability, and cognitive load of AR visual instructions for circuit prototyping tasks. A mobile-based, window-on-the-world AR tool is compared to traditional media such as paper-based or monitor-displayed electronic drawings. Results show that superimposing components and instructions through AR reduces the number of errors, allows users to easily troubleshoot them and reduces users' mental workload.

Proceedings ArticleDOI
29 May 2018
TL;DR: This paper reviews the aspects of GUI visual complexity and operationalizes four of them with nine computation-based measures in total; future research and visual design could rely on the visual complexity aspects outlined here.
Abstract: Graphical User Interfaces (GUIs) of low visual complexity tend to have higher aesthetics, usability, and accessibility, and result in higher user satisfaction. Although a few authors have recently used or studied visual complexity, the concept still needs to be better defined for use in HCI research and GUI design, with its underlying aspects systematized and operationalized, and different measures validated. This paper reviews the aspects of GUI visual complexity and operationalizes four aspects with nine computation-based measures in total. Two user studies validated the measures on two types of stimuli - webpages (study 1, n = 55) and book pages (study 2, n = 150) - with two user groups, dyslexics (people with reading difficulties) and typical readers. The same complexity aspects could be expected to determine complexity perception for both GUI types, whereas different aspects could be expected to do so for dyslexics relative to typical readers. However, the studies showed little to no difference between dyslexics and typical readers, whereas webpages did differ from book pages in what aspects made them seem complex. It was not the intergroup differences, but the stimulus type that defined the criteria used to judge visual complexity. Future research and visual design could rely on the visual complexity aspects outlined in this paper.

Proceedings ArticleDOI
29 May 2018
TL;DR: An investigation of how several factors affect recognition of Spatiotemporal Vibrotactile Patterns suggests that physical activity has very little impact, especially compared to the cognitive task, the location of the vibrations, or the temporality of the patterns.
Abstract: Previous research demonstrated the ability for users to accurately recognize Spatiotemporal Vibrotactile Patterns (SVP): sequences of vibrations on different motors occurring either sequentially or simultaneously. However, the experiments were only run in a lab setting and the ability for users to recognize SVP in a real-world environment remains unclear. In this paper, we investigate how several factors may affect recognition: (1) physical activity (running), (2) cognitive task (i.e. primary task, typing), (3) distribution of the vibration motors across body parts and (4) temporality of the patterns. Our results suggest that physical activity has very little impact, specifically compared to cognitive task, location of the vibrations or temporality. We discuss these results and propose a set of guidelines for the design of SVPs.

Proceedings ArticleDOI
29 May 2018
TL;DR: The notion of "Face-through HMD" is proposed, and a face-capturing HMD configuration called "Behind-the-Mask", with infrared (IR) cut filters and side cameras that can be attached to existing HMDs, is presented; it can be used in many VR applications.
Abstract: A head-mounted display (HMD), which is common in virtual reality (VR) systems, normally hides the user's face. This prevents realizing face-to-face communication, in which two or more users share the same virtual space, or showing a participant's face on a surrogate robot's face when the user remotely connects to the robot through an HMD for tele-immersion. Considering that face-to-face communication is one of the fundamental requirements of real-time communication, and is widely realized and used by many non-VR telecommunication systems, an HMD's face-hiding feature is a serious problem that limits the possibilities of VR. To address this issue, we propose the notion of "Face-through HMD" and present a face-capturing HMD configuration called "Behind-the-Mask", with infrared (IR) cut filters and side cameras that can be attached to existing HMDs. As an IR cut filter only reflects infrared light and transmits visible light, it is transparent to the user's eye but reflects an infrared image of the user's face. By merging a pre-scanned 3D face model of the user with the face image obtained from our HMD, the 3D face model of the user with eye and mouth movements can be reconstructed. We consider that our proposed HMD can be used in many VR applications.

Proceedings ArticleDOI
29 May 2018
TL;DR: This work proposes a method of interaction involving tapping specific locations on the body, identifies candidate locations for running and cycling, and compares them in a series of controlled experiments with athletes to suggest that specific locations are faster and have minimal disruption to movement, even under induced fatigue conditions.
Abstract: Wearables are increasingly used during training to quantify performance and provide valuable real-time information. However, interacting with these devices in motion may disrupt the movements of the activity. We propose a method of interaction involving tapping specific locations on the body, identify candidate locations for running and cycling, and compare them in a series of controlled experiments with athletes. A purpose-built prototype measures speed of interaction and gives feedback cues for athletes to report the physical effects on the activity itself. Our results suggest that specific locations are faster and have minimal disruption to movement, even under induced fatigue conditions. The overall method is fast - 1.31s for running and 1.65s for cycling. Preferred locations differ significantly across sports, with stable body parts ranking higher. We effectively demonstrated the use of a single hand for interaction during running with two distinct tap gestures. A set of guidelines informs the design of new sports technologies.

Proceedings ArticleDOI
29 May 2018
TL;DR: This work designs a VR editing tool and a custom 360° video player to provide the content creator with the ability to drive the user's attention, and introduces a technique called snap-changes, aimed at directing viewers to points of interest pre-defined by the content producer.
Abstract: Cinematic Virtual Reality (VR) has the potential of touching the masses with new exciting experiences, but faces two main hurdles: one is the ability to stream these videos, the other is their design and creation. Indeed, data rates are much higher than for conventional video, and in addition to the discomfort and sickness that might arise in a fully immersive experience with a headset, users might get lost when exploring a 360° video and miss the main elements required to understand the underlying plot. We take an innovative approach by addressing the creation and streaming problems jointly. We introduce a technique called snap-changes, aimed at directing viewers to points of interest pre-defined by the content producer. We design a VR editing tool and a custom 360° video player to provide the content creator with the ability to drive the user's attention, and report results from two sets of user experiments that indicate that snap-changes indeed help reduce users' head motion.

Proceedings ArticleDOI
29 May 2018
TL;DR: Results provide evidence that advanced visualization techniques provide a more suitable framework for deploying graphical user authentication schemes and underpin the need for considering such techniques in assistive and/or adaptive mechanisms that help users create stronger graphical passwords.
Abstract: Nowadays, technological advances introduce new visualization and user interaction possibilities. Focusing on the user authentication domain, graphical passwords are considered a better fit for interaction environments that lack a physical keyboard. Nonetheless, current graphical user authentication schemes are deployed in conventional layouts, which introduce security vulnerabilities associated with the strength of the user-selected passwords. Aiming to investigate the effectiveness of advanced visualization layouts in selecting stronger passwords, this paper reports a between-subjects study comparing two different design layouts: a two-dimensional and a three-dimensional one. Results provide evidence that advanced visualization techniques provide a more suitable framework for deploying graphical user authentication schemes and underpin the need for considering such techniques in assistive and/or adaptive mechanisms that help users create stronger graphical passwords.

Proceedings ArticleDOI
29 May 2018
TL;DR: Several artificial-landmark visualizations are developed that can represent locations even in documents that are many hundreds of pages long, and results show that providing two columns of landmark icons led to significantly better performance and user preference.
Abstract: Document readers with linear navigation controls do not work well when users need to navigate to previously-visited locations, particularly when documents are long. Existing solutions - bookmarks, search, history, and read wear - are valuable but limited in terms of effort, clutter, and interpretability. In this paper, we investigate artificial landmarks as a way to improve support for revisitation in long documents - inspired by visual augmentations seen in physical books such as coloring on page edges or indents cut into pages. We developed several artificial-landmark visualizations that can represent locations even in documents that are many hundreds of pages long, and tested them in studies where participants visited multiple locations in long documents. Results show that providing two columns of landmark icons led to significantly better performance and user preference. Artificial landmarks provide a new mechanism to build spatial memory of long documents - and can be used either alone or with existing techniques like bookmarks, read wear, and search.

Proceedings ArticleDOI
29 May 2018
TL;DR: The design of an interactive multimedia platform that enhances the annotation process of medical images in the domain of dermatology, adopting gamification and "games with a purpose" (GWAP) strategies to improve engagement and the production of qualified datasets, while also fostering their sharing and practical evaluation.
Abstract: The deep learning approach has increased the quality of automatic medical diagnoses at the cost of building qualified datasets to train and test such supervised machine learning methods. Image annotation is one of the main activities of dermatologists, and the quality of annotation depends on the physician's experience and on the number of studied cases: manual annotations are very useful to extract features like contours, intersections, and shapes that can be used in the processes of lesion segmentation and classification performed by automatic agents. This paper proposes the design of an interactive multimedia platform that enhances the annotation process of medical images in the domain of dermatology, adopting gamification and "games with a purpose" (GWAP) strategies to improve engagement and the production of qualified datasets, while also fostering their sharing and practical evaluation. Special attention is given to the design choices, theories, and assumptions, as well as to the implementation and technological details.

Proceedings ArticleDOI
29 May 2018
TL;DR: This work presents a model for IoT devices that allows assessing those devices and their suitability for a certain domain according to four dimensions: communication, target, data manipulation, and development.
Abstract: The current Internet of Things (IoT) market offers a wide variety of devices with complex designs and different functionality. In addition, the same IoT device can be used in different domains, from home to industry to healthcare. The management of such devices occurs in different ways, for example through visual interaction using high-level programming languages (e.g. Event-Condition-Action rules) or through high-level APIs. Generally, end users are not technical experts and are not able to configure their IoT devices; thus they need external tools (or a visual interaction paradigm) to exploit and better control them. In this work, we present a model for IoT devices that allows assessing those devices and their suitability for a certain domain according to four dimensions: communication, target, data manipulation, and development. The model aims at a better understanding of device capabilities and, consequently, at facilitating the choice of the devices that best suit the domain in which they will be used.

Proceedings ArticleDOI
29 May 2018
TL;DR: In VCP the 'party' is presented as a mixture of individuals and small conversation groups 'circulating' at the virtual venue, and words from conversations are shown in word-clouds displayed around conversation groups, sufficient to identify topics of conversation allowing participants to decide whether or not to join a group.
Abstract: Whilst the primary purpose of conferences is work --- formal exchange and sharing of information --- they almost always also include elements of play: informal social and entertainment elements, such as receptions, dinners, and tourism activities. These activities also provide the opportunity for 'serious' discussions, meeting people, and networking, and are an essential part of a good conference. Despite this, most virtual conferencing tools fail to provide support for such activities, instead focusing on austere goals related to saving money, time, and travel. This paper describes the concept of a Virtual Cocktail Party (VCP) tool to integrate into a virtual conference environment. In VCP the 'party' is presented as a mixture of individuals and small conversation groups 'circulating' at the virtual venue. Exploiting an automated speech-to-text system, words from conversations are shown in word-clouds displayed around conversation groups, sufficient to identify topics of conversation and allow participants to decide whether or not to join a group.

Proceedings ArticleDOI
29 May 2018
TL;DR: From a set of trajectories of the players and the ball in a football (soccer) game, the pressure of the defending players upon the ball and the opponents is estimated for each time frame.
Abstract: From a set of trajectories of the players and the ball in a football (soccer) game, we computationally estimate, for each time frame, the pressure of the defending players upon the ball and the opponents. The extracted pressure relationships are visualized in detailed and summarized forms. Interactive filtering enables exploration of the pressure relationships in selected game episodes or in game situations satisfying specific query conditions.

Proceedings ArticleDOI
29 May 2018
TL;DR: The goal of the workshop is to bring together researchers and practitioners interested in presenting and discussing the potential use of state-of-the-art advanced visual interfaces in enhancing the daily CH experience.
Abstract: Cultural Heritage (CH) is a challenging domain of application for novel Information and Communication Technologies (ICT), where visualization plays a major role in enhancing visitors' experience, either onsite or online. Technology-supported natural human-computer interaction is a key factor in enabling access to CH assets. Advances in ICT make it easier for visitors to access collections online and to better experience CH onsite. The range of visualization devices - from tiny smart watch screens and wall-size large situated public displays to the latest generation of immersive head-mounted displays - together with the increasing availability of real-time 3D rendering technologies for online and mobile devices and, recently, Internet of Things (IoT) approaches, require exploring how they can be applied successfully in CH. Following the successful workshop at AVI 2016 and the large number of recent events and projects focusing on CH, and considering that 2018 has been declared the European Year of Cultural Heritage, the goal of the workshop is to bring together researchers and practitioners interested in presenting and discussing the potential use of state-of-the-art advanced visual interfaces in enhancing our daily CH experience.

Proceedings ArticleDOI
29 May 2018
TL;DR: E evaluation of the GRAM system, with the help of university research management stakeholders, reveals interesting patterns in research investment and output for universities across the world (USA, Europe, Asia) and for different types of universities.
Abstract: The Global Research Activity Map (GRAM) is an interactive web-based system for visualizing and analyzing worldwide scholarship activity as represented by research topics. The underlying data for GRAM is obtained from Google Scholar academic research profiles and is used to create a weighted topic graph. Nodes correspond to self-reported research topics and edges indicate co-occurring topics in the profiles. The GRAM system supports map-based interactive features, including semantic zooming, panning, and searching. Map overlays can be used to compare human resource investment, displayed as the relative number of active researchers in particular topic areas, as well as scholarly output in terms of citations and normalized citation counts. Evaluation of the GRAM system, with the help of university research management stakeholders, reveals interesting patterns in research investment and output for universities across the world (USA, Europe, Asia) and for different types of universities. While some of these patterns are expected, others are surprising. Overall, GRAM can be a useful tool to visualize human resource investment and research productivity in comparison to peers at a local, regional, and global scale. Such information is needed by university administrators to identify institutional strengths and weaknesses and to make strategic data-driven decisions.
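The weighted topic graph described here (nodes = self-reported topics, edge weights = co-occurrence counts across profiles) can be sketched as follows; the profile data structure is an assumption for illustration, not GRAM's actual pipeline:

```python
from collections import Counter
from itertools import combinations

def build_topic_graph(profiles):
    """Build a weighted co-occurrence graph from researcher profiles.
    Each profile is a list of topic strings; an edge's weight counts
    how many profiles list both of its endpoint topics."""
    nodes = Counter()  # topic -> number of profiles listing it
    edges = Counter()  # sorted (topic_a, topic_b) pair -> co-occurrence count
    for topics in profiles:
        unique = sorted(set(topics))
        nodes.update(unique)
        edges.update(combinations(unique, 2))
    return nodes, edges

profiles = [
    ["visualization", "hci", "eye tracking"],
    ["visualization", "hci"],
    ["machine learning", "visualization"],
]
nodes, edges = build_topic_graph(profiles)
print(edges[("hci", "visualization")])  # -> 2
```

Node counts like `nodes["visualization"]` correspond to the "relative number of active researchers" overlay the abstract mentions.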

Proceedings ArticleDOI
29 May 2018
TL;DR: This work investigated how inattentional blindness affect users' perception through an eye tracking investigation on Simons and Chabris' video as well as on the web site of an airline that uses a rotating banner to advertise special deals.
Abstract: Interface designers often use change and movement to draw users' attention. Research on change blindness and inattentional blindness challenges this approach. In Simons and Chabris' 1999 "Gorillas in our midst" experiment, they showed how people who are focused on a task are likely to miss the occurrence of an unforeseen event (a man in a gorilla suit in their case), even if it appears in their field of vision. This relates to interface design because interfaces often include moving elements such as rotating banners or advertisements, which designers obviously want users to notice. We investigated how inattentional blindness affects users' perception through an eye tracking investigation on Simons and Chabris' video as well as on the web site of an airline that uses a rotating banner to advertise special deals. In both cases users performed tasks that required their full attention and were then interviewed to determine to what extent they perceived the changes or new information. We compared the results of the two experiments to see how Simons and Chabris' theory applies to interface design. Our findings show that although 43% of the participants had fixations on the gorilla, only 22% said that they noticed it. On the web site, 75% of participants had fixations on the moving banner but only 33% could recall any information related to it. We offer reasons for these results and provide designers with advice on how to address the effect of inattentional blindness and change blindness in their designs.

Proceedings ArticleDOI
29 May 2018
TL;DR: This model represents results as markers, or as geometric objects, on 2D/3D layers, using stylized and highly colored shapes to enhance their visibility and supports interactive information filtering in the map by enabling the user to focus on different data categories.
Abstract: In Geographical Information search, map visualization can challenge the user because results can consist of a large set of heterogeneous items, increasing visual complexity. We propose a novel visualization model to address this issue. Our model represents results as markers, or as geometric objects, on 2D/3D layers, using stylized and highly colored shapes to enhance their visibility. Moreover, the model supports interactive information filtering in the map by enabling the user to focus on different data categories, using transparency sliders to tune the opacity, and thus the emphasis, of the corresponding data items. A test with users provided positive results concerning the efficacy of the model.
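The per-category transparency sliders described above amount to mapping a slider position to the opacity of that category's map layer. A minimal sketch, with hypothetical category names and slider range:

```python
def layer_alpha(slider_value, v_min=0, v_max=100):
    """Map a slider position to an alpha in [0, 1]; 0 hides the
    category's layer entirely, 1 renders it fully opaque."""
    slider_value = max(v_min, min(v_max, slider_value))  # clamp to range
    return (slider_value - v_min) / (v_max - v_min)

# Emphasize 'hotels', de-emphasize 'restaurants' (categories are illustrative).
sliders = {"hotels": 100, "restaurants": 25}
alphas = {cat: layer_alpha(v) for cat, v in sliders.items()}
print(alphas)  # {'hotels': 1.0, 'restaurants': 0.25}
```

Applying each alpha to its layer lets the user tune the relative emphasis of heterogeneous result categories without removing any of them from the map.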

Proceedings Article
01 Jan 2018
TL;DR: The Semiotic Framework for Virtual Reality (VR) usability and user experience evaluation (UX) offers a theoretical model for VR applications classification and a combination of evaluation methods and a study protocol to be used for testing usability and UX in the VR field.
Abstract: This paper presents the results of a pilot study aimed at validating the Semiotic Framework for Virtual Reality (VR) usability and user experience (UX) evaluation. The framework offers a theoretical model for classifying VR applications, together with a combination of evaluation methods and a study protocol to be used for testing usability and UX in the VR field. The main goal of our approach is to provide a complete framework capable of overcoming and correctly interpreting the discrepancies that may arise from the application of cognitive and semiotic methods of evaluation. The positive preliminary results of the pilot experiment led the authors to the design of a full-scale study that is already ongoing and that is focused on developing a complete evaluation tool for VR.

Proceedings ArticleDOI
29 May 2018
TL;DR: TouchTokenBuilder and TouchTokenTracker are introduced that, taken together, aim at facilitating the development of tailor-made tangible interfaces, showing the strengths and limitations of tangible interfaces with passive tokens.
Abstract: TouchTokens were introduced recently as a means to design low-cost tangible interfaces. The technique consists in recognizing multi-touch patterns associated with specific tokens, and works on any touch-sensitive surface, with passive tokens that can be made out of any material. TouchTokens have so far been limited to a few basic geometrical shapes only, which puts a significant practical limit on how tailored token sets can be. In this article, we introduce TouchTokenBuilder and TouchTokenTracker that, taken together, aim at facilitating the development of tailor-made tangible interfaces. TouchTokenBuilder is an application that assists interface designers in creating token sets using a simple direct-manipulation interface. TouchTokenTracker is a library that enables tracking the tokens' full geometry. We report on experiments with those tools, showing the strengths and limitations of tangible interfaces with passive tokens.
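Recognizing which passive token is on the surface from its multi-touch pattern can be sketched as follows: compare the sorted pairwise distances of the observed touch points against each registered template. This is one simple way to do such matching, not the published TouchTokens algorithm, and the template shapes are illustrative:

```python
import math
from itertools import combinations

def signature(points):
    """Sorted pairwise distances between touch points: a simple
    translation- and rotation-invariant descriptor of the pattern."""
    return sorted(math.dist(p, q) for p, q in combinations(points, 2))

def recognize(touches, templates, tolerance=5.0):
    """Return the name of the template whose signature best matches the
    observed touches, or None if nothing matches within `tolerance` (mm)."""
    sig = signature(touches)
    best, best_err = None, tolerance
    for name, pts in templates.items():
        tmpl = signature(pts)
        if len(tmpl) != len(sig):
            continue  # different number of touch points: cannot match
        err = max(abs(a - b) for a, b in zip(sig, tmpl))
        if err < best_err:
            best, best_err = name, err
    return best

templates = {
    "triangle": [(0, 0), (40, 0), (20, 35)],
    "bar": [(0, 0), (30, 0), (60, 0)],
}
# Touch points from the triangle token, translated across the surface:
print(recognize([(10, 10), (50, 10), (30, 45)], templates))  # -> 'triangle'
```

Because the descriptor ignores translation and rotation, the token can be placed anywhere on the surface; distinguishing many tailor-made tokens then reduces to designing touch-point layouts whose signatures are sufficiently far apart.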

Proceedings ArticleDOI
29 May 2018
TL;DR: The existence of different interaction zones and the distance at which these zones are relevant are dependent on display size, and guidelines for the design of interactive display systems are offered.
Abstract: The goal of our research was to understand the effects of display size on interaction zones as they apply to interactive systems. Interaction zone models for interactive displays are often static and do not consider the size of the display in their definition. As the interactive display ecosystem becomes more size diverse, current models for interaction are limited in their applicability. This paper describes the results of an exploratory study in which participants interacted with, and discussed their expectations of, interactive displays ranging from personal to wall-sized. Our approach was open-ended rather than grounded in existing interaction zone models in order to explore potential differences in interaction zones and distances. We found that the existence of different interaction zones and the distance at which these zones are relevant are dependent on display size. In discussion of the results, we explore implications of our findings and offer guidelines for the design of interactive display systems.