
Showing papers presented at "Advanced Visual Interfaces in 2016"


Proceedings ArticleDOI
07 Jun 2016
TL;DR: This expanded description of the roles played by animation in interfaces today aims at inspiring the HCI research community to find novel uses of animation, guiding researchers towards evaluation, and sparking further research.
Abstract: Animations are commonplace in today's user interfaces. From bouncing icons that catch attention, to transitions helping with orientation, to tutorials, animations can serve numerous purposes. We revisit Baecker and Small's pioneering work Animation at the Interface, 25 years later. We reviewed academic publications and commercial systems, and interviewed 20 professionals of various backgrounds. Our insights led to an expanded set of roles played by animation in interfaces today: keeping users in context, teaching, improving user experience, data encoding, and visual discourse. We illustrate each role with examples from practice and research, discuss evaluation methods, and point to opportunities for future research. This expanded description of roles aims at inspiring the HCI research community to find novel uses of animation, guiding it towards evaluation, and sparking further research.

71 citations


Proceedings ArticleDOI
07 Jun 2016
TL;DR: This paper proposes a dynamic blur control method that gradually blurs the image on the display up to the threshold at which viewers become aware of the modulation, while the region to which viewers' attention should be guided remains unblurred.
Abstract: In information media such as TV programs, digital signage, or web pages, content providers often want to guide viewers' attention to a particular location on the display. However, "active" methods, such as flashing displays, using animation, or changing colors, often interrupt viewers' concentration and make viewers feel annoyed. This paper proposes a method for guiding viewers' attention without viewers noticing. Exploiting a characteristic of the human visual system, we propose a dynamic blur control method. Our method gradually blurs the image on the display up to the threshold at which viewers become aware of the modulation, while the region to which viewers' attention should be guided remains unblurred. Two subjective experiments were conducted to show the effectiveness of our method. In the first, viewers' attention was guided to the unblurred region using blur control. In the second, the threshold at which viewers become aware of the modulation was found, and viewers' gaze was guided while keeping the blur below this threshold. This means that viewers' attention can be guided without them noticing.
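
The technique suggests a simple implementation pattern: blend a progressively blurred copy of each frame with the original through a spatial mask that keeps the target region sharp. A minimal sketch using OpenCV, where the blur ceiling sigma_max and the circular target region are hypothetical parameters standing in for the experimentally found awareness threshold:

```python
import cv2
import numpy as np

def subtle_attention_frame(img, center, radius, t, sigma_max=3.0):
    """Blend a progressively blurred frame with a sharp target region.

    t in [0, 1] ramps the blur towards sigma_max, a stand-in for the
    experimentally determined threshold below which viewers do not
    notice the modulation.
    """
    sigma = max(t * sigma_max, 1e-3)
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)

    # Soft circular mask: 1.0 inside the attention target, 0.0 outside.
    mask = np.zeros(img.shape[:2], np.float32)
    cv2.circle(mask, center, radius, 1.0, -1)
    mask = cv2.GaussianBlur(mask, (0, 0), max(radius / 4, 1))[..., None]

    # Keep the target sharp; the rest of the frame drifts towards the blur.
    return (mask * img + (1 - mask) * blurred).astype(img.dtype)
```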

40 citations


Proceedings ArticleDOI
07 Jun 2016
TL;DR: Graph autocomplete is introduced, an interactive approach that guides users to construct and refine queries while preventing over-specification; a twelve-participant, within-subject user study demonstrates Visage's ease of use and shows that users construct graph queries significantly faster than with a conventional query language.
Abstract: Extracting useful patterns from large network datasets has become a fundamental challenge in many domains. We present Visage, an interactive visual graph querying approach that empowers users to construct expressive queries, without writing complex code (e.g., finding money laundering rings of bankers and business owners). Our contributions are as follows: (1) we introduce graph autocomplete, an interactive approach that guides users to construct and refine queries, preventing over-specification; (2) Visage guides the construction of graph queries using a data-driven approach, enabling users to specify queries with varying levels of specificity, from concrete and detailed (e.g., query by example), to abstract (e.g., with "wildcard" nodes of any type), to purely structural matching; (3) a twelve-participant, within-subject user study demonstrates Visage's ease of use and the ability to construct graph queries significantly faster than using a conventional query language; (4) Visage works on real graphs with over 468K edges, achieving sub-second response times for common queries.
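
Visage's own engine is not shown here, but the flavor of structural matching with "wildcard" nodes can be illustrated with NetworkX's subgraph isomorphism API; the graph, the attribute name, and the wildcard convention below are illustrative assumptions:

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Toy data graph with typed nodes (a property-graph-style "type" attribute).
G = nx.Graph()
G.add_nodes_from([(1, {"type": "banker"}), (2, {"type": "owner"}),
                  (3, {"type": "banker"})])
G.add_edges_from([(1, 2), (2, 3), (3, 1)])

# Query: a triangle in which one node is a wildcard (type None matches anything).
Q = nx.Graph()
Q.add_nodes_from([("a", {"type": "banker"}), ("b", {"type": "owner"}),
                  ("c", {"type": None})])
Q.add_edges_from([("a", "b"), ("b", "c"), ("c", "a")])

def node_match(data_attrs, query_attrs):
    # None acts as the wildcard: it matches a node of any type.
    return query_attrs["type"] is None or data_attrs["type"] == query_attrs["type"]

gm = isomorphism.GraphMatcher(G, Q, node_match=node_match)
print(list(gm.subgraph_isomorphisms_iter()))  # data-to-query node mappings
```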

33 citations


Proceedings ArticleDOI
07 Jun 2016
TL;DR: This work demonstrates that a commodity smartwatch can serve as an effective pointing device in ubiquitous display environments, outperforming both comparison systems in error rate for small (high Fitts's ID) targets.
Abstract: We describe the design and evaluation of a freehand, smartwatch-based, mid-air pointing and clicking interaction technique, called Watchpoint. Watchpoint enables a user to point at a target on a nearby large display by moving their arm. It also enables target selection through a wrist rotation gesture. We validate the use of Watchpoint by comparing its performance with two existing techniques: Myopoint, which uses a specialized forearm-mounted motion sensor, and a camera-based (Vicon) motion capture system. We show that Watchpoint is statistically comparable in speed and error rate to both systems and, in fact, outperforms both in terms of error rate for small (high Fitts's ID) targets. Our work demonstrates that a commodity smartwatch can serve as an effective pointing device in ubiquitous display environments.
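
For reference, "high Fitts's ID" refers to the Shannon formulation of the index of difficulty, which grows as targets get smaller or farther away:

```latex
ID = \log_2\!\left(\frac{D}{W} + 1\right)\ \text{bits}
```

where D is the distance to the target and W is its width; a small W at a given D yields a high ID.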

33 citations


Proceedings ArticleDOI
07 Jun 2016
TL;DR: The results show that preventing visual clutter of the 3D scene prevails over gesture anticipation in OctoPocus3D, and that displaying upcoming portions of the gestures allows 8% faster completion times than displaying the complete remaining portions.
Abstract: Dynamic symbolic in-air hand gestures are an increasingly popular means of interaction with smart environments. However, novices need to know what commands are available and which gesture to execute in order to trigger these commands. We propose to adapt OctoPocus, a 2D gesture guiding system, to the case of 3D. The OctoPocus3D guidance system displays a set of 3D gestures as 3D pipes and allows users to understand how the system processes gesture input. Several feedback and feedforward visual alternatives have been proposed in the literature; however, their impact on guidance remains to be evaluated. We report the results of two user experiments that aim at designing OctoPocus3D by exploring these alternatives. The results show that concurrent feedback, which visually simplifies the 3D scene during the execution of the gesture, increases the recognition rate, but only during the first two repetitions. After the first two repetitions, users achieve the same recognition rate with terminal feedback (after the execution of the gesture), concurrent feedback, both, or neither. With respect to feedforward, the overall stability of the 3D scene, explored through the origin of the pipes during the execution of the gestures, does not influence the recognition rate or the execution time. Finally, the results also show that displaying upcoming portions of the gestures allows 8% faster completion times than displaying the complete remaining portions. This indicates that preventing visual clutter of the 3D scene prevails over gesture anticipation.

32 citations


Proceedings ArticleDOI
07 Jun 2016
TL;DR: Using flow charts to provide transparency together with user control is found to have more positive effects on domain-specific quality measures than established text-based approaches or than using either technique in isolation.
Abstract: Targeted advertising reaches users based on various traits, such as demographics or behaviour. However, users are often reluctant to accept ads. We hypothesise that users are more open to targeted advertising if they can inspect, control and thereby understand the process of ad selection. We conducted a between-subjects study (N=200) to investigate to what extent four key aspects of ads (Quality, Behavioural Intention, Understanding and Attitude) may be affected by transparency and user control provided through a flow chart. Our results indicate that positive effects of flow charts reported from other domains may also apply to advertising: using flow charts to provide transparency together with user control has more positive effects on domain-specific quality measures than established text-based approaches or than using either technique in isolation. The paper concludes with recommendations for practitioners aiming to improve user response to ads.

32 citations


Proceedings ArticleDOI
07 Jun 2016
TL;DR: The presented PimVis solution enables a unified organisation of digital and paper documents through the creation of bidirectional links between the digital and physical information spaces, and supports extension with alternative document tracking techniques as well as augmented reality solutions.
Abstract: Over the last decade, we have witnessed an emergence of Personal Information Management (PIM) solutions. Despite the fact that paper documents still form a significant part of our daily working activities, existing PIM systems usually support the organisation and re-finding of digital documents only. While physical document tracking solutions such as RFID- or computer vision-based systems have recently been gaining attention, they usually focus on tracking paper documents and offer limited support for re-finding activities. We present PimVis, a solution for exploring and re-finding digital and paper documents in so-called cross-media information spaces. The PimVis user interface enables a unified organisation of digital and paper documents through the creation of bidirectional links between the digital and physical information spaces. The presented personal cross-media information management solution further supports extension with alternative document tracking techniques as well as augmented reality solutions. A formative PimVis evaluation revealed the high potential of fully integrated cross-media PIM solutions.

30 citations


Proceedings ArticleDOI
07 Jun 2016
TL;DR: A controlled experiment is presented comparing three sets of hand gestures for mid-air browsing and selection in image collections, identified in an elicitation study, using MS Kinect; the results suggest that, from a usability perspective, sideways hand extension should be preferred for browsing image galleries if no other contextual factors apply.
Abstract: Image collections are a common interaction pattern for 2D interfaces; however, mid-air user interaction with collections has received little attention. We present a controlled experiment (within-groups, n=24) comparing three sets of hand gestures for mid-air browsing and selection in image collections, identified in an elicitation study, using MS Kinect. Each set includes cursor-less gestures for browsing (sideways hand extension, wheel, and swipe) and for selection/deselection (hand-up/hand-down). Task success was universal, with high accuracy and few errors for all gestures. Sideways extension outperforms swipe, and perceived effort for this gesture is significantly lower; both gestures outperform wheel. We suggest that, from a usability perspective, sideways hand extension should be preferred for browsing image galleries if no other contextual factors apply. Also, the results of the elicitation study, in which most users proposed the swipe gesture for browsing, were not confirmed by the controlled usability experiment. This suggests combining elicitation studies with rigorous usability testing, especially when gestures for particular user interface design patterns are sought.

29 citations


Proceedings ArticleDOI
07 Jun 2016
TL;DR: A tangible visualization is described that explores the link between the impact of energy feedback on household consumers and the impact of resource demand on energy production, and the paper provides design insights for creating novel eco-feedback visualizations that balance user lifestyles with the desire to influence consumption behaviors and practices.
Abstract: This paper describes a tangible visualization that explores the link between the impact of energy feedback on household consumers and the impact of resource demand on energy production. Specifically, it positions a novel perspective attempting to move beyond the known limitations of current eco-feedback systems, and contributes to enhancing our understanding of how consumers comprehend energy production. The work is informed by a comprehensive study of an installation that displays the ratio of current power generation sources and the percentage of grid renewables. The paper provides design insights for creating novel eco-feedback visualizations that balance user lifestyles with the desire to influence consumption behaviors and practices. Evaluation results show an increase in energy literacy and awareness, and identify strong consumer preferences for simple, representative interfaces and ubiquitous, immediate feedback. Our study shows potential for future eco-feedback scenarios involving distributed energy micro-generation and other inevitable disruptive changes for the energy utility.

25 citations


Proceedings ArticleDOI
07 Jun 2016
TL;DR: This paper describes both the hardware setting and the design of Rift-a-bike, a cycling fitmersive game (immersive games for fitness), and evaluates the effectiveness of different gamification techniques in IVR for supporting physical exercise through a user study.
Abstract: Decreasing hardware costs make it affordable to pair Immersive Virtual Reality (IVR) visors with treadmills and exercise bikes. In this paper, we discuss the application of different gamification techniques in IVR for supporting physical exercise. We describe both the hardware setting and the design of Rift-a-bike, a cycling fitmersive game (an immersive game for fitness). We evaluate the effectiveness of such techniques through a user study, which provides insights into their effectiveness for designing such applications.

24 citations


Book ChapterDOI
07 Jun 2016
TL;DR: An approach to develop an up-to-date reference model that can support advanced visual user interfaces for distributed Big Data Analysis in virtual labs to be used in e-Science, industrial research, and Data Science education is introduced.
Abstract: This paper introduces an approach to develop an up-to-date reference model that can support advanced visual user interfaces for distributed Big Data Analysis in virtual labs to be used in e-Science, industrial research, and Data Science education. The paper introduces and motivates the current situation in this application area as a basis for a corresponding problem statement that is utilized to derive goals and objectives of the approach. Furthermore, the relevant state-of-the-art is revisited and remaining challenges are identified. An exemplar set of use cases, corresponding user stereotypes as well as a conceptual design model to address these challenges are introduced. A corresponding architectural system model is suggested as a conceptual reference architecture to support proof-of-concept implementations as well as to support interoperability in distributed infrastructures. Conclusions and an outlook on future work complete the paper.

Proceedings ArticleDOI
07 Jun 2016
TL;DR: This study investigates how a plant, whose health relies on the plant owner's level of activity, can engage people in tracking and self-reflecting on their fitness data and introduces the Goal Motivation Model, a model considering the diversity of individuals, thus supporting and encouraging a diversity of strategies for accomplishing goals.
Abstract: Motivation is a key factor for introducing and maintaining healthy changes in behaviour. However, typical visualization methods (e.g., bar-, pie-, and line charts) hardly motivate individuals. We investigate how a plant---a living visualization---whose health relies on the plant owner's level of activity, can engage people in tracking and self-reflecting on their fitness data. To address this question, we designed, implemented, and studied Go & Grow, a living plant that receives water proportionally to its owner's activity. Our six-week qualitative study with ten participants suggests that living visualizations have qualities that their digital counterparts do not have: participants felt emotionally connected to their plant, experienced sentiments such as pride and guilt, and felt responsible for it. Based on this study, we introduce the Goal Motivation Model, a model that considers the diversity of individuals, thus supporting and encouraging a diversity of strategies for accomplishing goals.
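
The core mapping, activity translated into a water ration for the plant, can be sketched in a few lines; the step goal and volumes below are illustrative assumptions, not values from the study:

```python
def water_budget_ml(steps_today, daily_goal=8000, max_water_ml=150):
    """Translate the owner's daily activity into the plant's water ration.

    A simple proportional mapping, capped at the plant's daily maximum.
    daily_goal and max_water_ml are hypothetical parameters.
    """
    ratio = min(steps_today / daily_goal, 1.0)
    return round(ratio * max_water_ml)

print(water_budget_ml(5200))  # 65% of the goal -> 98 ml
```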

Proceedings ArticleDOI
07 Jun 2016
TL;DR: A framework for storytelling and learning activities is presented that exploits an immersive virtual reality viewer to interact with target users; the study explores children's reactions to, and acceptance of, the viewer, as well as therapists' ease of use when interacting with the framework.
Abstract: Our research aims at supporting existing therapies for children with intellectual and developmental disabilities (IDD) and with autism spectrum disorders (ASD). Personal and social autonomy is the desired end state, enabling a smooth integration into the real world. We developed and tested a framework for storytelling and learning activities that exploits an immersive virtual reality viewer to interact with target users. Our system uses the Google Cardboard platform to enhance existing therapies for IDD and ASD children, enabling caregivers to supervise and personalize individual therapeutic sessions. In this way, sessions can be adapted to each child's specific needs according to the severity of their disabilities. We co-designed our system with experts from the medical sector, identifying features that allow patients to stay focused on the task to perform. Our approach triggers a learning process for a seamless assimilation of common behavioral skills useful in everyday life. This paper highlights technological challenges in healthcare and discusses cutting-edge interaction paradigms. Among those challenges, we try to identify the best solution to support advanced visual interfaces for an interactive storytelling experience. Furthermore, this work reports preliminary experimental results from a still-ongoing evaluation with IDD and ASD children and discusses the benefits and flaws of our approach. On the one hand, we explore children's reactions to, and acceptance of, the viewer; on the other hand, therapists' ease of use when interacting with our framework. We conclude this paper with a few considerations on our approach.

Proceedings ArticleDOI
07 Jun 2016
TL;DR: A pilot study is introduced comparing two interfaces: one based on the Microsoft Human Interface Guidelines (HIG), a de facto standard in the field, and a novel interface, designed by us, which displays an avatar and does not require any activation gestures to trigger actions.
Abstract: Public displays have lately become ubiquitous thanks to the decreasing cost of such technology and public policies supporting the development of smart cities. Depending on their form factor, such displays may use touchless gestural interfaces, which are therefore increasingly the subject of public and private research. In this paper, we focus on touchless interaction with situated public displays and introduce a pilot study comparing two interfaces: one based on the Microsoft Human Interface Guidelines (HIG), a de facto standard in the field, and a novel interface designed by us. Unlike the HIG-based interface, ours displays an avatar and does not require any activation gestures to trigger actions. Our aim is to study how the two interfaces address so-called interaction blindness --- the inability of users to recognize the interactive capabilities of such displays. According to our pilot study, although they provide different approaches, both interfaces proved effective in the proposed scenario: a public display in a hall inside a university campus building.

Proceedings ArticleDOI
Daniel M. Russell
07 Jun 2016
TL;DR: An analysis shows that the basic causes of low adoption are difficulty of data wrangling and sharing the work products of analysis, the need to share a common visual language literacy across different parts of the organization, and problems in using IV tools to communicate and present complex data analyses.
Abstract: While modern information visualization (IV) has been around for several decades, its inventions seem to be peripheral to everyday work even in the companies most likely to use them. In this case study, Google uses very few IV tools, relying mostly on more traditional ways of looking at data and data relationships. What has brought about this state of affairs? An analysis shows that the basic causes of low adoption are (a) the difficulty of data wrangling and sharing the work products of analysis, (b) the need to share a common visual language literacy across different parts of the organization, and (c) problems in using IV tools to communicate and present complex data analyses. At the same time, IV technology is found to be more useful in the investigation phase of research than for communication and presentation.

Proceedings ArticleDOI
07 Jun 2016
TL;DR: This paper presents a timeline-based system for interactive event visualization, which has been exploited in a proxy-based mobile Web usability evaluation tool, and shows various ways to compare timelines of user sessions with ideal timelines representing optimal behavior.
Abstract: In the field of Web usability evaluation, one potentially effective approach is the use of logging tools for remote usability analysis (i.e. tools capable of tracking and recording users' activities while they interact with a Web site), which then present the recorded data to usability experts in such a way as to support the detection of possible usability problems. In the design of such automated tools, in addition to the problems related to recording user behavior, another important issue is the choice of meaningful visual representations to support the usability expert's analysis. In this paper we present a timeline-based system for interactive event visualization, which has been exploited in a proxy-based mobile Web usability evaluation tool. We discuss how such visualizations can be exploited in finding usability issues and the types of problems that can be detected through them. We show various ways to compare timelines of user sessions with ideal timelines representing optimal behavior.
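
The paper does not prescribe a comparison algorithm, but one simple way to relate a logged session to an ideal timeline is sequence alignment over event labels; the event names below are hypothetical:

```python
from difflib import SequenceMatcher

# Hypothetical event timelines: the ideal task flow vs. a recorded session.
ideal = ["load", "tap:menu", "tap:search", "type:query", "tap:result"]
session = ["load", "scroll", "scroll", "tap:menu", "tap:search",
           "type:query", "tap:back", "tap:result"]

sm = SequenceMatcher(a=ideal, b=session)
print(f"similarity to optimal behavior: {sm.ratio():.2f}")
for op, i1, i2, j1, j2 in sm.get_opcodes():
    if op != "equal":  # deviations from the ideal hint at usability issues
        print(op, ideal[i1:i2], "->", session[j1:j2])
```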

Proceedings ArticleDOI
07 Jun 2016
TL;DR: It is shown that intrapersonal synchronization of limb movements, a relevant feature for assessing coordination of motoric behavior, can also distinguish between full-body movements performed with different expressive qualities, namely rigidity, fluidity, and impulsivity.
Abstract: Intrapersonal synchronization of limb movements is a relevant feature for assessing coordination of motoric behavior. In this paper, we show that it can also distinguish between full-body movements performed with different expressive qualities, namely rigidity, fluidity, and impulsivity. For this purpose, we collected a dataset of movements performed by professional dancers, and annotated the perceived movement qualities with the help of a group of experts in expressive movement analysis. We computed intrapersonal synchronization by applying the Event Synchronization algorithm to the time series of the speed of arms and hands. Results show that movements performed with different qualities display a significantly different amount of intrapersonal synchronization: impulsive movements are the most synchronized, fluid movements show the lowest values of synchronization, and rigid movements lie in between.
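
As an illustration of the analysis pipeline, here is a simplified, symmetric variant of Event Synchronization applied to two speed time series; the event detection threshold, window, and data are assumptions, not the paper's exact formulation:

```python
import numpy as np

def event_times(speed, height):
    """Detect events as local maxima of a speed time series above a threshold."""
    s = np.asarray(speed)
    peaks = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) & (s[1:-1] > height)
    return np.flatnonzero(peaks) + 1

def event_synchronization(tx, ty, tau=3):
    """Simplified symmetric Event Synchronization: near 1 when events co-occur."""
    c = sum(1 for a in tx for b in ty if abs(a - b) <= tau)
    denom = np.sqrt(len(tx) * len(ty))
    return c / denom if denom else 0.0

# Hypothetical speed profiles of the left and right hand.
t = np.linspace(0, 10, 200)
left, right = np.abs(np.sin(t)), np.abs(np.sin(t + 0.1))
q = event_synchronization(event_times(left, 0.9), event_times(right, 0.9))
print(f"intrapersonal synchronization: {q:.2f}")
```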

Proceedings Article
01 Jan 2016
TL;DR: Preliminary results show that most of the subjects were able to interact properly with the system from the very first use, and that the emotional module is an interesting solution, even though further work must be devoted to addressing specific situations.
Abstract: New technologies for innovative interactive experiences represent a powerful medium for delivering cultural heritage content to a wider range of users. Among them, Natural User Interfaces (NUI), i.e. non-intrusive technologies that do not require the user to wear devices or use external hardware (e.g. keys or trackballs), are considered a promising way to broaden the audience of specific cultural heritage domains, like the navigation of and interaction with digital artworks presented on wall-sized displays. Starting from a collaboration with a world-famous Italian designer, we defined a NUI to explore 360-degree panoramic artworks presented on wall-sized displays, such as virtual reconstructions of ancient cultural sites or renderings of imaginary places. Specifically, we let users "move their head" as a natural way of interacting to explore and navigate these large digital artworks. To this aim, we developed a system including a remote head pose estimator that captures the movements of users standing in front of the wall-sized display: starting from a central comfort zone, as users move their head in any direction, the virtual camera rotates accordingly. With NUIs, it is difficult to get feedback from users about their interest in the point of the artwork they are looking at. To solve this issue, we complemented the gaze estimator with a preliminary emotional analysis solution, able to implicitly infer the user's interest in the shown content from his/her pupil size. A sample of 150 subjects was invited to experience the proposed interface at an International Design Week. Preliminary results show that most of the subjects were able to interact properly with the system from the very first use, and that the emotional module is an interesting solution, even though further work must be devoted to addressing specific situations.
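
The comfort-zone interaction maps head rotation beyond a central dead zone to camera rotation; a minimal sketch with hypothetical dead-zone and gain parameters:

```python
def camera_rotation_step(head_yaw, head_pitch, dead_zone=5.0, gain=0.8):
    """Map an estimated head pose (in degrees) to an incremental camera rotation.

    Inside the central comfort zone the camera stays still; beyond it, the
    virtual camera rotates in the direction the head is turned. dead_zone
    and gain are illustrative values, not taken from the paper.
    """
    def step(angle):
        excess = abs(angle) - dead_zone
        return 0.0 if excess <= 0 else gain * excess * (1.0 if angle > 0 else -1.0)
    return step(head_yaw), step(head_pitch)

# Head turned 12 degrees right and 2 degrees up: the camera pans right only.
print(camera_rotation_step(12.0, 2.0))  # approx. (5.6, 0.0)
```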

Proceedings ArticleDOI
07 Jun 2016
TL;DR: A "smart" stuffed dolphin called SAM that engages children in a variety of play tasks and can be customized by therapists to address the specific needs of each child.
Abstract: Our research aims at helping children with intellectual disability (ID) to "learn through play" by interacting with digitally enriched physical toys. Inspired by the practice of Dolphin Therapy (a special form of Pet Therapy) and, specifically, by the activities that ID children perform at Dolphinariums, we have developed a "smart" stuffed dolphin called SAM that engages children in a variety of play tasks. SAM emits different stimuli (sound, vibration, and light) with its body in response to children's manipulation. Its behavior is integrated with lights and multimedia animations or video displayed in the ambient environment, and can be customized by therapists to address the specific needs of each child.

Proceedings ArticleDOI
07 Jun 2016
TL;DR: Tiles is composed of a set of physical input/output primitives describing interaction styles with technology-augmented objects, and extensible hardware modules, easily embeddable in everyday things, that implement the primitives.
Abstract: We present the groundwork for Tiles: an inventor toolbox to support the development of interactive objects by non-experts. Tiles is composed of (i) a set of physical input/output primitives describing interaction styles with technology-augmented objects, (ii) extensible hardware modules, easily embeddable in everyday things, that implement the primitives, and (iii) APIs for coding application logic using popular programming languages. We are currently exploring the opportunities of using Tiles to develop applications for learning, games and advanced visual interfaces.

Proceedings ArticleDOI
07 Jun 2016
TL;DR: YouTouch!, a system that tracks users in front of an interactive display wall and associates touches with users, is presented; it requires neither user instrumentation nor custom hardware, and has no registration or learning phase.
Abstract: We present YouTouch!, a system that tracks users in front of an interactive display wall and associates touches with users. With their large size, display walls are inherently suitable for multi-user interaction. However, current touch recognition technology does not distinguish between users, making it hard to provide personalized user interfaces or access to private data. In our system, we place a commodity RGB + depth camera in front of the wall, allowing us to track users and correlate them with touch events. While the camera's driver is able to track people, it loses a user's ID whenever she is occluded or leaves the scene. In these cases, we re-identify the person by means of a descriptor comprising color histograms of body parts and skeleton-based biometric measurements. Additional processing reliably handles short-term occlusion as well as the assignment of touches to occluded users. YouTouch! requires neither user instrumentation nor custom hardware, and there is no registration or learning phase. Our system was thoroughly tested with data sets comprising 81 people, demonstrating its ability to re-identify users and correlate them with touches even under adverse conditions.
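
The re-identification descriptor can be pictured as per-body-part color histograms plus a vector of biometric measurements, matched by a weighted distance; the distance choices, weights, and threshold below are illustrative, not the paper's exact formulation:

```python
import numpy as np

def histogram_distance(h1, h2):
    """Chi-square distance between two normalized color histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    denom = h1 + h2
    mask = denom > 0
    return 0.5 * np.sum((h1[mask] - h2[mask]) ** 2 / denom[mask])

def descriptor_distance(d1, d2, w_color=1.0, w_bio=1.0):
    """Combine per-body-part histogram distances with biometric differences.

    Each descriptor is {"hists": [one histogram per body part],
    "bio": [limb lengths, height, ...]}; the weights are illustrative.
    """
    color = np.mean([histogram_distance(a, b)
                     for a, b in zip(d1["hists"], d2["hists"])])
    bio = np.linalg.norm(np.asarray(d1["bio"]) - np.asarray(d2["bio"]))
    return w_color * color + w_bio * bio

def reidentify(unknown, known_users, threshold=1.5):
    """Return the ID of the closest stored descriptor, if it is close enough."""
    uid, dist = min(((uid, descriptor_distance(unknown, d))
                     for uid, d in known_users.items()), key=lambda p: p[1])
    return uid if dist < threshold else None
```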

Proceedings ArticleDOI
07 Jun 2016
TL;DR: AmI@Home is a collaborative system prototype for smart home management and configuration based on event-condition-action rules; rule construction occurs through gamification mechanisms supporting social interaction, collaboration and competition to engage all family members in shaping their smart home.
Abstract: This paper describes AmI@Home, a collaborative system prototype for smart home management and configuration. In particular, the system is based on event-condition-action rules. Rule construction and manipulation occur through gamification mechanisms supporting social interaction, collaboration and competition, in order to engage all family members in shaping their smart home.
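
An event-condition-action rule pairs a triggering event with a guard and an effect; a minimal sketch of the pattern (the rule content and context fields are hypothetical):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaRule:
    """One event-condition-action rule."""
    event: str                         # triggering event, e.g. "motion_detected"
    condition: Callable[[dict], bool]  # guard evaluated against the home context
    action: Callable[[dict], None]     # effect executed when the guard holds

rules = [
    EcaRule(event="motion_detected",
            condition=lambda ctx: ctx["hour"] >= 22 and not ctx["lights_on"],
            action=lambda ctx: print("Turning hallway lights on")),
]

def dispatch(event: str, ctx: dict) -> None:
    """Fire every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule.event == event and rule.condition(ctx):
            rule.action(ctx)

dispatch("motion_detected", {"hour": 23, "lights_on": False})
```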

Proceedings ArticleDOI
07 Jun 2016
TL;DR: A novel framework is proposed that allows designers without 3D-modelling experience to draw three-dimensional panoramic sketches by hand with the help of support lines.
Abstract: A novel framework is proposed that allows designers without 3D-modelling experience to draw three-dimensional panoramic sketches by hand with the help of support lines. Sketches are viewed with panoramic viewing software, giving observers interactive, three-dimensional, 360-degree immersive experiences.

Proceedings ArticleDOI
07 Jun 2016
TL;DR: VR GREP is presented, an immersive End User Development (EUD) tool that supports authoring and modifying general-purpose immersive VR environments without requiring technical knowledge.
Abstract: In this paper we present VR GREP, an immersive End User Development (EUD) tool that supports authoring and modifying general-purpose immersive VR environments without requiring technical knowledge. The system aims to facilitate exploring the potential of immersive tools to support situated design practices.

Proceedings ArticleDOI
07 Jun 2016
TL;DR: A series of interaction techniques in a fully immersive Virtual Reality environment is presented, specifically designed to cover basic needs of a virtual museum experience, such as navigating the museum space and accessing meta-information associated with displayed items.
Abstract: Virtual museums are one of the most interesting applications of Virtual Reality, but their success strongly depends on the development of effective interaction techniques allowing natural and fast exploration of their contents. In this paper, a series of interaction techniques in a fully immersive Virtual Reality environment is presented. These techniques are specifically designed to cover basic needs of a virtual museum experience, such as navigating the museum space and accessing meta-information associated with displayed items. Details of the implemented methods, and preliminary results collected in user tests performed to compare different navigation choices, are presented and discussed.

Proceedings ArticleDOI
07 Jun 2016
TL;DR: It is shown that free-arm gestural input as sensed by a smartwatch exhibits efficiency similar to that of free-arm gestural input sensed by motion capture systems, and that the smartwatch inertial measurement unit can support text input on ubiquitous computing displays.
Abstract: One challenge with modern smartwatches is text input. In this paper we explore the use of gestural interaction with a smartwatch to support text input. The inertial measurement unit of a smartwatch is used to capture a user's gestural interaction, and an external display is used to provide feedback. We examine two specific variants of gesture keyboards: the swype keyboard common on modern smartphones, and the cirrin keyboard, a gestural keyboard that supports character input via directional gestures. We show, first, that free-arm gestural input as sensed by a smartwatch exhibits efficiency similar to that of free-arm gestural input sensed by motion capture systems. We also show that the smartwatch inertial measurement unit can support text input on ubiquitous computing displays.
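
Directional character entry of the cirrin kind reduces to mapping a gesture's direction onto an angular sector of a character ring; the ring layout below is a plain alphabetical placeholder, not cirrin's actual arrangement:

```python
import math
import string

RING = string.ascii_lowercase   # placeholder ordering, not cirrin's layout
SECTOR = 360.0 / len(RING)      # angular width owned by each character

def char_from_gesture(dx, dy):
    """Map a directional gesture (displacement dx, dy) to a character."""
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    return RING[int(angle // SECTOR)]

# A gesture pointing roughly "east" falls in the first sector.
print(char_from_gesture(1.0, 0.05))  # 'a'
```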

Proceedings ArticleDOI
07 Jun 2016
TL;DR: This work proposes an approach based on a catalogue of reusable narrative and interaction strategies with step-by-step instructions on how to adapt and instantiate them for a specific museum and type of visitors.
Abstract: It is of paramount importance that cultural heritage professionals are directly involved in the design of digitally augmented experiences in their museum spaces. We propose an approach based on a catalogue of reusable narrative and interaction strategies with step-by-step instructions on how to adapt and instantiate them for a specific museum and type of visitors. This work is conducted in the context of the European-funded project meSch.

Proceedings ArticleDOI
07 Jun 2016
TL;DR: The VVH workshop focuses on the role of interactive data visualization tools by which people can make sense of healthcare data, which include sensor data, the messages exchanged in social media, the emails between patients and their doctors, the content of patient records as well as the discussions among different specialists that led to such record content.
Abstract: Big data analytics in healthcare would be almost useless without suitable tools allowing users to "see" the data and gain insight for their situated decisions. The VVH (Valuable Visualization in Healthcare) workshop focuses on the role of interactive data visualization tools by which people can make sense of healthcare data; these data include sensor data, the messages exchanged in social media, the emails between patients and their doctors, the content of patient records, as well as the discussions among different specialists that led to such record content. All these data are used by different types of users, such as doctors, nurses, policy makers and ordinary citizens. The VVH workshop aims to contribute to: the assessment of the usability of advanced interactive tools for health-related data visualization; the assessment of the quality of the information and value for insight that these tools make available to their users; the collection of reports of either success stories or failures in the appropriation and use of complex and multidimensional healthcare datasets; and the collection of methodological and design-oriented contributions that share methods, techniques, and heuristics for the design of interactive tools and applications supporting data work, data telling and data interpretation in healthcare.

Proceedings ArticleDOI
07 Jun 2016
TL;DR: It is found that gist-forming ability is remarkably resistant to changes in visual representation, though it deteriorates with topics of lower quality.
Abstract: As topic modeling has grown in popularity, tools for visualizing the process have become increasingly common. Though these tools support a variety of different tasks, they generally have a view or module that conveys the contents of an individual topic. These views support the important task of gist-forming: helping the user build a cohesive overall sense of the topic's semantic content that can be generalized outside the specific subset of words that are shown. There are a number of factors that affect these views, including the visual encoding used, the number of topic words included, and the quality of the topics themselves. To our knowledge, there has been no formal evaluation comparing the ways in which these factors might change users' interpretations. In a series of crowdsourced experiments, we sought to compare features of visual topic representations in their suitability for gist-forming. We found that gist-forming ability is remarkably resistant to changes in visual representation, though it deteriorates with topics of lower quality.

Proceedings ArticleDOI
07 Jun 2016
TL;DR: For the tasks tested, thick objects are faster but less accurate to operate, and their graspability is used only occasionally; coarse manipulation of multiple thin objects is found to be error-prone, an issue that only thick objects may help alleviate.
Abstract: We introduce a novel method based on physical proxies for investigating fundamental differences between touch and tangible interfaces. This method uses physical chips to emulate the flat, non-graspable objects that make up touch interfaces, in a way that supports direct comparison with tangible interfaces. We ran an experiment to test the effect of object thickness on participants' behavior, performance and subjective experience in spatial rearrangement tasks. We found that for the tasks tested, thick objects are faster but less accurate to operate, and that their graspability is used only occasionally. We also found that coarse manipulation of multiple thin objects is error-prone, an issue that only thick objects may help alleviate.