
Showing papers presented at "Advanced Visual Interfaces in 2012"


Proceedings ArticleDOI
21 May 2012
TL;DR: A novel saliency measure for selecting relevant terms and a seriation algorithm that both reveals clustering structure and promotes the legibility of related terms are contributed to Termite, a visual analysis tool for assessing topic model quality.
Abstract: Topic models aid analysis of text corpora by identifying latent topics based on co-occurring words. Real-world deployments of topic models, however, often require intensive expert verification and model refinement. In this paper we present Termite, a visual analysis tool for assessing topic model quality. Termite uses a tabular layout to promote comparison of terms both within and across latent topics. We contribute a novel saliency measure for selecting relevant terms and a seriation algorithm that both reveals clustering structure and promotes the legibility of related terms. In a series of examples, we demonstrate how Termite allows analysts to identify coherent and significant themes.
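The saliency measure above weights a term's overall probability by how distinctively it is distributed across topics. A minimal sketch of this idea (a simplified reading of the paper's measure, using KL divergence between a term's topic distribution and the overall topic distribution):

```python
import math

def term_saliency(p_word, p_topic_given_word, p_topic):
    """Saliency of a term: its overall probability times its
    distinctiveness, where distinctiveness is the KL divergence
    between the term's topic distribution P(T|w) and the
    corpus-wide topic distribution P(T)."""
    distinctiveness = sum(
        ptw * math.log(ptw / pt)
        for ptw, pt in zip(p_topic_given_word, p_topic)
        if ptw > 0
    )
    return p_word * distinctiveness

# Given equal overall probability, a term concentrated in one topic
# is more salient than a term spread evenly across topics.
uniform = [0.25, 0.25, 0.25, 0.25]
focused = [0.85, 0.05, 0.05, 0.05]
assert term_saliency(0.01, focused, uniform) > term_saliency(0.01, uniform, uniform)
```

A term whose topic distribution matches the corpus-wide distribution has zero distinctiveness, so it is filtered out regardless of how frequent it is.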

370 citations


Proceedings ArticleDOI
21 May 2012
TL;DR: Profiler is presented, a visual analysis tool for assessing quality issues in tabular data, which applies data mining methods to automatically flag problematic data and suggests coordinated summary visualizations for assessing the data in context.
Abstract: Data quality issues such as missing, erroneous, extreme and duplicate values undermine analysis and are time-consuming to find and fix. Automated methods can help identify anomalies, but determining what constitutes an error is context-dependent and so requires human judgment. While visualization tools can facilitate this process, analysts must often manually construct the necessary views, requiring significant expertise. We present Profiler, a visual analysis tool for assessing quality issues in tabular data. Profiler applies data mining methods to automatically flag problematic data and suggests coordinated summary visualizations for assessing the data in context. The system contributes novel methods for integrated statistical and visual analysis, automatic view suggestion, and scalable visual summaries that support real-time interaction with millions of data points. We present Profiler's architecture --- including modular components for custom data types, anomaly detection routines and summary visualizations --- and describe its application to motion picture, natural disaster and water quality data sets.

235 citations


Proceedings ArticleDOI
21 May 2012
TL;DR: This paper presents two systems specifically designed for 3D gestural user interaction on 3D geographical maps that rely on two consumer technologies both capable of motion tracking: the Nintendo Wii and the Microsoft Kinect devices.
Abstract: The recent diffusion of advanced controllers, initially designed for home game consoles, has been rapidly followed by the release of proprietary or third-party PC drivers and SDKs suitable for implementing new forms of 3D user interfaces based on gestures. Exploiting the devices currently available on the game market, it is now possible to enrich, with low-cost motion capture, user interaction with desktop computers by building new forms of natural interfaces and new action metaphors that add the third dimension as well as a physical extension to interaction with users. This paper presents two systems specifically designed for 3D gestural user interaction on 3D geographical maps. The proposed applications rely on two consumer technologies, both capable of motion tracking: the Nintendo Wii and the Microsoft Kinect devices. The work also evaluates, in terms of subjective usability and perceived sense of Presence and Immersion, the effects on users of the two different controllers and of the 3D navigation metaphors adopted. Results are encouraging and reveal that users feel deeply immersed in the 3D dynamic experience, that the gestural interfaces quickly bring interaction from novice to expert style, and that they enrich the synthetic nature of the explored environment by exploiting user physicality.

95 citations


Proceedings ArticleDOI
21 May 2012
TL;DR: A multidimensional framework for context-aware systems to address this challenge, transcending existing frameworks that limited their concerns to particular aspects of context-awareness and paid little attention to potential pitfalls.
Abstract: Based on the assumption that the scarce resource for many people in the world today is not information but human attention, the challenge for future human-centered computer systems is not to deliver more information "to anyone, at anytime, and from anywhere," but to provide "the 'right' information, at the 'right' time, in the 'right' place, in the 'right' way, to the 'right' person." This article develops a multidimensional framework for context-aware systems to address this challenge, transcending existing frameworks that limited their concerns to particular aspects of context-awareness and paid little attention to potential pitfalls. The framework is based on insights derived from the development and assessment of a variety of different systems that we have developed over the last twenty years to explore different dimensions of context awareness. Specific challenges, guidelines, and design trade-offs (promises and pitfalls) are derived from the framework for designing the next generation of context-aware systems. These systems will support advanced interactions for assisting humans (individuals and groups) to become more knowledgeable, more productive, and more creative by emphasizing context awareness as a fundamental design requirement.

92 citations


Proceedings ArticleDOI
21 May 2012
TL;DR: A study investigating how different configurations of input and output across displays affect performance, subjective workload and preferences in map, text and photo search tasks gives recommendations for the design of distributed user interfaces.
Abstract: Attaching a large external display can help a mobile device user view more content at once. This paper reports on a study investigating how different configurations of input and output across displays affect performance, subjective workload and preferences in map, text and photo search tasks. Experimental results show that a hybrid configuration where visual output is distributed across displays is worst or equivalent to worst in all tasks. A mobile device-controlled large display configuration performs best in the map search task and equal to best in text and photo search tasks (tied with a mobile-only configuration). After conducting a detailed analysis of the performance differences across different UI configurations, we give recommendations for the design of distributed user interfaces.

65 citations


Proceedings ArticleDOI
21 May 2012
TL;DR: It is shown that motion gestures result in significantly less time looking at the smartphone during walking than does tapping on the screen, even with interfaces optimized for eyes-free input, and there may be benefits to making use of motion gestures as a modality for distracted input on smartphones.
Abstract: Smartphones are frequently used in environments where the user is distracted by another task, for example by walking or by driving. While the typical interface for smartphones involves hardware and software buttons and surface gestures, researchers have recently posited that, for distracted environments, benefits may exist in using motion gestures to execute commands. In this paper, we examine the relative cognitive demands of motion gestures and surface taps and gestures in two specific distracted scenarios: a walking scenario, and an eyes-free seated scenario. We show, first, that there is no significant difference in reaction time for motion gestures, taps, or surface gestures on smartphones. We further show that motion gestures result in significantly less time looking at the smartphone during walking than does tapping on the screen, even with interfaces optimized for eyes-free input. Taken together, these results show that, despite somewhat lower throughput, there may be benefits to making use of motion gestures as a modality for distracted input on smartphones.

54 citations


Proceedings ArticleDOI
21 May 2012
TL;DR: GraphPrism is presented, a technique for visually summarizing arbitrarily large graphs through combinations of 'facets', each corresponding to a single node- or edge-specific metric (e.g., transitivity).
Abstract: Visual methods for supporting the characterization, comparison, and classification of large networks remain an open challenge. Ideally, such techniques should surface useful structural features -- such as effective diameter, small-world properties, and structural holes -- not always apparent from either summary statistics or typical network visualizations. In this paper, we present GraphPrism, a technique for visually summarizing arbitrarily large graphs through combinations of 'facets', each corresponding to a single node- or edge-specific metric (e.g., transitivity). We describe a generalized approach for constructing facets by calculating distributions of graph metrics over increasingly large local neighborhoods and representing these as a stacked multi-scale histogram. Evaluation with paper prototypes shows that, with minimal training, static GraphPrism diagrams can aid network analysis experts in performing basic analysis tasks with network data. Finally, we contribute the design of an interactive system using linked selection between GraphPrism overviews and node-link detail views. Using a case study of data from a co-authorship network, we illustrate how GraphPrism facilitates interactive exploration of network data.
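The facet construction described above — distributions of a graph metric over increasingly large local neighborhoods — can be sketched with stdlib BFS. This is an illustrative simplification: the paper's metrics (e.g., transitivity) are swapped here for node degree, and the result is the count matrix behind a stacked multi-scale histogram rather than a rendered diagram.

```python
from collections import deque

def neighborhood(adj, start, radius):
    """Nodes within `radius` hops of `start` (including start), via BFS."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == radius:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return seen

def multiscale_histogram(adj, metric, radii, bins):
    """For each radius, histogram the per-node metric over every node's
    neighborhood, summed across nodes -- one histogram row per scale."""
    rows = []
    for r in radii:
        counts = [0] * len(bins)
        for node in adj:
            for other in neighborhood(adj, node, r):
                value = metric(other)
                for i, (lo, hi) in enumerate(bins):
                    if lo <= value < hi:
                        counts[i] += 1
                        break
        rows.append(counts)
    return rows

# Toy graph: a path 0-1-2-3; the faceted metric is node degree.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
degree = lambda n: len(adj[n])
rows = multiscale_histogram(adj, degree, radii=[0, 1, 2], bins=[(1, 2), (2, 3)])
```

Each row of `rows` is one scale of the stacked histogram; as the radius grows, every node's neighborhood converges to the whole graph and the rows converge to multiples of the global metric distribution.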

53 citations


Proceedings ArticleDOI
21 May 2012
TL;DR: A novel style of mobile interaction based on mid-air gestures in proximity of the device to increase the number of DOFs and alleviate the limitations of touch interaction with mobile devices is proposed.
Abstract: Rotating 3D objects is a difficult task on mobile devices, because the task requires 3 degrees of freedom and (multi-)touch input only allows for an indirect mapping. We propose a novel style of mobile interaction based on mid-air gestures in proximity of the device to increase the number of DOFs and alleviate the limitations of touch interaction with mobile devices. While one hand holds the device, the other hand performs mid-air gestures in proximity of the device to control 3D objects on the mobile device's screen. A flat hand pose defines a virtual surface which we refer to as the PalmSpace for precise and intuitive 3D rotations. We constructed several hardware prototypes to test our interface and to simulate possible future mobile devices equipped with depth cameras. We conducted a user study to compare 3D rotation tasks using the most promising two designs for the hand location during interaction -- behind and beside the device -- with the virtual trackball, which is the current state-of-the-art technique for orientation manipulation on touch-screens. Our results show that both variants of PalmSpace have significantly lower task completion times in comparison to the virtual trackball.

52 citations


Proceedings ArticleDOI
21 May 2012
TL;DR: This paper delves into the interactive techniques afforded by variant use of link curvature, delineating a six-dimensional design space that is populated by four families of interactive techniques: bundling, fanning, magnets, and legends.
Abstract: When exploiting the power of node-link diagrams to represent real-world data such as web structures, airline routes, electrical, telecommunication and social networks, link congestion frequently arises. Such areas in the diagram---with dense, overlapping links---are not readable: connectivity, node shapes, labels, and contextual information are obscured. In response, graph-layout research has begun to consider the modification of link shapes with techniques such as link routing and bundling. In this paper, we delve into the interactive techniques afforded by variant use of link curvature, delineating a six-dimensional design space that is populated by four families of interactive techniques: bundling, fanning, magnets, and legends. Our taxonomy encompasses existing techniques and reveals several novel link interactions. We describe the implementation of these techniques and illustrate their potential for exploring dense graphs with multiple types of links.

50 citations


Proceedings ArticleDOI
Alex Endert1, Seth Fox1, Dipayan Maiti1, Scotland Leman1, Chris North1 
21 May 2012
TL;DR: A deeper understanding of how users spatially cluster information can inform the design of interactive algorithms to generate more meaningful spatializations for text analysis tasks, to better respond to user interactions during the analytics process, and ultimately to allow analysts to more rapidly gain insight.
Abstract: Analyzing complex textual datasets consists of identifying connections and relationships within the data based on users' intuition and domain expertise. In a spatial workspace, users can do so implicitly by spatially arranging documents into clusters to convey similarity or relationships. Algorithms exist that spatialize and cluster such information mathematically based on similarity metrics. However, analysts often find inconsistencies in these generated clusters based on their expertise. Therefore, to support sensemaking, layouts must be co-created by the user and the model. In this paper, we present the results of a study observing individual users performing a sensemaking task in a spatial workspace. We examine the users' interactions during their analytic process, and also the clusters the users manually created. We found that specific interactions can act as valuable indicators of important structure within a dataset. Further, we analyze and characterize the structure of the user-generated clusters to identify useful metrics to guide future algorithms. Through a deeper understanding of how users spatially cluster information, we can inform the design of interactive algorithms to generate more meaningful spatializations for text analysis tasks, to better respond to user interactions during the analytics process, and ultimately to allow analysts to more rapidly gain insight.

48 citations


Proceedings ArticleDOI
21 May 2012
TL;DR: Whether multi-touch instead of mouse input improves users' spatial memory and navigation performance for spatial panning or zooming & panning user interfaces for such UIs is studied.
Abstract: Recent findings from Embodied Cognition reveal strong effects of arm and hand movement on spatial memory. This suggests that input devices may have a far greater influence on users' cognition and users' ability to master a system than we typically believe -- especially for spatial panning or zooming & panning user interfaces. We conducted two experiments to observe whether multi-touch instead of mouse input improves users' spatial memory and navigation performance for such UIs. We observed increased performances for panning UIs but not for zooming & panning UIs. We present our results, provide initial explanations and discuss opportunities and pitfalls for interaction designers.

Proceedings ArticleDOI
21 May 2012
TL;DR: An environment able to support users in seamless access to Web applications in multi-device contexts and mechanisms for sharing information regarding devices, users, and Web applications with various levels of privacy are presented.
Abstract: In this work we present an environment able to support users in seamless access to Web applications in multi-device contexts. The environment supports dynamic push and pull of interactive Web applications, or parts of them, across desktop and mobile devices while preserving their state. We describe mechanisms for sharing information regarding devices, users, and Web applications with various levels of privacy and report on first experiences with the proposed environment.

Proceedings ArticleDOI
21 May 2012
TL;DR: This paper introduces FatFonts, a technique for visualizing quantitative data that bridges the gap between numeric and visual representations that is based on Arabic numerals but, unlike regular numeric typefaces, the amount of ink used for each digit is proportional to its quantitative value.
Abstract: In this paper we explore numeric typeface design for visualization purposes. We introduce FatFonts, a technique for visualizing quantitative data that bridges the gap between numeric and visual representations. FatFonts are based on Arabic numerals but, unlike regular numeric typefaces, the amount of ink (dark pixels) used for each digit is proportional to its quantitative value. This enables accurate reading of the numerical data while preserving an overall visual context. We discuss the challenges of this approach that we identified through our design process and propose a set of design goals that include legibility, familiarity, readability, spatial precision, dynamic range, and resolution. We contribute four FatFont typefaces that are derived from our exploration of the design space that these goals introduce. Finally, we discuss three example scenarios that show how FatFonts can be used for visualization purposes as valuable representation alternatives.
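The core idea — dark-pixel count proportional to a digit's value — can be illustrated with a crude stand-in glyph. This sketch only budgets the ink; real FatFonts shape those pixels into a legible Arabic numeral, which is the hard typographic part of the paper.

```python
def fatfont_glyph(digit, size=6):
    """A crude stand-in for a FatFont glyph: a size x size cell whose
    number of dark pixels ('#') is proportional to the digit's value.
    Digit 9 fills the cell completely; digit 0 is blank.
    (Real FatFonts arrange the dark pixels into the numeral's shape.)"""
    total = size * size
    dark = round(total * digit / 9)
    cells = ['#'] * dark + ['.'] * (total - dark)
    return [''.join(cells[r * size:(r + 1) * size]) for r in range(size)]

def ink(glyph):
    """Dark-pixel count of a glyph."""
    return sum(row.count('#') for row in glyph)

# Ink ratios mirror numeric ratios, so a grid of such glyphs
# reads at a distance like a grayscale heatmap of the data.
assert ink(fatfont_glyph(8)) == 2 * ink(fatfont_glyph(4))
```

Tiling `fatfont_glyph` over a matrix of single digits gives the dual reading the paper targets: exact values up close, an intensity map from afar.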

Proceedings ArticleDOI
21 May 2012
TL;DR: In this article, the authors describe some of the insights found from a 5-year mission to all corners of science fiction, and describe how designers can learn from these examples and apply them to real world interfaces.
Abstract: Interfaces from science fiction films and television offer lessons to interaction designers and other developers of real world interfaces that are humorous, prophetic, inspiring, and practical. Science fiction interfaces are more than fun. They reflect current interface understandings on the part of developers and expectations on the part of users. Production designers are allowed to develop "blue-sky" examples that, while lacking rigorous development with users, coalesce influential examples for practicing designers. Interaction designers can learn from these examples. This presentation will describe some of the insights found from a 5-year mission to all corners of science fiction.

Proceedings ArticleDOI
21 May 2012
TL;DR: The idea of tag clouds is exploited to visually analyze microblog content by an interactive visualization technique that combines colored histograms with visual highlighting of co-occurrences, thus allowing for a time-dependent analysis of term relations.
Abstract: The vast amount of contents posted to microblogging services each day offers a rich source of information for analytical tasks. The aggregated posts provide a broad sense of the informal conversations complementing other media. However, analyzing the textual content is challenging due to its large volume, heterogeneity, and time-dependence. In this paper, we exploit the idea of tag clouds to visually analyze microblog content. As a major contribution, tag clouds are extended by an interactive visualization technique that we refer to as time-varying co-occurrence highlighting. It combines colored histograms with visual highlighting of co-occurrences, thus allowing for a time-dependent analysis of term relations. An example dataset of Twitter posts illustrates the applicability and usefulness of the approach.
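The data behind the time-varying co-occurrence highlighting can be sketched as a per-term, per-time-bin count keyed to a focus term. This is a minimal stdlib illustration of the underlying aggregation, not the paper's system; the posts, terms, and bin layout are invented for the example.

```python
from collections import defaultdict

def cooccurrence_histograms(posts, focus, num_bins, t_min, t_max):
    """For a chosen focus term, count per-time-bin occurrences of every
    term appearing in the same post -- the counts behind a colored
    histogram attached to each co-occurring term in the tag cloud."""
    width = (t_max - t_min) / num_bins
    hist = defaultdict(lambda: [0] * num_bins)
    for t, words in posts:
        if focus not in words:
            continue
        b = min(int((t - t_min) / width), num_bins - 1)  # clamp t_max into last bin
        for w in set(words):
            if w != focus:
                hist[w][b] += 1
    return dict(hist)

# Hypothetical time-stamped posts (timestamp, terms).
posts = [
    (0, ["olympics", "london"]),
    (5, ["olympics", "gold"]),
    (9, ["weather"]),
]
hist = cooccurrence_histograms(posts, "olympics", num_bins=2, t_min=0, t_max=10)
```

Rendering each term's row as a small colored histogram next to it in the cloud shows *when* it co-occurred with the focus term, which is the time-dependent analysis of term relations the abstract describes.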

Proceedings ArticleDOI
21 May 2012
TL;DR: SpeeG, a multimodal speech- and body gesture-based text input system targeting media centres, set-top boxes and game consoles, is presented and represents a promising candidate for future controller-free text input.
Abstract: We present SpeeG, a multimodal speech- and body gesture-based text input system targeting media centres, set-top boxes and game consoles. Our controller-free zoomable user interface combines speech input with a gesture-based real-time correction of the recognised voice input. While the open source CMU Sphinx voice recogniser transforms speech input into written text, Microsoft's Kinect sensor is used for the hand gesture tracking. A modified version of the zoomable Dasher interface combines the input from Sphinx and the Kinect sensor. In contrast to existing speech error correction solutions with a clear distinction between a detection and correction phase, our innovative SpeeG text input system enables continuous real-time error correction. An evaluation of the SpeeG prototype has revealed that low error rates for a text input speed of about six words per minute can be achieved after a minimal learning phase. Moreover, in a user study SpeeG has been perceived as the fastest of all evaluated user interfaces and therefore represents a promising candidate for future controller-free text input.

Proceedings ArticleDOI
21 May 2012
TL;DR: It is found that the recognizable angles of the face-shaped screen were larger, and that the recognition of the head directions was better than on a flat 2D screen, resolving the problem of the Mona Lisa effect.
Abstract: We propose a telepresence system with a real human face-shaped screen. This system tracks the remote user's face and extracts the head motion and the face image. The face-shaped screen moves with three degrees of freedom (DOF), reflecting the user's head gestures. As the face-shaped screen is molded based on the 3D-shape scan data of the user, the projected image is accurate even when it is seen from different angles. We expect this system can accurately convey the user's nonverbal communication, in particular the user's gaze direction in 3D space that is not correctly transmitted by using a 2D screen (which is known as "the Mona Lisa effect"). To evaluate how this system can contribute to the communication, we conducted three experiments. The first one examines the blind angle of a face-shaped screen and a flat screen, and compares the ease with which users can distinguish facial expressions. The second one evaluates how the direction in which the remote user's face points can be correctly transmitted. The third experiment evaluates how the gaze direction can be correctly transmitted. We found that the recognizable angles of the face-shaped screen were larger, and that the recognition of the head directions was better than on a flat 2D screen. More importantly, we found that the face-shaped screen accurately conveyed the gaze direction, resolving the problem of the Mona Lisa effect.

Proceedings ArticleDOI
21 May 2012
TL;DR: A framework of five factors that determine the intuition of multi-touch interactions, including direct manipulation, physics, feedback, previous knowledge, and physical motion is constructed.
Abstract: Intuition is an important yet ill-defined factor when designing effective multi-touch interactions. Throughout the research community, there is a lack of consensus regarding both the nature of intuition and, more importantly, how to systematically incorporate it into the design of multi-touch gestural interactions. To strengthen our understanding of intuition, we surveyed various domains to determine the level of consensus among researchers, commercial developers, and the general public regarding which multi-touch gestures are intuitive, and which of these gestures intuitively lead to which interaction outcomes. We reviewed more than one hundred papers regarding multi-touch interaction, approximately thirty of which contained key findings we report herein. Based on these findings, we have constructed a framework of five factors that determine the intuition of multi-touch interactions, including direct manipulation, physics, feedback, previous knowledge, and physical motion. We further provide both design recommendations for multi-touch developers and an evaluation of research problems which remain due to the limitations of present research regarding these factors. We expect our survey and discussion of intuition will raise awareness of its importance, and lead to the active pursuit of intuitive multi-touch interaction design.

Proceedings ArticleDOI
21 May 2012
TL;DR: TimeSlice is presented, an interactive faceted visualization of temporal events, which allows users to easily compare and explore timelines with different attributes on a set of facets; by directly manipulating the filtering tree, users can efficiently navigate multi-dimensional event data.
Abstract: Temporal events with multiple sets of metadata attributes, i.e., facets, are ubiquitous across different domains. The capabilities of efficiently viewing and comparing events data from various perspectives are critical for revealing relationships, making hypotheses, and discovering patterns. In this paper, we present TimeSlice, an interactive faceted visualization of temporal events, which allows users to easily compare and explore timelines with different attributes on a set of facets. By directly manipulating the filtering tree, a dynamic visual representation of queries and filters in the facet space, users can simultaneously browse the focused timelines and their contexts at different levels of detail, which supports efficient navigation of multi-dimensional events data. Also presented is an initial evaluation of TimeSlice with two datasets - famous deceased people and US daily flight delays.

Proceedings ArticleDOI
21 May 2012
TL;DR: Strip'TIC, a novel system for ATC that mixes augmented paper and digital pen, vision-based tracking and augmented rear and front projection, is described and solutions to technical challenges due to mixing competing technologies are described.
Abstract: The current environment used by French air traffic controllers mixes digital visualization such as radar screens and tangible artifacts such as paper strips. Tangible artifacts do not allow controllers to update the system with the instructions they give to pilots. Previous attempts at replacing them in France failed to prove efficient. This engineering paper describes Strip'TIC, a novel system for ATC that mixes augmented paper and digital pen, vision-based tracking and augmented rear and front projection. The system is now working and has enabled us to run workshops with actual controllers to study the role of writing and tangibility in ATC. We describe the system and solutions to technical challenges due to mixing competing technologies.

Proceedings ArticleDOI
21 May 2012
TL;DR: This work introduces interactive graph matching, a process that conciliates visualization, interaction and optimization approaches to address the graph matching and graph comparison problems as a whole.
Abstract: We introduce interactive graph matching, a process that conciliates visualization, interaction and optimization approaches to address the graph matching and graph comparison problems as a whole. Interactive graph matching is based on a multi-layered interaction model and on a visual reification of graph matching functions. We present three case studies and a system named Donatien to demonstrate the interactive graph matching approach. The three case studies involve different datasets: a) subgraphs of a lexical network, b) a graph of keywords extracted from the InfoVis contest benchmark, and c) clustered graphs computed from different clustering algorithms for comparison purposes.

Proceedings ArticleDOI
21 May 2012
TL;DR: This work describes Stackables, their flexible and expressive combination to formulate queries, and the underlying interaction concept in detail, and an evaluation provides initial evidence of their usability in targeted and exploratory information seeking tasks.
Abstract: We introduce Stackables: tangibles designed to support faceted information seeking in a variety of contexts. We are faced, more than ever, with tasks that require us to find, access, and act on information by ourselves or together with others. Current interfaces for browsing and search in large data spaces, however, largely focus on the support of either individual or collaborative activities. Stackables were designed to bridge this gap and be useful in meetings, for sharing results from individual search activities, and for realistic datasets including multiple facets with large value ranges. Each Stackable tangible represents search parameters that can be shared amongst collaborators, modified during an information seeking process, and stored and transferred. We describe Stackables, their flexible and expressive combination to formulate queries, and the underlying interaction concept in detail. An evaluation provides initial evidence of their usability in targeted and exploratory information seeking tasks.

Proceedings ArticleDOI
Alex Endert1, Lauren Bradel1, Jessica Zeitz1, Christopher Andrews1, Chris North1 
21 May 2012
TL;DR: This paper presents a new usage for large, high-resolution displays: an everyday workspace, and discusses how seemingly small large-display design decisions can have significant impacts on users' perceptions of these workspaces, and thus the usage of the space.
Abstract: Large, high-resolution displays have enormous potential to aid in scenarios beyond their current usage. Their current usages are primarily limited to presentations, visualization demonstrations, or conducting experiments. In this paper, we present a new usage for such systems: an everyday workspace. We discuss how seemingly small large-display design decisions can have significant impacts on users' perceptions of these workspaces, and thus the usage of the space. We describe the effects that various physical configurations have on the overall usability and perception of the display. We present conclusions on how to broaden the usage scenarios of large, high-resolution displays to enable frequent and effective usage as everyday workspaces while still allowing transformation to collaborative or presentation spaces.

Proceedings ArticleDOI
21 May 2012
TL;DR: The results indicate that edge bundling negatively impacts user performance at tracing paths between nodes, both in terms of accuracy and time, and indicate that while edge bundling may provide no clear significant benefit in terms of accuracy for recognising higher-level cluster connectivity, it does provide a significant improvement in user response time.
Abstract: Edges are one of the primary sources of clutter when viewing graphs as node-link diagrams. One technique to reduce this clutter is to bundle edges together based on a nearby source or destination. Combined with edge translucency, edge bundling is reported to reduce the clutter and reveal higher-level edge patterns. However there is very little empirical data on the impact of edge bundling on user performance, as well as the impact of graph characteristics such as edge density and graph size on the effectiveness of edge bundling as a graph-visualization technique. We have performed user experiments to evaluate the impact of bundling on user performance, using a set of randomly generated undirected compound graphs with varying vertex counts and edge densities. Our results indicate that edge bundling negatively impacts user performance at tracing paths between nodes, both in terms of accuracy and time. They also indicate that while edge bundling may provide no clear significant benefit in terms of accuracy for recognising higher-level cluster connectivity, it does provide a significant improvement in user response time.

Proceedings ArticleDOI
21 May 2012
TL;DR: From this evaluation, it is found that pen and touch tabletops can successfully combine the advantages of paper and digital devices without their disadvantages and there are still a number of hardware and software limitations that impede the user experience.
Abstract: With the proliferation and sophistication of digital reading devices, new means to support the task of active reading (AR) have emerged. In this paper, we investigate the use of pen-and-touch-operated tabletops for performing essential processes of AR such as annotating, smooth navigation and rapid searching. We present an application to support these processes and then report on a user study designed to compare the suitability of our setup for three typical tasks against the use of paper media and Adobe Acrobat on a regular desktop PC. From this evaluation, we found that pen and touch tabletops can successfully combine the advantages of paper and digital devices without their disadvantages. However, we also learned from observations and participant feedback that there are still a number of hardware and software limitations that impede the user experience and hence need to be addressed in future systems.

Proceedings ArticleDOI
Jun Rekimoto1
21 May 2012
TL;DR: How architectural space can become dynamically changeable in the future is discussed, and the Squama system is introduced as an initial instance exemplifying this concept.
Abstract: In this paper we present Squama, a programmable physical window or wall that can independently control the visibility of its elemental small square tiles. This is an example of programmable physical architecture, our vision for future architectures where the physical features of architectural elements and facades can be dynamically changed and reprogrammed according to people's needs. When Squama is used as a wall, it dynamically controls the transparency through its surface, simultaneously satisfying the needs for openness and privacy. It can also control the amount of sunlight and create shadows, called programmable shadows, in order to afford indoor comfort without completely blocking the outer view. In this paper, we discuss how, in the future, architectural space can become dynamically changeable, and introduce the Squama system as an initial instance exemplifying this concept.

Proceedings Article
01 Jan 2012
TL;DR: It is argued that for realizing a natural computer-supported collaboration in smart environments or interactive spaces, designers must achieve a holistic understanding and design of the users’ individual interactions, social interactions, workflows and their physical environment.
Abstract: In this paper, we propose Blended Interaction as a conceptual framework for the design of interactive spaces. We argue that for realizing a natural computer-supported collaboration in smart environments or interactive spaces, designers must achieve a holistic understanding and design of the users’ individual interactions, social interactions, workflows and their physical environment. To thoughtfully blend the power of the digital world with the users’ pre-existing skills and practices, we propose and explain conceptual blending as a potential design methodology. We illustrate our framework by discussing related theoretical and conceptual work and by explaining the design decisions we made in recent projects. In particular, we highlight how Blended Interaction introduces a new and more accurate description of users’ cognition and interaction in interactive spaces that can serve as a tool for HCI researchers and interaction designers.

Proceedings ArticleDOI
21 May 2012
TL;DR: DepthTouch, an installation which explores future interactive surfaces and features elastic feedback, allowing the user to go deeper than with regular multi-touch surfaces, is described.
Abstract: In this paper we describe DepthTouch, an installation which explores future interactive surfaces and features elastic feedback, allowing the user to go deeper than with regular multi-touch surfaces. DepthTouch's elastic display allows the user to create valleys and ascending slopes by depressing or grabbing its textile surface. We describe the experimental approach for eliciting appropriate interaction metaphors from interaction with real materials and the resulting digital prototype.

Proceedings ArticleDOI
21 May 2012
TL;DR: The evaluation of a video annotator that supports multimodal annotation and is applied to contemporary dance as a creation tool is discussed.
Abstract: This paper discusses the evaluation of a video annotator that supports multimodal annotation and is applied to contemporary dance as a creation tool. The Creation-Tool was conceived and designed to assist the creative processes of choreographers and dance performers, functioning as a digital notebook for personal annotations. The prototype, developed for Tablet PCs, allows video annotation in real-time, using a live video stream, or post-event, using a pre-recorded video stream. The tool also supports different video annotation modalities, such as annotation marks, text, audio, ink strokes and hyperlinks. In addition, the system enables different modes of annotation and video visualization. The development followed an iterative design process involving two choreographers, and a usability study was carried out with international dance performers participating in a contemporary dance "residence-workshop".

Proceedings ArticleDOI
21 May 2012
TL;DR: Indiana Finder is presented, an interactive visualization system that supports archaeologists in the examination of large repositories of documents and drawings that provides visual analytic support for investigative analysis such as the interpretation of new archaeological findings, the detection of interpretation anomalies, and the discovery of new insights.
Abstract: With the invention and rapid improvement of data-capturing devices, such as satellite imagery and digital cameras, the information that archaeologists must manage in their everyday activities has rapidly grown in complexity and amount. In this work we present Indiana Finder, an interactive visualization system that supports archaeologists in the examination of large repositories of documents and drawings. In particular, the system provides visual analytic support for investigative tasks such as the interpretation of new archaeological findings, the detection of interpretation anomalies, and the discovery of new insights. We illustrate the potential of Indiana Finder in the context of the digital protection and conservation of rock art at natural and cultural heritage sites. In this domain, Indiana Finder provides an integrated environment that archaeologists can exploit to investigate, discover, and learn from textual documents, pictures, and drawings related to rock carvings. This goal is accomplished through novel visualization methods, including visual similarity ring charts that may help archaeologists in the hard task of dating a symbol in a rock engraving based on its shape and on the surrounding symbols.