
Showing papers in "Computer Graphics Forum in 2017"


Journal ArticleDOI
TL;DR: A holistic view of surface reconstruction is considered, which shows a detailed characterization of the field, highlights similarities between diverse reconstruction techniques and provides directions for future work in surface reconstruction.
Abstract: The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contain a wide variety of defects. While much of the earlier work has been focused on reconstructing a piece-wise smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations, not necessarily the explicit geometry. We survey the field of surface reconstruction, and provide a categorization with respect to priors, data imperfections and reconstruction output. By considering a holistic view of surface reconstruction, we show a detailed characterization of the field, highlight similarities between diverse reconstruction techniques and provide directions for future work in surface reconstruction.
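As a concrete instance of the classical setting surveyed here, the following minimal sketch reconstructs a watertight surface from a noisy, oriented point cloud using screened Poisson reconstruction. It assumes the Open3D library and a hypothetical input file "scan.ply", and stands in for just one of the many prior-based methods the survey categorizes.

```python
# Minimal sketch: piecewise-smooth surface reconstruction from a scanned
# point cloud via screened Poisson reconstruction (one of the classical
# smoothness-prior methods covered by the survey). Assumes Open3D is
# installed and "scan.ply" (hypothetical path) contains the scanned points.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")           # scanned, possibly noisy points
pcd.estimate_normals(                               # Poisson needs oriented normals
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)     # propagate a consistent orientation

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)                                   # octree depth controls detail
o3d.io.write_triangle_mesh("reconstruction.ply", mesh)
```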

405 citations


Journal ArticleDOI
TL;DR: A hierarchical taxonomy of techniques is derived by systematically categorizing and tagging publications and identifying the representation of time as the major distinguishing feature for dynamic graph visualizations.
Abstract: Dynamic graph visualization focuses on the challenge of representing the evolution of relationships between entities in readable, scalable and effective diagrams. This work surveys the growing number of approaches in this discipline. We derive a hierarchical taxonomy of techniques by systematically categorizing and tagging publications. While static graph visualizations are often divided into node-link and matrix representations, we identify the representation of time as the major distinguishing feature for dynamic graph visualizations: either graphs are represented as animated diagrams or as static charts based on a timeline. Evaluations of animated approaches focus on dynamic stability for preserving the viewer's mental map or, in general, compare animated diagrams to timeline-based ones. A bibliographic analysis provides insights into the organization and development of the field and its community. Finally, we identify and discuss challenges for future research. We also provide feedback from experts, collected with a questionnaire, which gives a broad perspective of these challenges and the current state of the field.

276 citations


Journal ArticleDOI
TL;DR: In this paper, a method for computing partial functional correspondence between non-rigid shapes is proposed, which uses perturbation analysis to show how removal of shape parts changes the Laplace-Beltrami eigenfunctions, and exploit it as a prior on the spectral representation of the correspondence.
Abstract: In this paper, we propose a method for computing partial functional correspondence between non-rigid shapes. We use perturbation analysis to show how removal of shape parts changes the Laplace-Beltrami eigenfunctions, and exploit it as a prior on the spectral representation of the correspondence. Corresponding parts are optimization variables in our problem and are used to weight the functional correspondence; we look for the largest, most regular (in the Mumford-Shah sense) parts that minimize correspondence distortion. We show that our approach can cope with very challenging correspondence settings.
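The optimization the abstract alludes to can be written, in schematic form, as a joint problem over a functional map matrix and a soft part indicator. The LaTeX sketch below is a paraphrase under assumed functional-map notation (C: spectral map matrix, A and B(v): descriptor coefficients on the two shapes, with B restricted by the part indicator v on shape M), not the paper's exact energy.

```latex
% Schematic form of partial functional correspondence (assumed notation,
% not the paper's exact formulation): jointly optimize the spectral map C
% and the part indicator v on the full shape.
\min_{C,\,v}\;
  \underbrace{\big\| C A - B(v) \big\|_F^2}_{\text{descriptor preservation on the part}}
  \;+\; \rho_{\mathrm{corr}}(C)
  \;+\; \mu_1 \underbrace{\int_{\mathcal{M}} v\,(1-v)\,\mathrm{d}x}_{\text{binary-ness of the part}}
  \;+\; \mu_2 \underbrace{\int_{\mathcal{M}} \|\nabla v\|\,\mathrm{d}x}_{\text{Mumford--Shah style boundary length}}
```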

220 citations


Journal ArticleDOI
TL;DR: This work addresses the problem of making human motion capture in the wild more practical by making use of a realistic statistical body model that includes anthropometric constraints and using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames.
Abstract: We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker, Sparse Inertial Poser (SIP), enables motion capture using only 6 sensors attached to the wrists, lower legs, back and head and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.
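Read as an optimization problem, the tracker fits body-model pose parameters to the sensor stream over a temporal window. The following is a schematic objective under assumed notation (θ_{1:T}: pose sequence, R_s and a_s: per-sensor orientation and acceleration, S: sensor set), not the exact SIP energy.

```latex
% Schematic multi-frame fitting objective (assumed notation, not the exact
% SIP formulation): pose parameters over a window are fit to orientation
% and acceleration measurements, subject to an anthropometric body prior.
\min_{\theta_{1:T}}\;\sum_{t=1}^{T}\sum_{s\in\mathcal{S}}
  \Big(
    \big\| \log\!\big( R_s^{\mathrm{meas}}(t)^{\top} R_s^{\mathrm{model}}(\theta_t) \big) \big\|^2
    + \lambda_a \big\| a_s^{\mathrm{meas}}(t) - a_s^{\mathrm{model}}(\theta_{t-1:t+1}) \big\|^2
  \Big)
  \;+\; \lambda_p \sum_{t=1}^{T} E_{\mathrm{prior}}(\theta_t)
```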

189 citations


Journal ArticleDOI
TL;DR: This survey provides an introduction to eye tracking visualization with an overview of existing techniques and identifies challenges that have to be tackled in the future so that visualizations become even more widely applied in eye tracking research.
Abstract: This survey provides an introduction to eye tracking visualization with an overview of existing techniques. Eye tracking is important for evaluating user behaviour. Analysing eye tracking data is typically done quantitatively, applying statistical methods. However, in recent years, researchers have been increasingly using qualitative and exploratory analysis methods based on visualization techniques. For this state-of-the-art report, we investigated about 110 research papers presenting visualization techniques for eye tracking data. We classified these visualization techniques and identified two main categories: point-based methods and methods based on areas of interest. Additionally, we conducted an expert review asking leading eye tracking experts how they apply visualization techniques in their analysis of eye tracking data. Based on the experts' feedback, we identified challenges that have to be tackled in the future so that visualizations will become even more widely applied in eye tracking research.
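To make the "point-based" category concrete, here is a minimal sketch of one of the simplest such techniques, an attention heatmap built from raw gaze samples. It assumes NumPy, SciPy and Matplotlib, and uses a hypothetical array `gaze` of (x, y) coordinates in pixels in place of real tracker data.

```python
# Minimal sketch of a point-based eye-tracking visualization: a gaze
# heatmap from raw (x, y) samples. `gaze` is a hypothetical (N, 2) array
# of screen coordinates; real data would come from an eye tracker.
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
gaze = rng.normal(loc=(640, 360), scale=(120, 80), size=(2000, 2))  # stand-in data

heat, _, _ = np.histogram2d(gaze[:, 0], gaze[:, 1],
                            bins=(128, 72), range=[[0, 1280], [0, 720]])
heat = gaussian_filter(heat.T, sigma=2)       # smooth the fixation density

plt.imshow(heat, extent=[0, 1280, 720, 0], cmap="hot")
plt.title("Gaze heatmap (point-based visualization)")
plt.show()
```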

173 citations


Journal ArticleDOI
TL;DR: An end‐to‐end pipeline which takes a bitmap image as input and returns a visual encoding specification as output is contributed and accurate automatic inference of text elements, mark types, and chart specifications across a variety of input chart types is demonstrated.
Abstract: We investigate how to automatically recover visual encodings from a chart image, primarily using inferred text elements. We contribute an end-to-end pipeline which takes a bitmap image as input and...

171 citations


Journal ArticleDOI
TL;DR: This state‐of‐the‐art report presents a summary of the progress that has been made by highlighting and synthesizing select research advances and presents opportunities and challenges to enhance the synergy between machine learning and visual analytics for impactful future research directions.
Abstract: Visual analytics systems combine machine learning or other analytic techniques with interactive data visualization to promote sensemaking and analytical reasoning. It is through such techniques that people can make sense of large, complex data. While progress has been made, the tactful combination of machine learning and data visualization is still under-explored. This state-of-the-art report presents a summary of the progress that has been made by highlighting and synthesizing select research advances. Further, it presents opportunities and challenges to enhance the synergy between machine learning and visual analytics for impactful future research directions.

157 citations


Journal ArticleDOI
TL;DR: A characterization of structures within space-time cubes is proposed, which allows for the description, criticism and comparison of temporal data visualizations and encourages the exploration of new techniques and systems.
Abstract: We present the generalized space-time cube, a descriptive model for visualizations of temporal data. Visualizations are described as operations on the cube, which transform the cube's 3D shape into readable 2D visualizations. Operations include extracting subparts of the cube, flattening it across space or time, or transforming the cube's geometry and content. We introduce a taxonomy of elementary space-time cube operations and explain how these operations can be combined and parameterized. The generalized space-time cube has two properties: (1) it is purely conceptual without the need to be implemented, and (2) it applies to all datasets that can be represented in two dimensions plus time (e.g. geo-spatial data, videos, networks, multivariate data). The proper choice of space-time cube operations depends on many factors, for example, density or sparsity of a cube. Hence, we propose a characterization of structures within space-time cubes, which allows us to discuss strengths and limitations of operations. We finally review interactive systems that support multiple operations, allowing a user to customize their view on the data. With this framework, we hope to facilitate the description, criticism and comparison of temporal data visualizations, as well as encourage the exploration of new techniques and systems. This paper is an extension of Bach et al.'s 2014 work.
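The "flattening" and "cutting" operations described above have a direct array analogue. The sketch below builds a toy space-time cube as a NumPy array indexed (time, y, x) and applies two elementary operations, time flattening (aggregate over t) and a time cut (extract one slice); the array contents and aggregation choices are illustrative assumptions, not the paper's formal operator definitions.

```python
# Toy space-time cube: a (time, y, x) array of scalar values. Two elementary
# operations from the taxonomy are sketched: "time flattening" (collapse the
# time axis into a single 2D image) and a "time cut" (extract one time slice).
import numpy as np

T, H, W = 24, 64, 64
cube = np.random.rand(T, H, W)            # stand-in for e.g. hourly geo-spatial data

time_flattened  = cube.mean(axis=0)       # 2D image: average over all time steps
time_cut        = cube[12]                # 2D image: the cube at t = 12
space_flattened = cube.mean(axis=2)       # (time, y): collapse one spatial axis

print(time_flattened.shape, time_cut.shape, space_flattened.shape)
# (64, 64) (64, 64) (24, 64)
```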

139 citations


Journal ArticleDOI
TL;DR: This paper shows that considering descriptors as linear operators acting on functions through multiplication, rather than as simple scalar-valued signals, makes it possible to extract significantly more information from a given descriptor and ultimately results in a more accurate functional map estimation.
Abstract: We consider the problem of non-rigid shape matching, and specifically the functional maps framework that was recently proposed to find correspondences between shapes. A key step in this framework i...
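The operator view in the TL;DR corresponds, in standard functional-map notation, to adding commutativity constraints between the map and the descriptor multiplication operators. The sketch below uses assumed notation (C: functional map, A and B: descriptor coefficients, X_i and Y_i: multiplication operators of the i-th descriptor expressed in the reduced bases) and is a paraphrase rather than the paper's exact objective.

```latex
% Descriptor preservation via commutativity (assumed notation): instead of
% only matching descriptor coefficients A, B, the map C is asked to commute
% with the multiplication operators X_i, Y_i induced by each descriptor.
\min_{C}\; \| C A - B \|_F^2 \;+\; \alpha \sum_i \big\| C X_i - Y_i C \big\|_F^2
```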

119 citations


Journal ArticleDOI
TL;DR: This survey overviews the various fabrication technologies, discussing their strengths, limitations and costs, and reviews works that have attempted to extend fabrication technologies in order to deal with the specific issues in the use of digital fabrication in the Cultural Heritage.
Abstract: Digital fabrication devices exploit basic technologies in order to create tangible reproductions of 3D digital models. Although current 3D printing pipelines still suffer from several restrictions, accuracy in reproduction has reached an excellent level. The manufacturing industry has been the main domain of 3D printing applications over the last decade. Digital fabrication techniques have also been demonstrated to be effective in many other contexts, including the consumer domain. The Cultural Heritage is one of the new application contexts and is an ideal domain to test the flexibility and quality of this new technology. This survey overviews the various fabrication technologies, discussing their strengths, limitations and costs. Various successful uses of 3D printing in the Cultural Heritage are analysed, which should also be useful for other application contexts. We review works that have attempted to extend fabrication technologies in order to deal with the specific issues in the use of digital fabrication in the Cultural Heritage. Finally, we also propose areas for future research.

115 citations


Journal ArticleDOI
TL;DR: An algorithm for the restoration of noisy point cloud data, termed Moving Robust Principal Components Analysis (MRPCA), is presented; it models the point cloud as a collection of overlapping two-dimensional subspaces and encourages collaboration between overlapping neighbourhoods.
Abstract: We present an algorithm for the restoration of noisy point cloud data, termed Moving Robust Principal Components Analysis (MRPCA). We model the point cloud as a collection of overlapping two-dimensional subspaces, and propose a model that encourages collaboration between overlapping neighbourhoods. Similar to state-of-the-art sparse modelling-based image denoising, the estimated point positions are computed by local averaging. In addition, the proposed approach models grossly corrupted observations explicitly, does not require oriented normals, and takes into account both local and global structure. Sharp features are preserved via a weighted l1 minimization, where the weights measure the similarity between normal vectors in a local neighbourhood. The proposed algorithm is compared against existing point cloud denoising methods, obtaining competitive results.
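For intuition only, the sketch below implements the simplest relative of this idea, projecting each point onto a plane fit to its k nearest neighbours by plain (non-robust) PCA. It deliberately omits the robust weighting, the weighted l1 sharp-feature term and the collaboration between neighbourhoods that distinguish MRPCA.

```python
# Simplified stand-in for local-subspace point cloud denoising: project each
# point onto the least-squares plane of its k nearest neighbours (plain PCA).
# The actual MRPCA adds robustness to outliers, weighted l1 feature
# preservation and collaboration across overlapping neighbourhoods.
import numpy as np
from scipy.spatial import cKDTree

def denoise_local_pca(points: np.ndarray, k: int = 16) -> np.ndarray:
    tree = cKDTree(points)
    out = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)
        nbrs = points[idx]
        centroid = nbrs.mean(axis=0)
        # normal = direction of least variance of the local neighbourhood
        _, _, vt = np.linalg.svd(nbrs - centroid)
        normal = vt[-1]
        out[i] = p - np.dot(p - centroid, normal) * normal  # project onto the plane
    return out

noisy = np.random.rand(1000, 3) + 0.01 * np.random.randn(1000, 3)
clean = denoise_local_pca(noisy, k=16)
```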

Journal ArticleDOI
TL;DR: An efficient procedure for calculating partial dense intrinsic correspondence between deformable shapes, performed entirely in the spectral domain, is proposed; a variant of the JAD problem with an appropriately modified coupling term allows the construction of quasi-harmonic bases localized on the latent corresponding parts.
Abstract: We propose an efficient procedure for calculating partial dense intrinsic correspondence between deformable shapes performed entirely in the spectral domain. Our technique relies on the recently introduced partial functional maps formalism and on the joint approximate diagonalization (JAD) of the Laplace-Beltrami operators previously introduced for matching non-isometric shapes. We show that a variant of the JAD problem with an appropriately modified coupling term (surprisingly) allows the construction of quasi-harmonic bases localized on the latent corresponding parts. This circumvents the need to explicitly compute the unknown parts by means of the cumbersome alternating minimization used in previous approaches, and allows all the calculations to be performed in the spectral domain with constant complexity independent of the number of shape vertices. We provide an extensive evaluation of the proposed technique on standard non-rigid correspondence benchmarks and show state-of-the-art performance in various settings, including partiality and the presence of topological noise.
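In schematic form, joint approximate diagonalization couples the two shapes' Laplacians through corresponding descriptor functions. The sketch below uses assumed notation (Φ, Ψ: new bases on shapes M and N; L, A: stiffness and mass matrices; F, G: corresponding descriptor matrices; off(·): sum of squared off-diagonal entries) and states a generic coupled-diagonalization formulation, not the paper's modified coupling term.

```latex
% Generic coupled (joint approximate) diagonalization sketch, assumed notation:
% find near-orthonormal bases that almost diagonalize both Laplacians while
% being coupled through corresponding functions F, G.
\min_{\Phi,\Psi}\;
  \operatorname{off}\!\big(\Phi^{\top} L_{\mathcal{M}} \Phi\big)
  + \operatorname{off}\!\big(\Psi^{\top} L_{\mathcal{N}} \Psi\big)
  + \mu \,\big\| \Phi^{\top} A_{\mathcal{M}} F - \Psi^{\top} A_{\mathcal{N}} G \big\|_F^2
\quad \text{s.t.}\quad
  \Phi^{\top} A_{\mathcal{M}} \Phi = I,\;\; \Psi^{\top} A_{\mathcal{N}} \Psi = I
```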

Journal ArticleDOI
TL;DR: An overview of the research conducted since 2005 on supporting text analysis tasks with close and distant reading visualizations in the digital humanities is presented and approaches that combine both reading techniques in order to provide a multi‐faceted view of the textual data are illustrated.
Abstract: In 2005, Franco Moretti introduced Distant Reading to analyse entire literary text collections. This was a rather revolutionary idea compared to the traditional Close Reading, which focuses on the thorough interpretation of an individual work. Both reading techniques are the prior means of Visual Text Analysis. We present an overview of the research conducted since 2005 on supporting text analysis tasks with close and distant reading visualizations in the digital humanities. Therefore, we classify the observed papers according to a taxonomy of text analysis tasks, categorize applied close and distant reading techniques to support the investigation of these tasks and illustrate approaches that combine both reading techniques in order to provide a multi-faceted view of the textual data. In addition, we take a look at the used text sources and at the typical data transformation steps required for the proposed visualizations. Finally, we summarize collaboration experiences when developing visualizations for close and distant reading, and we give an outlook on future challenges in that research area.

Journal ArticleDOI
TL;DR: A definition and a conceptual model of lenses as extensions of the classic visualization pipeline are proposed and a taxonomy of lenses for visualization is introduced and illustrated by dissecting in detail a multi‐touch lens for exploring large graph layouts.
Abstract: The elegance of using virtual interactive lenses to provide alternative visual representations for selected regions of interest is highly valued, especially in the realm of visualization. Today, more than 50 lens techniques are known in the closer context of visualization, far more in related fields. In this paper, we extend our previous survey on interactive lenses for visualization. We propose a definition and a conceptual model of lenses as extensions of the classic visualization pipeline. An extensive review of the literature covers lens techniques for different types of data and different user tasks and also includes the technologies employed to display lenses and to interact with them. We introduce a taxonomy of lenses for visualization and illustrate its utility by dissecting in detail a multi-touch lens for exploring large graph layouts. As a conclusion of our review, we identify challenges and unsolved problems to be addressed in future research.

Journal ArticleDOI
TL;DR: The report presents a taxonomy that demonstrates which areas of molecular visualization have already been extensively investigated and where the field is currently heading, and discusses visualizations for molecular structures, strategies for efficient display regarding image quality and frame rate.
Abstract: Structural properties of molecules are of primary concern in many fields. This report provides a comprehensive overview on techniques that have been developed in the fields of molecular graphics and visualization with a focus on applications in structural biology. The field heavily relies on computerized geometric and visual representations of three-dimensional, complex, large and time-varying molecular structures. The report presents a taxonomy that demonstrates which areas of molecular visualization have already been extensively investigated and where the field is currently heading. It discusses visualizations for molecular structures, strategies for efficient display regarding image quality and frame rate, covers different aspects of level of detail and reviews visualizations illustrating the dynamic aspects of molecular simulation data. The survey concludes with an outlook on promising and important research topics to foster further success in the development of tools that help to reveal molecular secrets.

Journal ArticleDOI
TL;DR: The diagonal problem of synthesizing appearance from given per-pixel attributes using a CNN is considered; the resulting Deep Shading renders screen space effects at competitive quality and speed while not being programmed by human experts but learned from example images.
Abstract: In computer vision, convolutional neural networks (CNNs) achieve unprecedented performance for inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen space shading has boosted the quality of real-time rendering, converting the same kind of attributes of a virtual scene back to appearance, enabling effects like ambient occlusion, indirect light, scattering and many more. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading renders screen space effects at competitive quality and speed while not being programmed by human experts but learned from example images.
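A minimal PyTorch sketch of the idea, with an assumed toy architecture (the actual Deep Shading network is a much deeper encoder-decoder trained on rendered ground truth): a small fully convolutional network maps a per-pixel attribute buffer (e.g. normals plus depth, 4 channels) to a screen-space effect such as ambient occlusion (1 channel).

```python
# Minimal sketch of "attributes in, appearance out": a small fully
# convolutional network mapping a 4-channel G-buffer (normal xyz + depth)
# to a 1-channel screen-space effect (e.g. ambient occlusion).
# Assumed toy architecture; the real Deep Shading uses a deeper network.
import torch
import torch.nn as nn

class AttribToAppearance(nn.Module):
    def __init__(self, in_ch: int = 4, out_ch: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1),    nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, gbuffer: torch.Tensor) -> torch.Tensor:
        return self.net(gbuffer)

model = AttribToAppearance()
gbuffer = torch.rand(1, 4, 128, 128)        # stand-in per-pixel attributes
ao_pred = model(gbuffer)                    # (1, 1, 128, 128) predicted effect
loss = nn.functional.mse_loss(ao_pred, torch.rand_like(ao_pred))  # vs. rendered GT
loss.backward()
```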

Journal ArticleDOI
TL;DR: This report provides a classification of this huge state of the art and elicits the relation between each single algorithm and a list of desirable objectives during model preparation, a process globally referred to as Process Planning.
Abstract: Due to the wide diffusion of 3D printing technologies, geometric algorithms for Additive Manufacturing are being invented at an impressive speed. Each single step along the processing pipeline that prepares the 3D model for fabrication can now count on dozens of methods that analyse and optimize geometry and machine instructions for various objectives. This report provides a classification of this huge state of the art, and elicits the relation between each single algorithm and a list of desirable objectives during model preparation, a process globally referred to as Process Planning. The objectives themselves are listed and discussed, along with possible needs for tradeoffs. Additive Manufacturing technologies are broadly categorized to explicitly relate classes of devices and supported features. Finally, this report offers an analysis of the state of the art while discussing open and challenging problems from both an academic and an industrial perspective.

Journal ArticleDOI
TL;DR: This work proposes to optimize for the geometric representation during the network learning process using a novel metric alignment layer that maps unstructured geometric data to a regular domain by minimizing the metric distortion of the map using the regularized Gromov–Wasserstein objective.
Abstract: Deep neural networks provide a promising tool for incorporating semantic information in geometry processing applications. Unlike image and video processing, however, geometry processing requires handling unstructured geometric data, and thus data representation becomes an important challenge in this framework. Existing approaches tackle this challenge by converting point clouds, meshes, or polygon soups into regular representations using, e.g., multi-view images, volumetric grids or planar parameterizations. In each of these cases, geometric data representation is treated as a fixed pre-process that is largely disconnected from the machine learning tool. In contrast, we propose to optimize for the geometric representation during the network learning process using a novel metric alignment layer. Our approach maps unstructured geometric data to a regular domain by minimizing the metric distortion of the map using the regularized Gromov-Wasserstein objective. This objective is parameterized by the metric of the target domain and is differentiable; thus, it can be easily incorporated into a deep network framework. Furthermore, the objective aims to align the metrics of the input and output domains, promoting consistent output for similar shapes. We show the effectiveness of our layer within a deep network trained for shape classification, demonstrating state-of-the-art performance for nonrigid shapes.
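The metric-alignment objective can be sketched as an entropically regularized Gromov-Wasserstein problem between the input metric and the target-domain metric. The notation below (D^X, D^Y: pairwise distance matrices, p, q: point weights, Γ: soft correspondence, H: entropy) is assumed, and the formula is the standard regularized GW objective rather than the layer's exact parameterization.

```latex
% Standard entropy-regularized Gromov--Wasserstein objective (assumed
% notation); the metric alignment layer optimizes a map of this type, with
% the target metric D^Y acting as the parameterizing quantity.
\mathrm{GW}_{\varepsilon}(D^X, D^Y, p, q) \;=\;
  \min_{\Gamma \in \Pi(p,q)} \;
    \sum_{i,j,k,l} \big( D^X_{ik} - D^Y_{jl} \big)^2 \,\Gamma_{ij}\,\Gamma_{kl}
    \;-\; \varepsilon\, H(\Gamma),
\qquad
  H(\Gamma) = -\sum_{i,j} \Gamma_{ij}\big(\log \Gamma_{ij} - 1\big)
```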

Journal ArticleDOI
TL;DR: This work names this problem map deblurring and proposes a robust method for its solution, based on a smoothness assumption; the method is suitable for non-isometric shapes, is robust to mesh tessellation and accurately recovers vertex-to-point, or precise, maps.
Abstract: Shape correspondence is an important and challenging problem in geometry processing. Generalized map representations, such as functional maps, have been recently suggested as an approach for handling difficult mapping problems, such as partial matching and matching shapes with high genus, within a generic framework. While this idea was shown to be useful in various scenarios, such maps only provide low frequency information on the correspondence. In many applications, such as texture transfer and shape interpolation, a high quality pointwise map that can transport high frequency data between the shapes is required. We name this problem map deblurring and propose a robust method, based on a smoothness assumption, for its solution. Our approach is suitable for non-isometric shapes, is robust to mesh tessellation and accurately recovers vertex-to-point, or precise, maps. Using the same framework we can also handle map denoising, namely improvement of given pointwise maps from various sources. We demonstrate that our approach outperforms the state-of-the-art for both deblurring and denoising of maps on benchmarks of non-isometric shapes, and show an application to high quality intrinsic symmetry computation.
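For context, the common baseline that map deblurring improves on can be written in a few lines: given a functional map C between the reduced spectral bases, a pointwise map is recovered by nearest-neighbour search between aligned spectral embeddings. This is the standard conversion used in the functional maps literature, not the smoothness-based deblurring proposed here, and the variable names and shapes are assumptions.

```python
# Baseline pointwise-map recovery from a functional map (the step that map
# deblurring improves on): align the spectral embeddings with C, then match
# each target vertex to its nearest source point. Names/shapes are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def pointwise_from_functional_map(C, evecs_src, evecs_tgt):
    """C: (k, k) functional map (source -> target, reduced bases).
    evecs_src: (n_src, k), evecs_tgt: (n_tgt, k) Laplace-Beltrami eigenvectors.
    Returns an index array mapping each target vertex to a source vertex."""
    aligned_src = evecs_src @ C.T          # push the source embedding through the map
    tree = cKDTree(aligned_src)
    _, idx = tree.query(evecs_tgt)         # nearest neighbour per target vertex
    return idx

# Toy usage with random stand-in data:
k, n_src, n_tgt = 20, 500, 480
C = np.eye(k) + 0.01 * np.random.randn(k, k)
idx = pointwise_from_functional_map(C, np.random.randn(n_src, k),
                                       np.random.randn(n_tgt, k))
```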

Journal ArticleDOI
R. Daněřek, Endri Dibra, Cengiz Oztireli, Remo Ziegler, Markus Gross
TL;DR: This work illustrates that this technique is able to recover the global shape of dynamic 3D garments from a single image under varying factors such as challenging human poses, self occlusions, various camera poses and lighting conditions, at interactive rates.
Abstract: 3D garment capture is an important component for various applications such as free-view point video, virtual avatars, online shopping, and virtual cloth fitting. Due to the complexity of the deformations, capturing 3D garment shapes requires controlled and specialized setups. A viable alternative is image-based garment capture. Capturing 3D garment shapes from a single image, however, is a challenging problem and the current solutions come with assumptions on the lighting, camera calibration, complexity of human or mannequin poses considered, and more importantly a stable physical state for the garment and the underlying human body. In addition, most of the works require manual interaction and exhibit high run-times. We propose a new technique that overcomes these limitations, making garment shape estimation from an image a practical approach for dynamic garment capture. Starting from synthetic garment shape data generated through physically based simulations from various human bodies in complex poses obtained through Mocap sequences, and rendered under varying camera positions and lighting conditions, our novel method learns a mapping from rendered garment images to the underlying 3D garment model. This is achieved by training Convolutional Neural Networks (CNNs) to estimate 3D vertex displacements from a template mesh with a specialized loss function. We illustrate that this technique is able to recover the global shape of dynamic 3D garments from a single image under varying factors such as challenging human poses, self occlusions, various camera poses and lighting conditions, at interactive rates. Improvement is shown if more than one view is integrated. Additionally, we show applications of our method to videos.
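Schematically, the learning step maps a rendered garment image to per-vertex displacements of a template mesh. The sketch below is an assumed toy architecture (a small CNN encoder followed by a linear head predicting N x 3 offsets), far smaller than the networks and loss described in the paper.

```python
# Toy sketch of image -> per-vertex template displacements. Assumed
# architecture and sizes; the paper's CNNs and loss are more elaborate.
import torch
import torch.nn as nn

N_VERTS = 2000                               # hypothetical template vertex count

class GarmentRegressor(nn.Module):
    def __init__(self, n_verts: int = N_VERTS):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 4 * 4, n_verts * 3)

    def forward(self, img):
        return self.head(self.encoder(img)).view(-1, N_VERTS, 3)

model = GarmentRegressor()
img = torch.rand(2, 3, 224, 224)                      # rendered garment images
pred = model(img)                                     # (2, N_VERTS, 3) displacements
loss = nn.functional.mse_loss(pred, torch.zeros_like(pred))  # vs. simulated GT offsets
loss.backward()
```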

Journal ArticleDOI
TL;DR: How visual analytics can support predictive analytics tasks in a predictive visual analytics (PVA) pipeline is described; systems and techniques are evaluated in terms of their supported interactions, and interactions specific to predictive analytics are discussed.
Abstract: Predictive analytics embraces an extensive range of techniques including statistical modeling, machine learning, and data mining and is applied in business intelligence, public health, disaster management and response, and many other fields. To date, visualization has been broadly used to support tasks in the predictive analytics pipeline. Primary uses have been in data cleaning, exploratory analysis, and diagnostics. For example, scatterplots and bar charts are used to illustrate class distributions and responses. More recently, extensive visual analytics systems for feature selection, incremental learning, and various prediction tasks have been proposed to support the growing use of complex models, agent-specific optimization, and comprehensive model comparison and result exploration. Such work is being driven by advances in interactive machine learning and the desire of end-users to understand and engage with the modeling process. In this state-of-the-art report, we catalogue recent advances in the visualization community for supporting predictive analytics. First, we define the scope of predictive analytics discussed in this article and describe how visual analytics can support predictive analytics tasks in a predictive visual analytics (PVA) pipeline. We then survey the literature and categorize the research with respect to the proposed PVA pipeline. Systems and techniques are evaluated in terms of their supported interactions, and interactions specific to predictive analytics are discussed. We end this report with a discussion of challenges and opportunities for future research in predictive visual analytics.

Journal ArticleDOI
TL;DR: The visual analytics pipeline for social media, combining the above categories and supporting complex tasks, is summarized; with these techniques, social media analytics can be applied to multiple disciplines.
Abstract: With the development of social media (e.g. Twitter, Flickr, Foursquare, Sina Weibo), a large number of people now use these platforms to post microblogs, messages and multimedia information. The e...

Journal ArticleDOI
TL;DR: A new synthetic ground-truth dataset is introduced and used to evaluate the validity of these priors and the performance of the methods, including in the context of image-editing applications.
Abstract: Intrinsic images are a mid-level representation of an image that decompose the image into reflectance and illumination layers. The reflectance layer captures the color/texture of surfaces in the scene, while the illumination layer captures shading effects caused by interactions between scene illumination and surface geometry. Intrinsic images have a long history in computer vision and recently in computer graphics, and have been shown to be a useful representation for tasks ranging from scene understanding and reconstruction to image editing. In this report, we review and evaluate past work on this problem. Specifically, we discuss each work in terms of the priors they impose on the intrinsic image problem. We introduce a new synthetic ground-truth dataset that we use to evaluate the validity of these priors and the performance of the methods. Finally, we evaluate the performance of the different methods in the context of image-editing applications.
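The decomposition described here is usually written per pixel as a product of reflectance and shading, and is commonly estimated in the log domain. The formulation below is the standard model with generic smoothness and sparseness priors, stated in assumed notation rather than any single surveyed method's energy.

```latex
% Standard intrinsic image model (assumed generic priors): the observed
% image I factors per pixel into reflectance R and shading S, typically
% estimated in the log domain with priors on each layer.
I(x) \;=\; R(x)\cdot S(x)
\quad\Longleftrightarrow\quad
\log I(x) \;=\; \log R(x) + \log S(x),
\qquad
\min_{R,S}\; \sum_{x} \big( \log I(x) - \log R(x) - \log S(x) \big)^2
  \;+\; \lambda_R\,\rho_{\mathrm{sparse}}(R) \;+\; \lambda_S\,\rho_{\mathrm{smooth}}(S)
```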

Journal ArticleDOI
TL;DR: This paper classifies survey papers into natural topic clusters, which enables readers to find relevant literature, and develops the first classification of classifications.
Abstract: Information visualization as a field is growing rapidly in popularity since the first information visualization conference in 1995. However, as a consequence of its growth, it is increasingly difficult to follow the growing body of literature within the field. Survey papers and literature reviews are valuable tools for managing the great volume of previously published research papers, and the quantity of survey papers in visualization has reached a critical mass. To this end, this survey paper takes a quantum step forward by surveying and classifying literature survey papers in order to help researchers understand the current landscape of Information Visualization. It is, to our knowledge, the first survey of survey papers (SoS) in Information Visualization. This paper classifies survey papers into natural topic clusters which enables readers to find relevant literature and develops the first classification of classifications. The paper also enables researchers to identify both mature and less developed research directions as well as identify future directions. It is a valuable resource for both newcomers and experienced researchers in and outside the field of Information Visualization and Visual Analytics.

Journal ArticleDOI
TL;DR: Through a systematic analysis of 80 existing stories found on popular websites, seven characteristics of these stories, named “flow-factors,” are identified, and it is illustrated how they feed into the broader concept of “visual narrative flow.”
Abstract: Many factors can shape the flow of visual data-driven stories, and thereby the way readers experience those stories. Through the analysis of 80 existing stories found on popular websites, we systematically investigate and identify seven characteristics of these stories, which we name “flow-factors,” and we illustrate how they feed into the broader concept of “visual narrative flow.” These flow-factors are navigation input, level of control, navigation progress, story layout, role of visualization, story progression, and navigation feedback. We also describe a series of studies we conducted, which shed initial light on how different visual narrative flows impact the reading experience. We report on two exploratory studies, in which we gathered reactions and preferences of readers for stepper- vs. scroller-driven flows. We then report on a crowdsourced study with 240 participants, in which we explore the effect of the combination of different flow-factors on readers’ engagement. Our results indicate that visuals and navigation feedback (e.g., static vs. animated transitions) have an impact on readers’ engagement, while level of control (e.g., discrete vs. continuous) may not.

Journal ArticleDOI
TL;DR: This work surveys research in visualizing group structures as part of graph diagrams, with a particular focus on the explicit visual encoding of groups, rather than only using graph layout to indicate groups implicitly.
Abstract: Graph visualizations encode relationships between objects. Abstracting the objects into group structures provides an overview of the data. Groups can be disjoint or overlapping, and might be organized hierarchically. However, the underlying graph still needs to be represented for analyzing the data in more depth. This work surveys research in visualizing group structures as part of graph diagrams. A particular focus is the explicit visual encoding of groups, rather than only using graph layout to indicate groups implicitly. We introduce a taxonomy of visualization techniques structuring the field into four main categories: visual node attributes vary properties of the node representation to encode the grouping, juxtaposed approaches use two separate visualizations, superimposed techniques work with two aligned visual layers, and embedded visualizations tightly integrate group and graph representation. The derived taxonomies for group structure and visualization types are also applied to group visualizations of edges. We survey group-only, group–node, group–edge and group–network tasks that are described in the literature as use cases of group visualizations. We discuss results from evaluations of existing visualization techniques as well as main areas of application. Finally, we report future challenges based on interviews we conducted with leading researchers of the field.

Journal ArticleDOI
TL;DR: In contrast to existing methods, the presented system is the first method which fully supports dynamic facial projection mapping without the requirement of any physical tracking markers and incorporates facial expressions.
Abstract: We propose the first system for live dynamic augmentation of human faces. Using projector-based illumination, we alter the appearance of human performers during novel performances. The key challenge of live augmentation is latency: an image is generated according to a specific pose, but is displayed on a different facial configuration by the time it is projected. Therefore, our system aims at reducing latency during every step of the process, from capture, through processing, to projection. Using infrared illumination, an optically and computationally aligned high-speed camera detects facial orientation as well as expression. The estimated expression blendshapes are mapped onto a lower dimensional space, and the facial motion and non-rigid deformation are estimated, smoothed and predicted through adaptive Kalman filtering. Finally, the desired appearance is generated interpolating precomputed offset textures according to time, global position, and expression. We have evaluated our system through an optimized CPU and GPU prototype, and demonstrated successful low latency augmentation for different performers and performances with varying facial play and motion speed. In contrast to existing methods, the presented system is the first method which fully supports dynamic facial projection mapping without the requirement of any physical tracking markers and incorporates facial expressions.
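The latency-compensation step relies on predicting facial motion a few milliseconds ahead. The sketch below is a generic constant-velocity Kalman filter in NumPy used for such prediction; it is a simplification of the adaptive filtering the paper describes, with all matrices and rates chosen as illustrative assumptions.

```python
# Generic constant-velocity Kalman filter for predicting a 1D facial
# parameter (e.g. one blendshape weight) a short time ahead, to compensate
# projection latency. Simplified stand-in for the paper's adaptive filtering.
import numpy as np

dt = 1.0 / 240.0                       # high-speed camera frame time (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition: [value, velocity]
H = np.array([[1.0, 0.0]])             # only the value is measured
Q = 1e-4 * np.eye(2)                   # process noise (assumed)
R = np.array([[1e-2]])                 # measurement noise (assumed)

x = np.zeros(2)                        # state estimate
P = np.eye(2)                          # state covariance

def kalman_step(z: float, latency: float = 0.010):
    """Update with measurement z, return the value predicted `latency` s ahead."""
    global x, P
    x, P = F @ x, F @ P @ F.T + Q                           # predict to current frame
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                          # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x[0] + latency * x[1]                            # extrapolate ahead

for z in np.sin(np.linspace(0, 1, 240)):                    # stand-in measurement stream
    predicted = kalman_step(z)
```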

Journal ArticleDOI
TL;DR: This paper investigates how spatially‐aware mobile displays and a large display wall can be coupled to support graph visualization and interaction, and devised and implemented a comprehensive interaction repertoire that supports basic and advanced graph exploration and manipulation tasks.
Abstract: Going beyond established desktop interfaces, researchers have begun re‐thinking visualization approaches to make use of alternative display environments and more natural interaction modalities. In this paper, we investigate how spatially‐aware mobile displays and a large display wall can be coupled to support graph visualization and interaction. For that purpose, we distribute typical visualization views of classic node‐link and matrix representations between displays. The focus of our work lies in novel interaction techniques that enable users to work with personal mobile devices in combination with the wall. We devised and implemented a comprehensive interaction repertoire that supports basic and advanced graph exploration and manipulation tasks, including selection, details‐on‐demand, focus transitions, interactive lenses, and data editing. A qualitative study has been conducted to identify strengths and weaknesses of our techniques. Feedback showed that combining mobile devices and a wall‐sized display is useful for diverse graph‐related tasks. We also gained valuable insights regarding the distribution of visualization views and interactive tools among the combined displays.

Journal ArticleDOI
TL;DR: This report presents the key research and models that exploit the limitations of perception to tackle visual quality and workload alike, and presents the open problems and promising future research targeting the question of how to minimize the effort to compute and display only the necessary pixels while still offering a user full visual experience.
Abstract: Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering a user full visual experience.

Journal ArticleDOI
TL;DR: The two main contributions are the definition of a general-purpose control scheme for steering synthetic vision-based agents and the proposition of cost functions for evaluating the perceived danger of the current situation.
Abstract: Most recent crowd simulation algorithms equip agents with a synthetic vision component for steering. They offer promising perspectives through a more realistic simulation of the way humans navigate according to their perception of the surrounding environment. In this paper, we propose a new perception/motion loop for steering agents along collision-free trajectories that significantly improves the quality of vision-based crowd simulators. In contrast with solutions where agents avoid collisions in a purely reactive binary way, we suggest exploring the full range of possible adaptations and retaining the locally optimal one. To this end, we introduce a cost function, based on perceptual variables, which estimates an agent's situation considering both the risks of future collision and a desired destination. We then compute the partial derivatives of that function with respect to all possible motion adaptations. The agent then adapts its motion by following the gradient. This paper has thus two main contributions: the definition of a general purpose control scheme for steering synthetic vision-based agents; and the proposition of cost functions for evaluating the perceived danger of the current situation. We demonstrate improvements in several cases.
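The control loop described above amounts to gradient descent on a perceptual cost. The sketch below is a toy version with an assumed cost combining goal deviation and predicted proximity to a single neighbour, minimized with a finite-difference gradient rather than the analytic derivatives and perceptual variables used in the paper.

```python
# Toy gradient-based steering: an agent adapts its velocity by descending a
# cost that trades off goal progress against collision risk with one
# neighbour. Cost terms and weights are illustrative assumptions.
import numpy as np

def cost(v, pos, goal, n_pos, n_vel, w_goal=1.0, w_col=2.0):
    goal_dir = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    goal_term = np.linalg.norm(v - goal_dir)            # deviation from desired heading
    rel_p, rel_v = n_pos - pos, n_vel - v
    ttc = -np.dot(rel_p, rel_v) / (np.dot(rel_v, rel_v) + 1e-9)  # time of closest approach
    ttc = max(ttc, 0.0)
    closest = np.linalg.norm(rel_p + ttc * rel_v)       # predicted closest distance
    col_term = np.exp(-closest) / (1.0 + ttc)           # near and soon => expensive
    return w_goal * goal_term + w_col * col_term

def steer(v, *args, step=0.1, eps=1e-4):
    grad = np.zeros(2)
    for i in range(2):                                   # finite-difference gradient
        dv = np.zeros(2)
        dv[i] = eps
        grad[i] = (cost(v + dv, *args) - cost(v - dv, *args)) / (2 * eps)
    return v - step * grad                               # locally optimal adaptation

v = steer(np.array([1.0, 0.0]), np.array([0.0, 0.0]), np.array([10.0, 0.0]),
          np.array([5.0, 0.5]), np.array([-1.0, 0.0]))
```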