
Showing papers in "IEEE Transactions on Visualization and Computer Graphics in 2007"


Journal ArticleDOI
TL;DR: Seven general categories of interaction techniques widely used in Infovis are proposed, organized around a user's intent while interacting with a system rather than the low-level interaction techniques provided by a system.
Abstract: Even though interaction is an important part of information visualization (Infovis), it has garnered a relatively low level of attention from the Infovis community. A few frameworks and taxonomies of Infovis interaction techniques exist, but they typically focus on low-level operations and do not address the variety of benefits interaction provides. After conducting an extensive review of Infovis systems and their interactive capabilities, we propose seven general categories of interaction techniques widely used in Infovis: 1) Select, 2) Explore, 3) Reconfigure, 4) Encode, 5) Abstract/Elaborate, 6) Filter, and 7) Connect. These categories are organized around a user's intent while interacting with a system rather than the low-level interaction techniques provided by a system. The categories can act as a framework to help discuss and evaluate interaction techniques and hopefully lay an initial foundation toward a deeper understanding and a science of interaction.
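For readers who want to refer to the taxonomy programmatically, a minimal Python sketch follows; the enum mirrors the seven categories and their intent descriptions from the paper, while the example action-to-intent mapping is purely a hypothetical illustration.

```python
from enum import Enum, auto

class InteractionIntent(Enum):
    """The seven user-intent categories proposed in the paper."""
    SELECT = auto()              # mark something as interesting
    EXPLORE = auto()             # show me something else
    RECONFIGURE = auto()         # show me a different arrangement
    ENCODE = auto()              # show me a different representation
    ABSTRACT_ELABORATE = auto()  # show me more or less detail
    FILTER = auto()              # show me something conditionally
    CONNECT = auto()             # show me related items

# Hypothetical example: tagging low-level widget actions with a user intent.
ACTION_INTENTS = {
    "lasso_selection": InteractionIntent.SELECT,
    "pan_and_zoom": InteractionIntent.EXPLORE,
    "sort_axis": InteractionIntent.RECONFIGURE,
    "switch_to_scatterplot": InteractionIntent.ENCODE,
    "drill_down": InteractionIntent.ABSTRACT_ELABORATE,
    "range_slider": InteractionIntent.FILTER,
    "brush_and_link": InteractionIntent.CONNECT,
}
```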

1,018 citations


Journal ArticleDOI
TL;DR: The design and deployment of Many Eyes, a public Web site where users may upload data, create interactive visualizations, and carry on discussions, is described; the site aims to support collaboration around visualizations at a large scale by fostering a social style of data analysis.
Abstract: We describe the design and deployment of Many Eyes, a public Web site where users may upload data, create interactive visualizations, and carry on discussions. The goal of the site is to support collaboration around visualizations at a large scale by fostering a social style of data analysis in which visualizations not only serve as a discovery tool for individuals but also as a medium to spur discussion among users. To support this goal, the site includes novel mechanisms for end-user creation of visualizations and asynchronous collaboration around those visualizations. In addition to describing these technologies, we provide a preliminary report on the activity of our users.

791 citations


Journal ArticleDOI
TL;DR: NodeTrix is presented, a hybrid representation for networks that combines the advantages of two traditional representations: node-link diagrams are used to show the global structure of a network, while arbitrary portions of the network can be shown as adjacency matrices to better support the analysis of communities.
Abstract: The need to visualize large social networks is growing as hardware capabilities make analyzing large networks feasible and many new data sets become available. Unfortunately, the visualizations in existing systems do not satisfactorily resolve the basic dilemma of being readable both for the global structure of the network and also for detailed analysis of local communities. To address this problem, we present NodeTrix, a hybrid representation for networks that combines the advantages of two traditional representations: node-link diagrams are used to show the global structure of a network, while arbitrary portions of the network can be shown as adjacency matrices to better support the analysis of communities. A key contribution is a set of interaction techniques. These allow analysts to create a NodeTrix visualization by dragging selections to and from node-link and matrix forms, and to flexibly manipulate the NodeTrix representation to explore the dataset and create meaningful summary visualizations of their findings. Finally, we present a case study applying NodeTrix to the analysis of the InfoVis 2004 coauthorship dataset to illustrate the capabilities of NodeTrix as both an exploration tool and an effective means of communicating results.

550 citations


Journal ArticleDOI
TL;DR: This paper provides an overview of many curve-skeleton applications, compiles a set of desired properties of such representations, gives a taxonomy of methods, and analyzes the advantages and drawbacks of each class of algorithms.
Abstract: Curve-skeletons are thinned 1D representations of 3D objects useful for many visualization tasks including virtual navigation, reduced-model formulation, visualization improvement, animation, etc. There are many algorithms in the literature describing extraction methodologies for different applications; however, it is unclear how general and robust they are. In this paper, we provide an overview of many curve-skeleton applications and compile a set of desired properties of such representations. We also give a taxonomy of methods and analyze the advantages and drawbacks of each class of algorithms.

523 citations


Journal ArticleDOI
TL;DR: The Show Me user experience includes the automatic selection of mark types, a command to add a single field to a view, and a pair of commands to build views for multiple fields; user interface logs indicate that Show Me is used by commercial users.
Abstract: This paper describes Show Me, an integrated set of user interface commands and defaults that incorporate automatic presentation into a commercial visual analysis system called Tableau. A key aspect of Tableau is VizQL, a language for specifying views, which is used by Show Me to extend automatic presentation to the generation of tables of views (commonly called small multiple displays). A key research issue for the commercial application of automatic presentation is the user experience, which must support the flow of visual analysis. User experience has not been the focus of previous research on automatic presentation. The Show Me user experience includes the automatic selection of mark types, a command to add a single field to a view, and a pair of commands to build views for multiple fields. Although the use of these defaults and commands is optional, user interface logs indicate that Show Me is used by commercial users.

514 citations


Journal ArticleDOI
TL;DR: This paper investigates the effectiveness of animated transitions between common statistical data graphics such as bar charts, pie charts, and scatter plots, proposes design principles for creating effective transitions, and illustrates their application in DynaVis, a visualization system featuring animated data graphics.
Abstract: In this paper we investigate the effectiveness of animated transitions between common statistical data graphics such as bar charts, pie charts, and scatter plots. We extend theoretical models of data graphics to include such transitions, introducing a taxonomy of transition types. We then propose design principles for creating effective transitions and illustrate the application of these principles in DynaVis, a visualization system featuring animated data graphics. Two controlled experiments were conducted to assess the efficacy of various transition types, finding that animated transitions can significantly improve graphical perception.

495 citations


Journal ArticleDOI
TL;DR: The aim of the resulting taxonomy is to act as a guide to match techniques to problems where different criteria may have different importance, and more importantly as a means to critique and hence develop existing and new techniques.
Abstract: Information visualisation is about gaining insight into data through a visual representation. This data is often multivariate and increasingly, the datasets are very large. To help us explore all this data, numerous visualisation applications, both commercial and research prototypes, have been designed using a variety of techniques and algorithms. Whether they are dedicated to geo-spatial data or skewed hierarchical data, most of the visualisations need to adopt strategies for dealing with overcrowded displays, brought about by too much data to fit in too small a display space. This paper analyses a large number of these clutter reduction methods, classifying them both in terms of how they deal with clutter reduction and more importantly, in terms of the benefits and losses. The aim of the resulting taxonomy is to act as a guide to match techniques to problems where different criteria may have different importance, and more importantly as a means to critique and hence develop existing and new techniques.

404 citations


Journal ArticleDOI
TL;DR: This paper proposes casual information visualization (or casual infovis), a new subdomain for infovis research that complements the traditional focus on analytic tasks, expert use, and work-related infovis domains.
Abstract: Information visualization has often focused on providing deep insight for expert user populations and on techniques for amplifying cognition through complicated interactive visual models. This paper proposes a new subdomain for infovis research that complements the focus on analytic tasks and expert use. Instead of work-related and analytically driven infovis, we propose casual information visualization (or casual infovis) as a complement to more traditional infovis domains. Traditional infovis systems, techniques, and methods do not easily lend themselves to the broad range of user populations, from expert to novices, or from work tasks to more everyday situations. We propose definitions, perspectives, and research directions for further investigations of this emerging subfield. These perspectives build from ambient information visualization (Skog et al., 2003), social visualization, and also from artistic work that visualizes information (Viegas and Wattenberg, 2007). We seek to provide a perspective on infovis that integrates these research agendas under a coherent vocabulary and framework for design. We enumerate the following contributions. First, we demonstrate how blurry the boundary of infovis is by examining systems that exhibit many of the putative properties of infovis systems, but perhaps would not be considered so. Second, we explore the notion of insight and how, instead of a monolithic definition of insight, there may be multiple types, each with particular characteristics. Third, we discuss design challenges for systems intended for casual audiences. Finally we conclude with challenges for system evaluation in this emerging subfield.

389 citations


Journal ArticleDOI
TL;DR: A simple and fast mesh denoising method that can remove noise effectively while preserving mesh features such as sharp edges and corners is presented, and the convergence of the vertex position updating approach is proved.
Abstract: We present a simple and fast mesh denoising method, which can remove noise effectively while preserving mesh features such as sharp edges and corners. The method consists of two stages. First, noisy face normals are filtered iteratively by weighted averaging of neighboring face normals. Second, vertex positions are iteratively updated to agree with the denoised face normals. The weight function used during normal filtering is much simpler than that used in previous similar approaches, being simply a trimmed quadratic. This makes the algorithm both fast and simple to implement. Vertex position updating is based on the integration of surface normals using a least-squares error criterion. Like previous algorithms, we solve the least-squares problem by gradient descent; whereas previous methods needed user input to determine the iteration step size, we determine it automatically. In addition, we prove the convergence of the vertex position updating approach. Analysis and experiments show the advantages of our proposed method over various earlier surface denoising methods.
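The two-stage structure lends itself to a compact implementation. The following numpy sketch is my own rough reading of the method, not the authors' reference code: it treats faces sharing a vertex as neighbours (an assumption), filters normals with a trimmed-quadratic weight, and then performs the normal-driven vertex update.

```python
import numpy as np

def face_normals_and_centroids(V, F):
    """V: (n,3) vertex positions, F: (m,3) triangle indices."""
    p0, p1, p2 = V[F[:, 0]], V[F[:, 1]], V[F[:, 2]]
    n = np.cross(p1 - p0, p2 - p0)
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12
    return n, (p0 + p1 + p2) / 3.0

def vertex_face_adjacency(V, F):
    """For each vertex, the list of incident face indices."""
    adj = [[] for _ in range(len(V))]
    for fi, tri in enumerate(F):
        for v in tri:
            adj[v].append(fi)
    return adj

def denoise(V, F, T=0.5, normal_iters=10, vertex_iters=20):
    """Feature-preserving denoising: filter face normals, then update vertices."""
    V = V.copy()
    vf = vertex_face_adjacency(V, F)
    face_nbrs = [set() for _ in range(len(F))]   # faces sharing a vertex
    for faces in vf:
        for fi in faces:
            face_nbrs[fi].update(faces)

    N, _ = face_normals_and_centroids(V, F)
    for _ in range(normal_iters):                # stage 1: normal filtering
        N_new = np.zeros_like(N)
        for fi, nbrs in enumerate(face_nbrs):
            for fj in nbrs:
                cos = float(N[fi] @ N[fj])
                w = (cos - T) ** 2 if cos > T else 0.0   # trimmed quadratic weight
                N_new[fi] += w * N[fj]
        N = N_new / (np.linalg.norm(N_new, axis=1, keepdims=True) + 1e-12)

    for _ in range(vertex_iters):                # stage 2: vertex position update
        _, C = face_normals_and_centroids(V, F)
        for vi, faces in enumerate(vf):
            if not faces:
                continue
            step = np.zeros(3)
            for fi in faces:
                step += N[fi] * (N[fi] @ (C[fi] - V[vi]))
            V[vi] += step / len(faces)
    return V
```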

303 citations


Journal ArticleDOI
TL;DR: This paper presents scented widgets, graphical user interface controls enhanced with embedded visualizations that facilitate navigation in information spaces and describes a controlled experiment which finds that users exploring unfamiliar data make up to twice as many unique discoveries using widgets imbued with social navigation data.
Abstract: This paper presents scented widgets, graphical user interface controls enhanced with embedded visualizations that facilitate navigation in information spaces. We describe design guidelines for adding visual cues to common user interface widgets such as radio buttons, sliders, and combo boxes and contribute a general software framework for applying scented widgets within applications with minimal modifications to existing source code. We provide a number of example applications and describe a controlled experiment which finds that users exploring unfamiliar data make up to twice as many unique discoveries using widgets imbued with social navigation data. However, these differences equalize as familiarity with the data increases.

281 citations


Journal ArticleDOI
TL;DR: This paper surveys the state of the art in the major topics of hair modeling: hairstyling, hair simulation, and hair rendering, presenting the unique challenges facing each area and describing solutions that have been presented over the years.
Abstract: Realistic hair modeling is a fundamental part of creating virtual humans in computer graphics. This paper surveys the state of the art in the major topics of hair modeling: hairstyling, hair simulation, and hair rendering. Because of the difficult, often unsolved problems that arise in all these areas, a broad diversity of approaches is used, each with strengths that make it appropriate for particular applications. We discuss each of these major topics in turn, presenting the unique challenges facing each area and describing solutions that have been presented over the years to handle these complex issues. Finally, we outline some of the remaining computational challenges in hair modeling.

Journal ArticleDOI
TL;DR: This example-based technique supports user-controlled terrain synthesis in a wide variety of styles, drawing on the visual richness of real-world terrain data and emphasizing large-scale curvilinear features (ridges and valleys) because such features are the dominant visual elements in most terrains.
Abstract: In this paper, we present an example-based system for terrain synthesis. In our approach, patches from a sample terrain (represented by a height field) are used to generate a new terrain. The synthesis is guided by a user-sketched feature map that specifies where terrain features occur in the resulting synthetic terrain. Our system emphasizes large-scale curvilinear features (ridges and valleys) because such features are the dominant visual elements in most terrains. Both the example height field and user's sketch map are analyzed using a technique from the field of geomorphology. The system finds patches from the example data that match the features found in the user's sketch. Patches are joined together using graph cuts and Poisson editing. The order in which patches are placed in the synthesized terrain is determined by breadth-first traversal of a feature tree and this generates improved results over standard raster-scan placement orders. Our technique supports user-controlled terrain synthesis in a wide variety of styles, based upon the visual richness of real-world terrain data.

Journal ArticleDOI
TL;DR: VisLink readily generalizes to support multiple visualizations, empowers inter-representational queries, and enables the reuse of the spatial variables, thus supporting efficient information encoding and providing for powerful visualization bridging.
Abstract: We present VisLink, a method by which visualizations and the relationships between them can be interactively explored. VisLink readily generalizes to support multiple visualizations, empowers inter-representational queries, and enables the reuse of the spatial variables, thus supporting efficient information encoding and providing for powerful visualization bridging. Our approach uses multiple 2D layouts, drawing each one in its own plane. These planes can then be placed and re-positioned in 3D space: side by side, in parallel, or in chosen placements that provide favoured views. Relationships, connections, and patterns between visualizations can be revealed and explored using a variety of interaction techniques including spreading activation and search filters.

Journal ArticleDOI
TL;DR: This paper introduces an effective shape signature that is also pose-oblivious, meaning that the signature is insensitive to transformations, such as skeletal articulations, that change the pose of a 3D shape.
Abstract: A 3D shape signature is a compact representation for some essence of a shape. Shape signatures are commonly utilized as a fast indexing mechanism for shape retrieval. Effective shape signatures capture some global geometric properties which are scale, translation, and rotation invariant. In this paper, we introduce an effective shape signature which is also pose-oblivious. This means that the signature is also insensitive to transformations which change the pose of a 3D shape such as skeletal articulations. Although some topology-based matching methods can be considered pose-oblivious as well, our new signature retains the simplicity and speed of signature indexing. Moreover, contrary to topology-based methods, the new signature is also insensitive to the topology change of the shape, allowing us to match similar shapes with different genus. Our shape signature is a 2D histogram which is a combination of the distribution of two scalar functions defined on the boundary surface of the 3D shape. The first is a definition of a novel function called the local-diameter function. This function measures the diameter of the 3D shape in the neighborhood of each vertex. The histogram of this function is an informative measure of the shape which is insensitive to pose changes. The second is the centricity function that measures the average geodesic distance from one vertex to all other vertices on the mesh. We evaluate and compare a number of methods for measuring the similarity between two signatures, and demonstrate the effectiveness of our pose-oblivious shape signature within a 3D search engine application for different databases containing hundreds of models
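To make the centricity half of the signature concrete, here is a small sketch that approximates geodesic distances by shortest paths along mesh edges using networkx; the local-diameter function and the authors' actual distance computation are omitted, and the function names are my own.

```python
import numpy as np
import networkx as nx

def centricity(V, F):
    """Average edge-graph geodesic distance from each vertex to all others.

    V: (n,3) vertex positions, F: (m,3) triangle indices.
    Edge-graph shortest paths only approximate true surface geodesics,
    and the all-pairs loop is meant for small meshes.
    """
    G = nx.Graph()
    for a, b, c in F:
        for u, v in ((a, b), (b, c), (c, a)):
            G.add_edge(int(u), int(v), weight=float(np.linalg.norm(V[u] - V[v])))
    n = len(V)
    cent = np.zeros(n)
    for s in range(n):
        dist = nx.single_source_dijkstra_path_length(G, s, weight="weight")
        cent[s] = sum(dist.values()) / (n - 1)
    return cent

# A 2D histogram of (local diameter, centricity) over the surface would then
# form the pose-oblivious signature described above, e.g. via
# np.histogram2d(local_diameter_values, centricity_values, bins=32).
```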

Journal ArticleDOI
Filip Sadlo, Ronald Peikert
TL;DR: A method for filtered ridge extraction based on adaptive mesh refinement is presented; it allows a substantial speed-up by avoiding the seeding of trajectories in regions where no ridges are present or where ridges do not satisfy the prescribed filter criteria, such as a minimum finite Lyapunov exponent.
Abstract: This paper presents a method for filtered ridge extraction based on adaptive mesh refinement. It is applicable in situations where the underlying scalar field can be refined during ridge extraction. This requirement is met by the concept of Lagrangian coherent structures which is based on trajectories started at arbitrary sampling grids that are independent of the underlying vector field. The Lagrangian coherent structures are extracted as ridges in finite Lyapunov exponent fields computed from these grids of trajectories. The method is applied to several variants of finite Lyapunov exponents, one of which is newly introduced. High computation time due to the high number of required trajectories is a main drawback when computing Lyapunov exponents of 3-dimensional vector fields. The presented method allows a substantial speed-up by avoiding the seeding of trajectories in regions where no ridges are present or do not satisfy the prescribed filter criteria such as a minimum finite Lyapunov exponent.
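For readers unfamiliar with finite-time Lyapunov exponent (FTLE) fields, the sketch below computes one on a regular 2D sampling grid for an arbitrary velocity function; it uses plain RK4 integration and finite differences of the flow map, and deliberately omits the paper's adaptive mesh refinement and ridge filtering (grid sizes, parameters, and function names are illustrative).

```python
import numpy as np

def advect(vel, x0, t0, T, steps=100):
    """Integrate trajectories of vel(t, x) from x0 over [t0, t0+T] with RK4.
    x0: (..., 2) array of seed positions."""
    x = x0.astype(float).copy()
    h = T / steps
    t = t0
    for _ in range(steps):
        k1 = vel(t, x)
        k2 = vel(t + 0.5 * h, x + 0.5 * h * k1)
        k3 = vel(t + 0.5 * h, x + 0.5 * h * k2)
        k4 = vel(t + h, x + h * k3)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

def ftle(vel, xs, ys, t0=0.0, T=10.0):
    """FTLE on the tensor grid (xs, ys) from the flow-map gradient."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    phi = advect(vel, np.stack([X, Y], axis=-1), t0, T)     # flow map
    dpx_dx, dpx_dy = np.gradient(phi[..., 0], xs, ys)
    dpy_dx, dpy_dy = np.gradient(phi[..., 1], xs, ys)
    out = np.zeros(X.shape)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            J = np.array([[dpx_dx[i, j], dpx_dy[i, j]],
                          [dpy_dx[i, j], dpy_dy[i, j]]])
            C = J.T @ J                                      # Cauchy-Green tensor
            out[i, j] = np.log(np.sqrt(np.linalg.eigvalsh(C)[-1])) / abs(T)
    return out

# Example: the classic double-gyre test flow (conventional parameters).
def double_gyre(t, x, A=0.1, eps=0.25, om=2 * np.pi / 10):
    a = eps * np.sin(om * t)
    b = 1 - 2 * a
    f = a * x[..., 0] ** 2 + b * x[..., 0]
    dfdx = 2 * a * x[..., 0] + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * x[..., 1])
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * x[..., 1]) * dfdx
    return np.stack([u, v], axis=-1)

field = ftle(double_gyre, np.linspace(0, 2, 100), np.linspace(0, 1, 50))
```

Ridges of such a field are the Lagrangian coherent structures the paper extracts and filters.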

Journal ArticleDOI
TL;DR: This work proposes an approach inspired by geographical 'mashups' in which freely-available functionality and data are loosely but flexibly combined using de facto exchange standards to allow Google Earth to be used for visual synthesis and interaction with encodings described in KML.
Abstract: Exploratory visual analysis is useful for the preliminary investigation of large structured, multifaceted spatio-temporal datasets. This process requires the selection and aggregation of records by time, space and attribute, the ability to transform data and the flexibility to apply appropriate visual encodings and interactions. We propose an approach inspired by geographical 'mashups' in which freely-available functionality and data are loosely but flexibly combined using de facto exchange standards. Our case study combines MySQL, PHP and the LandSerf GIS to allow Google Earth to be used for visual synthesis and interaction with encodings described in KML. This approach is applied to the exploration of a log of 1.42 million requests made of a mobile directory service. Novel combinations of interaction and visual encoding are developed including spatial 'tag clouds', 'tag maps', 'data dials' and multi-scale density surfaces. Four aspects of the approach are informally evaluated: the visual encodings employed, their success in the visual exploration of the dataset, the specific tools used and the 'mashup' approach. Preliminary findings will be beneficial to others considering using mashups for visualization. The specific techniques developed may be more widely applied to offer insights into the structure of multifarious spatio-temporal data of the type explored here.
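As a minimal illustration of the kind of KML such a pipeline hands to Google Earth, the sketch below exports bare placemarks; the encodings developed in the paper (tag maps, data dials, density surfaces) are far richer, and the record fields here are assumptions.

```python
from xml.sax.saxutils import escape

def to_kml(records):
    """records: iterable of (name, lon, lat) tuples -> KML document string."""
    placemarks = "\n".join(
        "    <Placemark>\n"
        f"      <name>{escape(name)}</name>\n"
        f"      <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
        "    </Placemark>"
        for name, lon, lat in records
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        "  <Document>\n"
        f"{placemarks}\n"
        "  </Document>\n"
        "</kml>\n"
    )

# e.g. open("requests.kml", "w").write(to_kml([("pizza", -0.1278, 51.5074)]))
```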

Journal ArticleDOI
TL;DR: A system is proposed in which a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, handles remote visualization sessions for complex 3D models based on MPEG video streaming.
Abstract: Mobile devices such as personal digital assistants, tablet PCs, and cellular phones have greatly enhanced user capability to connect to remote resources. Although a large set of applications is now available bridging the gap between desktop and mobile devices, visualization of complex 3D models is still a task hard to accomplish without specialized hardware. This paper proposes a system where a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, is able to handle remote visualization sessions based on MPEG video streaming involving complex 3D models. The proposed framework allows mobile devices such as smart phones, personal digital assistants (PDAs), and tablet PCs to visualize objects consisting of millions of textured polygons and voxels at a frame rate of 30 fps or more depending on hardware resources at the server side and on multimedia capabilities at the client side. The server is able to concurrently manage multiple clients computing a video stream for each one; resolution and quality of each stream is tailored according to screen resolution and bandwidth of the client. The paper investigates in depth issues related to latency time, bit rate and quality of the generated stream, screen resolutions, as well as frames per second displayed

Journal ArticleDOI
TL;DR: A novel shape matching algorithm is introduced that matches hand shape to object shape by identifying collections of features having similar relative placements and surface normals; it returns many grasp candidates, which are clustered and pruned by choosing the grasp best suited for the intended task.
Abstract: Human grasps, especially whole-hand grasps, are difficult to animate because of the high number of degrees of freedom of the hand and the need for the hand to conform naturally to the object surface. Captured human motion data provides us with a rich source of examples of natural grasps. However, for each new object, we are faced with the problem of selecting the best grasp from the database and adapting it to that object. This paper presents a data-driven approach to grasp synthesis. We begin with a database of captured human grasps. To identify candidate grasps for a new object, we introduce a novel shape matching algorithm that matches hand shape to object shape by identifying collections of features having similar relative placements and surface normals. This step returns many grasp candidates, which are clustered and pruned by choosing the grasp best suited for the intended task. For pruning undesirable grasps, we develop an anatomically-based grasp quality measure specific to the human hand. Examples of grasp synthesis are shown for a variety of objects not present in the original database. This algorithm should be useful both as an animator tool for posing the hand and for automatic grasp synthesis in virtual environments.

Journal ArticleDOI
TL;DR: In this article, the authors discuss protocols for measuring egocentric depth judgments in both virtual and augmented environments, and discuss the well-known problem of depth underestimation in virtual environments.
Abstract: A fundamental problem in optical, see-through augmented reality (AR) is characterizing how it affects the perception of spatial layout and depth. This problem is important because AR system developers need to both place graphics in arbitrary spatial relationships with real-world objects, and to know that users will perceive them in the same relationships. Furthermore, AR makes possible enhanced perceptual techniques that have no real-world equivalent, such as x-ray vision, where AR users are supposed to perceive graphics as being located behind opaque surfaces. This paper reviews and discusses protocols for measuring egocentric depth judgments in both virtual and augmented environments, and discusses the well-known problem of depth underestimation in virtual environments. It then describes two experiments that measured egocentric depth judgments in AR. Experiment I used a perceptual matching protocol to measure AR depth judgments at medium and far-field distances of 5 to 45 meters. The experiment studied the effects of upper versus lower visual field location, the x-ray vision condition, and practice on the task. The experimental findings include evidence for a switch in bias, from underestimating to overestimating the distance of AR-presented graphics, at ~ 23 meters, as well as a quantification of how much more difficult the x-ray vision condition makes the task. Experiment II used blind walking and verbal report protocols to measure AR depth judgments at distances of 3 to 7 meters. The experiment examined real-world objects, real-world objects seen through the AR display, virtual objects, and combined real and virtual objects. The results give evidence that the egocentric depth of AR objects is underestimated at these distances, but to a lesser degree than has previously been found for most virtual reality environments. The results are consistent with previous studies that have implicated a restricted field-of-view, combined with an inability for observers to scan the ground plane in a near-to-far direction, as explanations for the observed depth underestimation.

Journal ArticleDOI
TL;DR: A new method to derive analytical expressions for the spring parameters from an isotropic linear elastic reference model is described and expressions for several mesh topologies are derived.
Abstract: Mass spring models are frequently used to simulate deformable objects because of their conceptual simplicity and computational speed. Unfortunately, the model parameters are not related to elastic material constitutive laws in an obvious way. Several methods to set optimal parameters have been proposed but, so far, only with limited success. We analyze the parameter identification problem and show the difficulties, which have prevented previous work from reaching wide usage. Our main contribution is a new method to derive analytical expressions for the spring parameters from an isotropic linear elastic reference model. The method is described and expressions for several mesh topologies are derived. These include triangle, rectangle, and tetrahedron meshes. The formulas are validated by comparing the static deformation of the MSM with reference deformations simulated with the finite element method.
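The flavour of the parameter-identification problem can already be seen in one dimension, where the analytical answer is elementary: a bar with Young's modulus E, cross-section A and length L has stiffness EA/L, so a serial chain of n springs must use k = nEA/L per spring to reproduce it. The sketch below checks exactly that; it is a toy analogue, not the paper's triangle, rectangle, or tetrahedron formulas.

```python
import numpy as np

def static_stretch(k, n, force):
    """Elongation of a chain of n identical springs (stiffness k) under force."""
    # Springs in series: compliances add, so the total stiffness is k / n.
    return force / (k / n)

E, A, L, n, force = 1e6, 1e-4, 1.0, 10, 50.0
analytic = force * L / (E * A)      # linear-elastic bar: u = F L / (E A)
k = n * E * A / L                   # per-spring stiffness identified from the bar
assert np.isclose(static_stretch(k, n, force), analytic)
```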

Journal ArticleDOI
TL;DR: A new treemap layout algorithm is presented to reduce abrupt layout changes and produce consistent visual patterns and user studies show that the users can better understand the changes in the hierarchy and layout, and more quickly notice the color and size differences using this method.
Abstract: While the treemap is a popular method for visualizing hierarchical data, it is often difficult for users to track layout and attribute changes when the data evolve over time. When viewing the treemaps side by side or back and forth, there exist several problems that can prevent viewers from performing effective comparisons. Those problems include abrupt layout changes, a lack of prominent visual patterns to represent layouts, and a lack of direct contrast to highlight differences. In this paper, we present strategies to visualize changes of hierarchical data using treemaps. A new treemap layout algorithm is presented to reduce abrupt layout changes and produce consistent visual patterns. Techniques are proposed to effectively visualize the difference and contrast between two treemap snapshots in terms of the map items' colors, sizes, and positions. Experimental data show that our algorithm can achieve a good balance in maintaining a treemap's stability, continuity, readability, and average aspect ratio. A software tool is created to compare treemaps and generate the visualizations. User studies show that the users can better understand the changes in the hierarchy and layout, and more quickly notice the color and size differences using our method.
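For context, a treemap layout simply maps node weights to nested rectangles. The sketch below implements only the classic slice-and-dice scheme as a baseline; it is not the stability-oriented layout algorithm contributed by the paper.

```python
def slice_and_dice(node, x, y, w, h, depth=0, out=None):
    """node: (label, weight, children), where an internal node's weight equals
    the sum of its children's weights. Returns (label, x, y, w, h) rectangles."""
    if out is None:
        out = []
    label, weight, children = node
    out.append((label, x, y, w, h))
    if not children or weight <= 0:
        return out
    offset = 0.0
    for child in children:
        frac = child[1] / weight
        if depth % 2 == 0:   # alternate the split direction at each level
            slice_and_dice(child, x + offset * w, y, frac * w, h, depth + 1, out)
        else:
            slice_and_dice(child, x, y + offset * h, w, frac * h, depth + 1, out)
        offset += frac
    return out

tree = ("root", 10, [("a", 6, []), ("b", 3, []), ("c", 1, [])])
rects = slice_and_dice(tree, 0, 0, 100, 100)
```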

Journal ArticleDOI
TL;DR: TopoLayout is a feature-based, multilevel algorithm that draws undirected graphs based on the topological features they contain; it frequently improves results in terms of speed and visual quality on data sets with a range of connectivities and sizes.
Abstract: We describe TopoLayout, a feature-based, multilevel algorithm that draws undirected graphs based on the topological features they contain. Topological features are detected recursively inside the graph, and their subgraphs are collapsed into single nodes, forming a graph hierarchy. Each feature is drawn with an algorithm tuned for its topology. As would be expected from a feature-based approach, the runtime and visual quality of TopoLayout depends on the number and types of topological features present in the graph. We show experimental results comparing speed and visual quality for TopoLayout against four other multilevel algorithms on a variety of data sets with a range of connectivities and sizes. TopoLayout frequently improves the results in terms of speed and visual quality on these data sets
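The core idea, choosing a drawing algorithm per detected topology, can be caricatured in a few lines with networkx; the real system detects many more feature types (trees, biconnected components, clusters, and so on) and builds a full hierarchy, so treat this purely as an illustration of the dispatch step.

```python
import networkx as nx

def layout_by_topology(G):
    """Lay out each connected component of an undirected graph with an
    algorithm picked by a crude topology test, then pack components."""
    pos = {}
    for i, nodes in enumerate(nx.connected_components(G)):
        sub = G.subgraph(nodes)
        if nx.is_tree(sub):
            p = nx.kamada_kawai_layout(sub)      # trees: distance-based layout
        elif nx.density(sub) > 0.5:
            p = nx.circular_layout(sub)          # dense near-cliques: circular
        else:
            p = nx.spring_layout(sub, seed=0)    # general case: force-directed
        for v, (x, y) in p.items():
            pos[v] = (x + 3.0 * i, y)            # simple horizontal packing
    return pos
```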

Journal ArticleDOI
TL;DR: The methods have been evaluated by radiologists in a study simulating the clinical task of stenosis assessment, in which the animation technique is shown to outperform traditional rendering in terms of assessment accuracy.
Abstract: Direct volume rendering has proved to be an effective visualization method for medical data sets and has reached wide-spread clinical use. The diagnostic exploration, in essence, corresponds to a tissue classification task, which is often complex and time-consuming. Moreover, a major problem is the lack of information on the uncertainty of the classification, which can have dramatic consequences for the diagnosis. In this paper this problem is addressed by proposing animation methods to convey uncertainty in the rendering. The foundation is a probabilistic Transfer Function model which allows for direct user interaction with the classification. The rendering is animated by sampling the probability domain over time, which results in varying appearance for uncertain regions. A particularly promising application of this technique is a "sensitivity lens" applied to focus regions in the data set. The methods have been evaluated by radiologists in a study simulating the clinical task of stenosis assessment, in which the animation technique is shown to outperform traditional rendering in terms of assessment accuracy.
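The animation idea can be paraphrased as: each sample has a probability of belonging to each material, and every frame redraws the classification by sampling from that distribution, so uncertain regions visibly flicker between appearances. A small sketch of that per-frame sampling step (illustrative only; the actual probabilistic transfer function model and renderer are the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_classification(material_probs):
    """material_probs: (n_voxels, n_materials) rows summing to 1.
    Returns one material index per voxel, resampled independently per frame,
    so voxels with ambiguous probabilities change appearance over time."""
    cdf = np.cumsum(material_probs, axis=1)
    u = rng.random((material_probs.shape[0], 1))
    return (u < cdf).argmax(axis=1)              # inverse-CDF sampling per voxel

# Two voxels: one certain, one split 50/50 between materials 0 and 1.
probs = np.array([[1.0, 0.0], [0.5, 0.5]])
frames = [frame_classification(probs) for _ in range(8)]  # animate over frames
```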

Journal ArticleDOI
TL;DR: The P-Set model of visualization exploration is introduced to describe the exploration process, together with a framework to encapsulate, share, and analyze visual explorations, providing an effective means to exploit the information within the visual exploration process.
Abstract: Visualization exploration is the process of extracting insight from data via interaction with visual depictions of that data. Visualization exploration is more than presentation; the interaction with both the data and its depiction is as important as the data and depiction itself. Significant visualization research has focused on the generation of visualizations (the depiction); less effort has focused on the exploratory aspects of visualization (the process). However, without formal models of the process, visualization exploration sessions cannot be fully utilized to assist users and system designers. Toward this end, we introduce the P-Set model of visualization exploration for describing this process and a framework to encapsulate, share, and analyze visual explorations. In addition, systems utilizing the model and framework are more efficient as redundant exploration is avoided. Several examples drawn from visualization applications demonstrate these benefits. Taken together, the model and framework provide an effective means to exploit the information within the visual exploration process
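In the spirit of the model (though not its formal semantics), an exploration session can be captured as states, each described by a set of parameter values, plus derivation edges recording which state was obtained from which and what changed. A minimal, hypothetical structure:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class VisState:
    """One visualization result, described by its parameter set."""
    params: frozenset   # e.g. frozenset({("colormap", "viridis"), ("zoom", 2)})

@dataclass
class ExplorationSession:
    states: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (from_idx, to_idx, changes)

    def apply(self, from_idx, **changes):
        """Derive a new state from an existing one by changing parameters."""
        params = dict(self.states[from_idx].params)
        params.update(changes)
        self.states.append(VisState(frozenset(params.items())))
        self.edges.append((from_idx, len(self.states) - 1, tuple(changes.items())))
        return len(self.states) - 1

session = ExplorationSession(states=[VisState(frozenset({("zoom", 1)}))])
s1 = session.apply(0, zoom=2)          # redundant exploration could be detected
s2 = session.apply(0, colormap="hot")  # by checking for an equal existing state
```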

Journal ArticleDOI
TL;DR: A framework for direct volume rendering is described that segments a volume into regions of equivalent contour topology and applies a separate transfer function to each region, so that a unique transfer function can be assigned to each subvolume corresponding to a branch of the contour tree.
Abstract: Topology provides a foundation for the development of mathematically sound tools for processing and exploration of scalar fields. Existing topology-based methods can be used to identify interesting features in volumetric data sets, to find seed sets for accelerated isosurface extraction, or to treat individual connected components as distinct entities for isosurfacing or interval volume rendering. We describe a framework for direct volume rendering based on segmenting a volume into regions of equivalent contour topology and applying separate transfer functions to each region. Each region corresponds to a branch of a hierarchical contour tree decomposition, and a separate transfer function can be defined for it. The novel contributions of our work are: 1) a volume rendering framework and interface where a unique transfer function can be assigned to each subvolume corresponding to a branch of the contour tree, 2) a runtime method for adjusting data values to reflect contour tree simplifications, 3) an efficient way of mapping a spatial location into the contour tree to determine the applicable transfer function, and 4) an algorithm for hardware-accelerated direct volume rendering that visualizes the contour tree-based segmentation at interactive frame rates using graphics processing units (GPUs) that support loops and conditional branches in fragment programs

Journal ArticleDOI
TL;DR: A new technique, based on Conley theory, allows for the systematic creation and cancellation of fixed points and periodic orbits, enabling vector field design and editing on the plane and on surfaces with desired qualitative properties.
Abstract: Design and control of vector fields is critical for many visualization and graphics tasks such as vector field visualization, fluid simulation, and texture synthesis. The fundamental qualitative structures associated with vector fields are fixed points, periodic orbits, and separatrices. In this paper, we provide a new technique that allows for the systematic creation and cancellation of fixed points and periodic orbits. This technique enables vector field design and editing on the plane and surfaces with desired qualitative properties. The technique is based on Conley theory, which provides a unified framework that supports the cancellation of fixed points and periodic orbits. We also introduce a novel periodic orbit extraction and visualization algorithm that detects, for the first time, periodic orbits on surfaces. Furthermore, we describe the application of our periodic orbit detection and vector field simplification algorithms to engine simulation data demonstrating the utility of the approach. We apply our design system to vector field visualization by creating data sets containing periodic orbits. This helps us understand the effectiveness of existing visualization techniques. Finally, we propose a new streamline-based technique that allows vector field topology to be easily identified.
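One common way design systems of this kind expose fixed points to the user is by summing radially attenuated basis fields, each contributing one singular element; the sketch below builds such a planar field. This is the generic basis-field construction, not the Conley-theoretic creation and cancellation machinery the paper actually relies on.

```python
import numpy as np

def basis_element(P, p0, J, decay=4.0):
    """Linear singular element with Jacobian J at p0, Gaussian-attenuated.
    P: (..., 2) array of sample positions."""
    d = P - p0
    w = np.exp(-decay * np.sum(d * d, axis=-1, keepdims=True))
    return w * (d @ J.T)

SOURCE = np.eye(2)
SINK = -np.eye(2)
CENTER = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotational fixed point
SADDLE = np.array([[1.0, 0.0], [0.0, -1.0]])

xs = ys = np.linspace(-1, 1, 64)
P = np.stack(np.meshgrid(xs, ys, indexing="ij"), axis=-1)
V = (basis_element(P, np.array([-0.5, 0.0]), SOURCE)
     + basis_element(P, np.array([0.5, 0.0]), SADDLE))   # a designed field
```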

Journal ArticleDOI
Danyel Fisher
TL;DR: The imagery acquisition task that motivated Hotmap is discussed, several examples of information that Hotmap makes visible are presented, and the design choices behind Hotmap are described, including logarithmic color schemes, low-saturation background images, and tuning images to explore both infrequently-viewed and frequently-viewed spaces.
Abstract: Understanding how people use online maps allows data acquisition teams to concentrate their efforts on the portions of the map that are most seen by users. Online maps represent vast databases, and so it is insufficient to simply look at a list of the most-accessed URLs. Hotmap takes advantage of the design of a mapping system's imagery pyramid to superpose a heatmap of the log files over the original maps. Users' behavior within the system can be observed and interpreted. This paper discusses the imagery acquisition task that motivated Hotmap, and presents several examples of information that Hotmap makes visible. We discuss the design choices behind Hotmap, including logarithmic color schemes; low-saturation background images; and tuning images to explore both infrequently-viewed and frequently-viewed spaces.
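The core aggregation is straightforward: bucket tile requests by tile coordinate and map counts through a logarithmic color scale so that both rarely and heavily viewed areas remain visible. A small sketch of that step (the log format and function names are assumptions; the real system works on the imagery pyramid of an online map service):

```python
import numpy as np

def tile_heat(requests, width, height):
    """requests: iterable of (tile_x, tile_y) ints -> per-tile request counts."""
    counts = np.zeros((height, width))
    for x, y in requests:
        if 0 <= x < width and 0 <= y < height:
            counts[y, x] += 1
    return counts

def log_colormap(counts):
    """Map counts to [0, 1] on a log scale so sparse and dense tiles both show."""
    top = counts.max() if counts.max() > 0 else 1
    return np.log1p(counts) / np.log1p(top)

heat = log_colormap(tile_heat([(1, 1), (1, 1), (1, 1), (3, 2)], 8, 4))
# heat could then be alpha-blended over a low-saturation base map image.
```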

Journal ArticleDOI
TL;DR: This article presents an interactive design system that allows a user to create a wide variety of symmetric tensor fields over 3D surfaces either from scratch or by modifying a meaningful input tensor field such as the curvature tensor.
Abstract: Designing tensor fields in the plane and on surfaces is a necessary task in many graphics applications, such as painterly rendering, pen-and-ink sketching of smooth surfaces, and anisotropic remeshing. In this article, we present an interactive design system that allows a user to create a wide variety of symmetric tensor fields over 3D surfaces either from scratch or by modifying a meaningful input tensor field such as the curvature tensor. Our system converts each user specification into a basis tensor field and combines them with the input field to make an initial tensor field. However, such a field often contains unwanted degenerate points which cannot always be eliminated due to topological constraints of the underlying surface. To reduce the artifacts caused by these degenerate points, our system allows the user to move a degenerate point or to cancel a pair of degenerate points that have opposite tensor indices. These operations provide control over the number and location of the degenerate points in the field. We observe that a tensor field can be locally converted into a vector field so that there is a one-to-one correspondence between the set of degenerate points in the tensor field and the set of singularities in the vector field. This conversion allows us to effectively perform degenerate point pair cancellation and movement by using similar operations for vector fields. In addition, we adapt the image-based flow visualization technique to tensor fields, therefore allowing interactive display of tensor fields on surfaces. We demonstrate the capabilities of our tensor field design system with painterly rendering, pen-and-ink sketching of surfaces, and anisotropic remeshing
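To fix ideas, a symmetric 2D tensor field can likewise be assembled from attenuated basis tensors, and its major-eigenvector field (the quantity typically used for hatching or remeshing directions) recovered pointwise. The construction below is a generic planar illustration, not the paper's design operations on surfaces or its degenerate-point editing.

```python
import numpy as np

def basis_tensor(P, p0, M, decay=4.0):
    """Constant symmetric tensor M, Gaussian-attenuated around p0."""
    d = P - p0
    w = np.exp(-decay * np.sum(d * d, axis=-1))
    return w[..., None, None] * M

xs = ys = np.linspace(-1, 1, 32)
P = np.stack(np.meshgrid(xs, ys, indexing="ij"), axis=-1)

# Two user "strokes": horizontal alignment on the left, diagonal on the right.
H = np.array([[1.0, 0.0], [0.0, -1.0]])   # traceless, major axis along x
D = np.array([[0.0, 1.0], [1.0, 0.0]])    # traceless, major axis along the diagonal
T = (basis_tensor(P, np.array([-0.5, 0.0]), H)
     + basis_tensor(P, np.array([0.5, 0.0]), D))

# Major eigenvector per sample (eigh sorts eigenvalues in ascending order);
# samples where the eigenvalues coincide are the degenerate points.
vals, vecs = np.linalg.eigh(T)
major = vecs[..., :, -1]                  # direction field for rendering
```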

Journal ArticleDOI
TL;DR: A new system is presented that facilitates hierarchical data comparison tasks for co-located collaborative work; it supports multi-user input, shared and individual views on the hierarchical data visualization, flexible use of representations, and flexible workspace organization to facilitate group work around visualizations.
Abstract: In many domains, increased collaboration has led to more innovation by fostering the sharing of knowledge, skills, and ideas. Shared analysis of information visualizations not only leads to increased information processing power; team members can also share, negotiate, and discuss their views and interpretations on a dataset and contribute unique perspectives on a given problem. Designing technologies to support collaboration around information visualizations poses special challenges and relatively few systems have been designed. We focus on supporting small groups collaborating around information visualizations in a co-located setting, using a shared interactive tabletop display. We introduce an analysis of challenges and requirements for the design of co-located collaborative information visualization systems. We then present a new system that facilitates hierarchical data comparison tasks for this type of collaborative work. Our system supports multi-user input, shared and individual views on the hierarchical data visualization, flexible use of representations, and flexible workspace organization to facilitate group work around visualizations.

Journal ArticleDOI
TL;DR: This paper describes a generalization of the god-object method for haptic interaction between rigid bodies; a novel constraint-based quasi-static approach to computing the force applied to the user suppresses force artifacts typically found in previous methods.
Abstract: This paper describes a generalization of the god-object method for haptic interaction between rigid bodies. Our approach separates the computation of the motion of the six degree-of-freedom god-object from the computation of the force applied to the user. The motion of the god-object is computed using continuous collision detection and constraint-based quasi-statics, which enables high-quality haptic interaction between contacting rigid bodies. The force applied to the user is computed using a novel constraint-based quasi-static approach, which allows us to suppress force artifacts typically found in previous methods. The constraint-based force applied to the user, which handles any number of simultaneous contact points, is computed within a few microseconds, while the update of the configuration of the rigid god-object is performed within a few milliseconds for rigid bodies containing up to tens of thousands of triangles. Our approach has been successfully tested on complex benchmarks. Our results show that the separation into asynchronous processes allows us to satisfy the different update rates required by the haptic and visual displays. Force shading and textures can be added and enlarge the range of haptic perception of a virtual environment. This paper is an extension of M. Ortega et al., [2006]
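For readers unfamiliar with god-object rendering, the classic 3-DOF version already conveys the idea: a proxy point is kept on the constraint surface while the device penetrates it, and the displayed force is a spring pulling the device toward the proxy. The sketch below shows only that baseline for a single plane; the paper's contribution is the much harder six degree-of-freedom rigid-body generalization with constraint-based quasi-statics, which is not reproduced here.

```python
import numpy as np

def god_object_force(device_pos, plane_point, plane_normal, stiffness=800.0):
    """3-DOF proxy constrained to the half-space above a plane.
    Returns (proxy_position, force_on_user)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    depth = float(np.dot(device_pos - plane_point, n))
    if depth >= 0.0:                        # device on the free side: no contact
        return device_pos.copy(), np.zeros(3)
    proxy = device_pos - depth * n          # closest point on the surface
    force = stiffness * (proxy - device_pos)
    return proxy, force

proxy, f = god_object_force(np.array([0.0, 0.0, -0.002]),
                            np.array([0.0, 0.0, 0.0]),
                            np.array([0.0, 0.0, 1.0]))
```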