Other affiliations: University of Minnesota
Bio: Lauren Thorson is an academic researcher from the Minneapolis College of Art and Design. The author has contributed to research in topics: Data visualization & Information visualization. The author has an h-index of 3 and has co-authored 5 publications receiving 41 citations. Previous affiliations of Lauren Thorson include the University of Minnesota.
14 Oct 2012
TL;DR: If design-based evaluation methodologies (e.g., ideation, sketching, critique) can be taught and embraced within the visualization community then these may become one of the most effective future strategies for both formative and summative evaluations.
Abstract: In this position paper we discuss the successes and limitations of current evaluation strategies for scientific visualizations and argue for embracing a mixed-methods strategy of evaluation. The most novel contribution of the approach that we advocate is a new emphasis on employing design processes as practiced in related fields (e.g., graphic design, illustration, architecture) as a formalized mode of evaluation for data visualizations. To motivate this position, we describe a series of recent evaluations of scientific visualization interfaces and computer graphics strategies conducted within our research group. Complementing these more traditional evaluations, our visualization research group also regularly employs sketching, critique, and other design methods that have been formalized over years of practice in design fields. Our experience has convinced us that these activities are invaluable, often providing much more detailed evaluative feedback about our visualization systems than that obtained via more traditional user studies and the like. We believe that if design-based evaluation methodologies (e.g., ideation, sketching, critique) can be taught and embraced within the visualization community, then these may become one of the most effective future strategies for both formative and summative evaluations.
TL;DR: In this paper, the authors present a taxonomy of motion visualizations organized by the method (animation, interaction, or static presentation) used to depict both the spatial and temporal dimensions of the data.
Abstract: We present a study of interactive virtual reality visualizations of scientific motions as found in biomechanics experiments. Our approach is threefold. First, we define a taxonomy of motion visualizations organized by the method (animation, interaction, or static presentation) used to depict both the spatial and temporal dimensions of the data. Second, we design and implement a set of eight example visualizations suggested by the taxonomy and evaluate their utility in a quantitative user study. Third, together with biomechanics collaborators, we conduct a qualitative evaluation of the eight example visualizations applied to a current study of human spinal kinematics. Results suggest that visualizations in this style that use interactive control for the time dimension of the data are preferable to others. Within this category, quantitative results support the utility of both animated and interactive depictions for space; however, qualitative feedback suggests that animated depictions for space should be avoided in biomechanics applications. © 2012 Wiley Periodicals, Inc.
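The taxonomy described in this abstract crosses a depiction method (animation, interaction, or static presentation) for the spatial dimension with one for the temporal dimension. A minimal sketch of that grid, with hypothetical names (`Depiction`, `taxonomy`) not taken from the paper; the full cross product has nine cells, from which the authors implement eight examples:

```python
from enum import Enum
from itertools import product

class Depiction(Enum):
    ANIMATION = "animation"
    INTERACTION = "interaction"
    STATIC = "static"

# Each cell of the taxonomy pairs a depiction method for space
# with a depiction method for time.
taxonomy = [{"space": s, "time": t} for s, t in product(Depiction, Depiction)]

for cell in taxonomy:
    print(f"space={cell['space'].value:11s} time={cell['time'].value}")
```

Enumerating the grid this way makes it easy to see which design points a study covers and which remain unexplored.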
TL;DR: An interactive exploratory visualization tool is designed through an iterative process in collaboration with both domain scientists and a traditionally-trained graphic designer; its effectiveness is demonstrated via a validation study with synthetic data and feedback from expert musculoskeletal biomechanics researchers.
Abstract: In biomechanics studies, researchers collect, via experiments or simulations, datasets with hundreds or thousands of trials, each describing the same type of motion (e.g., a neck flexion-extension exercise) but under different conditions (e.g., different patients, different disease states, pre- and post-treatment). Analyzing similarities and differences across all of the trials in these collections is a major challenge. Visualizing a single trial at a time does not work, and the typical alternative of juxtaposing multiple trials in a single visual display leads to complex, difficult-to-interpret visualizations. We address this problem via a new strategy that organizes the analysis around motion trends rather than trials. This new strategy matches the cognitive approach that scientists would like to take when analyzing motion collections. We introduce several technical innovations making trend-centric motion visualization possible. First, an algorithm detects a motion collection's trends via time-dependent clustering. Second, a 2D graphical technique visualizes how trials leave and join trends. Third, a 3D graphical technique, using a median 3D motion plus a visual variance indicator, visualizes the biomechanics of the set of trials within each trend. These innovations are combined to create an interactive exploratory visualization tool, which we designed through an iterative process in collaboration with both domain scientists and a traditionally-trained graphic designer. We report on insights generated during this design process and demonstrate the tool's effectiveness via a validation study with synthetic data and feedback from expert musculoskeletal biomechanics researchers who used the tool to analyze the effects of disc degeneration on human spinal kinematics.
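The trend-detection step described above can be pictured as clustering the trials at each timestep and then tracking which trials leave or join clusters between steps. The sketch below is illustrative, not the paper's actual algorithm: it assumes trials are scalar signals sampled on a common time grid, and the gap-based clustering and `threshold` parameter are hypothetical simplifications.

```python
import numpy as np

def detect_trends(trials, threshold=1.0):
    """Cluster trials at each timestep by 1-D value proximity.

    trials: array of shape (n_trials, n_timesteps) holding one scalar
    signal per trial (e.g., a joint angle over a motion).
    Returns an (n_trials, n_timesteps) array of cluster labels.
    """
    n_trials, n_steps = trials.shape
    labels = np.zeros((n_trials, n_steps), dtype=int)
    for t in range(n_steps):
        order = np.argsort(trials[:, t])
        values = trials[order, t]
        # Start a new cluster wherever the gap between sorted
        # neighbours exceeds the threshold.
        cluster_ids = np.concatenate([[0], np.cumsum(np.diff(values) > threshold)])
        labels[order, t] = cluster_ids
    return labels

def membership_changes(labels):
    """Timesteps at which any trial switches cluster, i.e., where the
    2D view would show trials leaving or joining a trend."""
    return [t for t in range(1, labels.shape[1])
            if not np.array_equal(labels[:, t], labels[:, t - 1])]
```

For example, three trials where one outlier converges toward the others produces two trends that merge; `membership_changes` flags the timestep of the merge, which is exactly the kind of event the paper's 2D technique visualizes.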
01 Jan 2011
TL;DR: A case study incorporating the process of ideation of an experienced graphic designer into the workflow of a team of programmers to improve scientific visualization methods for describing complex motion data in studies of human biomechanics.
Abstract: We present a case study incorporating the process of ideation of an experienced graphic designer into the workflow of a team of programmers to improve scientific visualization methods. Our work highlights the current opportunities and reports on the process we have adopted for beneficial collaboration between designers, computer scientists, and other collaborators. The specific design problem we address is creating illustrative visualization rendering algorithms for describing complex motion data, such as those analyzed in studies of human biomechanics.
04 Mar 2012
TL;DR: In designing virtual environments for presenting scientific motion data, recent research in other contexts suggests that although animated displays are effective for presenting known trends, static displays are more effective for data analysis.
Abstract: Studies of motion are fundamental to science. For centuries, pictures of motion (e.g., the revolutionary photographs by Marey and Muybridge of galloping horses and other animals, da Vinci's detailed drawings of hydrodynamics) have factored importantly in making scientific discoveries possible. Today, there is perhaps no tool more powerful than interactive virtual reality (VR) for conveying complex space-time data to scientists, doctors, and others; however, relatively little is known about how to design virtual environments in order to best facilitate these analyses.
TL;DR: An assessment of the state and historic development of evaluation practices as reported in papers published at the IEEE Visualization conference found that evaluations assessing resulting images and algorithm performance are the most prevalent, and that studies reporting requirements analyses and domain-specific work practices are generally reported too informally.
Abstract: We present an assessment of the state and historic development of evaluation practices as reported in papers published at the IEEE Visualization conference. Our goal is to reflect on a meta-level about evaluation in our community through a systematic understanding of the characteristics and goals of presented evaluations. For this purpose we conducted a systematic review of ten years of evaluations in the published papers, using and extending a coding scheme previously established by Lam et al. The results of our review include an overview of the most common evaluation goals in the community, how they evolved over time, and how they contrast or align with those of the IEEE Information Visualization conference. In particular, we found that evaluations specific to assessing resulting images and algorithm performance are the most prevalent (consistently 80-90% of all papers since 1997). However, especially over the last six years there has been a steady increase in evaluation methods that include participants, either by evaluating their performance and subjective feedback or by evaluating their work practices and their improved analysis and reasoning capabilities using visual tools. Up to 2010, this trend in the IEEE Visualization conference was much more pronounced than in the IEEE Information Visualization conference, which only showed an increasing percentage of evaluation through user performance and experience testing. Since 2011, however, papers in the IEEE Information Visualization conference have also shown an increase in evaluations of work practices and of analysis and reasoning using visual tools. Further, we found that studies reporting requirements analyses and domain-specific work practices are generally reported too informally, which hinders cross-comparison and lowers external validity.
TL;DR: Four considerations that abstract comparison are presented; they identify issues and categorize solutions in a domain-independent manner and provide a process for developers to consider support for comparison in the design of visualization tools.
Abstract: Supporting comparison is a common and diverse challenge in visualization. Such support is difficult to design because solutions must address both the specifics of their scenario and the general issues of comparison. This paper aids designers by providing a strategy for considering those general issues. It presents four considerations that abstract comparison. These considerations identify issues and categorize solutions in a domain-independent manner. The first considers how the common elements of comparison—a target set of items that are related and an action the user wants to perform on that relationship—are present in an analysis problem. The second considers why these elements lead to challenges because of their scale, in number of items, complexity of items, or complexity of relationship. The third considers what strategies address the identified scaling challenges, grouping solutions into three broad categories. The fourth considers which visual designs map to these strategies to provide solutions for a comparison analysis problem. In sequence, these considerations provide a process for developers to consider support for comparison in the design of visualization tools. Case studies show how these considerations can help in the design and evaluation of visualization solutions for comparison problems.
TL;DR: The Five Design Sheet (FdS) methodology enables users to plan information visualization interfaces through lo-fidelity sketching; an evaluation of the FdS using the System Usability Scale is also presented.
Abstract: Sketching designs has been shown to be a useful way of planning and considering alternative solutions. The use of lo-fidelity prototyping, especially paper-based sketching, can save time and money and converge on better solutions more quickly. However, this design process is often viewed as too informal. Consequently, users do not know how to manage their thoughts and ideas (to first think divergently, and then finally converge on a suitable solution). We present the Five Design Sheet (FdS) methodology. The methodology enables users to create information visualization interfaces through lo-fidelity methods. Users sketch and plan their ideas, helping them express different possibilities and think through these ideas to consider their potential effectiveness as solutions to the task (sheet 1); they create three principal designs (sheets 2, 3, and 4); and they converge on a final realization design that can then be implemented (sheet 5). In this article, we present (i) a review of the use of sketching as a planning method for visualization and the benefits of sketching, (ii) a detailed description of the Five Design Sheet (FdS) methodology, and (iii) an evaluation of the FdS using the System Usability Scale, along with a case study of its use in industry and experience of its use in teaching.
TL;DR: Empirical studies on the effect of different perceptual cues (fog, pseudo-chromadepth, kinetic depth, and depicting edges), both individually and in combination, on the depth perception of cerebral vascular volumes are presented and compared to the cue of stereopsis.
Abstract: Cerebral vascular images obtained through angiography are used by neurosurgeons for diagnosis, surgical planning, and intraoperative guidance. The intricate branching of the vessels and furcations, however, make the task of understanding the spatial three-dimensional layout of these images challenging. In this paper, we present empirical studies on the effect of different perceptual cues (fog, pseudo-chromadepth, kinetic depth, and depicting edges) both individually and in combination on the depth perception of cerebral vascular volumes and compare these to the cue of stereopsis. Two experiments with novices and one experiment with experts were performed. The results with novices showed that the pseudo-chromadepth and fog cues were stronger cues than that of stereopsis. Furthermore, the addition of the stereopsis cue to the other cues did not improve relative depth perception in cerebral vascular volumes. In contrast to novices, the experts also performed well with the edge cue. In terms of both novice and expert subjects, pseudo-chromadepth and fog allow for the best relative depth perception. By using such cues to improve depth perception of cerebral vasculature, we may improve diagnosis, surgical planning, and intraoperative guidance.
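Pseudo-chromadepth, one of the strongest cues in the study above, maps depth to a red-to-blue color ramp, exploiting the tendency to perceive red as closer than blue. A minimal sketch of the idea; the function name and the linear red-blue ramp are illustrative assumptions, not the exact mapping used in the paper:

```python
def pseudo_chromadepth(depth, d_near, d_far):
    """Map a depth value to an RGB color: near -> red, far -> blue.

    Linear interpolation between pure red and pure blue gives a
    monocular depth cue, since red tends to be perceived as nearer.
    """
    t = (depth - d_near) / (d_far - d_near)
    t = min(max(t, 0.0), 1.0)  # clamp to the [0, 1] depth range
    return (1.0 - t, 0.0, t)   # (R, G, B)

# Example: shade three vessel sample points by camera-space depth.
for d in (0.0, 5.0, 10.0):
    print(d, pseudo_chromadepth(d, d_near=0.0, d_far=10.0))
```

In a renderer this mapping would typically be applied per fragment to the vessel surface, replacing or modulating the base color.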
10 Nov 2014
TL;DR: The evaluation metrics, components, and techniques that have been utilized in the past decade of VizSec research literature are surveyed and categorized to help establish an agenda for advancing the state-of-the-art in evaluating cyber security visualizations.
Abstract: The Visualization for Cyber Security research community (VizSec) addresses longstanding challenges in cyber security by adapting and evaluating information visualization techniques with application to the cyber security domain. This research effort has created many tools and techniques that could be applied to improve cyber security, yet the community has not yet established unified standards for evaluating these approaches to predict their operational validity. In this paper, we survey and categorize the evaluation metrics, components, and techniques that have been utilized in the past decade of VizSec research literature. We also discuss existing methodological gaps in evaluating visualization in cyber security, and suggest potential avenues for future research in order to help establish an agenda for advancing the state-of-the-art in evaluating cyber security visualizations.