Bio: Fedor Korsakov is an academic researcher from the University of Minnesota. The author has contributed to research in topics: Virtual reality & User interface. The author has an h-index of 4, co-authored 6 publications receiving 44 citations.
TL;DR: In this paper, the authors present a taxonomy of motion visualizations organized by the method (animation, interaction, or static presentation) used to depict both the spatial and temporal dimensions of the data.
Abstract: We present a study of interactive virtual reality visualizations of scientific motions as found in biomechanics experiments. Our approach is threefold. First, we define a taxonomy of motion visualizations organized by the method (animation, interaction, or static presentation) used to depict both the spatial and temporal dimensions of the data. Second, we design and implement a set of eight example visualizations suggested by the taxonomy and evaluate their utility in a quantitative user study. Third, together with biomechanics collaborators, we conduct a qualitative evaluation of the eight example visualizations applied to a current study of human spinal kinematics. Results suggest that visualizations in this style that use interactive control for the time dimension of the data are preferable to others. Within this category, quantitative results support the utility of both animated and interactive depictions for space; however, qualitative feedback suggests that animated depictions for space should be avoided in biomechanics applications. © 2012 Wiley Periodicals, Inc.
29 Nov 2010
TL;DR: The design, implementation, and lessons learned from developing a multi-surface VR visualization environment are presented; the environment uses low-cost VR components and demonstrates how these can be combined in a multiple-display configuration with low-cost multi-touch hardware.
Abstract: This paper presents the design, implementation, and lessons learned from developing a multi-surface VR visualization environment. The environment combines a head-tracked vertical VR display with a multi-touch table display. An example user interface technique called Shadow Grab is presented to demonstrate the potential of this design. Extending recent efforts to make VR more accessible to a broad audience, the work makes use of low-cost VR components and demonstrates how these can be combined in a multiple-display configuration with low-cost multi-touch hardware, drawing upon knowledge from the rapidly growing low-cost/do-it-yourself multi-touch community. Details needed to implement the interactive environment are provided along with discussion of the limitations of the current design and the potential of future design variants.
TL;DR: An interactive exploratory visualization tool was designed through an iterative process in collaboration with both domain scientists and a traditionally trained graphic designer; its effectiveness is demonstrated via a validation study with synthetic data and feedback from expert musculoskeletal biomechanics researchers.
Abstract: In biomechanics studies, researchers collect, via experiments or simulations, datasets with hundreds or thousands of trials, each describing the same type of motion (e.g., a neck flexion-extension exercise) but under different conditions (e.g., different patients, different disease states, pre- and post-treatment). Analyzing similarities and differences across all of the trials in these collections is a major challenge. Visualizing a single trial at a time does not work, and the typical alternative of juxtaposing multiple trials in a single visual display leads to complex, difficult-to-interpret visualizations. We address this problem via a new strategy that organizes the analysis around motion trends rather than trials. This new strategy matches the cognitive approach that scientists would like to take when analyzing motion collections. We introduce several technical innovations making trend-centric motion visualization possible. First, an algorithm detects a motion collection's trends via time-dependent clustering. Second, a 2D graphical technique visualizes how trials leave and join trends. Third, a 3D graphical technique, using a median 3D motion plus a visual variance indicator, visualizes the biomechanics of the set of trials within each trend. These innovations are combined to create an interactive exploratory visualization tool, which we designed through an iterative process in collaboration with both domain scientists and a traditionally-trained graphic designer. We report on insights generated during this design process and demonstrate the tool's effectiveness via a validation study with synthetic data and feedback from expert musculoskeletal biomechanics researchers who used the tool to analyze the effects of disc degeneration on human spinal kinematics.
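The abstract describes detecting a motion collection's trends via time-dependent clustering but gives no implementation details. As an illustrative sketch only (the function name, the use of 1-D k-means, and the ordering heuristic are all assumptions, not the authors' algorithm), trials can be clustered independently at each timestep so that label changes over time mark trials leaving and joining trends:

```python
import numpy as np

def detect_trends(trials, n_trends=2, n_iter=20):
    """Cluster trials independently at each timestep (1-D k-means) and
    return a (n_trials, n_timesteps) integer array of trend labels.
    A trial joins/leaves a trend wherever its label changes over time.
    Clusters are relabeled by center value so labels stay comparable
    across timesteps."""
    n_trials, n_steps = trials.shape
    labels = np.zeros((n_trials, n_steps), dtype=int)
    for t in range(n_steps):
        x = trials[:, t]
        # deterministic init: spread centers over the value range
        centers = np.quantile(x, np.linspace(0.0, 1.0, n_trends))
        for _ in range(n_iter):
            assign = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            for k in range(n_trends):
                if np.any(assign == k):
                    centers[k] = x[assign == k].mean()
        inv = np.argsort(np.argsort(centers))  # rank of each cluster center
        labels[:, t] = inv[assign]
    return labels
```

A real implementation would also need the paper's 2D trend-transition view and the 3D median-motion rendering; this only covers the per-timestep labeling step.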
TL;DR: An interdisciplinary team with expertise in technology, design, meditation, and the psychology of pain collaborated to iteratively develop and evaluate several prototype systems, demonstrating the degree to which low-cost VR environments can now create rich virtual experiences involving motion sensing, physiological inputs, stereoscopic imagery, sound, and haptic feedback.
Abstract: Using widely accessible VR technologies, researchers have implemented a series of multimodal spatial interfaces and virtual environments. The results demonstrate the degree to which we can now use low-cost (for example, mobile-phone based) VR environments to create rich virtual experiences involving motion sensing, physiological inputs, stereoscopic imagery, sound, and haptic feedback. Adapting spatial interfaces to these new platforms can open up exciting application areas for VR. In this case, the application area was in-home VR therapy for patients suffering from persistent pain (for example, arthritis and cancer pain). For such therapy to be successful, a rich spatial interface and rich visual aesthetic are particularly important. So, an interdisciplinary team with expertise in technology, design, meditation, and the psychology of pain collaborated to iteratively develop and evaluate several prototype systems. The video at http://youtu.be/mMPE7itReds demonstrates how the sine wave fitting responds to walking motions, for a walking-in-place application.
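The linked video shows a sine wave being fit to walking motions for a walking-in-place interface; the abstract does not specify how the fit is computed. A minimal sketch, assuming the fit is a linear least-squares fit of sine/cosine terms over a grid of candidate frequencies (the function name, the grid-search strategy, and the signal being a vertical head-bob trace are all assumptions):

```python
import numpy as np

def fit_sine(t, y, freqs):
    """For each candidate frequency f, fit y ~ a*sin(2*pi*f*t) +
    b*cos(2*pi*f*t) + c by linear least squares; return the best
    (frequency, amplitude, offset) by residual error."""
    best = None
    for f in freqs:
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = np.sum((y - A @ coef) ** 2)
        if best is None or err < best[0]:
            amp = np.hypot(coef[0], coef[1])  # combine sin/cos into amplitude
            best = (err, f, amp, coef[2])
    _, f, amp, off = best
    return f, amp, off
```

In a walking-in-place system the recovered frequency and amplitude could then drive virtual locomotion speed, with the offset tracking the user's resting head height.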
04 Mar 2012
TL;DR: For the design of virtual environments that present scientific motion data, recent research in other contexts suggests that although animated displays are effective for presenting known trends, static displays are more effective for data analysis.
Abstract: Studies of motion are fundamental to science. For centuries, pictures of motion (e.g., the revolutionary photographs by Marey and Muybridge of galloping horses and other animals, da Vinci's detailed drawings of hydrodynamics) have factored importantly in making scientific discoveries possible. Today, there is perhaps no tool more powerful than interactive virtual reality (VR) for conveying complex space-time data to scientists, doctors, and others; however, relatively little is known about how to design virtual environments in order to best facilitate these analyses.
TL;DR: Four considerations that abstract comparison are presented; they identify issues and categorize solutions in a domain-independent manner and give developers a process for considering support for comparison in the design of visualization tools.
Abstract: Supporting comparison is a common and diverse challenge in visualization. Such support is difficult to design because solutions must address both the specifics of their scenario as well as the general issues of comparison. This paper aids designers by providing a strategy for considering those general issues. It presents four considerations that abstract comparison. These considerations identify issues and categorize solutions in a domain-independent manner. The first considers how the common elements of comparison—a target set of items that are related and an action the user wants to perform on that relationship—are present in an analysis problem. The second considers why these elements lead to challenges because of their scale, in number of items, complexity of items, or complexity of relationship. The third considers what strategies address the identified scaling challenges, grouping solutions into three broad categories. The fourth considers which visual designs map to these strategies to provide solutions for a comparison analysis problem. In sequence, these considerations provide a process for developers to consider support for comparison in the design of visualization tools. Case studies show how these considerations can help in the design and evaluation of visualization solutions for comparison problems.
16 Oct 2011
TL;DR: A new system that efficiently combines direct multitouch interaction with co-located 3D stereoscopic visualization, and a rich and unified workspace where users benefit simultaneously from the advantages of both direct and indirect interaction, and from 2D and 3D visualizations is proposed.
Abstract: We propose a new system that efficiently combines direct multitouch interaction with co-located 3D stereoscopic visualization. In our approach, users benefit from well-known 2D metaphors and widgets displayed on a monoscopic touchscreen, while visualizing occlusion-free 3D objects floating above the surface at an optically correct distance. Technically, a horizontal semi-transparent mirror is used to reflect 3D images produced by a stereoscopic screen, while the user's hand as well as a multitouch screen located below this mirror remain visible. By registering the 3D virtual space and the physical space, we produce a rich and unified workspace where users benefit simultaneously from the advantages of both direct and indirect interaction, and from 2D and 3D visualizations. A pilot usability study shows that this combination of technology provides a good user experience.
TL;DR: Interactive Slice World-in-Miniature is presented, a framework for navigating and interrogating volumetric data sets using an interface enabled by a virtual reality environment made of two display surfaces: an interactive multitouch table, and a stereoscopic display wall.
Abstract: We present Interactive Slice World-in-Miniature (WIM), a framework for navigating and interrogating volumetric data sets using an interface enabled by a virtual reality environment made of two display surfaces: an interactive multitouch table, and a stereoscopic display wall. The framework addresses two current challenges in immersive visualization: 1) providing an appropriate overview+detail style of visualization while navigating through volume data, and 2) supporting interactive querying and data exploration, i.e., interrogating volume data. The approach extends the WIM metaphor, simultaneously displaying a large-scale detailed data visualization and an interactive miniature. Leveraging the table+wall hardware, horizontal slices are projected (like a shadow) down onto the table surface, providing a useful 2D data overview to complement the 3D views as well as a data context for interpreting 2D multitouch gestures made on the table. In addition to enabling effective navigation through complex geometries, extensions to the core Slice WIM technique support interacting with a set of multiple slices that persist on the table even as the user navigates around a scene and annotating and measuring data via points, paths, and volumes specified using interactive slices. Applications of the interface to two volume data sets are presented, and design decisions, limitations, and user feedback are discussed.
TL;DR: Empirical studies on the effect of different perceptual cues (fog, pseudo-chromadepth, kinetic depth, and depicting edges) both individually and in combination on the depth perception of cerebral vascular volumes and compare these to the cue of stereopsis are presented.
Abstract: Cerebral vascular images obtained through angiography are used by neurosurgeons for diagnosis, surgical planning, and intraoperative guidance. The intricate branching of the vessels and furcations, however, make the task of understanding the spatial three-dimensional layout of these images challenging. In this paper, we present empirical studies on the effect of different perceptual cues (fog, pseudo-chromadepth, kinetic depth, and depicting edges) both individually and in combination on the depth perception of cerebral vascular volumes and compare these to the cue of stereopsis. Two experiments with novices and one experiment with experts were performed. The results with novices showed that the pseudo-chromadepth and fog cues were stronger cues than that of stereopsis. Furthermore, the addition of the stereopsis cue to the other cues did not improve relative depth perception in cerebral vascular volumes. In contrast to novices, the experts also performed well with the edge cue. In terms of both novice and expert subjects, pseudo-chromadepth and fog allow for the best relative depth perception. By using such cues to improve depth perception of cerebral vasculature, we may improve diagnosis, surgical planning, and intraoperative guidance.
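Pseudo-chromadepth, which the study above found to be among the strongest depth cues for vascular volumes, encodes depth as a warm-to-cool color ramp. A minimal sketch of the idea (the function name and the simple linear red-to-blue ramp are assumptions; published pseudo-chromadepth implementations use more carefully tuned ramps):

```python
def pseudo_chromadepth(depth, near, far):
    """Map a depth value to an RGB color on a red-to-blue ramp:
    near points render warm (red), far points cool (blue), exploiting
    chromostereopsis. Returns (r, g, b) floats in [0, 1]."""
    t = (depth - near) / (far - near)
    t = min(max(t, 0.0), 1.0)  # clamp depths outside the [near, far] range
    return (1.0 - t, 0.0, t)
```

In a volume renderer this mapping would typically run per fragment in a shader, with `near`/`far` set to the bounding depths of the vessel tree.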
TL;DR: The proposed method is invariant to temporal misalignment, characterizes sign language within a 3-D spatiotemporal framework, and improves recognition compared with state-of-the-art 3-D action recognition methods.
Abstract: Recognizing human gestures in sign language is a complex and challenging task. Human sign language gestures are a combination of independent hand and finger articulations, which are sometimes performed in coordination with the head, face, and body. 3-D motion capture of sign language involves recording 3-D sign videos that are often affected by interobject or self-occlusions, lighting, and background. This paper proposes characterization of sign language gestures articulated at different body parts as 3-D motionlets, which describe the signs with a subset of joint motions. A two-phase fast algorithm identifies 3-D query signs from an adaptively ranked database of 3-D sign language. The Phase-I process clusters all human joints into motion joints (MJ) and nonmotion joints (NMJ). The relation between MJ and NMJ is analyzed to categorically segment the database into four motionlet classes. The Phase-II process investigates the relation within the motion joints to represent shape information of a sign as 3-D motionlets. The 4-class sign database features three adaptive motionlet kernels. A simple kernel matching algorithm is used to rank the database according to the highest-ranked query sign. The proposed method is invariant to temporal misalignment and can characterize sign language within a 3-D spatiotemporal framework. Five 500-word Indian sign language data sets were used to evaluate the proposed model. The experimental results reveal that the proposed method improved recognition compared with state-of-the-art 3-D action recognition methods.
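The abstract's Phase-I step clusters joints into motion joints (MJ) and nonmotion joints (NMJ) but does not say how. As an illustrative sketch only (the function name, the displacement-energy criterion, and the threshold ratio are assumptions, not the paper's method), one simple way to make that split is to threshold each joint's total frame-to-frame displacement:

```python
import numpy as np

def split_motion_joints(joint_pos, thresh_ratio=0.2):
    """Phase-I style split of skeleton joints. joint_pos has shape
    (n_frames, n_joints, 3). A joint whose accumulated frame-to-frame
    displacement exceeds thresh_ratio of the most active joint's
    displacement is a motion joint (MJ); the rest are nonmotion joints
    (NMJ). Returns (mj_indices, nmj_indices)."""
    # per-joint displacement summed over all frame transitions
    disp = np.linalg.norm(np.diff(joint_pos, axis=0), axis=2).sum(axis=0)
    cutoff = thresh_ratio * disp.max()
    mj = np.flatnonzero(disp >= cutoff)
    nmj = np.flatnonzero(disp < cutoff)
    return mj, nmj
```

Phase-II would then analyze relations among the MJ set to build the motionlet representation; that step is not sketched here.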