Showing papers by "Kenneth I. Joy published in 2007"



Journal ArticleDOI
TL;DR: In this article, correlation fields created between pairs of variables are used with the cumulative distribution functions of the variables expressed in a user's query to visually reveal statistically important interactions among any three variables and to allow trends between these variables to be readily identified.
Abstract: Our ability to generate ever-larger, increasingly complex data has established the need for scalable methods that identify, and provide insight into, important variable trends and interactions. Query-driven methods are among the small subset of techniques that are able to address both large and highly complex datasets. This paper presents a new method that increases the utility of query-driven techniques by visually conveying statistical information about the trends that exist between variables in a query. In this method, correlation fields, created between pairs of variables, are used with the cumulative distribution functions of variables expressed in a user's query. This integrated use of cumulative distribution functions and correlation fields visually reveals, with respect to the solution space of the query, statistically important interactions between any three variables, and allows for trends between these variables to be readily identified. We demonstrate our method by analyzing interactions between variables in two flame-front simulations.
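The two building blocks of this method lend themselves to a short sketch. The Python code below is an illustration under assumed data, not the authors' implementation: it computes an empirical CDF for a queried variable and a windowed Pearson correlation field between a variable pair. The variable names, grid size, window width, and query threshold are all assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def empirical_cdf(x):
    """Map each sample of x to its empirical CDF value in (0, 1]."""
    order = np.argsort(x, axis=None)
    cdf = np.empty(x.size)
    cdf[order] = (np.arange(x.size) + 1) / x.size
    return cdf.reshape(x.shape)

def correlation_field(a, b, size=9):
    """Windowed Pearson correlation between fields a and b."""
    ma, mb = uniform_filter(a, size), uniform_filter(b, size)
    cov = uniform_filter(a * b, size) - ma * mb
    va = uniform_filter(a * a, size) - ma * ma
    vb = uniform_filter(b * b, size) - mb * mb
    return cov / np.sqrt(np.maximum(va * vb, 1e-12))

# Synthetic stand-ins for two simulation variables on a 2D grid.
rng = np.random.default_rng(0)
temperature = rng.normal(size=(256, 256))
fuel = 0.7 * temperature + 0.3 * rng.normal(size=(256, 256))

corr = correlation_field(temperature, fuel)
# Query: restrict attention to the hottest decile (CDF > 0.9), then
# inspect how the variable pair interacts inside that solution space.
mask = empirical_cdf(temperature) > 0.9
print("mean correlation inside query region:", corr[mask].mean())
```

Restricting the correlation field to the query's solution space, as the mask does here, is the integrated use of CDFs and correlation fields that the abstract describes.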

64 citations


Proceedings ArticleDOI
30 Apr 2007
TL;DR: A method for generating procedural volumetric fire in real time that supports efficient collision detection, which, when combined with a sufficiently intelligent particle simulation, enables real-time bi-directional interaction between the fire and its environment.
Abstract: We present a method for generating procedural volumetric fire in real time. By combining curve-based volumetric free-form deformation, hardware-accelerated volumetric rendering and Improved Perlin Noise or M-Noise we are able to render a vibrant and uniquely animated volumetric fire that supports bi-directional environmental macro-level interactivity. Our system is easily customizable by content artists. The fire is animated both on the macro and micro levels. Macro changes are controlled either by a prescripted sequence of movements, or by a realistic particle simulation that takes into account movement, wind, high-energy particle dispersion and thermal buoyancy. Micro fire effects such as individual flame shape, location, and flicker are generated in a pixel shader using three- to four-dimensional Improved Perlin Noise or M-Noise (depending on hardware limitations and performance requirements). Our method supports efficient collision detection, which, when combined with a sufficiently intelligent particle simulation, enables real-time bi-directional interaction between the fire and its environment. The result is a three-dimensional procedural fire that is easily designed and animated by content artists, supports dynamic interaction, and can be rendered in real time.
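As a rough illustration of the micro-level animation, the sketch below uses a simple 3D value noise (a CPU stand-in for the Improved Perlin Noise or M-Noise that the paper evaluates in a pixel shader) to modulate a basic flame density profile over time. The profile shape, noise scales, and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
LATTICE = rng.random((32, 32, 32))  # random values at integer lattice points

def fade(u):
    return u * u * (3.0 - 2.0 * u)  # smoothstep interpolation weights

def value_noise(x, y, t):
    """Smoothly interpolated 3D value noise at (x, y, t)."""
    xi, yi, ti = np.floor(x).astype(int), np.floor(y).astype(int), np.floor(t).astype(int)
    u, v, w = fade(x - xi), fade(y - yi), fade(t - ti)
    def lat(i, j, k):
        return LATTICE[(xi + i) % 32, (yi + j) % 32, (ti + k) % 32]
    # Trilinear blend of the eight surrounding lattice values.
    x00 = lat(0, 0, 0) * (1 - u) + lat(1, 0, 0) * u
    x10 = lat(0, 1, 0) * (1 - u) + lat(1, 1, 0) * u
    x01 = lat(0, 0, 1) * (1 - u) + lat(1, 0, 1) * u
    x11 = lat(0, 1, 1) * (1 - u) + lat(1, 1, 1) * u
    return (x00 * (1 - v) + x10 * v) * (1 - w) + (x01 * (1 - v) + x11 * v) * w

def flame_density(x, y, t):
    """Teardrop-like base profile modulated by noise scrolling upward."""
    base = np.clip(1.0 - np.abs(x - 0.5) * 4.0, 0.0, 1.0) * (1.0 - y)
    flicker = value_noise(x * 8.0, y * 8.0 - t * 4.0, t)
    return np.clip(base * (0.6 + 0.8 * flicker), 0.0, 1.0)

# One 64x64 frame of the animated density field (y grows downward here).
ys, xs = np.mgrid[0:64, 0:64] / 64.0
frame = flame_density(xs, ys, t=0.25)
print("density range:", float(frame.min()), float(frame.max()))
```

Sampling this density along view rays, as a hardware volume renderer would, and advancing t per frame yields the flicker effect; the paper's macro-level deformation and particle simulation sit on top of this.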

29 citations


Proceedings ArticleDOI
29 Oct 2007
TL;DR: A cross parameterization scheme that is provably robust in the sense that it can map M to M' without constraints on their relative genus or on the density of the triangulation with respect to the number of tunnels is achieved.
Abstract: We consider the problem of generating a map between two triangulated meshes, M and M', with arbitrary and possibly differing genus. This problem has rarely been tackled in its generality. Early schemes considered only topological spheres. Recent algorithms allow inputs with an arbitrary number of tunnels but require M and M' to have equal genus, mapping tunnel to tunnel. Other schemes which allow more general inputs are not guaranteed to work and the authors do not provide a characterization of the input meshes that can be processed successfully. Moreover, the techniques have difficulty dealing with coarse meshes with many tunnels. In this paper we present the first robust approach to build a map between two meshes of arbitrary unequal genus. We also provide a simplified method for setting the initial alignment between M and M', reducing reliance on landmarks and allowing the user to select "landmark tunnels" in addition to the standard landmark vertices. After computing the map, we automatically derive a continuous deformation from M to M' using a variational implicit approach to describe the evolution of non-landmark tunnels. Overall, we achieve a cross-parameterization scheme that is provably robust in the sense that it can map M to M' without constraints on their relative genus or on the density of the triangulation with respect to the number of tunnels. To demonstrate the practical effectiveness of our scheme we provide a number of examples of inter-surface parameterizations between meshes of different genus and shape.
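To give a sense of the variational implicit approach mentioned for evolving non-landmark tunnels, the sketch below fits a radial-basis-function implicit surface to point constraints, in the spirit of Turk and O'Brien's variational implicit surfaces. The kernel choice, constraint setup, and data are assumptions for illustration only; this is the core fitting step, not the paper's pipeline.

```python
import numpy as np

def fit_rbf_implicit(centers, values):
    """Solve for RBF weights plus an affine term interpolating f(c_i) = v_i."""
    n = len(centers)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), centers])   # affine polynomial part
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = d ** 3                          # triharmonic kernel phi(r) = r^3
    A[:n, n:] = P
    A[n:, :n] = P.T
    sol = np.linalg.solve(A, np.concatenate([values, np.zeros(4)]))
    return sol[:n], sol[n:]

def eval_rbf(p, centers, w, affine):
    """Evaluate f(p) = sum_i w_i * phi(|p - c_i|) + affine(p)."""
    d = np.linalg.norm(p[None, :] - centers, axis=-1)
    return d ** 3 @ w + affine[0] + affine[1:] @ p

# Constraints: points on a unit sphere (f = 0) and outward offsets (f = 1).
rng = np.random.default_rng(2)
on = rng.normal(size=(40, 3))
on /= np.linalg.norm(on, axis=1, keepdims=True)
centers = np.vstack([on, 1.2 * on])
values = np.concatenate([np.zeros(40), np.ones(40)])
w, affine = fit_rbf_implicit(centers, values)
print("f(origin) =", eval_rbf(np.zeros(3), centers, w, affine))  # expect f < 0 inside
```

Interpolating the constraint values over time and extracting the zero level set of the resulting implicit function is one way such a fit can drive a continuous deformation.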

18 citations


Proceedings ArticleDOI
23 May 2007
TL;DR: Combining visual exploration with feature extraction queries, formulated as a set of function-space constraints, facilitates quantitative analysis and annotation in function fields.
Abstract: We present interactive techniques for identifying and extracting features in function fields. Function fields map points in n-dimensional Euclidean space to 1-dimensional scalar functions. Visual feature identification is accomplished by interactively rendering scalar distance fields, constructed by applying a function-space distance metric over the function field. Combining visual exploration with feature extraction queries, formulated as a set of function-space constraints, facilitates quantitative analysis and annotation. Numerous application domains give rise to function fields. We present results for two-dimensional hyperspectral images and a simulated time-varying, three-dimensional air quality dataset.
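The distance-field construction lends itself to a compact sketch: each grid point carries a 1D function, and a function-space metric against a query function yields a scalar distance field. In the Python illustration below, the synthetic "spectra", the choice of L2 as the metric, and the query threshold are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
H, W, BANDS = 64, 64, 100
wl = np.linspace(0.0, 1.0, BANDS)

# Synthetic hyperspectral-style field: a Gaussian bump whose center
# drifts spatially, standing in for per-pixel spectra.
bump_centers = 0.3 + 0.4 * rng.random((H, W))
field = np.exp(-((wl[None, None, :] - bump_centers[..., None]) / 0.05) ** 2)

# Query function: a reference spectrum with a bump at 0.5.
query = np.exp(-((wl - 0.5) / 0.05) ** 2)

# Scalar distance field: the L2 norm in function space at every grid point.
dist = np.sqrt(np.sum((field - query[None, None, :]) ** 2, axis=-1))

# A feature extraction query expressed as a constraint on the distance field.
feature_mask = dist < 0.5
print("pixels matching the query:", int(feature_mask.sum()))
```

Rendering `dist` directly supports the visual identification step; thresholding it, as in the last lines, is the query-based extraction step.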

9 citations


01 Jan 2007
TL;DR: In this survey, the basic concepts of interval arithmetic and some of its extensions are discussed, and successful applications of this theory in particular in computer science are reviewed.
Abstract: Interval arithmetic was introduced by Ramon Moore [Moo66] in the 1960s as an approach to bound rounding errors in mathematical computation. The theory of interval analysis emerged from the idea of computing both the exact solution and the error term as a single entity, i.e., the interval. Though a simple idea, it is a very powerful technique with numerous applications in mathematics, computer science, and engineering. In this survey, we discuss the basic concepts of interval arithmetic and some of its extensions, and review successful applications of this theory, particularly in computer science.
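The basic idea is easy to make concrete. The sketch below implements a minimal interval type with a few operators; outward (directed) rounding of the endpoints, which a rigorous interval library requires, is deliberately omitted.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product's range is bounded by the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __contains__(self, x):
        return self.lo <= x <= self.hi

# Enclose x = 1/3 and propagate the enclosure through f(x) = x*x - x.
x = Interval(0.333333, 0.333334)
y = x * x - x
print(y)                     # an interval guaranteed to contain f(1/3)
print((1/9 - 1/3) in y)      # True: the exact value lies inside
```

A production library would also handle the dependency problem: evaluating x*x - x with independent interval operands yields a wider enclosure than the true range of x² - x, and much of interval analysis is about tightening such bounds.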

7 citations


Journal ArticleDOI
01 Jul 2007
TL;DR: New uses of visualization frameworks are enabled through the introduction of Equivalence Class Functions (ECFs), a new class of derived quantities designed to greatly expand the end user's ability to explore and visualize data.
Abstract: The challenges of visualization at the extreme scale involve issues of scale, complexity, temporal exploration and uncertainty. The Visualization and Analytics Center for Enabling Technologies (VACET) focuses on leveraging scientific visualization and analytics software technology as an enabling technology for increased scientific discovery and insight. In this paper, we introduce new uses of visualization frameworks through the introduction of Equivalence Class Functions (ECFs). These functions give a new class of derived quantities designed to greatly expand the ability of the end user to explore and visualize data. ECFs are defined over equivalence classes (i.e., groupings) of elements from an original mesh, and produce summary values for the classes as output. ECFs can be used in the visualization process to directly analyze data, or can be used to synthesize new derived quantities on the original mesh. The design of ECFs enables a parallel implementation that allows the use of these techniques on massive data sets that require parallel processing.
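A minimal sketch of the ECF idea: group mesh elements into equivalence classes, compute one summary value per class, and optionally scatter the summaries back to the mesh as a new derived quantity. The class labels, the mean as the summary statistic, and the flat-array "mesh" below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_elements, n_classes = 10_000, 50
density = rng.random(n_elements)                        # per-element field value
class_id = rng.integers(0, n_classes, size=n_elements)  # equivalence class of each element

# ECF output: one summary value (here, the mean) per equivalence class.
sums = np.bincount(class_id, weights=density, minlength=n_classes)
counts = np.bincount(class_id, minlength=n_classes)
class_mean = sums / np.maximum(counts, 1)

# Synthesize a derived quantity back on the original mesh: each
# element's deviation from its class mean.
deviation = density - class_mean[class_id]
print("per-class means:", class_mean[:5])
print("max deviation:", deviation.max())
```

Because each class's summary is a reduction over its members, this pattern parallelizes naturally: partitions of the mesh can reduce locally and merge their partial sums and counts, which is the property the abstract credits for scaling to massive data.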

4 citations


Journal ArticleDOI
TL;DR: A method for applying wavelet-based downsampling filters to adaptively refined meshes, using a linear B-spline wavelet lifting scheme to derive narrow filter masks, defining rules for vertex dependencies in wavelet-based adaptive refinement, and resolving them in an unambiguous manner.
Abstract: For view-dependent visualization, adaptively refined volumetric meshes are used to adapt resolution to given error constraints. A mesh hierarchy based on the ∛2-subdivision scheme produces structured grids with the highest adaptivity. Downsampling filters reduce aliasing effects and lead to higher-quality data representation (in terms of lower approximation error) at coarser levels of resolution. We present a method for applying wavelet-based downsampling filters to adaptively refined meshes. We use a linear B-spline wavelet lifting scheme to derive narrow filter masks. Using these narrow masks, the wavelet filters are applicable to adaptively refined meshes without imposing any restrictions on the adaptivity of the meshes, such that all wavelet filtering operations can be performed without further subdivision steps. We define rules for vertex dependencies in wavelet-based adaptive refinement and resolve them in an unambiguous manner. We use the wavelet filters for view-dependent visualization in order to demonstrate the functionality and the benefits of our approach. When using wavelet filters, the approximation quality is higher at each resolution level. Thus, fewer polyhedra need to be traversed by a visualization method to meet certain error bounds/quality measures.
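The lifting formulation can be sketched in one dimension. The Python code below applies the standard linear B-spline predict/update lifting steps, here with periodic boundaries for simplicity; the paper's narrow masks for adaptive volumetric meshes and its vertex-dependency rules are beyond this illustration.

```python
import numpy as np

def lifting_forward(signal):
    """One analysis level: returns coarse (filtered) samples and details."""
    s = signal[0::2].astype(float)   # even samples
    d = signal[1::2].astype(float)   # odd samples
    # Predict: each odd sample from the average of its two even neighbors.
    d -= 0.5 * (s + np.roll(s, -1))
    # Update: lift the evens so the coarse level is a filtered downsampling.
    s += 0.25 * (d + np.roll(d, 1))
    return s, d

def lifting_inverse(s, d):
    """Undo the update, then the prediction, then re-interleave."""
    s = s - 0.25 * (d + np.roll(d, 1))
    d = d + 0.5 * (s + np.roll(s, -1))
    out = np.empty(s.size + d.size)
    out[0::2], out[1::2] = s, d
    return out

x = np.sin(np.linspace(0.0, 4.0 * np.pi, 64))
coarse, detail = lifting_forward(x)
print("perfect reconstruction:", np.allclose(lifting_inverse(coarse, detail), x))
```

The update step is what distinguishes wavelet-filtered downsampling from plain subsampling: the coarse samples are smoothed by their neighboring details, which is the source of the lower approximation error at coarse levels that the abstract reports.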

4 citations


Journal ArticleDOI
01 Jul 2007
TL;DR: This document is the first version of the VACET project management plan; it outlines the Center's achievements in the first six weeks of operation along with broad objectives for the next 12-24 months.
Abstract: The SciDAC2 Visualization and Analytics Center for Enabling Technologies (VACET) began operation on 10/1/2006. This document, dated 11/27/2006, is the first version of the VACET project management plan. It was requested by and delivered to ASCR/DOE. It outlines the Center's accomplishments in the first six weeks of operation along with broad objectives for the next 12-24 months.

3 citations


ReportDOI
01 Oct 2007
TL;DR: The focus of this article is on how one group of researchers is tackling the daunting task of enabling knowledge discovery through visualization and analytics on some of the world's largest and most complex datasets and on some of the world's largest computational platforms.
Abstract: The focus of this article is on how one group of researchers, the DOE SciDAC Visualization and Analytics Center for Enabling Technologies (VACET), is tackling the daunting task of enabling knowledge discovery through visualization and analytics on some of the world's largest and most complex datasets and on some of the world's largest computational platforms. As a Center for Enabling Technology, VACET's mission is the creation of usable, production-quality visualization and knowledge discovery software infrastructure that runs on large, parallel computer systems at DOE's Open Computing facilities and that provides solutions to challenging visual data exploration and knowledge discovery needs of modern science, particularly the DOE science community.

3 citations