
Showing papers in "Computer Graphics Forum in 2012"


Journal ArticleDOI
TL;DR: The major methods for data analysis, such as establishing confidence intervals, statistical testing and retrospective power analysis are reviewed, and it is concluded that the forced‐choice pairwise comparison method results in the smallest measurement variance and thus produces the most accurate results.
Abstract: To provide a convincing proof that a new method is better than the state of the art, computer graphics projects are often accompanied by user studies, in which a group of observers rank or rate results of several algorithms. Such user studies, known as subjective image quality assessment experiments, can be very time-consuming and do not guarantee to produce conclusive results. This paper is intended to help design efficient and rigorous quality assessment experiments and emphasise the key aspects of the results analysis. To promote good standards of data analysis, we review the major methods for data analysis, such as establishing confidence intervals, statistical testing and retrospective power analysis. Two methods of visualising ranking results together with meaningful information about statistical and practical significance are explored. Finally, we compare the four most prominent subjective quality assessment methods: single-stimulus, double-stimulus, forced-choice pairwise comparison and similarity judgements. We conclude that the forced-choice pairwise comparison method results in the smallest measurement variance and thus produces the most accurate results. This method is also the most time-efficient, assuming a moderate number of compared conditions. © 2012 Wiley Periodicals, Inc.
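
As a concrete illustration of how forced-choice pairwise comparison data can be turned into a quality scale, the sketch below applies standard Thurstone Case V scaling to a matrix of preference counts. This is a generic textbook procedure, not the paper's exact analysis pipeline, and the count matrix is invented.

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(counts):
    """Convert a pairwise preference count matrix into interval quality scores.

    counts[i, j] = number of observers who preferred condition i over condition j.
    Returns one zero-mean score per condition, under the usual Case V assumptions
    (equal variances, uncorrelated judgements).
    """
    counts = np.asarray(counts, dtype=float)
    n = counts + counts.T                          # trials per pair
    p = np.where(n > 0, counts / np.where(n > 0, n, 1), 0.5)
    p = np.clip(p, 0.01, 0.99)                     # avoid infinite z-scores
    z = norm.ppf(p)                                # proportion -> standard normal distance
    np.fill_diagonal(z, 0.0)
    return z.mean(axis=1)                          # row average = scale value

# Toy example: 3 conditions, 20 observers per pair (hypothetical numbers).
counts = np.array([[ 0, 15, 18],
                   [ 5,  0, 12],
                   [ 2,  8,  0]])
print(thurstone_case_v(counts))                    # higher = preferred more often
```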

233 citations


Journal ArticleDOI
TL;DR: A unified optimization framework for geometry processing based on shape constraints provides a systematic way of building new solvers for geometry processing and produces similar or better results than state‐of‐the‐art methods.
Abstract: We introduce a unified optimization framework for geometry processing based on shape constraints. These constraints preserve or prescribe the shape of subsets of the points of a geometric data set, such as polygons, one-ring cells, volume elements, or feature curves. Our method is based on two key concepts: a shape proximity function and shape projection operators. The proximity function encodes the distance of a desired least-squares fitted elementary target shape to the corresponding vertices of the 3D model. Projection operators are employed to minimize the proximity function by relocating vertices in a minimal way to match the imposed shape constraints. We demonstrate that this approach leads to a simple, robust, and efficient algorithm that allows implementing a variety of geometry processing applications, simply by combining suitable projection operators. We show examples for computing planar and circular meshes, shape space exploration, mesh quality improvement, shape-preserving deformation, and conformal parametrization. Our optimization framework provides a systematic way of building new solvers for geometry processing and produces similar or better results than state-of-the-art methods. © 2012 Wiley Periodicals, Inc.
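
To make the idea of a shape projection operator concrete, here is a minimal planarity projection in the spirit described above: it fits a least-squares plane to a polygon's vertices and relocates them onto it with minimal movement. This is an illustrative stand-in, not the authors' implementation; a solver of this type would iterate by combining and averaging several such projections.

```python
import numpy as np

def project_to_plane(P):
    """Project 3D points onto their least-squares best-fit plane.

    P: (n, 3) array of polygon vertices. Returns the minimally displaced,
    exactly planar copy -- one example of a 'shape projection operator'.
    """
    c = P.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(P - c)
    n = Vt[-1]
    return P - np.outer((P - c) @ n, n)            # remove the off-plane component

quad = np.array([[0, 0, 0.1], [1, 0, -0.1], [1, 1, 0.05], [0, 1, 0.0]], float)
print(project_to_plane(quad))                      # now lies exactly in one plane
```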

211 citations


Journal ArticleDOI
TL;DR: A new rendering algorithm is presented that is tailored to the unstructured yet dense data the authors capture and can achieve piecewise‐bicubic reconstruction using a triangulation of the captured viewpoints and subdivision rules applied to reconstruction weights.
Abstract: We present a system for interactively acquiring and rendering light fields using a hand-held commodity camera. The main challenge we address is assisting a user in achieving good coverage of the 4D domain despite the challenges of hand-held acquisition. We define coverage by bounding reprojection error between viewpoints, which accounts for all 4 dimensions of the light field. We use this criterion together with a recent Simultaneous Localization and Mapping technique to compute a coverage map on the space of viewpoints. We provide users with real-time feedback and direct them toward under-sampled parts of the light field. Our system is lightweight and has allowed us to capture hundreds of light fields. We further present a new rendering algorithm that is tailored to the unstructured yet dense data we capture. Our method can achieve piecewise-bicubic reconstruction using a triangulation of the captured viewpoints and subdivision rules applied to reconstruction weights. © 2012 Wiley Periodicals, Inc.

208 citations


Journal ArticleDOI
TL;DR: By analyzing the differential characteristics of the flow, it is revealed that MCF locally increases shape anisotropy, which justifies the use of curvature motion for skeleton computation, and leads to the generation of what is called “mean curvature skeletons”.
Abstract: Inspired by recent developments in contraction-based curve skeleton extraction, we formulate the skeletonization problem via mean curvature flow (MCF). While the classical application of MCF is surface fairing, we take advantage of its area-minimizing characteristic to drive the curvature flow towards the extreme so as to collapse the input mesh geometry and obtain a skeletal structure. By analyzing the differential characteristics of the flow, we reveal that MCF locally increases shape anisotropy. This justifies the use of curvature motion for skeleton computation, and leads to the generation of what we call “mean curvature skeletons”. To obtain a stable and efficient discretization, we regularize the surface mesh by performing local remeshing via edge splits and collapses. Simplifying mesh connectivity throughout the motion leads to more efficient computation and avoids numerical instability arising from degeneracies in the triangulation. In addition, the detection of collapsed geometry is facilitated by working with simplified mesh connectivity and monitoring potential non-manifold edge collapses. With topology simplified throughout the flow, minimal post-processing is required to convert the collapsed geometry to a curve. Formulating skeletonization via MCF allows us to incorporate external energy terms easily, resulting in a constrained flow. We define one such energy term using the Voronoi medial skeleton and obtain a medially centred curve skeleton. We call the intermediate results of our skeletonization motion meso-skeletons; these consist of a mixture of curves and surface sheets as appropriate to the local 3D geometry they capture. © 2012 Wiley Periodicals, Inc.

202 citations


Journal ArticleDOI
TL;DR: The state of the art in interactive global illumination (GI) computation, i.e., methods that generate an image of a virtual scene in less than 1s with an as exact as possible, or plausible, solution to the light transport, is reviewed.
Abstract: The interaction of light and matter in the world surrounding us is of striking complexity and beauty. Since the very beginning of computer graphics, adequate modelling of these processes and efficient computation is an intensively studied research topic and still not a solved problem. The inherent complexity stems from the underlying physical processes as well as the global nature of the interactions that let light travel within a scene. This paper reviews the state of the art in interactive global illumination (GI) computation, i.e., methods that generate an image of a virtual scene in less than 1 s with an as exact as possible, or plausible, solution to the light transport. Additionally, the theoretical background and attempts to classify the broad field of methods are described. The strengths and weaknesses of different approaches, when applied to the different visual phenomena, arising from light interaction are compared and discussed. Finally, the paper concludes by highlighting design patterns for interactive GI and a list of open problems. © 2012 Wiley Periodicals, Inc.

189 citations


Journal ArticleDOI
TL;DR: An optical flow algorithm called SimpleFlow is proposed whose running times increase sublinearly in the number of pixels; its probabilistic representation of the motion flow is computed using only local evidence and without resorting to global optimization.
Abstract: Optical flow is a critical component of video editing applications, e.g. for tasks such as object tracking, segmentation, and selection. In this paper, we propose an optical flow algorithm called SimpleFlow whose running times increase sublinearly in the number of pixels. Central to our approach is a probabilistic representation of the motion flow that is computed using only local evidence and without resorting to global optimization. To estimate the flow in image regions where the motion is smooth, we use a sparse set of samples only, thereby avoiding the expensive computation inherent in traditional dense algorithms. We show that our results can be used as is for a variety of video editing tasks. For applications where accuracy is paramount, we use our result to bootstrap a global optimization. This significantly reduces the running times of such methods without sacrificing accuracy. We also demonstrate that the SimpleFlow algorithm can process HD and 4K footage in reasonable times. © 2012 Wiley Periodicals, Inc.
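
The following toy sketch conveys the flavour of estimating flow from purely local evidence: at a single pixel it scores candidate displacements by a patch-matching energy, converts the energies into probabilistic weights, and returns the expected displacement. It is a schematic illustration of the idea, not the published SimpleFlow implementation; the window sizes and sigma are arbitrary.

```python
import numpy as np

def local_flow(I0, I1, x, y, patch=3, search=5, sigma=8.0):
    """Estimate the flow at pixel (x, y) from local evidence only.

    I0, I1: greyscale frames as 2D float arrays; (x, y) must lie far enough from
    the border that all windows stay inside the image. Returns the
    likelihood-weighted mean displacement (dx, dy).
    """
    p = patch
    ref = I0[y - p:y + p + 1, x - p:x + p + 1]
    disps, weights = [], []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = I1[y + dy - p:y + dy + p + 1, x + dx - p:x + dx + p + 1]
            e = np.mean((ref - cand) ** 2)          # local matching energy
            disps.append((dx, dy))
            weights.append(np.exp(-e / (2 * sigma ** 2)))
    w = np.asarray(weights)
    d = np.asarray(disps, dtype=float)
    return (w[:, None] * d).sum(axis=0) / w.sum()   # expected displacement
```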

174 citations


Journal ArticleDOI
TL;DR: To fuse multiple features, this work proposes a new formulation of optimization with a consistent penalty, which facilitates both the identification of the most similar patches and the selection of master features for two similar patches.
Abstract: We present a novel algorithm for automatically co-segmenting a set of shapes from a common family into consistent parts. Starting from over-segmentations of shapes, our approach generates the segmentations by grouping the primitive patches of the shapes directly and obtains their correspondences simultaneously. The core of the algorithm is to compute an affinity matrix where each entry encodes the similarity between two patches, which is measured based on the geometric features of patches. Instead of concatenating the different features into one feature descriptor, we formulate co-segmentation into a subspace clustering problem in multiple feature spaces. Specifically, to fuse multiple features, we propose a new formulation of optimization with a consistent penalty, which facilitates both the identification of the most similar patches and the selection of master features for two similar patches. Therefore the affinity matrices for various features are sparsity-consistent and the similarity between a pair of patches may be determined by a subset of (instead of all) the features. Experimental results show that our algorithm jointly extracts consistent parts across the collection. © 2012 Wiley Periodicals, Inc.

163 citations


Journal ArticleDOI
TL;DR: This paper presents a new method for estimating normals on unorganized point clouds that preserves sharp features and is at least as precise and noise‐resistant as state‐of‐the‐art methods that preserve sharp features, while being almost an order of magnitude faster.
Abstract: This paper presents a new method for estimating normals on unorganized point clouds that preserves sharp features. It is based on a robust version of the Randomized Hough Transform (RHT). We consider the filled Hough transform accumulator as an image of the discrete probability distribution of possible normals. The normals we estimate correspond to the maximum of this distribution. We use a fixed-size accumulator for speed, statistical exploration bounds for robustness, and randomized accumulators to prevent discretization effects. We also propose various sampling strategies to deal with anisotropy, as produced by laser scans due to differences of incidence. Our experiments show that our approach offers an ideal compromise between precision, speed, and robustness: it is at least as precise and noise-resistant as state-of-the-art methods that preserve sharp features, while being almost an order of magnitude faster. In addition, it can handle anisotropy with minor speed and precision losses. © 2012 Wiley Periodicals, Inc.
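
A bare-bones version of Hough-style normal voting looks like the sketch below: random triplets of neighbouring points each vote for the plane normal they span, and the most-voted direction wins. The accumulator here is a crude rounded-direction histogram rather than the paper's fixed-size randomized accumulators with statistical exploration bounds.

```python
import numpy as np
from collections import Counter

def hough_normal(neighbours, n_triplets=200, bins=20, seed=0):
    """Estimate a normal for a point from its neighbourhood by random plane voting.

    neighbours: (k, 3) array of nearby points, k >= 3.
    """
    rng = np.random.default_rng(seed)
    votes, samples = Counter(), {}
    for _ in range(n_triplets):
        a, b, c = neighbours[rng.choice(len(neighbours), 3, replace=False)]
        n = np.cross(b - a, c - a)
        length = np.linalg.norm(n)
        if length < 1e-12:
            continue                               # degenerate (collinear) triplet
        n /= length
        if n[2] < 0:                               # fold antipodal directions together
            n = -n
        key = tuple(np.round(n * bins).astype(int))
        votes[key] += 1
        samples.setdefault(key, []).append(n)
    best = votes.most_common(1)[0][0]
    return np.mean(samples[best], axis=0)          # average normals in the winning bin
```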

163 citations


Journal ArticleDOI
TL;DR: An interactive visual analytics system for document clustering, called iVisClustering, is proposed based on a widely‐used topic modeling method, latent Dirichlet allocation (LDA), which provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates.
Abstract: Clustering plays an important role in many large-scale data analyses, providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widely-used topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With the help of these visualization modules, we can interactively refine the clustering results in various ways. Keywords can be adjusted so that they characterize each cluster better. In addition, our system can filter out noisy data and re-cluster the data accordingly. A cluster hierarchy can be constructed using a tree structure, and for this purpose the system supports cluster-level interactions such as sub-clustering, removing unimportant clusters, merging the clusters that have similar meanings, and moving certain clusters to any other node in the tree structure. Furthermore, the system provides document-level interactions such as moving mis-clustered documents to another cluster and removing useless documents. Finally, we present how interactive clustering is performed via iVisClustering by using real-world document data sets. © 2012 Wiley Periodicals, Inc.
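
For readers unfamiliar with the LDA side of such a system, the snippet below shows the basic non-interactive building block: fitting LDA to a small document collection with scikit-learn and listing the most representative keywords per topic/cluster. It is a generic sketch, not the iVisClustering code; the toy documents are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "graphics rendering light shading gpu",
    "shading gpu rendering realtime light",
    "clustering topic documents keywords text",
    "text documents topic model keywords",
]  # hypothetical toy corpus

vec = CountVectorizer()
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)                   # soft cluster memberships per document

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):      # topic-word weight matrix
    top = weights.argsort()[::-1][:5]
    print(f"topic {k}:", [terms[i] for i in top])  # representative keywords per cluster
```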

155 citations


Journal ArticleDOI
TL;DR: A taxonomy of visual cluster separation factors in scatterplots, and an in‐depth qualitative evaluation of two recently proposed and validated separation measures are provided.
Abstract: We provide two contributions, a taxonomy of visual cluster separation factors in scatterplots, and an in-depth qualitative evaluation of two recently proposed and validated separation measures. We initially intended to use these measures to provide guidance for the use of dimension reduction (DR) techniques and visual encoding (VE) choices, but found that they failed to produce reliable results. To understand why, we conducted a systematic qualitative data study covering a broad collection of 75 real and synthetic high-dimensional datasets, four DR techniques, and three scatterplot-based visual encodings. Two authors visually inspected over 800 plots to determine whether or not the measures created plausible results. We found that they failed in over half the cases overall, and in over two-thirds of the cases involving real datasets. Using open and axial coding of failure reasons and separability characteristics, we generated a taxonomy of visual cluster separability factors. We iteratively refined its explanatory clarity and power by mapping the studied datasets and success and failure ranges of the measures onto the factor axes. Our taxonomy has four categories, ordered by their ability to influence successors: Scale, Point Distance, Shape, and Position. Each category is split into Within-Cluster factors such as density, curvature, isotropy, and clumpiness, and Between-Cluster factors that arise from the variance of these properties, culminating in the overarching factor of class separation. The resulting taxonomy can be used to guide the design and the evaluation of cluster separation measures. © 2012 Wiley Periodicals, Inc.
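
To ground what a "cluster separation measure" computes on a scatterplot, here is a generic example using the silhouette coefficient on labelled 2D points. The silhouette is only an illustrative stand-in; it is not one of the two measures evaluated in this study.

```python
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Two synthetic 2D clusters of 50 points each (hypothetical data).
a = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(50, 2))
b = rng.normal(loc=(2.0, 0.0), scale=0.3, size=(50, 2))
points = np.vstack([a, b])
labels = np.array([0] * 50 + [1] * 50)

# Values near 1 indicate visually well-separated clusters; near 0, heavy overlap.
print("separation score:", silhouette_score(points, labels))
```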

153 citations


Journal ArticleDOI
TL;DR: A novel algorithm for decomposing a single image into its intrinsic shading and reflectance components is presented that requires no user strokes; based on simple assumptions about reflectance and luminance, it finds clusters of similar reflectance and builds a linear system describing the connections and relations between them.
Abstract: Decomposing an input image into its intrinsic shading and reflectance components is a long-standing ill-posed problem. We present a novel algorithm that requires no user strokes and works on a single image. Based on simple assumptions about its reflectance and luminance, we first find clusters of similar reflectance in the image, and build a linear system describing the connections and relations between them. Our assumptions are less restrictive than widely-adopted Retinex-based approaches, and can be further relaxed in conflicting situations. The resulting system is robust even in the presence of areas where our assumptions do not hold. We show a wide variety of results, including natural images, objects from the MIT dataset and texture images, along with several applications, proving the versatility of our method. © 2012 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: To turn a video camera augmented with a recent infrared time-of-flight depth camera into a practical RGBZ video camera, efficient data filtering techniques are developed that are tailored to the noise characteristics of IR depth cameras.
Abstract: Sophisticated video processing effects require both image and geometry information. We explore the possibility to augment a video camera with a recent infrared time-of-flight depth camera, to capture high-resolution RGB and low-resolution, noisy depth at video frame rates. To turn such a setup into a practical RGBZ video camera, we develop efficient data filtering techniques that are tailored to the noise characteristics of IR depth cameras. We first remove typical artefacts in the RGBZ data and then apply an efficient spatiotemporal denoising and upsampling scheme. This allows us to record temporally coherent RGBZ videos at interactive frame rates and to use them to render a variety of effects in unprecedented quality. We show effects such as video relighting, geometry-based abstraction and stylisation, background segmentation and rendering in stereoscopic 3D. © 2012 Wiley Periodicals, Inc.
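
One standard ingredient for this kind of depth clean-up is a joint (cross) bilateral filter that smooths the noisy depth while taking its edge-stopping weights from the aligned RGB frame. The sketch below is a straightforward single-frame version under that assumption; the paper's full pipeline is spatiotemporal, upsamples the depth, and removes sensor-specific artefacts, none of which is shown here.

```python
import numpy as np

def joint_bilateral_depth(depth, rgb, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Denoise a depth map using the aligned RGB image as the range guide.

    depth: (h, w) float array; rgb: (h, w, 3) uint8 image at the same resolution.
    """
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    guide = rgb.astype(float) / 255.0
    pad = radius
    dpad = np.pad(depth.astype(float), pad, mode='edge')
    gpad = np.pad(guide, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    for y in range(h):
        for x in range(w):
            dwin = dpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gwin = gpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            diff = gwin - guide[y, x]
            rng_w = np.exp(-np.sum(diff ** 2, axis=2) / (2 * sigma_r ** 2))
            wgt = spatial * rng_w                  # colour edges stop the smoothing
            out[y, x] = np.sum(wgt * dwin) / np.sum(wgt)
    return out
```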

Journal ArticleDOI
TL;DR: This work elaborates and calibrates a model from microscopic analysis of real kinematics data collected during experiments, and carefully evaluates the model both at the microscopic and the macroscopic levels.
Abstract: While walking through a crowd, a pedestrian experiences a large number of interactions with his neighbors. The nature of these interactions is varied, and it has been observed that macroscopic phenomena emerge from the combination of these local interactions. Crowd models have hitherto considered collision avoidance as the sole type of interaction between individuals; few have considered walking in groups. By contrast, our paper focuses on interactions due to the following behaviors of pedestrians. Following is frequently observed when people walk in corridors or when they queue. Typical macroscopic stop-and-go waves emerge under such traffic conditions. Our contributions are, first, an experimental study on following behaviors, second, a numerical model for simulating such interactions, and third, its calibration, evaluation and applications. Through an experimental approach, we elaborate and calibrate a model from microscopic analysis of real kinematics data collected during experiments. We carefully evaluate our model both at the microscopic and the macroscopic levels. We also demonstrate our approach on applications where following interactions are prominent. © 2012 Wiley Periodicals, Inc.
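
As a flavour of how stop-and-go waves can emerge from following interactions alone, here is a generic one-dimensional follow-the-leader simulation on a ring, in which each pedestrian relaxes its speed toward a headway-dependent desired speed. The model form and all constants are illustrative; this is not the calibrated model from the paper.

```python
import numpy as np

def simulate_following(n=30, length=60.0, steps=2000, dt=0.05, tau=0.6, seed=0):
    """1D follow-the-leader chain on a ring road.

    Each agent relaxes its speed toward an 'optimal velocity' that depends on the
    gap to its predecessor; with suitable parameters, speed oscillations
    (stop-and-go waves) develop from random initial spacing.
    """
    rng = np.random.default_rng(seed)
    x = np.sort(rng.uniform(0, length, n))         # positions along the corridor/ring
    v = np.zeros(n)

    def v_opt(gap):                                # desired speed as a function of headway
        return 1.4 * np.clip((gap - 0.5) / 1.5, 0.0, 1.0)

    for _ in range(steps):
        gap = np.roll(x, -1) - x
        gap[-1] += length                          # wrap around the ring
        v += dt / tau * (v_opt(gap) - v)           # relax toward the optimal velocity
        x = (x + dt * v) % length
    return x, v

x, v = simulate_following()
print("speed spread (a proxy for stop-and-go):", v.std())
```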

Journal ArticleDOI
TL;DR: Unlike standard textile testing, this system measures complex 3D deformations of a sheet of cloth, not just one‐dimensional force‐displacement curves, so it works under a wider range of deformation conditions.
Abstract: Progress in cloth simulation for computer animation and apparel design has led to a multitude of deformation models, each with its own way of relating geometry, deformation, and forces. As simulators improve, differences between these models become more important, but it is difficult to choose a model and a set of parameters to match a given real material simply by looking at simulation results. This paper provides measurement and fitting methods that allow nonlinear models to be fit to the observed deformation of a particular cloth sample. Unlike standard textile testing, our system measures complex 3D deformations of a sheet of cloth, not just one-dimensional force-displacement curves, so it works under a wider range of deformation conditions. The fitted models are then evaluated by comparison to measured deformations with motions very different from those used for fitting. © 2012 Wiley Periodicals, Inc. (This work was funded in part by the Spanish Ministry of Science and Innovation, project TIN2009–07942, and by the European Research Council, project ERC–2011-StG-280135 Animetrics.)

Journal ArticleDOI
TL;DR: A structured review of over two decades of research on physics‐based character animation is presented, along with various open research areas and possible future directions.
Abstract: Physics simulation offers the possibility of truly responsive and realistic animation. Despite wide adoption of physics simulation for the animation of passive phenomena, such as fluids, cloths and rag-doll characters, commercial applications still resort to kinematics-based approaches for the animation of actively controlled characters. However, following a renewed interest in the use of physics simulation for interactive character animation, many recent publications demonstrate tremendous improvements in robustness, visual quality and usability. We present a structured review of over two decades of research on physics-based character animation, as well as point out various open research areas and possible future directions. © 2012 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: A simple algorithm and data structures are presented for d‐dimensional unbiased maximal Poisson‐disk sampling that use an order of magnitude less memory and time than the alternatives, which allows for bigger samplings.
Abstract: We provide a simple algorithm and data structures for d-dimensional unbiased maximal Poisson-disk sampling. We use an order of magnitude less memory and time than the alternatives. Our results become more favorable as the dimension increases. This allows us to produce bigger samplings. Domains may be non-convex with holes. The generated point cloud is maximal up to round-off error. The serial algorithm is provably bias-free. For an output sampling of size n in fixed dimension d, we use a linear memory budget and empirical Θ(n) runtime. No known methods scale well with dimension, due to the “curse of dimensionality.” The serial algorithm is practical in dimensions up to 5, and has been demonstrated in 6d. We have efficient GPU implementations in 2d and 3d. The algorithm proceeds through a finite sequence of uniform grids. The grids guide the dart throwing and track the remaining disk-free area. The top-level grid provides an efficient way to test if a candidate dart is disk-free. Our uniform grids are like quadtrees, except we delay splits and refine all leaves at once. Since the quadtree is flat it can be represented using very little memory: we just need the indices of the active leaves and a global level. Also it is very simple to sample from leaves with uniform probability. © 2012 Wiley Periodicals, Inc.
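
For reference, classic 2D dart throwing with a background grid, the baseline this paper improves on, fits in a few lines; the sketch below uses cells of side r/√2 so each cell holds at most one sample and only a 5x5 cell neighbourhood needs checking. Unlike the paper's method, this version is neither provably maximal nor memory-optimal.

```python
import numpy as np

def poisson_disk(r, width=1.0, height=1.0, max_darts=200000, seed=0):
    """Naive 2D dart throwing with a uniform background grid for conflict checks."""
    rng = np.random.default_rng(seed)
    cell = r / np.sqrt(2.0)                        # at most one sample per cell
    nx, ny = int(np.ceil(width / cell)), int(np.ceil(height / cell))
    grid = np.full((ny, nx), -1, dtype=int)        # index of the sample in each cell
    pts = []
    for _ in range(max_darts):
        p = rng.uniform([0.0, 0.0], [width, height])
        gx, gy = int(p[0] / cell), int(p[1] / cell)
        ok = True
        for j in range(max(gy - 2, 0), min(gy + 3, ny)):
            for i in range(max(gx - 2, 0), min(gx + 3, nx)):
                k = grid[j, i]
                if k >= 0 and np.sum((np.array(pts[k]) - p) ** 2) < r * r:
                    ok = False
                    break
            if not ok:
                break
        if ok:                                     # dart is disk-free: accept it
            grid[gy, gx] = len(pts)
            pts.append(tuple(p))
    return np.array(pts)

print(len(poisson_disk(0.05)), "samples accepted")
```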

Journal ArticleDOI
TL;DR: An extended binary space partitioning tree is proposed as an efficient representation of such cardboard models, which allows the feasibility of newly added planar elements to be evaluated quickly, and tools are provided for generating cardboard sculptures with guaranteed constructibility.
Abstract: We introduce an algorithm and representation for fabricating 3D shape abstractions using mutually intersecting planar cut-outs. The planes have prefabricated slits at their intersections and are assembled by sliding them together. Often such abstractions are used as a sculptural art form or in architecture and are colloquially called ‘cardboard sculptures’. Based on an analysis of construction rules, we propose an extended binary space partitioning tree as an efficient representation of such cardboard models which allows us to quickly evaluate the feasibility of newly added planar elements. The complexity of insertion order quickly increases with the number of planar elements and manual analysis becomes intractable. We provide tools for generating cardboard sculptures with guaranteed constructibility. In combination with a simple optimization and sampling strategy for new elements, planar shape abstraction models can be designed by iteratively adding elements. As an output, we obtain a fabrication plan that can be printed or sent to a laser cutter. We demonstrate the complete process by designing and fabricating cardboard models of various well-known 3D shapes. © 2012 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: This paper introduces soft maps, a probabilistic relaxation of point‐to‐point correspondence that explicitly incorporates ambiguities in the mapping process, and shows how they can be represented using probability matrices and computed through a convex optimization that explicitly trades off between continuity, conformity to geometric descriptors, and spread.
Abstract: The problem of mapping between two non-isometric surfaces admits ambiguities on both local and global scales. For instance, symmetries can make it possible for multiple maps to be equally acceptable, and stretching, slippage, and compression introduce difficulties deciding exactly where each point should go. Since most algorithms for point-to-point or even sparse mapping struggle to resolve these ambiguities, in this paper we introduce soft maps, a probabilistic relaxation of point-to-point correspondence that explicitly incorporates ambiguities in the mapping process. In addition to explaining a continuous theory of soft maps, we show how they can be represented using probability matrices and computed for given pairs of surfaces through a convex optimization explicitly trading off between continuity, conformity to geometric descriptors, and spread. Given that our correspondences are encoded in matrix form, we also illustrate how low-rank approximation and other linear algebraic tools can be used to analyze, simplify, and represent both individual and collections of soft maps. © 2012 Wiley Periodicals, Inc.
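
The probability-matrix encoding is easy to picture with a small toy: given per-point descriptors on two shapes, a row-stochastic matrix can assign each source point a distribution over target points. The sketch below builds such a matrix with a simple softmax over descriptor distances; the actual soft maps are computed by a convex optimization that also accounts for continuity and spread, which this toy omits.

```python
import numpy as np

def soft_map(desc_a, desc_b, temperature=0.1):
    """Build a row-stochastic soft correspondence matrix from per-point descriptors.

    desc_a: (n, d) descriptors on shape A; desc_b: (m, d) descriptors on shape B.
    Row i is a probability distribution over target points for source point i.
    """
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)        # each row sums to 1

A = np.random.default_rng(0).normal(size=(5, 4))
B = A + 0.05 * np.random.default_rng(1).normal(size=(5, 4))   # slightly perturbed copy
print(np.round(soft_map(A, B), 2))                 # near-diagonal, but with spread
```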

Journal ArticleDOI
TL;DR: This work presents a new algorithm for accelerating the colour bilateral filter based on a subsampling strategy working in the spatial domain; it offers an excellent trade‐off between visual quality and speed‐up, requires very low memory overhead, and is straightforward to implement on the GPU, allowing real‐time filtering.
Abstract: In this work we present a new algorithm for accelerating the colour bilateral filter based on a subsampling strategy working in the spatial domain. The base idea is to use a suitable subset of samples of the entire kernel in order to obtain a good estimation of the exact filter values. The main advantages of the proposed approach are that it has an excellent trade-off between visual quality and speed-up, a very low memory overhead is required and it is straightforward to implement on the GPU allowing real-time filtering. We show different applications of the proposed filter, in particular efficient cross-bilateral filtering, real-time edge-aware image editing and fast video denoising. We compare our method against the state of the art in terms of image quality, time performance and memory usage. © 2012 Wiley Periodicals, Inc.
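
The flavour of the subsampling idea can be illustrated as follows: instead of visiting every offset in the (2r+1)^2 kernel window, the filter below evaluates only a fixed random subset of offsets. This is a simplified illustration, not the authors' sampling scheme or GPU implementation; it assumes a colour image with values in [0, 1].

```python
import numpy as np

def subsampled_bilateral(img, radius=7, sigma_s=4.0, sigma_r=0.1, n_samples=24, seed=0):
    """Colour bilateral filter evaluated on a random subset of kernel offsets.

    img: (h, w, 3) float array with values in [0, 1].
    """
    rng = np.random.default_rng(seed)
    offs = rng.integers(-radius, radius + 1, size=(n_samples, 2))   # sampled offsets
    h, w = img.shape[:2]
    pad = radius
    ipad = np.pad(img.astype(float), ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    center = img.astype(float)
    num = np.zeros_like(center)
    den = np.zeros((h, w), dtype=float)
    for dy, dx in offs:
        shifted = ipad[pad + dy:pad + dy + h, pad + dx:pad + dx + w]
        ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))       # spatial weight
        wr = np.exp(-np.sum((shifted - center) ** 2, axis=2) / (2 * sigma_r ** 2))
        wgt = ws * wr                                                # range weight per pixel
        num += wgt[..., None] * shifted
        den += wgt
    return num / den[..., None]
```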

Journal ArticleDOI
TL;DR: This design study presents an analysis and abstraction of the data and task in the domain of fisheries management, and the design and implementation of the Vismon tool to address the identified requirements.
Abstract: In this design study, we present an analysis and abstraction of the data and task in the domain of fisheries management, and the design and implementation of the Vismon tool to address the identified requirements. Vismon was designed to support sophisticated data analysis of simulation results by managers who are highly knowledgeable about the fisheries domain but not experts in simulation software and statistical data analysis. The previous workflow required the scientists who built the models to spearhead the analysis process. The features of Vismon include sensitivity analysis, comprehensive and global trade-offs analysis, and a staged approach to the visualization of the uncertainty of the underlying simulation model. The tool was iteratively refined through a multi-year engagement with fisheries scientists with a two-phase approach, where an initial diverging experimentation phase to test many alternatives was followed by a converging phase where the set of multiple linked views that proved effective were integrated together in a useable way. Several fisheries scientists have used Vismon to communicate with policy makers, and it is scheduled for deployment to policy makers in Alaska. © 2012 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: The SGD micro‐facet distribution for Cook‐Torrance BRDF is introduced, which accurately models the behavior of most materials and accurately represents all measured BRDFs using a single lobe.
Abstract: Material models are essential to the production of photo-realistic images. Measured BRDFs provide accurate representation with complex visual appearance, but have larger storage cost. Analytical BRDFs such as Cook-Torrance provide a compact representation but fail to represent the effects we observe with measured appearance. Accurately fitting an analytical BRDF to measured data remains a challenging problem. In this paper we introduce the SGD micro-facet distribution for Cook-Torrance BRDF. This distribution accurately models the behavior of most materials. As a consequence, we accurately represent all measured BRDFs using a single lobe. Our fitting procedure is stable and robust, and does not require manual tweaking of the parameters. © 2012 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: This work introduces a discrete paradigm for developable surface modeling that is able to enforce exact developability at every step, ensuring that users do not inadvertently suggest configurations that leave the manifold of admissible folds of a flat two‐dimensional sheet.
Abstract: We introduce a discrete paradigm for developable surface modeling. Unlike previous attempts at interactive developable surface modeling, our system is able to enforce exact developability at every step, ensuring that users do not inadvertently suggest configurations that leave the manifold of admissible folds of a flat two-dimensional sheet. With methods for navigation of this highly nonlinear constraint space in place, we show how to formulate a discrete mean curvature bending energy measuring how far a given discrete developable surface is from being flat. This energy enables relaxation of user-generated configurations and suggests a straightforward subdivision scheme that produces admissible smoothed versions of bent regions of our discrete developable surfaces. © 2012 Wiley Periodicals, Inc.
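
A convenient way to check discrete developability is the angle defect (discrete Gaussian curvature): an interior vertex of a surface that can be unrolled flat without stretching has defect zero. The sketch below computes per-vertex defects for a triangle mesh; it is a generic diagnostic, not the paper's constraint-space navigation, and boundary vertices (whose reference angle is π rather than 2π) are not treated specially here.

```python
import numpy as np

def angle_defects(V, F):
    """Angle defect 2*pi - sum of incident triangle angles, per vertex.

    V: (n, 3) vertex positions; F: iterable of triangles as index triples.
    Interior vertices of a developable mesh should have defect ~ 0.
    """
    defect = np.full(len(V), 2.0 * np.pi)
    for tri in F:
        for i in range(3):
            a, b, c = V[tri[i]], V[tri[(i + 1) % 3]], V[tri[(i + 2) % 3]]
            u, v = b - a, c - a
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            defect[tri[i]] -= np.arccos(np.clip(cosang, -1.0, 1.0))
    return defect
```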

Journal ArticleDOI
TL;DR: A novel physics‐driven shape optimization method is proposed, which combines physical simulation of inflatable elastic membranes with a dedicated constrained optimization algorithm and is validated by fabricating balloons designed with this method and comparing their inflated shapes to the results predicted by simulation.
Abstract: This paper presents an automatic process for fabrication-oriented design of custom-shaped rubber balloons. We cast computational balloon design as an inverse problem: given a target shape, we compute an optimal balloon that, when inflated, approximates the target as closely as possible. To solve this problem numerically, we propose a novel physics-driven shape optimization method, which combines physical simulation of inflatable elastic membranes with a dedicated constrained optimization algorithm. We validate our approach by fabricating balloons designed with our method and comparing their inflated shapes to the results predicted by simulation. An extensive set of manufactured sample balloons demonstrates the shape diversity that can be achieved by our method. © 2012 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: A computational model for geodesics in the space of thin shells, with a metric that reflects viscous dissipation required to physically deform a thin shell, is offered, which emphasizes the strong impact of physical parameters on the evolution of a shell shape along a geodesic path.
Abstract: Building on concepts from continuum mechanics, we offer a computational model for geodesics in the space of thin shells, with a metric that reflects viscous dissipation required to physically deform a thin shell. Different from previous work, we incorporate bending contributions into our deformation energy on top of membrane distortion terms in order to obtain a physically sound notion of distance between shells, which does not require additional smoothing. Our bending energy formulation depends on the so-called relative Weingarten map, for which we provide a discrete analogue based on principles of discrete differential geometry. Our computational results emphasize the strong impact of physical parameters on the evolution of a shell shape along a geodesic path. © 2012 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: In computer graphics, triangle meshes are ubiquitous as a representation of surface models and advanced processing algorithms are continuously being proposed, aiming at improving performance (compression ratio, watermark robustness and capacity), while minimizing the introduced distortion.
Abstract: In computer graphics, triangle meshes are ubiquitous as a representation of surface models. Processing of this kind of data, such as compression or watermarking, often involves an unwanted distortion of the surface geometry. Advanced processing algorithms are continuously being proposed, aiming at improving performance (compression ratio, watermark robustness and capacity), while minimizing the introduced distortion. In most cases, the final resulting mesh is intended to be viewed by a human observer, and it is therefore necessary to minimise the amount of distortion perceived by the human visual system. However, only recently there have been studies published on subjective experiments in this field, showing that previously used objective error measures exhibit rather poor correlation with the results of subjective experiments. In this paper, we present results of our own large subjective testing aimed at human perception of triangle mesh distortion. We provide an independent confirmation of the previous result by Lavoue et al. that most current metrics perform poorly, with the exception of the MSDM/MSDM2 metrics. We propose a novel metric based on measuring the distortion of dihedral angles, which provides even higher correlation with the results of our experiments and experiments performed by other researchers. Our metric is about two orders of magnitude faster than MSDM/MSDM2, which makes it much more suitable for usage in iterative optimisation algorithms. © 2012 Wiley Periodicals, Inc.
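
The core quantity behind such a metric is simple to compute: the change in dihedral angle across each interior edge between the reference mesh and its processed version. The sketch below averages absolute dihedral-angle differences over shared connectivity; it is a toy stand-in for comparison purposes, not the exact metric proposed in the paper.

```python
import numpy as np

def dihedral_angles(V, F):
    """Dihedral angle per interior edge, from the normals of the two incident triangles."""
    normals, edge_faces = {}, {}
    for fi, (a, b, c) in enumerate(F):
        n = np.cross(V[b] - V[a], V[c] - V[a])
        normals[fi] = n / np.linalg.norm(n)
        for e in [(a, b), (b, c), (c, a)]:
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    angles = {}
    for e, fs in edge_faces.items():
        if len(fs) == 2:                           # interior edge shared by two faces
            cosd = np.clip(np.dot(normals[fs[0]], normals[fs[1]]), -1.0, 1.0)
            angles[e] = np.arccos(cosd)
    return angles

def dihedral_distortion(V_ref, V_dist, F):
    """Mean absolute dihedral-angle change between a reference mesh and a distorted
    copy with identical connectivity."""
    a0, a1 = dihedral_angles(V_ref, F), dihedral_angles(V_dist, F)
    return float(np.mean([abs(a0[e] - a1[e]) for e in a0 if e in a1]))
```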

Journal ArticleDOI
TL;DR: By directly addressing the block subdivision problem, this work intends to increase the editability and realism of the urban modeling pipeline and to become a standard in parcel generation for future urban modeling methods.
Abstract: We present a method for interactive procedural generation of parcels within the urban modeling pipeline. Our approach performs a partitioning of the interior of city blocks using user-specified subdivision attributes and style parameters. Moreover, our method is both robust and persistent in the sense of being able to map individual parcels from before an edit operation to after an edit operation – this enables transferring most, if not all, customizations despite small to large-scale interactive editing operations. The guidelines guarantee that the resulting subdivisions are functionally and geometrically plausible for subsequent building modeling and construction. Our results include visual and statistical comparisons that demonstrate how the parcel configurations created by our method can closely resemble those found in real-world cities of a large variety of styles. By directly addressing the block subdivision problem, we intend to increase the editability and realism of the urban modeling pipeline and to become a standard in parcel generation for future urban modeling methods. © 2012 Wiley Periodicals, Inc.
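
As a minimal illustration of block subdivision, the sketch below recursively splits an axis-aligned block along its longer side, with slight random offsets, until parcels reach a target area. Real street blocks are general polygons, and the paper's method is style-driven, street-aware, and persistent under edits, none of which this toy captures.

```python
import random

def subdivide_block(rect, min_area=120.0, jitter=0.2, rng=None):
    """Recursively split an axis-aligned block (x, y, w, h) along its longer side
    until every parcel's area drops below min_area. Returns a list of parcels."""
    rng = rng or random.Random(0)
    x, y, w, h = rect
    if w * h <= min_area:
        return [rect]
    t = 0.5 + rng.uniform(-jitter, jitter)         # randomised split position
    if w >= h:
        parts = [(x, y, w * t, h), (x + w * t, y, w * (1 - t), h)]
    else:
        parts = [(x, y, w, h * t), (x, y + h * t, w, h * (1 - t))]
    return [p for part in parts for p in subdivide_block(part, min_area, jitter, rng)]

print(len(subdivide_block((0.0, 0.0, 80.0, 40.0))), "parcels in an 80x40 block")
```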

Journal ArticleDOI
TL;DR: A novel approach to building document clouds, named ProjCloud, is presented that aims at both producing semantically consistent layouts and linking them to the underlying document sets, together with a new algorithm for building word clouds inside polygons, which employs spectral sorting to maintain the semantic relationship among words.
Abstract: Word clouds have become one of the most widely accepted visual resources for document analysis and visualization, motivating the development of several methods for building layouts of keywords extracted from textual data. Existing methods are effective at demonstrating content, but are not capable of preserving semantic relationships among keywords while still linking the word cloud to the underlying document groups that generated them. Such a representation is highly desirable for exploratory analysis of document collections. In this paper we present a novel approach to building document clouds, named ProjCloud, that aims at both producing semantically consistent layouts and linking them with the underlying document sets. ProjCloud generates a semantically consistent layout from a set of documents. Through a multidimensional projection, it is possible to visualize the neighborhood relationship between highly related documents and their corresponding word clouds simultaneously. Additionally, we propose a new algorithm for building word clouds inside polygons, which employs spectral sorting to maintain the semantic relationship among words. The effectiveness and flexibility of our methodology are confirmed when comparisons are made to existing methods. The technique automatically constructs projection-based layouts that the user may choose to examine in the form of point clouds or corresponding word clouds, allowing a high degree of control over the exploratory process. © 2012 Wiley Periodicals, Inc.

Journal ArticleDOI
TL;DR: Kelp Diagrams is presented, a novel method to depict set relations over points, i.e., elements with predefined positions, which is achieved by a routing algorithm that links elements that are part of the same set by constructing minimum cost paths over a tangent visibility graph.
Abstract: We present Kelp Diagrams, a novel method to depict set relations over points, i.e., elements with predefined positions. Our method creates schematic drawings and has been designed to take aesthetic quality, efficiency, and effectiveness into account. This is achieved by a routing algorithm, which links elements that are part of the same set by constructing minimum cost paths over a tangent visibility graph. There are two styles of Kelp Diagrams to depict overlapping sets, a nested and a striped style, each with its own strengths and weaknesses. We compare Kelp Diagrams with two existing methods and show that our approach provides a more consistent and clear depiction of both element locations and their set relations. © 2012 Wiley Periodicals, Inc.
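
At its simplest, linking the elements of one set can be pictured as finding cheap connecting paths between their fixed positions. The sketch below connects a point set with a Euclidean minimum spanning tree using SciPy; the actual Kelp Diagrams route minimum-cost paths over a tangent visibility graph and handle multiple, overlapping sets, which this toy does not.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical fixed positions of the elements belonging to one set.
pts = np.array([[0.0, 0.0], [1.0, 0.2], [2.1, 0.1], [1.2, 1.5], [0.3, 1.1]])

cost = cdist(pts, pts)                             # pairwise Euclidean link costs
mst = minimum_spanning_tree(cost).tocoo()          # cheapest set of connecting links

for i, j, d in zip(mst.row, mst.col, mst.data):
    print(f"link {i} -- {j}  (length {d:.2f})")
```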

Journal ArticleDOI
TL;DR: The results suggest that switching from one view to the other might lead to an increase in the number of findings of specific types made by the subjects, which can be beneficial for certain tasks.
Abstract: We present a qualitative user study analyzing findings made while exploring changes over time in spatial interactions. We analyzed findings made by the study participants with flow maps, one of the most popular representations of spatial interactions, using animation and small-multiples as two alternative ways of representing temporal changes. Our goal was not to measure the subjects’ performance with the two views, but to find out whether there are qualitative differences between the types of findings users make with these two representations. To achieve this goal we performed a deep analysis of the collected findings, the interaction logs, and the subjective feedback from the users. We observed that with animation the subjects tended to make more findings concerning geographically local events and changes between subsequent years. With small-multiples more findings concerning longer time periods were made. Moreover, our results suggest that switching from one view to the other might lead to an increase in the number of findings of specific types made by the subjects, which can be beneficial for certain tasks. © 2012 Wiley Periodicals, Inc.