
Showing papers in "Computer Graphics Forum in 2014"


Journal ArticleDOI
TL;DR: In this paper, the authors present a self-contained state-of-the-art report on the physics, the models, the numerical methods and the algorithms used in interactive rigid body simulation, all of which have evolved and matured over the past 20 years.
Abstract: Interactive rigid body simulation is an important part of many modern computer tools, which no authoring tool nor game engine can do without. Such high-performance computer tools open up new possibilities for changing how designers, engineers, modelers and animators work with their design problems. This paper is a self-contained state-of-the-art report on the physics, the models, the numerical methods and the algorithms used in interactive rigid body simulation, all of which have evolved and matured over the past 20 years. Furthermore, the paper communicates the mathematical and theoretical details in a pedagogical manner. This paper is not only a stake in the sand on what has been done; it also seeks to give the reader deeper insights to help guide their future research.
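
To make the ingredients of such simulators concrete, below is a minimal sketch (not taken from the report) of one semi-implicit Euler time step for a single unconstrained rigid body; the state layout and function names are illustrative only, and constraint and contact handling, the report's main subject, is omitted.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def integrate_rigid_body(x, q, v, w, force, torque, mass, I_body, dt):
    """One semi-implicit Euler step for an unconstrained rigid body.

    x: position (3,), q: unit quaternion (w, x, y, z), v: linear velocity (3,),
    w: world-space angular velocity (3,), I_body: body-space inertia tensor (3, 3).
    """
    qw, qx, qy, qz = q
    R = np.array([  # rotation matrix from the (assumed normalized) quaternion
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qw*qz),     2*(qx*qz + qw*qy)],
        [2*(qx*qy + qw*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qw*qx)],
        [2*(qx*qz - qw*qy),     2*(qy*qz + qw*qx),     1 - 2*(qx*qx + qy*qy)],
    ])
    I_world = R @ I_body @ R.T

    # Velocities first (semi-implicit), using the Newton/Euler equations of motion.
    v = v + dt * force / mass
    w = w + dt * np.linalg.solve(I_world, torque - np.cross(w, I_world @ w))

    # Then position and orientation; q_dot = 0.5 * (0, w) * q.
    x = x + dt * v
    q = q + dt * 0.5 * quat_mul(np.array([0.0, *w]), q)
    return x, q / np.linalg.norm(q), v, w
```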

196 citations


Journal ArticleDOI
TL;DR: The concept of position-based dynamics is introduced, dynamic simulation based on shape matching and data-driven upsampling approaches are discussed, and several applications for these methods are presented.
Abstract: The dynamic simulation of mechanical effects has a long history in computer graphics. The classical methods in this field discretize Newton's second law in a variety of Lagrangian or Eulerian ways, and formulate forces appropriate for each mechanical effect: joints for rigid bodies; stretching, shearing or bending for deformable bodies; and pressure or viscosity for fluids, to mention just a few. In recent years, the class of position-based methods has become popular in the graphics community. These methods are fast, stable and controllable, which makes them well suited for use in interactive environments. Position-based methods are not as accurate as force-based methods in general, but they provide visual plausibility. Therefore, the main application areas of these approaches are virtual reality, computer games and special effects in movies. This state-of-the-art report covers the large variety of position-based methods that were developed in the field of physically based simulation. We will introduce the concept of position-based dynamics, present dynamic simulation based on shape matching and discuss data-driven upsampling approaches. Furthermore, we will present several applications for these methods.
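
As a concrete illustration of the core idea, here is a minimal position-based dynamics step for particles linked by distance constraints, written in the spirit of the methods surveyed rather than reproducing any specific one; the array layouts and parameter names are assumptions.

```python
import numpy as np

def pbd_step(x, v, inv_mass, edges, rest_len, dt, iters=10, gravity=(0.0, -9.81, 0.0)):
    """One position-based dynamics step: predict positions, iteratively project
    them onto distance constraints, then derive velocities from the correction."""
    g = np.asarray(gravity)
    v = v + dt * g * (inv_mass[:, None] > 0)        # external forces on dynamic particles
    p = x + dt * v                                  # predicted positions

    for _ in range(iters):                          # Gauss-Seidel constraint projection
        for (i, j), d0 in zip(edges, rest_len):
            delta = p[i] - p[j]
            d = np.linalg.norm(delta)
            w_sum = inv_mass[i] + inv_mass[j]
            if d < 1e-9 or w_sum == 0.0:
                continue
            corr = (d - d0) / (d * w_sum) * delta   # move particles to restore the rest length
            p[i] -= inv_mass[i] * corr
            p[j] += inv_mass[j] * corr

    v = (p - x) / dt                                # velocities follow the corrected positions
    return p, v
```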

178 citations


Journal ArticleDOI
TL;DR: A novel framework to evaluate multi‐agent crowd simulation algorithms based on real‐world observations of crowd movements to enable fair comparisons by automatically estimating the parameters that enable the simulation algorithms to best fit the given data.
Abstract: We present a novel framework to evaluate multi-agent crowd simulation algorithms based on real-world observations of crowd movements. A key aspect of our approach is to enable fair comparisons by automatically estimating the parameters that enable the simulation algorithms to best fit the given data. We formulate parameter estimation as an optimization problem, and propose a general framework to solve the combinatorial optimization problem for all parameterized crowd simulation algorithms. Our framework supports a variety of metrics to compare reference data and simulation outputs. The reference data may correspond to recorded trajectories, macroscopic parameters, or artist-driven sketches. We demonstrate the benefits of our framework for example-based simulation, modeling of cultural variations, artist-driven crowd animation, and relative comparison of some widely-used multi-agent simulation algorithms.
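
The parameter-estimation idea can be pictured with a brute-force search over a discretized parameter space; `simulate` and `metric` are placeholders for a parameterized crowd model and one of the comparison metrics mentioned above, and the paper's actual optimizer for this combinatorial problem is more sophisticated than this sketch.

```python
import itertools
import numpy as np

def fit_parameters(simulate, reference, param_grid, metric):
    """Exhaustively search a discretized parameter space and keep the setting
    whose simulation output best matches the reference data.

    simulate(params) -> simulation output, metric(sim, reference) -> scalar error.
    param_grid maps parameter names to lists of candidate values.
    """
    names = sorted(param_grid)
    best_params, best_err = None, np.inf
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        err = metric(simulate(params), reference)
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err
```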

169 citations


Journal ArticleDOI
TL;DR: This work proposes an inverse modelling approach for stochastic trees that takes polygonal tree models as input and estimates the parameters of a procedural model so that it produces trees similar to the input.
Abstract: Procedural tree models have been popular in computer graphics for their ability to generate a variety of output trees from a set of input parameters and to simulate plant interaction with the environment for a realistic placement of trees in virtual scenes. However, defining such models and their parameters is a difficult task. We propose an inverse modelling approach for stochastic trees that takes polygonal tree models as input and estimates the parameters of a procedural model so that it produces trees similar to the input. Our framework is based on a novel parametric model for tree generation and uses Markov chain Monte Carlo to find the optimal set of parameters. We demonstrate our approach on a variety of input models obtained from different sources, such as interactive modelling systems, reconstructed scans of real trees and developmental models.
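
A rough sketch of the search component follows: a Metropolis-style random walk over procedural parameters that keeps proposals producing trees more similar to the input. `generate` and `similarity` are hypothetical stand-ins for the parametric tree model and the comparison measure; the acceptance rule simply treats the similarity score as a log-likelihood, which is an assumption of this sketch.

```python
import numpy as np

def mcmc_fit(generate, similarity, target, theta0, sigma=0.05, n_iter=5000, seed=0):
    """Metropolis-style search for parameters theta so that generate(theta)
    resembles the target model under a similarity score (higher is better)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    score = similarity(generate(theta), target)
    best_theta, best_score = theta.copy(), score
    for _ in range(n_iter):
        proposal = theta + rng.normal(0.0, sigma, size=theta.shape)
        s = similarity(generate(proposal), target)
        # Always accept better proposals; accept worse ones with probability exp(delta).
        if s >= score or rng.random() < np.exp(s - score):
            theta, score = proposal, s
            if score > best_score:
                best_theta, best_score = theta.copy(), score
    return best_theta, best_score
```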

125 citations


Journal ArticleDOI
TL;DR: An optimization framework for 3D printing that seeks to save printing time and the support material required to print 3D shapes using the PackMerger framework, which converts the input 3D watertight mesh into a shell by hollowing its inner parts.
Abstract: We propose an optimization framework for 3D printing that seeks to save printing time and the support material required to print 3D shapes. Three-dimensional printing technology is rapidly maturing and may revolutionize how we manufacture objects. The total cost of printing, however, is governed by numerous factors which include not only the price of the printer but also the amount of material and time to fabricate the shape. Our PackMerger framework converts the input 3D watertight mesh into a shell by hollowing its inner parts. The shell is then divided into segments. The location of splits is controlled based on several parameters, including the size of the connection areas or volume of each segment. The pieces are then tightly packed using optimization. The optimization attempts to minimize the amount of support material and the bounding box volume of the packed segments while keeping the number of segments minimal. The final packed configuration can be printed with substantial time and material savings, while also allowing printing of objects that would not fit into the printer volume. We have tested our system on three different printers and it shows a reduction of 5-30% of the printing time while simultaneously saving 15-65% of the support material. The optimization time was approximately 1 min. Once the segments are printed, they need to be assembled.

122 citations


Journal ArticleDOI
TL;DR: A trivially parallelizable preprocessing step, which compresses a point cloud into a collection of nearly‐planar patches related by geometric transformations, enables us to robustly filter out noise and greatly reduces the computational cost and memory requirements of the method.
Abstract: We present a method to automatically segment indoor scenes by detecting repeated objects. Our algorithm scales to datasets with 198 million points and does not require any training data. We propose a trivially parallelizable preprocessing step, which compresses a point cloud into a collection of nearly-planar patches related by geometric transformations. This representation enables us to robustly filter out noise and greatly reduces the computational cost and memory requirements of our method, enabling execution at interactive rates. We propose a patch similarity measure based on shape descriptors and spatial configurations of neighboring patches. The patches are clustered in a Euclidean embedding space based on the similarity matrix to yield the segmentation of the input point cloud. The generated segmentation can be used to compress the raw point cloud, create an object database, and increase the clarity of the point cloud visualization.
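
The clustering stage can be pictured with a generic embed-then-cluster routine over a patch similarity matrix; this is a textbook spectral-clustering sketch under that assumption, not the paper's exact embedding.

```python
import numpy as np

def embed_and_cluster(S, k, n_iter=50, seed=0):
    """Cluster patches from a symmetric pairwise similarity matrix S (n x n)
    via a spectral embedding followed by plain k-means."""
    d = S.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = np.eye(len(S)) - D_inv_sqrt @ S @ D_inv_sqrt    # normalized graph Laplacian
    _, V = np.linalg.eigh(L_sym)
    X = V[:, :k]                                            # k smallest eigenvectors
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)

    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):                                 # Lloyd iterations
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels
```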

106 citations


Journal ArticleDOI
TL;DR: 4D Video Textures introduce a novel representation for rendering video-realistic interactive character animation from a database of 4D actor performance captured in a multiple-camera studio, achieving a >90% reduction in size and halving the rendering cost.
Abstract: 4D Video Textures (4DVT) introduce a novel representation for rendering video-realistic interactive character animation from a database of 4D actor performance captured in a multiple-camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free-viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only. 4DVT is the final piece in the puzzle enabling video-realistic interactive animation through two contributions: a layered view-dependent texture map representation which supports efficient storage, transmission and rendering from multiple-view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video-quality rendering of dynamic surface appearance whilst allowing high-level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user study which confirms that the visual quality of captured video is maintained. The 4DVT representation achieves a >90% reduction in size and halves the rendering cost.

101 citations


Journal ArticleDOI
TL;DR: An easy‐to‐follow, introductory tutorial of the many‐light theory is given; a comprehensive, unified survey of the topic is provided with a comparison of the main algorithms; limitations regarding materials and light transport phenomena are discussed and a vision to motivate and guide future research is presented.
Abstract: Recent years have seen increasing attention and significant progress in many-light rendering, a class of methods for efficient computation of global illumination. The many-light formulation offers a unified mathematical framework for the problem reducing the full lighting transport simulation to the calculation of the direct illumination from many virtual light sources. These methods are unrivaled in their scalability: they are able to produce plausible images in a fraction of a second but also converge to the full solution over time. In this state-of-the-art report, we give an easy-to-follow, introductory tutorial of the many-light theory; provide a comprehensive, unified survey of the topic with a comparison of the main algorithms; discuss limitations regarding materials and light transport phenomena and present a vision to motivate and guide future research. We will cover both the fundamental concepts as well as improvements, extensions and applications of many-light rendering.
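
The many-light formulation itself is compact enough to sketch: outgoing radiance at a diffuse point becomes a sum of direct contributions from virtual point lights. The snippet below is a textbook version with a clamped distance term and no visibility test; the data layout and names are illustrative, not taken from any surveyed algorithm.

```python
import numpy as np

def shade_with_vpls(p, n, albedo, vpls, eps=0.05):
    """Estimate outgoing radiance at a diffuse surface point from virtual point lights.

    p, n: shading point and unit normal (3,).
    vpls: iterable of (position, normal, flux) tuples for the virtual lights.
    Visibility is omitted; eps clamps the squared distance to suppress the
    usual near-singularity at the VPLs.
    """
    L = np.zeros(3)
    for xl, nl, flux in vpls:
        d = xl - p
        r2 = max(d @ d, eps)                 # clamped squared distance
        wi = d / np.sqrt(d @ d)
        cos_p = max(n @ wi, 0.0)             # cosine at the receiver
        cos_l = max(nl @ (-wi), 0.0)         # cosine at the virtual light
        L += flux * (albedo / np.pi) * cos_p * cos_l / r2
    return L
```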

98 citations


Journal ArticleDOI
TL;DR: This paper proposes a vote‐based approach to detect symmetry in 3D shapes, with special interest in models with large missing parts and shows the applicability of the algorithm in the repair and completion of challenging reassembled objects in the context of cultural heritage.
Abstract: Symmetry is a common characteristic in natural and man-made objects. Its ubiquitous nature can be exploited to facilitate the analysis and processing of computational representations of real objects. In particular, in computer graphics, the detection of symmetries in 3D geometry has enabled a number of applications in modeling and reconstruction. However, the problem of symmetry detection in incomplete geometry remains a challenging task. In this paper, we propose a vote-based approach to detect symmetry in 3D shapes, with special interest in models with large missing parts. Our algorithm generates a set of candidate symmetries by matching local maxima of a surface function based on the heat diffusion in local domains, which guarantee robustness to missing data. In order to deal with local perturbations, we propose a multi-scale surface function that is useful to select a set of distinctive points over which the approximate symmetries are defined. In addition, we introduce a vote-based scheme that is aware of the partiality, and therefore reduces the number of false positive votes for the candidate symmetries. We show the effectiveness of our method in a varied set of 3D shapes and different levels of partiality. Furthermore, we show the applicability of our algorithm in the repair and completion of challenging reassembled objects in the context of cultural heritage.

97 citations


Journal ArticleDOI
TL;DR: Unorganized collections of 3D models are analyzed to facilitate explorative shape synthesis by providing high‐level feedback of possible synthesizable shapes by jointly analyzing arrangements and shapes of parts across models.
Abstract: Recent advances in modeling tools enable non-expert users to synthesize novel shapes by assembling parts extracted from model databases. A major challenge for these tools is to provide users with relevant parts, which is especially difficult for large repositories with significant geometric variations. In this paper we analyze unorganized collections of 3D models to facilitate explorative shape synthesis by providing high-level feedback of possible synthesizable shapes. By jointly analyzing arrangements and shapes of parts across models, we hierarchically embed the models into low-dimensional spaces. The user can then use the parameterization to explore the existing models by clicking in different areas or by selecting groups to zoom on specific shape clusters. More importantly, any point in the embedded space can be lifted to an arrangement of parts to provide an abstracted view of possible shape variations. The abstraction can further be realized by appropriately deforming parts from neighboring models to produce synthesized geometry. Our experiments show that users can rapidly generate plausible and diverse shapes using our system, which also performs favorably with respect to previous modeling tools.

91 citations


Journal ArticleDOI
TL;DR: This survey reviews and classifies the existing techniques for advanced volumetric illumination based on their technical realization, their performance behaviour and their perceptual capabilities, and uses the limitations revealed by this review to define future challenges in the area of interactive advanced volumetric illumination.
Abstract: Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shading and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification will be conducted based on their technical realization, their performance behaviour as well as their perceptual capabilities. Based on the limitations revealed in this review, we will define future challenges in the area of interactive advanced volumetric illumination.
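
For context, the baseline these techniques extend is standard emission-absorption ray marching with local gradient-based shading; the sketch below assumes hypothetical `volume_sample` and `transfer` callables and is only meant to fix the terminology, not to represent any surveyed method.

```python
import numpy as np

def raymarch(volume_sample, transfer, light_dir, origin, direction, t0, t1, n_steps=256):
    """Front-to-back emission-absorption ray marching with gradient-based diffuse shading.

    volume_sample(p) -> scalar value, transfer(v) -> (rgb, opacity) with opacity
    defined per unit distance; light_dir is a unit vector toward the light.
    """
    dt = (t1 - t0) / n_steps
    color, alpha = np.zeros(3), 0.0
    eps = 1e-2
    for i in range(n_steps):
        p = origin + (t0 + (i + 0.5) * dt) * direction
        rgb, a = transfer(volume_sample(p))
        grad = np.array([  # central differences used as a shading normal
            volume_sample(p + [eps, 0, 0]) - volume_sample(p - [eps, 0, 0]),
            volume_sample(p + [0, eps, 0]) - volume_sample(p - [0, eps, 0]),
            volume_sample(p + [0, 0, eps]) - volume_sample(p - [0, 0, eps]),
        ])
        norm = np.linalg.norm(grad)
        diffuse = abs(np.dot(grad / norm, light_dir)) if norm > 1e-9 else 1.0
        a = 1.0 - (1.0 - a) ** dt                    # opacity correction for the step size
        color += (1.0 - alpha) * a * np.asarray(rgb) * (0.3 + 0.7 * diffuse)
        alpha += (1.0 - alpha) * a
        if alpha > 0.995:                            # early ray termination
            break
    return color, alpha
```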

Journal ArticleDOI
TL;DR: This report reviews the existing compressed GPU volume rendering approaches, covering sampling grid layouts, compact representation models, compression techniques, GPU rendering architectures and fast decoding techniques.
Abstract: Great advancements in commodity graphics hardware have favoured graphics processing unit (GPU)-based volume rendering as the main adopted solution for interactive exploration of rectilinear scalar volumes on commodity platforms. Nevertheless, long data transfer times and GPU memory size limitations are often the main limiting factors, especially for massive, time-varying or multi-volume visualization, as well as for networked visualization on the emerging mobile devices. To address this issue, a variety of level-of-detail (LOD) data representations and compression techniques have been introduced. In order to improve capabilities and performance over the entire storage, distribution and rendering pipeline, the encoding/decoding process is typically highly asymmetric, and systems should ideally compress at data production time and decompress on demand at rendering time. Compression and LOD pre-computation does not have to adhere to real-time constraints and can be performed off-line for high-quality results. In contrast, adaptive real-time rendering from compressed representations requires fast, transient and spatially independent decompression. In this report, we review the existing compressed GPU volume rendering approaches, covering sampling grid layouts, compact representation models, compression techniques, GPU rendering architectures and fast decoding techniques.

Journal ArticleDOI
TL;DR: The main advantages of the method are high speed, robust handling of a large variety of routine clinical images, and simple and minimal user interaction.
Abstract: The diagnosis of certain spine pathologies, such as scoliosis, spondylolisthesis and vertebral fractures, is part of the daily clinical routine. Very frequently, magnetic resonance image data are used to diagnose these kinds of pathologies in order to avoid exposing patients to harmful radiation, such as X-rays. We present a method which detects and segments all acquired vertebral bodies, with minimal user intervention. This allows an automatic diagnosis to detect scoliosis, spondylolisthesis and crushed vertebrae. Our approach consists of three major steps. First, vertebral centres are detected using a Viola-Jones-like method; then the vertebrae are segmented in a parallel manner; and finally, geometric diagnostic features are deduced in order to diagnose the three diseases. Our method was evaluated on 26 lumbar datasets containing 234 reference vertebrae. Vertebra detection has 7.1% false negatives and 1.3% false positives. The average Dice coefficient to the manual reference is 79.3% and the mean distance error is 1.76 mm. No severe case of the three illnesses was missed, and false alarms occurred rarely: 0% for scoliosis, 3.9% for spondylolisthesis and 2.6% for vertebral fractures. The main advantages of our method are high speed, robust handling of a large variety of routine clinical images, and simple and minimal user interaction.
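
Of the reported measures, the Dice coefficient is simple enough to state in code; the sketch below assumes binary masks as NumPy arrays and is independent of the detection and segmentation pipeline itself.

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice overlap between a binary segmentation and a binary reference mask,
    the kind of measure quoted above (79.3% against the manual reference)."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0                       # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(seg, ref).sum() / denom
```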

Journal ArticleDOI
TL;DR: It is demonstrated that the proposed IISPH‐FLIP solver can simulate incompressible fluids with a quantifiable, imperceptible density deviation below 0.1%.
Abstract: We propose to use Implicit Incompressible Smoothed Particle Hydrodynamics (IISPH) for pressure projection and boundary handling in Fluid-Implicit-Particle (FLIP) solvers for the simulation of incompressible fluids. This novel combination addresses two issues of existing SPH and FLIP solvers, namely mass preservation in FLIP and efficiency and memory consumption in SPH. First, the SPH component enables the simulation of incompressible fluids with perfect mass preservation. Second, the FLIP component efficiently enriches the SPH component with detail that is comparable to a standard SPH simulation with the same number of particles, while improving the performance by a factor of 7 and significantly reducing the memory consumption. We demonstrate that the proposed IISPH-FLIP solver can simulate incompressible fluids with a quantifiable, imperceptible density deviation below 0.1%. We show large-scale scenarios with up to 160 million particles that have been processed on a single desktop PC using only 15 GB of memory. One- and two-way coupled solids are illustrated.
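
The FLIP side of the coupling can be sketched as the usual PIC/FLIP velocity blend applied after a pressure projection; `sample` is a placeholder for grid-to-particle interpolation, and the IISPH pressure solve that replaces the grid projection in the paper is not shown.

```python
import numpy as np

def flip_update(particles, grid_vel_old, grid_vel_new, sample, flip_ratio=0.95):
    """Update particle velocities from grid velocities before/after projection.

    particles: dict with "x" (positions, n x 3) and "v" (velocities, n x 3).
    sample(grid, positions) interpolates grid velocities at the given positions.
    """
    positions = particles["x"]
    v_old = sample(grid_vel_old, positions)
    v_new = sample(grid_vel_new, positions)
    pic = v_new                                        # fully interpolated (dissipative)
    flip = particles["v"] + (v_new - v_old)            # incremental update (noisy but lively)
    particles["v"] = flip_ratio * flip + (1.0 - flip_ratio) * pic
    return particles
```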

Journal ArticleDOI
TL;DR: This work details a method that leverages the two color heads of recent low‐end fused deposition modeling (FDM) 3D printers to produce continuous tone imagery, and shows that by applying small geometric offsets, tone can be varied without the need to switch color print heads within a single layer.
Abstract: In this work we detail a method that leverages the two color heads of recent low-end fused deposition modeling (FDM) 3D printers to produce continuous tone imagery. The challenge behind producing such two-tone imagery is how to finely interleave the two colors while minimizing the switching between print heads, making each color printed span as long and continuous as possible to avoid artifacts associated with printing short segments. The key insight behind our work is that by applying small geometric offsets, tone can be varied without the need to switch color print heads within a single layer. We can now effectively print two-tone texture mapped models capturing both geometric and color information in our output 3D prints.

Journal ArticleDOI
TL;DR: Fused Filament Fabrication is an additive manufacturing process by which a 3D object is created from plastic filament through a hot nozzle where it melts.
Abstract: Fused Filament Fabrication is an additive manufacturing process by which a 3D object is created from plastic filament. The filament is pushed through a hot nozzle where it melts. The nozzle deposits plastic layer after layer to create the final object. This process has been popularized by the RepRap community.

Journal ArticleDOI
TL;DR: A unique database of 150 BRDFs representing a wide range of materials, the majority exhibiting anisotropic behavior, is presented, and an adaptive sampling method based on analysis of the measured BRDFs is proposed, exhibiting better performance and stability than competing sparse sampling approaches.
Abstract: BRDFs are commonly used to represent given materials' appearance in computer graphics and related fields. Although BRDFs have been extensively measured, compressed and fitted by a variety of analytical models in the recent past, most research has been primarily focused on simplified isotropic BRDFs. In this paper, we present a unique database of 150 BRDFs representing a wide range of materials, the majority exhibiting anisotropic behavior. Since time-consuming BRDF measurement represents a major obstacle in the digital material appearance reproduction pipeline, we tested several approaches estimating a very limited set of samples capable of high-quality appearance reconstruction. Initially, we aligned all measured BRDFs according to the location of the anisotropic highlights. We then propose an adaptive sampling method based on analysis of the measured BRDFs. For each BRDF, a unique sampling pattern was computed, given a predefined count of samples. Further, template-based methods are introduced based on reusing the precomputed sampling patterns. This approach enables a more efficient measurement of unknown BRDFs while preserving the visual fidelity for the majority of tested materials. Our method exhibits better performance and stability than competing sparse sampling approaches, especially for higher numbers of samples.

Journal ArticleDOI
TL;DR: This paper explores an indirect top‐down approach, where instead of part geometry, part arrangements extracted from each model are compared, and shows that this leads to the detection of recurring arrangements of parts, which are otherwise difficult to discover in a direct unsupervised setting.
Abstract: Extracting semantically related parts across models remains challenging, especially without supervision. The common approach is to co-analyze a model collection, while assuming the existence of descriptive geometric features that can directly identify related parts. In the presence of large shape variations, common geometric features, however, are no longer sufficiently descriptive. In this paper, we explore an indirect top-down approach, where instead of part geometry, part arrangements extracted from each model are compared. The key observation is that while a direct comparison of part geometry can be ambiguous, part arrangements, being higher level structures, remain consistent, and hence can be used to discover latent commonalities among semantically related shapes. We show that our indirect analysis leads to the detection of recurring arrangements of parts, which are otherwise difficult to discover in a direct unsupervised setting. We evaluate our algorithm on ground truth datasets and report advantages over geometric similarity-based bottom-up co-segmentation algorithms.

Journal ArticleDOI
TL;DR: A novel quad layout algorithm that focuses on the embedding optimization, thereby complementing recent methods focusing on the structure optimization aspect and being suited for fully automatic workflows.
Abstract: Quad layouting, i.e. the partitioning of a surface into a coarse network of quadrilateral patches, is a fundamental step in application scenarios ranging from animation and simulation to reverse engineering and meshing. This process involves determining the layout's combinatorial structure as well as its geometric embedding in the surface. We present a novel quad layout algorithm that focuses on the embedding optimization, thereby complementing recent methods focusing on the structure optimization aspect. It takes as input a description of the target layout structure and computes a complete embedding in form of a parameterization globally optimized for isometry and, in particular, principal direction alignment. Besides being suited for fully automatic workflows, our method can also incorporate user constraints and support the tedious but common procedure of manual layouting.

Journal ArticleDOI
TL;DR: In this paper, a data-driven method for the real-time synthesis of believable steering behaviors for virtual crowds is presented, which interlinks the input examples into a structure called the perception-action graph PAG which can be used at run-time to efficiently synthesize believable virtual crowds.
Abstract: We present a data-driven method for the real-time synthesis of believable steering behaviours for virtual crowds. The proposed method interlinks the input examples into a structure we call the perception-action graph (PAG), which can be used at run-time to efficiently synthesize believable virtual crowds. A virtual character's state is encoded using a temporal representation, the Temporal Perception Pattern (TPP). The graph nodes store groups of similar TPPs, whereas the edges connecting the nodes store the action trajectories that were partially responsible for the transformation between the TPPs. The proposed method is tested on various scenarios using different input data and compared against a nearest-neighbours approach, which is commonly employed in other data-driven crowd simulation systems. The results show up to an order of magnitude speed-up with similar or better simulation quality.
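
A plausible run-time query against such a graph might look like the following; the node and edge dictionary layout is purely hypothetical and stands in for the paper's grouped TPP storage.

```python
import numpy as np

def synthesize_step(pag_nodes, pag_edges, current_tpp, rng=None):
    """Pick the next action for one agent: find the node whose stored TPP summary
    is closest to the agent's current TPP, then return an action trajectory from
    one of its outgoing edges. Node/edge fields are illustrative assumptions."""
    if rng is None:
        rng = np.random.default_rng()
    dists = [np.linalg.norm(current_tpp - node["tpp_mean"]) for node in pag_nodes]
    node_id = int(np.argmin(dists))
    out_edges = [e for e in pag_edges if e["src"] == node_id]
    if not out_edges:
        return None                       # dead end; a real system would fall back gracefully
    edge = out_edges[rng.integers(len(out_edges))]
    return edge["action_trajectory"]
```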

Journal ArticleDOI
TL;DR: This paper proposes a new approach for color transfer between two images that performs a white‐balance step on both images to remove color casts caused by different illuminations in the source and target image and performs a full gamut‐based mapping technique rather than processing each channel separately.
Abstract: This paper proposes a new approach for color transfer between two images. Our method is unique in its consideration of the scene illumination and the constraint that the mapped image must be within the color gamut of the target image. Specifically, our approach first performs a white-balance step on both images to remove color casts caused by different illuminations in the source and target image. We then align each image to share the same 'white axis' and perform a gradient preserving histogram matching technique along this axis to match the tone distribution between the two images. We show that this illuminant-aware strategy gives a better result than directly working with the original source and target image's luminance channel as done by many previous methods. Afterwards, our method performs a full gamut-based mapping technique rather than processing each channel separately. This guarantees that the colors of our transferred image lie within the target gamut. Our experimental results show that this combined illuminant-aware and gamut-based strategy produces more compelling results than previous methods. We detail our approach and demonstrate its effectiveness on a number of examples.
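
Two of the building blocks, a global white balance and histogram matching, are easy to sketch; note that the paper uses a gradient-preserving matching along the shared white axis and a gamut-based color mapping, which the simplified versions below do not reproduce.

```python
import numpy as np

def gray_world_white_balance(img):
    """Grey-world white balance for an RGB image in [0, 1]; a generic stand-in
    for the white-balance step, not the paper's specific estimator."""
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / np.maximum(means, 1e-6)), 0.0, 1.0)

def histogram_match(source, target):
    """Match the value distribution of one source channel to a target channel
    (plain histogram matching, without the paper's gradient-preserving term)."""
    s = source.ravel()
    t = target.ravel()
    s_vals, s_idx, s_counts = np.unique(s, return_inverse=True, return_counts=True)
    t_vals, t_counts = np.unique(t, return_counts=True)
    s_cdf = np.cumsum(s_counts) / s.size
    t_cdf = np.cumsum(t_counts) / t.size
    mapped = np.interp(s_cdf, t_cdf, t_vals)   # map source quantiles onto target values
    return mapped[s_idx].reshape(source.shape)
```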

Journal ArticleDOI
TL;DR: The proposed framework is used to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer.
Abstract: Style transfer aims to apply the style of an exemplar model to a target one, while retaining the target's structure. The main challenge in this process is to algorithmically distinguish style from structure, a high-level, potentially ill-posed cognitive task. Inspired by cognitive science research we recast style transfer in terms of shape analogies. In IQ testing, shape analogy queries present the subject with three shapes: source, target and exemplar, and ask them to select an output such that the transformation, or analogy, from the exemplar to the output is similar to that from the source to the target. The logical process involved in identifying the source-to-target analogies implicitly detects the structural differences between the source and target and can be used effectively to facilitate style transfer. Since the exemplar has a similar structure to the source, applying the analogy to the exemplar will provide the output we seek. The main technical challenge we address is to compute the source to target analogies, consistent with human logic. We observe that the typical analogies we look for consist of a small set of simple transformations, which when applied to the exemplar generate a continuous, seamless output model. To assemble a shape analogy, we compute an optimal set of source-to-target transformations, such that the assembled analogy best fits these criteria. The assembled analogy is then applied to the exemplar shape to produce the desired output model. We use the proposed framework to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer. We further show that our framework can be used to successfully complete partial scans with the help of a user provided structural template, coherently propagating scan style across the completed surfaces.

Journal ArticleDOI
TL;DR: A new framework for the visual analysis of crowd simulations is introduced, allowing potentially erroneous behaviors to be captured on a per-agent basis, either by automatically detecting outliers based on individual evaluation metrics or by accounting for multiple evaluation criteria in a principled fashion using Principal Component Analysis and the notion of Pareto Optimality.
Abstract: We present a novel approach for analyzing the quality of multi-agent crowd simulation algorithms. Our approach is data-driven, taking as input a set of user-defined metrics and reference training data, either synthetic or from video footage of real crowds. Given a simulation, we formulate the crowd analysis problem as an anomaly detection problem and exploit state-of-the-art outlier detection algorithms to address it. To that end, we introduce a new framework for the visual analysis of crowd simulations. Our framework allows us to capture potentially erroneous behaviors on a per-agent basis, either by automatically detecting outliers based on individual evaluation metrics or by accounting for multiple evaluation criteria in a principled fashion using Principal Component Analysis and the notion of Pareto Optimality. We discuss optimizations necessary to allow real-time performance on large datasets and demonstrate the applicability of our framework through the analysis of simulations created by several widely-used methods, including a simulation from a commercial game.
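
One of the ingredients, scoring agents by how poorly a low-dimensional PCA subspace reconstructs their per-agent metrics, can be sketched generically as below; the metric matrix layout is an assumption, and the paper combines such scores with further criteria via Pareto optimality.

```python
import numpy as np

def pca_outlier_scores(metrics, n_components=2):
    """Return one anomaly score per agent: the reconstruction error of its
    standardized metric vector in a truncated PCA subspace.

    metrics: array of shape (n_agents, n_metrics).
    """
    X = metrics - metrics.mean(axis=0)
    X = X / np.maximum(X.std(axis=0), 1e-9)          # standardize each metric
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components]                        # principal directions
    recon = (X @ basis.T) @ basis                    # project and reconstruct
    return np.linalg.norm(X - recon, axis=1)         # large error = potential outlier
```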

Journal ArticleDOI
TL;DR: This work uses interactively‐defined sparse pose correspondences to learn a mapping between arbitrary 3D point source sequences and mesh target sequences and puppet the target character in real time, which provides new ways to control characters for real‐time animation.
Abstract: It is now possible to capture the 3D motion of the human body on consumer hardware and to puppet in real time skeleton-based virtual characters. However, many characters do not have humanoid skeletons. Characters such as spiders and caterpillars do not have boned skeletons at all, and these characters have very different shapes and motions. In general, character control under arbitrary shape and motion transformations is unsolved - how might these motions be mapped? We control characters with a method which avoids the rigging-skinning pipeline - source and target characters do not have skeletons or rigs. We use interactively-defined sparse pose correspondences to learn a mapping between arbitrary 3D point source sequences and mesh target sequences. Then, we puppet the target character in real time. We demonstrate the versatility of our method through results on diverse virtual characters with different input motion controllers. Our method provides a fast, flexible, and intuitive interface for arbitrary motion mapping which provides new ways to control characters for real-time animation.

Journal ArticleDOI
TL;DR: This approach is the first that processes ray entries directly and does not require depth reconstruction or matching of image features, so arbitrarily complex scenes can be captured while preserving correct occlusion boundaries, anisotropic reflections, refractions, and other light effects that go beyond diffuse reflections of Lambertian surfaces.
Abstract: We present a novel approach to recording and computing panorama light fields. In contrast to previous methods that estimate panorama light fields from focal stacks or naive multi-perspective image stitching, our approach is the first that processes ray entries directly and does not require depth reconstruction or matching of image features. Arbitrarily complex scenes can therefore be captured while preserving correct occlusion boundaries, anisotropic reflections, refractions, and other light effects that go beyond diffuse reflections of Lambertian surfaces.

Journal ArticleDOI
TL;DR: This paper proposes to exploit the self-similarity of the underlying shapes for compressing point cloud surfaces which can contain millions of points at a very high precision, and demonstrates the validity of this approach on several point clouds from fine-arts and mechanical objects, as well as an urban scene.
Abstract: Most surfaces, be they from a fine-art artifact or a mechanical object, are characterized by a strong self-similarity. This property finds its source in the natural structures of objects but also in the fabrication processes: regularity of the sculpting technique, or machine tool. In this paper, we propose to exploit the self-similarity of the underlying shapes for compressing point cloud surfaces which can contain millions of points at a very high precision. Our approach locally resamples the point cloud in order to highlight the self-similarity of the shape, while remaining consistent with the original shape and the scanner precision. It then uses this self-similarity to create an ad hoc dictionary on which the local neighborhoods will be sparsely represented, thus allowing for a light-weight representation of the total surface. We demonstrate the validity of our approach on several point clouds from fine-arts and mechanical objects, as well as an urban scene. In addition, we show that our approach also achieves a filtering of noise whose magnitude is smaller than the scanner precision.
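
The sparse-representation step can be pictured with a generic matching-pursuit routine over a dictionary of local neighbourhoods; the dictionary construction from the shape's own resampled patches, which is the heart of the method, is assumed given here.

```python
import numpy as np

def matching_pursuit(patch, dictionary, n_atoms=8):
    """Greedily approximate a flattened local neighbourhood as a sparse
    combination of dictionary atoms.

    patch: vector of length d; dictionary: (d, K) matrix with unit-norm columns.
    Returns the sparse coefficients and the remaining residual.
    """
    residual = patch.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        scores = dictionary.T @ residual            # correlation with every atom
        k = int(np.argmax(np.abs(scores)))          # best-matching atom
        coeffs[k] += scores[k]
        residual -= scores[k] * dictionary[:, k]
    return coeffs, residual
```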

Journal ArticleDOI
TL;DR: An algorithmic approach for automatically laying out game levels from user‐specified blocks that uses configuration spaces defining feasible relative positions of building blocks within a layout and a graph‐decomposition based layout strategy that leverages graph connectivity to speed up convergence and avoid local minima is proposed.
Abstract: The design of video game environments, or levels, aims to control gameplay by steering the player through a sequence of designer-controlled steps, while simultaneously providing a visually engaging experience. Traditionally these levels are painstakingly designed by hand, often from pre-existing building blocks, or space templates. In this paper, we propose an algorithmic approach for automatically laying out game levels from user-specified blocks. Our method allows designers to retain control of the gameplay flow via user-specified level connectivity graphs, while relieving them from the tedious task of manually assembling the building blocks into a valid, plausible layout. Our method produces sequences of diverse layouts for the same input connectivity, allowing for repeated replay of a given level within a visually different, new environment. We support complex graph connectivities and various building block shapes, and are able to compute complex layouts in seconds. The two key components of our algorithm are the use of configuration spaces defining feasible relative positions of building blocks within a layout and a graph-decomposition based layout strategy that leverages graph connectivity to speed up convergence and avoid local minima. Together these two tools quickly steer the solution toward feasible layouts. We demonstrate our method on a variety of real-life inputs, and generate appealing layouts conforming to user specifications.

Journal ArticleDOI
TL;DR: This paper addresses the problem of representing dynamic 3D meshes in a compact way, so that they can be stored and transmitted efficiently, and outperforms the current state of the art in terms of low data rate at a given perceived distortion, as measured by the STED and KG error metrics.
Abstract: This paper addresses the problem of representing dynamic 3D meshes in a compact way, so that they can be stored and transmitted efficiently. We focus on sequences of triangle meshes with shared connectivity, avoiding the necessity of having a skinning structure. Our method first computes an average mesh of the whole sequence in edge shape space. A discrete geometric Laplacian of this average surface is then used to encode the coefficients that describe the trajectories of the mesh vertices. Optionally, a novel spatio-temporal predictor may be applied to the trajectories to further improve the compression rate. We demonstrate that our approach outperforms the current state of the art in terms of low data rate at a given perceived distortion, as measured by the STED and KG error metrics.
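
The Laplacian encoding/decoding idea admits a compact dense sketch; a real codec would use sparse matrices, quantization and entropy coding, and the paper's spatio-temporal predictor is omitted. The anchor handling below is a generic least-squares choice, not necessarily the paper's.

```python
import numpy as np

def laplacian_encode(L, trajectories):
    """Differential (delta) coordinates of per-vertex trajectories: delta = L @ T,
    where L is the discrete Laplacian of the average mesh and each row of T
    stacks one vertex's trajectory."""
    return L @ trajectories

def laplacian_decode(L, deltas, anchors, anchor_values, w=1.0):
    """Least-squares reconstruction of the trajectories from their deltas,
    with a few anchored vertices to pin down the rank-deficient system."""
    n = L.shape[0]
    A_anchor = np.zeros((len(anchors), n))
    A_anchor[np.arange(len(anchors)), anchors] = w
    A = np.vstack([L, A_anchor])
    b = np.vstack([deltas, w * np.asarray(anchor_values)])
    T, *_ = np.linalg.lstsq(A, b, rcond=None)
    return T
```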

Journal ArticleDOI
Chunqiang Yuan, Xiaohui Liang, Shiyu Hao, Yue Qi, Qinping Zhao
TL;DR: The proposed calculation method for estimating the shape of a cumulus cloud from a single image, suitable for flight simulations and games, can generate realistic cumulus clouds that are similar to those found in the images in terms of shape distribution.
Abstract: Clouds are important components of fascinating natural images. However, extracting cloud shapes from images remains a challenging task. This paper presents a calculation method for estimating the shape of a cumulus cloud from a single image, suitable for flight simulations and games. The shape of the cloud is assumed to be symmetric. Based on this assumption, the intensities of pixels are correlated with the geometry of a cloud's surface via a simplified single-scattering model. A propagation scheme is designed to derive the surface progressively, and mesh editing techniques are used to improve the surface. Finally, the cloud is represented by a particle system. The results show that the proposed method can generate realistic cumulus clouds that are similar to those found in the images in terms of shape distribution.

Journal ArticleDOI
TL;DR: The main idea of the paper is that a city cannot be meaningfully simulated without taking its neighbourhood into account, and a simple traffic simulation is used to grow new major roads and to influence the locations of minor road growth.
Abstract: We present a model for growing procedural road networks in and close to cities. The main idea of our paper is that a city cannot be meaningfully simulated without taking its neighbourhood into account. A simple traffic simulation that considers this neighbourhood is then used to grow new major roads and to influence the locations of minor road growth. Waterways are introduced and used to help position the city nuclei on the map. The resulting cities are formed by allowing several smaller settlements to grow together and form a rich road structure, much like in the real world, and require only minimal per-city input, allowing for batch generation.