
Showing papers in "Computer Graphics Forum in 2015"


Journal ArticleDOI
TL;DR: This work presents a novel reconstruction approach based on retrieving objects from a 3D shape database while scanning an environment in real‐time, and is able to retrieve objects in cluttered and noisy scenes even when the database contains only similar models, but no exact matches.
Abstract: In recent years, real-time 3D scanning technology has developed significantly and is now able to capture large environments with considerable accuracy. Unfortunately, the reconstructed geometry still suffers from incompleteness, due to occlusions and lack of view coverage, resulting in unsatisfactory reconstructions. In order to overcome these fundamental physical limitations, we present a novel reconstruction approach based on retrieving objects from a 3D shape database while scanning an environment in real-time. With this approach, we are able to replace scanned RGB-D data with complete, hand-modeled objects from shape databases. We align and scale retrieved models to the input data to obtain a high-quality virtual representation of the real-world environment that is quite faithful to the original geometry. In contrast to previous methods, we are able to retrieve objects in cluttered and noisy scenes even when the database contains only similar models, but no exact matches. In addition, we put a strong focus on object retrieval in an interactive scanning context - our algorithm runs directly on 3D scanning data structures, and is able to query databases of thousands of models in an online fashion during scanning.

202 citations


Journal ArticleDOI
TL;DR: A collection of regressors is jointly learned, which collectively yields the smallest super-resolving error for all training data; the method is conceptually simple and computationally efficient, yet very effective.
Abstract: Learning regressors from low-resolution patches to high-resolution patches has shown promising results for image super-resolution. We observe that some regressors are better at dealing with certain cases, and others with different cases. In this paper, we jointly learn a collection of regressors, which collectively yield the smallest super-resolving error for all training data. After training, each training sample is associated with a label to indicate its 'best' regressor, the one yielding the smallest error. During testing, our method relies on the concept of 'adaptive selection' to select the most appropriate regressor for each input patch. We assume that similar patches can be super-resolved by the same regressor and use a fast, approximate kNN approach to transfer the labels of training patches to test patches. The method is conceptually simple and computationally efficient, yet very effective. Experiments on four datasets show that our method outperforms competing methods.

177 citations
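
For readers who want to see the selection step in code, below is a minimal sketch of the adaptive-selection idea described above, not the authors' implementation. It assumes pre-trained linear regressors stored as matrices and training patches that already carry their 'best regressor' labels, and it uses an exact scikit-learn nearest-neighbour query in place of the paper's fast approximate kNN; all names are illustrative.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def super_resolve_patches(test_patches, train_patches, train_labels,
                              regressors, k=5):
        """Hypothetical sketch: choose a regressor per test patch by kNN label
        transfer from labelled training patches, then apply it.

        test_patches  : (M, d_lo) low-resolution patch features
        train_patches : (N, d_lo) training patch features
        train_labels  : (N,) index of the 'best' regressor per training patch
        regressors    : list of (d_lo, d_hi) linear regressor matrices
        """
        nn = NearestNeighbors(n_neighbors=k).fit(train_patches)
        _, idx = nn.kneighbors(test_patches)          # (M, k) neighbour indices

        out = np.empty((len(test_patches), regressors[0].shape[1]))
        for i, neighbours in enumerate(idx):
            label = np.bincount(train_labels[neighbours]).argmax()  # majority vote
            out[i] = test_patches[i] @ regressors[label]            # apply chosen regressor
        return out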


Journal ArticleDOI
TL;DR: “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis are distinguished from “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process.
Abstract: Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between "a priori" methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and "a posteriori" methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state-of-the-art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real-world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.

167 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel mesh normal filtering framework based on the joint bilateral filter, with applications in mesh denoising, and computes the guidance normal on a face using a neighboring patch with the most consistent normal orientations, which provides a reliable estimation of the true normal even with a high level of noise.
Abstract: The joint bilateral filter is a variant of the standard bilateral filter, where the range kernel is evaluated using a guidance signal instead of the original signal. It has been successfully applied to various image processing problems, where it provides more flexibility than the standard bilateral filter to achieve high quality results. On the other hand, its success is heavily dependent on the guidance signal, which should ideally provide a robust estimation about the features of the output signal. Such a guidance signal is not always easy to construct. In this paper, we propose a novel mesh normal filtering framework based on the joint bilateral filter, with applications in mesh denoising. Our framework is designed as a two-stage process: first, we apply joint bilateral filtering to the face normals, using a properly constructed normal field as the guidance; afterwards, the vertex positions are updated according to the filtered face normals. We compute the guidance normal on a face using a neighboring patch with the most consistent normal orientations, which provides a reliable estimation of the true normal even with a high level of noise. The effectiveness of our approach is validated by extensive experimental results.

146 citations
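
The two-stage pipeline can be illustrated with a simplified sketch of its first stage (filtering face normals with a guidance field). The guidance here is a plain neighbourhood average rather than the paper's patch-consistency construction, the vertex-update stage is omitted, and the data layout (per-face normals, centroids, adjacency lists that include the face itself) is assumed.

    import numpy as np

    def _normalize(v):
        return v / (np.linalg.norm(v) + 1e-12)

    def filter_face_normals(normals, centroids, adjacency,
                            sigma_s=1.0, sigma_r=0.35, iters=3):
        """Simplified joint bilateral filtering of face normals (sketch).

        normals   : (F, 3) unit face normals
        centroids : (F, 3) face centroids
        adjacency : adjacency[i] = indices of faces near face i (including i)
        """
        n = normals.copy()
        for _ in range(iters):
            # crude guidance field: neighbourhood average of the current normals
            # (the paper instead selects the most consistent neighbouring patch)
            guidance = np.array([_normalize(n[adjacency[i]].mean(axis=0))
                                 for i in range(len(n))])
            out = np.empty_like(n)
            for i in range(len(n)):
                nbrs = adjacency[i]
                ws = np.exp(-np.sum((centroids[nbrs] - centroids[i]) ** 2, axis=1)
                            / (2 * sigma_s ** 2))        # spatial kernel
                wr = np.exp(-np.sum((guidance[nbrs] - guidance[i]) ** 2, axis=1)
                            / (2 * sigma_r ** 2))        # range kernel on the guidance
                w = ws * wr
                out[i] = _normalize((w[:, None] * n[nbrs]).sum(axis=0))
            n = out
        return n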


Journal ArticleDOI
TL;DR: This paper builds on high‐quality monocular capture of 3D facial performance, lighting and albedo of the dubbing and target actors, and uses audio analysis in combination with a space‐time retrieval method to synthesize a new photo‐realistically rendered and highly detailed 3D shape model of the mouth region to replace the target performance.
Abstract: In many countries, foreign movies and TV productions are dubbed, i.e., the original voice of an actor is replaced with a translation that is spoken by a dubbing actor in the country's own language. Dubbing is a complex process that requires specific translations and accurately timed recitations such that the new audio at least coarsely adheres to the mouth motion in the video. However, since the sequence of phonemes and visemes in the original and the dubbing language are different, the video-to-audio match is never perfect, which is a major source of visual discomfort. In this paper, we propose a system to alter the mouth motion of an actor in a video, so that it matches the new audio track. Our paper builds on high-quality monocular capture of 3D facial performance, lighting and albedo of the dubbing and target actors, and uses audio analysis in combination with a space-time retrieval method to synthesize a new photo-realistically rendered and highly detailed 3D shape model of the mouth region to replace the target performance. We demonstrate plausible visual quality of our results compared to footage that has been professionally dubbed in the traditional way, both qualitatively and through a user study.

145 citations


Journal ArticleDOI
TL;DR: This review article provides an overview of the efforts made to tackle this demanding task and discusses how these findings can be synthesized in computer graphics and utilized in the domains of Human-Robot Interaction and Human-Computer Interaction, allowing humans to interact with virtual agents and other artificial entities.
Abstract: A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: 'The face is the portrait of the mind; the eyes, its informers'. This presents a significant challenge for Computer Graphics researchers who generate artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This review article provides an overview of the efforts made on tackling this demanding task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We begin with a discussion of the movement of the eyeballs, eyelids and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Furthermore, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation. We discuss how these findings are synthesized in computer graphics and can be utilized in the domains of Human-Robot Interaction and Human-Computer Interaction for allowing humans to interact with virtual agents and other artificial entities. We conclude with a summary of guidelines for animating the eye and head from the perspective of a character animator.

137 citations


Journal ArticleDOI
TL;DR: An error measure with increased sensitivity to stitching artifacts in regions with pronounced structure is introduced and the basic concept of local warping for parallax removal is extended, forming the first system for spatiotemporally stable panoramic video stitching from unstructured camera array input.
Abstract: We describe an algorithm for generating panoramic video from unstructured camera arrays. Artifact-free panorama stitching is impeded by parallax between input views. Common strategies such as multi-level blending or minimum energy seams produce seamless results on quasi-static input. However, on video input these approaches introduce noticeable visual artifacts due to lack of global temporal and spatial coherence. In this paper we extend the basic concept of local warping for parallax removal. Firstly, we introduce an error measure with increased sensitivity to stitching artifacts in regions with pronounced structure. Using this measure, our method efficiently finds an optimal ordering of pair-wise warps for robust stitching with minimal parallax artifacts. Weighted extrapolation of warps in non-overlap regions ensures temporal stability, while at the same time avoiding visual discontinuities around transitions between views. Remaining global deformation introduced by the warps is spread over the entire panorama domain using constrained relaxation, while staying as close as possible to the original input views. In combination, these contributions form the first system for spatiotemporally stable panoramic video stitching from unstructured camera array input.

130 citations


Journal ArticleDOI
TL;DR: A taxonomy of deghosting algorithms is proposed which can be used to group existing and future algorithms into meaningful classes, and the results of a subjective experiment aimed at evaluating various state-of-the-art deghosting algorithms are shared.
Abstract: Obtaining a high quality high dynamic range (HDR) image in the presence of camera and object movement has been a long-standing challenge. Many methods, known as HDR deghosting algorithms, have been developed over the past ten years to undertake this challenge. Each of these algorithms approaches the deghosting problem from a different perspective, providing solutions with different degrees of complexity, solutions that range from rudimentary heuristics to advanced computer vision techniques. The proposed solutions generally differ in two ways: (1) how to detect ghost regions and (2) what to do to eliminate ghosts. Some algorithms choose to completely discard moving objects giving rise to HDR images which only contain the static regions. Some other algorithms try to find the best image to use for each dynamic region. Yet others try to register moving objects from different images in the spirit of maximizing dynamic range in dynamic regions. Furthermore, each algorithm may introduce different types of artifacts as they aim to eliminate ghosts. These artifacts may come in the form of noise, broken objects, under- and over-exposed regions, and residual ghosting. Given the high volume of studies conducted in this field over the recent years, a comprehensive survey of the state of the art is required. Thus, the first goal of this paper is to provide this survey. Secondly, the large number of algorithms brings about the need to classify them. Thus the second goal of this paper is to propose a taxonomy of deghosting algorithms which can be used to group existing and future algorithms into meaningful classes. Thirdly, the existence of a large number of algorithms brings about the need to evaluate their effectiveness, as each new algorithm claims to outperform its precedents. Therefore, the last goal of this paper is to share the results of a subjective experiment which aims to evaluate various state-of-the-art deghosting algorithms.

115 citations


Journal ArticleDOI
TL;DR: This work presents a new general‐purpose and fully automatic self‐tuning non‐parametric texture synthesis method that extends Texture Optimization by introducing several key improvements that result in superior synthesis ability.
Abstract: The goal of example-based texture synthesis methods is to generate arbitrarily large textures from limited exemplars in order to fit the exact dimensions and resolution required for a specific modeling task. The challenge is to faithfully capture all of the visual characteristics of the exemplar texture, without introducing obvious repetitions or unnatural looking visual elements. While existing non-parametric synthesis methods have made remarkable progress towards this goal, most such methods have been demonstrated only on relatively low-resolution exemplars. Real-world high resolution textures often contain texture details at multiple scales, which these methods have difficulty reproducing faithfully. In this work, we present a new general-purpose and fully automatic self-tuning non-parametric texture synthesis method that extends Texture Optimization by introducing several key improvements that result in superior synthesis ability. Our method is able to self-tune its various parameters and weights and focuses on addressing three challenging aspects of texture synthesis: (i) irregular large scale structures are faithfully reproduced through the use of automatically generated and weighted guidance channels; (ii) repetition and smoothing of texture patches is avoided by new spatial uniformity constraints; (iii) a smart initialization strategy is used in order to improve the synthesis of regular and near-regular textures, without affecting textures that do not exhibit regularities. We demonstrate the versatility and robustness of our completely automatic approach on a variety of challenging high-resolution texture exemplars.

106 citations


Journal ArticleDOI
Cem Yuksel1
TL;DR: A greedy sample elimination algorithm is introduced that assigns a weight to each sample in a given set and eliminates the ones with greater weights in order to pick a subset of a desired size with the Poisson disk property, without having to specify a Poisson disk radius.
Abstract: In this paper we describe sample elimination for generating Poisson disk sample sets with a desired size. We introduce a greedy sample elimination algorithm that assigns a weight to each sample in a given set and eliminates the ones with greater weights in order to pick a subset of a desired size with the Poisson disk property without having to specify a Poisson disk radius. This new algorithm is simple, computationally efficient, and it can work in any sampling domain, producing sample sets with more pronounced blue noise characteristics than dart throwing. Most importantly, it allows unbiased progressive adaptive sampling and it scales better to high dimensions than previous methods. However, it cannot guarantee maximal coverage. We provide a statistical analysis of our algorithm in 2D and higher dimensions as well as results from our tests with different example applications.

103 citations
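
The greedy loop itself is short. The sketch below is a simplified reading of the abstract rather than the paper's exact weighting scheme: each sample is weighted by how close its surviving neighbours are, the currently heaviest sample is removed, and affected weights are updated lazily via a max-heap; the radius r and exponent alpha are assumed parameters rather than values derived from the sampling domain.

    import heapq
    import numpy as np
    from scipy.spatial import cKDTree

    def eliminate_samples(points, target_count, r=0.1, alpha=8):
        """Greedy sample elimination (simplified sketch, hypothetical weighting).

        points       : (N, d) candidate samples
        target_count : desired number of samples to keep
        r            : neighbourhood radius for the weights (assumed, not derived
                       from the domain size as in the paper)
        """
        tree = cKDTree(points)
        neighbours = tree.query_ball_point(points, r)

        def weight(i):
            w = 0.0
            for j in neighbours[i]:
                if j != i and alive[j]:
                    d = np.linalg.norm(points[i] - points[j])
                    w += (1.0 - d / r) ** alpha       # closer neighbours weigh more
            return w

        alive = np.ones(len(points), dtype=bool)
        heap = [(-weight(i), i) for i in range(len(points))]
        heapq.heapify(heap)

        remaining = len(points)
        while remaining > target_count:
            neg_w, i = heapq.heappop(heap)
            if not alive[i]:
                continue                              # sample already eliminated
            w_now = weight(i)
            if -neg_w > w_now + 1e-12:                # stale entry: push fresh weight
                heapq.heappush(heap, (-w_now, i))
                continue
            alive[i] = False                          # eliminate the heaviest sample
            remaining -= 1
            for j in neighbours[i]:                   # its neighbours just got lighter
                if alive[j]:
                    heapq.heappush(heap, (-weight(j), j))
        return points[alive]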


Journal ArticleDOI
TL;DR: This survey proposes a new categorization of GPU‐based large‐scale volume visualization techniques based on the notions of actual output‐resolution visibility and the current working set of volume bricks—the current subset of data that is minimally required to produce an output image of the desired display resolution.
Abstract: This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera- and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. 'output-sensitive' algorithms and system designs. This leads to recent output-sensitive approaches that are 'ray-guided', 'visualization-driven' or 'display-aware'. In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks (the current subset of data that is minimally required to produce an output image of the desired display resolution). Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context: the notion of address translation. For our purposes here, we view parallel distributed visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey.

Journal ArticleDOI
TL;DR: This work presents an adaptive slicing scheme for reducing the manufacturing time for 3D printing systems and develops a saliency‐based segmentation scheme to partition an object into subparts and then optimize the slicing of each subpart separately.
Abstract: We present an adaptive slicing scheme for reducing the manufacturing time for 3D printing systems. Based on a new saliency-based metric, our method optimizes the thicknesses of slicing layers to save printing time and preserve the visual quality of the printing results. We formulate the problem as a constrained ℓ0 optimization and compute the slicing result via a two-step optimization scheme. To further reduce printing time, we develop a saliency-based segmentation scheme to partition an object into subparts and then optimize the slicing of each subpart separately. We validate our method with a large set of 3D shapes ranging from CAD models to scanned objects. Results show that our method saves printing time by 30-40% and generates 3D objects that are visually similar to the ones printed with the finest resolution possible.
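
A drastically simplified version of the idea, purely for illustration: a greedy rule rather than the paper's constrained ℓ0 optimization, with the per-height saliency estimate assumed to be given. Salient regions receive the finest layers, while flat or low-saliency regions are printed with thicker ones.

    def adaptive_slice(height, saliency_at, thicknesses, budget):
        """Greedy adaptive slicing sketch (hypothetical, not the paper's optimizer).

        height      : total model height
        saliency_at : callable z -> saliency estimate of the slab starting at z
        thicknesses : allowed layer thicknesses, e.g. [0.3, 0.2, 0.1]
        budget      : saliency-times-thickness threshold for accepting a thick layer
        """
        layers, z = [], 0.0
        finest = min(thicknesses)
        while z < height:
            t = finest
            for cand in sorted(thicknesses, reverse=True):   # try thick layers first
                if saliency_at(z) * cand <= budget:          # low-saliency slab: accept
                    t = cand
                    break
            t = min(t, height - z)                           # do not overshoot the top
            layers.append((z, t))
            z += t
        return layers                                        # list of (start_z, thickness)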

Journal ArticleDOI
TL;DR: A very fast segmentation technique that still achieves very high quality results is demonstrated; it replaces the time-consuming iterative refinement of global colour models in the traditional GrabCut formulation by a densely connected CRF.
Abstract: Figure-ground segmentation from bounding box input, provided either automatically or manually, has been extremely popular in the last decade and has influenced various applications. A lot of research has focused on high-quality segmentation, using complex formulations which often lead to slow techniques and hamper practical usage. In this paper we demonstrate a very fast segmentation technique which still achieves very high quality results. We propose to replace the time-consuming iterative refinement of global colour models in the traditional GrabCut formulation by a densely connected CRF. To motivate this decision, we show that a dense CRF implicitly models unnormalized global colour models for foreground and background. This relationship provides an insightful analysis bridging the dense CRF and the GrabCut functional. We extensively evaluate our algorithm on two well-known benchmarks. Our experimental results demonstrate that the proposed algorithm achieves an order of magnitude (10×) speed-up with respect to the closest competitor, and at the same time achieves considerably higher accuracy.
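
To make the 'replace iterative colour-model refinement with a dense CRF' idea concrete, here is a hedged sketch using the open-source pydensecrf bindings. It assumes foreground/background probabilities have already been obtained from the bounding-box initialization (e.g. a single colour-model evaluation), and the kernel parameters are illustrative guesses rather than the values used in the paper.

    import numpy as np
    import pydensecrf.densecrf as dcrf
    from pydensecrf.utils import unary_from_softmax

    def dense_crf_segment(image, fg_prob, n_iters=5):
        """Binary figure-ground segmentation with a fully connected CRF (sketch).

        image   : (H, W, 3) uint8 RGB image
        fg_prob : (H, W) float foreground probability from the initial colour model
        """
        h, w = fg_prob.shape
        probs = np.stack([1.0 - fg_prob, fg_prob]).astype(np.float32)  # (2, H, W)

        d = dcrf.DenseCRF2D(w, h, 2)
        d.setUnaryEnergy(unary_from_softmax(probs))          # -log(prob) unaries

        # smoothness kernel plus appearance (bilateral) kernel over all pixel pairs;
        # the parameter values below are illustrative assumptions
        d.addPairwiseGaussian(sxy=3, compat=3)
        d.addPairwiseBilateral(sxy=60, srgb=10,
                               rgbim=np.ascontiguousarray(image), compat=10)

        q = np.array(d.inference(n_iters))                   # mean-field inference
        return np.argmax(q, axis=0).reshape(h, w)            # 1 = foreground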

Journal ArticleDOI
TL;DR: This survey reviews, classifies and analyzes algorithms for computing and simplifying Morse complexes in the context of such applications, with an emphasis on discrete Morse theory and on algorithms based on it.
Abstract: Morse theory offers a natural and mathematically-sound tool for shape analysis and understanding. It allows studying the behavior of a scalar function defined on a manifold. Starting from a Morse function, we can decompose the domain of the function into meaningful regions associated with the critical points of the function. Such decompositions, called Morse complexes, provide a segmentation of a shape and are extensively used in terrain modeling and in scientific visualization. Discrete Morse theory, a combinatorial counterpart of smooth Morse theory defined over cell complexes, provides an excellent basis for computing Morse complexes in a robust and efficient way. Moreover, since a discrete Morse complex computed over a given complex has the same homology as the original one, but fewer cells, discrete Morse theory is a fundamental tool for efficiently detecting holes in shapes through homology and persistent homology. In this survey, we review, classify and analyze algorithms for computing and simplifying Morse complexes in the context of such applications with an emphasis on discrete Morse theory and on algorithms based on it.

Journal ArticleDOI
TL;DR: The optimized ordering of vertices and selection of colours in combination with interactive highlighting techniques increases the traceability of communities along the time axis and allows users to investigate the community structure together with the underlying dynamic graph.
Abstract: The community structure of graphs is an important feature that gives insight into the high-level organization of objects within the graph. In real-world systems, the graph topology is oftentimes not static but changes over time and hence, also the community structure changes. Previous timeline-based approaches either visualize the dynamic graph or the dynamic community structure. In contrast, our approach combines both in a single image and therefore allows users to investigate the community structure together with the underlying dynamic graph. Our optimized ordering of vertices and selection of colours in combination with interactive highlighting techniques increases the traceability of communities along the time axis. Users can identify visual signatures, estimate the reliability of the derived community structure and investigate whether community evolution interacts with changes in the graph topology. The utility of our approach is demonstrated in two application examples.

Journal ArticleDOI
TL;DR: The multi-view depth image representation is adopted and the Multi-View Deep Extreme Learning Machine (MVD-ELM) is proposed to achieve fast and high-quality projective feature learning for 3D shapes, leading to more accurate 3D feature learning.
Abstract: Feature learning for 3D shapes is challenging due to the lack of a natural parameterization for 3D surface models. We adopt the multi-view depth image representation and propose the Multi-View Deep Extreme Learning Machine (MVD-ELM) to achieve fast and high-quality projective feature learning for 3D shapes. In contrast to existing multi-view learning approaches, our method ensures that the feature maps learned for different views are mutually dependent via shared weights, and in each layer their unprojections together form a valid 3D reconstruction of the input 3D shape through the use of normalized convolution kernels. This leads to more accurate 3D feature learning, as shown by the encouraging results in several applications. Moreover, the 3D reconstruction property enables clear visualization of the learned features, which further demonstrates the meaningfulness of our feature learning.

Journal ArticleDOI
TL;DR: This report discusses current methods in motion capturing hands, data‐driven and physics‐based algorithms to synthesize their motions, and techniques to make the appearance of the hand model surface more realistic, and describes emerging trends and applications for virtual hand animation.
Abstract: The human hand is a complex biological system able to perform numerous tasks with impressive accuracy and dexterity. Gestures furthermore play an important role in our daily interactions, and humans are particularly skilled at perceiving and interpreting detailed signals in communications. Creating believable hand motions for virtual characters is an important and challenging task. Many new methods have been proposed in the Computer Graphics community in recent years, and significant progress has been made towards creating convincing, detailed hand and finger motions. This state of the art report presents a review of the research in the area of hand and finger modeling and animation. Starting with the biological structure of the hand and its implications for how the hand moves, we discuss current methods in motion capturing hands, data-driven and physics-based algorithms to synthesize their motions, and techniques to make the appearance of the hand model surface more realistic. We then focus on areas in which detailed hand motions are crucial such as manipulation and communication. Our report concludes by describing emerging trends and applications for virtual hand animation.

Journal ArticleDOI
TL;DR: This report presents a coherent summary of the state of the art in virtual cutting of deformable bodies, focusing on the distinct geometrical and topological representations of the deformable body, as well as the specific numerical discretizations of the governing equations of motion.
Abstract: Virtual cutting of deformable bodies has been an important and active research topic in physically based modelling and simulation for more than a decade. A particular challenge in virtual cutting is the robust and efficient incorporation of cuts into an accurate computational model that is used for the simulation of the deformable body. This report presents a coherent summary of the state of the art in virtual cutting of deformable bodies, focusing on the distinct geometrical and topological representations of the deformable body, as well as the specific numerical discretizations of the governing equations of motion. In particular, we discuss virtual cutting based on tetrahedral, hexahedral and polyhedral meshes, in combination with standard, polyhedral, composite and extended finite element discretizations. A separate section is devoted to meshfree methods. Furthermore, we discuss cutting-related research problems such as collision detection and haptic rendering in the context of interactive cutting scenarios. The report is complemented with an application study to assess the performance of virtual cutting simulators.

Journal ArticleDOI
TL;DR: This paper presents an up‐to‐date and comprehensive review of the state of the art of non‐immersive interaction techniques for Navigation, Selection & Manipulation, and System Control, including a basic introduction to the topic, the challenges and an examination of a number of popular approaches.
Abstract: Various interaction techniques have been developed for interactive 3D environments. This paper presents an up-to-date and comprehensive review of the state of the art of non-immersive interaction techniques for Navigation, Selection & Manipulation, and System Control, including a basic introduction to the topic, the challenges and an examination of a number of popular approaches. We also introduce the 3D Interaction Testbed (3DIT), firstly to allow a 'hands-on' understanding of 3D interaction principles, and secondly to create an open platform for defining evaluation methods, stimuli and representative tasks akin to those found in other disciplines of science. We hope that this survey can aid both researchers and developers of interactive 3D applications in gaining a clearer overview of the topic, and in particular can be useful for practitioners and researchers who are new to the field of interactive 3D graphics.

Journal ArticleDOI
TL;DR: This paper deals with the problem of computing the distance to a surface and considers several distance function approximation methods which are based on solving partial differential equations (PDEs) and finding solutions to variational problems, including Poisson‐like equations and generalized double‐layer potentials.
Abstract: In this paper, we deal with the problem of computing the distance to a surface (a curve in the two-dimensional case) and consider several distance function approximation methods based on solving partial differential equations (PDEs) and finding solutions to variational problems. In particular, we deal with distance function estimation methods related to Poisson-like equations and generalized double-layer potentials. Our numerical experiments are backed by novel theoretical results and demonstrate the efficiency of the considered PDE-based distance function approximations.
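
As a concrete example of the Poisson-like approach (under the assumption that this particular normalization is among the variants studied; it is a standard one in the wall-distance literature), one solves a Poisson problem and corrects the solution with its gradient:

    \[
    \Delta u = -1 \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,
    \qquad
    d(x) \approx -|\nabla u(x)| + \sqrt{|\nabla u(x)|^2 + 2\,u(x)},
    \]

This estimate reproduces the exact distance for a flat boundary, which is easy to verify in the one-dimensional case.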

Journal ArticleDOI
TL;DR: This work considers the problem of manufacturing free‐form geometry with classical manufacturing techniques, such as mold casting or 3‐axis milling, and determines a set of constraints that are necessary for manufacturability and decompose and deform the shape to satisfy the constraints per segment.
Abstract: We consider the problem of manufacturing free-form geometry with classical manufacturing techniques, such as mold casting or 3-axis milling. We determine a set of constraints that are necessary for manufacturability and then decompose and, if necessary, deform the shape to satisfy the constraints per segment. We show that many objects can be generated from a small number of mold-pieces if slight deformations are acceptable. We provide examples of actual molds and the resulting manufactured objects.

Journal ArticleDOI
TL;DR: An overview of the available appearance capture techniques is provided to guide practitioners and researchers in assessing the tradeoffs between current approaches and identifying directions for future advances in facial appearance capture.
Abstract: Facial appearance capture is now firmly established within academic research and used extensively across various application domains, perhaps most prominently in the entertainment industry through the design of virtual characters in video games and films. While significant progress has occurred over the last two decades, no single survey currently exists that discusses the similarities, differences, and practical considerations of the available appearance capture techniques as applied to human faces. A central difficulty of facial appearance capture is the way light interacts with skin, which has a complex multi-layered structure; the interactions that occur below the skin surface can, by definition, only be observed indirectly. In this report, we distinguish between two broad strategies for dealing with this complexity. "Image-based methods" try to exhaustively capture the exact face appearance under different lighting and viewing conditions, and then render the face through weighted image combinations. "Parametric methods" instead fit the captured reflectance data to some parametric appearance model used during rendering, allowing for a more lightweight and flexible representation but at the cost of potentially increased rendering complexity or inexact reproduction. The goal of this report is to provide an overview that can guide practitioners and researchers in assessing the tradeoffs between current approaches and identifying directions for future advances in facial appearance capture.

Journal ArticleDOI
TL;DR: A time-varying, multi-layered biophysically-based model of the optical properties of human skin, suitable for simulating appearance changes due to aging and inspired by tissue optics studies.
Abstract: This paper presents a time-varying, multi-layered biophysically-based model of the optical properties of human skin, suitable for simulating appearance changes due to aging. We have identified the key aspects that cause such changes, both in terms of the structure of skin and its chromophore concentrations, and rely on the extensive medical and optical tissue literature for accurate data. Our model can be expressed in terms of biophysical parameters, optical parameters commonly used in graphics and rendering such as spectral absorption and scattering coefficients, or more intuitively with higher-level parameters such as age, gender, skin care or skin type. It can be used with any rendering algorithm that uses diffusion profiles, and it allows different types of skin to be simulated automatically at different stages of aging, avoiding the need for artistic input or costly capture processes. While the presented skin model is inspired by tissue optics studies, we also provide a simplified version valid for non-diagnostic applications.

Journal ArticleDOI
TL;DR: A novel approach for the decimation of triangle surface meshes that takes as input a triangle surface mesh and a set of planar proxies detected in a pre‐processing analysis step, and structured via an adjacency graph to approximate the local mesh geometry as well as the geometry and structure of proxies.
Abstract: We present a novel approach for the decimation of triangle surface meshes. Our algorithm takes as input a triangle surface mesh and a set of planar proxies detected in a pre-processing analysis step, and structured via an adjacency graph. It then performs greedy mesh decimation through a series of edge collapses, designed to approximate the local mesh geometry as well as the geometry and structure of the proxies. Such a structure-preserving approach is well suited to planar abstraction, i.e. extreme decimation that approximates the planar parts well while filtering out the others. Our experiments on a variety of inputs illustrate the potential of our approach in terms of improved accuracy and preservation of structure.

Journal ArticleDOI
TL;DR: This work investigates the similarities between various emotional states with regard to the arousal and valence of Russell's circumplex model, using a variety of features that encode, in addition to the raw geometry, stylistic characteristics of motion based on Laban Movement Analysis.
Abstract: The increasing availability of large motion databases, in addition to advancements in motion synthesis, has made motion indexing and classification essential for better motion composition. However, in order to achieve good connectivity in motion graphs, it is important to understand human behaviour; human movement, though, is complex and difficult to describe completely. In this paper, we investigate the similarities between various emotional states with regard to the arousal and valence of Russell's circumplex model. We use a variety of features that encode, in addition to the raw geometry, stylistic characteristics of motion based on Laban Movement Analysis (LMA). Motion capture data from acted dance performances were used for training and classification purposes. The experimental results show that the proposed features can partially extract the LMA components, providing a representative space for indexing and classification of dance movements with regard to emotion. This work contributes to the understanding of human behaviour and actions, providing insights into how people express emotional states using their body, while the proposed features can be used as a complement to standard motion similarity, synthesis and classification methods.

Journal ArticleDOI
TL;DR: This work proposes a stable and efficient particle‐based method for simulating highly viscous fluids that can generate coiling and buckling phenomena and handle variable viscosity and proposes a new method for extracting coefficients of the matrix contributed by second‐ring neighbor particles to efficiently solve the linear system using a conjugate gradient solver.
Abstract: We propose a stable and efficient particle-based method for simulating highly viscous fluids that can generate coiling and buckling phenomena and handle variable viscosity. In contrast to previous methods that use explicit integration, our method uses an implicit formulation to improve the robustness of viscosity integration, therefore enabling use of larger time steps and higher viscosities. We use Smoothed Particle Hydrodynamics to solve the full form of viscosity, constructing a sparse linear system with a symmetric positive definite matrix, while exploiting the variational principle that automatically enforces the boundary condition on free surfaces. We also propose a new method for extracting coefficients of the matrix contributed by second-ring neighbor particles to efficiently solve the linear system using a conjugate gradient solver. Several examples demonstrate the robustness and efficiency of our implicit formulation over previous methods and illustrate the versatility of our method.

Journal ArticleDOI
TL;DR: Numerically, the SfO problem is approached by splitting it into two optimization sub‐problems: metric‐from‐operator (reconstruction of the discrete metric from the intrinsic operator) and embedding‐ from‐metric (finding a shape embedding that would realize a given metric, a setting of the multidimensional scaling problem).
Abstract: We formulate the problem of shape-from-operator (SfO): recovering an embedding of a mesh from intrinsic operators defined through the discrete metric (edge lengths). Particularly interesting instances of our SfO problem include: shape-from-Laplacian, allowing style to be transferred between shapes; shape-from-difference operator, used to synthesize shape analogies; and shape-from-eigenvectors, allowing 'intrinsic averages' of shape collections to be generated. Numerically, we approach the SfO problem by splitting it into two optimization sub-problems: metric-from-operator (reconstruction of the discrete metric from the intrinsic operator) and embedding-from-metric (finding a shape embedding that would realize a given metric, a setting of the multidimensional scaling problem). We study numerical properties of our problem, exemplify it on several applications, and discuss its limitations.
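
Schematically, and under the assumption of least-squares formulations (the abstract does not spell out the exact norms), the two sub-problems read:

    \[
    \text{metric-from-operator:}\quad \min_{\ell}\ \bigl\| L(\ell) - L_0 \bigr\|_F^2
    \quad \text{s.t. the lengths } \ell \text{ satisfy the triangle inequality on each face,}
    \]
    \[
    \text{embedding-from-metric:}\quad \min_{X}\ \sum_{(i,j)\in E} \bigl( \|x_i - x_j\| - \ell_{ij} \bigr)^2,
    \]

where L(ℓ) is the intrinsic operator (e.g. a cotangent Laplacian) induced by the edge lengths ℓ, L_0 is the prescribed operator, X collects the vertex positions, and the second problem is the multidimensional-scaling-style stress minimization mentioned above.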

Journal ArticleDOI
TL;DR: A painting machine that works with visual feedback and applies acrylic paint from a repository to a canvas until the created painting resembles a given input image or scene is described.
Abstract: We describe a painting machine and associated algorithms. Our modified industrial robot works with visual feedback and applies acrylic paint from a repository to a canvas until the created painting resembles a given input image or scene. The color differences between canvas and input are used to direct the application of new strokes. We present two optimization-based algorithms that place such strokes in relation to already existing ones. Using these methods we are able to create different painting styles, one that tries to match the input colors with almost transparent strokes and another one that creates dithering patterns of opaque strokes that approximate the input color. The machine produces paintings that mimic those created by human painters and allows us to study the painting process as well as the creation of artworks.

Journal ArticleDOI
TL;DR: This approach combines both macroscopic and microscopic controls of the crowd transformation to maximally maintain subgroups' local stability and dynamic collective behaviour, while minimizing the overall effort of the agents during the transformation.
Abstract: This paper introduces a new crowd formation transform approach to achieve visually pleasing group formation transition and control. Its core idea is to transform crowd formation shapes with a least-effort pair assignment using the Kuhn-Munkres algorithm, discover clusters of agent subgroups using affinity propagation and Delaunay triangulation algorithms, and apply a subgroup-based social force model (SFM) to the agent subgroups to achieve alignment, cohesion and collision avoidance. Meanwhile, mutual information of the dynamic crowd is used to guide agents' movement at runtime. This approach combines both macroscopic (least-effort position assignment and clustering) and microscopic (SFM) controls of the crowd transformation to maximally maintain the subgroups' local stability and dynamic collective behaviour, while minimizing the overall effort (i.e. travelling distance) of the agents during the transformation. Through simulation experiments and comparisons, we demonstrate that this approach is efficient and effective in generating visually pleasing and smooth transformations, and outperforms several existing crowd simulation approaches including reciprocal velocity avoidance, optimal reciprocal collision avoidance and OpenSteer.
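
The least-effort pair assignment at the core of the approach maps directly onto a standard Hungarian/Kuhn-Munkres solver. The sketch below is an illustration, not the authors' implementation: it assigns agents to slots of the target formation by minimizing the total squared travel distance with scipy's linear_sum_assignment.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assign_formation_slots(agent_pos, target_pos):
        """Least-effort assignment of agents to formation slots (sketch).

        agent_pos  : (N, 2) current agent positions
        target_pos : (N, 2) positions of the target formation shape
        Returns an array 'slots' with slots[i] = target slot index of agent i.
        """
        # cost matrix: squared distance every agent would travel to every slot
        diff = agent_pos[:, None, :] - target_pos[None, :, :]
        cost = np.sum(diff ** 2, axis=-1)

        rows, cols = linear_sum_assignment(cost)   # Kuhn-Munkres / Hungarian solver
        slots = np.empty(len(agent_pos), dtype=int)
        slots[rows] = cols
        return slots

In the full method this assignment is only the macroscopic layer; affinity-propagation clustering and the subgroup-based SFM then handle cohesion and collision avoidance during the actual transition.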

Journal ArticleDOI
TL;DR: An original algorithm to split a 3D model into parts that can be efficiently packed within a box, with the objective of reassembling them after delivery.
Abstract: Modern 3D printing technologies and the upcoming mass-customization paradigm call for efficient methods to produce and distribute arbitrarily shaped 3D objects. This paper introduces an original algorithm to split a 3D model in parts that can be efficiently packed within a box, with the objective of reassembling them after delivery. The first step consists in the creation of a hierarchy of possible parts that can be tightly packed within their minimum bounding boxes. In a second step, the hierarchy is exploited to extract the single segmentation whose parts can be most tightly packed. The fact that shape packing is an NP-complete problem justifies the use of heuristics and approximated solutions whose efficacy and efficiency must be assessed. Extensive experimentation demonstrates that our algorithm produces satisfactory results for arbitrarily shaped objects while being comparable to ad hoc methods when specific shapes are considered.