
Showing papers on "Graphics published in 2017"


Journal ArticleDOI
TL;DR: Vega-Lite combines a traditional grammar of graphics, providing visual encoding rules and a composition algebra for layered and multi-view displays, with a novel grammar of interaction, enabling rapid specification of interactive data visualizations.
Abstract: We present Vega-Lite, a high-level grammar that enables rapid specification of interactive data visualizations. Vega-Lite combines a traditional grammar of graphics, providing visual encoding rules and a composition algebra for layered and multi-view displays, with a novel grammar of interaction. Users specify interactive semantics by composing selections. In Vega-Lite, a selection is an abstraction that defines input event processing, points of interest, and a predicate function for inclusion testing. Selections parameterize visual encodings by serving as input data, defining scale extents, or by driving conditional logic. The Vega-Lite compiler automatically synthesizes requisite data flow and event handling logic, which users can override for further customization. In contrast to existing reactive specifications, Vega-Lite selections decompose an interaction design into concise, enumerable semantic units. We evaluate Vega-Lite through a range of examples, demonstrating succinct specification of both customized interaction methods and common techniques such as panning, zooming, and linked selection.
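
As an illustration of how a selection parameterizes an encoding, here is a minimal Vega-Lite-style specification written as a Python dictionary; the data URL and field names are placeholders, and exact property names differ slightly between Vega-Lite versions.

```python
import json

# Illustrative Vega-Lite-style spec (placeholder data URL and field names):
# an interval selection named "brush" drives conditional logic in the color
# encoding, so only points inside the brushed region keep their category color.
spec = {
    "data": {"url": "data/cars.json"},           # placeholder dataset
    "mark": "point",
    "selection": {"brush": {"type": "interval"}},
    "encoding": {
        "x": {"field": "Horsepower", "type": "quantitative"},
        "y": {"field": "Miles_per_Gallon", "type": "quantitative"},
        "color": {
            "condition": {"selection": "brush",
                          "field": "Origin", "type": "nominal"},
            "value": "lightgray"                 # points outside the selection
        }
    }
}

print(json.dumps(spec, indent=2))   # the spec can be handed to a Vega-Lite renderer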

622 citations



Proceedings ArticleDOI
04 Dec 2017
TL;DR: The proposed method uses a Convolutional Neural Network with a custom pooling layer to optimize the feature extraction scheme of current best-performing algorithms, and it outperforms state-of-the-art methods for both local and full-image classification.
Abstract: This paper presents a deep-learning method for distinguishing computer-generated graphics from real photographic images. The proposed method uses a Convolutional Neural Network (CNN) with a custom pooling layer to optimize the feature extraction scheme of current best-performing algorithms. Local estimates of class probabilities are computed and aggregated to predict the label of the whole picture. We evaluate our work on recent photo-realistic computer graphics and show that it outperforms state-of-the-art methods for both local and full-image classification.
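
A minimal PyTorch sketch of the general patch-level classification and aggregation idea (not the paper's architecture or its custom pooling layer; the patch size and layer widths are arbitrary assumptions):

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Toy CNN scoring small patches as photographic vs. computer generated."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))           # pool each patch to a single vector
        self.head = nn.Linear(64, 2)           # classes: {photo, CG}

    def forward(self, patches):                # patches: (B, 3, 64, 64)
        return self.head(self.features(patches).flatten(1))

def predict_image(model, image, patch=64):
    """Aggregate local class probabilities over non-overlapping patches."""
    _, h, w = image.shape
    tiles = [image[:, i:i + patch, j:j + patch]
             for i in range(0, h - patch + 1, patch)
             for j in range(0, w - patch + 1, patch)]
    probs = torch.softmax(model(torch.stack(tiles)), dim=1)
    return probs.mean(dim=0)                   # whole-image label scores
```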

262 citations


Proceedings Article
01 Jan 2017
TL;DR: The Visual Interaction Network is introduced, a general-purpose model for learning the dynamics of a physical system from raw visual observations, consisting of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks.
Abstract: From just a glance, humans can make rich predictions about the future of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains or require information about the underlying state. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments.
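
A minimal PyTorch sketch of the relational core behind an interaction-network-style dynamics predictor (the convolutional perceptual front-end is omitted, and the layer sizes are placeholder assumptions):

```python
import torch
import torch.nn as nn

class InteractionCore(nn.Module):
    """Pairwise relation MLP -> summed incoming effects -> per-object update."""
    def __init__(self, state_dim=64, effect_dim=64):
        super().__init__()
        self.relation = nn.Sequential(nn.Linear(2 * state_dim, 128), nn.ReLU(),
                                      nn.Linear(128, effect_dim))
        self.update = nn.Sequential(nn.Linear(state_dim + effect_dim, 128), nn.ReLU(),
                                    nn.Linear(128, state_dim))

    def forward(self, states):                        # states: (B, N, D)
        B, N, D = states.shape
        senders = states.unsqueeze(2).expand(B, N, N, D)
        receivers = states.unsqueeze(1).expand(B, N, N, D)
        effects = self.relation(torch.cat([senders, receivers], dim=-1))
        mask = 1.0 - torch.eye(N, device=states.device).view(1, N, N, 1)
        incoming = (effects * mask).sum(dim=1)        # sum effects over senders
        return states + self.update(torch.cat([states, incoming], dim=-1))

core = InteractionCore()
z = torch.randn(8, 5, 64)        # 8 scenes, 5 latent object slots
z_next = core(z)                 # roll latent object states one step forward
```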

232 citations


Proceedings Article
01 Dec 2017
TL;DR: A paradigm for understanding physical scenes without human annotations is introduced that quickly recognizes the physical world state from appearance and motion cues, and has the flexibility to incorporate both differentiable and non-differentiable physics and graphics engines.
Abstract: We introduce a paradigm for understanding physical scenes without human annotations. At the core of our system is a physical world representation that is first recovered by a perception module and then utilized by physics and graphics engines. During training, the perception module and the generative models learn by visual de-animation --- interpreting and reconstructing the visual information stream. During testing, the system first recovers the physical world state, and then uses the generative models for reasoning and future prediction. Even more so than forward simulation, inverting a physics or graphics engine is a computationally hard problem; we overcome this challenge by using a convolutional inversion network. Our system quickly recognizes the physical world state from appearance and motion cues, and has the flexibility to incorporate both differentiable and non-differentiable physics and graphics engines. We evaluate our system on both synthetic and real datasets involving multiple physical scenes, and demonstrate that our system performs well on both physical state estimation and reasoning problems. We further show that the knowledge learned on the synthetic dataset generalizes to constrained real images.

174 citations


Proceedings Article
02 Aug 2017
TL;DR: In this article, a spherical convolutional network is proposed to translate a planar CNN to process 360° imagery directly in its equirectangular projection, which yields the most accurate results while saving orders of magnitude in computation versus the existing exact reprojection solution.
Abstract: While 360° cameras offer tremendous new possibilities in vision, graphics, and augmented reality, the spherical images they produce make core feature extraction non-trivial. Convolutional neural networks (CNNs) trained on images from perspective cameras yield "flat" filters, yet 360° images cannot be projected to a single plane without significant distortion. A naive solution that repeatedly projects the viewing sphere to all tangent planes is accurate, but much too computationally intensive for real problems. We propose to learn a spherical convolutional network that translates a planar CNN to process 360° imagery directly in its equirectangular projection. Our approach learns to reproduce the flat filter outputs on 360° data, sensitive to the varying distortion effects across the viewing sphere. The key benefits are 1) efficient feature extraction for 360° images and video, and 2) the ability to leverage powerful pre-trained networks researchers have carefully honed (together with massive labeled image training sets) for perspective images. We validate our approach compared to several alternative methods in terms of both raw CNN output accuracy as well as applying a state-of-the-art "flat" object detector to 360° data. Our method yields the most accurate results while saving orders of magnitude in computation versus the existing exact reprojection solution.

137 citations


Journal ArticleDOI
TL;DR: This work implements a quantum optimal control algorithm based on automatic differentiation and harnesses the acceleration afforded by graphics processing units (GPUs) to enable more intricate control on the evolution path, suppression of departures from the truncated model subspace, and minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
Abstract: We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speedup calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control on the evolution path, suppression of departures from the truncated model subspace, as well as minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
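
The core idea of pulse optimization by automatic differentiation can be sketched on a toy single-qubit state transfer in PyTorch (a hypothetical example, not the paper's implementation or system; moving the tensors to a GPU with .to('cuda') provides the hardware acceleration discussed above):

```python
import torch

# Toy single-qubit system: drift H0 plus one control channel Hc (assumptions).
sx = torch.tensor([[0., 1.], [1., 0.]], dtype=torch.complex128)
sz = torch.tensor([[1., 0.], [0., -1.]], dtype=torch.complex128)
H0, Hc = 0.5 * sz, sx
dt, n_steps = 0.1, 50
psi0 = torch.tensor([1., 0.], dtype=torch.complex128)      # start in |0>
target = torch.tensor([0., 1.], dtype=torch.complex128)    # prepare |1>

u = torch.zeros(n_steps, dtype=torch.float64, requires_grad=True)  # control pulse
opt = torch.optim.Adam([u], lr=0.05)

for step in range(300):
    psi = psi0
    for k in range(n_steps):                      # piecewise-constant evolution
        U = torch.matrix_exp(-1j * (H0 + u[k] * Hc) * dt)
        psi = U @ psi
    infidelity = 1.0 - torch.abs(torch.vdot(target, psi)) ** 2
    opt.zero_grad()
    infidelity.backward()                         # gradients via automatic differentiation
    opt.step()

print(float(1.0 - infidelity))                    # final state-transfer fidelity
```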

131 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: This work proposes a new approach to learn an interpretable distributed representation of scenes, using a deterministic rendering function as the decoder and an object-proposal-based encoder that is trained by minimizing both the supervised prediction and the unsupervised reconstruction errors.
Abstract: We study the problem of holistic scene understanding. We would like to obtain a compact, expressive, and interpretable representation of scenes that encodes information such as the number of objects and their categories, poses, positions, etc. Such a representation would allow us to reason about and even reconstruct or manipulate elements of the scene. Previous works have used encoder-decoder based neural architectures to learn image representations; however, representations obtained in this way are typically uninterpretable, or only explain a single object in the scene. In this work, we propose a new approach to learn an interpretable distributed representation of scenes. Our approach employs a deterministic rendering function as the decoder, mapping a naturally structured and disentangled scene description, which we named scene XML, to an image. By doing so, the encoder is forced to perform the inverse of the rendering operation (a.k.a. de-rendering) to transform an input image to the structured scene XML that the decoder used to produce the image. We use an object-proposal-based encoder that is trained by minimizing both the supervised prediction and the unsupervised reconstruction errors. Experiments demonstrate that our approach works well on scene de-rendering with two different graphics engines, and our learned representation can be easily adapted for a wide range of applications like image editing, inpainting, visual analogy-making, and image captioning.

124 citations


Journal ArticleDOI
TL;DR: This paper presents Data-Driven Guides (DDG), a technique for designing expressive information graphics in a graphic design environment that provides guides to encode data using three fundamental visual encoding channels: length, area, and position.
Abstract: In recent years, there has been a growing need to communicate complex data in an accessible graphical form. Existing visualization creation tools support automatic visual encoding, but lack flexibility for creating custom designs; on the other hand, freeform illustration tools require manual visual encoding, making the design process time-consuming and error-prone. In this paper, we present Data-Driven Guides (DDG), a technique for designing expressive information graphics in a graphic design environment. Instead of being confined by predefined templates or marks, designers can generate guides from data and use the guides to draw, place and measure custom shapes. We provide guides to encode data using three fundamental visual encoding channels: length, area, and position. Users can combine more than one guide to construct complex visual structures and map these structures to data. When the underlying data is changed, we use a deformation technique to transform custom shapes using the guides as the backbone of the shapes. Our evaluation shows that data-driven guides allow users to create expressive and more accurate custom data-driven graphics.

116 citations


Posted Content
TL;DR: In this paper, a spherical convolutional network is proposed to translate a planar CNN to process 360° imagery directly in its equirectangular projection, sensitive to the varying distortion effects across the viewing sphere.
Abstract: While 360° cameras offer tremendous new possibilities in vision, graphics, and augmented reality, the spherical images they produce make core feature extraction non-trivial. Convolutional neural networks (CNNs) trained on images from perspective cameras yield "flat" filters, yet 360° images cannot be projected to a single plane without significant distortion. A naive solution that repeatedly projects the viewing sphere to all tangent planes is accurate, but much too computationally intensive for real problems. We propose to learn a spherical convolutional network that translates a planar CNN to process 360° imagery directly in its equirectangular projection. Our approach learns to reproduce the flat filter outputs on 360° data, sensitive to the varying distortion effects across the viewing sphere. The key benefits are 1) efficient feature extraction for 360° images and video, and 2) the ability to leverage powerful pre-trained networks researchers have carefully honed (together with massive labeled image training sets) for perspective images. We validate our approach compared to several alternative methods in terms of both raw CNN output accuracy as well as applying a state-of-the-art "flat" object detector to 360° data. Our method yields the most accurate results while saving orders of magnitude in computation versus the existing exact reprojection solution.

115 citations


Posted Content
TL;DR: A point tracking system powered by two deep convolutional neural networks that are trained with simple synthetic data, alleviating the requirement of expensive external camera ground truthing and advanced graphics rendering pipelines.
Abstract: We present a point tracking system powered by two deep convolutional neural networks. The first network, MagicPoint, operates on single images and extracts salient 2D points. The extracted points are "SLAM-ready" because they are by design isolated and well-distributed throughout the image. We compare this network against classical point detectors and discover a significant performance gap in the presence of image noise. As transformation estimation is simpler when the detected points are geometrically stable, we designed a second network, MagicWarp, which operates on pairs of point images (outputs of MagicPoint) and estimates the homography that relates the inputs. This transformation engine differs from traditional approaches because it does not use local point descriptors, only point locations. Both networks are trained with simple synthetic data, alleviating the requirement of expensive external camera ground truthing and advanced graphics rendering pipelines. The system is fast and lean, easily running at 30+ FPS on a single CPU.

Posted Content
TL;DR: The Visual Interaction Network is introduced, a general-purpose model for learning the dynamics of a physical system from raw visual observations, consisting of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks.
Abstract: From just a glance, humans can make rich predictions about the future state of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains and require direct measurements of the underlying states. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions and dynamics, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. Our results demonstrate that the perceptual module and the object-based dynamics predictor module can induce factored latent representations that support accurate dynamical predictions. This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments.

Journal ArticleDOI
TL;DR: 3DLite, a novel approach to reconstructing 3D environments using consumer RGB-D sensors, computes a lightweight, low-polygonal geometric abstraction of the scanned geometry, making a step towards directly utilizing captured 3D content in graphics applications such as video games, VR, or AR.
Abstract: We present 3DLite, a novel approach to reconstruct 3D environments using consumer RGB-D sensors, making a step towards directly utilizing captured 3D content in graphics applications, such as video games, VR, or AR. Rather than reconstructing an accurate one-to-one representation of the real world, our method computes a lightweight, low-polygonal geometric abstraction of the scanned geometry. We argue that for many graphics applications it is much more important to obtain high-quality surface textures rather than highly-detailed geometry. To this end, we compensate for motion blur, auto-exposure artifacts, and micro-misalignments in camera poses by warping and stitching image fragments from low-quality RGB input data to achieve high-resolution, sharp surface textures. In addition to the observed regions of a scene, we extrapolate the scene geometry, as well as the mapped surface textures, to obtain a complete 3D model of the environment. We show that a simple planar abstraction of the scene geometry is ideally suited for this completion task, enabling 3DLite to produce complete, lightweight, and visually compelling 3D scene models. We believe that these CAD-like reconstructions are an important step towards leveraging RGB-D scanning in actual content creation pipelines.

Journal ArticleDOI
TL;DR: Recent research on how computer vision techniques benefit computer graphics techniques and vice versa is surveyed, and research on analysis, manipulation, synthesis, and interaction is covered.
Abstract: The computer graphics and computer vision communities have been working closely together in recent years, and a variety of algorithms and applications have been developed to analyze and manipulate the visual media around us. There are three major driving forces behind this phenomenon: 1) the availability of big data from the Internet has created a demand for dealing with the ever-increasing, vast amount of resources; 2) powerful processing tools, such as deep neural networks, provide effective ways for learning how to deal with heterogeneous visual data; 3) new data capture devices, such as the Kinect, bridge the gap between algorithms for 2D image understanding and 3D model analysis. These driving forces have emerged only recently, and we believe that the computer graphics and computer vision communities are still at the beginning of their honeymoon phase. In this work we survey recent research on how computer vision techniques benefit computer graphics techniques and vice versa, and cover research on analysis, manipulation, synthesis, and interaction. We also discuss existing problems and suggest possible further research directions.

Journal ArticleDOI
TL;DR: In this paper, the authors present recent advances in the field of transient imaging from a graphics and vision perspective, including capture techniques, analysis, applications and simulation, as well as a comprehensive overview of the current state of the art.

Proceedings ArticleDOI
25 Dec 2017
TL;DR: A triplet network is used to learn a feature embedding capable of measuring style similarity independent of structure, delivering significant gains over previous networks for style discrimination.
Abstract: We propose a novel measure of visual similarity for image retrieval that incorporates both structural and aesthetic (style) constraints. Our algorithm accepts a query as a sketched shape, and a set of one or more contextual images specifying the desired visual aesthetic. A triplet network is used to learn a feature embedding capable of measuring style similarity independent of structure, delivering significant gains over previous networks for style discrimination. We incorporate this model within a hierarchical triplet network to unify and learn a joint space from two discriminatively trained streams for style and structure. We demonstrate that this space enables, for the first time, style-constrained sketch search over a diverse domain of digital artwork comprising graphics, paintings and drawings. We also briefly explore alternative query modalities.
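
The style-discrimination objective rests on a standard triplet loss, which can be sketched in a few lines of PyTorch (the embedding network and image size below are placeholders, not the paper's architecture):

```python
import torch
import torch.nn as nn

# Placeholder embedding network; the paper trains a deeper, discriminative stream for style.
embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256),
                      nn.ReLU(), nn.Linear(256, 128))
triplet = nn.TripletMarginLoss(margin=0.2)

# anchor/positive share a style, negative has a different style (random stand-ins here)
anchor, positive, negative = (torch.randn(16, 3, 64, 64) for _ in range(3))
loss = triplet(embed(anchor), embed(positive), embed(negative))
loss.backward()   # pulls same-style pairs together, pushes different styles apart
```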

Journal ArticleDOI
TL;DR: Compared with some existing methods, the selection of features is effective and fewer features are required to represent the differences between NI and CG; meanwhile, the classification time is significantly reduced and the robustness is maintained.
Abstract: The aim of the work presented in this paper is to discriminate natural images (NI) and computer generated graphics (CG). Texture differences are analyzed on the residual images of NI and CG. The residual images are first extracted by using multiple linear regressions, and then the fitting degree of the regression model is investigated. Through analysis of the differences in their residual images, 9 dimensions of histogram features and 9 dimensions of multi-fractal spectrum features are extracted to represent their texture differences. Combined with 6 dimensions of regression model fitness features, natural images and computer generated graphics are discriminated by using a support vector machine (SVM) classifier. Experimental results and analysis show that it can achieve an average identification accuracy of 98.69%, and it is robust against JPEG compression, rotation, additive noise and image resizing. Compared with some existing methods, the selection of features is effective and fewer features are required to represent the differences between NI and CG. Meanwhile, the classification time is significantly reduced and the robustness is maintained. It has great potential to be used in image source pipeline identification.
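
A simplified scikit-learn sketch of the regression-residual-plus-SVM pipeline; it predicts each pixel from its four neighbours and histograms the residuals as a stand-in for the full 24-dimensional feature set described above (`images` and `labels` are assumed to be grayscale arrays and 0/1 class labels supplied by the reader):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVC

def residual_histogram(img, bins=9):
    """Fit each pixel from its four neighbours (multiple linear regression)
    and histogram the residual image; a simplified stand-in feature vector."""
    g = img.astype(float)
    y = g[1:-1, 1:-1].ravel()
    X = np.stack([g[:-2, 1:-1].ravel(), g[2:, 1:-1].ravel(),
                  g[1:-1, :-2].ravel(), g[1:-1, 2:].ravel()], axis=1)
    residuals = y - LinearRegression().fit(X, y).predict(X)
    hist, _ = np.histogram(residuals, bins=bins, range=(-50, 50), density=True)
    return hist

# `images` (list of grayscale arrays) and `labels` (0 = NI, 1 = CG) are assumed given.
features = np.array([residual_histogram(im) for im in images])
clf = SVC(kernel="rbf").fit(features, labels)
```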

Journal ArticleDOI
TL;DR: The parallel map projection framework presented in this study is based on a layered architecture that couples the capabilities of cloud computing with high-performance computing accelerated by Graphics Processing Units, providing considerable acceleration for re-projecting vector-based big spatial data.

Journal ArticleDOI
TL;DR: Opt as discussed by the authors is a language for writing objective functions over image- or graph-structured unknowns concisely and at a high level, which automatically transforms these specifications into state-of-the-art GPU solvers based on Gauss-Newton or Levenberg-Marquardt methods.
Abstract: Many graphics and vision problems can be expressed as non-linear least squares optimizations of objective functions over visual data, such as images and meshes. The mathematical descriptions of these functions are extremely concise, but their implementation in real code is tedious, especially when optimized for real-time performance on modern GPUs in interactive applications. In this work, we propose a new language, Opt, for writing these objective functions over image- or graph-structured unknowns concisely and at a high level. Our compiler automatically transforms these specifications into state-of-the-art GPU solvers based on Gauss-Newton or Levenberg-Marquardt methods. Opt can generate different variations of the solver, so users can easily explore tradeoffs in numerical precision, matrix-free methods, and solver approaches. In our results, we implement a variety of real-world graphics and vision applications. Their energy functions are expressible in tens of lines of code and produce highly optimized GPU solver implementations. These solvers are competitive in performance with the best published hand-tuned, application-specific GPU solvers, and orders of magnitude beyond a general-purpose auto-generated solver.
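
For readers unfamiliar with the solver family Opt generates, a bare-bones Gauss-Newton iteration on a toy curve-fitting energy looks like the sketch below (plain NumPy, unrelated to Opt's generated GPU code):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Minimize E(x) = 0.5 * ||r(x)||^2 with Gauss-Newton steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)   # normal-equations step
    return x

# Toy energy: fit y = a * exp(b * t) to noisy samples.
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * np.random.randn(50)
r = lambda p: p[0] * np.exp(p[1] * t) - y
J = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
print(gauss_newton(r, J, [1.0, -1.0]))              # converges near [2.0, -1.5]
```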


Journal ArticleDOI
TL;DR: JPEG is celebrating the 25th anniversary of its approval as a standard this year; where did it come from, and what fundamental components have given it longevity?
Abstract: JPEG is celebrating the 25th anniversary of its approval as a standard this year. Where did JPEG come from, and what are the fundamental components that have given it longevity?

Proceedings ArticleDOI
01 Oct 2017
TL;DR: An inverse graphics approach to the problem of scene understanding is developed, obtaining a rich representation that includes descriptions of the objects in the scene and their spatial layout, as well as global latent variables like the camera parameters and lighting.
Abstract: We develop an inverse graphics approach to the problem of scene understanding, obtaining a rich representation that includes descriptions of the objects in the scene and their spatial layout, as well as global latent variables like the camera parameters and lighting. The framework's stages include object detection, the prediction of the camera and lighting variables, and prediction of object-specific variables (shape, appearance and pose). This acts like the encoder of an autoencoder, with graphics rendering as the decoder. Importantly, the scene representation is interpretable and is of variable dimension to match the detected number of objects plus the global variables. For the prediction of the camera latent variables we introduce a novel architecture termed Probabilistic HoughNets (PHNs), which provides a principled approach to combining information from multiple detections. We demonstrate the quality of the reconstructions obtained quantitatively on synthetic data, and qualitatively on real scenes.

Proceedings Article
08 Feb 2017
TL;DR: A machine learning based system that extracts and recognizes the various data fields present in a bar chart for semantic labeling and is tested on a set of over 200 bar charts extracted from over 1,000 scientific articles in PDF format.
Abstract: Large scholarly repositories are designed to provide scientists and researchers with a wealth of information that is retrieved from data present in a variety of formats. A typical scholarly document contains information in a combined layout of texts and graphic images. Common types of graphics found in these documents are scientific charts that are used to represent data values in a visual format. Experimental results are rarely described without the aid of one form of chart or another, whether it is a 2D plot, bar chart, pie chart, etc. Metadata of these graphics is usually the only content that is made available for search by user queries. By processing the image content and extracting the data represented in the graphics, search engines will be able to handle more specific queries related to the data itself. In this paper we describe a machine learning based system that extracts and recognizes the various data fields present in a bar chart for semantic labeling. Our approach comprises a graphics and text separation and extraction phase, followed by component role classification for both text and graphic components, which are in turn used for semantic analysis and representation of the chart. The proposed system is tested on a set of over 200 bar charts extracted from over 1,000 scientific articles in PDF format.

Journal ArticleDOI
TL;DR: This work performs a literature review to find out how researchers have already applied word-sized representations for reporting empirical and bibliographic data, and calls on the visualization community to pioneer the exploration of new visualization-enriched and interactive publication formats.
Abstract: Generating visualizations at the size of a word creates dense information representations often called sparklines. The integration of word-sized graphics into text could avoid additional cognitive load caused by splitting the readers’ attention between figures and text. In scientific publications, these graphics make statements easier to understand and verify because additional quantitative information is available where needed. In this work, we perform a literature review to find out how researchers have already applied such word-sized representations. Illustrating the versatility of the approach, we leverage these representations for reporting empirical and bibliographic data in three application examples. For interactive Web-based publications, we explore levels of interactivity and discuss interaction patterns to link visualization and text. We finally call on the visualization community to pioneer the exploration of new visualization-enriched and interactive publication formats.

Posted Content
TL;DR: FluxMaker is an inexpensive, scalable system that renders dynamic information on top of static tactile graphics with movable tactile markers; a user study confirms advantages in application domains such as education and data exploration.
Abstract: For people with visual impairments, tactile graphics are an important means to learn and explore information. However, raised-line tactile graphics created with traditional materials such as embossing are static. While available refreshable displays can dynamically change the content, they are still too expensive for many users and are limited in size. These factors limit widespread adoption and the representation of large graphics or data sets. In this paper, we present FluxMaker, an inexpensive scalable system that renders dynamic information on top of static tactile graphics with movable tactile markers. These dynamic tactile markers can be easily reconfigured and used to annotate static raised-line tactile graphics, including maps, graphs, and diagrams. We developed a hardware prototype that actuates magnetic tactile markers driven by low-cost and scalable electromagnetic coil arrays, which can be fabricated with standard printed circuit board manufacturing. We evaluated our prototype with six participants with visual impairments and found positive results across four application areas: location finding and navigation on tactile maps, data analysis and physicalization, feature identification on tactile graphics, and drawing support. The user study confirms advantages in application domains such as education and data exploration.

Proceedings ArticleDOI
19 Oct 2017
TL;DR: FluxMaker as discussed by the authors is an inexpensive scalable system that renders dynamic information on top of static tactile graphics with movable tactile markers, which can be easily reconfigured and used to annotate static raised line tactile graphics, including maps, graphs and diagrams.
Abstract: For people with visual impairments, tactile graphics are an important means to learn and explore information. However, raised-line tactile graphics created with traditional materials such as embossing are static. While available refreshable displays can dynamically change the content, they are still too expensive for many users and are limited in size. These factors limit widespread adoption and the representation of large graphics or data sets. In this paper, we present FluxMaker, an inexpensive scalable system that renders dynamic information on top of static tactile graphics with movable tactile markers. These dynamic tactile markers can be easily reconfigured and used to annotate static raised-line tactile graphics, including maps, graphs, and diagrams. We developed a hardware prototype that actuates magnetic tactile markers driven by low-cost and scalable electromagnetic coil arrays, which can be fabricated with standard printed circuit board manufacturing. We evaluated our prototype with six participants with visual impairments and found positive results across four application areas: location finding and navigation on tactile maps, data analysis and physicalization, feature identification on tactile graphics, and drawing support. The user study confirms advantages in application domains such as education and data exploration.

Journal ArticleDOI
01 May 2017
TL;DR: The study found that the animated-video multimedia element significantly increased students' imagination and visualization.
Abstract: The rapid development of information technology has breathed new life into the use of computers in education. One increasingly popular technology is multimedia, which merges a variety of media such as text, graphics, animation, video and audio controlled by a computer. With this technology, a wide range of multimedia elements can be developed to improve the quality of education. For that reason, this study investigates the use of a multimedia element based on an animated video developed for the Engineering Drawing subject according to the syllabus of the Vocational College of Malaysia. The study used a quantitative survey design and involved 30 respondents who were Industrial Machining students. The instrument was a questionnaire with a reliability coefficient of 0.83, calculated using Cronbach's alpha. Data were collected and analyzed descriptively using SPSS. The study found that the animated-video multimedia element significantly increased students' imagination and visualization. The implications of this study provide information on how the use of multimedia elements affects students' imagination and visualization. In general, these findings contribute to the development of multimedia materials appropriate for enhancing the quality of learning materials for engineering drawing.

Journal ArticleDOI
TL;DR: Three different approaches to visualize networks by building on the grammar of graphics framework implemented in the ggplot2 package are explored, which allow users to enhance networks with additional information on edges and nodes and convert network data objects to the more familiar data frames.
Abstract: This paper explores three different approaches to visualize networks by building on the grammar of graphics framework implemented in the ggplot2 package. The goal of each approach is to provide the user with the ability to apply the flexibility of ggplot2 to the visualization of network data, including through the mapping of network attributes to specific plot aesthetics. By incorporating networks in the ggplot2 framework, these approaches (1) allow users to enhance networks with additional information on edges and nodes, (2) give access to the strengths of ggplot2, such as layers and facets, and (3) convert network data objects to the more familiar data frames.

Journal ArticleDOI
TL;DR: 103 ten-year-old children learnt to tie complex nautical knots from either a video of hand movements or a static graphics presentation, showing that long-section animation did not always lose its superiority over static graphics in this type of learning task.

Journal ArticleDOI
TL;DR: It is demonstrated how the usage of widely accessible graphics cards on personal computers can elevate the computing power in Monte Carlo simulations by orders of magnitude, thus allowing live classroom demonstration of phenomena that would otherwise be out of reach.
Abstract: The use of computers in statistical physics is common because the sheer number of equations that describe the behavior of an entire system particle by particle often makes it impossible to solve them exactly. Monte Carlo methods form a particularly important class of numerical methods for solving problems in statistical physics. Although these methods are simple in principle, their proper use requires a good command of statistical mechanics, as well as considerable computational resources. The aim of this paper is to demonstrate how the usage of widely accessible graphics cards on personal computers can elevate the computing power in Monte Carlo simulations by orders of magnitude, thus allowing live classroom demonstration of phenomena that would otherwise be out of reach. As an example, we use the public goods game on a square lattice where two strategies compete for common resources in a social dilemma situation. We show that the second-order phase transition to an absorbing phase in the system belongs to the directed percolation universality class, and we compare the time needed to arrive at this result by means of the main processor and by means of a suitable graphics card. Parallel computing on graphics processing units has been developed actively during the last decade, to the point where today the learning curve for entry is anything but steep for those familiar with programming. The subject is thus ripe for inclusion in graduate and advanced undergraduate curricula, and we hope that this paper will facilitate this process in the realm of physics education. To that end, we provide a documented source code for an easy reproduction of presented results and for further development of Monte Carlo simulations of similar systems.
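
As a flavour of the data-parallel updates that map well onto graphics cards, the sketch below performs checkerboard Metropolis sweeps on a 2D Ising lattice (an illustrative stand-in, not the paper's public goods game); replacing the NumPy import with CuPy runs the same vectorized sweeps on a GPU.

```python
import numpy as np   # swap for `import cupy as np` to run the sweeps on a GPU

def metropolis_sweep(spins, beta):
    """One checkerboard Metropolis sweep of a 2D Ising lattice with periodic
    boundaries; sites of one sublattice are independent and update in parallel."""
    i, j = np.indices(spins.shape)
    for parity in (0, 1):
        nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nb                               # energy cost of a flip
        accept = np.random.random(spins.shape) < np.exp(-beta * dE)
        flip = ((i + j) % 2 == parity) & accept
        spins = np.where(flip, -spins, spins)
    return spins

np.random.seed(0)
spins = np.where(np.random.random((256, 256)) < 0.5, 1, -1)
for _ in range(200):
    spins = metropolis_sweep(spins, beta=0.6)
print(abs(spins.mean()))    # magnetization; ordered phase for beta above ~0.44
```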