
Showing papers on "Graphics" published in 2012


Book
28 Mar 2012
TL;DR: Level of Detail for 3D Graphics brings together, for the first time, the mechanisms, principles, practices, and theory needed by every graphics developer seeking to apply LOD methods.
Abstract: From the Publisher: Level of detail (LOD) techniques are increasingly used by professional real-time developers to strike the balance between breathtaking virtual worlds and smooth, flowing animation. Level of Detail for 3D Graphics brings together, for the first time, the mechanisms, principles, practices, and theory needed by every graphics developer seeking to apply LOD methods. Continuing advances in level of detail management have brought this powerful technology to the forefront of 3D graphics optimization research. This book, written by the very researchers and developers who have built LOD technology, is both a state-of-the-art chronicle of LOD advances and a practical sourcebook, which will enable graphics developers from all disciplines to apply these formidable techniques to their own work.

Features:
- Is a complete, practical resource for programmers wishing to incorporate LOD technology into their own systems.
- Is an important reference for professionals in game development, computer animation, information visualization, real-time graphics and simulation, data capture and preview, CAD display, and virtual worlds.
- Is accessible to anyone familiar with the essentials of computer science and interactive computer graphics.
- Covers the full range of LOD methods, from mesh simplification to error metrics, as well as advanced issues of human perception, temporal detail, and visual fidelity measurement.
- Includes an accompanying Web site rich in supplementary material, including source code, tools, 3D models, public-domain software, documentation, LOD updates, and more.

Author Biographies:

David Luebke: David is an Assistant Professor in the Department of Computer Science at the University of Virginia. His principal research interest is the problem of rendering very complex scenes at interactive rates. His research focuses on software techniques such as polygonal simplification and occlusion culling to reduce the complexity of such scenes to manageable levels. Luebke's dissertation research, summarized in a SIGGRAPH '97 paper, introduced a dynamic, view-dependent approach to polygonal simplification for interactive rendering of extremely complex CAD models. He earned his Ph.D. at the University of North Carolina and his bachelor's degree at the Colorado College.

Martin Reddy: Martin is a Senior Computer Scientist at SRI International, where he works in the area of terrain visualization. This work involves the real-time display of massive terrain databases that are distributed over wide-area networks. His research interests include level of detail, visual perception, and computer graphics. His doctoral research involved the application of models of visual perception to real-time computer graphics systems, enabling the selection of level of detail based upon measures of human perception. He received his B.Sc. from the University of Strathclyde and his Ph.D. from the University of Edinburgh, UK. He is on the Board of Directors of the Web3D Consortium and chair of the GeoVRML Working Group.

Jonathan D. Cohen: Jon is an Assistant Professor in the Department of Computer Science at The Johns Hopkins University. He earned his doctoral and master's degrees from The University of North Carolina at Chapel Hill and his bachelor's degree from Duke University. His interests include polygonal simplification and other software acceleration techniques, parallel rendering architectures, collision detection, and high-quality interactive computer graphics.

Amitabh Varshney: Amitabh is an Associate Professor in the Department of Computer Science at the University of Maryland. His research interests lie in interactive computer graphics, scientific visualization, molecular graphics, and CAD. Varshney has worked on several aspects of level-of-detail simplifications, including topology-preserving and topology-reducing simplifications, view-dependent simplifications, parallelization of simplification computation, and the use of triangle strips in multiresolution rendering. Varshney received his Ph.D. and M.S. from the University of North Carolina at Chapel Hill in 1994 and 1991, respectively. He received his B.Tech. in Computer Science from the Indian Institute of Technology at Delhi in 1989.

Benjamin Watson: Ben is an Assistant Professor in Computer Science at Northwestern University. He earned his doctoral and master's degrees at Georgia Tech's GVU Center, and his bachelor's degree at the University of California, Irvine. His dissertation focused on the user-performance effects of dynamic level of detail management. His other research interests include object simplification, medical applications of virtual reality, and 3D user interfaces.

Robert Huebner: Robert is the Director of Technology at Nihilistic Software, an independent development studio located in Marin County, California. Prior to co-founding Nihilistic, Robert worked on a number of successful game titles, including "Jedi Knight: Dark Forces 2" for LucasArts Entertainment, "Descent" for Parallax Software, and "Starcraft" for Blizzard Entertainment. Nihilistic's first title, "Vampire The Masquerade: Redemption", was released for the PC in 2000 and sold over 500,000 copies worldwide. Nihilistic's second project will be released in the winter of 2002 on next-generation game consoles. Robert has spoken on game technology topics at SIGGRAPH, the Game Developers Conference (GDC), and the Electronic Entertainment Expo (E3). He also serves on the advisory boards of the Game Developers Conference and the International Game Developers Association (IGDA). Robert's e-mail address is .

680 citations


Journal ArticleDOI
TL;DR: This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics.
Abstract: A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. How we "see" details in an image can directly impact a viewer's efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics.

330 citations


Book
22 Aug 2012
TL;DR: The first book to offer a broad, hands-on introduction to information graphics and visualization, The Functional Art reveals why data visualization should be thought of as functional art rather than fine art, and shows how to use color, type, and other graphic tools to make information graphics more effective, not just better looking.
Abstract: Unlike any time before in our lives, we have access to vast amounts of free information. With the right tools, we can start to make sense of all this data to see patterns and trends that would otherwise be invisible to us. By transforming numbers into graphical shapes, we allow readers to understand the stories those numbers hide. In this practical introduction to understanding and using information graphics, you'll learn how to use data visualizations as tools to see beyond lists of numbers and variables and achieve new insights into the complex world around us. Regardless of the kind of data you're working with (business, science, politics, sports, or even your own personal finances), this book will show you how to use statistical charts, maps, and explanation diagrams to spot the stories in the data and learn new things from it. You'll also get to peek into the creative process of some of the world's most talented designers and visual journalists, including Condé Nast Traveler's John Grimwade, National Geographic Magazine's Fernando Baptista, The New York Times' Steve Duenes, The Washington Post's Hannah Fairfield, Hans Rosling of the Gapminder Foundation, Stanford's Geoff McGhee, and European superstars Moritz Stefaner, Jan Willem Tulp, Stefanie Posavec, and Gregor Aisch. The book also includes a DVD-ROM containing over 90 minutes of video lessons that expand on core concepts explained within the book and include even more inspirational information graphics from the world's leading designers.

The first book to offer a broad, hands-on introduction to information graphics and visualization, The Functional Art reveals:
- Why data visualization should be thought of as "functional art" rather than fine art
- How to use color, type, and other graphic tools to make your information graphics more effective, not just better looking
- The science of how our brains perceive and remember information
- Best practices for creating interactive information graphics
- A comprehensive look at the creative process behind successful information graphics
- An extensive gallery of inspirational work from the world's top designers and visual artists

On the DVD-ROM: In this introductory video course on information graphics, Alberto Cairo goes into greater detail with even more visual examples of how to create effective information graphics that function as practical tools for aiding perception. You'll learn how to incorporate basic design principles in your visualizations, create simple interfaces for interactive graphics, and choose the appropriate type of graphic forms for your data. Cairo also deconstructs successful information graphics from The New York Times and National Geographic magazine with sketches and images not shown in the book.

248 citations


Journal ArticleDOI
TL;DR: The methodology for adapting a standard micromagnetic code to run on graphics processing units (GPUs) and exploit the potential for parallel calculations of this platform is discussed, and GPMagnet, a general purpose finite-difference GPU-based micromagnetic tool, is used as an example.
Abstract: The methodology for adapting a standard micromagnetic code to run on graphics processing units (GPUs) and exploit the potential for parallel calculations of this platform is discussed. GPMagnet, a general purpose finite-difference GPU-based micromagnetic tool, is used as an example. Speed-up factors of two orders of magnitude can be achieved with GPMagnet with respect to a serial code. This allows for running extensive simulations, nearly inaccessible with a standard micromagnetic solver, at reasonable computational times.
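To make the parallelization argument concrete, the following is a minimal numpy sketch of the kind of finite-difference update a micromagnetic solver performs: every cell depends only on its immediate neighbors, which is exactly the structure that maps onto one GPU thread per cell. The function names, material constants, periodic boundaries, and the explicit Euler integrator are illustrative assumptions, not GPMagnet's actual implementation.

```python
import numpy as np

def exchange_field(m, dx, A=1.3e-11, Ms=8e5):
    """Five-point-stencil exchange field on a 2D grid of unit magnetization
    vectors m with shape (nx, ny, 3). Each cell reads only its neighbors
    (periodic boundaries via np.roll, for brevity), so the update maps
    directly onto one GPU thread per cell."""
    mu0 = 4e-7 * np.pi
    lap = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
           np.roll(m, 1, 1) + np.roll(m, -1, 1) - 4.0 * m) / dx**2
    return (2.0 * A / (mu0 * Ms)) * lap

def llg_step(m, dt, gamma=2.211e5, alpha=0.02, dx=5e-9):
    """One explicit Landau-Lifshitz-Gilbert Euler step (illustrative only;
    production solvers use higher-order integrators and more field terms)."""
    h = exchange_field(m, dx)
    mxh = np.cross(m, h)
    dmdt = -gamma / (1 + alpha**2) * (mxh + alpha * np.cross(m, mxh))
    m = m + dt * dmdt
    return m / np.linalg.norm(m, axis=-1, keepdims=True)  # keep |m| = 1

m = np.zeros((64, 64, 3)); m[..., 0] = 1.0  # uniform initial state
m = llg_step(m, dt=1e-13)
```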

228 citations


Patent
27 Apr 2012
TL;DR: In this paper, the authors describe a system that can scan the surrounding environment and construct a 3D image, map, or representation of the surrounding environment using, for example, invisible light projected into the environment.
Abstract: The systems and methods described herein include a device that can scan the surrounding environment and construct a 3D image, map, or representation of the surrounding environment using, for example, invisible light projected into the environment. In some implementations, the device can also project into the surrounding environment one or more visible radiation patterns (e.g., a virtual object, text, graphics, images, symbols, color patterns, etc.) that are based at least in part on the 3D map of the surrounding environment.

208 citations


Journal ArticleDOI
Manuel Lang, Oliver Wang, Tunc Ozan Aydin, Aljoscha Smolic, Markus Gross
01 Jul 2012
TL;DR: This method extends recent work in edge-aware filtering, approximating costly global regularization with a fast iterative joint filtering operation, and filters along motion paths using an iterative approach that simultaneously uses and estimates per-pixel optical flow vectors.
Abstract: We present an efficient and simple method for introducing temporal consistency to a large class of optimization driven image-based computer graphics problems. Our method extends recent work in edge-aware filtering, approximating costly global regularization with a fast iterative joint filtering operation. Using this representation, we can achieve tremendous efficiency gains both in terms of memory requirements and running time. This enables us to process entire shots at once, taking advantage of supporting information that exists across far away frames, something that is difficult with existing approaches due to the computational burden of video data. Our method is able to filter along motion paths using an iterative approach that simultaneously uses and estimates per-pixel optical flow vectors. We demonstrate its utility by creating temporally consistent results for a number of applications including optical flow, disparity estimation, colorization, scribble propagation, sparse data up-sampling, and visual saliency computation.
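The core idea, blending each frame's independent estimate with the previous result warped along optical flow, can be sketched as follows. This is a simplified stand-in for the paper's edge-aware joint filtering; the nearest-neighbor warping, fixed blend weight, and function names are assumptions.

```python
import numpy as np

def warp_backward(prev, flow):
    """Sample `prev` at positions displaced by `flow` (nearest-neighbor for
    brevity; a real implementation would interpolate bilinearly)."""
    h, w = prev.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    return prev[src_y, src_x]

def temporally_filter(per_frame_results, flows, strength=0.7):
    """Causal pass over a shot: mix each frame's independent estimate with
    the motion-compensated result carried over from the previous frame."""
    out = [per_frame_results[0]]
    for cur, flow in zip(per_frame_results[1:], flows):
        warped = warp_backward(out[-1], flow)
        out.append(strength * warped + (1 - strength) * cur)
    return out

frames = [np.random.rand(48, 64) for _ in range(5)]  # noisy per-frame maps
flows = [np.zeros((48, 64, 2)) for _ in range(4)]    # static camera
smooth = temporally_filter(frames, flows)
```

A real implementation would also filter edge-awarely in space and refine the flow while filtering, as the paper describes.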

160 citations


Patent
07 Feb 2012
TL;DR: In this paper, a framework for performing graphics animation and compositing operations has a layer tree for interfacing with the application and a render tree for interfacing with a render engine; layers in the layer tree can be content, windows, views, video, images, text, media, or any other type of object for a user interface of an application.
Abstract: A framework for performing graphics animation and compositing operations has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, text, media, or any other type of object for a user interface of an application. The application commits changes to the state of the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, an animation is determined for animating the change in state. In determining the animation, the framework can define a set of predetermined animations based on motion, visibility, and transition. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer for display on the computer system. Those portions of the render tree that have changed relative to prior versions can be tracked to improve resource management.
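A toy sketch of the layer-tree/render-tree split may help: the application only commits a new property value, and the framework, not the application, synthesizes the interpolation frames applied to the render tree. All class and method names here are hypothetical, not from the patent.

```python
import copy

class Layer:
    def __init__(self, **props):
        self.props = props          # e.g. position, opacity
        self.children = []

class Framework:
    """Toy layer-tree/render-tree split: the app commits property changes,
    and the framework synthesizes interpolation steps for the render tree."""
    def __init__(self, layer_tree, fps=60, duration=0.25):
        self.layer_tree = layer_tree
        self.render_tree = copy.deepcopy(layer_tree)
        self.frames = int(fps * duration)

    def commit(self, layer, render_layer):
        # No explicit animation code in the app: diff the committed state
        # against the render tree and emit interpolated frames.
        for key, target in layer.props.items():
            start = render_layer.props[key]
            for i in range(1, self.frames + 1):
                t = i / self.frames
                render_layer.props[key] = start + (target - start) * t
                # ...a render engine would draw the render tree here...

root = Layer(x=0.0, opacity=1.0)
fw = Framework(root)
root.props["x"] = 100.0           # application changes state...
fw.commit(root, fw.render_tree)   # ...framework animates it implicitly
print(fw.render_tree.props["x"])  # -> 100.0 once the animation completes
```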

150 citations


Journal ArticleDOI
TL;DR: An efficient implementation of a state-of-the-art high-resolution explicit scheme for the shallow water equations on graphics processing units that supports real-time visualization with both photorealistic and non-photorealistic display of the physical quantities.
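As a rough illustration of what such an explicit scheme computes, here is a one-dimensional Lax-Friedrichs step for the shallow water equations. The paper's high-resolution 2D GPU scheme is considerably more sophisticated, but it shares the cell-local update pattern that makes GPUs effective; the periodic boundaries and time step here are simplifying assumptions.

```python
import numpy as np

g = 9.81

def flux(h, hu):
    """Physical flux of the 1D shallow-water equations."""
    return np.array([hu, hu**2 / h + 0.5 * g * h**2])

def lax_friedrichs_step(h, hu, dx, dt):
    """One explicit Lax-Friedrichs update; each cell depends only on its
    neighbors (periodic via np.roll), which is why explicit schemes map
    so well onto one GPU thread per cell."""
    q = np.array([h, hu])
    f = flux(h, hu)
    qL, qR = np.roll(q, 1, axis=1), np.roll(q, -1, axis=1)
    fL, fR = np.roll(f, 1, axis=1), np.roll(f, -1, axis=1)
    qn = 0.5 * (qL + qR) - dt / (2 * dx) * (fR - fL)
    return qn[0], qn[1]

x = np.linspace(0, 1, 200)
h = np.where(x < 0.5, 2.0, 1.0)   # dam-break initial condition
hu = np.zeros_like(x)
for _ in range(100):
    h, hu = lax_friedrichs_step(h, hu, dx=x[1] - x[0], dt=1e-3)
```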

149 citations


Journal ArticleDOI
TL;DR: The bilinear model is applied to natural spatiotemporal phenomena, including face, body, and cloth motion data, and compared in terms of compaction, generalization ability, predictive precision, and efficiency to existing models.
Abstract: A variety of dynamic objects, such as faces, bodies, and cloth, are represented in computer graphics as a collection of moving spatial landmarks. Spatiotemporal data is inherent in a number of graphics applications including animation, simulation, and object and camera tracking. The principal modes of variation in the spatial geometry of objects are typically modeled using dimensionality reduction techniques, while concurrently, trajectory representations like splines and autoregressive models are widely used to exploit the temporal regularity of deformation. In this article, we present the bilinear spatiotemporal basis as a model that simultaneously exploits spatial and temporal regularity while maintaining the ability to generalize well to new sequences. This factorization allows the use of analytical, predefined functions to represent temporal variation (e.g., B-Splines or the Discrete Cosine Transform) resulting in efficient model representation and estimation. The model can be interpreted as representing the data as a linear combination of spatiotemporal sequences consisting of shape modes oscillating over time at key frequencies. We apply the bilinear model to natural spatiotemporal phenomena, including face, body, and cloth motion data, and compare it in terms of compaction, generalization ability, predictive precision, and efficiency to existing models. We demonstrate the application of the model to a number of graphics tasks including labeling, gap-filling, denoising, and motion touch-up.
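A minimal sketch of one plausible reading of the model: hold the temporal basis fixed to a truncated DCT and take the spatial basis from a truncated SVD of the data, so a long sequence is summarized by a small coefficient matrix. The basis sizes and this simple projection-based fit are simplifications of the paper's estimation method.

```python
import numpy as np

def dct_basis(F, kt):
    """First kt columns of an orthonormal DCT-II basis over F frames."""
    n = np.arange(F)
    B = np.cos(np.pi * (n[:, None] + 0.5) * np.arange(kt)[None, :] / F)
    B[:, 0] *= 1 / np.sqrt(2)
    return B * np.sqrt(2.0 / F)

def bilinear_fit(X, kt, ks):
    """Fit X (F frames x P coordinates) as T @ C @ S.T with a fixed DCT
    temporal basis T and a data-driven spatial basis S (truncated SVD).
    Both bases have orthonormal columns, so C is a simple projection."""
    T = dct_basis(X.shape[0], kt)                 # temporal basis
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    S = Vt[:ks].T                                 # spatial basis
    C = T.T @ X @ S                               # coefficients (kt x ks)
    return T, C, S

F, P = 120, 90                                    # e.g. 30 landmarks in 3D
X = np.random.randn(F, P).cumsum(axis=0)          # synthetic random-walk motion
T, C, S = bilinear_fit(X, kt=10, ks=8)
X_hat = T @ C @ S.T                               # reconstruction
print("compression ratio:", C.size / X.size)      # tiny coefficient matrix
```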

135 citations


01 Jan 2012
TL;DR: XML3D as discussed by the authors is a declarative approach that leverages existing web technologies including HTML, Cascading Style Sheets (CSS), the Document Object Model (DOM), and AJAX for dynamic content.
Abstract: Web technologies provide the basis to distribute digital information worldwide and in realtime, but they have also established the Web as a ubiquitous application platform. The Web evolved from simple text data to include advanced layout, images, audio, and recently streaming video. Today, as our digital environment becomes increasingly three-dimensional (e.g. 3D cinema, 3D video, consumer 3D displays, and high-performance 3D processing even in mobile devices), it becomes obvious that we must extend the core Web technologies to support interactive 3D content. Instead of adapting existing graphics technologies to the Web, XML3D uses a more radical approach: we take today's Web technology and try to find the minimum set of additions that fully support interactive 3D content as an integral part of mixed 2D/3D Web documents. XML3D enables portable cross-platform authoring, distribution, and rendering of and interaction with 3D data. As a declarative approach, XML3D fully leverages existing web technologies including HTML, Cascading Style Sheets (CSS), the Document Object Model (DOM), and AJAX for dynamic content. All 3D content is exposed in the DOM, fully supporting DOM scripting and events, thus allowing Web designers to easily apply their existing skills. The design of XML3D is based on modern programmable graphics hardware; e.g. it supports efficient mapping to GPUs without maintaining copies. It also leverages a new approach to specify shaders independently of specific rendering techniques or graphics APIs. We demonstrated the feasibility of our approach by integrating XML3D support into two major open browser frameworks from Mozilla and WebKit as well as providing a portable implementation based on JavaScript and WebGL.

128 citations


Proceedings ArticleDOI
07 Oct 2012
TL;DR: Two example applications are shown that are enabled by the unique capabilities of the Beamatron, an augmented reality game in which a player can drive a virtual toy car around a room, and a ubiquitous computing demo that uses speech and gesture to move projected graphics throughout the room.
Abstract: Steerable displays use a motorized platform to orient a projector to display graphics at any point in the room. Often a camera is included to recognize markers and other objects, as well as user gestures in the display volume. Such systems can be used to superimpose graphics onto the real world, and so are useful in a number of augmented reality and ubiquitous computing scenarios. We contribute the Beamatron, which advances steerable displays by drawing on recent progress in depth camera-based interactions. The Beamatron consists of a computer-controlled pan and tilt platform on which is mounted a projector and Microsoft Kinect sensor. While much previous work with steerable displays deals primarily with projecting corrected graphics onto a discrete set of static planes, we describe computational techniques that enable reasoning in 3D using live depth data. We show two example applications that are enabled by the unique capabilities of the Beamatron: an augmented reality game in which a player can drive a virtual toy car around a room, and a ubiquitous computing demo that uses speech and gesture to move projected graphics throughout the room.

Proceedings ArticleDOI
22 Oct 2012
TL;DR: Results showed similar error performance between modes for all measures, indicating that the vibro-audio interface is a viable multimodal solution for providing access to dynamic visual information and supporting accurate spatial learning and the development of mental representations of graphical material.
Abstract: This paper evaluates an inexpensive and intuitive approach for providing non-visual access to graphic material, called a vibro-audio interface. The system works by allowing users to freely explore graphical information on the touchscreen of a commercially available tablet and synchronously triggering vibration patterns and auditory information whenever an on-screen visual element is touched. Three studies were conducted that assessed legibility and comprehension of the relative relations and global structure of a bar graph (Exp 1), pattern recognition via a letter identification task (Exp 2), and orientation discrimination of geometric shapes (Exp 3). Performance with the touch-based device was compared to the same tasks performed using standard hardcopy tactile graphics. Results showed similar error performance between modes for all measures, indicating that the vibro-audio interface is a viable multimodal solution for providing access to dynamic visual information and supporting accurate spatial learning and the development of mental representations of graphical material.
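A minimal sketch of the interface's core loop, hit-testing the touch point against on-screen elements and triggering feedback, assuming a hypothetical `device` object standing in for the tablet's vibration and speech APIs:

```python
from dataclasses import dataclass

@dataclass
class Element:
    x: float; y: float; w: float; h: float
    label: str

    def contains(self, tx, ty):
        return (self.x <= tx <= self.x + self.w and
                self.y <= ty <= self.y + self.h)

def on_touch_moved(tx, ty, elements, device):
    """Fire vibration + speech whenever the finger is over a graphic
    element, so the user can trace bars, letters, or shapes non-visually.
    `device` is a hypothetical wrapper over the platform's haptics/TTS."""
    for el in elements:
        if el.contains(tx, ty):
            device.vibrate(duration_ms=40)
            device.speak(el.label)
            return el
    return None  # finger is over empty background: no feedback
```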

Journal ArticleDOI
TL;DR: The multimedia principle states that adding graphics to text can improve student learning, but not all graphics are equally effective; the multimedia effect is therefore qualified by a version of the coherence principle: adding relevant graphics to words helps learning, but adding irrelevant graphics does not.

Book
30 Sep 2012
TL;DR: Multimedia Signals and Systems is an introductory text, designed for students or professionals and researchers in other fields, with a need to learn the basics of signals and systems.
Abstract: Multimedia signals include different data types (text, sound, graphics, pictures, animations, video, etc.), which can be time-dependent (sound, video, and animation) or spatially dependent (images, text, and graphics). Hence, multimedia systems represent an interdisciplinary cross-section of the following areas: digital signal processing, computer architecture, computer networks, and telecommunications. Multimedia Signals and Systems is an introductory text, designed for students, professionals, and researchers in other fields with a need to learn the basics of signals and systems. Considerable emphasis is placed on the analysis and processing of multimedia signals (audio, images, video). Additionally, the book connects these principles to other important elements of multimedia systems such as the analysis of optical media, computer networks, QoS, and digital watermarking.

Patent
12 Mar 2012
TL;DR: In this article, the playback apparatus realizes stereoscopic viewing by overlaying planar or stereoscopic graphics over stereoscopic video in a way that reduces eye strain using following method in abstract: a graphics plane holds data composed of graphics data.
Abstract: The playback apparatus realizes stereoscopic viewing by overlaying planar or stereoscopic graphics over stereoscopic video in a way that reduces eye strain using following method in abstract: A graphics plane holds therein data composed of graphics data. A shift engine shifts, in a case when a composition unit composites the graphics data with a left-view video frame, coordinates of each of the pixels is shifted in a first horizontal direction, and in a case when the composition unit composites the graphics data with a right-view video frame, coordinates of each of the pixels is shifted in a second horizontal direction that is opposite to the first direction.

Journal ArticleDOI
TL;DR: A framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand is presented and results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive.
Abstract: We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, conveying spatial imprecision and for enhancing aesthetic and narrative qualities of visualization. We evaluate user perception of sketchiness of areal features through a series of stimulus-response tests in order to assess users' ability to place sketchiness on a ratio scale, and to estimate area. Results suggest relative area judgment is compromised by sketchy rendering and that its influence is dependent on the shape being rendered. They show that degree of sketchiness may be judged on an ordinal scale but that its judgment varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or its use for an indication of general uncertainty.
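One simple way to realize a sketchy line primitive of the kind the framework redefines is to resample the segment and jitter the samples by a sketchiness parameter, drawing several overlapping strokes. This is an illustrative approach, not the paper's actual Processing renderer:

```python
import numpy as np

def sketchy_line(p0, p1, sketchiness=2.0, strokes=2, samples=12, rng=None):
    """Return `strokes` jittered polylines approximating segment p0-p1.
    Larger `sketchiness` displaces points further from the true line."""
    rng = rng or np.random.default_rng()
    t = np.linspace(0.0, 1.0, samples)[:, None]
    base = (1 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)
    polylines = []
    for _ in range(strokes):
        jitter = rng.normal(0.0, sketchiness, size=base.shape)
        jitter[0] = jitter[-1] = 0.0        # keep endpoints anchored
        polylines.append(base + jitter)
    return polylines

# Two wobbly strokes standing in for one crisp bar-chart edge.
for stroke in sketchy_line((0, 0), (100, 0), sketchiness=3.0):
    print(stroke[:3])  # hand the points to any polyline renderer
```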

Patent
07 Jun 2012
TL;DR: In this article, an input device for processing video information has an input unit (112) for receiving the video information having low dynamic range [LDR] video data and a video processor (113) for generating a display signal for display in a LDR display mode or HDR display mode.
Abstract: A device for processing video information has an input unit (112) for receiving the video information having low dynamic range [LDR] video data and/or high dynamic range [HDR] video data, and a video processor (113) for generating a display signal for display in an LDR or HDR display mode. Graphics data is processed to generate an overlay for overlaying the video data. The input unit receives graphics processing control data comprised in the video information, the graphics processing control data including at least one HDR processing instruction for overlaying the graphics data in the HDR display mode. The video processor adapts the processing when overlaying the graphics data in dependence on the specific display mode and the HDR processing instruction. Advantageously, the source of the video information is thereby enabled to control the processing of graphics in HDR display mode via the HDR processing instruction.

Proceedings ArticleDOI
05 May 2012
TL;DR: This work presents the PolyZoom technique, where users progressively build hierarchies of focus regions, stacked on each other such that each subsequent level shows a higher magnification.
Abstract: The most common techniques for navigating in multiscale visual spaces are pan, zoom, and bird's eye views. However, these techniques are often tedious and cumbersome to use, especially when objects of interest are located far apart. We present the PolyZoom technique where users progressively build hierarchies of focus regions, stacked on each other such that each subsequent level shows a higher magnification. Correlation graphics show the relation between parent and child viewports in the hierarchy. To validate the new technique, we compare it to standard navigation techniques in two user studies, one on multiscale visual search and the other on multifocus interaction. Results show that PolyZoom performs better than current standard techniques.
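The essence of PolyZoom, a stack of focus regions where each level magnifies a sub-rectangle of its parent, can be sketched as follows; the class names and the single-axis magnification measure are simplifications:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float; y: float; w: float; h: float

class PolyZoomStack:
    """Stack of focus regions; each viewport magnifies a sub-rectangle of
    the level above it, as in PolyZoom (simplified)."""
    def __init__(self, world: Rect):
        self.levels = [world]

    def push_focus(self, r: Rect):
        parent = self.levels[-1]
        # The child region must lie inside its parent.
        assert (parent.x <= r.x and parent.y <= r.y and
                r.x + r.w <= parent.x + parent.w and
                r.y + r.h <= parent.y + parent.h)
        self.levels.append(r)

    def magnification(self):
        # Magnification of the deepest level relative to the full world:
        # the product of the per-level zoom factors.
        return self.levels[0].w / self.levels[-1].w

stack = PolyZoomStack(Rect(0, 0, 10000, 10000))
stack.push_focus(Rect(2000, 2000, 1000, 1000))   # 10x
stack.push_focus(Rect(2100, 2100, 100, 100))     # 100x overall
print(stack.magnification())                     # -> 100.0
```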

Patent
20 Dec 2012
TL;DR: In this paper, a dynamic GPU allocation system (DGAS) is proposed, whose virtualization logic, running on a server computing system, computes GPU benefit factors (GBFs) for the virtual machines on a dynamic basis and combines the computed GBFs with static priorities to determine a ranked ordering of virtual machines.
Abstract: Methods, techniques, and systems for dynamically allocating graphics processing units among virtual machines are provided. Example embodiments provide a dynamic GPU allocation system (“DGAS”), which enables the efficient allocation of physical GPU resources to one or more virtual machines. In one embodiment, the DGAS comprises virtualization logic running on a server computing system that computes GPU benefit factors (GBFs) for the virtual machines on a dynamic basis, and combines the computed GBFs with static priorities to determine a ranked ordering of virtual machines. The available GPU resources are then allocated to some subset of these ranked virtual machines as physical GPU capacity is matched with the requirements of the subset. Physical GPU resources are thus allocated to the subset of virtual machines that have the highest promise of GPU utilization.
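A toy version of the ranking-and-allocation step might look like this; the blend weight, score formula, and greedy grant policy are assumptions for illustration, not the patent's exact method:

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    static_priority: float   # configured by the administrator
    gpu_benefit: float       # measured dynamically, e.g. from workload stats
    demand: float            # GPU capacity the VM would consume

def allocate(vms, capacity, w=0.5):
    """Rank VMs by a blend of dynamic benefit and static priority, then
    grant physical GPU capacity greedily until it runs out."""
    ranked = sorted(vms, key=lambda v: w * v.gpu_benefit +
                                       (1 - w) * v.static_priority,
                    reverse=True)
    grants, left = {}, capacity
    for vm in ranked:
        if vm.demand <= left:
            grants[vm.name] = vm.demand
            left -= vm.demand
    return grants

vms = [VM("cad", 0.9, 0.8, 0.5), VM("web", 0.2, 0.1, 0.2),
       VM("ml", 0.5, 0.95, 0.6)]
print(allocate(vms, capacity=1.0))  # -> {'cad': 0.5, 'web': 0.2} here:
                                    # 'ml' ranks second but doesn't fit
```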

Proceedings ArticleDOI
16 Jun 2012
TL;DR: A compact data structure with cache-efficient memory layout for the representation of graph instances that are based on regular N-D grids with topologically identical neighborhood systems is proposed, allowing for 3 to 12 times higher grid resolutions and a 3- to 9-fold speedup compared to existing approaches.
Abstract: Finding minimal cuts on graphs with a grid-like structure has become a core task for solving many computer vision and graphics related problems. However, computation speed and memory consumption oftentimes limit the effective use in applications requiring high resolution grids or interactive response. In particular, memory bandwidth represents one of the major bottlenecks even in today's most efficient implementations. We propose a compact data structure with cache-efficient memory layout for the representation of graph instances that are based on regular N-D grids with topologically identical neighborhood systems. For this common class of graphs our data structure allows for 3 to 12 times higher grid resolutions and a 3- to 9-fold speedup compared to existing approaches. Our design is agnostic to the underlying algorithm, and hence orthogonal to other optimizations such as parallel and hierarchical processing. We evaluate the performance gain on a variety of typical problems including 2D/3D segmentation, colorization, and stereo. All experiments show an unconditional improvement in terms of speed and memory consumption, with graceful performance degradation for graphs with increasing topological irregularities.
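The underlying storage idea, that a regular grid's topology never needs to be stored, can be sketched briefly; the paper's actual layout adds cache-aware blocking and is far more tuned than this:

```python
import numpy as np

class GridGraph2D:
    """Edge capacities of a 4-connected grid stored as one dense array per
    direction; neighbors are implied by the grid, so no adjacency lists or
    per-edge pointers are needed."""
    OFFSETS = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}  # E, W, S, N

    def __init__(self, h, w):
        self.h, self.w = h, w
        self.cap = np.zeros((4, h, w), dtype=np.float32)

    def neighbor(self, y, x, direction):
        dy, dx = self.OFFSETS[direction]
        ny, nx = y + dy, x + dx
        if 0 <= ny < self.h and 0 <= nx < self.w:
            return ny, nx
        return None   # edge leaves the grid

g = GridGraph2D(480, 640)
g.cap[0, 10, 10] = 3.5          # capacity of the eastward edge at (10, 10)
print(g.neighbor(10, 10, 0))    # -> (10, 11), computed, not stored
```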

Patent
11 Jan 2012
TL;DR: In this article, a scaler in the data processing system performs scaling operations on the image data in the first framebuffer, stores the scaled data in a second framebuffer and displays an image generated from the scaled image data on an external display device coupled to the system.
Abstract: A data processing system composites graphics content, generated by an application program running on the data processing system, to generate image data. The data processing system stores the image data in a first framebuffer and displays an image generated from the image data in the first framebuffer on an internal display device of the data processing system. A scaler in the data processing system performs scaling operations on the image data in the first framebuffer, stores the scaled image data in a second framebuffer and displays an image generated from the scaled image data in the second framebuffer on an external display device coupled to the data processing system. The scaler performs the scaling operations asynchronously with respect to the compositing of the graphics content. The data processing system automatically mirrors the image on the external display device unless the application program is publishing additional graphics content for display on the external display device.

Journal ArticleDOI
TL;DR: A review of techniques for making photo-realistic or artistic computer-generated imagery from videos, as well as methods for creating summary and/or abstract visual representations to reveal important features and events in videos.
Abstract: In recent years, a collection of new techniques that deal with video as input data has emerged in computer graphics and visualization. In this survey, we report the state of the art in video-based graphics and video visualization. We provide a review of techniques for making photo-realistic or artistic computer-generated imagery from videos, as well as methods for creating summary and/or abstract visual representations to reveal important features and events in videos. We provide a new taxonomy to categorize the concepts and techniques in this newly emerged body of knowledge. To support this review, we also give a concise overview of the major advances in automated video analysis, as some techniques in this field (e.g. feature extraction, detection, tracking and so on) have been featured in video-based modelling and rendering pipelines for graphics and visualization. © 2012 Wiley Periodicals, Inc.

Journal ArticleDOI
16 Oct 2012
TL;DR: An advanced low-latency remote rendering system that assists mobile devices in rendering interactive 3D graphics in real-time, taking advantage of an image-based rendering technique, 3D image warping, to synthesize the mobile display from the depth images generated on the server.
Abstract: Mobile devices are gradually changing people's computing behaviors. However, due to the limitations of physical size and power consumption, they are not capable of delivering a 3D graphics rendering experience comparable to desktops. Many applications with intensive graphics rendering workloads are unable to run on mobile platforms directly. This issue can be addressed with the idea of remote rendering: the heavy 3D graphics rendering computation runs on a powerful server and the rendering results are transmitted to the mobile client for display. However, the simple remote rendering solution inevitably suffers from the large interaction latency caused by wireless networks, and is not acceptable for many applications that have very strict latency requirements. In this article, we present an advanced low-latency remote rendering system that assists mobile devices to render interactive 3D graphics in real-time. Our design takes advantage of an image-based rendering technique, 3D image warping, to synthesize the mobile display from the depth images generated on the server. The research indicates that the system can successfully reduce the interaction latency while maintaining high rendering quality by generating multiple depth images at carefully selected viewpoints. We study the problem of viewpoint selection, propose a real-time reference viewpoint prediction algorithm, and evaluate the algorithm performance with real-device experiments.
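A bare-bones version of 3D image warping, unproject each pixel with its depth, re-pose, and reproject, looks like this. It uses nearest-pixel splatting with no hole filling or z-buffering, which is precisely why systems like the one described render multiple reference viewpoints; the matrix conventions are assumptions.

```python
import numpy as np

def warp_depth_image(color, depth, K, T_new_from_ref):
    """Forward-warp a server-rendered color+depth image to a new viewpoint.
    K is the 3x3 pinhole intrinsic matrix; T_new_from_ref is a 4x4 rigid
    transform from the reference camera frame to the new one."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
    pts = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)   # unproject
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    cam = (T_new_from_ref @ pts_h)[:3]                        # re-pose
    proj = K @ cam                                            # reproject
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros_like(color)
    ok = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out[v[ok], u[ok]] = color.reshape(-1, color.shape[-1])[ok]
    return out

# Identity pose: the warped image reproduces the reference pixels.
color = np.zeros((4, 4, 3), np.uint8); color[2, 2] = 255
depth = np.ones((4, 4))
K = np.array([[2.0, 0, 2.0], [0, 2.0, 2.0], [0, 0, 1.0]])
print(warp_depth_image(color, depth, K, np.eye(4))[2, 2])     # -> [255 255 255]
```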

Journal ArticleDOI
TL;DR: The detrendeR package should make it easier to perform detrending and chronology building of tree-ring series, taking advantage of the R statistical programming environment.

Patent
31 Jan 2012
TL;DR: In this article, a method and system for rendering graphics based on user customizations in computer graphics application is described, where the customizations relate to various properties of one or more graphical elements in the graphic.
Abstract: A method and system for rendering graphics based on user customizations in a computer graphics application are disclosed. The customizations relate to various properties of one or more graphical elements in the graphic. Such properties include positioning, size, formatting and other visual attributes associated with the graphical elements. These properties may be defined as either semantic properties or presentation properties. Semantic properties are persistent across all graphic definitions. Presentation properties are specific to the graphic definition to which each particular graphic belongs. Thus, a customization to a semantic property of a displayed graphic is preserved in memory for application not only to the currently displayed graphic, but also to all other graphic definitions that may be displayed in the future. In contrast, a customization to a presentation property is only preserved for the currently displayed graphic, and thus not preserved for all other graphic definitions.
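A toy model of the semantic/presentation split: semantic customizations survive a switch of graphic definition, while presentation customizations are keyed to the definition in which they were made. All names here are hypothetical:

```python
class Graphic:
    """Customizations split into semantic properties (persist across every
    graphic definition) and presentation properties (kept only for the
    definition they were made in), as in the patent's model (simplified)."""
    def __init__(self):
        self.semantic = {}                  # e.g. element text, data order
        self.presentation_by_def = {}       # e.g. exact pixel offsets
        self.current_def = "org_chart"

    def customize(self, prop, value, semantic):
        if semantic:
            self.semantic[prop] = value
        else:
            per_def = self.presentation_by_def.setdefault(self.current_def, {})
            per_def[prop] = value

    def switch_definition(self, new_def):
        self.current_def = new_def          # semantic props carry over

    def effective_props(self):
        props = dict(self.semantic)
        props.update(self.presentation_by_def.get(self.current_def, {}))
        return props

g = Graphic()
g.customize("title", "Q3 Plan", semantic=True)
g.customize("node_offset", (4, 0), semantic=False)
g.switch_definition("pyramid")
print(g.effective_props())   # {'title': 'Q3 Plan'}: the offset was layout-specific
```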

Journal ArticleDOI
TL;DR: This work advocates a tighter integration of human computation into online, interactive algorithms and presents three specific examples for the design of micro perceptual human computation algorithms to extract depth layers and image normals from a single photograph, and to augment an image with high-level semantic information such as symmetry.
Abstract: Human Computation (HC) utilizes humans to solve problems or carry out tasks that are hard for pure computational algorithms. Many graphics and vision problems have such tasks. Previous HC approaches mainly focus on generating data in batch, to gather benchmarks, or perform surveys demanding nontrivial interactions. We advocate a tighter integration of human computation into online, interactive algorithms. We aim to distill the differences between humans and computers and maximize the advantages of both in one algorithm. Our key idea is to decompose such a problem into a massive number of very simple, carefully designed, human micro-tasks that are based on perception, and whose answers can be combined algorithmically to solve the original problem. Our approach is inspired by previous work on micro-tasks and perception experiments. We present three specific examples for the design of micro perceptual human computation algorithms to extract depth layers and image normals from a single photograph, and to augment an image with high-level semantic information such as symmetry.
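As a toy example of the "combine algorithmically" step, pairwise "which region is closer?" micro-task answers can be reduced to a global depth ordering by counting wins; the paper's actual aggregation for depth layers is more involved than this sketch.

```python
from collections import Counter

def depth_order(regions, answers):
    """Combine pairwise 'which is closer?' micro-task tallies into a global
    depth ordering by counting comparisons each region wins."""
    wins = Counter()
    for (a, b), votes in answers.items():
        closer = a if votes[a] >= votes[b] else b
        wins[closer] += 1
    return sorted(regions, key=lambda r: -wins[r])

regions = ["sky", "building", "person"]
answers = {                      # tallies of crowd answers per pair
    ("sky", "building"): {"sky": 2, "building": 9},
    ("sky", "person"): {"sky": 1, "person": 10},
    ("building", "person"): {"building": 3, "person": 8},
}
print(depth_order(regions, answers))  # -> ['person', 'building', 'sky']
```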

Patent
David Wyatt, Thomas E. Dewey
26 Mar 2012
TL;DR: In this paper, a method and apparatus for supporting a self-refreshing display device coupled to a graphics controller are described, along with a technique for setting the operating state of the graphics controller during initialization from a deep sleep state.
Abstract: A method and apparatus for supporting a self-refreshing display device coupled to a graphics controller are disclosed. A technique for setting the operating state of the graphics controller during initialization from a deep sleep state is described. The graphics controller may set the operating state based on a signal that controls whether the graphics controller executes a warm-boot initialization procedure or a cold-boot initialization procedure. In the warm-boot initialization procedure, instructions and values stored in a non-volatile memory connected to the graphics controller may be used to set the operating state of the graphics controller. In one embodiment, the graphics controller may determine whether any changes have been made to the physical configuration of the computer system and, if the physical configuration has changed, the graphics controller may set the operating state based on values received from a software driver.

Proceedings ArticleDOI
Jeremy Laviole, Martin Hachet
04 Mar 2012
TL;DR: A system that allows users to visualize, manipulate, and edit a 3D scene projected onto a paper sheet; by combining computer-assisted drawing and free-form user expressiveness on a standard sheet of paper, it opens new perspectives for enhancing user creation.
Abstract: Standard physical pen-and-paper creation and computer graphics tools tend to evolve in separate tracks. In this paper, we propose a new interface, PapARt, that bridges the gap between these two worlds. We developed a system that allows users to visualize, manipulate and edit a 3D scene projected onto a paper sheet. Using multitouch and tangible interfaces, users can directly interact with the 3D scene to prepare their drawings. Then, thanks to the projection of the 3D scene directly onto the final surface medium, they can draw using standard tools while relying on the underlying 3D scene. Hence, users benefit from both the power of interactive 3D graphics and fast and easy interaction metaphors, while keeping a direct link with the physical material. PapARt has been tested during a large-scale exhibition for the general public. Such an interface, which combines computer-assisted drawing and free-form user expressiveness on a standard sheet of paper, opens new perspectives for enhancing user creation.

Proceedings ArticleDOI
TL;DR: This paper describes how ArrayFire enables development of GPU computing applications and highlights some of its key functionality using examples of how it works in real code.
Abstract: ArrayFire is a GPU matrix library for the rapid development of general purpose GPU (GPGPU) computing applications within C, C++, Fortran, and Python. ArrayFire contains a simple API and provides full GPU compute capability on CUDA and OpenCL capable devices. ArrayFire provides thousands of GPU-tuned functions including linear algebra, convolutions, reductions, and FFTs as well as signal, image, statistics, and graphics libraries. We will further describe how ArrayFire enables development of GPU computing applications and highlight some of its key functionality using examples of how it works in real code.
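A short example of the flavor of ArrayFire code, shown here with the arrayfire-python bindings; the function names follow my recollection of that wrapper and should be checked against the installed version:

```python
import arrayfire as af   # ArrayFire's Python bindings

af.info()                        # report the CUDA/OpenCL device in use

a = af.randu(1024, 1024)         # arrays are allocated on the device
b = af.randu(1024, 1024)

c = af.matmul(a, b)              # GPU-tuned linear algebra
s = af.sum(c)                    # reduction executed on the device
f = af.fft(af.randu(4096))       # one of the built-in signal routines

print("sum of product entries:", s)
```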

Journal ArticleDOI
TL;DR: An augmented reality visualization interface that simultaneously presents visual and laser sensor information, further enhanced by stereoscopic viewing and 3-D graphics, is proposed to enable an operator to intuitively comprehend scene layout and proximity information and so respond in an accurate and timely manner.
Abstract: This paper proposes an augmented reality visualization interface that simultaneously presents visual and laser sensor information, further enhanced by stereoscopic viewing and 3-D graphics. The use of graphic elements is proposed to represent laser measurements that are aligned to video information in 3-D space. This methodology enables an operator to intuitively comprehend scene layout and proximity information and so to respond in an accurate and timely manner. The use of graphic elements to assist teleoperation, sometimes discussed in the literature, is here proposed following an innovative approach that aligns virtual and real objects in 3-D space and colors them suitably to facilitate comprehension of object proximity during navigation. This paper builds on the authors' previous experience with stereoscopic teleoperation. The approach is evaluated on a real telerobotic system, where a user operates a mobile robot located several kilometers away. The results show the simplicity and effectiveness of the proposed approach.
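The overlay idea can be sketched as projecting laser returns through a pinhole camera model and color-coding them by distance, red for near and green for far; the real system also aligns the graphics in stereo 3-D, which this sketch omits, and the color mapping here is an assumption.

```python
import numpy as np

def overlay_laser(image, laser_pts_cam, K, near=0.5, far=5.0):
    """Draw laser range returns over an RGB camera image, colored by
    proximity (red = close, green = far). Assumes the points are already
    expressed in the camera frame."""
    proj = K @ laser_pts_cam.T                       # pinhole projection
    u = (proj[0] / proj[2]).astype(int)
    v = (proj[1] / proj[2]).astype(int)
    dist = np.linalg.norm(laser_pts_cam, axis=1)
    t = np.clip((dist - near) / (far - near), 0.0, 1.0)
    colors = np.stack([(1 - t) * 255, t * 255, np.zeros_like(t)], axis=1)
    h, w = image.shape[:2]
    ok = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    image[v[ok], u[ok]] = colors[ok].astype(image.dtype)
    return image

img = np.zeros((480, 640, 3), np.uint8)
K = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])
pts = np.array([[0.0, 0.0, 1.0], [0.3, 0.1, 4.0]])   # near and far returns
img = overlay_laser(img, pts, K)
```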