scispace - formally typeset

Showing papers on "Software rendering published in 2010"


Book
12 Jul 2010
TL;DR: Physically Based Rendering, 2nd Edition describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation, and combines human-readable documentation and source code into a single reference that is specifically designed to aid comprehension.
Abstract: Physically Based Rendering, 2nd Edition describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation. A method - known as 'literate programming' - combines human-readable documentation and source code into a single reference that is specifically designed to aid comprehension. The result is a stunning achievement in graphics education. Through the ideas and software in this book, you will learn to design and employ a full-featured rendering system for creating stunning imagery. New sections on subsurface scattering, Metropolis light transport, precomputed light transport, multispectral rendering, and much more. Includes a companion site complete with source code for the rendering system described in the book, with support for Windows, OS X, and Linux. Please visit www.pbrt.org. Code and text are tightly woven together through a unique indexing feature that lists each function, variable, and method on the page where it is first described.

349 citations


Patent
08 Mar 2010
TL;DR: In this article, the authors propose an approach to optimize the operation of a media player during rendering of media files, which includes authoring software to create a data structure and to populate the created data structure with obtained metadata.
Abstract: Optimizing operation of a media player during rendering of media files. The invention includes authoring software to create a data structure and to populate the created data structure with obtained metadata. The invention also includes rendering software to retrieve the metadata from the data structure and to identify media files to render. In one embodiment, the invention is operable as part of a compressed media format having a set of small files containing metadata, menus, and playlists in a compiled binary format designed for playback on feature-rich personal computer media players as well as low cost media players.

321 citations


Patent
30 Dec 2010
TL;DR: In this article, the authors describe a system where software rendering software is interposed in the data communication path between a browser running on a user computer and the internet data sources (for example, internet accessible server computers) that the user browser wants to receive information from.
Abstract: A computer network communication method and system wherein software rendering software is interposed in the data communication path between a browser running on a user computer and the internet data sources (for example, internet-accessible server computers) that the user browser wants to receive information from. The software rendering application gets data from internet data sources, but this data may contain malware. To provide enhanced security, the software rendering application renders this data to form a new browser-readable code set (for example, an xml page with CSS layers), and this new and safe browser-readable code set is sent along to the browser on the user computer for appropriate presentation to the user. As part of the rendering process, dedicated and distinct virtual machines may be used to render certain portions of the data, such as executable code. These virtual machines may be watched, and quickly destroyed if it is detected that they have encountered some type of malware.

179 citations


Proceedings ArticleDOI
01 Jan 2010
TL;DR: This paper presents the Tuvok architecture, a cross-platform open-source volume rendering system that delivers high quality, state of the art renderings at production level code quality, and is the first open source system to deliver all of these capabilities at once.
Abstract: In this paper we present the Tuvok architecture, a cross-platform open-source volume rendering system that delivers high quality, state-of-the-art renderings at production-level code quality. Due to its progressive rendering algorithm, Tuvok can interactively visualize arbitrarily large data sets even on low-end 32-bit systems, though it can also take full advantage of high-end workstations with large amounts of memory and modern GPUs. To achieve this Tuvok uses an optimized out-of-core, bricked, level-of-detail data representation. From a software development perspective, Tuvok is composed of three independent components: a UI subsystem based on Qt, a rendering subsystem based on OpenGL and DirectX, and an IO subsystem. The IO subsystem not only handles the out-of-core data processing and paging but also includes support for many widely used file formats such as DICOM and ITK volumes. For rendering, Tuvok implements a wide variety of different rendering methods, ranging from 2D texture stack based approaches for low-end hardware, to 3D slice based implementations and GPU based ray casters. All of these modes work with one- and multi-dimensional transfer functions, as well as isosurface and ClearView rendering modes. We also present ImageVis3D, a volume rendering application that uses the Tuvok subsystems. While these features may be found individually in other volume rendering packages, to the best of our knowledge this is the first open source system to deliver all of these capabilities at once.

73 citations




Proceedings ArticleDOI
19 Feb 2010
TL;DR: FreePipe is presented, a system for programmable parallel rendering that can run entirely on current graphics hardware and has performance comparable with the traditional graphics pipeline.
Abstract: In the past decade, modern GPUs have provided increasing programmability with vertex, geometry and fragment shaders. However, many classical problems have not been efficiently solved using the current graphics pipeline, where some stages are still fixed functions on chip. In particular, multi-fragment effects, especially order-independent transparency, require programmability of the blending stage, which makes them difficult to solve in a single geometry pass. In this paper we present FreePipe, a system for programmable parallel rendering that can run entirely on current graphics hardware and has performance comparable with the traditional graphics pipeline. Within this framework, two schemes for the efficient rendering of multi-fragment effects in a single geometry pass have been developed by exploiting CUDA atomic operations. Both schemes have achieved significant speedups compared to the state-of-the-art methods that are based on traditional graphics pipelines.

62 citations
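The single-geometry-pass idea can be illustrated on the CPU: collect every fragment that lands on a pixel (FreePipe performs this append with CUDA atomic operations), then sort by depth and blend. A minimal Python sketch under that reading, with an invented two-fragment scene:

```python
# CPU-side sketch of order-independent transparency via per-pixel
# fragment lists, the idea FreePipe implements with CUDA atomics.
# All names and the tiny scene are illustrative, not from the paper.

def submit_fragment(buffers, x, y, depth, rgba):
    """Append a fragment; on the GPU this append is an atomic operation."""
    buffers.setdefault((x, y), []).append((depth, rgba))

def resolve_pixel(fragments, background=(0.0, 0.0, 0.0)):
    """Sort fragments back-to-front, then alpha-blend over the background."""
    r, g, b = background
    for depth, (fr, fg, fb, fa) in sorted(fragments, reverse=True):
        r = fr * fa + r * (1.0 - fa)
        g = fg * fa + g * (1.0 - fa)
        b = fb * fa + b * (1.0 - fa)
    return (r, g, b)

buffers = {}
# Two transparent fragments land on the same pixel, in arbitrary order.
submit_fragment(buffers, 0, 0, depth=0.8, rgba=(1.0, 0.0, 0.0, 0.5))
submit_fragment(buffers, 0, 0, depth=0.2, rgba=(0.0, 0.0, 1.0, 0.5))
color = resolve_pixel(buffers[(0, 0)])
```

Because the per-pixel list absorbs fragments in any submission order, the geometry needs only one pass; the ordering cost moves to the resolve step.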


Proceedings ArticleDOI
21 Jun 2010
TL;DR: It is argued that a multi-GPU MapReduce library is a good fit for parallel volume rendering because it is easy to program for, scales well, and eliminates the need to focus on I/O algorithms, thus allowing the focus to be on visualization algorithms instead.
Abstract: In this paper we present a multi-GPU parallel volume rendering implementation built using the MapReduce programming model. We give implementation details of the library, including specific optimizations made for our rendering and compositing design. We analyze the theoretical peak performance and bottlenecks for all tasks required and show that our system significantly reduces computation as a bottleneck in the ray-casting phase. We demonstrate that our rendering speeds are adequate for interactive visualization (our system is capable of rendering a 1024³ floating-point sampled volume in under one second using 8 GPUs), and that our system is capable of delivering both in-core and out-of-core visualizations. We argue that a multi-GPU MapReduce library is a good fit for parallel volume rendering because it is easy to program for, scales well, and eliminates the need to focus on I/O algorithms, thus allowing the focus to be on visualization algorithms instead. We show that our system scales with respect to the size of the volume, and (given enough work) the number of GPUs.

61 citations
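The map/reduce split described above can be sketched in miniature: map ray-casts one brick into a partial result keyed by screen tile, and reduce composites the partials front-to-back by depth. Everything below (the one-pixel "images", the brick records) is an illustrative assumption, not the paper's API:

```python
# Toy sketch of a MapReduce decomposition of volume rendering:
# map_brick() stands in for ray-casting one data brick, and
# reduce_tile() composites per-brick partials with front-to-back "over".

def map_brick(brick):
    """Emit (tile_key, partial_result) for one ray-cast brick."""
    return (brick["tile"], (brick["depth"], brick["rgba"]))

def reduce_tile(partials):
    """Front-to-back 'over' compositing of per-brick partial results."""
    r = g = b = 0.0
    a = 0.0
    for depth, (br, bg, bb, ba) in sorted(partials):
        w = (1.0 - a) * ba          # remaining transparency times alpha
        r, g, b, a = r + w * br, g + w * bg, b + w * bb, a + w
    return (r, g, b, a)

bricks = [
    {"tile": (0, 0), "depth": 2.0, "rgba": (0.0, 1.0, 0.0, 0.5)},
    {"tile": (0, 0), "depth": 1.0, "rgba": (1.0, 0.0, 0.0, 0.5)},
]
grouped = {}
for tile, partial in map(map_brick, bricks):
    grouped.setdefault(tile, []).append(partial)
composited = {tile: reduce_tile(ps) for tile, ps in grouped.items()}
```

The shuffle/group step between map and reduce is exactly where a MapReduce runtime would distribute partials across GPUs.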


Proceedings ArticleDOI
Amy A. Gooch1, Jeremy Long1, Li Ji1, Anthony Estey1, Bruce Gooch1 
07 Jun 2010
TL;DR: A high-level overview of the past and current state of non-photorealistic rendering is provided, along with a call to arms to the community to create areas of research that make computation of non-photorealistic rendering generate never-before-realized results.
Abstract: The field of non-photorealistic rendering is reaching a mature state. In its infancy, researchers explored the mimicry of methods and tools used by traditional artists to generate works of art, through techniques like watercolor or oil painting simulations. As the field has moved past mimicry, ideas from artists and artistic techniques have been adapted and altered for performance in the media of computer graphics, creating algorithmic aesthetics such as generative art or the automatic composition of objects in a scene, as well as abstraction in rendering and geometry. With these two initial stages of non-photorealistic rendering well established, the field must find new territory to cover. In this paper, we provide a high-level overview of the past and current state of non-photorealistic rendering and issue a call to arms to the community to create the areas of research that make computation of non-photorealistic rendering generate never-before-realized results.

48 citations


Journal ArticleDOI
28 Jun 2010
TL;DR: A new lightweight grammar representation, F-shade, is proposed that compactly encodes facade structures and allows fast per-pixel access, together with a prototype rendering system that renders an urban model from the compact representation directly on the GPU.
Abstract: In this paper we propose a real-time rendering approach for procedural cities. Our first contribution is a new lightweight grammar representation that compactly encodes facade structures and allows fast per-pixel access. We call this grammar F-shade. Our second contribution is a prototype rendering system that renders an urban model from the compact representation directly on the GPU. Our suggested approach explores an interesting connection from procedural modeling to real-time rendering. Evaluating procedural descriptions at render time uses less memory than the generation of intermediate geometry. This enables us to render large urban models directly from GPU memory.

41 citations


Proceedings ArticleDOI
15 Dec 2010
TL;DR: Accurate global illumination rendering using GPUs is gaining attention because of the highly parallel nature of global illumination algorithms, and some major commercial rendering software has also started adopting global illumination on GPUs.
Abstract: Accurate global illumination rendering using GPUs is gaining attention because of the highly parallel nature of global illumination algorithms. For example, computing the radiance of each pixel using path tracing is embarrassingly parallel. Some major commercial rendering software has also started adopting global illumination on GPUs.

40 citations


Patent
20 Aug 2010
TL;DR: In this article, a processor of a computing device may monitor the achieved frame rate, and if the frame rate falls below a minimum threshold, the processor may note a current speed or rate of movement of the image and begin rendering less computationally complex graphic items.
Abstract: Methods and devices enable rendering of graphic images at a minimum frame rate even when processing resource limitations and rendering processing may not support the minimum frame rate presentation. While graphics are being rendered, a processor of a computing device may monitor the achieved frame rate. If the frame rate falls below a minimum threshold, the processor may note a current speed or rate of movement of the image and begin rendering less computationally complex graphic items. Rendering of less computationally complex items continues until the processor notes that the speed of rendered items is less than the noted speed. At this point, normal graphical rendering may be recommenced. The aspects may be applied to more than one type of less computationally complex item or rendering format. The various aspects may be applied to a wide variety of animations and moving graphics, as well as scrolling text, webpages, etc.
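The fallback logic the patent describes — note the current speed when the frame rate dips below the threshold, render simpler items, and resume normal rendering once motion slows below the noted speed — can be sketched as follows; the class name, threshold, and units are illustrative:

```python
# Minimal sketch of frame-rate-driven complexity fallback, as described
# in the abstract. Names and the 30 FPS default are our assumptions.

class AdaptiveRenderer:
    def __init__(self, min_fps=30.0):
        self.min_fps = min_fps
        self.simplified = False
        self.noted_speed = None

    def choose_mode(self, measured_fps, current_speed):
        """Return 'simple' or 'normal' for the next frame."""
        if not self.simplified and measured_fps < self.min_fps:
            self.simplified = True           # note speed, drop complexity
            self.noted_speed = current_speed
        elif self.simplified and current_speed < self.noted_speed:
            self.simplified = False          # motion slowed: resume normal
            self.noted_speed = None
        return "simple" if self.simplified else "normal"
```

The key property is hysteresis: the renderer does not flip back to normal rendering the moment FPS recovers, but waits until the motion itself slows.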

Patent
Toby Sharp1
19 Mar 2010
TL;DR: In this article, the authors describe an architecture for volume rendering at a data centre having a cluster of rendering servers connected using a high-bandwidth connection to a database of medical volumes.
Abstract: Architecture for volume rendering is described. In an embodiment volume rendering is carried out at a data centre having a cluster of rendering servers connected using a high bandwidth connection to a database of medical volumes. For example, each rendering server has multiple graphics processing units each with a dedicated device thread. For example, a surgeon working from home on her netbook or thin client is able to have a medical volume rendered remotely at one of the rendering servers and the resulting 2D image sent to her over a relatively low bandwidth connection. In an example a master rendering server carries out load balancing at the cluster. In an example each rendering server uses a dedicated device thread for each graphics processing unit in its control and has multiple calling threads which are able to send rendering instructions to appropriate ones of the device threads.
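The threading model described — one dedicated device thread per GPU, fed rendering instructions by calling threads — can be sketched with Python's standard library. The queue-per-GPU layout is our assumption about the plumbing, and the rendering itself is stubbed out:

```python
# Sketch of the per-GPU device-thread model: each GPU is driven by one
# dedicated thread that consumes instructions from its own queue, while
# calling threads dispatch work to the appropriate queue.
import queue
import threading

def device_thread(gpu_id, work_queue, results):
    while True:
        volume = work_queue.get()
        if volume is None:              # shutdown sentinel
            break
        # A real device thread would issue GPU rendering calls here.
        results.append((gpu_id, f"rendered {volume}"))

queues = {gpu: queue.Queue() for gpu in range(2)}
results = []
threads = [threading.Thread(target=device_thread, args=(g, q, results))
           for g, q in queues.items()]
for t in threads:
    t.start()

# Calling threads send rendering instructions to specific device threads.
queues[0].put("volume-A")
queues[1].put("volume-B")
for q in queues.values():
    q.put(None)
for t in threads:
    t.join()
```

Confining each GPU to a single thread avoids cross-thread contention on the device context, which is the usual motivation for this pattern.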

Journal ArticleDOI
09 Jun 2010
TL;DR: A flexible and highly efficient hardware-assisted volume renderer grounded on the original Projected Tetrahedra algorithm, which applies a CUDA-based visibility ordering achieving rendering and sorting performance of over 6 M Tet/s for unstructured datasets.
Abstract: We present a flexible and highly efficient hardware-assisted volume renderer grounded on the original Projected Tetrahedra (PT) algorithm. Unlike recent similar approaches, our method is exclusively based on the rasterization of simple geometric primitives and takes full advantage of graphics hardware. Both vertex and geometry shaders are used to compute the tetrahedral projection, while the volume ray integral is evaluated in a fragment shader; hence, volume rendering is performed entirely on the GPU within a single pass through the pipeline. We apply a CUDA-based visibility ordering achieving rendering and sorting performance of over 6 M Tet/s for unstructured datasets. Furthermore, as each tetrahedron is processed independently, we employ a data-parallel solution which is neither bound by GPU memory size nor does it rely on auxiliary volume information. In addition, isosurfaces can be readily extracted during the rendering process, and time-varying data are handled without extra burden.
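The visibility ordering step can be illustrated with a plain centroid-depth sort (the paper performs the ordering with CUDA; the four-vertex tuples below are made-up data):

```python
# Illustrative sketch of back-to-front visibility ordering for
# tetrahedra by view-space centroid depth. Real orderings must handle
# cycles and overlaps; this toy version sorts by mean z only.

def centroid_depth(tet):
    """Mean z of the four vertices; larger z = farther from the viewer."""
    return sum(v[2] for v in tet) / 4.0

def visibility_order(tets):
    """Back-to-front: farthest tetrahedra first."""
    return sorted(tets, key=centroid_depth, reverse=True)

near = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (0, 0, 2)]   # centroid z = 1.25
far = [(0, 0, 5), (1, 0, 5), (0, 1, 5), (0, 0, 6)]    # centroid z = 5.25
order = visibility_order([near, far])
```

Centroid sorting is only an approximation for non-convex meshes, which is one reason exact ordering is a research topic in its own right.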

Journal ArticleDOI
TL;DR: The platform's potential to speed up DAS development is demonstrated; the proposed control algorithms for actuators possess good tracking capability, and the developed ACC algorithm is capable of improving driver comfort and reducing driver workload.
Abstract: This study presents a low-cost driving simulation platform for the development of driving assistance systems (DAS). The platform uses a combination of two simulation loops: hardware-in-the-loop (HIL) and driver-in-the-loop (DIL). Its hardware consists of a simulation computer, a monitor computer, a vision computer, DAS actuators and a car mock-up. Its main software includes a monitor software running in the monitor computer, a vision rendering software running in the vision computer and a Matlab/Simulink-based simulation model running in the simulation computer. When designing its monitor software, a graphical user interface driven by an S-function is adopted to eliminate the delay in displaying the simulation data. The vision rendering software uses a parametric adjustment method based on the principle of optical projection, improving the driver's perception of being immersed in the virtual traffic scene. The application of the developed platform is demonstrated by HIL experiments of vehicle actuators and DIL experiments of an adaptive cruise control (ACC) algorithm. These experiments not only demonstrate the potential merit of the platform for speeding up DAS development, but also illustrate that the proposed control algorithms for actuators possess good tracking capability, and that the developed ACC algorithm is capable of improving driver comfort and reducing driver workload.

Proceedings ArticleDOI
19 Jul 2010
TL;DR: This work presents the implementation of a framework for parallel remote rendering using commodity Graphics Processing Units (GPUs) in the proxy servers and shows that this framework substantially improves the performance of rendering computation of 3D video.
Abstract: Demand for 3D visualization is increasing in mobile devices as users have come to expect more realistic immersive experiences. However, limited networking and computing resources on mobile devices remain challenges. A solution is to have a proxy-based framework that offloads the burden of rendering computation from mobile devices to more powerful servers. We present the implementation of a framework for parallel remote rendering using commodity Graphics Processing Units (GPUs) in the proxy servers. Experiments show that this framework substantially improves the performance of rendering computation of 3D video.

Proceedings ArticleDOI
07 Jun 2010
TL;DR: This method uses a combination of refined lines and blocks, as well as a small number of tones, to produce abstracted artistic renderings that retain sufficient elements from the original image, with user-adjustable levels of abstraction.
Abstract: Many nonphotorealistic rendering techniques exist to produce artistic effects from given images. Inspired by various artistic work such as Warhol's, interesting artistic effects can be produced by using a minimal rendering, where the minimum refers to the number of tones as well as the number and complexity of the primitives used for rendering. To achieve this goal, based on various computer vision techniques, our method uses a combination of refined lines and blocks, as well as a small number of tones, to produce abstracted artistic rendering with sufficient elements from the original image. There is always a trade-off between reducing the amount of information and the ability to represent the shape and details of the original images. Judging the level of abstraction is semantic-based, so we believe that giving users this flexibility is probably a good choice. By changing some intuitive parameters, a wide range of visually pleasing results can be produced. Our method is usually fully automatic, but a small amount of user interaction can optionally be incorporated to obtain selective abstraction.
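The "small number of tones" component can be illustrated by quantizing intensities to a few evenly spaced levels. This stand-in ignores the paper's line and block primitives and its semantic judgments; the bucket rule is our own:

```python
# Sketch of tone reduction for minimal rendering: map a grayscale
# intensity in [0, 1] to the nearest of `tones` evenly spaced levels.
# The choice of three tones mirrors the paper's "small number of tones"
# but is otherwise arbitrary.

def quantize(intensity, tones=3):
    """Snap an intensity in [0, 1] to the nearest of `tones` levels."""
    step = 1.0 / (tones - 1)
    return round(intensity / step) * step
```

Applying this per pixel already produces a Warhol-like posterization; the paper layers refined lines and blocks on top to keep shape and detail legible.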

Proceedings ArticleDOI
26 Jul 2010
TL;DR: The fundamental techniques for real-time hair rendering are explained and then alternative approaches along with tips and tricks to achieve better performance and/or quality are presented.
Abstract: Hair rendering and simulation have always been challenging tasks, especially in real-time. Due to their high computational demands, they have been vastly omitted in real-time applications and studied by a relatively small group of graphics researchers and programmers. With recent advancements in both graphics hardware and software methods, real-time hair rendering and simulation are now possible with reasonable performance and quality. However, achieving acceptable levels of performance and quality requires specific expertise and experience in real-time hair rendering. The aim of this course is to bring the accumulated knowledge in research and technology demos to real-world software such as video games and other commercial or research-oriented real-time applications. We begin with explaining the fundamental techniques for real-time hair rendering and then present alternative approaches along with tips and tricks to achieve better performance and/or quality. We also provide an overview of various hair simulation techniques and present implementation details of the most efficient techniques suitable for real-time applications. Moreover, we provide example source code as a part of our lecture notes.

Proceedings ArticleDOI
02 May 2010
TL;DR: This approach achieves parallel rendering by division of the rendering task either in sort-last (database) or sort-first (screen domain) manner and presents an optimal method for implicit load balancing in the former mode.
Abstract: In this paper, we introduce a novel out-of-core parallel and scalable technique for rendering massive terrain datasets. The parallel rendering task decomposition is implemented on top of an existing terrain renderer using an open source framework for cluster-parallel rendering. Our approach achieves parallel rendering by division of the rendering task either in sort-last (database) or sort-first (screen domain) manner and presents an optimal method for implicit load balancing in the former mode. The efficiency of our approach is validated using massive elevation models.
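The two decomposition modes can be sketched schematically: sort-first divides the screen among nodes, while sort-last divides the database. The vertical-strip and round-robin assignments below are illustrative simplifications, not the paper's actual partitioning:

```python
# Toy illustration of the two parallel-rendering decompositions.

def sort_first(width, height, nodes):
    """Sort-first: each node renders one vertical screen strip,
    returned as (x, y, w, h) rectangles."""
    strip = width // nodes
    return [(n * strip, 0, strip, height) for n in range(nodes)]

def sort_last(blocks, nodes):
    """Sort-last: each node renders every nodes-th database block;
    the partial images must then be depth-composited."""
    return [blocks[n::nodes] for n in range(nodes)]
```

Sort-first avoids the compositing pass but suffers when geometry clusters in one strip, which is why the paper's implicit load balancing targets the sort-last (database) mode.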

Proceedings Article
31 May 2010
TL;DR: This paper presents a flexible hybrid method designed to render heightfield data, such as terrains, on GPU that combines two traditional techniques, namely mesh-based rendering and per-pixel ray-casting, and introduces an adaptive mechanism that depends on viewing conditions and heightfield characteristics.
Abstract: This paper presents a flexible hybrid method designed to render heightfield data, such as terrains, on the GPU. It combines two traditional techniques, namely mesh-based rendering and per-pixel ray-casting. A heuristic is proposed to dynamically choose between these two techniques. To balance rendering performance against quality, an adaptive mechanism is introduced that depends on viewing conditions and heightfield characteristics. It manages the precision of the ray-casting rendering, while mesh rendering is reserved for the finest level of detail. Our method is GPU accelerated and achieves real-time rendering performance with high accuracy. Moreover, contrary to most terrain rendering methods, our technique does not rely on time-consuming pre-processing steps to update complex data structures. As a consequence, it gracefully handles dynamic heightfields, making it useful for interactive terrain editing or real-time simulation processes.
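A heuristic of the kind the paper proposes — mesh rendering for the finest level of detail, ray-casting elsewhere — might look like the sketch below; the inputs, cutoff values, and roughness measure are our assumptions, not the paper's formula:

```python
# Illustrative per-tile technique selection for hybrid heightfield
# rendering. Close, smooth tiles go to the mesh path; distant or rough
# tiles fall back to adaptive-precision per-pixel ray-casting.

def choose_technique(distance, roughness, mesh_cutoff=100.0,
                     roughness_cutoff=0.5):
    """Return 'mesh' or 'raycast' for one heightfield tile."""
    if distance < mesh_cutoff and roughness < roughness_cutoff:
        return "mesh"
    return "raycast"
```

Because the decision is made per tile at render time from current viewing conditions, no precomputed data structure needs rebuilding when the heightfield changes.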

Patent
08 Nov 2010
TL;DR: In this paper, an automated Internet-based graphics application profile management system is presented, in which graphics performance of client machines running graphics-based applications is optimized using an automated internet-based GAP management system, which includes an Internetbased communication server, operably connected to the infrastructure of the Internet, and to a central database server through an application server.
Abstract: A multi-user computer network, in which graphics performance of client machines running graphics-based applications is optimized using an automated Internet-based graphics application profile management system. The automated Internet-based graphics application profile management system includes an Internet-based communication server, operably connected to the infrastructure of the Internet, and to a central database server, through an application server. The central database server stores graphic application profiles (GAPs) for different graphics-based applications that are capable of running on the client machines. The graphics application profiles are stored in a profile database in the multi-GPU graphics rendering subsystem of each client machine. The Internet-based communication server communicates with each client machine over the Internet, and automatically programs updated graphics application profiles (GAPs) in the profile database of each client machine. This allows for the graphics performance of each client machine to be optimized during the run-time of the graphics-based applications.

Book ChapterDOI
14 Jun 2010
TL;DR: GigaVoxels is a voxel-based rendering pipeline that makes the display of very large volumetric datasets very efficient and obtains interactive to real-time framerates and demonstrates the use of extreme amounts of voxels in rendering, which is applicable in many different contexts.
Abstract: GigaVoxels is a voxel-based rendering pipeline that makes the display of very large volumetric datasets very efficient. It is adapted to memory bound environments and it is designed for the data-parallel architecture of the GPU. It is capable of rendering objects at a detail level that matches the screen resolution and interactively adapts to the current point of view. Invisible parts are never even considered for contribution to the final image. As a result, the algorithm obtains interactive to real-time framerates and demonstrates the use of extreme amounts of voxels in rendering, which is applicable in many different contexts. This is also confirmed by many game developers who seriously consider voxels as a potential standard primitive in video games. We will also show in this chapter that voxels are already powerful primitives that, for some rendering tasks, achieve higher performance than triangle-based representations.

BookDOI
01 Mar 2010
TL;DR: This book provides an in-depth look at the new OpenGL ES (The Standard for Embedded Accelerated 3D Graphics), and shows what these new embedded systems graphics libraries can provide for 3D graphics and games developers.
Abstract: The first book to explain the principles behind mobile 3D hardware implementation, helping readers understand advanced algorithms, produce low-cost, low-power SoCs, or become familiar with embedded systems. As mobile broadcasting and entertainment applications evolve, there is increasing interest in 3D graphics within the field of mobile electronics, particularly for handheld devices. In Mobile 3D Graphics SoC, Yoo provides a comprehensive understanding of the algorithms of mobile 3D graphics and their real chip implementation methods. 3D graphics SoC (System on a Chip) architecture and its interaction with embedded system software are explained with numerous examples. Yoo divides the book into three sections: general methodology of low power SoC, design of low power 3D graphics SoC, and silicon implementation of 3D graphics SoCs and their application to mobile electronics. Full examples are presented at various levels such as system level design and circuit level optimization along with design technology. Yoo incorporates many real chip examples, including many commercial 3D graphics chips, and provides cross-comparisons of various architectures and their performance. Furthermore, while advanced 3D graphics techniques are well understood and supported by industry standards, this is less true in the emerging mobile applications and games market. This book redresses this imbalance, providing an in-depth look at the new OpenGL ES (The Standard for Embedded Accelerated 3D Graphics), and shows what these new embedded systems graphics libraries can provide for 3D graphics and games developers.

Patent
28 Jun 2010
TL;DR: In this paper, a multithreaded rendering software pipeline architecture dynamically reallocates regions of an image space to raster threads based upon performance data collected by the Raster threads.
Abstract: A multithreaded rendering software pipeline architecture dynamically reallocates regions of an image space to raster threads based upon performance data collected by the raster threads. The reallocation of the regions typically includes resizing the regions assigned to particular raster threads and/or reassigning regions to different raster threads to better balance the relative workloads of the raster threads.

Proceedings ArticleDOI
03 Aug 2010
TL;DR: A new multi-view rendering hardware architecture consisting of hybrid parallel DIBR and pipeline interlacing is proposed to improve performance, and experimental results show that the proposed architecture can achieve 60 frames per second for processing full HD (1920×1080) video in a real-time processing system.
Abstract: Three-dimensional television (3D-TV) has attracted significant attention because of the immersive 3D experience it offers for advanced TV development. Multi-view rendering by depth image based rendering (DIBR) and interlacing are the key technologies to realize a 3D-TV system from content to display. In this paper, a new multi-view rendering hardware architecture consisting of hybrid parallel DIBR and pipeline interlacing is proposed to improve the performance. Experimental results show that the proposed architecture can achieve 60 frames per second for processing full HD (1920×1080) video in a real-time processing system. Only 3% of the logic elements of an ALTERA Cyclone III FPGA are used.
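The DIBR step at the heart of this architecture can be shown on a single scanline: each pixel is shifted by a disparity proportional to baseline × focal length / depth to synthesize a neighboring view. The constants, the hole marker, and the last-write-wins rule below are simplifications of real DIBR, which also fills disocclusion holes:

```python
# Single-scanline sketch of depth image based rendering (DIBR).
# Pixels with small depth (near the camera) shift more; positions no
# source pixel lands on remain holes (None) for later inpainting.

def render_view(colors, depths, baseline=2.0, focal=1.0, hole=None):
    """Warp one scanline of a color+depth image to a virtual view."""
    out = [hole] * len(colors)
    for x, (c, z) in enumerate(zip(colors, depths)):
        disparity = int(round(baseline * focal / z))
        nx = x + disparity
        if 0 <= nx < len(out):
            out[nx] = c        # last write wins in this simplified warp
    return out
```

Because each pixel's warp is independent, the per-pixel loop maps naturally onto the parallel hardware pipeline the paper describes.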

Patent
25 May 2010
TL;DR: In this article, a rolling texture context data structure is proposed to store multiple texture contexts associated with different textures that are being processed in the software pipeline, and each texture context stores state data for a particular texture, and facilitates the access to texture data by multiple, parallel stages in a software pipeline.
Abstract: A multithreaded rendering software pipeline architecture utilizes a rolling texture context data structure to store multiple texture contexts that are associated with different textures that are being processed in the software pipeline. Each texture context stores state data for a particular texture, and facilitates the access to texture data by multiple, parallel stages in a software pipeline. In addition, texture contexts are capable of being “rolled”, or copied to enable different stages of a rendering pipeline that require different state data for a particular texture to separately access the texture data independently from one another, and without the necessity for stalling the pipeline to ensure synchronization of shared texture data among the stages of the pipeline.

Journal ArticleDOI
TL;DR: A novel approach to realistic real-time rendering of scenes consisting of many affine IFS fractals is presented, along with a new method for estimating normals at fractal surface points based on approximations of the convex hulls of fractal subsets.

Patent
30 Sep 2010
TL;DR: In this article, a 3D graphics service manager 404 can detect that a graphics processing unit has reset, restart a rendering process configured to render 3D graphics for a virtual machine, and cause a graphics buffer to be established between the rendering process and the virtual machine.
Abstract: Exemplary techniques for recovering from a graphics processor reset are herein disclosed. In an exemplary embodiment, a 3D graphics service manager 404 can detect that a graphics processing unit reset and can restart a rendering process configured to render 3D graphics for a virtual machine and cause a graphics buffer to be established between the rendering process and the virtual machine. In addition to the foregoing, other aspects are described in the detailed description, claims, and figures.

Proceedings ArticleDOI
25 Oct 2010
TL;DR: This paper designs a 3D video remote rendering system that significantly reduces the delay while maintaining high rendering quality and proposes a reference viewpoint prediction algorithm with super sampling support that requires much less computation resources but provides better performance than the search-based algorithms proposed in the related work.
Abstract: As an emerging technology, 3D video has shown a great potential to become the next generation media for tele-immersion. However, streaming and rendering this dynamic 3D data in real-time requires tremendous network bandwidth and computing resources. In this paper, we build a remote rendering model to better study different remote rendering designs and define 3D video rendering as an optimization problem. Moreover, we design a 3D video remote rendering system that significantly reduces the delay while maintaining high rendering quality. We also propose a reference viewpoint prediction algorithm with super sampling support that requires much less computation resources but provides better performance than the search-based algorithms proposed in the related work.
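Reference viewpoint prediction can be illustrated by constant-velocity extrapolation over the round-trip latency; the paper's algorithm is richer (it adds super sampling support), but extrapolating the camera to where it will be when the frame arrives is the core idea:

```python
# Sketch of reference-viewpoint prediction by linear extrapolation:
# given the last two observed camera positions, the observation
# interval dt, and the round-trip latency, predict the viewpoint at
# frame-arrival time. Function name and signature are illustrative.

def predict_viewpoint(prev, curr, dt, latency):
    """Extrapolate each coordinate at constant velocity over `latency`."""
    return tuple(c + (c - p) / dt * latency for p, c in zip(prev, curr))
```

Rendering at the predicted rather than the last-reported viewpoint is what lets a remote renderer hide network delay from the viewer.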

Proceedings ArticleDOI
01 Jan 2010
TL;DR: Recent results from computer graphics research are presented, offering solutions to contemporary challenges in digital planetarium rendering and modeling.
Abstract: Contemporary challenges in the production of digital planetarium shows include real-time rendering realism as well as the creation of authentic content. While interactive, live performance is a standard feature of professional digital-dome planetarium software today, support for physically correct rendering of astrophysical phenomena is still often limited. Similarly, the tools currently available for planetarium show production do not offer much assistance towards creating scientifically accurate models of astronomical objects. Our paper presents recent results from computer graphics research, offering solutions to contemporary challenges in digital planetarium rendering and modeling. Incorporating these algorithms into the next generation of dome display software and production tools will help advance digital planetariums toward making full use of their potential.

Proceedings ArticleDOI
26 Jul 2010
TL;DR: This work describes a motion blur system that integrates image and texture space motion blur for smooth results with less than one sample per pixel in a deferred shading rendering engine used in Split/Second.
Abstract: Motion blur is key to delivering a sense of speed in interactive video game rendering. Further, simulating accurate camera optical exposure properties and reduction of temporal aliasing brings us closer to high quality real-time rendering productions. We describe a motion blur system that integrates image and texture space motion blur for smooth results with less than one sample per pixel. We apply the algorithm in the context of a deferred shading rendering engine used in Split/Second (Disney: Black Rock), but the method also applies to forward rendering, ray tracing or REYES style architectures.
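Image-space motion blur of the kind described can be sketched in one dimension: each output pixel averages samples taken backward along its motion vector. The sample count, edge clamping, and constant per-scanline velocity are simplifications of the described system, which blends image- and texture-space results with under one sample per pixel:

```python
# 1-D sketch of image-space motion blur: average `samples` taps taken
# backward along the motion vector for each pixel of a scanline.

def motion_blur(scanline, velocity, samples=3):
    """Blur a list of intensities along a constant motion vector."""
    blurred = []
    for x in range(len(scanline)):
        total = 0.0
        for i in range(samples):
            # Step backward along the motion vector, clamped to bounds.
            sx = min(max(x - round(velocity * i / samples), 0),
                     len(scanline) - 1)
            total += scanline[sx]
        blurred.append(total / samples)
    return blurred
```

A moving bright pixel smears into a trail in the direction opposite the motion, which is exactly the streaking that conveys speed on screen.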