Institution
Firaxis Games
About: Firaxis Games is an American video game development studio based in Sparks, Maryland. It is known for research contributions on the topics of Mipmap & Image compression. The organization has 4 authors who have published 4 publications receiving 155 citations. The organization is also known as: Firaxis & Firaxis Games, Inc.
Topics: Mipmap, Image compression, Texture filtering, Radiosity (computer graphics), Viewing frustum
Papers
TL;DR: Linear Efficient Antialiased Normal (LEAN) mapping is a method for real-time filtering of specular highlights in bump and normal maps; it evaluates bumps as part of a shading computation in the tangent space of the polygonal surface rather than in the tangent space of the individual bumps.
Abstract: We introduce Linear Efficient Antialiased Normal (LEAN) Mapping, a method for real-time filtering of specular highlights in bump and normal maps. The method evaluates bumps as part of a shading computation in the tangent space of the polygonal surface rather than in the tangent space of the individual bumps. By operating in a common tangent space, we are able to store information on the distribution of bump normals in a linearly-filterable form compatible with standard MIP and anisotropic filtering hardware. The necessary textures can be computed in a preprocess or generated in real-time on the GPU for time-varying normal maps. The method effectively captures the bloom in highlight shape as bumps become too small to see, and will even transform bump ridges into anisotropic shading. Unlike even more expensive methods, several layers can be combined cheaply during surface rendering, with per-pixel blending. Though the method is based on a modified Ward shading model, we show how to map between its parameters and those of a standard Blinn-Phong model for compatibility with existing art assets and pipelines, and demonstrate that both models produce equivalent results at the largest MIP levels.
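The key idea in the abstract — storing the bump-normal distribution in a linearly filterable form — can be sketched in a few lines: keep the first moments (projected bump slopes) and second moments in textures, let ordinary MIP filtering average both, and recover the covariance of the filtered distribution afterward. A minimal NumPy sketch of that bookkeeping (function names are illustrative, not from the paper's code; the shading model that consumes the covariance is omitted):

```python
import numpy as np

def lean_moments(normals):
    """Per-texel LEAN quantities from tangent-space unit normals (z > 0).

    b = (nx/nz, ny/nz) are the bump slopes (first moments);
    m = (bx^2, by^2, bx*by) are the second moments. Both filter
    linearly, so standard MIP/anisotropic hardware filtering can
    average them directly.
    """
    b = normals[..., :2] / normals[..., 2:3]
    m = np.stack([b[..., 0] ** 2, b[..., 1] ** 2,
                  b[..., 0] * b[..., 1]], axis=-1)
    return b, m

def filtered_covariance(b_avg, m_avg):
    """Covariance of the bump-normal distribution after linear filtering:
    sigma = E[b b^T] - E[b] E[b]^T. Feeding this into a Ward/Beckmann-style
    lobe widens the highlight as bumps shrink below pixel size."""
    sxx = m_avg[..., 0] - b_avg[..., 0] ** 2
    syy = m_avg[..., 1] - b_avg[..., 1] ** 2
    sxy = m_avg[..., 2] - b_avg[..., 0] * b_avg[..., 1]
    return sxx, syy, sxy
```

Averaging `b` and `m` over a texel footprint (as a MIP lookup would) and then calling `filtered_covariance` yields zero variance for a flat normal map and growing variance as bump ridges average out, which is exactly what drives the highlight bloom the abstract describes.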
87 citations
••
29 Jun 2009
TL;DR: This work builds on a recently introduced multiresolution splatting technique combined with an image-space lightcut algorithm to intelligently choose virtual point lights for an interactive, one-bounce instant radiosity solution, and proposes clustering computations in image space to reduce computation costs.
Abstract: We introduce image-space radiosity and a hierarchical variant as a method for interactively approximating diffuse indirect illumination in fully dynamic scenes. As oft observed, diffuse indirect illumination contains mainly low-frequency details that do not require independent computations at every pixel. Prior work leverages this to reduce computation costs by clustering and caching samples in world or object space. This often involves scene preprocessing, complex data structures for caching, or wasted computations outside the view frustum. We instead propose clustering computations in image space, allowing the use of cheap hardware mipmapping and implicit quadtrees to allow coarser illumination computations. We build on a recently introduced multiresolution splatting technique combined with an image-space lightcut algorithm to intelligently choose virtual point lights for an interactive, one-bounce instant radiosity solution. Intelligently selecting point lights from our reflective shadow map enables temporally coherent illumination similar to results using more than 4096 regularly-sampled VPLs.
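The one-bounce instant-radiosity core the abstract builds on reduces to a gather sum over virtual point lights (VPLs) sampled from a reflective shadow map. A minimal CPU sketch of that gather (visibility tests and the paper's image-space clustering/lightcut selection are omitted; all names are illustrative):

```python
import numpy as np

def one_bounce(p, n, vpls):
    """Gather one-bounce diffuse light at receiver point p with normal n.

    vpls is a list of (position, normal, flux) tuples, as would be sampled
    from a reflective shadow map. Each VPL contributes its flux weighted by
    the geometry term between the VPL and the receiver.
    """
    total = np.zeros(3)
    for q, nq, flux in vpls:
        d = p - q                        # direction VPL -> receiver
        r2 = float(d @ d) + 1e-4         # clamped to bound the singularity
        w = d / np.sqrt(r2)
        g = max(float(nq @ w), 0.0) * max(float(n @ -w), 0.0) / r2
        total += np.asarray(flux) * g
    return total
```

The clamped squared distance is a common instant-radiosity trick to avoid the bright splotches that the unbounded 1/r² term produces near VPLs; the paper's contribution is choosing *which* VPLs to sum, not the sum itself.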
65 citations
••
27 Jun 2011
TL;DR: A new paradigm that separates GPU decompression from rendering is presented, including a new GPU-friendly formulation of range decoding, and a new texture compression scheme averaging a 12.4:1 lossy compression ratio on 471 real game textures with a quality level similar to traditional DXT compression.
Abstract: Variable bit rate compression can achieve better quality and compression rates than fixed bit rate methods. Nonetheless, GPU texturing uses lossy fixed bit rate methods like DXT to allow random access and on-the-fly decompression during rendering. Changes in games and GPUs since DXT was developed make its compression artifacts less acceptable, and texture bandwidth less of an issue, but texture size is a serious and growing problem. Games use a large total volume of texture data, but have a much smaller active set. We present a new paradigm that separates GPU decompression from rendering. Rendering is from uncompressed data, avoiding the need for random access decompression. We demonstrate this paradigm with a new variable bit rate lossy texture compression algorithm that is well suited to the GPU, including a new GPU-friendly formulation of range decoding, and a new texture compression scheme averaging 12.4:1 lossy compression ratio on 471 real game textures with a quality level similar to traditional DXT compression. The total game texture set is stored in the GPU in compressed form, and decompressed for use in a fraction of a second per scene.
13 citations
••
19 Feb 2010
TL;DR: In this paper, the Tiny Encryption Algorithm (TEA) is used as the basis for a fast and high quality random number generator, and by changing the number of encryption rounds we can trade speed for quality.
Abstract: Random numbers have many uses in computer graphics, from Monte Carlo sampling for realistic image synthesis to noise generation for artistic shader construction. Perlin [1985] introduced the idea of using a repeatable band-limited noise function to add stochastic variation to procedural shaders. We show that the quality of random number generation directly affects the quality of the noise produced; however, good-quality noise can still be produced with a lower quality random number generator. Further, we show that the Tiny Encryption Algorithm (TEA) [Reddy 2003] can serve as the basis of a fast and high quality random number generator, and by changing the number of encryption rounds we can trade speed for quality.
2 citations
Authors
Showing all 4 results
Name | H-index | Papers | Citations |
---|---|---|---|
Marc Olano | 21 | 60 | 2589 |
Dan Baker | 3 | 3 | 95 |
Jeremy Shopf | 1 | 2 | 64 |
Joshua Barczak | 1 | 1 | 12 |