Topic

Real-time rendering

About: Real-time rendering is a research topic. Over the lifetime, 3,247 publications have been published within this topic, receiving 64,567 citations.


Papers
Journal ArticleDOI
01 Aug 2004
TL;DR: This paper shows how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms, and develops a novel temporal two-layer compressed representation that handles matting.
Abstract: The ability to interactively control viewpoint while watching a video is an exciting application of image-based rendering. The goal of our work is to render dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this paper, we show how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation. In our approach, we first use a novel color segmentation-based stereo algorithm to generate high-quality photoconsistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, a novel temporal two-layer compressed representation that handles matting is developed for rendering at interactive rates.

1,677 citations
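The core operation the abstract describes, synthesizing an intermediate view between two cameras, can be illustrated in miniature. The paper's actual system uses segmentation-based stereo, matting, and a two-layer representation; the Python sketch below shows only the basic warp-and-blend step, assuming a rectified stereo pair and a precomputed disparity map (the function and variable names are ours, not the paper's).

```python
import numpy as np

def interpolate_view(left, right, disp, t):
    """Warp-and-blend a rectified stereo pair into an intermediate view.

    Assumes left[y, x] matches right[y, x - disp[y, x]]; a virtual camera
    at parameter t in [0, 1] then sees that point at x - t * disp[y, x].
    """
    h, w = disp.shape
    out = np.zeros_like(left)
    # Painter's order: far pixels (small disparity) first, so nearer
    # pixels overwrite them and occlusions resolve correctly.
    order = np.argsort(disp, axis=None)
    ys, xs = np.unravel_index(order, (h, w))
    x_mid = np.round(xs - t * disp[ys, xs]).astype(int)
    x_right = np.round(xs - disp[ys, xs]).astype(int)
    ok = (x_mid >= 0) & (x_mid < w) & (x_right >= 0) & (x_right < w)
    ys, xs, x_mid, x_right = ys[ok], xs[ok], x_mid[ok], x_right[ok]
    out[ys, x_mid] = (1.0 - t) * left[ys, xs] + t * right[ys, x_right]
    return out
```

The holes and boundary fringes this naive warp leaves behind are exactly the artifacts the paper's matting and two-layer representation are designed to remove.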

Book
28 Sep 2004
TL;DR: Physically Based Rendering: From Theory to Implementation, Third Edition, describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation through a method known as 'literate programming', making it an essential resource on physically-based rendering.
Abstract: Physically Based Rendering: From Theory to Implementation, Third Edition, describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation. Through a method known as 'literate programming', the authors combine human-readable documentation and source code into a single reference that is specifically designed to aid comprehension. The result is a stunning achievement in graphics education. Through the ideas and software in this book, users will learn to design and employ a fully-featured rendering system for creating stunning imagery. This completely updated and revised edition includes new coverage on ray-tracing hair and curves primitives, numerical precision issues with ray tracing, LBVHs, realistic camera models, the measurement equation, and much more. It is a must-have, full color resource on physically-based rendering.
Presents up-to-date revisions of the seminal reference on rendering, including new sections on bidirectional path tracing, numerical robustness issues in ray tracing, realistic camera models, and subsurface scattering.
Provides the source code for a complete rendering system, allowing readers to get up and running fast.
Includes a unique indexing feature, literate programming, that lists the locations of each function, variable, and method on the page where they are first described.
Serves as an essential resource on physically-based rendering.

1,612 citations
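A flavor of the Monte Carlo machinery the book develops can be given in a few lines. The sketch below is our own illustration, not pbrt code: it estimates the reflection integral for a Lambertian surface using cosine-weighted hemisphere sampling, so the cosine and pdf terms cancel and the estimator reduces to the albedo times the mean sampled radiance.

```python
import numpy as np

def sample_cosine_hemisphere(n, rng):
    """Cosine-weighted directions about +z (pdf = cos(theta) / pi)."""
    u1, u2 = rng.random(n), rng.random(n)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    return np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)], axis=1)

def diffuse_radiance(albedo, incident_radiance, n_samples=1024, seed=0):
    """Estimate L_o = (albedo / pi) * integral of L_i(w) cos(theta) dw.

    With cosine-weighted samples the cos/pi factor cancels the pdf,
    leaving L_o ~= albedo * mean(L_i(w_k)).
    """
    rng = np.random.default_rng(seed)
    dirs = sample_cosine_hemisphere(n_samples, rng)
    return albedo * np.mean([incident_radiance(w) for w in dirs], axis=0)

# Sanity check: under a uniform sky of radiance 1, L_o equals the albedo.
print(diffuse_radiance(np.array([0.8, 0.5, 0.3]), lambda w: 1.0))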

Proceedings ArticleDOI
01 Sep 1993
TL;DR: This paper proposes a view interpolation approach to synthesizing 3D scenes in which morphing, which combines interpolation of texture maps and their shape, is applied to compute arbitrary intermediate frames from an array of prestored images.
Abstract: Image-space simplifications have been used to accelerate the calculation of computer graphic images since the dawn of visual simulation. Texture mapping has been used to provide a means by which images may themselves be used as display primitives. The work reported by this paper endeavors to carry this concept to its logical extreme by using interpolated images to portray three-dimensional scenes. The special-effects technique of morphing, which combines interpolation of texture maps and their shape, is applied to computing arbitrary intermediate frames from an array of prestored images. If the images are a structured set of views of a 3D object or scene, intermediate frames derived by morphing can be used to approximate intermediate 3D transformations of the object or scene. Using the view interpolation approach to synthesize 3D scenes has two main advantages. First, the 3D representation of the scene may be replaced with images. Second, the image synthesis time is independent of the scene complexity. The correspondence between images, required for the morphing method, can be predetermined automatically using the range data associated with the images. The method is further accelerated by a quadtree decomposition and a view-independent visible priority. Our experiments have shown that the morphing can be performed at interactive rates on today’s high-end personal computers. Potential applications of the method include virtual holograms, a walkthrough in a virtual environment, image-based primitives and incremental rendering. The method also can be used to greatly accelerate the computation of motion blur and soft shadows cast by area light sources.
CR Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.
Additional Keywords: image morphing, interpolation, virtual reality, motion blur, shadow, incremental rendering, real-time display, virtual holography, motion compensation.

1,340 citations
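The abstract notes that the correspondence required for morphing can be predetermined automatically from the range data associated with the images. A minimal sketch of that precomputation under a pinhole camera model follows (our own names and conventions; the paper's quadtree decomposition and visibility priority are omitted).

```python
import numpy as np

def morph_map(depth, K, pose_src, pose_dst):
    """Per-pixel offset field from a source view to a destination view,
    derived from range data.

    depth    : (H, W) depth along the source camera's z-axis.
    K        : (3, 3) shared intrinsic matrix.
    pose_src, pose_dst : (4, 4) camera-to-world transforms.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)
    # Back-project source pixels into world space.
    rays = pix @ np.linalg.inv(K).T
    pts_cam = rays * depth[..., None]
    pts_world = pts_cam @ pose_src[:3, :3].T + pose_src[:3, 3]
    # Re-project into the destination view.
    world_to_dst = np.linalg.inv(pose_dst)
    pts_dst = pts_world @ world_to_dst[:3, :3].T + world_to_dst[:3, 3]
    proj = pts_dst @ K.T
    proj = proj[..., :2] / proj[..., 2:3]
    # Linearly interpolating pixel positions along these offset vectors
    # approximates intermediate views between the two cameras.
    return proj - pix[..., :2]
```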

Proceedings ArticleDOI
01 Jul 2002
TL;DR: A new, real-time method for rendering diffuse and glossy objects in low-frequency lighting environments that captures soft shadows, interreflections, and caustics, and introduces functions for radiance transfer from a dynamic lighting environment through a preprocessed object to neighboring points in space.
Abstract: We present a new, real-time method for rendering diffuse and glossy objects in low-frequency lighting environments that captures soft shadows, interreflections, and caustics. As a preprocess, a novel global transport simulator creates functions over the object's surface representing transfer of arbitrary, low-frequency incident lighting into transferred radiance which includes global effects like shadows and interreflections from the object onto itself. At run-time, these transfer functions are applied to actual incident lighting. Dynamic, local lighting is handled by sampling it close to the object every frame; the object can also be rigidly rotated with respect to the lighting and vice versa. Lighting and transfer functions are represented using low-order spherical harmonics. This avoids aliasing and evaluates efficiently on graphics hardware by reducing the shading integral to a dot product of 9 to 25 element vectors for diffuse receivers. Glossy objects are handled using matrices rather than vectors. We further introduce functions for radiance transfer from a dynamic lighting environment through a preprocessed object to neighboring points in space. These allow soft shadows and caustics from rigidly moving objects to be cast onto arbitrary, dynamic receivers. We demonstrate real-time global lighting effects with this approach.

1,044 citations
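For diffuse receivers, the runtime step the abstract describes is literally a dot product between the lighting's spherical-harmonic coefficients and a precomputed per-vertex transfer vector. A sketch of that step with our own naming (the expensive part, the global transport simulation that produces the transfer vectors, is the paper's preprocess):

```python
import numpy as np

def shade_diffuse_prt(transfer, light_coeffs):
    """Runtime shading for diffuse precomputed radiance transfer.

    transfer     : (n_verts, n_coeffs) precomputed transfer vectors, one
                   per vertex, baking in self-shadowing and interreflection
                   (n_coeffs is 9 for order-3 SH, 25 for order-5).
    light_coeffs : (n_coeffs,) SH projection of the incident lighting,
                   re-sampled near the object every frame.
    """
    # Exit radiance is a single dot product per vertex.
    return transfer @ light_coeffs

# Hypothetical usage with 9-coefficient (order-3) spherical harmonics:
rng = np.random.default_rng(0)
transfer = rng.random((1000, 9))   # stand-in for the preprocess output
light = rng.random(9)              # stand-in for the projected lighting
radiance = shade_diffuse_prt(transfer, light)   # shape (1000,)
```

Glossy receivers replace each transfer vector with a transfer matrix, so the per-vertex work becomes a small matrix-vector product rather than a dot product.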

Proceedings ArticleDOI
01 Aug 2001
TL;DR: An image-based rendering approach that generalizes many current image-based rendering algorithms, including light field rendering and view-dependent texture mapping, and allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations.
Abstract: We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.

984 citations
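The flexibility the abstract claims comes from computing a per-ray blending field over the source cameras. A simplified, angle-only version is sketched below (our own construction; the paper's full penalty also weighs resolution and field of view).

```python
import numpy as np

def blending_weights(desired_dir, cam_dirs, k=4):
    """Angle-based camera blending field, simplified from unstructured
    lumigraph rendering.

    desired_dir : (3,) unit direction from a surface point to the
                  virtual camera.
    cam_dirs    : (n, 3) unit directions from that point to the n sources.

    Only the k angularly closest cameras contribute, and the k-th best
    gets weight 0, so weights fade smoothly as the nearest set changes.
    """
    ang = np.arccos(np.clip(cam_dirs @ desired_dir, -1.0, 1.0))
    nearest = np.argsort(ang)[:k]
    thresh = max(ang[nearest[-1]], 1e-9)   # angle of the k-th best camera
    w = np.zeros(len(cam_dirs))
    w[nearest] = 1.0 - ang[nearest] / thresh
    total = w.sum()
    return w / total if total > 0 else w
```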


Network Information
Related Topics (5)

Topic                          Papers   Citations   Related
Rendering (computer graphics)  41.3K    776.5K      89%
Visualization                  52.7K    905K        80%
Virtual machine                43.9K    718.3K      79%
Video tracking                 37K      735.9K      79%
Motion estimation              31.2K    699K        78%
Performance Metrics
No. of papers in the topic in previous years:

Year   Papers
2023   27
2022   48
2021   24
2020   47
2019   45
2018   49