Topic

Tiled rendering

About: Tiled rendering is a research topic. Over the lifetime, 1,961 publications have been published within this topic, receiving 42,503 citations.


Papers
Proceedings ArticleDOI
01 Aug 1996
TL;DR: This work presents a new approach for modeling and rendering existing architectural scenes from a sparse set of still photographs, which combines both geometry-based and image-based techniques, and presents view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail on basic models.
Abstract: We present a new approach for modeling and rendering existing architectural scenes from a sparse set of still photographs. Our modeling approach, which combines both geometry-based and image-based techniques, has two components. The first component is a photogrammetric modeling method which facilitates the recovery of the basic geometry of the photographed scene. Our photogrammetric modeling approach is effective, convenient, and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo technique robustly recovers accurate depth from widely-spaced image pairs. Consequently, our approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. For producing renderings, we present view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail on basic models. Our approach can be used to recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach's ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs. CR Descriptors: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding - Modeling and recovery of physical attributes; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Color, shading, shadowing, and texture; I.4.8 [Image Processing]: Scene Analysis - Stereo; J.6 [Computer-Aided Engineering]: Computer-aided design (CAD).
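
To make the view-dependent texture mapping idea concrete, the sketch below blends the colors that several photographs observe at a surface point, weighting each photograph by how close its original viewing direction is to the novel viewing direction. The inverse-angle weighting, the function name vdtm_blend, and the toy cameras are illustrative assumptions, not the paper's exact formulation.

```python
# View-dependent texture mapping composites several photographs of the
# same surface point and favors photographs taken from viewpoints closest
# to the novel viewing direction. The inverse-angle weighting used here is
# one simple choice, not necessarily the paper's exact formula.
import numpy as np

def vdtm_blend(point, novel_eye, photo_eyes, photo_colors):
    """Blend per-photo colors for `point` as seen from `novel_eye`.

    photo_eyes:   (n, 3) camera centers of the original photographs
    photo_colors: (n, 3) color each photograph observes at `point`
    """
    d_new = novel_eye - point
    d_new = d_new / np.linalg.norm(d_new)
    d_old = photo_eyes - point
    d_old = d_old / np.linalg.norm(d_old, axis=1, keepdims=True)

    angles = np.arccos(np.clip(d_old @ d_new, -1.0, 1.0))
    weights = 1.0 / (angles + 1e-3)            # nearer views dominate
    weights /= weights.sum()
    return weights @ photo_colors              # weighted average color

if __name__ == "__main__":
    point = np.zeros(3)
    eyes = np.array([[5.0, 0.0, 5.0], [0.0, 5.0, 5.0], [-5.0, 0.0, 5.0]])
    colors = np.array([[0.9, 0.2, 0.2], [0.2, 0.9, 0.2], [0.2, 0.2, 0.9]])
    print(vdtm_blend(point, np.array([4.0, 1.0, 5.0]), eyes, colors))
```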

2,159 citations

Book
28 Sep 2004
TL;DR: Physically Based Rendering: From Theory to Implementation, Third Edition, describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation through a method known as 'literate programming', serving as an essential resource on physically-based rendering.
Abstract: Physically Based Rendering: From Theory to Implementation, Third Edition, describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation. Through a method known as 'literate programming', the authors combine human-readable documentation and source code into a single reference that is specifically designed to aid comprehension. The result is a stunning achievement in graphics education. Through the ideas and software in this book, users will learn to design and employ a fully-featured rendering system for creating stunning imagery. This completely updated and revised edition includes new coverage on ray-tracing hair and curves primitives, numerical precision issues with ray tracing, LBVHs, realistic camera models, the measurement equation, and much more. It is a must-have, full color resource on physically-based rendering. Presents up-to-date revisions of the seminal reference on rendering, including new sections on bidirectional path tracing, numerical robustness issues in ray tracing, realistic camera models, and subsurface scattering. Provides the source code for a complete rendering system, allowing readers to get up and running fast. Includes a unique indexing feature, literate programming, that lists the locations of each function, variable, and method on the page where they are first described. Serves as an essential resource on physically-based rendering.
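
As a small taste of the machinery such a physically based renderer is built from, the sketch below is a generic Monte Carlo estimator of the reflection integral for a Lambertian surface under a simple analytic sky. It is not code from the book; the albedo, sky model, and sample count are arbitrary illustrative choices.

```python
# Minimal Monte Carlo estimator of the reflection integral
#   L_o = integral over the hemisphere of f_r * L_i * cos(theta) dw
# for a Lambertian BRDF under a toy "sky" with L_i(w) = max(w_z, 0).
# Cosine-weighted sampling (pdf = cos(theta)/pi) keeps the estimator simple.
import math
import random

def sample_cosine_hemisphere():
    """Cosine-weighted direction on the +z hemisphere; pdf = cos(theta)/pi."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - u1)))

def estimate_outgoing_radiance(albedo=0.8, num_samples=100_000):
    f_r = albedo / math.pi                      # Lambertian BRDF
    total = 0.0
    for _ in range(num_samples):
        w = sample_cosine_hemisphere()
        li = max(w[2], 0.0)                     # toy sky: brighter at zenith
        pdf = w[2] / math.pi                    # cosine-weighted pdf
        total += f_r * li * w[2] / pdf          # standard MC estimator
    return total / num_samples

if __name__ == "__main__":
    # Analytic value for this setup is 2 * albedo / 3, about 0.5333.
    print(estimate_outgoing_radiance())
```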

1,612 citations

Proceedings ArticleDOI
01 Aug 2001
TL;DR: An image-based rendering approach that generalizes many current image-based rendering algorithms, including light field rendering and view-dependent texture mapping, and allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations.
Abstract: We describe an image-based rendering approach that generalizes many current image-based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.
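
The sketch below illustrates the flavor of such camera blending: each input camera is penalized by the angular difference between its view of a surface point and the desired novel view, only the k best cameras receive weight, and the weights fall smoothly to zero at the cutoff. It models only the angular term (the paper's blending field also accounts for resolution and field of view), so treat it as a simplified illustration rather than the published algorithm.

```python
# Toy view-dependent camera blending in the spirit of unstructured
# lumigraph rendering: weight each camera by how closely its viewing
# direction of a surface point agrees with the novel view, keep the k
# best cameras, and normalize their weights to sum to one.
import numpy as np

def blending_weights(point, novel_eye, camera_eyes, k=4):
    """Return per-camera blend weights for shading `point` from `novel_eye`."""
    d_novel = novel_eye - point
    d_novel = d_novel / np.linalg.norm(d_novel)
    d_cams = camera_eyes - point                        # (n, 3)
    d_cams = d_cams / np.linalg.norm(d_cams, axis=1, keepdims=True)

    # Angular penalty: 0 when a camera sees the point from the same
    # direction as the novel view, larger as the directions diverge.
    penalty = np.arccos(np.clip(d_cams @ d_novel, -1.0, 1.0))

    # The (k+1)-th smallest penalty acts as a threshold so the chosen
    # cameras' weights fall smoothly to zero at the cutoff.
    order = np.argsort(penalty)
    thresh = penalty[order[min(k, len(penalty) - 1)]] + 1e-6
    weights = np.zeros(len(camera_eyes))
    chosen = order[:k]
    weights[chosen] = np.maximum(0.0, 1.0 - penalty[chosen] / thresh)
    return weights / max(weights.sum(), 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cams = rng.normal(size=(8, 3)) + np.array([0.0, 0.0, 5.0])
    print(blending_weights(np.zeros(3), np.array([0.0, 0.0, 4.0]), cams))
```

The smooth falloff at the k-th camera is the design point: as the novel viewpoint moves, cameras enter and leave the chosen set with zero weight, so the rendered image does not pop.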

984 citations

Proceedings ArticleDOI
22 Oct 2003
TL;DR: This paper describes volume ray-casting on programmable graphics hardware as an alternative to object-order approaches, and exploits the early z-test to terminate fragment processing once sufficient opacity has been accumulated, and to skip empty space along the rays of sight.
Abstract: Nowadays, direct volume rendering via 3D textures has positioned itself as an efficient tool for the display and visual analysis of volumetric scalar fields. It is commonly accepted that, for reasonably sized data sets, appropriate quality at interactive rates can be achieved by means of this technique. However, despite these benefits one important issue has received little attention throughout the ongoing discussion of texture-based volume rendering: the integration of acceleration techniques to reduce per-fragment operations. In this paper, we address the integration of early ray termination and empty-space skipping into texture-based volume rendering on graphics processing units (GPUs). To this end, we describe volume ray-casting on programmable graphics hardware as an alternative to object-order approaches. We exploit the early z-test to terminate fragment processing once sufficient opacity has been accumulated, and to skip empty space along the rays of sight. We demonstrate performance gains of up to a factor of 3 for typical renditions of volumetric data sets on the ATI 9700 graphics card.
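
The two accelerations are easiest to see in a plain CPU ray marcher. The sketch below performs front-to-back compositing, stops a ray once its accumulated opacity is nearly saturated (early ray termination), and steps over bricks whose precomputed maximum density cannot contribute (empty-space skipping). It is a NumPy illustration of the same logic, not the paper's GPU/early-z implementation, and the transfer function and thresholds are made up for the demo.

```python
# Front-to-back ray marching with early ray termination and brick-based
# empty-space skipping; a CPU illustration of the accelerations above.
import numpy as np

def transfer_function(s):
    """Toy transfer function: densities below 0.3 are fully transparent."""
    alpha = np.clip((s - 0.3) / 0.7, 0.0, 1.0) * 0.1
    color = np.array([s, 0.5 * s, 1.0 - s])
    return color, alpha

def march_ray(volume, brick_max, brick_size, origin, direction,
              step=0.5, max_alpha=0.99):
    color_acc, alpha_acc, t = np.zeros(3), 0.0, 0.0
    dims = np.array(volume.shape)
    while True:
        idx = np.floor(origin + t * direction).astype(int)
        if np.any(idx < 0) or np.any(idx >= dims):
            break                                    # ray left the volume
        # Empty-space skipping: if the enclosing brick cannot contribute,
        # jump ahead by roughly one whole brick instead of sampling it.
        if brick_max[tuple(idx // brick_size)] < 0.3:
            t += float(brick_size)
            continue
        color, alpha = transfer_function(volume[tuple(idx)])
        color_acc += (1.0 - alpha_acc) * alpha * color   # front-to-back
        alpha_acc += (1.0 - alpha_acc) * alpha
        if alpha_acc >= max_alpha:                   # early ray termination
            break
        t += step
    return color_acc, alpha_acc

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    vol = rng.random((64, 64, 64))
    bs = 8
    brick_max = vol.reshape(8, bs, 8, bs, 8, bs).max(axis=(1, 3, 5))
    print(march_ray(vol, brick_max, bs,
                    origin=np.array([0.0, 32.0, 32.0]),
                    direction=np.array([1.0, 0.0, 0.0])))
```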

885 citations

Proceedings ArticleDOI
01 Aug 2001
TL;DR: A novel texture-based volume rendering approach that achieves the image quality of the best post-shading approaches with far fewer slices; it is suitable for new flexible consumer graphics hardware and for interactive high-quality volume graphics.
Abstract: We introduce a novel texture-based volume rendering approach that achieves the image quality of the best post-shading approaches with far fewer slices. It is suitable for new flexible consumer graphics hardware and provides high image quality even for low-resolution volume data and non-linear transfer functions with high frequencies, without the performance overhead caused by rendering additional interpolated slices. This is especially useful for volumetric effects in computer games and professional scientific volume visualization, which heavily depend on memory bandwidth and rasterization power. We present an implementation of the algorithm on current programmable consumer graphics hardware using multi-textures with advanced texture fetch and pixel shading operations. We implemented direct volume rendering, volume shading, an arbitrary number of isosurfaces, and mixed-mode rendering. The performance depends neither on the number of isosurfaces nor on the definition of the transfer functions, and is therefore suited to interactive high-quality volume graphics.
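
Achieving post-shading quality with fewer slices is commonly done with pre-integrated classification: a 2D lookup table stores the color and opacity of an entire slab between two slices as a function of the scalar values at its front and back faces. The abstract does not spell out the exact mechanism, so the sketch below should be read as a generic illustration of that idea under this assumption, with an arbitrary toy transfer function.

```python
# Build a pre-integration lookup table by numerically integrating a 1D
# transfer function over every (front, back) scalar pair of a slab.
import numpy as np

def build_preintegration_table(tf_color, tf_alpha, steps=64):
    """tf_color: (n, 3) and tf_alpha: (n,) samples of a 1D transfer function."""
    n = len(tf_alpha)
    table_rgb = np.zeros((n, n, 3))
    table_a = np.zeros((n, n))
    for sf in range(n):                      # scalar value at the slab front
        for sb in range(n):                  # scalar value at the slab back
            s = np.linspace(sf, sb, steps).round().astype(int)
            a = tf_alpha[s] / steps          # crude per-sub-step opacity
            c = tf_color[s]
            # Transparency accumulated in front of each sub-step.
            trans = np.cumprod(np.concatenate(([1.0], 1.0 - a[:-1])))
            table_rgb[sf, sb] = (trans[:, None] * a[:, None] * c).sum(axis=0)
            table_a[sf, sb] = 1.0 - trans[-1] * (1.0 - a[-1])
    return table_rgb, table_a

if __name__ == "__main__":
    n = 64                                   # small table for a quick demo
    ramp = np.linspace(0.0, 1.0, n)
    tf_alpha = (ramp > 0.5).astype(float) * 0.8   # sharp, high-frequency TF
    tf_color = np.stack([ramp, 1.0 - ramp, np.ones(n)], axis=1)
    rgb, a = build_preintegration_table(tf_color, tf_alpha)
    print(rgb.shape, a[0, -1])               # opacity of a full-range slab
```

At render time, each slab would be classified with a single lookup into this table using the scalar values at its front and back faces, which is what removes the need to render extra interpolated slices even for sharp transfer functions.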

590 citations


Network Information
Related Topics (5)
Visualization: 52.7K papers, 905K citations (80% related)
Feature (computer vision): 128.2K papers, 1.7M citations (78% related)
Object detection: 46.1K papers, 1.3M citations (77% related)
Image segmentation: 79.6K papers, 1.8M citations (77% related)
Feature extraction: 111.8K papers, 2.1M citations (76% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    16
2022    25
2020    1
2019    6
2018    11
2017    38