Topic

Rendering (computer graphics)

About: Rendering (computer graphics) is a research topic. Over its lifetime, 41,389 publications have been published within this topic, receiving 776,535 citations. The topic is also known as: image synthesis.


Papers
Book ChapterDOI
01 Jan 2005
TL;DR: This chapter describes the design and features of ParaView, a visualization tool that allows scientists to visualize and analyze extremely large datasets, and discusses key design decisions and tradeoffs.
Abstract: This chapter describes the design and features of a visualization tool called ParaView, a tool that allows scientists to visualize and analyze extremely large datasets. The tool provides a graphical user interface for the creation and dynamic execution of visualization tasks. ParaView transparently supports the visualization and rendering of large datasets by executing these tasks in parallel on shared or distributed memory machines. ParaView supports hardware-accelerated parallel rendering and achieves interactive rendering performance via level-of-detail techniques. The design balances and integrates a number of diverse requirements, including the ability to handle large data, ease of use, and extensibility by developers. The chapter describes the requirements that guided the design, identifies the importance of those requirements to scientific users, and discusses key design decisions and tradeoffs.

1,683 citations
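
As a taste of the scripted side of such a pipeline, the sketch below drives ParaView through its paraview.simple Python module (which ships with ParaView and runs under its bundled pvpython interpreter). The particular source, filter, and output filename are illustrative choices, not anything prescribed by the chapter.

    # Minimal batch pipeline sketch using ParaView's paraview.simple module.
    # Run with ParaView's bundled interpreter: pvpython this_script.py
    from paraview.simple import Sphere, Shrink, Show, Render, SaveScreenshot

    sphere = Sphere(ThetaResolution=32, PhiResolution=32)  # illustrative source
    shrink = Shrink(Input=sphere)                          # illustrative filter
    Show(shrink)       # add the filter output to the active render view
    Render()           # render; runs in parallel when connected to a pvserver
    SaveScreenshot('shrink_sphere.png')                    # placeholder filename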

Journal ArticleDOI
01 Aug 2004
TL;DR: This paper shows how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms, and develops a novel temporal two-layer compressed representation that handles matting.
Abstract: The ability to interactively control viewpoint while watching a video is an exciting application of image-based rendering. The goal of our work is to render dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this paper, we show how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation. In our approach, we first use a novel color segmentation-based stereo algorithm to generate high-quality photoconsistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, a novel temporal two-layer compressed representation that handles matting is developed for rendering at interactive rates.

1,677 citations
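
The final blending step common to this style of view synthesis can be sketched in a few lines of NumPy. This is a generic cross-fade of two neighboring views that have already been warped to the virtual viewpoint, not the paper's two-layer matting representation; the image shapes and the helper name blend_views are assumptions for illustration.

    import numpy as np

    def blend_views(warped_left, warped_right, alpha):
        # alpha in [0, 1] places the virtual camera between the left
        # (alpha=0) and right (alpha=1) source views. Generic sketch only;
        # the paper additionally composites matted boundary layers.
        return (1.0 - alpha) * warped_left + alpha * warped_right

    # Usage with placeholder images of shape (height, width, 3):
    left = np.zeros((480, 640, 3), dtype=np.float32)
    right = np.ones((480, 640, 3), dtype=np.float32)
    middle = blend_views(left, right, alpha=0.5)  # synthesized midpoint view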

Book
28 Sep 2004
TL;DR: Physically Based Rendering: From Theory to Implementation, Third Edition, describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation through a method known as 'literate programming', and serves as an essential resource on physically-based rendering.
Abstract: Physically Based Rendering: From Theory to Implementation, Third Edition, describes both the mathematical theory behind a modern photorealistic rendering system and its practical implementation. Through a method known as 'literate programming', the authors combine human-readable documentation and source code into a single reference that is specifically designed to aid comprehension. The result is a stunning achievement in graphics education. Through the ideas and software in this book, users will learn to design and employ a fully-featured rendering system for creating stunning imagery. This completely updated and revised edition includes new coverage on ray-tracing hair and curves primitives, numerical precision issues with ray tracing, LBVHs, realistic camera models, the measurement equation, and much more. It is a must-have, full color resource on physically-based rendering. Presents up-to-date revisions of the seminal reference on rendering, including new sections on bidirectional path tracing, numerical robustness issues in ray tracing, realistic camera models, and subsurface scattering. Provides the source code for a complete rendering system, allowing readers to get up and running fast. Includes a unique indexing feature, literate programming, that lists the locations of each function, variable, and method on the page where they are first described. Serves as an essential resource on physically-based rendering.

1,612 citations
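
The mathematical core that such a system evaluates is the light transport (rendering) equation. One standard statement, with notation that varies slightly between texts, gives the radiance leaving a point p in direction \omega_o as

    L_o(\mathbf{p}, \omega_o) = L_e(\mathbf{p}, \omega_o)
        + \int_{\mathcal{S}^2} f(\mathbf{p}, \omega_o, \omega_i)\,
          L_i(\mathbf{p}, \omega_i)\, |\cos\theta_i|\, \mathrm{d}\omega_i

where L_e is emitted radiance, f is the BSDF, and the integral gathers incident radiance L_i over the sphere of directions; a path tracer estimates this integral by Monte Carlo sampling.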

Proceedings ArticleDOI
TL;DR: Details of a system that allows for an evolutionary introduction of depth perception into the existing 2D digital TV framework are presented, along with a comparison with the classical approach of "stereoscopic" video.
Abstract: This paper presents details of a system that allows for an evolutionary introduction of depth perception into the existing 2D digital TV framework. The work is part of the European Information Society Technologies (IST) project “Advanced Three-Dimensional Television System Technologies” (ATTEST), an activity in which industries, research centers, and universities have joined forces to design a backwards-compatible, flexible and modular broadcast 3D-TV system. At the very heart of the described new concept is the generation and distribution of a novel data representation format, which consists of monoscopic color video and associated per-pixel depth information. From these data, one or more “virtual” views of a real-world scene can be synthesized in real-time at the receiver side (i.e., a 3D-TV set-top box) by means of so-called depth-image-based rendering (DIBR) techniques. This publication provides: (1) a detailed description of the fundamentals of this new approach to 3D-TV; (2) a comparison with the classical approach of “stereoscopic” video; (3) a short introduction to DIBR techniques in general; (4) the development of a specific DIBR algorithm that can be used for the efficient generation of high-quality “virtual” stereoscopic views; (5) a number of implementation details that are specific to the current state of the development; (6) research on the backwards-compatible compression and transmission of 3D imagery using state-of-the-art MPEG (Moving Pictures Expert Group) tools.

1,560 citations
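
For a rectified setup with a purely horizontal camera baseline, the 3D image warp at the heart of DIBR reduces to shifting each pixel by a disparity proportional to inverse depth. The NumPy sketch below illustrates that forward warp; the focal length and baseline values are made up for illustration, and a real system must also fill the disocclusion holes the warp leaves behind.

    import numpy as np

    def dibr_forward_warp(color, depth, focal=1000.0, baseline=0.05):
        # color: (H, W, 3) image; depth: (H, W) metric depth for the same view.
        # focal (pixels) and baseline (meters) are illustrative values.
        h, w, _ = color.shape
        disparity = np.round(focal * baseline / depth).astype(int)
        out = np.zeros_like(color)          # zeros remain as disocclusion holes
        cols = np.arange(w)
        for y in range(h):
            x_new = np.clip(cols + disparity[y], 0, w - 1)
            # Simplification: no z-buffering; later writes overwrite earlier ones.
            out[y, x_new] = color[y, cols]
        return out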

Proceedings ArticleDOI
15 Sep 1995
TL;DR: An image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function is presented, together with a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections.
Abstract: Image-based rendering is a powerful new approach for generating real-time photorealistic computer graphics. It can provide convincing animations without an explicit geometric representation. We use the “plenoptic function” of Adelson and Bergen to provide a concise problem statement for image-based rendering paradigms, such as morphing and view interpolation. The plenoptic function is a parameterized function for describing everything that is visible from a given point in space. We present an image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function. In addition, we introduce a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections.

1,555 citations
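
For reference, Adelson and Bergen's plenoptic function is usually written as the seven-dimensional radiance

    P = P(\theta, \phi, \lambda, t, V_x, V_y, V_z)

recording what is visible in every direction (\theta, \phi), at every wavelength \lambda and time t, from every viewpoint (V_x, V_y, V_z); image-based rendering, in this framing, is exactly the sampling, reconstruction, and resampling of P that the abstract describes.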


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 85% related
Image processing: 229.9K papers, 3.5M citations, 83% related
User interface: 85.4K papers, 1.7M citations, 83% related
Image segmentation: 79.6K papers, 1.8M citations, 83% related
Pixel: 136.5K papers, 1.5M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2024    2
2023    1,313
2022    2,786
2021    1,389
2020    2,154
2019    2,348