Proceedings ArticleDOI

Compositing digital images

01 Jan 1984-Vol. 18, Iss: 3, pp 253-259
TL;DR: This paper makes the case for four-channel pictures, showing that a matte component can be computed similarly to the color channels, and discusses guidelines for the generation of elements and the arithmetic for their arbitrary compositing.
Abstract: Most computer graphics pictures have been computed all at once, so that the rendering program takes care of all computations relating to the overlap of objects. There are several applications, however, where elements must be rendered separately, relying on compositing techniques for the anti-aliased accumulation of the full image. This paper presents the case for four-channel pictures, demonstrating that a matte component can be computed similarly to the color channels. The paper discusses guidelines for the generation of elements and the arithmetic for their arbitrary compositing.
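The four-channel arithmetic the abstract refers to operates on colors premultiplied by their matte (alpha) value, so one formula serves all four channels. A minimal sketch of the best-known such operator, "over" (the exact function and tuple layout here are illustrative, not the paper's notation):

```python
# Minimal sketch of a four-channel "over" composite. Colors are
# premultiplied by alpha, so the matte channel is computed with the
# same arithmetic as the color channels.

def over(fg, bg):
    """Composite premultiplied RGBA foreground over background.

    fg, bg: tuples (r, g, b, a) with each color channel already
    multiplied by its alpha.
    """
    fr, fgr, fb, fa = fg
    br, bgr, bb, ba = bg
    k = 1.0 - fa  # fraction of the background that shows through
    return (fr + k * br, fgr + k * bgr, fb + k * bb, fa + k * ba)

# A half-transparent red element over an opaque blue background:
red_half = (0.5, 0.0, 0.0, 0.5)   # premultiplied: 0.5 * (1, 0, 0)
blue     = (0.0, 0.0, 1.0, 1.0)
print(over(red_half, blue))       # -> (0.5, 0.0, 0.5, 1.0)
```

Because the colors are premultiplied, no per-pixel division is needed and fully transparent pixels contribute nothing regardless of their stored color.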
Citations
Journal ArticleDOI
TL;DR: In this article, a volume-rendering technique for displaying surfaces from sampled scalar functions of three spatial dimensions is discussed; it is not necessary to fit geometric primitives to the sampled data, since images are formed by directly shading each sample and projecting it onto the picture plane.
Abstract: The application of volume-rendering techniques to the display of surfaces from sampled scalar functions of three spatial dimensions is discussed. It is not necessary to fit geometric primitives to the sampled data; images are formed by directly shading each sample and projecting it onto the picture plane. Surface-shading calculations are performed at every voxel with local gradient vectors serving as surface normals. In a separate step, surface classification operators are applied to compute a partial opacity of every voxel. Operators that detect isovalue contour surfaces and region boundary surfaces are examined. The technique is simple and fast, yet displays surfaces exhibiting smooth silhouettes and few other aliasing artifacts. The use of selective blurring and supersampling to further improve image quality is described. Examples from molecular graphics and medical imaging are given.
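The per-ray accumulation implied above can be sketched as front-to-back compositing of shaded, classified samples; the sample values and the early-termination threshold below are illustrative assumptions, not values from the paper:

```python
# Hedged sketch of per-ray accumulation: each sample along a ray has a
# shaded color and a classified opacity, and samples are composited
# front to back until the ray is (nearly) opaque.

def composite_ray(samples, opacity_cutoff=0.99):
    """samples: list of (color, opacity) pairs ordered front to back."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # attenuate by accumulated opacity
        alpha += (1.0 - alpha) * a
        if alpha >= opacity_cutoff:      # early ray termination
            break
    return color, alpha

print(composite_ray([(1.0, 0.5), (0.5, 0.5), (0.2, 1.0)]))
```

A fully opaque sample ends the traversal, which is why classification to partial opacities (rather than binary surfaces) lets interior structure contribute to the image.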

2,437 citations

Posted Content
TL;DR: This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
Abstract: We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
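The "classic volume rendering techniques" mentioned above reduce, after discretization, to turning sampled densities into per-sample compositing weights. A sketch of that quadrature (the density and step values are illustrative):

```python
# Sketch of the volume-rendering quadrature: densities sigma_i sampled
# at intervals delta_i along a ray become weights
#   w_i = T_i * (1 - exp(-sigma_i * delta_i)),
# with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j).
import math

def render_weights(sigmas, deltas):
    weights, transmittance = [], 1.0
    for sigma, delta in zip(sigmas, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this segment
        weights.append(transmittance * alpha)
        transmittance *= 1.0 - alpha            # light surviving past it
    return weights

w = render_weights([0.0, 2.0, 10.0], [0.5, 0.5, 0.5])
print(w)  # the pixel color is sum_i w_i * c_i for per-sample colors c_i
```

Every operation here is differentiable in the densities, which is what lets a network producing sigma and color be optimized from posed images alone.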

2,435 citations

Journal ArticleDOI
01 Jul 1985
TL;DR: The concept of "solid texture" is introduced to the field of CGI and used to create very convincing representations of clouds, fire, water, stars, marble, wood, rock, soap films and crystal.
Abstract: We introduce the concept of a Pixel Stream Editor. This forms the basis for an interactive synthesizer for designing highly realistic Computer Generated Imagery. The designer works in an interactive Very High Level programming environment which provides a very fast concept/implement/view iteration cycle. Naturalistic visual complexity is built up by composition of non-linear functions, as opposed to the more conventional texture mapping or growth model algorithms. Powerful primitives are included for creating controlled stochastic effects. We introduce the concept of "solid texture" to the field of CGI. We have used this system to create very convincing representations of clouds, fire, water, stars, marble, wood, rock, soap films and crystal. The algorithms created with this paradigm are generally extremely fast, highly realistic, and asynchronously parallelizable at the pixel level.
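The "composition of non-linear functions" idea can be loosely sketched as a solid marble-like texture evaluated directly at a 3D point; the toy noise function and all constants below are illustrative stand-ins, not the paper's lattice noise:

```python
# Loose sketch of a solid texture: a marble-like value computed at any
# 3D point by perturbing a stripe pattern with "turbulence" (summed
# absolute noise at increasing frequencies).
import math

def noise(x, y, z):
    # Toy deterministic pseudo-noise in [-1, 1]; NOT a proper lattice noise.
    s = math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453
    return 2.0 * (s - math.floor(s)) - 1.0

def turbulence(x, y, z, octaves=4):
    t, f = 0.0, 1.0
    for _ in range(octaves):
        t += abs(noise(x * f, y * f, z * f)) / f
        f *= 2.0
    return t

def marble(x, y, z):
    # Veins arise from perturbing a sine stripe pattern with turbulence.
    return 0.5 * (1.0 + math.sin(4.0 * x + 5.0 * turbulence(x, y, z)))
```

Because the texture is a function of position rather than a stored image, any carved surface of an object samples it consistently, which is the point of "solid" texture.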

1,812 citations

Journal ArticleDOI
01 Jun 1988
TL;DR: A technique for rendering images of volumes containing mixtures of materials is presented, which allows both the interior of a material and the boundary between materials to be colored.
Abstract: A technique for rendering images of volumes containing mixtures of materials is presented. The shading model allows both the interior of a material and the boundary between materials to be colored. Image projection is performed by simulating the absorption of light along the ray path to the eye. The algorithms used are designed to avoid artifacts caused by aliasing and quantization and can be efficiently implemented on an image computer. Images from a variety of applications are shown.
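The mixture idea above can be sketched by classifying a voxel's scalar value into fractions of adjacent materials and mixing their colors and opacities; the material table and all of its values are hypothetical, for illustration only:

```python
# Sketch of per-voxel material classification: a scalar value is mapped
# to an interpolated color and opacity between neighboring materials.

# (name, center value, (r, g, b), opacity) -- hypothetical materials
MATERIALS = [("air", 0.0, (0.0, 0.0, 0.0), 0.0),
             ("soft tissue", 100.0, (0.8, 0.4, 0.3), 0.2),
             ("bone", 200.0, (1.0, 1.0, 0.9), 0.9)]

def classify(value):
    """Linearly interpolate color and opacity between material centers."""
    for (n0, v0, c0, a0), (n1, v1, c1, a1) in zip(MATERIALS, MATERIALS[1:]):
        if v0 <= value <= v1:
            t = (value - v0) / (v1 - v0)
            color = tuple((1 - t) * x + t * y for x, y in zip(c0, c1))
            return color, (1 - t) * a0 + t * a1
    return MATERIALS[-1][2], MATERIALS[-1][3]

print(classify(150.0))  # halfway between soft tissue and bone
```

Smoothly interpolated fractions (rather than a hard threshold) are what let both material interiors and the boundaries between materials receive color without introducing quantization artifacts.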

1,702 citations

Proceedings ArticleDOI
24 Jul 1994
TL;DR: A new object-order rendering algorithm based on a shear-warp factorization of the viewing transformation is described that is significantly faster than published algorithms with minimal loss of image quality; the factorization is also extended to perspective viewing transformations.
Abstract: Several existing volume rendering algorithms operate by factoring the viewing transformation into a 3D shear parallel to the data slices, a projection to form an intermediate but distorted image, and a 2D warp to form an undistorted final image. We extend this class of algorithms in three ways. First, we describe a new object-order rendering algorithm based on the factorization that is significantly faster than published algorithms with minimal loss of image quality. Shear-warp factorizations have the property that rows of voxels in the volume are aligned with rows of pixels in the intermediate image. We use this fact to construct a scanline-based algorithm that traverses the volume and the intermediate image in synchrony, taking advantage of the spatial coherence present in both. We use spatial data structures based on run-length encoding for both the volume and the intermediate image. Our implementation running on an SGI Indigo workstation renders a 256³ voxel medical data set in one second. Our second extension is a shear-warp factorization for perspective viewing transformations, and we show how our rendering algorithm can support this extension. Third, we introduce a data structure for encoding spatial coherence in unclassified volumes (i.e. scalar fields with no precomputed opacity). When combined with our shear-warp rendering algorithm this data structure allows us to classify and render a 256³ voxel volume in three seconds. The method extends to support mixed volumes and geometry and is parallelizable.
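The run-length-encoded traversal mentioned above can be sketched as storing each voxel scanline as alternating runs of transparent voxels (to skip) and non-transparent samples (to composite); the encoding format here is a simplification for illustration, not the paper's data structure:

```python
# Sketch of run-length encoding a voxel scanline so the renderer can
# skip empty space while traversing volume and image in synchrony.

def rle_encode(scanline, threshold=0.0):
    """Encode a scanline as (skip_count, [non-transparent samples]) runs."""
    runs, skip, samples = [], 0, []
    for v in scanline:
        if v <= threshold:
            if samples:                   # close the current opaque run
                runs.append((skip, samples))
                skip, samples = 0, []
            skip += 1
        else:
            samples.append(v)
    runs.append((skip, samples))
    return runs

print(rle_encode([0, 0, 0.5, 0.7, 0, 0, 0, 0.2]))
# -> [(2, [0.5, 0.7]), (3, [0.2])]
```

Because voxel rows align with intermediate-image rows under the shear, the same kind of run structure on the image side lets fully opaque pixel runs be skipped as well.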

1,249 citations

References
Proceedings ArticleDOI
John Warnock, Douglas K. Wyatt
01 Jul 1982
TL;DR: An imaging model and an associated implementation strategy are described that integrate scanned images, text, and synthetically generated graphics into a uniform device-independent metaphor, and isolate the device-dependent portions of the implementation in a small set of primitives, thereby minimizing the implementation cost for additional devices.
Abstract: In building graphic systems for use with raster devices, it is difficult to develop an intuitive, device independent model of the imaging process, and to preserve that model over a variety of device implementations. This paper describes an imaging model and an associated implementation strategy that: 1. integrates scanned images, text, and synthetically generated graphics into a uniform device independent metaphor; 2. isolates the device dependent portions of the implementation to a small set of primitives, thereby minimizing the implementation cost for additional devices; 3. has been implemented for binary, grey-scale, and full color raster display systems, and for high resolution black and white printers and color raster printers.

77 citations

Proceedings ArticleDOI
Bruce Wallace
01 Aug 1981
TL;DR: This paper presents several computer methods for assisting in the production of cartoon animation, both to reduce expense and to improve the overall quality.
Abstract: The task of assembling drawings and backgrounds together for each frame of an animated sequence has always been a tedious undertaking using conventional animation camera stands and has contributed to the high cost of animation production. In addition, the physical limitations that these camera stands place on the manipulation of the individual artwork levels restricts the total image-making possibilities afforded by traditional cartoon animation. Documents containing all frame assembly information must also be maintained. This paper presents several computer methods for assisting in the production of cartoon animation, both to reduce expense and to improve the overall quality. Merging is the process of combining levels of artwork into a final composite frame using digital computer graphics. The term “level” refers to a single painted drawing (cel) or background. A method for the simulation of any hypothetical animation camera set-up is introduced. A technique is presented for reducing the total number of merges by retaining merged groups consisting of individual levels which do not change over successive frames. Lastly, a sequence-editing system which controls precise definition of an animated sequence, is described. Also discussed is the actual method for merging any two adjacent levels and several computational and storage optimizations to speed the process.
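The merge-reduction technique described above can be sketched as grouping consecutive unchanged levels so each group is merged once and its result reused across frames; the function below is a simplified illustration, not the paper's algorithm:

```python
# Sketch of merge planning: consecutive levels that do not change between
# frames are grouped so their merged result can be cached and reused,
# leaving only the changed levels to be re-merged each frame.

def plan_merges(changed):
    """Group level indices, back to front.

    changed: list of bools, True if that level changed this frame.
    Returns a list of groups; unchanged runs stay together, changed
    levels form single-level groups.
    """
    groups, run = [], []
    for i, ch in enumerate(changed):
        if ch:
            if run:
                groups.append(run)
                run = []
            groups.append([i])
        else:
            run.append(i)
    if run:
        groups.append(run)
    return groups

print(plan_merges([False, False, True, False, False, False]))
# -> [[0, 1], [2], [3, 4, 5]]
```

With one animated cel between two static runs, a frame costs two merges against cached results instead of one merge per level.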

77 citations

Proceedings ArticleDOI
F. C. Crow
01 Jul 1982
TL;DR: A two-level shape data structure supports this execution environment, allowing top-level priority decisions which avoid comparisons between surface elements from non-interfering objects during image construction.
Abstract: A supervisory process is used to distribute picture-generation tasks to heterogeneous subprocesses. Significant advantages accrue by tailoring the subprocesses to their tasks. In particular, scan conversion algorithms tailored to different surface types may be used in the same image, a changing mixture of processors is possible, and, by multiprogramming, a single processor may be used more effectively. A two-level shape data structure supports this execution environment, allowing top-level priority decisions which avoid comparisons between surface elements from non-interfering objects during image construction.

75 citations

Proceedings ArticleDOI
01 Aug 1981
TL;DR: A set of utility routines for 3-D shaded display is presented that allows the creation of raster scan display systems for various experimental and production applications; its principal feature is a flexible scan conversion processor that can simultaneously manage several different object types.
Abstract: We describe a set of utility routines for 3-D shaded display which allow us to create raster scan display systems for various experimental and production applications. The principal feature of this system is a flexible scan conversion processor that can simultaneously manage several different object types. Communications between the scan conversion routine and processes which follow it in the display pipeline can be routed through a structure called a “span buffer” which retains some of the high resolution, three dimensional data of the object description and at the same time has the characteristics of a run length encoded image.
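The "span buffer" described above can be sketched as per-scanline runs of pixels that still carry depth at their endpoints; the field names and interpolation below are assumptions for illustration, not the paper's record layout:

```python
# Rough sketch of a span-buffer record: a run-length-encoded scanline
# entry that retains 3-D (depth) information, so later pipeline stages
# can resolve visibility without a full-resolution depth buffer.
from dataclasses import dataclass

@dataclass
class Span:
    x_start: int      # first covered pixel on the scanline
    x_end: int        # last covered pixel (inclusive)
    z_start: float    # depth at the left end of the span
    z_end: float      # depth at the right end of the span
    color: tuple      # shaded color for this run

def depth_at(span, x):
    """Linearly interpolate depth across the span, recovering per-pixel
    3-D information from the run-length representation."""
    if span.x_end == span.x_start:
        return span.z_start
    t = (x - span.x_start) / (span.x_end - span.x_start)
    return (1 - t) * span.z_start + t * span.z_end
```

Storing depth only at span endpoints keeps the run-length compactness of an encoded image while still allowing later stages to compare spans from different object types.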

38 citations