scispace - formally typeset
Topic

Alpha compositing

About: Alpha compositing is the process of combining an image with a background using per-pixel transparency (an alpha channel). Over the lifetime of the topic, 482 publications have been published, receiving 11,035 citations. The topic is also known as: alpha blend & alpha channel.
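The core operation behind alpha blending is the Porter-Duff "over" operator: a foreground pixel is placed over a background pixel, weighted by their alpha values. A minimal sketch in Python (the function name and tuple layout are illustrative, not from any paper on this page):

```python
def over(fg, bg):
    # fg, bg: (r, g, b, a) tuples with straight (non-premultiplied)
    # alpha, all channels in [0, 1]; fg is drawn over bg.
    a_out = fg[3] + bg[3] * (1.0 - fg[3])
    if a_out == 0.0:
        return (0.0, 0.0, 0.0, 0.0)  # fully transparent result
    # blend colors weighted by each layer's contribution, then
    # un-premultiply by the combined alpha
    rgb_out = tuple(
        (fg[i] * fg[3] + bg[i] * bg[3] * (1.0 - fg[3])) / a_out
        for i in range(3)
    )
    return rgb_out + (a_out,)
```

For example, a half-transparent red over opaque blue gives an opaque purple: `over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0))` yields `(0.5, 0.0, 0.5, 1.0)`.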


Papers
Patent
Aseem Agarwala
12 Dec 2007
TL;DR: In this patent, a method is presented for stitching the seams between aligned source images in an assembled composite image to form a final composite image; the stitching performs a gradient-domain compositing in which individual pixel values are calculated along the seams and interpolated away from them.
Abstract: Systems, methods, and apparatus, including computer program products, for forming composite images using gradient-domain compositing are provided. In some implementations, a method is provided. The method includes receiving two or more source images and aligning the received source images to form an assembled composite image. The method also includes stitching the seams between the aligned source images in the assembled composite image to form a final composite image. The stitching includes performing a gradient domain compositing. The gradient domain compositing uses a subset of pixels in the assembled composite image including calculating individual pixel values along the seams and interpolating pixel values away from the seams.
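The idea of compositing in the gradient domain can be illustrated in one dimension: take each source's gradients on its own side of the seam, then reintegrate to recover absolute pixel values, so intensity offsets between the sources disappear. This is only a 1-D sketch under simplifying assumptions (the method above works in 2-D, where reintegration requires solving a Poisson-type system); the function name is illustrative:

```python
def gradient_composite_1d(a, b, seam):
    # a, b: two scanlines (lists of floats) covering the same row.
    # a supplies gradients left of the seam index, b supplies
    # gradients at and to the right of it.
    grads = [a[i + 1] - a[i] for i in range(seam)] + \
            [b[i + 1] - b[i] for i in range(seam, len(b) - 1)]
    out = [a[0]]  # anchor absolute values at a's left boundary
    for g in grads:
        out.append(out[-1] + g)  # reintegrate the composited gradients
    return out
```

With `a = [0, 1, 2, 3]` and `b = [10, 11, 12, 13]` (identical gradients but a large intensity offset), `gradient_composite_1d(a, b, 2)` returns `[0, 1, 2, 3]`: the seam is invisible despite the offset between the sources.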

35 citations

Patent
Atsushi Kashitani, Nakao Toshiyasu
20 Jan 2000
TL;DR: In this paper, the rotational center of the camera is used as the center of projection when compositing the plurality of partial images; the projection point of each pixel is defined as the intersection of the projection surface established in space with a straight line that passes through the rotational center and is parallel to the line connecting the camera's viewpoint to that pixel in the image plane.
Abstract: When compositing a plurality of acquired partial images, the effects of differences in viewpoint contained in the partial images can be reduced, and a high-quality wide-viewfield image is inputted. An image input apparatus photographs parts of a field while altering the photographic direction by rotating a camera, projects the plurality of partial images so obtained onto a projection surface, composites them on a composite image surface based on the results of this projection, and inputs the composite image. The apparatus is provided with an image compositing mechanism 009, which employs the rotational center of the camera as the center of projection during the compositing of the plurality of partial images, and which employs, as the projection point of a pixel of a partial image, the intersection point between the projection surface established in space and a straight line that is parallel to the line connecting the viewpoint of the camera to that pixel in the image plane and that passes through the rotational center of the camera.
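The geometric construction above reduces to intersecting a line with the projection surface. A minimal sketch, assuming a planar projection surface for simplicity (the patent's surface need not be a plane, and all names here are illustrative):

```python
def project_through_center(center, pixel_dir, plane_point, plane_normal):
    # Intersect the line through `center` (the camera's rotational
    # center) with direction `pixel_dir` (parallel to the
    # viewpoint-to-pixel ray) against the plane defined by
    # `plane_point` and `plane_normal`. All arguments are 3-tuples.
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    denom = dot(pixel_dir, plane_normal)
    if denom == 0.0:
        raise ValueError("ray is parallel to the projection plane")
    t = dot(tuple(p - c for p, c in zip(plane_point, center)),
            plane_normal) / denom
    return tuple(c + t * d for c, d in zip(center, pixel_dir))
```

For instance, a ray from the origin along (1, 0, 1) hits the plane z = 2 at (2, 0, 2).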

34 citations

Journal ArticleDOI
TL;DR: This paper proposes novel GPU-accelerated algorithms for interactive point-based rendering (PBR) and high-quality shading of transparent point surfaces, and presents different grouping algorithms for off-line and on-line processing.

34 citations

Journal ArticleDOI
TL;DR: This paper exploits a new adaptation of the dual depth peeling technique to produce correct volume image data and to simultaneously render the resulting volume data using 3D transfer functions into the final 2D image.
Abstract: This paper describes GL4D, an interactive system for visualizing 2-manifolds and 3-manifolds embedded in four Euclidean dimensions and illuminated by 4D light sources. It is a tetrahedron-based rendering pipeline that projects geometry into volume images, an exact parallel to the conventional triangle-based rendering pipeline for 3D graphics. Novel features include GPU-based algorithms for real-time 4D occlusion handling and transparency compositing; we thus enable a previously impossible level of quality and interactivity for exploring lit 4D objects. The 4D tetrahedrons are stored in GPU memory as vertex buffer objects, and the vertex shader is used to perform per-vertex 4D modelview transformations and 4D-to-3D projection. The geometry shader extension is utilized to slice the projected tetrahedrons and rasterize the slices into individual 2D layers of voxel fragments. Finally, the fragment shader performs per-voxel operations such as lighting and alpha blending with previously computed layers. We account for 4D voxel occlusion along the 4D-to-3D projection ray by supporting a multi-pass back-to-front fragment composition along the projection ray; to accomplish this, we exploit a new adaptation of the dual depth peeling technique to produce correct volume image data and to simultaneously render the resulting volume data using 3D transfer functions into the final 2D image. Previous CPU implementations of the rendering of 4D-embedded 3-manifolds could not perform either the 4D depth-buffered projection or manipulation of the volume-rendered image in real-time; in particular, the dual depth peeling algorithm is a novel GPU-based solution to the real-time 4D depth-buffering problem. GL4D is implemented as an integrated OpenGL-style API library, so that the underlying shader operations are as transparent as possible to the user.
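The back-to-front fragment composition described above is the standard "over" accumulation along a ray: starting from a transparent black buffer, each fragment is blended over the running result, farthest first. A minimal CPU sketch of that accumulation (names illustrative; GL4D performs this per voxel fragment on the GPU):

```python
def composite_back_to_front(layers):
    # layers: RGBA fragments along one projection ray, ordered
    # farthest-first; straight alpha, all channels in [0, 1].
    # Starts from a transparent black buffer and applies the
    # "over" blend for each successive fragment.
    r = g = b = a = 0.0
    for lr, lg, lb, la in layers:
        r = lr * la + r * (1.0 - la)
        g = lg * la + g * (1.0 - la)
        b = lb * la + b * (1.0 - la)
        a = la + a * (1.0 - la)
    return (r, g, b, a)
```

An opaque blue fragment behind a half-transparent red one composites to opaque purple: `composite_back_to_front([(0, 0, 1, 1.0), (1, 0, 0, 0.5)])` yields `(0.5, 0.0, 0.5, 1.0)`.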

33 citations

Patent
20 Nov 2006
TL;DR: In this patent, a system and method for compositing 3D images is presented that combines at least a portion of two or more images having 3D properties to create a single 3D image.
Abstract: A system and method for compositing 3D images combines at least a portion of two or more images having 3D properties to create a 3D image. The system and method of the present disclosure provide for acquiring at least two three-dimensional (3D) images (202, 204); obtaining metadata (e.g., lighting, geometry, and object information) relating to the at least two 3D images (206, 208); mapping the metadata of the at least two 3D images into a single 3D coordinate system; and compositing a portion of each of the at least two 3D images into a single 3D image (214). The single 3D image can be rendered into a desired format (e.g., a stereo image pair) (218). The system and method can associate the rendered output with relevant metadata (e.g., interocular distance for stereo image pairs) (218).

33 citations


Network Information
Related Topics (5)
Rendering (computer graphics): 41.3K papers, 776.5K citations, 77% related
Mobile device: 58.6K papers, 942.8K citations, 72% related
Mobile computing: 51.3K papers, 1M citations, 71% related
User interface: 85.4K papers, 1.7M citations, 70% related
Feature (computer vision): 128.2K papers, 1.7M citations, 70% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2022  1
2021  9
2020  8
2019  13
2018  21
2017  23