scispace - formally typeset

Showing papers on "Alpha compositing published in 2006"


Journal ArticleDOI
TL;DR: A compositing procedure based on mathematical morphology and its marker-controlled segmentation paradigm is proposed to position seams along salient image structures so as to diminish their visibility in the output mosaic even in the absence of radiometric corrections or blending procedures.
Abstract: Image mosaicking can be defined as the registration of two or more images that are then combined into a single image. Once the images have been registered to a common coordinate system, the problem amounts to the definition of a selection rule to output a unique value for all those pixels that are present in more than one image. This process is known as image compositing. In this paper, we propose a compositing procedure based on mathematical morphology and its marker-controlled segmentation paradigm. Its aim is to position seams along salient image structures so as to diminish their visibility in the output mosaic even in the absence of radiometric corrections or blending procedures. We also show that it is suited to the seamless minimization of undesirable transient objects occurring in the regions where two or more images overlap. The proposed methodology and algorithms are illustrated for the composition of satellite images minimizing cloud cover.

102 citations
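The paper positions seams with marker-controlled morphological segmentation; as a much simpler illustration of the same selection-rule idea, the sketch below (plain NumPy, grayscale images, and a dynamic-programming seam rather than the authors' watershed machinery, so every detail here is an illustrative assumption) cuts the overlap along a path where the two registered images already agree:

```python
import numpy as np

def dp_seam(cost):
    """Minimal-cost vertical seam through a 2-D cost map (dynamic programming).
    Returns, for each row, the column where the seam lies."""
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam

def composite_with_seam(left, right):
    """Take pixels from `left` to the left of the seam and from `right`
    at and beyond it; the seam follows low |left - right| cost, so it
    hugs regions where the two registered images already agree."""
    seam = dp_seam(np.abs(left.astype(float) - right.astype(float)))
    out = left.copy()
    for y, x in enumerate(seam):
        out[y, x:] = right[y, x:]
    return out
```

Because the seam runs through pixels where the inputs coincide, no radiometric correction or feathering is needed for it to be invisible, which is the effect the paper pursues with far more sophisticated morphological tools.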


Journal ArticleDOI
TL;DR: To provide visually meaningful, high-level control over the compositing process, this work introduces three novel image blending operators that are designed to preserve key visual characteristics of their inputs.
Abstract: Linear interpolation is the standard image blending method used in image compositing. By averaging in the dynamic range, it reduces contrast and visibly degrades the quality of composite imagery. We demonstrate how to correct linear interpolation to resolve this longstanding problem. To provide visually meaningful, high-level control over the compositing process, we introduce three novel image blending operators that are designed to preserve key visual characteristics of their inputs. Our contrast preserving method applies a linear color mapping to recover the contrast lost due to linear interpolation. Our salience preserving method retains the most informative regions of the input images by balancing their relative opacity with their relative saliency. Our color preserving method extends homomorphic image processing by establishing an isomorphism between the image colors and the real numbers, allowing any mathematical operation defined on real numbers to be applied to colors without losing its algebraic properties or mapping colors out of gamut. These approaches to image blending have artistic uses in image editing and video production as well as technical applications such as image morphing and mipmapping. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation

59 citations
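The contrast loss the abstract attributes to linear interpolation is easy to reproduce numerically. The sketch below shows plain linear blending plus a naive stand-in for a contrast-preserving correction (a linear map around the mean; the paper's actual color mapping is not reproduced here, so treat this as an assumption-laden illustration):

```python
import numpy as np

def lerp(a, b, t=0.5):
    """Standard linear-interpolation blend: averaging in the dynamic range."""
    return (1.0 - t) * a + t * b

def contrast_preserving_lerp(a, b, t=0.5):
    """Hypothetical stand-in for a contrast-preserving correction (not the
    paper's exact mapping): apply a linear map around the blend's mean so
    its standard deviation matches the interpolated input deviations."""
    m = lerp(a, b, t)
    target_sd = (1.0 - t) * a.std() + t * b.std()
    sd = m.std()
    if sd > 0:
        m = m.mean() + (m - m.mean()) * (target_sd / sd)
    return m
```

For two decorrelated inputs, the plain blend's standard deviation (a crude contrast proxy) drops well below either input's, and the corrected blend restores it.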


Patent
31 Aug 2006
TL;DR: In this paper, a controller is used to dynamically control rendering quality of an output image when the device is in a reduced power mode based on data representing a desired runtime length of an application.
Abstract: A device includes a controller that is operative to dynamically control rendering quality of an output image when the device is in a reduced power mode based on data representing a desired runtime length of an application. Memory containing data representing quality of rendering control information may be utilized by the controller to control graphics processing circuitry to change a quality of graphics rendering based on the quality of rendering control information. The quality of control information may include, by way of example, and not limitation, data representing a number of vertices per object to use for rendering objects, a texture size to use per frame, a degree or type of anti-aliasing to employ, whether to use alpha blending, a tessellation level to employ, and playback frame rate information. A user interface may be employed that provides a selectable desired application runtime duration setting that is used when the device or portion of the device is in a low power mode. The controller uses the quality of rendering control information to dynamically control the rendering quality based on the selected desired application runtime duration set through the user interface.

45 citations
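The patent's controller trades rendering quality against a desired runtime. A toy version of that decision logic might look like the following (every name, quality setting, and power figure here is invented for illustration; the patent does not specify this algorithm):

```python
def choose_quality(battery_wh, desired_hours, levels):
    """Pick the highest-quality rendering level whose estimated power
    draw still meets the desired runtime; fall back to the cheapest
    level if none fits. (Hypothetical controller logic.)"""
    budget_w = battery_wh / desired_hours
    feasible = [q for q in levels if q["watts"] <= budget_w]
    if feasible:
        return max(feasible, key=lambda q: q["watts"])
    return min(levels, key=lambda q: q["watts"])

# Illustrative quality levels mirroring the patent's knobs:
# vertices per object, texture size, alpha blending on/off, frame rate.
LEVELS = [
    {"name": "high", "watts": 4.0, "alpha_blending": True,  "fps": 60},
    {"name": "med",  "watts": 2.0, "alpha_blending": True,  "fps": 30},
    {"name": "low",  "watts": 1.0, "alpha_blending": False, "fps": 15},
]
```

A 10 Wh battery with a 5-hour desired runtime yields a 2 W budget, so the controller would select the "med" level and, per the patent, apply its settings (here, keeping alpha blending but halving the frame rate).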


Proceedings ArticleDOI
30 Jul 2006
TL;DR: This work proposes a new algorithm that integrates matting and compositing into a single optimization process, composing foreground elements onto a new background more efficiently and with fewer artifacts than previous approaches.
Abstract: Recent work in matting, hole filling, and compositing allows image elements to be mixed in a new composite image. Previous algorithms for matting foreground elements have assumed that the new background for compositing is unknown. We show that, if the new background is known, the matting algorithm has more freedom to create a successful matte by simultaneously optimizing the matting and compositing operations. We propose a new algorithm that integrates matting and compositing into a single optimization process. The system is able to compose foreground elements onto a new background more efficiently and with fewer artifacts than previous approaches. In our examples, we show how one can enlarge the foreground while maintaining the wide-angle view of the background. We also demonstrate composing a foreground element on top of similar backgrounds to help remove unwanted portions of the background or to re-scale or re-arrange the composite. We compare and contrast our method with a number of previous matting and compositing systems.

41 citations
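Both matting and compositing rest on the classic compositing equation C = αF + (1 − α)B. The minimal sketch below states it in NumPy and inverts it for the matting-then-recompositing pipeline the abstract describes (this is the textbook equation only, not the paper's joint optimization):

```python
import numpy as np

def composite(fg, bg, alpha):
    """The compositing equation C = alpha*F + (1 - alpha)*B.
    `alpha` (H, W, 1) broadcasts over the colour channels."""
    return alpha * fg + (1.0 - alpha) * bg

def recover_foreground(comp, bg_old, alpha, eps=1e-6):
    """Invert the equation where alpha > 0: F = (C - (1 - a)*B) / a.
    The old background must be known, as in the paper's setting."""
    return (comp - (1.0 - alpha) * bg_old) / np.maximum(alpha, eps)
```

Recovering F from a composite over a known old background and re-running `composite` against the new background is exactly the two-stage pipeline the paper improves on by solving both steps at once.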


Journal ArticleDOI
TL;DR: In this article, a high-speed 3D graphics SoC for consumer applications is presented, in which a 166-MHz 3-D graphics full-pipeline engine delivering 33 Mvertices/s and 1.3 Gtexels/s, a 333-MHz ARM11 RISC processor, and video composition IPs are integrated on a single chip.
Abstract: A high-speed three-dimensional (3-D) graphics SoC for consumer applications is presented. A 166-MHz 3-D graphics full-pipeline engine with performance of 33 Mvertices/s and 1.3 Gtexels/s, a 333-MHz ARM11 RISC processor, and video composition IPs are integrated together on a single chip. The geometry part of the 3-D graphics IP provides full programmability at the vertex and triangle level, and two-level multi-texturing with trilinear MIPMAP filtering is realized in the rasterization part. Per-pixel effects such as fog, alpha blending, and the stencil test are also implemented in the proposed 3-D graphics IP. The rasterization architecture is designed to reduce external memory accesses to achieve peak performance. The chip is fabricated in 0.13 μm CMOS technology and its area is 7.1 × 7.0 mm².

37 citations


Patent
04 Apr 2006
TL;DR: In this paper, a method for modifying selected regions of a target image based on selected regions of a source image is proposed, which can produce fun effects such as moving the eyes and lips of an image of the Mona Lisa.
Abstract: A method for modifying selected regions of a target image based on selected regions of a source image. In one embodiment, facial features are detected in a video image from a webcam. One or more of those facial features are selected and superimposed on the target image. Resizing and alpha blending techniques are used to blend the source portions into the target image. For example, this can produce fun effects such as moving the eyes and lips of an image of the Mona Lisa.

37 citations
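The patent's core pipeline is: resize the source region, then alpha-blend it into the target. A bare-bones NumPy sketch of that step follows (nearest-neighbour resizing and a caller-supplied soft mask are simplifying assumptions; the patent does not commit to either):

```python
import numpy as np

def resize_nn(img, h, w):
    """Nearest-neighbour resize: a stand-in for any real resampler."""
    ys = np.arange(h) * img.shape[0] // h
    xs = np.arange(w) * img.shape[1] // w
    return img[ys][:, xs]

def paste_region(target, source, mask, top, left):
    """Resize `source` to the mask's size, then alpha-blend it into
    `target` at (top, left) using `mask` as a soft per-pixel alpha."""
    h, w = mask.shape
    src = resize_nn(source, h, w)
    out = target.copy()
    patch = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (
        mask[..., None] * src + (1 - mask[..., None]) * patch
    )
    return out
```

A feathered mask (values ramping from 1 in the feature's centre to 0 at its border) is what keeps the pasted eyes or lips from showing a hard edge.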


Patent
20 Nov 2006
TL;DR: In this article, the authors present a system and method for compositing 3D images that combines at least a portion of two or more images having 3D properties to create a single 3D image.
Abstract: A system and method for compositing 3D images that combines at least a portion of two or more images having 3D properties to create a 3D image. The system and method of the present disclosure provide for acquiring at least two three-dimensional (3D) images (202, 204), obtaining metadata (e.g., lighting, geometry, and object information) relating to the at least two 3D images (206, 208), mapping the metadata of the at least two 3D images into a single 3D coordinate system, and compositing a portion of each of the at least two 3D images into a single 3D image (214). The single 3D image can be rendered into a desired format (e.g., a stereo image pair) (218). The system and method can associate the rendered output with relevant metadata (e.g., interocular distance for stereo image pairs) (218).

33 citations


Proceedings Article
01 Jan 2006
TL;DR: A high-speed three-dimensional (3-D) graphics SoC for consumer applications is presented; its geometry part provides full programmability at the vertex and triangle level, and two-level multi-texturing with trilinear MIPMAP filtering is realized in the rasterization part.
Abstract: A high-speed three-dimensional (3-D) graphics SoC for consumer applications is presented. A 166-MHz 3-D graphics full-pipeline engine with performance of 33 Mvertices/s and 1.3 Gtexels/s, a 333-MHz ARM11 RISC processor, and video composition IPs are integrated together on a single chip. The geometry part of the 3-D graphics IP provides full programmability at the vertex and triangle level, and two-level multi-texturing with trilinear MIPMAP filtering is realized in the rasterization part. Per-pixel effects such as fog, alpha blending, and the stencil test are also implemented in the proposed 3-D graphics IP. The rasterization architecture is designed to reduce external memory accesses to achieve peak performance. The chip is fabricated in 0.13 μm CMOS technology and its area is 7.1 × 7.0 mm².

30 citations


Patent
19 Nov 2006
TL;DR: A technique for image compositing that lets a user interactively select the best image of an object, such as a person, from a set of images and see how it will be assembled into a final photomontage.
Abstract: A technique for image compositing which allows a user to interactively select the best image of an object, such as for example a person, from a set of images and see how it will be assembled into a final photomontage. A user can select a source image from the set of images as an initial composite image. A region, representing a set of pixels to be replaced, is chosen by the user in the composite image. The same corresponding region is shown in one or more source images, one of which will be selected by the user for painting into the composite image. The technique optimizes the selection of pixels around the user-chosen region or regions for cut points that will be least likely to show seams where the source images are merged in the composite image.

20 citations


Journal ArticleDOI
Philip Willis1
TL;DR: This work provides a unified explanation of pre‐multiplied and non pre‐ multiplied colours, including negative coordinates and infinite points in colour space, and unifies the three existing significant compositing models in a single framework with a physically‐plausible energy basis.
Abstract: Alpha colours were introduced for image compositing, using a pixel coverage model. Algebraically they resemble homogeneous coordinates, widely used in projective geometry calculations. We show why this is the case. This allows us to extend alpha beyond compositing, to all colour calculations regardless of whether pixels are involved and without the need for a coverage model. Our approach includes multi-channel spectral calculations and removes the need for 7-channel and 6-channel alpha colour operations. It provides a unified explanation of pre-multiplied and non-pre-multiplied colours, including negative coordinates and infinite points in colour space. It permits filter and illumination operations. It unifies the three existing significant compositing models in a single framework. It achieves this with a physically-plausible energy basis.

11 citations
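The homogeneous-coordinate analogy is easiest to see in code: premultiplying by alpha plays the role of scaling a homogeneous point by its weight, the standard Porter-Duff "over" becomes a plain linear combination in that representation, and dividing alpha back out is the analogue of normalising a homogeneous coordinate. A small sketch (textbook premultiplied-alpha identities, not the paper's full framework):

```python
import numpy as np

def premultiply(rgb, a):
    """(r, g, b) with alpha a -> (a*r, a*g, a*b, a): like a homogeneous
    point scaled by its weight; compositing becomes linear here."""
    return np.append(rgb * a, a)

def over(top, bot):
    """Porter-Duff 'over' on premultiplied RGBA: top + (1 - a_top) * bot,
    applied uniformly to all four components."""
    return top + (1.0 - top[3]) * bot

def unpremultiply(p):
    """Divide out alpha, like normalising a homogeneous coordinate;
    alpha 0 is the 'point at infinity' with no finite normal form."""
    if p[3] == 0:
        return p[:3] * 0.0, 0.0
    return p[:3] / p[3], float(p[3])
```

Note that `over` needs no special case for the alpha component: in premultiplied form, colour and coverage obey the same linear rule, which is the algebraic resemblance the paper formalises.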


Patent
25 May 2006
TL;DR: In this article, a computational architecture in which the image layers are packetized and streamed through processors can be easily scaled so to handle many image layers and high resolutions, but the available computational resources limit the images and videos that can be produced.
Abstract: Images and video can be produced by compositing or alpha blending a group of image layers or video layers. Increasing resolution or the number of layers results in increased computational demands. As such, the available computational resources limit the images and videos that can be produced. A computational architecture in which the image layers are packetized and streamed through processors can be easily scaled so to handle many image layers and high resolutions. The image layers are packetized to produce packet streams. The packets in the streams are received, placed in queues, and processed. For alpha blending, ingress queues receive the packetized image layers which are then z sorted and sent to egress queues. The egress queue packets are alpha blended to produce an output image or video.
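The patent's dataflow (packetize layer samples, z-sort per pixel, blend back-to-front) can be sketched serially in a few lines; the `Packet` fields and the single-threaded grouping below are illustrative assumptions standing in for the patent's queued, multi-processor streams:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """One layer sample for one pixel: colour, alpha, and depth (z)."""
    pixel: int
    z: float
    rgb: tuple
    alpha: float

def blend_streams(packets):
    """Group packets per pixel, z-sort back-to-front, then alpha-blend.
    A serial stand-in for the ingress-queue / egress-queue pipeline."""
    by_pixel = {}
    for p in packets:
        by_pixel.setdefault(p.pixel, []).append(p)
    out = {}
    for pixel, layers in by_pixel.items():
        layers.sort(key=lambda p: p.z, reverse=True)  # farthest first
        rgb, a = (0.0, 0.0, 0.0), 0.0
        for p in layers:  # each nearer layer goes 'over' the running result
            rgb = tuple(p.alpha * c + (1 - p.alpha) * r
                        for c, r in zip(p.rgb, rgb))
            a = p.alpha + (1 - p.alpha) * a
        out[pixel] = (rgb, a)
    return out
```

Because each pixel's blend depends only on its own packets, the per-pixel loops are independent, which is what makes the packetized design scale across processors.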

Proceedings ArticleDOI
25 Jun 2006
TL;DR: The current mathematical theory of the alpha channel and of alpha estimation in image matting, as used in alpha mapping, is introduced, and it is shown that a great improvement in real-time performance can be obtained for diversified 3D objects and scenes.
Abstract: The display of three-dimensional graphics is the most crucial part of scene simulation, and alpha mapping plays an important role in the synthesis of three-dimensional graphics. Alpha mapping is a texture-mapping technique with an alpha channel for simulating transparency effects caused by patterned irregularities on otherwise locally smooth surfaces. In this paper, the current mathematical theory of the alpha channel and of alpha estimation in image matting, as used in alpha mapping, is introduced. Based on the Object-Oriented Graphics Rendering Engine (OGRE), alpha mapping was applied in the scene simulation system of a launch vehicle and achieved excellent results in both rendering time and quality. It is shown that a great improvement in real-time performance can be obtained for the diversified 3D objects and scene.
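Alpha mapping as described here is an RGBA texture whose alpha channel cuts irregular silhouettes (holes, fringes) into otherwise smooth geometry. A CPU-side NumPy sketch of the idea follows (a shader in OGRE would do this per fragment; the threshold test and blend below are generic assumptions, not the paper's code):

```python
import numpy as np

def alpha_map(surface, texture, threshold=0.5):
    """Apply an RGBA texture to a flat surface: texels whose alpha falls
    below `threshold` leave the surface showing through, simulating
    holes or fringed edges on otherwise locally smooth geometry."""
    rgb, a = texture[..., :3], texture[..., 3:]
    keep = (a >= threshold).astype(float)
    return keep * (a * rgb + (1 - a) * surface) + (1 - keep) * surface
```

Discarding low-alpha texels (the `keep` mask, an alpha-test) avoids depth-sorting artifacts, while the retained texels are still alpha-blended for soft edges.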

Book ChapterDOI
01 Jan 2006
TL;DR: It seems likely that “lossy” HDR formats will soon be introduced that offer much better compression, on a par with existing JPEG images, which will remove an important barrier to HDR adoption in markets such as digital photography and video and in Web-based applications such as virtual reality tours.
Abstract: The principal benefit of using scene-referred high-dynamic-range (HDR) images is their independence from the display process. A properly designed HDR format covers the full range and sensitivity of human vision and is thus prepared for any future display technology intended for humans. Many HDR formats offer another benefit, through additional range and accuracy, of permitting complex image operations without exposing quantization and range errors typical of more conventional low-dynamic-range (LDR) formats. The cost of this additional range and accuracy is modest—similar to including an extra alpha channel in an LDR format. This burden can be further reduced in cases in which accuracy is less critical (i.e., when multiple image read/edit/write cycles are not expected). All of the existing HDR file formats are “lossless” in the sense that they do not lose information after the initial encoding, and repeated reading and writing of the files do not result in further degradation. However, it seems likely that “lossy” HDR formats will soon be introduced that offer much better compression—on a par with existing JPEG images. This will remove an important barrier to HDR adoption in markets such as digital photography and video and in Web-based applications such as virtual reality tours. The JPEG-HDR “lossy” compression format is described as a preliminary work in this chapter.
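The "modest cost, similar to an extra alpha channel" point is concrete in shared-exponent encodings: Radiance-style RGBE spends 32 bits per pixel, like 8-bit RGBA, yet covers a vast dynamic range. A sketch of that encoding (the chapter surveys formats in general; RGBE specifics below are standard but stated here from general knowledge, not from this text):

```python
import math

def float_to_rgbe(r, g, b):
    """Radiance-style RGBE: three 8-bit mantissas sharing one 8-bit
    exponent, so HDR costs the same 32 bits/pixel as 8-bit RGBA."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    m, e = math.frexp(v)          # v = m * 2**e, with 0.5 <= m < 1
    scale = m * 256.0 / v
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_to_float(r8, g8, b8, e8):
    """Decode: each mantissa is scaled by 2**(e - 128) / 256."""
    if e8 == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e8 - 128 - 8)
    return (r8 * f, g8 * f, b8 * f)
```

The shared exponent is why such formats are "lossless" in the chapter's sense: re-reading and re-writing a file reproduces the same bytes, even though the initial quantization to 8-bit mantissas is itself approximate.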


Journal Article
Dai Shu-ling1
TL;DR: The algorithm makes extensive use of the Graphics Processing Unit (GPU) for its multi-texturing and programmability, and thus transfers considerable calculation from the CPU to the GPU.
Abstract: The prominent characteristic of an image with a depth-of-field (DOF) effect is that part of the image is sharp while the rest is blurry. Starting from this observation, in the computer simulation of DOF, an image of the three-dimensional scene is produced and stored as a texture, called the clear scene-texture. During imaging, the depth of each object is stored in the alpha channel by a vertex program. The clear scene-texture is then average-filtered several times to obtain a blurry scene-texture. Finally, the alpha value stored in the clear scene-texture is used as the coefficient for blending the two textures to achieve the DOF effect. The algorithm makes extensive use of the Graphics Processing Unit (GPU) for its multi-texturing and programmability, and thus transfers considerable calculation from the CPU to the GPU. The algorithm simulates the real-time DOF effect well and can be applied in VR systems.
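The described pipeline (sharp texture, repeatedly averaged blurry texture, depth-driven blend coefficient) translates directly into a CPU sketch; the focal-distance-to-alpha mapping below is an assumption, since the abstract only says depth is stored in the alpha channel:

```python
import numpy as np

def box_blur(img, passes=3):
    """Crude repeated 5-tap average: a stand-in for the texture that is
    'average-filtered several times' to make the blurry scene-texture."""
    out = img.astype(float)
    for _ in range(passes):
        p = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode="edge")
        out = (p[:-2, 1:-1] + p[2:, 1:-1] +
               p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
    return out

def depth_of_field(sharp, depth, focus, max_coc=1.0):
    """Per-pixel blend of the sharp and blurred textures; the blend
    coefficient (the 'alpha' a shader would read) grows with each
    pixel's distance from the focal depth."""
    alpha = np.clip(np.abs(depth - focus) / max_coc, 0.0, 1.0)[..., None]
    return (1 - alpha) * sharp + alpha * box_blur(sharp)
```

Pixels at the focal depth get alpha 0 and stay sharp; pixels beyond `max_coc` from it get alpha 1 and show the fully blurred texture, with a smooth ramp in between.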

01 Jul 2006
TL;DR: A new digital 3D image compositing method is proposed that generates a 3D space model of the scene from estimated geometric information and combines it with digital source images.
Abstract: Due to recent advances in digital technology, people are showing increased interest in film and video technology. Image compositing is at the core of digital-image-related multimedia technology, and various digital image compositing technologies are being developed. So far, digital compositing has made use of two general methods. The first uses motion-control cameras to precisely capture 3D camera-motion information. The second combines 3D models, built with a 3D graphics editing tool, with existing digital source images. However, when the object to be composited is a 2D photograph, digital compositing has great difficulty combining it properly with digital source images. In this paper, to overcome these difficulties, we propose a new digital 3D image compositing method that generates a 3D space model of the scene from estimated geometric information and combines it with digital source images. We generate the 3D space model by image-based modeling from a single ordinary 2D photograph. Therefore, we can generate high-quality compositing results easily and quickly from a 2D photograph, and it is possible to generate a more efficient and realistic composite image.