
Showing papers on "Alpha compositing published in 2020"


Journal ArticleDOI
TL;DR: A novel CS-based hyperspectral image fusion framework is presented that combines the structure tensor with a matting model; experiments demonstrate the potential of the proposed algorithm for preserving spectral information and enhancing spatial information.

15 citations


Posted Content
TL;DR: A neural network-based method is proposed for soft color segmentation that decomposes a given image into multiple layers in a single forward pass; it achieves proper assignment of colors amongst layers without the inference-speed bottleneck of iterative approaches.
Abstract: We address the problem of soft color segmentation, defined as decomposing a given image into several RGBA layers, each containing only homogeneous color regions. The resulting layers from decomposition pave the way for applications that benefit from layer-based editing, such as recoloring and compositing of images and videos. The current state-of-the-art approach for this problem is hindered by slow processing time due to its iterative nature, and consequently does not scale to certain real-world scenarios. To address this issue, we propose a neural network based method for this task that decomposes a given image into multiple layers in a single forward pass. Furthermore, our method separately decomposes the color layers and the alpha channel layers. By leveraging a novel training objective, our method achieves proper assignment of colors amongst layers. As a consequence, our method achieves promising quality without the inference-speed bottleneck of iterative approaches. Our thorough experimental analysis shows that our method produces qualitative and quantitative results comparable to previous methods while achieving a 300,000x speed improvement. Finally, we utilize our proposed method on several applications, and demonstrate its speed advantage, especially in video editing.
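Once an image is decomposed into RGBA layers, editing applications such as recoloring recombine the (possibly modified) layers with standard alpha compositing. As an illustrative sketch, not the paper's method, the non-premultiplied Porter-Duff "over" operator for a single pixel, and a helper that flattens a bottom-to-top layer stack, could look like this (the names `over` and `flatten` are ours):

```python
def over(fg_rgb, fg_a, bg_rgb, bg_a):
    """Porter-Duff 'over': composite one foreground RGBA pixel onto a
    background pixel. Colors are non-premultiplied floats in [0, 1]."""
    out_a = fg_a + bg_a * (1.0 - fg_a)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0), 0.0
    out_rgb = tuple(
        (fc * fg_a + bc * bg_a * (1.0 - fg_a)) / out_a
        for fc, bc in zip(fg_rgb, bg_rgb)
    )
    return out_rgb, out_a

def flatten(layers):
    """Composite a list of (rgb, alpha) layer pixels, bottom layer first."""
    rgb, a = layers[0]
    for layer_rgb, layer_a in layers[1:]:
        rgb, a = over(layer_rgb, layer_a, rgb, a)
    return rgb, a
```

Recoloring then amounts to editing a layer's RGB values before flattening, leaving the alpha layers untouched.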

13 citations


Journal ArticleDOI
TL;DR: The proposed system takes color images as input, embeds data using an alpha-blending method, and recovers from dual attacks using median filtering and pseudo-Zernike moments; it gives better results in terms of PSNR and MSE than other existing QWT-based systems.

11 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: Zhang et al. propose a neural network-based method for soft color segmentation that decomposes a given image into multiple layers in a single forward pass, decomposing the color layers and the alpha-channel layers separately.
Abstract: We address the problem of soft color segmentation, defined as decomposing a given image into several RGBA layers, each containing only homogeneous color regions. The resulting layers from decomposition pave the way for applications that benefit from layer-based editing, such as recoloring and compositing of images and videos. The current state-of-the-art approach for this problem is hindered by slow processing time due to its iterative nature, and consequently does not scale to certain real-world scenarios. To address this issue, we propose a neural network based method for this task that decomposes a given image into multiple layers in a single forward pass. Furthermore, our method separately decomposes the color layers and the alpha channel layers. By leveraging a novel training objective, our method achieves proper assignment of colors amongst layers. As a consequence, our method achieves promising quality without the inference-speed bottleneck of iterative approaches. Our thorough experimental analysis shows that our method produces qualitative and quantitative results comparable to previous methods while achieving a 300,000x speed improvement. Finally, we utilize our proposed method on several applications, and demonstrate its speed advantage, especially in video editing.

9 citations


Patent
22 Oct 2020
TL;DR: In this article, an ECU selects a first synthesis method when the distance between the vehicle (7) and an object (8) in the overlapping portion of the areas captured by the plurality of cameras (1) is less than or equal to a threshold value and the object (8) lies in the traveling direction of the vehicle.
Abstract: An ECU (2) generates a bird's-eye image including a vehicle (7) and the surroundings of the vehicle on the basis of a plurality of images captured by a plurality of cameras (1) mounted on the vehicle (7). A synthesis method selection unit (23) selects a first synthesis method when a distance between an object (8) existing in an overlapping portion of areas that are captured by the plurality of cameras (1) and the vehicle (7) is less than or equal to a threshold value and the object (8) exists in a traveling direction of the vehicle (7). With the first synthesis method, the overlapping portion is divided into two sets of a road surface image and an object image. Further, the images are synthesized by performing alpha blending for a region which corresponds to the object image in one of the two sets and corresponds to the road surface image in the other set, with an α value of the object image being set to 1 and an α value of the road surface image being set to 0, thereby generating the bird's-eye image.
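The blending step above can be sketched per pixel as follows; the function name and the RGB-tuple representation are illustrative assumptions, not the patent's implementation:

```python
def blend_region(object_px, road_px, alpha_obj=1.0):
    """Alpha-blend an object-image pixel over a road-surface pixel.

    With alpha_obj = 1.0, the road weight (1 - alpha_obj) is 0.0, so the
    object pixel fully replaces the road pixel in the overlapping region,
    as in the first synthesis method (object alpha 1, road-surface alpha 0).
    """
    return tuple(alpha_obj * o + (1.0 - alpha_obj) * r
                 for o, r in zip(object_px, road_px))
```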

Patent
30 Dec 2020
TL;DR: In this paper, the first destination layer pixels have associated alpha values, and the source and/or first destination layer pixels are converted to a first blending color format that is different from the first destination layer color format and the output color format of the display.
Abstract: In some examples, an apparatus obtains source layer pixels, such as those of a content image, and first destination layer pixels, such as those of a destination image. The first destination layer pixels have associated alpha values. The apparatus obtains information that indicates a first blending color format for the alpha values. The first blending color format is different from a first destination layer color format for the first destination layer pixels and an output color format for a display. The apparatus converts the source and/or first destination layer pixels to the first blending color format. The apparatus generates first alpha blended pixels based on alpha blending the source layer pixels with the first destination layer pixels using the associated alpha values. The apparatus provides, for display on the display, the first alpha blended pixels.

Patent
22 Oct 2020
TL;DR: In this article, a display controller reads out data of a first image 52a and a second image 52b from a frame buffer and, using a conversion formula that depends on the brightness characteristics of the images, converts the data into data in a blend space A having common characteristics (S10a).
Abstract: To suitably composite and easily display a plurality of images regardless of conditions. SOLUTION: A display controller reads out data of a first image 52a and a second image 52b from a frame buffer and, using a conversion formula that depends on the brightness characteristics of the images, converts the data into data in a blend space A having common characteristics (S10a). The display controller performs alpha blending of the converted data in the blend space A (S12), further converts the data into a space having characteristics suitable for a display (S14), and outputs the data to the display. SELECTED DRAWING: Figure 8
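The three steps above (S10a: convert both images into a common blend space, S12: alpha-blend there, S14: convert the result for the display) can be sketched per pixel as follows. The linear-light blend space and the simple gamma-2.2 transfer function are illustrative assumptions, not the patent's specific conversion formulas:

```python
def to_linear(c, gamma=2.2):
    """Decode a display-referred value in [0, 1] into a linear-light blend space."""
    return c ** gamma

def to_display(c, gamma=2.2):
    """Encode a linear-light value back into the display's transfer characteristic."""
    return c ** (1.0 / gamma)

def blend_in_common_space(px_a, px_b, alpha, gamma=2.2):
    """Convert two RGB pixels to a common linear blend space (S10a),
    alpha-blend them there (S12), then convert back for the display (S14)."""
    lin = [alpha * to_linear(a, gamma) + (1.0 - alpha) * to_linear(b, gamma)
           for a, b in zip(px_a, px_b)]
    return tuple(to_display(c, gamma) for c in lin)
```

Blending in a common space avoids the hue and brightness errors that arise when images with different transfer characteristics are mixed directly.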

Patent
28 May 2020
TL;DR: In this article, a method for video processing based on augmented reality comprises: acquiring an image; recognizing a target object in the image; obtaining a video file associated with the target object; recognizing the foreground and background portions of a video frame in the video file; configuring alpha channel values for pixels corresponding to the foreground portion and pixels corresponding to the background portion so as to make the background portion transparent; determining the position of the foreground portion in the image; and synthesizing the video frame with the image based on that position.
Abstract: Embodiments of the application provide a method, apparatus, and non-transitory computer-readable storage medium for video processing based on augmented reality. The method for video processing based on augmented reality comprises: acquiring an image; recognizing a target object in the image; obtaining a video file associated with the target object; recognizing a foreground portion and a background portion of a video frame in the video file; configuring alpha channel values for pixels corresponding to the foreground portion and pixels corresponding to the background portion of the video frame, to make the background portion of the video frame transparent; determining a position of the foreground portion of the video frame in the image; and synthesizing the video frame with the image based on the position of the foreground portion of the video frame in the image to obtain a synthesized video frame.
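The keying-and-synthesis steps above can be sketched as follows; the grid-of-tuples representation and the function name are illustrative assumptions, and a real implementation would operate on decoded video frames and use fractional alpha at the foreground edges rather than a hard binary mask:

```python
def key_and_composite(frame, mask, image, x0, y0):
    """Make a video frame's background transparent and paste its foreground
    onto an image at position (x0, y0).

    frame: 2-D grid (list of rows) of RGB tuples for one video frame.
    mask:  2-D grid of booleans; True marks a foreground pixel.
    image: 2-D grid of RGB tuples, modified in place.
    """
    for y, row in enumerate(frame):
        for x, px in enumerate(row):
            # Background pixels get alpha 0 (fully transparent) and are
            # therefore skipped; foreground pixels (alpha 1) are pasted.
            alpha = 1.0 if mask[y][x] else 0.0
            if alpha > 0.0:
                image[y0 + y][x0 + x] = px
    return image
```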