Topic

Alpha compositing

About: Alpha compositing is a research topic. Over the lifetime, 482 publications have been published within this topic receiving 11035 citations. The topic is also known as: alpha blend & alpha channel.
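For context, the operation the papers below build on is the Porter-Duff "over" operator, which blends a foreground layer onto a background according to their alpha values. A minimal NumPy sketch, assuming straight (non-premultiplied) alpha; the function and argument names are illustrative, not taken from any paper listed here:

```python
import numpy as np

def over(fg_rgb, fg_a, bg_rgb, bg_a):
    """Porter-Duff 'over' operator with straight (non-premultiplied) alpha.

    fg_rgb, bg_rgb: float arrays in [0, 1], shape (..., 3)
    fg_a, bg_a:     float arrays in [0, 1], shape (..., 1)
    Returns (out_rgb, out_a).
    """
    out_a = fg_a + bg_a * (1.0 - fg_a)
    # Avoid division by zero where the result is fully transparent.
    safe_a = np.where(out_a > 0, out_a, 1.0)
    out_rgb = (fg_rgb * fg_a + bg_rgb * bg_a * (1.0 - fg_a)) / safe_a
    return out_rgb, out_a
```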


Papers
Patent
07 Apr 2008
TL;DR: In this patent, a car audio/video/navigation system using alpha blending and a control method thereof are provided to semi-transparently display MP3 information on a navigation screen in response to a request for an MP3 operation.
Abstract: A car audio/video/navigation system using alpha blending and a control method thereof are provided to semi-transparently display MP3 information on a navigation screen in response to a request for an MP3 operation. A method for controlling a car audio/video/navigation system using alpha blending comprises a step (211) of displaying a navigation screen; a step (213) of detecting a request for an MP3 operation during display of the navigation screen; and a step (215) of displaying the MP3 information in response to the request by using alpha blending. A car audio/video/navigation system using alpha blending comprises a display part for displaying MP3 information on a navigation screen, a video process part for outputting MP3 information to the display part, and a control part for overlaying MP3 information by controlling the video process part.
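The patent abstract does not disclose how the blending is computed; the sketch below shows the kind of constant-alpha overlay such a video process part could apply. The function name and the alpha value are assumptions for illustration only.

```python
import numpy as np

def overlay_semitransparent(background, overlay, alpha=0.5):
    """Blend an overlay (e.g. an MP3 info panel) onto a background frame.

    background, overlay: uint8 RGB images of identical shape.
    alpha: opacity of the overlay layer in [0, 1].
    """
    bg = background.astype(np.float32)
    fg = overlay.astype(np.float32)
    blended = alpha * fg + (1.0 - alpha) * bg
    return blended.clip(0, 255).astype(np.uint8)
```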

1 citation

Proceedings ArticleDOI
13 Jul 2021
TL;DR: This paper proposes a combination of Mask R-CNN, a state-of-the-art object detection and segmentation neural network, with the previously published method of sparse feature tracking, and utilises the resulting instance masks as an alpha channel of the object representations to increase the precision of re-identification.
Abstract: In the last decade, we have seen a significant rise of deep neural networks in image processing tasks and many other research areas. However, while various neural architectures have successfully solved numerous tasks, they constantly demand more and more processing time and training data. Moreover, the current trend of using existing pre-trained architectures just as backbones and attaching new processing branches on top not only increases this demand but also diminishes the explainability of the whole model. Our research focuses on combinations of explainable building blocks for image processing tasks such as object tracking. We propose a combination of Mask R-CNN, a state-of-the-art object detection and segmentation neural network, with our previously published method of sparse feature tracking [16]. Such a combination allows us to track objects by connecting detected masks using the proposed sparse feature tracklets. However, this method cannot recover from complete object occlusions and has to be assisted by object re-identification. To this end, this paper uses our feature tracking method for a slightly different task: the unsupervised extraction of object representations that we can directly use to fine-tune an object re-identification algorithm; see Fig. 1 for a visualisation. As we already have to use object masks in the object tracking, our approach utilises this additional information as an alpha channel of the object representations, which further increases the precision of the re-identification. An additional benefit is that our fine-tuning method can be employed even in a fully online scenario.
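The paper states that the instance mask becomes the alpha channel of each object representation; one plausible way to build such an RGBA crop is sketched below. The function name, box format, and binarisation are assumptions, not the authors' code.

```python
import numpy as np

def masked_crop_rgba(frame, mask, box):
    """Build an RGBA object representation: crop the detection box from the
    frame and use the (binary) instance mask as the alpha channel.

    frame: uint8 RGB image, shape (H, W, 3)
    mask:  binary instance mask, shape (H, W), e.g. from Mask R-CNN
    box:   (x1, y1, x2, y2) detection box in pixel coordinates
    """
    x1, y1, x2, y2 = box
    rgb = frame[y1:y2, x1:x2]
    alpha = (mask[y1:y2, x1:x2] > 0).astype(np.uint8) * 255
    return np.dstack([rgb, alpha])  # shape (h, w, 4)
```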

1 citation

Proceedings ArticleDOI
16 Dec 2009
TL;DR: A novel, optionally interactive, technique for compositing a video of a scene illuminated part by part by a moving light source into a single image; it requires no camera calibration, scene reflectance modeling, or light source estimation.
Abstract: Compositing the frames of a video into a single image can be quite useful in many applications such as surveillance and special effects. We propose a novel, optionally interactive, technique for compositing a video of a scene illuminated part by part by a moving light source into a single image. The automatically composited image provides well-illuminated details of the objects in the scene that lie in the path of the moving light. The proposed method also recovers the light path along with the illuminant direction and, through user interaction, allows the user to select any specific sub-path and perform compositing on the corresponding video frames. The approach is attractive because it requires no camera calibration, no modeling of the scene reflectance, and no light source estimation.
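The abstract does not give the compositing rule itself; purely for illustration, the sketch below composites a frame stack by keeping, per pixel, the sample from the frame in which it is brightest. This is an assumption, not the authors' method, which additionally recovers the light path and supports interactive sub-path selection.

```python
import numpy as np

def composite_best_lit(frames):
    """Composite frames of a scene lit part by part into one image by keeping,
    per pixel, the sample from the frame where that pixel is brightest.

    frames: float array, shape (N, H, W, 3), values in [0, 1]
    """
    luminance = frames.mean(axis=-1)    # (N, H, W) brightness proxy
    best = luminance.argmax(axis=0)     # best frame index per pixel
    h_idx, w_idx = np.indices(best.shape)
    return frames[best, h_idx, w_idx]   # (H, W, 3) composited image
```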

1 citation

Dissertation
01 Jan 2013
TL;DR: This dissertation attempts to obtain the secret image from the stego image without the cover image, by treating the secret image as noise, and compares the proposed technique with an existing method using PSNR, MSE, NCC, MAD and SC as comparison parameters.
Abstract: Unlike encryption, steganography hides the very existence of secret information rather than hiding only its meaning. Image-based steganography is the most common approach since digital images are widely used over the Internet and the Web. The main aim of steganography is to increase the steganographic capacity and enhance the imperceptibility or undetectability. However, steganographic capacity and imperceptibility are at odds with each other, and there is a further trade-off between steganographic capacity and stego image quality: hiding more data in cover images (higher capacity) introduces more artefacts into the cover images and thus increases the perceptibility of the hidden data. Furthermore, it is not possible to simultaneously maximize the security and the capacity of a steganographic system. Therefore, increasing steganographic capacity and enhancing stego image quality are still challenges. In the proposed technique, secret image extraction is performed by first recovering the cover image with noise removal methods and then applying alpha blending. Since the peak signal-to-noise ratio (PSNR) is extensively used as a quality measure of stego images, the reliability of PSNR for stego images is also evaluated in the work described in this dissertation. The proposed work is compared with an existing method using PSNR, MSE, NCC, MAD and SC as comparison parameters. The proposed technique removes the need to keep a record of cover images for secret information extraction; otherwise, for each message received, the receiver would also have to keep the corresponding cover image. In the proposed technique I have tried to obtain the secret image from the stego image without the cover image, treating the secret image as noise. The technique deals with steganography in the wavelet domain: the complete work can be seen as adding noise to the cover image and then using a noise removal technique to obtain the secret image. Soft thresholding and bilateral filtering are used for efficient noise removal. Experimental results show that there is a trade-off between the stego image and the extracted secret image: as the value of alpha increases, the stego image degrades but the secret image improves. The secret image obtained is in a visually acceptable form. The results reported are both objective and subjective in nature.
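The dissertation works in the wavelet domain and estimates the cover image by denoising (soft thresholding, bilateral filtering); the pixel-domain sketch below only illustrates the alpha-blending embed/extract arithmetic and the alpha trade-off described above. Function names and the alpha value are assumptions.

```python
import numpy as np

def embed(cover, secret, alpha=0.05):
    """Embed a secret image into a cover image by alpha blending.

    cover, secret: float arrays of identical shape.
    Higher alpha carries more of the secret but degrades the stego image,
    matching the trade-off described in the abstract.
    """
    return (1.0 - alpha) * cover + alpha * secret

def extract(stego, cover_estimate, alpha=0.05):
    """Recover the secret by treating it as additive 'noise' on an estimated
    cover (e.g. a denoised version of the stego image)."""
    return (stego - (1.0 - alpha) * cover_estimate) / alpha
```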

1 citation

Patent
26 Sep 2013
TL;DR: In this patent, a work support device comprising a robot 10 that transports an object article and a display means 19 is presented; a reachability score of the robot arm is superimposed on the camera image by alpha blending so that the user can easily judge whether the robot can execute a task.
Abstract: PROBLEM TO BE SOLVED: To help a user to easily make a decision about executability of a task by a robot. SOLUTION: A work support device comprises a robot 10 transporting an object article and a display means 19. The robot 10 includes: a robot arm 11; a reachable range generator 12 calculating a range that can be reached by the robot arm 11 in voxel units; a voxel space data storage part 13 storing the reach range of the robot arm 11; a distance image sensor 14 acquiring an image and a distance from the viewpoint of the robot 10; a distance point calculation part 15 calculating a distance point on the basis of the acquired image and distance; a reachable score extractor 16 calculating a reachable score on the basis of a comparison between the distance point and the voxels stored in the voxel space data storage part 13; and an alpha blending calculation part 17 generating an image in which the reachability score is superimposed on the image acquired by the distance image sensor 14. The display means 19 displays the image generated by the alpha blending calculation part 17.
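The patent describes an alpha blending calculation part that superimposes the reachability score on the sensor image but gives no formula; below is a minimal sketch with an illustrative red/green colour mapping and an assumed opacity, not the patented implementation.

```python
import numpy as np

def overlay_reachability(image, score, alpha=0.4):
    """Superimpose a per-pixel reachability score on a camera image.

    image: uint8 RGB frame from the sensor, shape (H, W, 3)
    score: float array in [0, 1], shape (H, W); 1 = easily reachable
    alpha: opacity of the score layer
    """
    # Map the score to a simple green (reachable) / red (unreachable) layer.
    heat = np.zeros_like(image, dtype=np.float32)
    heat[..., 0] = (1.0 - score) * 255.0   # red channel
    heat[..., 1] = score * 255.0           # green channel
    out = (1.0 - alpha) * image.astype(np.float32) + alpha * heat
    return out.clip(0, 255).astype(np.uint8)
```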

1 citation


Network Information
Related Topics (5)
Rendering (computer graphics): 41.3K papers, 776.5K citations, 77% related
Mobile device: 58.6K papers, 942.8K citations, 72% related
Mobile computing: 51.3K papers, 1M citations, 71% related
User interface: 85.4K papers, 1.7M citations, 70% related
Feature (computer vision): 128.2K papers, 1.7M citations, 70% related
Performance
Metrics
No. of papers in the topic in previous years

Year  Papers
2022  1
2021  9
2020  8
2019  13
2018  21
2017  23