scispace - formally typeset
Topic

Channel (digital image)

About: Channel (digital image) is a research topic. Over the lifetime, 7211 publications have been published within this topic receiving 69974 citations.


Papers
Proceedings ArticleDOI
24 Aug 2007
TL;DR: This work proposes, for the first time, a dual-channel PCNN model built on the original PCNN and specialized for image fusion, and uses two medical images as an example to demonstrate the efficiency and validity of the proposed method.
Abstract: Image fusion plays an important role in many fields, such as computer vision, medical imaging, manufacturing, military applications, and remote sensing. The pulse coupled neural network (PCNN) is derived from the synchronous neuronal burst phenomena observed in the cat visual cortex, which makes it well suited for image processing. Because the original PCNN has some defects for data fusion, we propose for the first time a novel PCNN model, the dual-channel PCNN, based on the original model and specialized for image fusion. To demonstrate the efficiency and validity of the proposed method, we take two medical images as an example and compare the results with those of other image fusion methods. Our approach obtains better results: the fused image contains more information than the others, showing that our method is both effective and efficient. Moreover, our method not only fuses multi-source images well but also enhances the quality of the fused image.
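As a rough illustration of the dual-channel idea, the sketch below implements a heavily simplified dual-channel PCNN: each neuron receives a feeding input from both source images, the stronger modulated channel drives firing, and each fused pixel is taken from whichever channel dominated the firing across iterations. The parameter values and the toroidal 8-neighbour linking are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def _linking(y):
    """Sum of the 8 neighbours' pulses (toroidal edges via np.roll)."""
    s = np.zeros_like(y)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            s += np.roll(np.roll(y, dy, axis=0), dx, axis=1)
    return s

def dual_channel_pcnn_fuse(img_a, img_b, iterations=30, beta=0.5,
                           alpha_theta=0.2, v_theta=20.0):
    """Fuse two registered grayscale images with a simplified dual-channel PCNN."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    theta = np.ones_like(a)            # dynamic threshold
    y = np.zeros_like(a)               # pulse output
    dom_a = np.zeros_like(a)           # how often channel A drove a pulse
    dom_b = np.zeros_like(a)
    for _ in range(iterations):
        link = _linking(y)
        u_a = a * (1.0 + beta * link)  # internal activity, channel A
        u_b = b * (1.0 + beta * link)  # internal activity, channel B
        y = (np.maximum(u_a, u_b) > theta).astype(float)
        dom_a += y * (u_a >= u_b)
        dom_b += y * (u_b > u_a)
        # fired neurons raise their threshold, which then decays
        theta = theta * np.exp(-alpha_theta) + v_theta * y
    # pick each pixel from the channel that dominated the firing
    return np.where(dom_a >= dom_b, a, b)
```

With equal linking strengths this dominance rule reduces to a per-pixel selection of the stronger stimulus; the coupling terms matter once neighbouring pulses interact across several iterations.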

20 citations

Journal ArticleDOI
Baixin Jin, Pingping Liu, Wang Peng, Shi Lida, Jing Zhao
30 Jul 2020-Entropy
TL;DR: A new aggregation channel attention network is proposed to make full use of the influence of context information on semantic segmentation; it retains more image features, restores salient features more accurately, and further improves the segmentation performance of medical images.
Abstract: Medical image segmentation is an important part of medical image analysis. With the rapid development of convolutional neural networks in image processing, deep learning methods have achieved great success in the field of medical image processing. Deep learning is also used in the auxiliary diagnosis of glaucoma, where effective segmentation of the optic disc area plays an important assisting role in clinical diagnosis. Many U-Net-based optic disc segmentation methods have previously been proposed. However, they ignore the channel dependence of features at different levels, and segmentation performance in small areas of fundus images is not satisfactory. In this paper, we propose a new aggregation channel attention network to make full use of the influence of context information on semantic segmentation. Unlike existing attention mechanisms, we exploit channel dependencies and integrate information at different scales into the attention mechanism. At the same time, we improve the basic cross-entropy classification framework by combining the Dice coefficient with cross entropy and balancing their contributions to the segmentation task, which enhances the performance of the network in small-area segmentation. The network retains more image features, restores salient features more accurately, and further improves the segmentation performance of medical images. We apply it to the fundus optic disc segmentation task, demonstrate the segmentation performance of the model on the Messidor and RIM-ONE datasets, and evaluate the proposed architecture. Experimental results show that our network architecture improves the prediction performance of the base architectures on different datasets while maintaining computational efficiency. The results show that the proposed techniques improve segmentation, achieving an overlapping error of 0.0469 on Messidor.
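The combined loss described above can be sketched as a weighted sum of a soft Dice loss and binary cross-entropy. The weights and smoothing constant below are illustrative placeholders, since the abstract does not specify them:

```python
import numpy as np

def dice_ce_loss(pred, target, w_dice=0.5, w_ce=0.5, eps=1e-7):
    """Weighted sum of soft Dice loss and binary cross-entropy.

    pred   : predicted foreground probabilities in (0, 1)
    target : binary ground-truth mask
    """
    pred = np.clip(np.asarray(pred, dtype=float), eps, 1.0 - eps)
    target = np.asarray(target, dtype=float)
    # soft Dice = 2|P∩G| / (|P| + |G|); its complement is the loss term
    inter = (pred * target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    ce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred)).mean()
    return w_dice * (1.0 - dice) + w_ce * ce
```

The Dice term rewards overlap with small structures directly, while the cross-entropy term keeps per-pixel gradients well behaved, which is why balancing the two helps small-area segmentation.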

20 citations

Proceedings ArticleDOI
01 Jun 2020
TL;DR: The proposed method, namely AtJwD, can outperform many state-of-the-art alternatives in terms of quality metrics such as SSIM, especially in recovering images under non-homogeneous haze.
Abstract: The emergence of deep learning methods that complement traditional model-based methods has helped achieve a new state-of-the-art for image dehazing. Many recent methods design deep networks that either estimate the haze-free image (J) directly, or estimate the physical parameters in the haze model, i.e. the ambient light (A) and the transmission map (t), and then invert the haze model to obtain the dehazed image. However, both kinds of methods fail on non-homogeneous haze images, where some parts of the image are covered with denser haze and others with shallower haze. In this work, we develop a novel neural network architecture that can take advantage of both kinds of dehazed images simultaneously by estimating a new quantity: a spatially varying weight map (w). This map is then used to combine the directly estimated J with the result obtained by inverting the haze model. We use a shared DenseNet-based encoder and four distinct DenseNet-based decoders that estimate J, A, t, and w jointly. A channel attention structure is added to facilitate the generation of distinct feature maps across decoders. Furthermore, we propose a novel dilation inception module in the architecture that uses non-local features to compensate for information lost during learning. Experiments on the challenging NTIRE'20 and NTIRE'18 benchmark datasets demonstrate that the proposed method, namely AtJwD, can outperform many state-of-the-art alternatives in terms of quality metrics such as SSIM, especially in recovering images under non-homogeneous haze.
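The fusion step at the heart of this approach follows directly from the haze model I = J·t + A·(1 − t): invert it for one estimate, then blend with the directly estimated J using the weight map w. The sketch below shows that combination; variable names and the transmission floor are our assumptions, and in the paper all four quantities (J, A, t, w) are predicted jointly by the network.

```python
import numpy as np

def combine_dehazed(hazy, j_direct, ambient, transmission, weight, t_min=0.1):
    """Blend the directly estimated haze-free image with the physics-based
    estimate using a spatially varying weight map w in [0, 1].

    Haze model: I = J*t + A*(1 - t)  =>  J_inv = (I - A*(1 - t)) / t
    Output:     J = w * J_direct + (1 - w) * J_inv
    """
    t = np.maximum(transmission, t_min)   # floor t to avoid amplifying noise
    j_inv = (hazy - ambient * (1.0 - t)) / t
    return weight * j_direct + (1.0 - weight) * j_inv
```

Because w varies spatially, the network can lean on the physics-based estimate where the haze model holds and on the direct estimate where it breaks down, which is exactly what non-homogeneous haze requires.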

20 citations

Patent
30 Jun 2008
TL;DR: In this article, a first capture unit captures: a visible first color component of a visible left image combined with a fluorescence left image, from first light from one channel in the endoscope; a visible second color component of the visible left image from the first light; and a visible third color component of the visible left image from the first light.
Abstract: An endoscope with a stereoscopic optical channel is held and positioned by a robotic surgical system. A first capture unit captures: a visible first color component of a visible left image combined with a fluorescence left image from first light from one channel in the endoscope; a visible second color component of the visible left image from the first light; and a visible third color component of the visible left image from the first light. A second capture unit captures: a visible first color component of a visible right image combined with a fluorescence right image from second light from the other channel in the endoscope; a visible second color component of the visible right image from the second light; and a visible third color component of the visible right image from the second light. An augmented stereoscopic system outputs a real-time stereoscopic image comprising a three-dimensional presentation of the visible left and right images and the fluorescence left and right images.

20 citations

Patent
29 Aug 2008
TL;DR: In this article, whether to convert a video from the RGB (red, green, blue) color format into the YCbCr color format is adaptively determined before encoding, using at least one of channel state information and the result of encoding a previously encoded predetermined encoding unit; in this way, video with uniform quality can be provided over a channel environment with a variable bit-rate.
Abstract: Provided are video encoding and decoding methods and apparatuses that encode a video by variably selecting one of two or more different color formats. Using at least one of channel state information and the result of encoding a video in a predetermined encoding unit encoded in advance, whether or not to convert the video in the current encoding unit from an input RGB (red, green, blue) color format into a YCbCr color format is adaptively determined before encoding. Therefore, a video with uniform quality corresponding to a channel environment having a variable bit-rate can be provided.
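The conversion that the patent toggles adaptively is the standard one; a minimal full-range BT.601 RGB → YCbCr transform looks like the following. The patent's contribution is the adaptive decision of when to convert, not the transform itself:

```python
import numpy as np

# Full-range ITU-R BT.601 luma/chroma weights
_M = np.array([[ 0.299,     0.587,     0.114   ],
               [-0.168736, -0.331264,  0.5     ],
               [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    """Convert an (..., 3) RGB array (0-255 range) to full-range YCbCr."""
    ycc = np.asarray(rgb, dtype=float) @ _M.T
    ycc[..., 1:] += 128.0   # centre Cb/Cr for 8-bit samples
    return ycc
```

Separating luma from chroma lets the encoder subsample or quantize the chroma planes more aggressively, which is why converting pays off at lower bit-rates while staying in RGB can preserve fidelity when the channel allows a higher rate.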

20 citations


Network Information
Related Topics (5)
Feature extraction
111.8K papers, 2.1M citations
86% related
Image processing
229.9K papers, 3.5M citations
85% related
Feature (computer vision)
128.2K papers, 1.7M citations
85% related
Image segmentation
79.6K papers, 1.8M citations
85% related
Convolutional neural network
74.7K papers, 2M citations
84% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    16
2021    559
2020    643
2019    696
2018    613
2017    496