scispace - formally typeset
Topic

Channel (digital image)

About: Channel (digital image) is a research topic. Over the lifetime, 7211 publications have been published within this topic receiving 69974 citations.


Papers
Journal ArticleDOI
TL;DR: In this paper, morphological operations and a median filter were first used to remove noise from the original image during pre-processing; the combined Spline and B-spline method was then adopted to enhance the image before segmentation.
Abstract: In a computerized image analysis environment, the irregularity of a lesion border has been used to differentiate between malignant melanoma and other pigmented skin lesions. Accurate automated lesion border detection is a significant step towards accurate classification at a later stage. In this paper, we propose the use of a combined Spline and B-spline method to enhance the quality of dermoscopic images before segmentation. Morphological operations and a median filter were first used to remove noise from the original image during pre-processing. The image RGB values were then adjusted to the optimal color channel (the green channel), and the combined Spline and B-spline method was applied to enhance the image before segmentation. The lesion was segmented using a threshold value obtained empirically from the optimal color channel. Finally, morphological operations were used to merge smaller regions with the main lesion region. Experiments on 70 dermoscopic images showed an improvement in average segmentation accuracy: the average accuracy achieved was 97.21% (with an average sensitivity of 94% and specificity of 98.05%).
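The pre-processing and thresholding steps described above can be sketched in a few lines. This is a minimal illustration using plain Python lists as stand-in image arrays; the 3x3 median window, the green-channel choice, and the "lesions are dark" threshold direction are illustrative assumptions, not the authors' exact settings.

```python
def green_channel(rgb_image):
    """Extract the green channel from an RGB image given as rows of (r, g, b) tuples."""
    return [[px[1] for px in row] for row in rgb_image]

def median3x3(img):
    """Apply a 3x3 median filter for noise removal; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # middle of the 9 sorted neighborhood values
    return out

def segment(img, threshold):
    """Binary segmentation: 1 where the pixel is darker than the threshold."""
    return [[1 if v < threshold else 0 for v in row] for row in img]
```

In the paper the threshold is obtained empirically from the green channel; here it would simply be passed in, and a morphological merge step would follow to absorb small regions into the main lesion.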

37 citations

Journal ArticleDOI
TL;DR: An image steganography technique based on the Canny edge detection algorithm is proposed; it hides secret data in a digital image within the pixels that make up the boundaries of objects detected in the image.
Abstract: Steganography is the science of hiding digital information in such a way that no one can suspect its existence. Unlike cryptography, which may arouse suspicion, steganography is a stealthy method that enables data communication in total secrecy. Steganography has many requirements, the foremost being irrecoverability, which refers to how hard it is for anyone apart from the original communicating parties to detect and recover the hidden data from the secret communication. A good strategy to guarantee irrecoverability is to cover the secret data not with a trivial method based on a predictable algorithm, but with a specific random pattern derived from a mathematical algorithm. This paper proposes an image steganography technique based on the Canny edge detection algorithm. It is designed to hide secret data in a digital image within the pixels that make up the boundaries of objects detected in the image. More specifically, bits of the secret data replace the three LSBs of every color channel of the pixels that the Canny edge detection algorithm identifies as part of the edges in the carrier image. The algorithm is parameterized by three parameters: the size of the Gaussian filter, a low threshold value, and a high threshold value. These parameters can yield different outputs for the same input image and secret data. As a result, discovering the inner workings of the algorithm would be considerably ambiguous, misguiding steganalysts as to the exact location of the covert data. A simulation tool codenamed GhostBit was developed to cover and uncover secret data using the proposed algorithm. As future work, other image processing techniques such as brightness and contrast adjustment could be examined for use in steganography, with the purpose of giving the communicating parties more options to manipulate their secret communication.
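The embedding step can be sketched as follows. Canny detection itself is omitted here; `edge_pixels` is assumed to already hold the (y, x) coordinates the detector returned, and three secret bits replace the 3 LSBs of each RGB channel as the abstract describes. Function names are illustrative, not from the GhostBit tool.

```python
def embed(image, edge_pixels, bits):
    """Hide `bits` (list of 0/1) in the 3 LSBs of each RGB channel of the edge pixels."""
    it = iter(bits)
    for y, x in edge_pixels:
        channels = list(image[y][x])
        for c in range(3):          # R, G, B
            for b in range(3):      # the 3 least significant bits
                bit = next(it, None)
                if bit is None:     # secret exhausted: commit and stop
                    image[y][x] = tuple(channels)
                    return image
                channels[c] = (channels[c] & ~(1 << b)) | (bit << b)
        image[y][x] = tuple(channels)
    return image

def extract(image, edge_pixels, n_bits):
    """Read back n_bits from the same edge pixels, in embedding order."""
    out = []
    for y, x in edge_pixels:
        for c in range(3):
            for b in range(3):
                if len(out) == n_bits:
                    return out
                out.append((image[y][x][c] >> b) & 1)
    return out
```

The receiver must run the same edge detector with the same three parameters (Gaussian filter size, low threshold, high threshold) to recover the same pixel set, which is exactly why those parameters act as a shared secret.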

37 citations

Patent
Gary L. Taylor1
14 Dec 1990
TL;DR: In this article, a color encoder for reducing visible artifacts is proposed. Since the human eye is most sensitive to luminance variations in the displayed image, luminance errors are minimized by adding red and green color error signals to the blue color intensity signal prior to encoding, so that luminance detail is maintained.
Abstract: A color encoder for reducing visible artifacts even when the values of the respective color signals are truncated during encoding. Since the human eye is most sensitive to luminance variations in the displayed image, luminance errors (and hence visible artifacts) are minimized by adding red and green color error signals to the blue color intensity signal prior to encoding so that luminance detail may be maintained. The blue channel is chosen for this purpose since the human eye is least sensitive to changes in blue and yellow. The color maps for the resulting encoded signals may be scaled such that the red and green values are raised and/or the blue values lowered so as to maintain the same relative intensity of luminance in the displayed image. Errors in the displayed luminance signal may thus be made up to four times smaller without increasing the amount of memory required and without adding expensive processing circuitry.
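The error feed-forward idea can be shown with a small sketch. Assumptions for illustration: "truncation" keeps the top 4 bits of each 8-bit channel, and the red and green truncation errors are added to blue (with clamping) before blue itself is truncated.

```python
def truncate(v, keep_bits=4):
    """Drop the low (8 - keep_bits) bits of an 8-bit channel value."""
    mask = ~((1 << (8 - keep_bits)) - 1) & 0xFF
    return v & mask

def encode_pixel(r, g, b, keep_bits=4):
    """Truncate R and G, then fold their truncation errors into B before truncating it."""
    r_t = truncate(r, keep_bits)
    g_t = truncate(g, keep_bits)
    # Feed the red and green truncation errors into the blue channel,
    # clamping to the valid 8-bit range, so luminance detail survives
    # in the channel the eye is least sensitive to.
    b_adj = min(255, b + (r - r_t) + (g - g_t))
    return r_t, g_t, truncate(b_adj, keep_bits)
```

For example, encode_pixel(100, 50, 30) truncates red to 96 (error 4) and green to 48 (error 2), so blue becomes 30 + 4 + 2 = 36 before its own truncation; the combined luminance error is carried by the blue channel rather than lost.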

37 citations

Journal ArticleDOI
TL;DR: An effective algorithm for colorizing a grayscale image is developed, using a reference color image, an RGB-to-luminance/chrominance color transform, and a block-based vector quantization of luminance mapping (VQLM) technique.
Abstract: We develop an effective algorithm for colorizing a grayscale image. In our approach, a reference color image, an RGB-to-luminance/chrominance color transform, and a block-based vector quantization of luminance mapping (VQLM) technique are used to automatically colorize the grayscale image. The VQLM technique compares the grayscale image with the luminance of the color reference image to obtain the information for the chrominance planes of the grayscale image. After the chrominance is padded, the inverse color transform back to RGB colorizes the grayscale scene. Meanwhile, we create a mean-of-VQLM (MVQLM) method to improve the quality of the colorized grayscale image. Experimental results show the MVQLM method is better than the VQLM method. We also investigate colorizing the grayscale image in the proposed color space and in the YIQ space; the simulation results reveal that working in the former space is slightly better than working in the YIQ space. Compared to other colorizing schemes, our proposed method has two advantages: (1) the codebook and MVQLM techniques automatically colorize grayscale images of any size that can be evenly divided by 2; and (2) the MVQLM method obtains a smoother colorizing effect and improved quality compared to the VQLM method.
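A toy sketch of the luminance-matching idea, simplified from blocks to single pixels: each grayscale value is matched to the reference sample with the closest luminance, and that sample's chrominance pair is copied over. The (luminance, (cb, cr)) representation and the per-pixel (rather than block-based, vector-quantized) matching are illustrative simplifications, not the authors' method.

```python
def colorize(gray, reference):
    """gray: 2D list of luminance values.
    reference: list of (luma, chroma) samples taken from the reference color image,
               where chroma is a (cb, cr) pair.
    Returns a 2D list of (luma, chroma) pairs for the colorized result."""
    def nearest_chroma(v):
        # Pick the reference sample whose luminance is closest to v
        # and reuse its chrominance.
        return min(reference, key=lambda lc: abs(lc[0] - v))[1]
    return [[(v, nearest_chroma(v)) for v in row] for row in gray]
```

The full method replaces this per-pixel search with a codebook over luminance blocks (vector quantization), and MVQLM averages over matches to smooth the transferred chrominance.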

37 citations

Journal ArticleDOI
TL;DR: A Moore neighborhood-based gradient profile prior is designed and developed to efficiently estimate the transmission map and atmospheric veil and shows the supremacy of the proposed technique in removing haze from still images when compared with several existing techniques.
Abstract: Removing haze from still images is a challenging problem. Dark Channel Prior (DCP) based dehazing techniques have been used to remove haze from still images, but they produce poor results when image objects are inherently similar to the airlight and no shadow is cast on them. To eliminate this problem, a Moore neighborhood-based gradient profile prior is designed and developed to efficiently estimate the transmission map and atmospheric veil. The transmission map is further refined by a local activity-tuned anisotropic diffusion based filter. Afterward, image restoration is performed using the estimated transmission function, so the proposed technique can remove haze from still images effectively. Its performance is compared with seven recently developed dehazing techniques on synthetic and real-life hazy images. The experimental results show the supremacy of the proposed technique over the existing ones and reveal that the restored image has little or no artifacts.
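For context, the standard Dark Channel Prior transmission estimate that this paper improves on can be sketched as below. The patch size, omega weight, and scalar airlight value are illustrative assumptions, and the paper's Moore neighborhood gradient profile prior and anisotropic diffusion refinement are not shown.

```python
def dark_channel(rgb, patch=1):
    """Min over the three color channels and a (2*patch+1)^2 neighborhood."""
    h, w = len(rgb), len(rgb[0])
    per_pixel = [[min(px) for px in row] for row in rgb]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(per_pixel[yy][xx]
                            for yy in range(max(0, y - patch), min(h, y + patch + 1))
                            for xx in range(max(0, x - patch), min(w, x + patch + 1)))
    return out

def transmission(rgb, airlight, omega=0.95, patch=1):
    """t(x) = 1 - omega * dark_channel(I) / A: a brighter dark channel means denser haze."""
    dc = dark_channel(rgb, patch)
    return [[1.0 - omega * v / airlight for v in row] for row in dc]
```

Pixels similar to the airlight keep a bright dark channel even when haze-free, which drives the transmission estimate too low there; that failure mode is exactly what the gradient profile prior above is designed to fix.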

37 citations


Network Information
Related Topics (5)
Feature extraction
111.8K papers, 2.1M citations
86% related
Image processing
229.9K papers, 3.5M citations
85% related
Feature (computer vision)
128.2K papers, 1.7M citations
85% related
Image segmentation
79.6K papers, 1.8M citations
85% related
Convolutional neural network
74.7K papers, 2M citations
84% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2022  16
2021  559
2020  643
2019  696
2018  613
2017  496