
Showing papers on "Channel (digital image)" published in 2015


Proceedings ArticleDOI
Soonmin Hwang, Jaesik Park, Namil Kim, Yukyung Choi, In So Kweon
07 Jun 2015
TL;DR: This paper introduces a multispectral pedestrian dataset together with multispectral ACF, an extension of aggregated channel features (ACF) that handles color-thermal image pairs simultaneously and achieves another breakthrough in the pedestrian detection task.
Abstract: With the increasing interest in pedestrian detection, pedestrian datasets have also been the subject of research in the past decades. However, most existing datasets focus on a color channel, while a thermal channel is helpful for detection even in a dark environment. With this in mind, we propose a multispectral pedestrian dataset which provides well aligned color-thermal image pairs, captured by beam splitter-based special hardware. The color-thermal dataset is as large as previous color-based datasets and provides dense annotations including temporal correspondences. With this dataset, we introduce multispectral ACF, which is an extension of aggregated channel features (ACF) to simultaneously handle color-thermal image pairs. Multispectral ACF reduces the average miss rate of ACF by 15%, and achieves another breakthrough in the pedestrian detection task.
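A minimal sketch, assuming aligned uint8 color and thermal frames, of the general idea behind extending aggregated channel features (ACF) to a color-thermal pair: per-modality channels (raw intensities, gradient magnitude, orientation-binned gradients) are stacked and sum-pooled over small blocks. The channel set, the 4x4 pooling and all function names are illustrative, not the authors' exact configuration.

```python
import numpy as np

def gradient_channels(gray, n_orient=6):
    """Gradient magnitude plus orientation-binned gradient magnitude channels."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)                 # orientation folded into [0, pi)
    chans = [mag]
    bins = np.linspace(0.0, np.pi, n_orient + 1)
    for lo, hi in zip(bins[:-1], bins[1:]):
        chans.append(mag * ((ang >= lo) & (ang < hi)))
    return np.stack(chans, axis=-1)

def aggregate(channels, block=4):
    """Sum-pool every channel over non-overlapping block x block cells (the 'aggregated' step)."""
    h, w, c = channels.shape
    h, w = h - h % block, w - w % block
    x = channels[:h, :w].reshape(h // block, block, w // block, block, c)
    return x.sum(axis=(1, 3))

def multispectral_features(color_rgb, thermal):
    """Stack colour-image and thermal-image channels before aggregation."""
    gray = color_rgb.astype(np.float64).mean(axis=-1)
    color_feat = np.concatenate([color_rgb.astype(np.float64), gradient_channels(gray)], axis=-1)
    therm = thermal.astype(np.float64)
    therm_feat = np.concatenate([therm[..., None], gradient_channels(therm)], axis=-1)
    return aggregate(np.concatenate([color_feat, therm_feat], axis=-1))
```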

711 citations


Journal ArticleDOI
TL;DR: A Red Channel method is proposed, in which colors associated with short wavelengths are recovered, as expected for underwater images, leading to a recovery of the lost contrast; the method achieves natural color correction and superior or equivalent visibility improvement when compared with other state-of-the-art methods.

584 citations


Journal ArticleDOI
01 Feb 2015
TL;DR: Qualitative analysis reveals that the proposed method significantly enhances the image contrast, reduces the blue-green effect, and minimizes under- and over-enhanced areas in the output image.
Abstract: Highlights: a method to increase the contrast and reduce the noise of underwater images; histogram modification applied to the integrated RGB and HSV color models; the image histogram mapped according to the Rayleigh distribution; the dynamic range of the color models limited to reduce under- and over-enhanced areas; outperforms other state-of-the-art methods in terms of contrast and noise reduction. The physical properties of water cause light-induced degradation of underwater images. Light rapidly loses intensity as it travels through water, depending on its wavelength. Visible light is absorbed at the longest wavelength first; red and blue are the most and least absorbed, respectively. Underwater images are therefore captured with low contrast, and the valuable information in these images cannot be fully extracted for further processing. The current study proposes a new method to improve the contrast and reduce the noise of underwater images. The proposed method integrates histogram modification into two main color models, Red-Green-Blue (RGB) and Hue-Saturation-Value (HSV). In the RGB color model, the histogram of the dominant color channel (i.e., the blue channel) is stretched toward the lower level, with a maximum limit of 95%, whereas the inferior color channel (i.e., the red channel) is stretched toward the upper level, with a minimum limit of 5%. The channel between the dominant and inferior channels (i.e., the green channel) is stretched in both directions over the whole dynamic range. All stretching processes in the RGB color model are shaped to follow the Rayleigh distribution. The image is then converted into the HSV color model, wherein the S and V components are modified within a limit of 1% from the minimum and maximum values. Qualitative analysis reveals that the proposed method significantly enhances image contrast, reduces the blue-green effect, and minimizes under- and over-enhanced areas in the output image. For quantitative analysis, tests on 300 underwater images show that the proposed method produces an average mean square error (MSE) of 76.76 and an average peak signal-to-noise ratio (PSNR) of 31.13, outperforming six state-of-the-art methods.
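A minimal numpy sketch of the channel-wise stretching described above, assuming an (H, W, 3) uint8 RGB input: each channel's empirical CDF is pushed through the inverse Rayleigh CDF (rank matching) and the result is mapped into the stated output band, with the 95% upper limit on the blue channel and the 5% lower limit on the red channel. The HSV refinement step is omitted and the exact limit handling is an assumption.

```python
import numpy as np

def rayleigh_stretch(channel, lo_pct=0.0, hi_pct=100.0):
    """Remap one channel so its histogram approximately follows a Rayleigh shape.

    Rank matching: the empirical CDF value of each pixel is pushed through the
    inverse Rayleigh CDF and affinely rescaled into the [lo_pct, hi_pct] band of [0, 255].
    """
    x = channel.astype(np.float64).ravel()
    u = np.argsort(np.argsort(x)) / max(x.size - 1, 1)      # empirical CDF in [0, 1]
    u = np.clip(u, 1e-6, 1.0 - 1e-6)
    r = np.sqrt(-2.0 * np.log(1.0 - u))                     # inverse Rayleigh CDF (unit scale)
    r = (r - r.min()) / (r.max() - r.min())
    lo, hi = 255.0 * lo_pct / 100.0, 255.0 * hi_pct / 100.0
    return (lo + r * (hi - lo)).reshape(channel.shape).astype(np.uint8)

def enhance_underwater(rgb):
    """Stretch red upward, blue downward and green over the full range, per the abstract."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([
        rayleigh_stretch(r, lo_pct=5, hi_pct=100),   # inferior (red) channel: toward the upper level
        rayleigh_stretch(g, lo_pct=0, hi_pct=100),   # green channel: whole dynamic range
        rayleigh_stretch(b, lo_pct=0, hi_pct=95),    # dominant (blue) channel: toward the lower level
    ], axis=-1)
```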

208 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed algorithm can significantly improve image fusion performance, retaining notable target information and high contrast while preserving rich detail, and outperforms other typical current methods in both objective evaluation criteria and visual effect.

161 citations


Journal ArticleDOI
TL;DR: A visual-attention-aware model is proposed to mimic the HVS for salient-object detection, along with a method for extracting directional patches, since humans are sensitive to orientation features and directional patches are reliable cues.
Abstract: The human visual system (HVS) can reliably perceive salient objects in an image, but it remains a challenge to computationally model the process of detecting salient objects without prior knowledge of the image contents. This paper proposes a visual-attention-aware model to mimic the HVS for salient-object detection. The informative and directional patches can be seen as visual stimuli, and used as neuronal cues for humans to interpret and detect salient objects. In order to simulate this process, two typical patches are extracted individually and in parallel from the intensity channel and the discriminant color channel, respectively, as the primitives. In our algorithm, an improved wavelet-based salient-patch detector is used to extract the visually informative patches. In addition, as humans are sensitive to orientation features, and as directional patches are reliable cues, we also propose a method for extracting directional patches. These two different types of patches are then combined to form the most important patches, which are called preferential patches and are considered as the visual stimuli applied to the HVS for salient-object detection. Compared with the state-of-the-art methods for salient-object detection, experimental results using publicly available datasets show that our proposed algorithm is reliable and effective.

147 citations


Journal ArticleDOI
TL;DR: A novel edge-preserving decomposition-based method is introduced to estimate the transmission map of a hazy image, so as to design a single image haze removal algorithm from Koschmieder's law without using any prior.
Abstract: Single image haze removal is under-constrained, because the number of degrees of freedom is larger than the number of observations. In this paper, a novel edge-preserving decomposition-based method is introduced to estimate the transmission map of a hazy image, so as to design a single image haze removal algorithm from Koschmieder's law without using any prior. In particular, a weighted guided image filter is adopted to decompose the simplified dark channel of the hazy image into a base layer and a detail layer. The transmission map is estimated from the base layer and applied to restore the haze-free image. Experimental results on different types of images, including hazy images, underwater images, and normal images without haze, demonstrate the performance of the proposed algorithm.
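A simplified sketch of that pipeline, assuming a uint8 RGB input: compute the per-pixel (simplified) dark channel, smooth it into a base layer (a plain box filter is used here as a stand-in for the weighted guided image filter of the paper), estimate the transmission from the base layer, and invert Koschmieder's model I = J*t + A*(1 - t). The omega, t0 and window parameters are illustrative.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter of radius r -- a stand-in for the edge-preserving weighted guided filter."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def dehaze(rgb, omega=0.95, t0=0.1, r=15):
    img = rgb.astype(np.float64) / 255.0
    dark = img.min(axis=2)                                  # simplified (per-pixel) dark channel
    base = box_filter(dark, r)                              # base layer of the dark channel
    brightest = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[brightest].mean(axis=0)          # atmospheric light estimate
    t = np.clip(1.0 - omega * base / max(float(A.max()), 1e-6), t0, 1.0)   # transmission from base layer
    J = (img - A) / t[..., None] + A                        # invert I = J*t + A*(1 - t)
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)
```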

141 citations


Journal ArticleDOI
TL;DR: The proposed single image dehazing method is based on a physical model and the dark channel prior principle; the selection of the atmospheric light value directly determines the color authenticity and contrast of the resulting image.

132 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: A novel method for illuminant estimation is proposed that uses the information of grey pixels detected in a given color-biased image; it outperforms most state-of-the-art color constancy approaches with the inherent merit of low computational cost.
Abstract: Illuminant estimation is a key step for computational color constancy. Instead of using the grey world or grey edge assumptions, we propose in this paper a novel method for illuminant estimation by using the information of grey pixels detected in a given color-biased image. The underlying hypothesis is that most natural images include some detectable pixels that are at least approximately grey, which can be reliably utilized for illuminant estimation. We first validate our assumption through a comprehensive statistical evaluation on a diverse collection of datasets and then put forward a novel grey pixel detection method based on the illuminant-invariant measure (IIM) in three logarithmic color channels. The light source color of a scene can then be easily estimated from the detected grey pixels. Experimental results on four benchmark datasets (three recorded under a single illuminant and one under multiple illuminants) show that the proposed method outperforms most of the state-of-the-art color constancy approaches with the inherent merit of low computational cost.
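A rough sketch of the grey-pixel idea, assuming a uint8 RGB image: score how "grey" each pixel is by how consistent the local contrast of its three logarithmic channels is (a simplified stand-in for the paper's illuminant-invariant measure), then average the colour of the greyest pixels to estimate the light source. The 3x3 neighbourhood, the greyness score and the 0.1% selection fraction are assumptions.

```python
import numpy as np

def estimate_illuminant(rgb, top_frac=0.001, eps=1e-6):
    """Estimate the light-source colour from approximately grey pixels."""
    img = rgb.astype(np.float64) + eps
    log_img = np.log(img)
    contrast = np.empty_like(log_img)
    for c in range(3):                                      # local contrast of each log channel
        pad = np.pad(log_img[..., c], 1, mode='edge')
        local_mean = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                         for dy in range(3) for dx in range(3)) / 9.0
        contrast[..., c] = np.abs(log_img[..., c] - local_mean)
    # a grey pixel shows (nearly) identical contrast in all three log channels
    greyness = contrast.std(axis=2) / (contrast.mean(axis=2) + eps)
    n = max(1, int(top_frac * greyness.size))
    idx = np.argsort(greyness.ravel())[:n]                  # the n most grey pixels
    illum = img.reshape(-1, 3)[idx].mean(axis=0)
    return illum / np.linalg.norm(illum)                    # normalised illuminant colour
```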

129 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: An object-based co-segmentation method is presented that takes advantage of depth data, correctly handles noisy images in which the common foreground object is missing, and provides performance comparable to state-of-the-art RGB co-segmentation techniques on regular RGB images with depth maps estimated from them.
Abstract: We present an object-based co-segmentation method that takes advantage of depth data and is able to correctly handle noisy images in which the common foreground object is missing. With RGBD images, our method utilizes the depth channel to enhance identification of similar foreground objects via a proposed RGBD co-saliency map, as well as to improve detection of object-like regions and provide depth-based local features for region comparison. To accurately deal with noisy images where the common object appears more than or less than once, we formulate co-segmentation in a fully-connected graph structure together with mutual exclusion (mutex) constraints that prevent improper solutions. Experiments show that this object-based RGBD co-segmentation with mutex constraints outperforms related techniques on an RGBD co-segmentation dataset, while effectively processing noisy images. Moreover, we show that this method also provides performance comparable to state-of-the-art RGB co-segmentation techniques on regular RGB images with depth maps estimated from them.

107 citations


Proceedings ArticleDOI
17 Jun 2015
TL;DR: A general method is described for increasing the security of additive steganographic schemes for digital images represented in the spatial domain; it starts with the cost assignment and forms a non-additive distortion function that forces adjacent embedding changes to synchronize.
Abstract: This paper describes a general method for increasing the security of additive steganographic schemes for digital images represented in the spatial domain. Additive embedding schemes first assign costs to individual pixels and then embed the desired payload by minimizing the sum of costs of all changed pixels. The proposed framework can be applied to any such scheme -- it starts with the cost assignment and forms a non-additive distortion function that forces adjacent embedding changes to synchronize. Since the distortion function is purposely designed as a sum of locally supported potentials, one can use the Gibbs construction to realize the embedding in practice. The beneficial impact of synchronizing the embedding changes is linked to the fact that modern steganalysis detectors use higher-order statistics of noise residuals obtained by filters with sign-changing kernels and to the fundamental difficulty of accurately estimating the selection channel of a non-additive embedding scheme implemented with several Gibbs sweeps. Both decrease the accuracy of detectors built using rich media models, including their selection-channel-aware versions.

106 citations


Journal ArticleDOI
TL;DR: This paper describes and provides results for modeling image sensor based VLC for automotive applications, and shows that a single-pinhole camera model can be applied to vehicle motion modeling of I2V-VLC, V2I-VLC, and V2V-VLC.
Abstract: Channel modeling is critical for the design and performance evaluation of visible light communication (VLC). Although a considerable amount of research has focused on indoor VLC systems using single-element photodiodes, there remains a need for channel modeling of VLC systems for outdoor mobile environments. In this paper, we describe and provide results for modeling image sensor based VLC for automotive applications. In particular, we examine the channel model for mobile movements in the image plane as well as channel decay according to the distance between the transmitter and the receiver. Optical flow measurements were conducted for three VLC situations for automotive use: infrastructure-to-vehicle VLC (I2V-VLC), vehicle-to-infrastructure VLC (V2I-VLC), and vehicle-to-vehicle VLC (V2V-VLC). We describe vehicle motion by optical flow with subpixel accuracy using phase-only correlation (POC) analysis and show that a single-pinhole camera model successfully describes these three VLC cases. In addition, the luminance of the central pixel of the projected LED area versus the distance between the LED and the camera was measured. Our key findings are twofold. First, a single-pinhole camera model can be applied to vehicle motion modeling of I2V-VLC, V2I-VLC, and V2V-VLC. Second, the DC gain at a pixel remains constant as long as the projected image of the transmitter LED occupies several pixels. In other words, if we choose the pixel with the highest luminance within the projected image of the transmitter LED, its value remains constant, and the signal-to-noise ratio does not change with distance.

Proceedings ArticleDOI
07 Dec 2015
TL;DR: This paper first computes a dense 3D template of the shape of the object using a short rigid sequence, and subsequently performs online reconstruction of the non-rigid mesh as it evolves over time by minimizing a robust photometric cost.
Abstract: In this paper we tackle the problem of capturing the dense, detailed 3D geometry of generic, complex non-rigid meshes using a single RGB-only commodity video camera and a direct approach. While robust and even real-time solutions exist to this problem if the observed scene is static, for non-rigid dense shape capture current systems are typically restricted to the use of complex multi-camera rigs, take advantage of the additional depth channel available in RGB-D cameras, or deal with specific shapes such as faces or planar surfaces. In contrast, our method makes use of a single RGB video as input, it can capture the deformations of generic shapes, and the depth estimation is dense, per-pixel and direct. We first compute a dense 3D template of the shape of the object, using a short rigid sequence, and subsequently perform online reconstruction of the non-rigid mesh as it evolves over time. Our energy optimization approach minimizes a robust photometric cost that simultaneously estimates the temporal correspondences and 3D deformations with respect to the template mesh. In our experimental evaluation we show a range of qualitative results on novel datasets, we compare against an existing method that requires multi-frame optical flow, and perform a quantitative evaluation against other template-based approaches on a ground truth dataset.

Journal ArticleDOI
TL;DR: This paper proposes to use real-valued or binary random projections to effectively compress the fingerprints at a small cost in terms of matching accuracy, and examines the performance of randomly projected fingerprints on databases of real photographs.
Abstract: Sensor imperfections in the form of photoresponse nonuniformity (PRNU) patterns are a well-established fingerprinting technique to link pictures to the camera sensors that acquired them. The noise-like characteristics of the PRNU pattern make it a difficult object to compress, thus hindering many interesting applications that would require storage of a large number of fingerprints or transmission over a bandlimited channel for real-time camera matching. In this paper, we propose to use real-valued or binary random projections to effectively compress the fingerprints at a small cost in terms of matching accuracy. The performance of randomly projected fingerprints is analyzed from a theoretical standpoint and experimentally verified on databases of real photographs. Practical issues concerning the complexity of implementing random projections are also addressed using circulant matrices.
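A compact sketch of the compression-and-matching idea, assuming a small flattened fingerprint: project it onto m random directions (a dense Gaussian matrix here, feasible only for small fingerprints; the circulant matrices discussed in the paper avoid this memory cost), optionally keep only the signs as a binary fingerprint, and compare compressed fingerprints by normalised correlation. m, the seed handling and the dense projection are assumptions for illustration.

```python
import numpy as np

def compress_fingerprint(fingerprint, m=4096, binary=True, seed=0):
    """Project a flattened PRNU fingerprint onto m random directions; optionally keep only signs."""
    rng = np.random.default_rng(seed)
    x = fingerprint.astype(np.float64).ravel()
    P = rng.standard_normal((m, x.size)) / np.sqrt(m)       # dense Gaussian projection (illustrative)
    y = P @ x
    return np.sign(y) if binary else y

def match_score(a, b):
    """Normalised correlation between two (compressed) fingerprints."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```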

Proceedings ArticleDOI
10 Dec 2015
TL;DR: A Multiple Background Model based Background Subtraction Algorithm, originally designed for handling sudden illumination changes, is presented; comprehensive evaluation demonstrates the superiority of the algorithm against the state of the art.
Abstract: Background subtraction is one of the most commonly used components in machine vision systems. Despite the numerous algorithms proposed in the literature and used in practical applications, key challenges remain in designing a single system that can handle diverse environmental conditions. In this paper we present the Multiple Background Model based Background Subtraction Algorithm as such a candidate. The algorithm was originally designed for handling sudden illumination changes. The new version has been refined with changes at different steps of the process, specifically in the selection of an optimal color space, the clustering of training images for the Background Model Bank, and the parameters for each channel of the color space. This has extended the algorithm's applicability to a wide variety of challenges associated with change detection, including camera jitter, dynamic background, intermittent object motion, shadows, bad weather, thermal imagery, night videos, etc. Comprehensive evaluation demonstrates the superiority of the algorithm against the state of the art.

Journal ArticleDOI
TL;DR: This work investigates the use of alternative color spaces derived from sRGB video recordings as a fast, lightweight approach to pulse rate estimation, and indicates that the hue channel provides better estimation accuracy using extremely low computation power and with practically no latency.
Abstract: Existing video plethysmography methods use standard red-green-blue (sRGB) video recordings of the facial region to estimate heart pulse rate without making contact with the person being monitored. Methods achieving high estimation accuracy require considerable signal-processing power and result in significant processing latency. High processing power and latency are limiting factors when real-time pulse rate estimation is required or when the sensing platform has no access to high processing power. We investigate the use of alternative color spaces derived from sRGB video recordings as a fast, lightweight approach to pulse rate estimation. We consider seven color spaces and compare their performance with state-of-the-art algorithms that use independent component analysis. The comparison is performed over a dataset of 41 video recordings from subjects of varying skin tone and age. Results indicate that the hue channel provides better estimation accuracy using extremely low computation power and with practically no latency.
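A minimal sketch, assuming a (T, H, W, 3) uint8 clip at a known frame rate and a fixed facial skin ROI: take the mean hue of the ROI in each frame, remove the mean, and report the dominant spectral peak in the 0.75-4 Hz band as the pulse rate. The ROI handling, windowing and band limits are assumptions rather than the paper's full pipeline.

```python
import numpy as np

def hue_of(rgb):
    """Hue in [0, 1) of an (..., 3) RGB array scaled to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    d = np.where(mx - mn == 0, 1e-12, mx - mn)
    h = np.where(mx == r, ((g - b) / d) % 6,
        np.where(mx == g, (b - r) / d + 2, (r - g) / d + 4))
    return (h / 6.0) % 1.0

def pulse_rate_bpm(frames, fps, roi):
    """Estimate pulse rate (beats per minute) from the mean hue of a facial ROI over time."""
    y0, y1, x0, x1 = roi
    patch = frames[:, y0:y1, x0:x1, :].astype(np.float64) / 255.0
    hue = hue_of(patch.mean(axis=(1, 2)))                   # one hue value per frame
    hue = hue - hue.mean()
    freqs = np.fft.rfftfreq(hue.size, d=1.0 / fps)
    power = np.abs(np.fft.rfft(hue * np.hanning(hue.size))) ** 2
    band = (freqs >= 0.75) & (freqs <= 4.0)                 # 45-240 beats per minute
    return 60.0 * freqs[band][np.argmax(power[band])]
```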

Journal ArticleDOI
TL;DR: A novel color image RDH (reversible data hiding) scheme based on channel-dependent payload partition and adaptive embedding is proposed, which can yield better performance than some state-of-the-art works.

Journal ArticleDOI
TL;DR: A local adaptive thresholding technique based on gray level co-occurrence matrix (GLCM) energy information for retinal vessel segmentation is presented; it is time efficient, with higher average sensitivity and accuracy rates and specificity in the same, very good range.
Abstract: Although retinal vessel segmentation has been extensively researched, a robust and time efficient segmentation method is highly needed. This paper presents a local adaptive thresholding technique based on gray level co-occurrence matrix (GLCM) energy information for retinal vessel segmentation. Different thresholds were computed using GLCM energy information. An experimental evaluation on the DRIVE database using the grayscale intensity and the green channel of the retinal image demonstrates the high performance of the proposed local adaptive thresholding technique. Maximum average accuracy rates of 0.9511 and 0.9510, with maximum average sensitivity rates of 0.7650 and 0.7641, were achieved on the DRIVE and STARE databases, respectively. Compared with previously widely used techniques on these databases, the proposed adaptive thresholding technique is time efficient, with higher average sensitivity and accuracy rates and specificity in the same, very good range.
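A small sketch under stated assumptions (uint8 green-channel input, 8 grey levels, horizontal co-occurrence offset, non-overlapping windows): the GLCM energy of each local window modulates a local mean threshold, and pixels darker than the threshold are marked, since vessels appear darker than the background in the green channel. The exact mapping from GLCM energy to threshold used here is an illustrative assumption.

```python
import numpy as np

def glcm_energy(window, levels=8):
    """Energy (sum of squared probabilities) of the horizontal co-occurrence matrix of a window."""
    q = (window.astype(np.float64) / 256.0 * levels).astype(int).clip(0, levels - 1)
    pairs = levels * q[:, :-1] + q[:, 1:]                   # neighbouring pixel pairs, offset (0, 1)
    glcm = np.bincount(pairs.ravel(), minlength=levels * levels).astype(np.float64)
    glcm /= max(glcm.sum(), 1.0)
    return float((glcm ** 2).sum())

def segment_vessels(green, win=15, bias=0.95):
    """Local adaptive threshold: each window's threshold is its mean scaled by its GLCM energy."""
    h, w = green.shape
    out = np.zeros((h, w), dtype=bool)
    for y in range(0, h, win):
        for x in range(0, w, win):
            block = green[y:y + win, x:x + win]
            t = block.mean() * (bias + glcm_energy(block))  # assumed energy-to-threshold rule
            out[y:y + win, x:x + win] = block < t           # vessels are darker in the green channel
    return out
```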

Journal ArticleDOI
Zhi Liu, Liu Jing, Xiaoyan Xiao, Hui Yuan, Li Xiaomei, Jun Chang, Zheng Chengyun
08 Sep 2015-Sensors
TL;DR: A novel method for segmentation of white blood cells in peripheral blood and bone marrow images under different lights through mean shift clustering, color space conversion and nucleus mark watershed operation (NMWO) is presented.
Abstract: This paper presents a novel method for segmentation of white blood cells (WBCs) in peripheral blood and bone marrow images under different lights through mean shift clustering, color space conversion and nucleus mark watershed operation (NMWO). The proposed method focuses on obtaining seed points. First, color space transformation and image enhancement techniques are used to obtain nucleus groups as inside seeds. Second, mean shift clustering, selection of the C channel component in the CMYK model, and illumination intensity adjustment are employed to acquire WBCs as outside seeds. Third, the seeds and NMWO are employed to precisely determine WBCs and solve the cell adhesion problem. Morphological operations are further used to improve segmentation accuracy. Experimental results demonstrate that the algorithm exhibits higher segmentation accuracy and robustness compared with traditional methods.

Journal ArticleDOI
TL;DR: This paper aims to show that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error, so that an appropriate channel code design can protect the reference bits against tampering.
Abstract: Watermarking algorithms have been widely applied to the field of image forensics recently. One of these very forensic applications is the protection of images against tampering. For this purpose, we need to design a watermarking algorithm fulfilling two purposes in case of image tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering the lost reference bits still stands. This paper is aimed at showing that having the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of channel code can protect the reference bits against tampering. In the present proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In watermark embedding phase, the original image is source coded and the output bit stream is protected using appropriate channel encoder. For image recovery, erasure locations detected by check bits help channel erasure decoder to retrieve the original source encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both watermarked and recovered image. The watermarked image quality gain is achieved through spending less bit-budget on watermark, while image recovery quality is considerably improved as a consequence of consistent performance of designed source and channel codes.

Journal ArticleDOI
TL;DR: A novel gradient correlation similarity (Gcs) measure-based decolorization model is proposed for faithfully preserving the appearance of the original color image, together with a discrete searching solver that determines the solution with the minimum function value from the candidate images induced by the linear parametric model.
Abstract: This paper presents a novel gradient correlation similarity (Gcs) measure-based decolorization model for faithfully preserving the appearance of the original color image. Contrary to the conventional data-fidelity term consisting of gradient error-norm-based measures, the newly defined Gcs measure calculates the summation of the gradient correlation between each channel of the color image and the transformed grayscale image. Two efficient algorithms are developed to solve the proposed model. On one hand, due to the highly nonlinear nature of Gcs measure, a solver consisting of the augmented Lagrangian and alternating direction method is adopted to deal with its approximated linear parametric model. The presented algorithm exhibits excellent iterative convergence and attains superior performance. On the other hand, a discrete searching solver is proposed by determining the solution with the minimum function value from the linear parametric model-induced candidate images. The non-iterative solver has advantages in simplicity and speed with only several simple arithmetic operations, leading to real-time computational speed. In addition, it is very robust with respect to the parameter and candidates. Extensive experiments under a variety of test images and a comprehensive evaluation against existing state-of-the-art methods consistently demonstrate the potential of the proposed model and algorithms.
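A sketch of the discrete searching solver under stated assumptions: candidate grayscales come from the linear parametric model g = w_r*R + w_g*G + w_b*B with non-negative weights on a 0.1 grid summing to one, and each candidate is scored by a simplified gradient correlation similarity (the cosine between the stacked gradient fields of the candidate and of each channel, summed over channels). The candidate with the best score is kept; the paper's exact Gcs definition and objective may differ.

```python
import numpy as np

def grad(img):
    gy, gx = np.gradient(img.astype(np.float64))
    return gx.ravel(), gy.ravel()

def gcs(gray, channel):
    """Simplified gradient correlation similarity between a grayscale candidate and one channel."""
    gxg, gyg = grad(gray)
    gxc, gyc = grad(channel)
    num = gxg @ gxc + gyg @ gyc
    den = np.sqrt((gxg @ gxg + gyg @ gyg) * (gxc @ gxc + gyc @ gyc)) + 1e-12
    return num / den

def decolorize(rgb, step=0.1):
    """Discrete search over linear weights (summing to 1) that maximise the summed Gcs score."""
    img = rgb.astype(np.float64) / 255.0
    best, best_score = None, -np.inf
    ws = np.arange(0.0, 1.0 + 1e-9, step)
    for wr in ws:
        for wg in ws:
            wb = 1.0 - wr - wg
            if wb < -1e-9:
                continue
            gray = wr * img[..., 0] + wg * img[..., 1] + wb * img[..., 2]
            score = sum(gcs(gray, img[..., c]) for c in range(3))
            if score > best_score:
                best, best_score = gray, score
    return np.clip(best * 255.0, 0, 255).astype(np.uint8)
```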

Journal ArticleDOI
TL;DR: Two separate methods for robust and invisible image watermarking in RGB color space are proposed; Singular Value Decomposition (SVD) is applied to the blue channel of the host image to obtain the singular values, and the watermark is embedded in these singular values.
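A brief non-blind sketch in the spirit of the TL;DR, assuming the host image and a greyscale watermark of the same height and width: the singular values of the blue channel are additively perturbed by the watermark's singular values, and extraction reverses the step using stored side information. The strength alpha and the side-information layout are assumptions; the paper's two schemes differ in their details.

```python
import numpy as np

def embed_svd_watermark(rgb, watermark, alpha=0.05):
    """Embed a watermark into the singular values of the blue channel (non-blind)."""
    blue = rgb[..., 2].astype(np.float64)
    U, S, Vt = np.linalg.svd(blue, full_matrices=False)
    Uw, Sw, Vtw = np.linalg.svd(watermark.astype(np.float64), full_matrices=False)
    S_marked = S + alpha * Sw                               # additive embedding in the singular values
    out = rgb.copy()
    out[..., 2] = np.clip(U @ np.diag(S_marked) @ Vt, 0, 255).astype(np.uint8)
    return out, (S, Uw, Vtw)                                # side information kept for extraction

def extract_svd_watermark(marked_rgb, side, alpha=0.05):
    S, Uw, Vtw = side
    _, S_marked, _ = np.linalg.svd(marked_rgb[..., 2].astype(np.float64), full_matrices=False)
    return Uw @ np.diag((S_marked - S) / alpha) @ Vtw       # approximate watermark reconstruction
```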

Journal ArticleDOI
TL;DR: The proposed stratagem embeds the watermark image both into the spatial domain and the frequency domain of the multi-channel quantum carrier image, while also providing a quantum measurement-based algorithm to generate an unknown key that is used to protect the color information.
Abstract: Utilizing a stockpile of efficient transformations consisting of channel of interest, channel swapping, and quantum Fourier transforms, a duple watermarking strategy on multi-channel quantum images is proposed. It embeds the watermark image both into the spatial domain and the frequency domain of the multi-channel quantum carrier image, while also providing a quantum measurement-based algorithm to generate an unknown key that is used to protect the color information, which accompanies another key that is mainly used to scramble the spatial content of the watermark image in order to further safeguard the copyright of the carrier image. Simulation-based experiments using a watermark logo and nine building images as watermark image and carrier images, respectively, offer a duple protection for the copyright of carrier images in terms of the visible quality of the watermarked images. The proposed stratagem advances available literature in the quantum watermarking research field and sets the stage for the applications aimed at quantum data protection.

Journal ArticleDOI
TL;DR: The authors propose an adaptive DCP modelled by a Gaussian curve that produces a more natural recovered image of the sky and other bright regions; the method is about 30 times faster than the well-known state-of-the-art DCP approach and produces improved recovered images.
Abstract: The authors propose a novel and efficient method for single image dehazing. To accelerate the transmission estimation process, a block-to-pixel interpolation method is used for fine dark channel computation, in which the block-level dark channel is first computed, and then the fine pixel-level dark channel is obtained by a weighted voting of the block-level dark channel to preserve edges and smooth out texture noise. This technique can be used for direct transmission map generation without a computationally expensive refinement step. Since the dark channel prior (DCP) is not valid in bright (sky) regions, they propose an adaptive DCP modelled by a Gaussian curve that produces a more natural recovered image of the sky and other bright regions. In addition, a scaling method for transmission map computation is proposed to further accelerate the dehazing method. Through experiments, they show that the proposed adaptive block-to-pixel technique is about 30 times faster than the well-known state-of-the-art DCP approach and produces improved recovered images.

Proceedings ArticleDOI
TL;DR: Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficiently long time windows.
Abstract: Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin’s surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and a fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six five-minute controlled head motion artifact trials in front of a black and a dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and of reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficiently long time windows.

Journal ArticleDOI
TL;DR: A simple cryptic-free least significant bits spatial-domain-based steganographic technique is presented that embeds information (a color or a grayscale image) into a color image and is evaluated in terms of peak signal-to-noise ratio and quality index.
Abstract: In recent years, chaotic systems have surfaced to become an important field in steganographic matters. In this paper, we present a simple cryptic-free least significant bits spatial-domain-based steganographic technique that embeds information (a color or a grayscale image) into a color image. The proposed algorithm, called the cycling chaos-based steganographic algorithm, comprises two main parts: a cycling chaos function used to generate the seeds for a pseudorandom number generator (PRNG), and the PRNG, which determines the channel and the pixel positions of the host image in which the sensitive data are stored. The proposed algorithm is compared with two powerful steganographic color image methods in terms of peak signal-to-noise ratio and quality index. The comparisons indicate that the proposed algorithm shows good hiding capacity and fulfills stego-image quality. We also compare our algorithm against some existing steganographic attacks, including the RS attack, the Chi-square test, the byte attack, and the visual attack. The results demonstrate that the proposed algorithm can successfully withstand these attacks.
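A stripped-down sketch of the embedding side, assuming a uint8 RGB cover image and a 0/1 bit array: a keyed PRNG picks which (pixel, channel) positions carry the payload in their least significant bit, and the same key recovers them. Here np.random.default_rng stands in for the cycling-chaos seed generator described in the paper; that substitution is an assumption.

```python
import numpy as np

def embed_lsb(cover_rgb, payload_bits, key=12345):
    """Hide a bit sequence in pseudo-randomly chosen LSBs of a colour cover image."""
    rng = np.random.default_rng(key)                        # stand-in for the cycling-chaos seeding
    flat = cover_rgb.copy().reshape(-1)
    idx = rng.choice(flat.size, size=len(payload_bits), replace=False)
    flat[idx] = (flat[idx] & 0xFE) | np.asarray(payload_bits, dtype=np.uint8)
    return flat.reshape(cover_rgb.shape)

def extract_lsb(stego_rgb, n_bits, key=12345):
    """Recover the hidden bits using the same key (and hence the same positions)."""
    rng = np.random.default_rng(key)
    flat = stego_rgb.reshape(-1)
    idx = rng.choice(flat.size, size=n_bits, replace=False)
    return flat[idx] & 1
```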

Journal Article
TL;DR: A new spatial domain probability based watermarking scheme for color images is proposed; it has proved robust to various image processing operations such as filtering and lossy image compression, and to geometrical attacks such as rotation, scaling, and cropping.
Abstract: A new spatial domain probability based watermarking scheme for color images is proposed. The blue channel of the color image is used for watermark embedding. The host image is divided into 8x8 blocks and one bit of the binary encoded watermark is embedded in each such block. For each inserted bit, the intensity of all the pixels in the block is modified according to the embedding algorithm. Non-blind, probability based watermark extraction is performed with the help of the original host image. The method has proved robust to various image processing operations such as filtering and lossy image compression, and to geometrical attacks such as rotation, scaling, and cropping.

Journal ArticleDOI
TL;DR: A novel single-image-based dehazing framework is proposed to remove haze artifacts from images through local atmospheric light estimation, using a strategy based on a physical model in which the extreme intensity of each RGB pixel defines an initial atmospheric veil.

Journal ArticleDOI
TL;DR: An automatic cloud detection algorithm, "green channel background subtraction adaptive threshold" (GBSAT), is proposed, which incorporates channel selection, background simulation, computation of solar mask and cloud mask, subtraction, an adaptive threshold, and binarization.
Abstract: Obtaining an accurate cloud-cover state is a challenging task. In the past, traditional two-dimensional red-to-blue band methods have been widely used for cloud detection in total-sky images. By analyzing the imaging principle of cameras, the green channel has been selected to replace the 2-D red-to-blue band for detecting cloud pixels from partly cloudy total-sky images in this study. The brightness distribution in a total-sky image is usually nonuniform, because of forward scattering and Mie scattering of aerosols, which results in increased detection errors in the circumsolar and near-horizon regions. This paper proposes an automatic cloud detection algorithm, "green channel background subtraction adaptive threshold" (GBSAT), which incorporates channel selection, background simulation, computation of solar mask and cloud mask, subtraction, an adaptive threshold, and binarization. Five experimental cases show that the GBSAT algorithm produces more accurate retrieval results for all these test total-sky images.
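A rough sketch of the GBSAT steps under stated assumptions (uint8 RGB total-sky image; the solar mask is omitted): take the green channel, simulate a clear-sky background with a coarse block-wise minimum (a stand-in for the paper's background simulation), subtract, pick an adaptive threshold on the difference with Otsu's method, and binarise. The block size and the use of Otsu's rule are assumptions.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Adaptive (Otsu) threshold over a 1-D array of values."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(np.float64) / max(hist.sum(), 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)
    w1 = 1.0 - w0
    m0 = np.cumsum(p * centers) / np.maximum(w0, 1e-12)
    m1 = (np.sum(p * centers) - np.cumsum(p * centers)) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (m0 - m1) ** 2                      # between-class variance
    return centers[np.argmax(between)]

def gbsat_cloud_mask(rgb, bg_block=32):
    """Green-channel background subtraction with an adaptive threshold (solar mask omitted)."""
    g = rgb[..., 1].astype(np.float64)
    h, w = g.shape
    bg = np.zeros_like(g)
    for y in range(0, h, bg_block):                         # coarse clear-sky background simulation
        for x in range(0, w, bg_block):
            bg[y:y + bg_block, x:x + bg_block] = g[y:y + bg_block, x:x + bg_block].min()
    diff = g - bg
    return diff > otsu_threshold(diff.ravel())              # binarised cloud mask
```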

Journal ArticleDOI
TL;DR: A new method for full-color SIM with a color digital camera is proposed, based on the HSV (Hue, Saturation, and Value) color space, in which the recorded raw color images are processed in the Hue, Saturation, and Value color channels and reconstructed into a 3D image with full color.
Abstract: Owing to its super-resolved resolution and the fast speed of its three-dimensional (3D) optical sectioning capability, structured illumination microscopy (SIM) has found a variety of applications in biomedical imaging. So far, most SIM systems use monochrome CCD or CMOS cameras to acquire images and discard the natural color information of the specimens. Although multicolor integration schemes have been employed, multiple excitation sources and detectors are required and the spectral information is limited to a few wavelengths. Here, we report a new method for full-color SIM with a color digital camera. A data processing algorithm based on the HSV (Hue, Saturation, and Value) color space is proposed, in which the recorded raw color images are processed in the Hue, Saturation, and Value color channels and then reconstructed into a 3D image with full color. We demonstrate 3D optical sectioning results on samples such as mixed pollen grains, insects, micro-chips, and the surfaces of coins. The presented technique is applicable to circumstances where color information plays a crucial role, such as in materials science and surface morphology.

Journal ArticleDOI
TL;DR: An effective and scalable mobile image retrieval approach is presented that exploits the fact that, at the mobile end, people usually take multiple photos of an object from different viewpoints and with different focuses, and makes full use of these multiple photos to extract saliency.