
Showing papers on "Channel (digital image) published in 2010"


Proceedings ArticleDOI
13 Oct 2010
TL;DR: This paper implements the Multi-Scale Retinex algorithm on the luminance component in YCbCr space and obtains a pseudo transmission map whose function is similar to that of the transmission map in the original approach.
Abstract: In this paper, we propose an improved image dehazing algorithm using the dark channel prior and Multi-Scale Retinex. The main improvement lies in the automatic and fast acquisition of the transmission map of the scene. We implement the Multi-Scale Retinex algorithm on the luminance component in YCbCr space and obtain a pseudo transmission map whose function is similar to that of the transmission map in the original approach. Combining it with the haze image model and the dark channel prior, we can recover a high-quality haze-free image. Compared with the original method, our algorithm has two main advantages: (i) no user interaction is needed, and (ii) it restores the image much faster while maintaining comparable dehazing performance.
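As a concrete illustration of the dark channel this entry relies on, here is a minimal sketch: a per-pixel minimum over the color channels followed by a minimum filter over a local patch. Function name and patch size are our own choices, not the authors' implementation.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an H x W x 3 image: min over R, G, B per pixel,
    then a min filter over a patch x patch neighborhood."""
    min_rgb = img.min(axis=2)                    # H x W: channel-wise minimum
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")   # replicate borders
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

The prior says this map is close to zero in haze-free patches, so high dark-channel values indicate haze and can stand in for a transmission estimate.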

154 citations


Journal ArticleDOI
TL;DR: A class of SR algorithms based on the maximum a posteriori (MAP) framework is proposed, which utilize a new multichannel image prior model, along with the state-of-the-art single channel image prior and observation models.
Abstract: Super-resolution (SR) is the term used to define the process of estimating a high-resolution (HR) image or a set of HR images from a set of low-resolution (LR) observations. In this paper we propose a class of SR algorithms based on the maximum a posteriori (MAP) framework. These algorithms utilize a new multichannel image prior model, along with the state-of-the-art single channel image prior and observation models. A hierarchical (two-level) Gaussian nonstationary version of the multichannel prior is also defined and utilized within the same framework. Numerical experiments comparing the proposed algorithms among themselves and with other algorithms in the literature, demonstrate the advantages of the adopted multichannel approach.

145 citations


Patent
07 May 2010
TL;DR: An image sensor for capturing a color image comprising a two dimensional array of light-sensitive pixels including panchromatic pixels and color pixels having at least two different color responses, the pixels being arranged in a repeating pattern having a square minimal repeating unit with at least three rows and three columns as discussed by the authors.
Abstract: An image sensor for capturing a color image comprising a two dimensional array of light-sensitive pixels including panchromatic pixels and color pixels having at least two different color responses, the pixels being arranged in a repeating pattern having a square minimal repeating unit having at least three rows and three columns, the color pixels being arranged along one of the diagonals of the minimal repeating unit, and all other pixels being panchromatic pixels.

118 citations


Journal ArticleDOI
TL;DR: A new algorithm based on a probabilistic graphical model with the assumption that the image is defined over a Markov random field is proposed and it is demonstrated that the proposed approach outperforms representative conventional algorithms in terms of effectiveness and efficiency.
Abstract: Both commercial and scientific applications often need to transform color images into gray-scale images, e.g., to reduce the publication cost in printing color images or to help color-blind people see visual cues of color images. However, conventional color-to-gray algorithms are not ready for practical applications because they encounter the following problems: 1) Visual cues are not well defined, so it is unclear how to preserve important cues in the transformed gray-scale images; 2) some algorithms have extremely high computational cost; and 3) some require human-computer interaction to produce a reasonable transformation. To solve or at least reduce these problems, we propose a new algorithm based on a probabilistic graphical model with the assumption that the image is defined over a Markov random field. Thus, the color-to-gray procedure can be regarded as a labeling process that preserves the newly well-defined visual cues of a color image in the transformed gray-scale image. Visual cues are measurements that can be extracted from a color image by a perceiver. They indicate the state of some properties of the image that the perceiver is interested in perceiving. Different people may perceive different cues from the same color image, and three cues are defined in this paper, namely, color spatial consistency, image structure information, and color channel perception priority. We cast color-to-gray conversion as a visual cue preservation procedure based on a probabilistic graphical model and optimize the model as an integral minimization problem. We apply the new algorithm to both natural color images and artificial pictures, and demonstrate that the proposed approach outperforms representative conventional algorithms in terms of effectiveness and efficiency. In addition, it requires no human-computer interaction.

104 citations


Proceedings Article
01 Jan 2010
TL;DR: The results presented here show that in fact MaxRGB works surprisingly well when tested on a new dataset of 105 high dynamic range images, and also better than previously reported when some simple pre-processing is applied to the images of the standard 321 image set.
Abstract: The poor performance of the MaxRGB illuminationestimation method is often used in the literature as a foil when promoting some new illumination-estimation method. However, the results presented here show that in fact MaxRGB works surprisingly well when tested on a new dataset of 105 high dynamic range images, and also better than previously reported when some simple pre-processing is applied to the images of the standard 321 image set [1]. The HDR images in the dataset for color constancy research were constructed in the standard way from multiple exposures of the same scene. The color of the scene illumination was determined by photographing an extra HDR image of the scene with 4 Gretag Macbeth mini Colorcheckers at 45 degrees relative to one another placed in it. With preprocessing, MaxRGB’s performance is statistically equivalent to that of Color by Correlation [2] and statistically superior to that of the Greyedge [3] algorithm on the 321 set (null hypothesis rejected at the 5% significance level). It also performs as well as Greyedge on the HDR set. These results demonstrate that MaxRGB is far more effective than it has been reputed to be so long as it is applied to image data that encodes the full dynamic range of the original scene. Introduction MaxRGB is an extremely simple method of estimating the chromaticity of the scene illumination for color constancy and automatic white balancing based on the assumption that the triple of maxima obtained independently from each of the three color channels represents the color of the illumination. It is often used as a foil to demonstrate how much better some newly proposed algorithm performs in comparison. However, is its performance really as bad as it has been reported [1,3-5] to be? 
Is it really any worse than the algorithms to which it is compared? The prevailing belief in the field about the inadequacy of MaxRGB is reflected in the following two quotations from two different anonymous reviewers criticizing a manuscript describing a different illumination-estimation proposal: “Almost no-one uses Max RGB in the field (or in commercial cameras). That this, rejected method, gives better performance than the (proposed) method is grounds alone for rejection.” “The first and foremost thing that attracts attention is the remarkable performance of the Scale-by-Max (i.e. White-Patch) algorithm. This algorithm has the highest performance on two of the three data sets, which is quite remarkable by itself.” (The paper’s title was inspired by Charles Poynton, “The Rehabilitation of Gamma,” Proc. of Human Vision and Electronic Imaging III, SPIE 3299, 232-249, 1998.) We hypothesize that there are two reasons why the effectiveness of MaxRGB may have been underestimated. One is that it is important not to apply MaxRGB naively as the simple maximum of each channel; rather, it is necessary to preprocess the image data somewhat before calculating the maximum, otherwise a single bad pixel or spurious noise will make the maximum incorrect. The second is that MaxRGB has generally been applied to 8-bit-per-channel, non-linear images, for which there is both significant tone-curve compression and clipping of high intensity values. To test the pre-processing hypothesis, the effects of pre-processing by median filtering, and of resizing by bilinear filtering, are compared to that of the common pre-processing, which simply discards pixels for which at least one channel is maximal (i.e., for n-bit images, when R = 2^n - 1, G = 2^n - 1, or B = 2^n - 1). To test the dynamic-range hypothesis, a new HDR dataset for color constancy research has been constructed which consists of images of 105 scenes.
For each scene there are HDR (high dynamic range) images with and without Macbeth mini Colorchecker charts, from which the chromaticity of the scene illumination is measured. (Note that the scenes were not necessarily of high dynamic range; the term HDR is used here to mean simply that the full dynamic range of the scene is captured within the image.) This data set is now available on-line at www.cs.sfu.ca/~colour/data. MaxRGB is a special and extremely limited case of Retinex [6]. In particular, it corresponds to McCann99 Retinex [7] when the number of iterations is infinite, or to path-based Retinex [8] without thresholding but with infinite paths. Retinex and MaxRGB both depend on the assumption that either there is a white surface in the scene, or there are three separate surfaces reflecting maximally in the R, G and B sensitivity ranges. In practice, most digital still cameras are incapable of capturing the full dynamic range of a scene and use exposures and tone reproduction curves that clip or compress high digital counts. As a result, the maximum R, G and B digital counts from an image generally do not faithfully represent the corresponding maximum scene radiances. Barnard et al. [9] present some tests using artificial clipping of images that show the effect that lack of dynamic range can have on various illumination-estimation algorithms. To determine whether or not MaxRGB is really as poor as it is reported to be in comparison to other illumination-estimation algorithms, we compare the performance of several algorithms on the new image database. Tests described below show that MaxRGB performs as well on this new HDR data set as other representative and recently published algorithms. We also find that two simple pre-processing strategies lead to significant performance improvement. The results reported here extend those of an earlier study [10] in a number of ways: the size of the dataset
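The MaxRGB estimator with the common clipped-pixel pre-processing described in the abstract (discarding any pixel that is maximal in at least one channel) can be sketched as follows. This is an illustrative reconstruction with our own names, not the authors' code.

```python
import numpy as np

def maxrgb_illuminant(img, n_bits=8):
    """Estimate the illuminant chromaticity as the triple of per-channel
    maxima, taken only over pixels not clipped in any channel."""
    clip = 2 ** n_bits - 1
    valid = (img < clip).all(axis=2)       # drop pixels clipped in any channel
    pixels = img[valid].astype(float)
    rgb_max = pixels.max(axis=0)           # per-channel maxima (R, G, B)
    return rgb_max / rgb_max.sum()         # normalize to a chromaticity
```

The paper's stronger variants replace this clipping step with median filtering or bilinear down-sizing before taking the maxima, which suppresses single bad pixels and sensor noise.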

103 citations


Proceedings ArticleDOI
16 Apr 2010
TL;DR: In this article, an efficient and effective method using the dark channel prior is proposed to restore the original clarity of underwater images, where the depth of the turbid water is estimated under the assumption that most local patches in water-free images contain some pixels with very low intensity in at least one color channel.
Abstract: In this paper, an efficient and effective method is proposed that uses the dark channel prior to restore the original clarity of underwater images. Images taken in the underwater environment are subject to attenuation by the water and scattering by suspended particles, a phenomenon similar to the effect of heavy fog in the air. Using the dark channel prior, the depth of the turbid water can be estimated under the assumption that most local patches in water-free images contain some pixels with very low intensity in at least one color channel. In this way, the effect of turbid water can be removed and the original clarity of the images can be unveiled. Results processed by this method are presented in the paper.

99 citations


Patent
Jian Sun1, Kaiming He1, Xiaoou Tang1
01 Feb 2010
TL;DR: In this article, techniques and technologies for de-hazing hazy images are described, and some of the disclosed methods include removing the effects of the haze from a hazy image and outputting the recovered, dehazed image.
Abstract: Techniques and technologies for de-hazing hazy images are described. Some techniques provide for determining the effects of the haze and removing the same from an image to recover a de-hazed image. Thus, the de-hazed image does not contain the effects of the haze. Some disclosed technologies allow for similar results. This document also discloses systems and methods for de-hazing images. Some of the disclosed de-hazing systems include an image capture device for capturing the hazy image and a processor for removing the effects of the haze from the hazy image. These systems store the recovered, de-hazed images in a memory and/or display the de-hazed images on a display. Some of the disclosed methods include removing the effects of the haze from a hazy image and outputting the recovered, de-hazed image.

99 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper introduces a method to correct over-exposure in an existing photograph by recovering the color and lightness separately, which is fully automatic and requires only one single input photo.
Abstract: This paper introduces a method to correct over-exposure in an existing photograph by recovering the color and lightness separately. First, the dynamic range of the well-exposed region is slightly compressed to make room for the recovered lightness of the over-exposed region. Then the lightness is recovered based on an over-exposure likelihood. The color of each pixel is corrected via neighborhood propagation, based also on the confidence of the original color. Previous methods make use of ratios between different color channels to recover the over-exposed ones, and thus cannot handle regions where all three channels are over-exposed. In contrast, our method does not have this limitation. It is fully automatic and requires only a single input photo. We also give users the flexibility to control the amount of over-exposure correction. Experimental results demonstrate the effectiveness of the proposed method in correcting over-exposure.

81 citations


Proceedings ArticleDOI
Yan Wang1, Bo Wu1
06 Dec 2010
TL;DR: An improved single-image dehazing algorithm based on physical models of atmospheric scattering is introduced, which applies the local dark channel prior on a selected region to estimate the atmospheric light and obtain a more accurate result.
Abstract: Atmospheric conditions induced by suspended particles, such as fog and haze, severely degrade image quality. Haze removal from a single image of a weather-degraded scene remains a challenging task, because the haze depends on unknown depth information. In this paper, we introduce an improved single-image dehazing algorithm based on physical models of atmospheric scattering. We apply the local dark channel prior on a selected region to estimate the atmospheric light, obtaining a more accurate result. Experiments on real images validate our approach.

64 citations


Proceedings ArticleDOI
25 Sep 2010
TL;DR: A new method for real-time image and video dehazing is proposed, based on a newly presented haze-free image prior - dark channel prior and a common haze imaging model, that can estimate the global atmospheric light and extract the scene objects transmission and prevent artifacts.
Abstract: Outdoor photography and computer vision tasks often suffer from bad weather conditions: observed objects lose visibility and contrast due to the presence of atmospheric haze, fog, and smoke. In this paper, we propose a new method for real-time image and video dehazing. Based on a recently presented haze-free image prior, the dark channel prior, and a common haze imaging model, we can estimate the global atmospheric light and extract the scene transmission from a single input image. To prevent artifacts, we refine the transmission using a cross-bilateral filter, and finally the haze-free frame can be restored by inverting the haze imaging model. The whole process is highly parallelized and can be easily implemented on modern GPUs to achieve real-time performance. Compared with existing methods, our approach provides similar or better results with much less processing time. The proposed method can further be used for many applications such as outdoor surveillance, remote sensing, and intelligent vehicles. In addition, rough depth information of the scene can be obtained as a by-product.
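The final restoration step most dark-channel methods share is inverting the common haze imaging model I = J·t + A·(1 - t) for the scene radiance J. A minimal sketch follows; the transmission floor t0 is a typical stabilizing choice we add for illustration, not a value from this paper.

```python
import numpy as np

def recover_scene(I, A, t, t0=0.1):
    """Invert I = J*t + A*(1 - t) for the haze-free scene J.
    I: H x W x 3 hazy image, A: length-3 atmospheric light,
    t: H x W transmission map, floored at t0 to avoid noise blow-up."""
    t = np.maximum(t, t0)[..., None]   # broadcast transmission over channels
    return (I - A) / t + A
```

With the transmission refined (here by a cross-bilateral filter) this inversion runs independently per pixel, which is what makes the GPU parallelization straightforward.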

63 citations


Journal ArticleDOI
TL;DR: It was observed that performing principal component analysis (PCA) calculations on multidimensional or multispectral information not only provides the combination of variables that explain most of the variance at a certain time instance but also decreases the autocorrelation of the resulting time series.

Book ChapterDOI
05 Sep 2010
TL;DR: Topic Random Field (TRF) is proposed, which defines a Markov Random Field over hidden labels of an image, to enforce the spatial coherence between topic labels for neighboring regions and achieves better segmentation performance.
Abstract: Recently, there has been increasing interest in applying aspect models (e.g., PLSA and LDA) to image segmentation. However, these models ignore spatial relationships among local topic labels in an image and suffer from information loss by representing each image feature using the index of its closest match in the codebook. In this paper, we propose the Topic Random Field (TRF) to tackle these two problems. Specifically, TRF defines a Markov Random Field over the hidden labels of an image to enforce spatial coherence between topic labels for neighboring regions. Moreover, TRF utilizes a noise channel to model the generation of local image features, avoiding the off-line process of building a visual codebook. We provide details of variational inference and parameter learning for TRF. Experimental evaluations on three image data sets show that TRF achieves better segmentation performance.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: It is shown in this paper that Fourier-based reconstruction approaches suffer from severe artifacts in the case of sensor saturation, and a novel combined optical light modulation and computational reconstruction method is proposed that not only suppresses such artifacts, but also allows us to recover a wider dynamic range than existing image-space multiplexing approaches.
Abstract: Optically multiplexed image acquisition techniques have become increasingly popular for encoding different exposures, color channels, light fields, and other properties of light onto two-dimensional image sensors. Recently, Fourier-based multiplexing and reconstruction approaches have been introduced in order to achieve a superior light transmission of the employed modulators and better signal-to-noise characteristics of the reconstructed data. We show in this paper that Fourier-based reconstruction approaches suffer from severe artifacts in the case of sensor saturation, i.e. when the dynamic range of the scene exceeds the capabilities of the image sensor. We analyze the problem, and propose a novel combined optical light modulation and computational reconstruction method that not only suppresses such artifacts, but also allows us to recover a wider dynamic range than existing image-space multiplexing approaches.

Journal ArticleDOI
TL;DR: In this article, the color of the pixel is changed as electrowetting moves the pigment dispersion between a top and bottom channel, and a near zero Laplace pressure and a hysteresis pressure of 0.11kN/m2 stabilizes the position.
Abstract: Electrofluidic display pixels are demonstrated with zero-power grayscale operation for 3 months and with >70% reflectance. The color of the pixel is changed as electrowetting moves a pigment dispersion between a top and a bottom channel. When voltage is removed, a near-zero Laplace pressure and a hysteresis pressure of 0.11 kN/m2 stabilize the position. For 450 μm pixels, an electromechanical pressure of 1.4 kN/m2 moves the pigment dispersion at a speed of ∼2650 μm/s. The predicted switching speed for ∼150 μm pixels is consistent with video-rate operation (20 ms). The geometrically sophisticated pixel structure is fabricated with only simple photolithography and wet chemical processing.

Patent
15 Dec 2010
TL;DR: In this paper, a method for processing an image such as a computer wallpaper identifies a characteristic color representative of the image, which can be used in other displayed images at an intensity α, with α being the lesser of α_max and α_min plus the average color span of all pixels in the image.
Abstract: A method for processing an image such as a computer wallpaper identifies a characteristic color representative of the image. Image pixels with similar colors are separated into groups, and the average value of the R,G,B color components in each group is determined, after filtering out pixels with R,G,B values representing white, black, or grey. The group with the maximum difference between the highest average color component value and the lowest average color component value is identified as the characteristic color. Groups representing a number of pixels less than a certain percentage of all of the pixels are not considered. The characteristic color can be used in other displayed images at an intensity α determined by setting maximum and minimum values α_max and α_min, with α being the lesser of α_max and α_min plus the average color span of all pixels in the image.
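The α rule stated in the abstract (the lesser of α_max and α_min plus the average color span) could be sketched as below, assuming pixel values normalized to [0, 1]. The default bounds are illustrative only, not taken from the patent.

```python
import numpy as np

def overlay_alpha(img, alpha_min=0.2, alpha_max=0.8):
    """Intensity for the characteristic color: min(alpha_max,
    alpha_min + mean per-pixel color span), where the color span
    is the max color component minus the min color component."""
    span = img.max(axis=2) - img.min(axis=2)   # per-pixel color span
    return min(alpha_max, alpha_min + span.mean())
```

Colorful images (large average span) thus push α toward α_max, while near-grey images keep the overlay close to α_min.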

Patent
25 Feb 2010
TL;DR: In this article, a method of presenting an image on a display device having color channel dependent light emission was proposed, where a reduction factor for each input pixel signal was calculated dependent upon a distance metric between the input pixel signals and the selected reduction color component.
Abstract: A method of presenting an image on a display device having color channel dependent light emission comprising receiving an image input signal including a plurality of three-component input pixel signals; selecting a reduction color component; calculating a reduction factor for each input pixel signal dependent upon a distance metric between the input pixel signal and the selected reduction color component; selecting a respective saturation adjustment factor for each color component of each pixel signal; producing an image output signal having four color components from the image input signal using the reduction factors and saturation adjustment factors to adjust the luminance and color saturation, respectively, of the image input signal; providing a four-channel display device having color channel dependent light emission; and applying the image output signal to the display device to cause it to present an image corresponding to the image output signal.

Proceedings ArticleDOI
15 Dec 2010
TL;DR: The key contributions are a generalization of three-color photometric stereo to more than three color channels, and the design of a practical six-color-channel system using off-the-shelf parts.
Abstract: Spectral multiplexing allows multiple channels of information to be captured simultaneously, using readily available color cameras. Information may be multiplexed across the color channels of a camera by use of colored lights (e.g. [Woodham 1980; Hernandez and Vogiatzis 2010]) or colored filters (e.g. [Bando et al. 2008]). We propose a novel method for single-shot photometric stereo by spectral multiplexing. The output of our method is a simultaneous per-pixel estimate of the surface normal and full-color reflectance. Our method is well suited to materials with varying color and texture, requires no time-varying illumination, and no high-speed cameras. Being a single-shot method, it may be applied to dynamic scenes without any need for optical flow. Our key contributions are a generalization of three-color photometric stereo to multiple (more than three) color channels, and the design of a practical six-color-channel system using off-the-shelf parts only.
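The generalization to more than three color channels reduces, per pixel, to a least-squares solve against a k x 3 matrix of effective light directions. A minimal sketch under a Lambertian assumption; the function name, array shapes, and calibration matrix are our own, not the paper's system.

```python
import numpy as np

def normals_from_channels(C, L):
    """Per-pixel surface normals from a k-channel image C (H x W x k),
    given the k x 3 matrix L of effective light directions: each
    channel sees c_k = l_k . n, so n solves L n = c in least squares."""
    h, w, k = C.shape
    flat = C.reshape(-1, k).T                       # k x (H*W) intensities
    n, *_ = np.linalg.lstsq(L, flat, rcond=None)    # 3 x (H*W) solve
    n = n.T.reshape(h, w, 3)
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.maximum(norm, 1e-8)               # unit normals
```

With k = 6 the system is overdetermined, which is what lets the six-channel design separate normal and full-color reflectance from a single shot.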

Journal ArticleDOI
TL;DR: Two approaches to encrypt color images based on interference and virtual optics are proposed and a concept based on virtual optics is further applied to enhance the security level.
Abstract: We propose two approaches to encrypt color images based on interference and virtual optics. In the first method, a color image is first decomposed into three independent channels, i.e., red, green and blue. Each channel of the input image is encrypted into two random phase-only masks based on interference. In the second method, a color image is first converted into an image matrix and a color map, and only the image matrix is encrypted into random-phase masks based on interference. After the phase masks are retrieved, a concept based on virtual optics is further applied to enhance the security level. Numerical simulations are demonstrated to show the feasibility and effectiveness of the proposed methods.

Journal ArticleDOI
TL;DR: A digital technique for multiplexing and encryption of four RGB images has been proposed using the fractional Fourier transform (FRT), which is more compact and faster as compared to the multichannel techniques.

Journal ArticleDOI
TL;DR: In this article, a difference image based JPEG communication scheme and water level measurement scheme using sparsely sampled images in time domain were proposed to measure water levels from remote sites using a narrowband channel.
Abstract: To measure water levels at remote sites over a narrowband channel, this paper proposes a difference-image-based JPEG communication scheme and a water level measurement scheme using images sparsely sampled in time. In the slave system located in the field, images are converted to difference images and compressed using JPEG; then larger changes are sampled and transmitted. To measure the water level from the images received in the master system, which may contain noise from various sources, averaging and Gaussian filters are used to reduce the noise, and the Y-axis profile of an edge image is used to read the water level. Considering the wild conditions of the field, a simplified camera calibration scheme is also introduced. The implemented slave system was installed in a river and its performance has been tested with data collected over a year.

Patent
03 Nov 2010
TL;DR: In this article, the authors proposed a method for removing noise in multiview video by multiple cameras setup, which consists of several steps which are normalizing the colour and intensity of the images; then choosing a reference image; reducing temporal noise for each channel motion compensation or frame averaging independently; mapping each pixel in the reference camera to the other camera views; determining the visibility of the corresponding pixel, after mapped to other images by comparing the depth value; checking RGB range of the candidates with the corresponding pixels within the visible observations, then among stored RGB values from the visible regions of a
Abstract: This invention relates to a method for removing noise in multiview video captured by a multiple-camera setup. The method comprises several steps: normalizing the colour and intensity of the images; choosing a reference image; reducing temporal noise for each channel independently by motion compensation or frame averaging; mapping each pixel in the reference camera to the other camera views; determining the visibility of the corresponding pixel after it is mapped to the other images by comparing depth values; checking the RGB range of the candidates against the corresponding pixels within the visible observations; then, among the stored RGB values from the visible regions of a pixel in the reference view, taking the median value and assigning it to the reference pixel and all other pixels matched to the reference pixel through the depth map; and repeating these steps until all pixels in each view are visited.

Book ChapterDOI
10 Sep 2010
TL;DR: This paper introduces a simple but efficient cue for the extraction of shadows from a single color image, the bright channel cue, and presents qualitative and quantitative results for shadow detection, as well as results in illumination estimation from shadows.
Abstract: In this paper, we introduce a simple but efficient cue for the extraction of shadows from a single color image, the bright channel cue. We discuss its limitations and offer two methods to refine the bright channel: by computing confidence values for the cast shadows, based on a shadow-dependent feature, such as hue; and by combining the bright channel with illumination invariant representations of the original image in a flexible way using an MRF model. We present qualitative and quantitative results for shadow detection, as well as results in illumination estimation from shadows. Our results show that our method achieves satisfying results despite the simplicity of the approach.
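By analogy with the dark channel, the bright channel cue can be sketched as a per-pixel maximum over color channels followed by a local max filter; low values then suggest shadowed regions. This is an illustrative reading of the abstract with our own names, not the authors' code.

```python
import numpy as np

def bright_channel(img, patch=15):
    """Bright channel of an H x W x 3 image: max over R, G, B per pixel,
    then a max filter over a patch x patch neighborhood.
    Low bright-channel values are candidate shadow pixels."""
    max_rgb = img.max(axis=2)                    # H x W: channel-wise maximum
    pad = patch // 2
    padded = np.pad(max_rgb, pad, mode="edge")   # replicate borders
    h, w = max_rgb.shape
    out = np.empty_like(max_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].max()
    return out
```

The paper's refinements (shadow-dependent confidence values such as hue, and an MRF combination with illumination-invariant representations) would then operate on this raw cue.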

Proceedings ArticleDOI
Beijing Chen1, Huazhong Shu1, Hui Zhang1, Gang Chen1, Limin Luo1 
23 Aug 2010
TL;DR: It is shown that the QZMs can be obtained via the conventional Zernike moments of each channel, and a set of combined invariants to rotation and translation (RT) using the modulus of centralQZMs is constructed.
Abstract: Moments and moment invariants are useful tools in pattern recognition and image analysis. Conventional methods for dealing with color images are based on RGB decomposition or graying. In this paper, using the theory of quaternions, we introduce a set of quaternion Zernike moments (QZMs) for color images in a holistic manner. It is shown that the QZMs can be obtained via the conventional Zernike moments of each channel. We also construct a set of combined invariants to rotation and translation (RT) using the modulus of central QZMs. Experimental results show that the proposed descriptors are more efficient than existing ones.

Book ChapterDOI
05 Sep 2010
TL;DR: A novel representation for the color transfer function of any device, using higher-dimensional Bezier patches, that does not rely on any restrictive assumptions and hence can handle devices that do not behave in an ideal manner and is performed efficiently using a real-time GPU implementation.
Abstract: A color transfer function describes the relationship between the input and the output colors of a device. Computing this function is difficult when devices do not follow traditionally coveted properties like channel independency or color constancy, as is the case with most commodity capture and display devices (like projectors, cameras and printers). In this paper we present a novel representation for the color transfer function of any device, using higher-dimensional Bezier patches, that does not rely on any restrictive assumptions and hence can handle devices that do not behave in an ideal manner. Using this representation and a novel reparametrization technique, we design a color transformation method that is more accurate and free of local artifacts compared to existing color transformation methods. We demonstrate this method's generality by using it for color management on a variety of input and output devices. Our method shows significant improvement in the appearance of seamlessness when used in the particularly demanding application of color matching across multi-projector displays or multi-camera systems. Finally we demonstrate that our color transformation method can be performed efficiently using a real-time GPU implementation.

Proceedings ArticleDOI
03 Aug 2010
TL;DR: The presented asynchronous, time-based CMOS dynamic vision and image sensor is based on a QVGA (304×240) array of fully autonomous pixels containing event-based change detection and PWM imaging circuitry that ideally results in optimal lossless video compression through complete temporal redundancy suppression at the focal-plane.
Abstract: The presented asynchronous, time-based CMOS dynamic vision and image sensor is based on a QVGA (304×240) array of fully autonomous pixels containing event-based change detection and PWM imaging circuitry. Exposure measurements are initiated and carried out locally by the individual pixel that has detected a brightness change in its field-of-view. Thus pixels do not rely on external timing signals and independently and asynchronously request access to an (asynchronously arbitrated) output channel when they have new illumination values to communicate. Communication is address-event based (AER); gray levels are encoded in inter-event intervals. Pixels that are not stimulated visually do not produce output. This pixel-autonomous and massively parallel operation ideally results in optimal lossless video compression through complete temporal redundancy suppression at the focal plane. Compression factors depend on scene activity. Due to the time-based encoding of the illumination information, very high dynamic range (intra-scene DR of 143 dB static and 125 dB at 30 fps equivalent temporal resolution) is achieved. A novel time-domain correlated double sampling (TCDS) method yields array FPN of 56 dB (9.3 bit) for >10 lx.
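The event-driven readout described above can be mimicked in a few lines: only pixels whose brightness has changed beyond a threshold emit an address-event, and the gray level is carried by the inter-event interval. The inverse-proportional timing model, threshold, and constant `k` below are illustrative assumptions, not the sensor's actual circuit behavior:

```python
import numpy as np

def emit_events(prev, curr, threshold=0.15, k=100.0):
    """Return address-events (y, x, interval) for pixels whose relative
    brightness change exceeds the threshold. Unchanged pixels stay silent,
    which is the temporal redundancy suppression described above. A brighter
    pixel integrates to threshold faster, so its inter-event interval is
    modeled as inversely proportional to brightness."""
    events = []
    change = np.abs(curr - prev) / np.maximum(prev, 1e-6)
    for (y, x) in zip(*np.nonzero(change > threshold)):
        events.append((int(y), int(x), k / curr[y, x]))  # gray level -> interval
    return events
```

A static scene thus produces no output at all, and the event rate (hence bandwidth) scales with scene activity, matching the compression behavior the abstract describes.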

Proceedings ArticleDOI
11 Nov 2010
TL;DR: An adaptive method to predict the NIR channel image from color iris images is introduced; both visual inspection of the predicted image and the verification performance indicate that the adaptive mapping linking NIR and color images is a potential solution to the problem of matching NIR images against color images in practice.
Abstract: An adaptive method to predict the NIR channel image from color iris images is introduced. Both visual inspection of the predicted image and the verification performance indicate that the adaptive mapping linking NIR and color images is a potential solution to the problem of matching NIR images against color images in practice. When matched against an NIR-enrolled image, the predicted NIR image achieves significantly higher performance than matching the R channel alone.
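The abstract does not specify the form of the adaptive mapping; a minimal non-adaptive stand-in is a global least-squares affine map from RGB to NIR, fitted on paired pixels. All names and the affine model here are assumptions for illustration, not the paper's method:

```python
import numpy as np

def fit_rgb_to_nir(rgb, nir):
    """Fit weights w minimizing ||A w - nir||^2 for an affine RGB -> NIR model.
    rgb: (N, 3) pixel values; nir: (N,) target NIR values."""
    A = np.hstack([rgb, np.ones((rgb.shape[0], 1))])  # append affine term
    w, *_ = np.linalg.lstsq(A, nir, rcond=None)
    return w

def predict_nir(rgb, w):
    """Apply the fitted affine map to produce a predicted NIR image."""
    A = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    return A @ w
```

An adaptive variant would refit or blend such maps per region or per iris, which is presumably what lets the paper's mapping outperform using the R channel directly.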

Proceedings ArticleDOI
14 Mar 2010
TL;DR: This paper proposes a scheme for identifying an electrophotographic printer from its printed material, which inherently contains imperceptible halftone patterns; the results support that the presented scheme clearly recognizes different halftone textures.
Abstract: Estimating the printing source is applicable in many forensic situations. In this paper, we propose a scheme for identifying an electrophotographic printer from its printed material, which inherently contains imperceptible halftone patterns. The halftone textures in each channel of the CMYK domain are analyzed. We construct a histogram from the angle values of linear features extracted by the Hough transform. By averaging the histograms from multiple images, a printer's reference pattern is identified. The source printer is determined by the maximum correlation value between the reference patterns and the histogram of the given image. Experiments are performed on 9,000 images made by 9 printers. The results support that the presented scheme clearly recognizes different halftone textures.
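The matching step above (angle histogram vs. averaged reference patterns, decided by maximum correlation) can be sketched directly; the bin count and normalization are assumptions, and the Hough line extraction itself is left out:

```python
import numpy as np

def angle_histogram(angles_deg, bins=180):
    """Histogram of Hough line angles (degrees, folded to [0, 180)),
    normalized to unit sum so documents of different sizes are comparable."""
    h, _ = np.histogram(np.asarray(angles_deg) % 180, bins=bins, range=(0, 180))
    h = h.astype(float)
    return h / max(h.sum(), 1.0)

def identify_printer(query_hist, reference_patterns):
    """Pick the printer whose reference pattern (an averaged histogram)
    has the maximum correlation with the query histogram."""
    scores = {name: float(np.corrcoef(query_hist, ref)[0, 1])
              for name, ref in reference_patterns.items()}
    return max(scores, key=scores.get), scores
```

A reference pattern per printer would be built by averaging `angle_histogram` outputs over many training prints, as the abstract describes.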

Journal ArticleDOI
TL;DR: Computer simulations with various sets of real, low dynamic range images show the effectiveness of the proposed tone mapping (TM) algorithm in terms of visual quality as well as local contrast.
Abstract: In this paper, we propose a tone mapping (TM) method using a color correction function (CCF) and image decomposition in high dynamic range (HDR) imaging. The CCF in the proposed TM is derived from the luminance compression function with a color constraint under which the ratios between the three color channels of the radiance map and the dynamic range compression term are preserved and color saturation is controlled. The proposed CCF is developed to locally perform luminance compression and color saturation control in local TM. For image decomposition, we use a bilateral filter and apply an adaptive weight to the base layer of the luminance. Computer simulations with various sets of real, low dynamic range images show the effectiveness of the proposed TM algorithm in terms of visual quality as well as local contrast. It can be used for contrast and color enhancement in various display and acquisition devices.
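A common way to write the color constraint sketched above is to reattach color after luminance compression via C_out = (C_in / L_in)^s · L_out, where s = 1 preserves the channel-to-luminance ratios exactly and s < 1 desaturates. This widely used ratio-preserving form is only a stand-in for the paper's locally varying CCF:

```python
import numpy as np

def apply_color_correction(rgb, L_in, L_out, s=0.7):
    """Reattach color to a compressed luminance channel:
        C_out = (C_in / L_in)^s * L_out
    rgb:   (H, W, 3) radiance map; L_in, L_out: (H, W) luminance before
    and after dynamic range compression; s controls color saturation.
    (Generic ratio-preserving form, not the paper's exact CCF.)"""
    ratio = rgb / np.maximum(L_in[..., None], 1e-8)
    return np.clip(ratio ** s * L_out[..., None], 0.0, None)
```

In the paper's pipeline, `L_out` would come from compressing the bilateral-filter base layer while keeping the detail layer, and `s` would be set locally rather than globally.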

Proceedings ArticleDOI
01 Nov 2010
TL;DR: Experimental results demonstrate that the proposed algorithm can remove image degradation caused by fog, clouds, smoke, and dust in digital imaging devices.
Abstract: In this paper, we present a weighted adaptive image defogging method that extracts features in the RGB color channels. We adaptively detect the atmospheric light through the fog using the dark channel prior computed in the YCbCr color space and generate a transmission map based on the detected atmospheric light. We then adaptively remove the fog by applying a color correction algorithm based on the features extracted in the RGB color channels. The proposed algorithm can overcome the problem of local color distortion, a known limitation of existing defogging techniques. Experimental results demonstrate that the proposed algorithm can remove image degradation caused by fog, clouds, smoke, and dust in digital imaging devices.
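The dark-channel-prior pipeline that this method builds on (dark channel, atmospheric light, transmission map, recovery) can be sketched in plain numpy; this is the generic He-et-al.-style procedure, not the paper's weighted adaptive variant, and the patch size and constants are conventional choices:

```python
import numpy as np

def dark_channel(img, patch=7):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.full_like(mins, np.inf)
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + mins.shape[0],
                                         dx:dx + mins.shape[1]])
    return out

def defog(img, omega=0.95, t0=0.1, patch=7):
    """Dehaze img (H, W, 3 floats in [0, 1]) with the dark channel prior."""
    dark = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission map from the haze image model I = J*t + A*(1 - t).
    t = 1.0 - omega * dark_channel(img / A, patch)
    J = (img - A) / np.maximum(t, t0)[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

The paper's contribution sits on top of this skeleton: the atmospheric light is detected adaptively via a YCbCr-domain dark channel, and an RGB feature-based color correction replaces the fixed recovery step.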

Proceedings ArticleDOI
19 Jul 2010
TL;DR: Results show that the method employing inter-channel traces can distinguish between sophisticated demosaicking algorithms and can complement existing classifiers based on inter-pixel correlations by providing a new feature dimension.
Abstract: Digital image forensics seeks to detect statistical traces left by image acquisition or post-processing in order to establish an image's source and authenticity. Digital cameras acquire an image with one sensor overlaid with a color filter array (CFA), capturing at each spatial location one sample from the three necessary color channels. The missing pixels must be interpolated in a process known as demosaicking. This process is highly nonlinear and can vary greatly between camera brands and models. Most practical algorithms, however, introduce correlations between the color channels, which often differ between algorithms. In this paper, we show how these correlations can be used to construct a characteristic map that is useful in matching an image to its source. Results show that our method employing inter-channel traces can distinguish between sophisticated demosaicking algorithms. It can complement existing classifiers based on inter-pixel correlations by providing a new feature dimension.
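One simple way to expose such inter-channel coupling is to correlate high-pass residuals of the color-difference planes R−G and B−G; demosaicking ties the channels together, so this correlation varies with the interpolation algorithm. This single scalar is an illustrative feature, not the paper's characteristic map:

```python
import numpy as np

def interchannel_feature(img):
    """Correlation between Laplacian high-pass residuals of the R-G and
    B-G color-difference planes of an (H, W, 3) image. The high-pass step
    suppresses scene content so that interpolation traces dominate."""
    rg = img[..., 0].astype(float) - img[..., 1]
    bg = img[..., 2].astype(float) - img[..., 1]

    def hp(x):  # discrete Laplacian, valid region only
        return (4 * x[1:-1, 1:-1] - x[:-2, 1:-1] - x[2:, 1:-1]
                - x[1:-1, :-2] - x[1:-1, 2:])

    a, b = hp(rg).ravel(), hp(bg).ravel()
    return float(np.corrcoef(a, b)[0, 1])
```

A source classifier would collect many such statistics (per channel pair, per CFA phase) into a feature vector and train on images from known demosaicking algorithms.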