
Showing papers on "Grayscale published in 2012"


Journal ArticleDOI
TL;DR: Linear-time algorithms are described for solving a class of problems that involve transforming a cost function on a grid using spatial information, generalizing classical distance transforms of binary images by replacing the binary image with an arbitrary function on the grid.
Abstract: We describe linear-time algorithms for solving a class of problems that involve transforming a cost function on a grid using spatial information. These problems can be viewed as a generalization of classical distance transforms of binary images, where the binary image is replaced by an arbitrary function on a grid. Alternatively they can be viewed in terms of the minimum convolution of two functions, which is an important operation in grayscale morphology. A consequence of our techniques is a simple and fast method for computing the Euclidean distance transform of a binary image. Our algorithms are also applicable to Viterbi decoding, belief propagation, and optimal control.
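The core of the method is a one-dimensional transform computed from the lower envelope of parabolas, applied along each grid dimension in turn. A minimal Python sketch of that 1-D step, computing d[p] = min_q ((p - q)^2 + f[q]) in linear time (variable names are ours, not the paper's):

```python
import numpy as np

def dt1d(f):
    """1-D transform d[p] = min_q ((p - q)**2 + f[q]), computed in O(n)
    by maintaining the lower envelope of the parabolas rooted at (q, f[q])."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    d = np.zeros(n)
    v = np.zeros(n, dtype=int)      # grid locations of envelope parabolas
    z = np.zeros(n + 1)             # boundaries between envelope parabolas
    k = 0
    z[0], z[1] = -np.inf, np.inf
    for q in range(1, n):
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2.0 * (q - v[k]))
        while s <= z[k]:            # new parabola hides the previous one
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2.0 * (q - v[k]))
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, np.inf
    k = 0
    for p in range(n):
        while z[k + 1] < p:         # advance to the parabola covering p
            k += 1
        d[p] = (p - v[k]) ** 2 + f[v[k]]
    return d
```

Running dt1d over the rows and then the columns of an array that is 0 on object pixels and np.inf elsewhere yields the squared Euclidean distance transform of a binary image, as the abstract describes.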

925 citations


Journal ArticleDOI
01 May 2012
TL;DR: Experimental results and security analysis show that the scheme not only achieves a good encryption result but also has a key space large enough to resist common attacks.
Abstract: This paper proposes a novel confusion and diffusion method for image encryption. One innovation is to confuse the pixels by transforming each nucleotide into its base pair a random number of times; the other is to generate new keys from the plain image and the common keys, which makes the initial conditions of the chaotic maps change automatically in every encryption process. For an original grayscale image of any size, the rows and columns are first permuted by arrays generated with a piecewise linear chaotic map (PWLCM); each pixel of the original image is then encoded into four nucleotides by deoxyribonucleic acid (DNA) coding, and each nucleotide is transformed into its base pair a random number of times using the complementary rule, where the number of iterations is generated by a Chebyshev map. Experimental results and security analysis show that the scheme not only achieves a good encryption result but also has a key space large enough to resist common attacks.
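As a rough illustration of the confusion stage, a standard PWLCM can drive the row/column permutations via argsort of its orbit. This is only a sketch under common conventions; the paper's exact parameterization, DNA coding, and Chebyshev-driven complementing are not shown, and the seed values below are hypothetical keys.

```python
import numpy as np

def pwlcm_step(x, p):
    """One iteration of the piecewise linear chaotic map with control
    parameter p in (0, 0.5); the map is symmetric about x = 0.5."""
    if x >= 0.5:
        x = 1.0 - x
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def permute_rows_cols(img, x0=0.3456, p=0.278):
    """Permute rows and columns by argsort of PWLCM sequences, mimicking
    the confusion stage described above (x0 and p stand in for key values)."""
    h, w = img.shape
    xs = np.empty(h + w)
    x = x0
    for i in range(h + w):
        x = pwlcm_step(x, p)
        xs[i] = x
    rows = np.argsort(xs[:h])       # row permutation from the chaotic orbit
    cols = np.argsort(xs[h:])       # column permutation likewise
    return img[rows][:, cols]
```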

523 citations


Journal ArticleDOI
10 Jan 2012-PLOS ONE
TL;DR: Thirteen color-to-grayscale algorithms are tested within a modern descriptor-based image recognition framework; a simple method is identified that generally works best for face and object recognition, along with two that work well for recognizing textures.
Abstract: In image recognition it is often assumed the method used to convert color images to grayscale has little impact on recognition performance. We compare thirteen different grayscale algorithms with four types of image descriptors and demonstrate that this assumption is wrong: not all color-to-grayscale algorithms work equally well, even when using descriptors that are robust to changes in illumination. These methods are tested using a modern descriptor-based image recognition framework, on face, object, and texture datasets, with relatively few training instances. We identify a simple method that generally works best for face and object recognition, and two that work well for recognizing textures.
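For reference, three of the kinds of color-to-grayscale mappings such a comparison covers fit in a few lines of Python. The channel weights follow the common Rec. 601 convention, and the method names are illustrative rather than the paper's exact taxonomy.

```python
import numpy as np

def to_gray(img, method="luminance"):
    """Convert an RGB image (H x W x 3) to grayscale by one of several
    common mappings."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    if method == "average":        # "intensity": mean of the three channels
        return (r + g + b) / 3.0
    if method == "luminance":      # Rec. 601 weighted sum ("luma")
        return 0.299 * r + 0.587 * g + 0.114 * b
    if method == "lightness":      # mean of the channel extrema (HSL "L")
        mx = np.maximum(np.maximum(r, g), b)
        mn = np.minimum(np.minimum(r, g), b)
        return (mx + mn) / 2.0
    raise ValueError(method)
```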

278 citations


Proceedings ArticleDOI
16 Jun 2012
TL;DR: The model, which the authors call SAIFS (shape, albedo, and illumination from shading), produces reasonable results on arbitrary grayscale images taken in the real world, and outperforms all previous grayscale "intrinsic image"-style algorithms on the MIT Intrinsic Images dataset.
Abstract: We address the problem of recovering shape, albedo, and illumination from a single grayscale image of an object, using shading as our primary cue. Because this problem is fundamentally underconstrained, we construct statistical models of albedo and shape, and define an optimization problem that searches for the most likely explanation of a single image. We present two priors on albedo which encourage local smoothness and global sparsity, and three priors on shape which encourage flatness, outward-facing orientation at the occluding contour, and local smoothness. We present an optimization technique for using these priors to recover shape, albedo, and a spherical harmonic model of illumination. Our model, which we call SAIFS (shape, albedo, and illumination from shading) produces reasonable results on arbitrary grayscale images taken in the real world, and outperforms all previous grayscale “intrinsic image” — style algorithms on the MIT Intrinsic Images dataset.

181 citations


Book ChapterDOI
07 Oct 2012
TL;DR: The proposed implementation demonstrates that the bilateral filter can be as efficient as the recent edge-preserving filtering methods, especially for high-dimensional images, and derives a new filter named gradient domain bilateral filter from the proposed recursive implementation.
Abstract: This paper proposes a recursive implementation of the bilateral filter. Unlike previous methods, this implementation yields a bilateral filter whose computational complexity is linear in both input size and dimensionality: letting N be the number of pixels in the image and D the number of channels, the proposed implementation runs in O(ND). It demonstrates that the bilateral filter can be as efficient as recent edge-preserving filtering methods, especially for high-dimensional images, and it is more efficient than state-of-the-art bilateral filtering methods with computational complexity O(ND^2) [1] (linear in the image size but polynomial in dimensionality) or O(N log(N) D) [2] (linear in the dimensionality and thus faster than [1] for high-dimensional filtering). Specifically, the proposed implementation takes about 43 ms to process a one-megapixel color image (and about 14 ms to process a one-megapixel grayscale image), which is about 18× faster than [1] and 86× faster than [2]. The experiments were conducted on a MacBook Air laptop with a 1.8 GHz Intel Core i7 CPU and 4 GB of memory. The memory complexity of the proposed implementation is also low: only as much memory as the image itself is required (memory for the images before and after filtering is excluded). This paper also derives a new filter, named the gradient domain bilateral filter, from the proposed recursive implementation. Unlike the bilateral filter, it performs bilateral filtering on the gradient domain. It can be used for edge-preserving filtering but avoids the sharp edges that are observed to cause visible artifacts in some computer graphics tasks. The proposed implementations are shown to be effective for a number of computer vision and computer graphics applications, including stylization, tone mapping, detail enhancement, and stereo matching.
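To convey the flavor of constant-work-per-pixel recursive filtering, here is a hedged 1-D sketch: a causal and an anti-causal IIR pass whose feedback weight between neighboring samples combines a fixed spatial decay with the range kernel. It illustrates why the cost is O(N) per channel; it is not the paper's exact derivation.

```python
import numpy as np

def recursive_bilateral_1d(x, sigma_s, sigma_r):
    """1-D edge-aware recursive smoothing: two IIR passes with a feedback
    weight modulated by the range kernel between adjacent samples."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    alpha = np.exp(-1.0 / sigma_s)                             # spatial decay per step
    w = alpha * np.exp(-np.diff(x) ** 2 / (2 * sigma_r ** 2))  # edge-aware weights
    y = np.empty(n); cy = np.empty(n)                          # causal pass + normalizer
    y[0], cy[0] = x[0], 1.0
    for i in range(1, n):
        y[i] = x[i] + w[i - 1] * y[i - 1]
        cy[i] = 1.0 + w[i - 1] * cy[i - 1]
    z = np.empty(n); cz = np.empty(n)                          # anti-causal pass
    z[-1], cz[-1] = x[-1], 1.0
    for i in range(n - 2, -1, -1):
        z[i] = x[i] + w[i] * z[i + 1]
        cz[i] = 1.0 + w[i] * cz[i + 1]
    return (y + z - x) / (cy + cz - 1.0)                       # x[i] was counted twice
```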

154 citations


Journal ArticleDOI
TL;DR: Compared with grayscale texture features, the proposed color local texture features are able to provide excellent recognition rates for face images taken under severe variation in illumination, as well as for small- (low-) resolution face images.
Abstract: This paper proposes new color local texture features, i.e., color local Gabor wavelets (CLGWs) and color local binary pattern (CLBP), for the purpose of face recognition (FR). The proposed color local texture features are able to exploit the discriminative information derived from spatiochromatic texture patterns of different spectral channels within a certain local face region. Furthermore, in order to maximize the complementary effect of using both color and texture information, the opponent color texture features that capture the texture patterns of spatial interactions between spectral channels are also incorporated into the generation of CLGW and CLBP. In addition, to perform the final classification, multiple color local texture features (each corresponding to the associated color band) are combined within a feature-level fusion framework. Extensive and comparative experiments have been conducted to evaluate our color local texture features for FR on five public face databases, i.e., CMU-PIE, Color FERET, XM2VTSDB, SCface, and FRGC 2.0. Experimental results show that FR approaches using color local texture features impressively yield better recognition rates than FR approaches using only color or texture information. Particularly, compared with grayscale texture features, the proposed color local texture features are able to provide excellent recognition rates for face images taken under severe variation in illumination, as well as for small- (low-) resolution face images. In addition, the feasibility of our color local texture features has been successfully demonstrated by making comparisons with other state-of-the-art color FR methods.
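As background, the single-band building block of such descriptors, the plain 8-neighbor LBP, looks like this in Python; the paper's CLBP additionally applies operators per spectral band and across band pairs, which this sketch omits.

```python
import numpy as np

def lbp_8_1(img):
    """8-neighbor, radius-1 local binary pattern codes for one channel:
    each of the 8 neighbors contributes one bit, set when the neighbor is
    at least as bright as the center pixel."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for k, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbor >= center).astype(np.uint8) << k
    return code
```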

153 citations


Journal ArticleDOI
TL;DR: This paper analytically studies the proposed variational model for image denoising and justifies why the model can preserve object corners and image contrasts.
Abstract: We propose a new variational model for image denoising, which employs the $L^{1}$-norm of the mean curvature of the image surface $(x,f(x))$ of a given image $f:\Omega\rightarrow\mathbb{R}$. Besides eliminating noise and preserving edges of objects efficiently, our model can keep corners of objects and greyscale intensity contrasts of images and also remove the staircase effect. In this paper, we analytically study the proposed model and justify why our model can preserve object corners and image contrasts. We apply the proposed model to the denoising of curves and plane images, and also compare the results with those obtained by using the classical Rudin-Osher-Fatemi model [Phys. D, 60 (1992), pp. 259-268].
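In symbols, a plausible form of the energy being minimized, with $f_{0}$ the noisy input and $\lambda$ a fidelity weight (the paper's exact formulation and discretization may differ), is

$E(f)=\int_{\Omega}\Bigl|\nabla\cdot\frac{\nabla f}{\sqrt{1+|\nabla f|^{2}}}\Bigr|\,dx+\frac{\lambda}{2}\int_{\Omega}\bigl(f-f_{0}\bigr)^{2}\,dx,$

where the first term is the $L^{1}$-norm of the mean curvature of the image surface $(x,f(x))$, replacing the total-variation term of the Rudin-Osher-Fatemi model mentioned in the abstract.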

146 citations


Proceedings ArticleDOI
23 Mar 2012
TL;DR: A method of weakening the sky region is proposed to improve the adaptability of the algorithm; the approach effectively restores the contrast and color of the scene and significantly improves the visual quality of the image.
Abstract: Under fog and haze conditions, captured images become blurred and their colors shift toward gray and white due to atmospheric scattering. This causes considerable inconvenience for video surveillance systems, so the study of defogging algorithms for such weather is of great importance. This paper analyzes the physical process of imaging in foggy weather. After a thorough study of single-image haze removal algorithms from the last decade, we propose a fast haze removal algorithm based on fast bilateral filtering combined with the dark channel prior. The algorithm starts from the atmospheric scattering model, derives an estimated transmission map using the dark channel prior, and then combines it with the grayscale image to extract a refined transmission map using the fast bilateral filter. The algorithm executes quickly and greatly improves on the original, more time-consuming algorithm. On this basis, we analyze why images appear dim after haze removal with the dark channel prior and propose an improved transmission map formula. Images with large sky areas are prone to distortion under the dark channel prior, so we also propose a method of weakening the sky region to improve the adaptability of the algorithm. Experimental results show that the algorithm is feasible: it effectively restores the contrast and color of the scene and significantly improves the visual quality of the image.
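The dark-channel step that the paper builds on is compact enough to sketch. This follows the common formulation (He et al.); the bilateral refinement and the sky-weakening step proposed above are omitted, and the parameter values are conventional defaults rather than the paper's.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of an RGB image in [0, 1]: per-pixel minimum over the
    color channels, followed by a local minimum filter over a patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, airlight, omega=0.95, patch=15):
    """Coarse transmission map t(x) = 1 - omega * dark_channel(I / A),
    where A is the estimated atmospheric light (one value per channel).
    The paper then refines t with a fast bilateral filter (not shown)."""
    return 1.0 - omega * dark_channel(img / airlight, patch)
```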

137 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed LCVBP feature is able to yield excellent FR performance for challenging face images; its effectiveness has been successfully demonstrated through comparison with other state-of-the-art face descriptors.
Abstract: This paper proposes a novel face descriptor based on color information, the so-called local color vector binary patterns (LCVBPs), for face recognition (FR). The proposed LCVBP consists of two discriminative patterns: color norm patterns and color angular patterns. In particular, we have designed a method for extracting color angular patterns, which enables encoding of the discriminating texture patterns derived from spatial interactions among different spectral-band images. In order to perform FR tasks, the proposed LCVBP feature is generated by combining multiple features extracted from both color norm patterns and color angular patterns. Extensive and comparative experiments have been conducted to evaluate the proposed LCVBP feature on five public databases. Experimental results show that the proposed LCVBP feature is able to yield excellent FR performance for challenging face images. In addition, the effectiveness of the proposed LCVBP feature has successfully been demonstrated through comparison with other state-of-the-art face descriptors.

133 citations


Book ChapterDOI
05 Nov 2012
TL;DR: A novel, efficient LBP-based descriptor, Gradient-LBP (G-LBP), specialized to encode facial depth information, is proposed; inspired by 3DLBP yet resolving its inherent drawbacks, it is applied to the gender recognition task and shows its superiority to 3DLBP in all experimental setups.
Abstract: RGB-D is a powerful source of data providing aligned depth information, which has great potential for improving the performance of various problems in image understanding, while Local Binary Patterns (LBP) have shown excellent results in representing faces. In this paper, we propose a novel efficient LBP-based descriptor, namely Gradient-LBP (G-LBP), specialized to encode facial depth information; it is inspired by 3DLBP yet resolves its inherent drawbacks. The proposed descriptor is applied to the gender recognition task and shows its superiority to 3DLBP in all experimental setups on both Kinect and range scanner databases. Furthermore, a weighted combination scheme of the proposed descriptor for depth images and the state-of-the-art LBP^u2 for grayscale images is proposed and evaluated for gender recognition. The result reinforces the effectiveness of the proposed descriptor in complementing the information from luminous intensity. All experiments are carried out both on a high-quality 3D range scanner database (Texas 3DFR) and on lower-quality images obtained from Kinect (EURECOM Kinect Face Dataset) to show the consistency of the performance on different sources of RGB-D data.

125 citations


Journal ArticleDOI
TL;DR: The experimental results demonstrate that the watermarks generated with the proposed algorithm are invisible and that the quality of the watermarked and recovered images is improved.
Abstract: We have implemented a robust image watermarking technique for copyright protection based on a 3-level discrete wavelet transform (DWT). In this technique a multi-bit watermark is embedded into the low-frequency sub-band of a cover image using an alpha blending technique. The insertion and extraction of the watermark in the grayscale cover image is found to be simpler than with other transform techniques. The proposed method is compared with 1-level and 2-level DWT-based image watermarking methods using statistical parameters such as peak signal-to-noise ratio (PSNR) and mean square error (MSE). The experimental results demonstrate that the watermarks generated with the proposed algorithm are invisible and that the quality of the watermarked and recovered images is improved.
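The two quality metrics named above are standard and easy to state in Python. (Alpha-blending embedding itself is typically of the form LL' = k1*LL + k2*W on the 3-level DWT approximation sub-band; that step is not shown, and the formulas below are the common definitions rather than anything paper-specific.)

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images of equal shape."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images; higher means
    the watermarked (or recovered) image is closer to the original."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak * peak / m)
```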

Book ChapterDOI
07 Oct 2012
TL;DR: A ground-truth dataset for symmetry detection in natural images is created and made publicly available, and supervised learning is used to combine multiple complementary cues, with MIL employed to accommodate the unknown scale and orientation of the symmetric structures.
Abstract: In this work we propose a learning-based approach to symmetry detection in natural images. We focus on ribbon-like structures, i.e. contours marking local and approximate reflection symmetry and make three contributions to improve their detection. First, we create and make publicly available a ground-truth dataset for this task by building on the Berkeley Segmentation Dataset. Second, we extract features representing multiple complementary cues, such as grayscale structure, color, texture, and spectral clustering information. Third, we use supervised learning to learn how to combine these cues, and employ MIL to accommodate the unknown scale and orientation of the symmetric structures. We systematically evaluate the performance contribution of each individual component in our pipeline, and demonstrate that overall we consistently improve upon results obtained using existing alternatives.

Journal ArticleDOI
TL;DR: A new algorithm for medical image retrieval is presented which shows a significant improvement in terms of the evaluation measures as compared to LBP and LBP with Gabor transform.
Abstract: A new algorithm for medical image retrieval is presented in this paper. An 8-bit grayscale image is divided into eight binary bit-planes, and then a binary wavelet transform (BWT), which is similar to the lifting scheme in the real wavelet transform (RWT), is performed on each bit-plane to extract multi-resolution binary images. Local binary pattern (LBP) features are extracted from the resultant BWT sub-bands. Three experiments have been carried out to prove the effectiveness of the proposed algorithm: two for medical image retrieval and one for face retrieval. The databases considered for the three experiments are the OASIS magnetic resonance imaging (MRI) database, the NEMA computed tomography (CT) database, and the PolyU-NIRFD face database. The results show a significant improvement in terms of the evaluation measures as compared to LBP and LBP with Gabor transform.
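The first stage of the pipeline, bit-plane slicing, is a one-liner in Python; the BWT and LBP stages that follow in the paper are not shown here.

```python
import numpy as np

def bit_planes(img):
    """Decompose an 8-bit grayscale image into its eight binary bit-planes,
    ordered from least to most significant bit."""
    return [((img >> k) & 1).astype(np.uint8) for k in range(8)]
```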

Patent
10 Jul 2012
TL;DR: A data decoding system can comprise a client computer, including an imaging device, and one or more servers executing at least one decoding process; the client computer can be configured to acquire an image of decodable indicia and to process the acquired image by: (i) identifying areas of interest within the image; (ii) cropping the image based on the identified areas of interest; (iii) clipping one or more images from the image; and (iv) increasing or reducing the pixel resolution of at least part of the image.
Abstract: A data decoding system can comprise a client computer including an imaging device and one or more servers executing at least one decoding process. The client computer can be configured to acquire an image of decodable indicia and to process the acquired image by: (i) identifying one or more areas of interest within the image; (ii) cropping the image based on the identified areas of interest; (iii) clipping one or more images from the image based on the identified areas of interest; (iv) increasing or reducing a pixel resolution of at least part of the image; (v) converting the image to a grayscale image or to a monochrome image; and/or (vi) compressing the image using a compression algorithm. The decoding process can be configured, responsive to receiving a decoding request comprising the processed image, to decode the decodable indicia and to transmit the decoding operation result to the client computer.

Journal ArticleDOI
Jun Jin1
TL;DR: Results on grayscale and color images show that the proposed image encryption method satisfies the properties of confusion and diffusion, achieves good execution speed, and conceals information effectively.

Journal ArticleDOI
TL;DR: An interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system is demonstrated, and a "color diffusion" phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest but also in neighboring regions.

Abstract: X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid "true-color" micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, where the reconstructed global grayscale image from the conventional imaging chain serves as the initial guess. Principal component analysis is used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a "color diffusion" phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest but also in neighboring regions. It appears that harnessing this phenomenon could reduce the required color detector size for a given ROI, further reducing system cost and radiation dose.

Proceedings ArticleDOI
28 Apr 2012
TL;DR: The main contribution is to alleviate the strict order constraint for color mapping based on the human visual system, which enables the employment of a bimodal distribution to constrain spatial pixel differences and allows automatic selection of suitable gray scales in order to preserve the original contrast.
Abstract: Decolorization - the process of transforming a color image to a grayscale one - is a basic tool in digital printing, stylized black-and-white photography, and many single-channel image processing applications. In this paper, we propose an optimization approach aiming at maximally preserving the original color contrast. Our main contribution is to alleviate the strict order constraint for color mapping, based on the human visual system, which enables the employment of a bimodal distribution to constrain spatial pixel differences and allows automatic selection of suitable gray scales in order to preserve the original contrast. Both quantitative and qualitative evaluations bear out the effectiveness of the proposed method.

Journal ArticleDOI
Caifeng Shan1
TL;DR: This paper presents an efficient approach to smile detection, in which the intensity differences between pixels in the grayscale face images are used as features and AdaBoost is adopted to choose and combine weak classifiers based on intensity differences to form a strong classifier.
Abstract: Smile detection in face images captured in unconstrained real-world scenarios is an interesting problem with many potential applications. This paper presents an efficient approach to smile detection, in which the intensity differences between pixels in the grayscale face images are used as features. We adopt AdaBoost to choose and combine weak classifiers based on intensity differences to form a strong classifier. Experiments show that our approach has similar accuracy to the state-of-the-art method but is significantly faster. Our approach provides 85% accuracy by examining 20 pairs of pixels and 88% accuracy with 100 pairs of pixels. We match the accuracy of the Gabor-feature-based support vector machine using as few as 350 pairs of pixels.
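A sketch of the feature extraction this describes: each feature is the intensity difference between one pair of pixels, and AdaBoost then selects and combines weak classifiers thresholding these differences. The pair list here is a hypothetical input standing in for the coordinates AdaBoost would select.

```python
import numpy as np

def pixel_difference_features(face, pairs):
    """Compute intensity-difference features for a grayscale face image.
    `pairs` is a list of ((y1, x1), (y2, x2)) pixel coordinates; each
    feature is face[y1, x1] - face[y2, x2]."""
    f = face.astype(np.int16)   # widen dtype so differences can be negative
    return np.array([f[y1, x1] - f[y2, x2] for (y1, x1), (y2, x2) in pairs])
```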

Journal ArticleDOI
21 May 2012-Small
TL;DR: The development of a lithography system using a digital mirror device is reported, which allows fast patterning of proteins by immobilizing fluorescently labeled molecules via photobleaching, enabling the rapid and inexpensive generation of protein patterns definable by any user-defined grayscale digital image on substrate areas in the mm² to cm² range.
Abstract: Protein patterns of different shapes and densities are useful tools for studies of cell behavior and to create biomaterials that induce specific cellular responses. Up to now the dominant techniques for creating protein patterns are mostly based on serial writing processes or require templates such as photomasks or elastomer stamps. Only a few of these techniques permit the creation of grayscale patterns. Herein, the development of a lithography system using a digital mirror device which allows fast patterning of proteins by immobilizing fluorescently labeled molecules via photobleaching is reported. Grayscale patterns of biotin with pixel sizes in the range of 2.5 μm are generated within 10 s of exposure on an area of about 5 mm². This maskless projection lithography method permits the rapid and inexpensive generation of protein patterns definable by any user-defined grayscale digital image on substrate areas in the mm² to cm² range.

Journal ArticleDOI
TL;DR: A single-channel asymmetric color image encryption scheme is proposed that uses an amplitude- and phase-truncation approach with interference of polarized wavefronts; it alleviates the alignment problem of interference, does not need iterative encoding, and offers multiple levels of security.
Abstract: A single-channel asymmetric color image encryption scheme is proposed that uses an amplitude- and phase-truncation approach with interference of polarized wavefronts. Instead of commonly used random phase masks, wavelength-dependent structured phase masks (SPM) are used in the fractional Fourier transform domain for image encoding. The primary color components bonded with different SPMs are combined into one grayscale image using convolution. We then apply the amplitude and phase truncation to the fractional spectrum, which helps generate unique decryption keys. The encrypted image bonded with a different SPM is then encoded into a polarization-selective diffractive optical element. The proposed scheme alleviates the alignment problem of interference, does not need iterative encoding, and offers multiple levels of security. The effect of a special attack on the proposed asymmetric cryptosystem has been studied. To measure the effectiveness of the proposed method, we calculated the mean square error between the original and the decrypted images. The computer simulation results support the proposed idea.

Book ChapterDOI
07 Oct 2012
TL;DR: A hierarchical non-linear spatio-chromatic operator yields spatial and chromatic opponent channels, mimicking processing in the primate visual cortex; the approach is shown to outperform standard grayscale/shape-based descriptors as well as alternative color processing schemes on several datasets.
Abstract: We describe a novel framework for the joint processing of color and shape information in natural images. A hierarchical non-linear spatio-chromatic operator yields spatial and chromatic opponent channels, which mimics processing in the primate visual cortex. We extend two popular object recognition systems (i.e., the Hmax hierarchical model of visual processing and a sift-based bag-of-words approach) to incorporate color information along with shape information. We further use the framework in combination with the gist algorithm for scene categorization as well as the Berkeley segmentation algorithm. In all cases, the proposed approach is shown to outperform standard grayscale/shape-based descriptors as well as alternative color processing schemes on several datasets.
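As a rough illustration of the chromatic-opponent stage, the standard single-opponent combinations can be computed as below. The weights follow the common opponent-color convention and are not necessarily the paper's exact filters; the spatial filtering and the hierarchical stages are omitted.

```python
import numpy as np

def opponent_channels(img):
    """Map an RGB image (H x W x 3, float) to the classic opponent triple:
    red-green, blue-yellow, and luminance channels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    o1 = (r - g) / np.sqrt(2.0)               # red-green opponency
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)     # blue-yellow opponency
    o3 = (r + g + b) / np.sqrt(3.0)           # luminance
    return np.stack([o1, o2, o3], axis=-1)
```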

Proceedings ArticleDOI
28 Mar 2012
TL;DR: A fuzzy grayscale image enhancement technique is proposed that maximizes the fuzzy measures contained in the image, modifying the membership function with a power-law transformation and a saturation operator; the method produced better-quality enhanced images and required less processing time than the other methods.
Abstract: This paper presents a fuzzy grayscale enhancement technique for low-contrast images. The degradation of a low-contrast image is mainly caused by inadequate lighting during image capture, which results in nonuniform illumination in the image. Most existing contrast enhancement techniques improve image quality without considering the nonuniform lighting in the image. The proposed fuzzy grayscale image enhancement technique works by maximizing the fuzzy measures contained in the image; the membership function is then modified to enhance the image using a power-law transformation and a saturation operator. The qualitative and quantitative performance of the proposed method is compared with that of other methods. The proposed method produced better-quality enhanced images and required less processing time than the other methods.
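A minimal sketch of the fuzzify / modify / defuzzify pipeline described above, using a plain power-law modification of the memberships; the paper's exact membership function and saturation operator may differ, and the gamma value is illustrative.

```python
import numpy as np

def fuzzy_enhance(img, gamma=0.75):
    """Fuzzy-style enhancement of an 8-bit grayscale image: map gray levels
    to [0, 1] memberships, apply a power-law transform (gamma < 1 lifts
    dark regions), then map back to the original gray-level range."""
    g_min, g_max = float(img.min()), float(img.max())
    mu = (img.astype(float) - g_min) / (g_max - g_min + 1e-9)  # fuzzification
    mu = np.clip(mu ** gamma, 0.0, 1.0)                        # power-law modification
    return (g_min + mu * (g_max - g_min)).astype(np.uint8)     # defuzzification
```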

Patent
06 Feb 2012
TL;DR: A voltage transfer function, a luminance transfer function, and transfer factors (for example, efficiency, critical point, and slope) between these two functions are provided; the correlation between an input grayscale voltage and output luminance is derived under changing conditions, and the input grayscale voltage is calibrated by the difference between measured luminance and target luminance using the transfer functions.
Abstract: The present invention provides a voltage transfer function, a luminance transfer function, and transfer factors (for example, efficiency, critical point, and slope) between these two functions; derives the correlation between an input grayscale voltage and output luminance under condition changes in all cases; and calibrates the input grayscale voltage by the difference between measured luminance and target luminance using the transfer functions. The invention can therefore respond to changes in conditions in all cases, and it increases the accuracy, ease, and generality of calibration compared to the existing lookup-table-based calibration scheme, by checking the actual measurement data and readjusting the transfer factors in each calibration stage. Moreover, the invention can increase the manufacturing yield by an average of 35% over the existing yield, significantly reducing manufacturing cost.

Patent
06 Dec 2012
TL;DR: In this article, a handheld imaging device has a data receiver that is configured to receive reference encoded image data, which includes reference code values, which are encoded by an external coding system, and the device-specific code values are configured to produce gray levels that are specific to the imaging device.
Abstract: A handheld imaging device has a data receiver that is configured to receive reference encoded image data. The data includes reference code values, which are encoded by an external coding system. The reference code values represent reference gray levels, which are being selected using a reference grayscale display function that is based on perceptual non-linearity of human vision adapted at different light levels to spatial frequencies. The imaging device also has a data converter that is configured to access a code mapping between the reference code values and device-specific code values of the imaging device. The device-specific code values are configured to produce gray levels that are specific to the imaging device. Based on the code mapping, the data converter is configured to transcode the reference encoded image data into device-specific image data, which is encoded with the device-specific code values.

Journal ArticleDOI
TL;DR: It is shown that this test maximizes the probability of detection as the image size becomes arbitrarily large and the quantization step vanishes, providing an asymptotic upper-bound for the detection of hidden bits based on the LSB replacement mechanism.
Abstract: This paper deals with the detection of hidden bits in the Least Significant Bit (LSB) plane of a natural image. The mean level and the covariance matrix of the image, considered as a quantized Gaussian random matrix, are unknown. An adaptive statistical test is designed such that its probability distribution is always independent of the unknown image parameters, while ensuring a high probability of hidden bits detection. This test is based on the likelihood ratio test except that the unknown parameters are replaced by estimates based on a local linear regression model. It is shown that this test maximizes the probability of detection as the image size becomes arbitrarily large and the quantization step vanishes. This provides an asymptotic upper-bound for the detection of hidden bits based on the LSB replacement mechanism. Numerical results on real natural images show the relevance of the method and the sharpness of the asymptotic expression for the probability of detection.
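For context, the embedding mechanism such a detector targets, LSB replacement, can be sketched in a few lines. This is the hiding side only, not the paper's adaptive test statistic, and the sequential embedding order is a simplification.

```python
import numpy as np

def lsb_replace(img, bits):
    """LSB replacement: overwrite the least significant bit of the first
    len(bits) pixels with the message bits (bits is an array of 0s and 1s)."""
    flat = img.flatten()                                # flatten() returns a copy
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits  # clear LSB, then set it
    return flat.reshape(img.shape)
```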

Journal ArticleDOI
TL;DR: A framework is proposed to detect the background in images characterized by poor contrast, and image enhancement is carried out by two methods based on the Weber's-law notion: the first employs information from image background analysis by blocks, while the second uses the morphological opening and closing operations to define multi-background grayscale images.
Abstract: This paper deals with the enhancement of images with poor contrast and the detection of background. It proposes a framework to detect the background in images characterized by poor contrast. Image enhancement is carried out by two methods based on the Weber's-law notion. The first method employs information from image background analysis by blocks, while the second uses morphological transformations, the opening and closing operations, to define multi-background grayscale images. The complete image processing is done using a MATLAB simulation model. The paper covers morphological transformations and Weber's law; approximation of the image background by means of block analysis in conjunction with transformations that enhance images with poor lighting; and the multi-background notion introduced by means of opening by reconstruction. A comparison among several techniques to improve contrast in images is shown, and conclusions are presented.

Journal ArticleDOI
TL;DR: Good experimental results prove the effectiveness of the proposed blind authentication method for grayscale document images via the use of the Portable Network Graphics (PNG) image, and measures for protecting the security of the data hidden in the alpha channel are proposed.
Abstract: A new blind authentication method based on the secret sharing technique with a data repair capability for grayscale document images via the use of the Portable Network Graphics (PNG) image is proposed. An authentication signal is generated for each block of a grayscale document image, which, together with the binarized block content, is transformed into several shares using the Shamir secret sharing scheme. The involved parameters are carefully chosen so that as many shares as possible are generated and embedded into an alpha channel plane. The alpha channel plane is then combined with the original grayscale image to form a PNG image. During the embedding process, the computed share values are mapped into a range of alpha channel values near their maximum value of 255 to yield a transparent stego-image with a disguise effect. In the process of image authentication, an image block is marked as tampered if the authentication signal computed from the current block content does not match that extracted from the shares embedded in the alpha channel plane. Data repairing is then applied to each tampered block by a reverse Shamir scheme after collecting two shares from unmarked blocks. Measures for protecting the security of the data hidden in the alpha channel are also proposed. Good experimental results prove the effectiveness of the proposed method for real applications.
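The sharing primitive at the core of the scheme is textbook Shamir secret sharing over a prime field; here is a minimal sketch. The prime, the per-block signal construction, and the mapping of share values into alpha values near 255 are illustrative, not the paper's exact parameters.

```python
import random

def shamir_shares(secret, k, n, prime=257):
    """(k, n) Shamir sharing over GF(prime): hide `secret` as the constant
    term of a random degree-(k-1) polynomial and evaluate it at n distinct
    nonzero points; any k shares suffice to reconstruct the secret."""
    coeffs = [secret % prime] + [random.randrange(prime) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime)
            for x in range(1, n + 1)]
```

Reconstruction from any k shares proceeds by Lagrange interpolation at zero, which is what the reverse Shamir scheme uses to repair tampered blocks from shares collected out of unmarked blocks.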

Journal ArticleDOI
TL;DR: A novel class of active contour models for image segmentation is introduced that makes use of nonlocal comparisons between pairs of patches within each region to be segmented, and examples of efficient segmentation of natural color images are shown.
Abstract: This article introduces a novel class of active contour models for image segmentation. It makes use of nonlocal comparisons between pairs of patches within each region to be segmented. The corresponding variational segmentation problem is implemented using a level set formulation that can handle an arbitrary number of regions. The pairwise interaction of features constrains only the local homogeneity of image features, which is crucial in capturing regions with smoothly spatially varying features. This segmentation method is generic and can be adapted to various segmentation problems by designing an appropriate metric between patches. We instantiate this framework using several classes of features and metrics. Piecewise smooth grayscale and color images are handled using the L2 distance between image patches. We show examples of efficient segmentation of natural color images. Locally oriented textures are segmented using the L2 distance between patches of Gabor coefficients. We use a Wasserstein distance between local empirical distributions for locally homogeneous random textures. A correlation metric between local motion signatures is able to segment piecewise smooth optical flows.

Journal ArticleDOI
TL;DR: This paper presents a general overview of image watermarking and different security issues; the Least Significant Bit algorithm is used for embedding the message/logo into the image.
Abstract: In recent years, the internet revolution has resulted in explosive growth in multimedia applications. The rapid advancement of the internet has made it easier to send data/images to the destination faster and more accurately. At the same time, it is also easier to modify and misuse valuable information through hacking. Digital watermarking is one of the proposed solutions for copyright protection of multimedia data. A watermark is a form, image, or text impressed onto paper, which provides evidence of its authenticity. In this paper an invisible watermarking technique (least significant bit) and a visible watermarking technique are implemented. This paper presents a general overview of image watermarking and different security issues. Various attacks are also performed on watermarked images, and their impact on image quality is studied. In this paper, image watermarking using the Least Significant Bit (LSB) algorithm has been used for embedding the message/logo into the image. This work has been implemented through MATLAB. Keywords - Watermarking, Least Significant Bit (LSB), JPEG (Joint Photographic Experts Group), Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR)

Proceedings ArticleDOI
28 Nov 2012
TL;DR: A very fast and yet effective decolorization approach is proposed, whose effectiveness is borne out by a new quantitative metric as well as qualitative comparisons with state-of-the-art methods.
Abstract: Decolorization -- the process of transforming a color image to a grayscale one -- is a basic tool in digital printing, stylized black-and-white photography, and many single-channel image and video processing applications. While recent research focuses on retaining meaningful visual features and color contrast, less attention has been paid to the complexity of the conversion. Consequently, the resulting decolorization methods can be orders of magnitude slower than simple procedures, e.g., Matlab's built-in rgb2gray function, which can hamper their practical use. In this paper, we propose a very fast and yet effective decolorization approach. The effectiveness of the method is borne out by a new quantitative metric as well as qualitative comparisons with state-of-the-art methods.