
Showing papers on "Grayscale" published in 2011


Journal ArticleDOI
TL;DR: The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA array of fully autonomous pixels containing event-based change detection and pulse-width-modulation imaging circuitry, which ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level.
Abstract: The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA (304×240) array of fully autonomous pixels containing event-based change detection and pulse-width-modulation (PWM) imaging circuitry. Exposure measurements are initiated and carried out locally by the individual pixel that has detected a change of brightness in its field-of-view. Pixels do not rely on external timing signals and independently and asynchronously request access to an (asynchronous arbitrated) output channel when they have new grayscale values to communicate. Pixels that are not stimulated visually do not produce output. The visual information acquired from the scene, temporal contrast and grayscale data, are communicated in the form of asynchronous address-events (AER), with the grayscale values being encoded in inter-event intervals. The pixel-autonomous and massively parallel operation ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level. Compression factors depend on scene activity and peak at ~1000 for static scenes. Due to the time-based encoding of the illumination information, very high dynamic range - intra-scene DR of 143 dB static and 125 dB at 30 fps equivalent temporal resolution - is achieved. A novel time-domain correlated double sampling (TCDS) method yields array FPN of 56 dB (9.3 bit) for >10 Lx illuminance.
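
As a purely illustrative aside (not the sensor's actual readout path), the inter-event-interval encoding mentioned above can be mimicked in a few lines of Python: each pixel's grayscale value is taken to be inversely proportional to the time between its two exposure events, with the scaling and normalization below being assumptions made only for display.

import numpy as np

def decode_pwm_grayscale(t_first, t_second, eps=1e-9):
    # Toy decoder: assumes brightness is inversely proportional to the
    # integration time, i.e. the interval between the two exposure events.
    dt = np.maximum(np.asarray(t_second) - np.asarray(t_first), eps)
    gray = 1.0 / dt
    # Normalize to an 8-bit range purely for display (assumption).
    gray = 255.0 * (gray - gray.min()) / (gray.max() - gray.min() + eps)
    return gray.astype(np.uint8)
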

632 citations


Journal ArticleDOI
TL;DR: A modified decision based unsymmetrical trimmed median filter algorithm for the restoration of gray scale and color images that are highly corrupted by salt and pepper noise is proposed; it gives better Peak Signal-to-Noise Ratio (PSNR) and Image Enhancement Factor (IEF).
Abstract: A modified decision based unsymmetrical trimmed median filter algorithm for the restoration of gray scale, and color images that are highly corrupted by salt and pepper noise is proposed in this paper. The proposed algorithm replaces the noisy pixel by trimmed median value when other pixel values, 0's and 255's are present in the selected window and when all the pixel values are 0's and 255's then the noise pixel is replaced by mean value of all the elements present in the selected window. This proposed algorithm shows better results than the Standard Median Filter (MF), Decision Based Algorithm (DBA), Modified Decision Based Algorithm (MDBA), and Progressive Switched Median Filter (PSMF). The proposed algorithm is tested against different grayscale and color images and it gives better Peak Signal-to-Noise Ratio (PSNR) and Image Enhancement Factor (IEF).
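
A minimal Python sketch of the decision rule described above, assuming 8-bit images in which 0 and 255 mark salt-and-pepper corruption and a 3×3 window; the function name and loop structure are illustrative, not the authors' implementation.

import numpy as np

def mdbutmf(img, win=3):
    # Pixels equal to 0 or 255 are treated as noise. A noisy pixel is
    # replaced by the median of the non-noisy values in its window; if
    # the whole window is noisy, the mean of the window is used instead.
    pad = win // 2
    padded = np.pad(img, pad, mode='edge').astype(np.float64)
    out = img.astype(np.float64).copy()
    noisy = (img == 0) | (img == 255)
    for r, c in zip(*np.nonzero(noisy)):
        w = padded[r:r + win, c:c + win]
        clean = w[(w != 0) & (w != 255)]
        out[r, c] = np.median(clean) if clean.size else w.mean()
    return out.astype(img.dtype)
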

550 citations


Journal ArticleDOI
TL;DR: A bit-level permutation and a high-dimensional chaotic map are used to encrypt color images; security analysis shows that the scheme achieves good encryption results and that the key space is large enough to resist common attacks.

503 citations


Journal ArticleDOI
TL;DR: An algorithm for determining the Morse complex of a two or three-dimensional grayscale digital image that agrees with the digital image and has exactly the number and type of critical cells necessary to characterize the topological changes in the level sets is presented.
Abstract: We present an algorithm for determining the Morse complex of a two or three-dimensional grayscale digital image. Each cell in the Morse complex corresponds to a topological change in the level sets (i.e., a critical point) of the grayscale image. Since more than one critical point may be associated with a single image voxel, we model digital images by cubical complexes. A new homotopic algorithm is used to construct a discrete Morse function on the cubical complex that agrees with the digital image and has exactly the number and type of critical cells necessary to characterize the topological changes in the level sets. We make use of discrete Morse theory and simple homotopy theory to prove correctness of this algorithm. The resulting Morse complex is considerably simpler than the cubical complex originally used to represent the image and may be used to compute persistent homology.

275 citations


Journal ArticleDOI
TL;DR: The different techniques developed in image analysis are reviewed, showing the evolution of the information provided by the different methodologies, an evolution heavily pushed by the increasing complexity of image measurements in the spatial and, particularly, the spectral direction.

267 citations


Proceedings ArticleDOI
12 Dec 2011
TL;DR: A colorization system that leverages the rich image content on the internet; the user needs only to provide a semantic text label and segmentation cues for major foreground objects in the scene to obtain the desired result.
Abstract: Colorization of a grayscale photograph often requires considerable effort from the user, either by placing numerous color scribbles over the image to initialize a color propagation algorithm, or by looking for a suitable reference image from which color information can be transferred. Even with this user supplied data, colorized images may appear unnatural as a result of limited user skill or inaccurate transfer of colors. To address these problems, we propose a colorization system that leverages the rich image content on the internet. As input, the user needs only to provide a semantic text label and segmentation cues for major foreground objects in the scene. With this information, images are downloaded from photo sharing websites and filtered to obtain suitable reference images that are reliable for color transfer to the given grayscale photo. Different image colorizations are generated from the various reference images, and a graphical user interface is provided to easily select the desired result. Our experiments and user study demonstrate the greater effectiveness of this system in comparison to previous techniques.

257 citations


Journal ArticleDOI
TL;DR: A novel fingerprint reconstruction algorithm is proposed to reconstruct the phase image, which is then converted into the grayscale image, and it is shown that both types of attacks can be successfully launched against a fingerprint recognition system.
Abstract: Fingerprint matching systems generally use four types of representation schemes: grayscale image, phase image, skeleton image, and minutiae, among which minutiae-based representation is the most widely adopted one. The compactness of minutiae representation has created an impression that the minutiae template does not contain sufficient information to allow the reconstruction of the original grayscale fingerprint image. This belief has now been shown to be false; several algorithms have been proposed that can reconstruct fingerprint images from minutiae templates. These techniques try to either reconstruct the skeleton image, which is then converted into the grayscale image, or reconstruct the grayscale image directly from the minutiae template. However, they have a common drawback: Many spurious minutiae not included in the original minutiae template are generated in the reconstructed image. Moreover, some of these reconstruction techniques can only generate a partial fingerprint. In this paper, a novel fingerprint reconstruction algorithm is proposed to reconstruct the phase image, which is then converted into the grayscale image. The proposed reconstruction algorithm not only gives the whole fingerprint, but the reconstructed fingerprint contains very few spurious minutiae. Specifically, a fingerprint image is represented as a phase image which consists of the continuous phase and the spiral phase (which corresponds to minutiae). An algorithm is proposed to reconstruct the continuous phase from minutiae. The proposed reconstruction algorithm has been evaluated with respect to the success rates of type-I attack (match the reconstructed fingerprint against the original fingerprint) and type-II attack (match the reconstructed fingerprint against different impressions of the original fingerprint) using a commercial fingerprint recognition system. Given the reconstructed image from our algorithm, we show that both types of attacks can be successfully launched against a fingerprint recognition system.
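
The spiral-phase part of the representation described above can be sketched directly from its standard form, in which each minutia contributes a phase spiral of polarity ±1. The continuous phase of the ridge flow, which the paper reconstructs from minutiae, is omitted here, and the cosine rendering at the end is only an illustrative assumption.

import numpy as np

def spiral_phase(shape, minutiae):
    # Spiral-phase component of a fingerprint phase image (sketch).
    # minutiae: iterable of (x, y, polarity) with polarity in {+1, -1}.
    # Each minutia contributes a phase spiral atan2(y - y_i, x - x_i).
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    psi = np.zeros(shape, dtype=np.float64)
    for x0, y0, p in minutiae:
        psi += p * np.arctan2(ys - y0, xs - x0)
    return psi

# Toy rendering: grayscale ridges as the cosine of the phase (in the paper
# the full phase is the continuous phase plus this spiral phase).
img = np.cos(spiral_phase((256, 256), [(100, 120, +1), (180, 60, -1)]))
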

253 citations


Journal ArticleDOI
01 Mar 2011
TL;DR: A fully automated method for cell nuclei detection in Pap smear images that includes a priori knowledge about the circumference of each nucleus and the application of classification algorithms.
Abstract: In this paper, we present a fully automated method for cell nuclei detection in Pap smear images. The locations of the candidate nuclei centroids in the image are detected with morphological analysis and they are refined in a second step, which incorporates a priori knowledge about the circumference of each nucleus. The elimination of the undesirable artifacts is achieved in two steps: the application of a distance-dependent rule on the resulted centroids; and the application of classification algorithms. In our method, we have examined the performance of an unsupervised (fuzzy C-means) and a supervised (support vector machines) classification technique. In both classification techniques, the effect of the refinement step improves the performance of the clustering algorithm. The proposed method was evaluated using 38 cytological images of conventional Pap smears containing 5617 recognized squamous epithelial cells. The results are very promising, even in the case of images with high degree of cell overlapping.

209 citations


Journal ArticleDOI
TL;DR: A novel feature extraction method for sound event classification, based on the visual signature extracted from the sound's time-frequency representation, which shows a significant improvement over other methods in mismatched conditions, without the need for noise reduction.
Abstract: In this letter, we present a novel feature extraction method for sound event classification, based on the visual signature extracted from the sound's time-frequency representation. The motivation stems from the fact that spectrograms form recognisable images, that can be identified by a human reader, with perception enhanced by pseudo-coloration of the image. The signal processing in our method is as follows. 1) The spectrogram is normalised into greyscale with a fixed range. 2) The dynamic range is quantized into regions, each of which is then mapped to form a monochrome image. 3) The monochrome images are partitioned into blocks, and the distribution statistics in each block are extracted to form the feature. The robustness of the proposed method comes from the fact that the noise is normally more diffuse than the signal and therefore the effect of the noise is limited to a particular quantization region, leaving the other regions less changed. The method is tested on a database of 60 sound classes containing a mixture of collision, action and characteristic sounds and shows a significant improvement over other methods in mismatched conditions, without the need for noise reduction.
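
A rough sketch of steps 1-3 of the pipeline above, assuming SciPy's spectrogram, four quantization regions and an 8×8 block grid; all parameter choices and the per-block statistic are assumptions, not the paper's exact settings.

import numpy as np
from scipy.signal import spectrogram

def sif_features(signal, fs, n_regions=4, grid=(8, 8)):
    # 1) Normalise the (log) spectrogram into greyscale with a fixed range.
    _, _, S = spectrogram(signal, fs=fs)
    S = np.log(S + 1e-12)
    S = (S - S.min()) / (S.max() - S.min() + 1e-12)
    # 2) Quantize the dynamic range into regions, one monochrome image each.
    region = np.minimum((S * n_regions).astype(int), n_regions - 1)
    feats = []
    for r in range(n_regions):
        mono = (region == r).astype(np.float64)
        # 3) Partition each monochrome image into blocks and take a
        #    distribution statistic (here simply the mean) per block.
        for rows in np.array_split(mono, grid[0], axis=0):
            for block in np.array_split(rows, grid[1], axis=1):
                feats.append(block.mean())
    return np.asarray(feats)
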

196 citations


01 Jan 2011
TL;DR: A comparative analysis of different contrast enhancement techniques will be carried out on the basis of subjective and objective parameters.
Abstract: Various enhancement schemes are used for enhancing an image, including gray scale manipulation, filtering and Histogram Equalization (HE). Histogram equalization is one of the well-known image enhancement techniques. It became a popular technique for contrast enhancement because it is simple and effective. In many applications, preserving the input brightness of the image is required to avoid the generation of non-existing artifacts in the output image. Although these methods preserve the input brightness in the output image with a significant contrast enhancement, they may produce images that do not look as natural as the input ones. The basic idea of the HE method is to re-map the gray levels of an image. HE tends to introduce some annoying artifacts and unnatural enhancement. To overcome these drawbacks, different brightness preserving techniques are used, which are covered in the literature survey. A comparative analysis of different enhancement techniques will be carried out. This comparison will be done on the basis of subjective and objective parameters. Subjective parameters are visual quality and computation time, and objective parameters are Peak signal-to-noise ratio (PSNR), Mean squared error (MSE), Normalized Absolute Error (NAE), Normalized Correlation, Error Color and Composite Peak Signal to Noise Ratio (CPSNR).
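
For reference, the basic gray-level re-mapping that plain HE performs takes only a few lines; the sketch below is global HE, not any of the brightness-preserving variants compared in the survey.

import numpy as np

def histogram_equalize(img):
    # Global histogram equalization for an 8-bit grayscale image:
    # re-map gray levels through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    denom = max(cdf[-1] - cdf_min, 1)
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255.0), 0, 255).astype(np.uint8)
    return lut[img]
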

186 citations


Journal ArticleDOI
Miao Ma1, Jianhui Liang1, Min Guo1, Yi Fan1, Yilong Yin2 
01 Dec 2011
TL;DR: Experimental results indicate that the proposed fast SAR image segmentation method is superior to Genetic Algorithm based and Artificial Fish Swarm based segmentation methods in terms of segmentation accuracy and segmentation time.
Abstract: Due to the presence of speckle noise, segmentation of Synthetic Aperture Radar (SAR) images is still a challenging problem. This paper proposes a fast SAR image segmentation method based on Artificial Bee Colony (ABC) algorithm. In this method, threshold estimation is regarded as a search procedure that searches for an appropriate value in a continuous grayscale interval. Hence, ABC algorithm is introduced to search for the optimal threshold. In order to get an efficient fitness function for ABC algorithm, after the definition of grey number in Grey theory, the original image is decomposed by discrete wavelet transform. Then, a filtered image is produced by performing a noise reduction to the approximation image reconstructed with low-frequency coefficients. At the same time, a gradient image is reconstructed with some high-frequency coefficients. A co-occurrence matrix based on the filtered image and the gradient image is therefore constructed, and an improved two-dimensional grey entropy is defined to serve as the fitness function of ABC algorithm. Finally, by the swarm intelligence of employed bees, onlookers and scouts in honey bee colony, the optimal threshold is rapidly discovered. Experimental results indicate that the proposed method is superior to Genetic Algorithm (GA) based and Artificial Fish Swarm (AFS) based segmentation methods in terms of segmentation accuracy and segmentation time.
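
A simplified sketch of treating threshold estimation as a search over a continuous grayscale interval with a basic ABC loop. Here the fitness is ordinary between-class variance as a stand-in for the paper's two-dimensional grey entropy, and all ABC parameters (colony size, iteration count, abandonment limit) are assumptions.

import numpy as np

def fitness(img, t):
    # Stand-in fitness: between-class variance (the paper uses a 2-D grey entropy).
    fg, bg = img[img > t], img[img <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / img.size, bg.size / img.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

def abc_threshold(img, n_sources=10, iters=50, limit=10, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = float(img.min()), float(img.max())
    sources = rng.uniform(lo, hi, n_sources)          # candidate thresholds
    fits = np.array([fitness(img, t) for t in sources])
    trials = np.zeros(n_sources, dtype=int)
    for _ in range(iters):
        # Employed bees: local perturbation of each food source.
        for i in range(n_sources):
            k = rng.integers(n_sources)
            cand = np.clip(sources[i] + rng.uniform(-1, 1) * (sources[i] - sources[k]), lo, hi)
            f = fitness(img, cand)
            if f > fits[i]:
                sources[i], fits[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        # Onlooker bees: revisit sources with probability proportional to fitness.
        probs = fits / fits.sum() if fits.sum() > 0 else np.full(n_sources, 1 / n_sources)
        for _ in range(n_sources):
            i = rng.choice(n_sources, p=probs)
            k = rng.integers(n_sources)
            cand = np.clip(sources[i] + rng.uniform(-1, 1) * (sources[i] - sources[k]), lo, hi)
            f = fitness(img, cand)
            if f > fits[i]:
                sources[i], fits[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        # Scout bees: abandon exhausted sources and explore anew.
        for i in range(n_sources):
            if trials[i] > limit:
                sources[i] = rng.uniform(lo, hi)
                fits[i] = fitness(img, sources[i])
                trials[i] = 0
    return sources[np.argmax(fits)]
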

Journal ArticleDOI
TL;DR: A speckle-reduction technique employing a time-multiplexing method is proposed; each object point group is decomposed into multiple bit planes to represent the grayscale of object points, and binary holograms are generated from the bit-plane patterns by using a half-zone-plate technique.
Abstract: Speckle generation is an inherent problem of holography. A speckle-reduction technique employing a time-multiplexing method is proposed. Object points constituting a reconstructed image are divided into multiple object point groups consisting of sparse object points, and the object point groups are displayed time sequentially. The sparseness and temporal summation enable the suppression of speckle generation. The object point group is decomposed into multiple bit planes to represent the grayscale of object points, and binary holograms are generated from the bit plane patterns by using a half-zone plate technique. The binary holograms are displayed by a high-speed spatial light modulator.

Journal ArticleDOI
TL;DR: This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality.
Abstract: Color visual cryptography (VC) encrypts a color secret message into color halftone image shares. Previous methods in the literature show good results for black and white or gray scale VC schemes, however, they are not sufficient to be applied directly to color shares due to different color structures. Some methods for color visual cryptography are not satisfactory in terms of producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of original images throughout the color channels and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.

Journal ArticleDOI
TL;DR: A new spectral multiple-image fusion analysis based on the discrete cosine transform (DCT) and a specific spectral filtering method, combined with a compression procedure based on an adapted spectral quantization, provides a viable solution for simultaneous compression and encryption of multiple images.
Abstract: We report a new spectral multiple image fusion analysis based on the discrete cosine transform (DCT) and a specific spectral filtering method. In order to decrease the size of the multiplexed file, we suggest a compression procedure based on an adapted spectral quantization. Each frequency is encoded with an optimized number of bits according to its importance and its position in the DCT domain. This fusion and compression scheme constitutes a first level of encryption. A supplementary level of encryption is realized by making use of biometric information. We consider several implementations of this analysis by experimenting with sequences of gray scale images. To quantify the performance of our method we calculate the MSE (mean squared error) and the PSNR (peak signal to noise ratio). Our results consistently improve performance compared to the well-known JPEG image compression standard and provide a viable solution for simultaneous compression and encryption of multiple images.

Journal ArticleDOI
TL;DR: A robust 2D shape reconstruction and simplification algorithm which takes as input a defect-laden point set with noise and outliers and constructs the resulting simplicial complex through greedy decimation of a Delaunay triangulation of the input point set is proposed.
Abstract: We propose a robust 2D shape reconstruction and simplification algorithm which takes as input a defect-laden point set with noise and outliers. We introduce an optimal-transport driven approach where the input point set, considered as a sum of Dirac measures, is approximated by a simplicial complex considered as a sum of uniform measures on 0- and 1-simplices. A fine-to-coarse scheme is devised to construct the resulting simplicial complex through greedy decimation of a Delaunay triangulation of the input point set. Our method performs well on a variety of examples ranging from line drawings to grayscale images, with or without noise, features, and boundaries.

Journal ArticleDOI
TL;DR: A system for recognizing static gestures of alphabets in Persian sign language (PSL) using Wavelet transform and neural networks (NN); the system requires only images of the bare hand for recognition.
Abstract: This paper presents a system for recognizing static gestures of alphabets in Persian sign language (PSL) using Wavelet transform and neural networks (NN). The required images for the selected alphabets are obtained using a digital camera. The color images are cropped, resized, and converted to grayscale images. Then, the discrete wavelet transform (DWT) is applied on the gray scale images, and some features are extracted. Finally, the extracted features are used to train a Multi-Layered Perceptron (MLP) NN. Our recognition system does not use any gloves or visual marking systems. This system only requires the images of the bare hand for the recognition. The system is implemented and tested using a data set of 640 samples of Persian sign images; 20 images for each sign. Experimental results show that our system is able to recognize 32 selected PSL alphabets with an average classification accuracy of 94.06%.
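
A hedged sketch of the grayscale conversion, DWT feature extraction and MLP training steps, assuming PyWavelets and scikit-learn; the choice of wavelet, decomposition level and subband statistics is illustrative, not necessarily that of the paper.

import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def dwt_features(gray, wavelet='db1', level=2):
    # Simple statistics of the 2-D DWT subbands as a feature vector.
    coeffs = pywt.wavedec2(gray.astype(np.float64), wavelet, level=level)
    feats = [coeffs[0].mean(), coeffs[0].std()]
    for (cH, cV, cD) in coeffs[1:]:
        for band in (cH, cV, cD):
            feats.extend([np.abs(band).mean(), band.std()])
    return np.asarray(feats)

# Training on a labelled set of grayscale gesture images (hypothetical
# variables: images is a list of 2-D arrays, labels their sign classes).
# X = np.vstack([dwt_features(im) for im in images])
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000).fit(X, labels)
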

Journal ArticleDOI
TL;DR: A new type of computer art image called secret-fragment-visible mosaic image is proposed, which is created automatically by composing small fragments of a given image to become a target image in a mosaic form, achieving an effect of embedding the given image visibly but secretly in the resulting mosaic image.
Abstract: A new type of computer art image called secret-fragment-visible mosaic image is proposed, which is created automatically by composing small fragments of a given image to become a target image in a mosaic form, achieving an effect of embedding the given image visibly but secretly in the resulting mosaic image. This effect of information hiding is useful for covert communication or secure keeping of secret images. To create a mosaic image of this type from a given secret color image, the 3-D color space is transformed into a new 1-D colorscale, based on which a new image similarity measure is proposed for selecting from a database a target image that is the most similar to the given secret image. A fast greedy search algorithm is proposed to find a similar tile image in the secret image to fit into each block in the target image. The information of the tile image fitting sequence is embedded into randomly-selected pixels in the created mosaic image by a lossless LSB replacement scheme using a secret key; without the key, the secret image cannot be recovered. The proposed method, originally designed for dealing with color images, is also extended to create grayscale mosaic images which are useful for hiding text-type grayscale document images. An additional measure to enhance the embedded data security is also proposed. Good experimental results show the feasibility of the proposed method.
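
The key-controlled LSB embedding step can be sketched as below, assuming an 8-bit mosaic image. The actual scheme is lossless in the sense that it also records the original LSBs so the mosaic can be restored, which this toy version does not model.

import numpy as np

def embed_bits(img, bits, key):
    # Embed a bit string into the LSBs of key-selected pixels (sketch).
    # The pixel order is derived from the secret key; without the key the
    # embedded fitting information cannot be located or recovered.
    flat = img.ravel().copy()
    rng = np.random.default_rng(key)
    idx = rng.choice(flat.size, size=len(bits), replace=False)
    flat[idx] = (flat[idx] & ~np.uint8(1)) | np.array(bits, dtype=np.uint8)
    return flat.reshape(img.shape)

def extract_bits(img, n_bits, key):
    # Re-derive the same pixel positions from the key and read the LSBs.
    rng = np.random.default_rng(key)
    idx = rng.choice(img.size, size=n_bits, replace=False)
    return (img.ravel()[idx] & 1).astype(np.uint8)
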

Proceedings ArticleDOI
20 Jun 2011
TL;DR: This work presents a new approach to capture video at high spatial and spectral resolutions using a hybrid camera system that propagates the multispectral information into the RGB video to produce a video with both high spectral and spatial resolution.
Abstract: We present a new approach to capture video at high spatial and spectral resolutions using a hybrid camera system. Composed of an RGB video camera, a grayscale video camera and several optical elements, the hybrid camera system simultaneously records two video streams: an RGB video with high spatial resolution, and a multispectral video with low spatial resolution. After registration of the two video streams, our system propagates the multispectral information into the RGB video to produce a video with both high spectral and spatial resolution. This propagation between videos is guided by color similarity of pixels in the spectral domain, proximity in the spatial domain, and the consistent color of each scene point in the temporal domain. The propagation algorithm is designed for rapid computation to allow real-time video generation at the original frame rate, and can thus facilitate real-time video analysis tasks such as tracking and surveillance. Hardware implementation details and design tradeoffs are discussed. We evaluate the proposed system using both simulations with ground truth data and on real-world scenes. The utility of this high resolution multispectral video data is demonstrated in dynamic white balance adjustment and tracking.

Journal ArticleDOI
TL;DR: The effectiveness of the proposed automatic exact histogram specification technique in enhancing contrasts of images is demonstrated through qualitative analysis and the proposed image contrast measure based quantitative analysis.
Abstract: Histogram equalization, which aims at information maximization, is widely used in different ways to perform contrast enhancement in images. In this paper, an automatic exact histogram specification technique is proposed and used for global and local contrast enhancement of images. The desired histogram is obtained by first subjecting the image histogram to a modification process and then by maximizing a measure that represents increase in information and decrease in ambiguity. A new method of measuring image contrast based upon local band-limited approach and center-surround retinal receptive field model is also devised in this paper. This method works at multiple scales (frequency bands) and combines the contrast measures obtained at different scales using Lp-norm. In comparison to a few existing methods, the effectiveness of the proposed automatic exact histogram specification technique in enhancing contrasts of images is demonstrated through qualitative analysis and the proposed image contrast measure based quantitative analysis.
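
The core idea of exact histogram specification is to impose a strict ordering on the pixels and then assign gray levels so that the target histogram is matched exactly. The minimal sketch below breaks ties between equal gray values with a local mean, a common choice but an assumption here; the paper's histogram-modification and contrast-measurement steps are not shown.

import numpy as np
from scipy.ndimage import uniform_filter

def exact_histogram_specification(img, target_hist):
    # Map img so its histogram exactly matches target_hist, a length-256
    # integer array of pixel counts summing to img.size.
    aux = uniform_filter(img.astype(np.float64), size=3)
    # Strict pixel ordering: primary key gray value, secondary key local mean.
    order = np.lexsort((aux.ravel(), img.ravel()))
    levels = np.repeat(np.arange(256, dtype=np.uint8), target_hist)
    out = np.empty(img.size, dtype=np.uint8)
    out[order] = levels   # k-th ranked pixel receives the k-th target level
    return out.reshape(img.shape)
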

Journal ArticleDOI
TL;DR: The present work enhances the basic DUDE scheme by incorporating statistical modeling tools that have proven successful in addressing similar issues in lossless image compression; the resulting denoisers significantly surpass the state of the art in the case of salt and pepper (S&P) and M-ary symmetric noise, and perform well for Gaussian noise.
Abstract: We present an extension of the discrete universal denoiser DUDE, specialized for the denoising of grayscale images. The original DUDE is a low-complexity algorithm aimed at recovering discrete sequences corrupted by discrete memoryless noise of known statistical characteristics. It is universal, in the sense of asymptotically achieving, without access to any information on the statistics of the clean sequence, the same performance as the best denoiser that does have access to such information. The DUDE, however, is not effective on grayscale images of practical size. The difficulty lies in the fact that one of the DUDE's key components is the determination of conditional empirical probability distributions of image samples, given the sample values in their neighborhood. When the alphabet is relatively large (as is the case with grayscale images), even for a small-sized neighborhood, the required distributions would be estimated from a large collection of sparse statistics, resulting in poor estimates that would not enable effective denoising. The present work enhances the basic DUDE scheme by incorporating statistical modeling tools that have proven successful in addressing similar issues in lossless image compression. Instantiations of the enhanced framework, which is referred to as iDUDE, are described for examples of additive and nonadditive noise. The resulting denoisers significantly surpass the state of the art in the case of salt and pepper (S&P) and M-ary symmetric noise, and perform well for Gaussian noise.

Proceedings ArticleDOI
20 Jun 2011
TL;DR: Since the method accurately preserves the finest details while enhancing the chromatic contrast, the utility and versatility of the operator have been proved for several other challenging applications such as video decolorization, detail enhancement, single image dehazing and segmentation under different illuminants.
Abstract: This paper introduces an effective decolorization algorithm that preserves the appearance of the original color image. Guided by the original saliency, the method blends the luminance and the chrominance information in order to conserve the initial color disparity while enhancing the chromatic contrast. As a result, our straightforward fusing strategy generates a new spatial distribution that discriminates better the illuminated areas and color features. Since we do not employ quantization or a per-pixel optimization (computationally expensive), the algorithm has a linear runtime, and depending on the image resolution it could be used in real-time applications. Extensive experiments and a comprehensive evaluation against existing state-of-the-art methods demonstrate the potential of our grayscale operator. Furthermore, since the method accurately preserves the finest details while enhancing the chromatic contrast, the utility and versatility of our operator have been proved for several other challenging applications such as video decolorization, detail enhancement, single image dehazing and segmentation under different illuminants.

Book ChapterDOI
18 May 2011
TL;DR: This paper summarizes several iterations in the cat-and-mouse game between digital image forensics and counter-forensics related to an image's JPEG compression history, and presents an improved scheme which uses imputation to deal with cases that lack an estimate.
Abstract: This paper summarizes several iterations in the cat-and-mouse game between digital image forensics and counter-forensics related to an image's JPEG compression history. Building on the counter-forensics algorithm by Stamm et al. [1], we point out a vulnerability in this scheme when a maximum likelihood estimator has no solution. We construct a targeted detector against it, and present an improved scheme which uses imputation to deal with cases that lack an estimate. While this scheme is secure against our targeted detector, it is detectable by a further improved detector, which borrows from steganalysis and uses a calibrated feature. All claims are backed with experimental results from 2 × 800 never-compressed never-resampled grayscale images.

Journal ArticleDOI
TL;DR: The experimental results show that the proposed algorithms yield accurately segmented gray scale images in reasonable computation time, and also reveal that the improved fuzzy c-means achieves better segmentation than the other methods.
Abstract: Segmentation plays a significant role in computer vision. It aims at extracting meaningful objects from an image. Generally there is no unique method or approach for image segmentation. Clustering is a powerful technique that has been widely applied to image segmentation. Cluster analysis partitions an image data set into a number of disjoint groups or clusters. Clustering methods such as k-means, improved k-means, fuzzy c-means (FCM) and the improved fuzzy c-means algorithm (IFCM) have been proposed. K-means clustering is one of the popular methods because of its simplicity and computational efficiency; the improved k-means reduces the number of iterations compared to the conventional algorithm. The FCM algorithm offers additional flexibility by allowing pixels to belong to multiple classes with varying degrees of membership. The drawback of conventional FCM is that it is time consuming, which is overcome by the improved FCM. The experimental results show that the proposed algorithms yield accurately segmented gray scale images in reasonable computation time, and also reveal that the improved fuzzy c-means achieves better segmentation than the other methods. The quality of the segmented image is measured by statistical parameters: rand index (RI), global consistency error (GCE), variation of information (VOI) and boundary displacement error (BDE).
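
A minimal sketch of the k-means step on gray values; the FCM and improved variants compared in the paper are not shown, and the initialization and iteration count are assumptions.

import numpy as np

def kmeans_segment(gray, k=3, iters=20, seed=0):
    # Segment a grayscale image by k-means on intensity values (sketch).
    x = gray.astype(np.float64).ravel()
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels.reshape(gray.shape), centers
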

Journal ArticleDOI
TL;DR: Experimental results show that the proposed background subtraction method is a good solution to obtain high accuracy and low resource requirements simultaneously, and is preferable for implementation in real-time embedded systems such as smart cameras.
Abstract: This letter proposes a background subtraction method for Bayer-pattern image sequences. The proposed method models the background in a Bayer-pattern domain using a mixture of Gaussians (MoG) and classifies the foreground in an interpolated red, green, and blue (RGB) domain. This method can achieve almost the same accuracy as MoG using RGB color images while maintaining computational resources (time and memory) similar to MoG using grayscale images. Experimental results show that the proposed method is a good solution to obtain high accuracy and low resource requirements simultaneously. This improvement is important for a low-level task like background subtraction since its accuracy affects the performance of high-level tasks, and is preferable for implementation in real-time embedded systems such as smart cameras.
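
For orientation only: standard MoG background subtraction on grayscale frames with OpenCV. The paper's contribution is to run the MoG model directly on Bayer-pattern values and classify the foreground after interpolation to RGB, which this sketch does not reproduce; the file name and parameters are hypothetical.

import cv2

# Per-pixel Gaussian-mixture background model (OpenCV's MOG2 implementation).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

cap = cv2.VideoCapture('input.avi')   # hypothetical input sequence
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg_mask = subtractor.apply(gray)   # 255 = foreground, 0 = background
cap.release()
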

Journal ArticleDOI
TL;DR: Experiments show that the single-loop BGC texture operator outperforms the well-known LBP, and the statistical significance of the achieved accuracy improvement has been demonstrated through the Wilcoxon signed rank test.

Proceedings ArticleDOI
01 Dec 2011
TL;DR: A novel technique is proposed that makes the gesture image rotation invariant by aligning the 1st principal component of the segmented hand gesture with the vertical axis; the method performs with 99.6% classification accuracy, which is better than earlier reported techniques.
Abstract: The accurate classification of static hand gestures plays a vital role in developing a hand gesture recognition system, which is used for human-computer interaction (HCI) and for human alternative and augmentative communication (HAAC) applications. A vision-based static hand gesture recognition algorithm consists of three stages: preprocessing, feature extraction and classification. The preprocessing stage involves the following three sub-stages: segmentation, which segments the hand region from its background using a histogram-based thresholding algorithm and transforms it into a binary silhouette; rotation, which rotates the segmented gesture to make the algorithm rotation invariant; and filtering, which effectively removes background noise and object noise from the binary image by a morphological filtering technique. To obtain a rotation-invariant gesture image, a novel technique is proposed in this paper that aligns the 1st principal component of the segmented hand gesture with the vertical axis. A localized contour sequence (LCS) based feature is used here to classify the hand gestures. A k-means based radial basis function neural network (RBFNN) is also proposed here for classification of hand gestures from the LCS-based feature set. The experiment is conducted on 500 training images and 500 test images of a 25-class grayscale static hand gesture image dataset of the Danish/international sign language hand alphabet. The proposed method performs with 99.6% classification accuracy, which is better than earlier reported techniques.
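
A hedged sketch of the rotation-normalization idea: compute the orientation of the 1st principal component of the foreground pixel coordinates, then rotate the silhouette so that this axis becomes vertical. The rotation call and sign convention in the usage note are assumptions.

import numpy as np

def principal_axis_angle(binary):
    # Orientation (degrees, w.r.t. the x-axis) of the silhouette's
    # 1st principal component, from the covariance of foreground pixels.
    ys, xs = np.nonzero(binary)
    coords = np.stack([xs - xs.mean(), ys - ys.mean()])
    eigvals, eigvecs = np.linalg.eigh(np.cov(coords))
    vx, vy = eigvecs[:, np.argmax(eigvals)]
    return np.degrees(np.arctan2(vy, vx))

# The segmented gesture can then be rotated so this axis is vertical, e.g.
# scipy.ndimage.rotate(binary.astype(float), principal_axis_angle(binary) - 90.0,
#                      reshape=True, order=0) > 0.5
# (the sign of the correction depends on the image coordinate convention).
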

Journal ArticleDOI
TL;DR: A minimally interactive high-throughput system which employs a color gradient based active contour model for rapid and accurate segmentation of multiple target objects on very large images is presented; it is shown that HNCut-CGAC is computationally efficient and may be easily applied to a variety of different problems and applications.

Journal ArticleDOI
TL;DR: A new objective color image quality measure in spatial domain is proposed that overcomes the limitation of these existing methods significantly, is easy to calculate and applicable to various image processing applications.
Abstract: Humans have always seen the world in color. In the last three decades, there has been a rapid and enormous transition from grayscale images to color ones. Well-known objective evaluation algorithms for measuring image quality include mean squared error (MSE), peak signal-to-noise ratio (PSNR), and Human Visual System based ones such as structural similarity measures and edge based similarity measures. A common and major limitation of these objective measures is that they evaluate the quality of grayscale images only and do not make use of image color information. Color is a powerful descriptor that often simplifies object identification and extraction from a scene, so color information can also influence human judgments. In this paper, a new objective color image quality measure in the spatial domain is therefore proposed that significantly overcomes this limitation of the existing methods, is easy to calculate and is applicable to various image processing applications. The proposed quality measure has been designed as a combination of four main factors: luminance similarity, structure correlation, edge similarity, and color similarity. The proposed index is mathematically defined and an HVS model is explicitly employed in it. Experiments on various image distortion types indicate that this index performs significantly better than other traditional error summation methods and existing similarity measures.

Journal ArticleDOI
TL;DR: This paper clarifies how to reduce the GAP modeling time and presents experimental results comparing GAP with existing object detection methods, demonstrating that superior object detection with higher precision and recall rates is achieved by GAP.

Journal ArticleDOI
TL;DR: The experimental results show that TiBS does not provide high compression ratios, but it enables energy-efficient image communication, even for the source camera node, and even for high packet loss rates.
Abstract: This article presents a lightweight image compression algorithm explicitly designed for resource-constrained wireless camera sensors, called TiBS (tiny block-size image coding). TiBS operates on blocks of 2x2 pixels (this makes it easy for the end-user to conceal missing blocks due to packet losses) and is based on pixel removal. Furthermore, TiBS is combined with a chaotic pixel mixing scheme to reinforce the robustness of image communication against packet losses. For validation purposes, TiBS as well as a JPEG-like algorithm have been implemented on a real wireless camera sensor composed of a Mica2 mote and a Cyclops imager. The experimental results show that TiBS does not provide high compression ratios, but it enables energy-efficient image communication, even for the source camera node, and even for high packet loss rates. Considering an original 8-bpp grayscale image for instance, the amount of energy consumed by the Cyclops/Mica2 can be reduced by around 60% when the image is compressed using TiBS, compared to the scenario without compression. Moreover, the visual quality of reconstructed images is usually acceptable under packet losses conditions up to 40-50%. In comparison, the JPEG-like algorithm results in clearly more energy consumption than TiBS at similar image quality and, of course, its resilience to packet losses is lower because of the larger size of encoded blocks. Adding redundant packets to the JPEG-encoded data packets may be considered to deal with packet losses, but the energy problem remains.
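
An illustrative toy version of the 2×2 block coding idea, assuming 8-bit grayscale: keep three of the four pixels in each block and conceal the dropped one from its neighbours at the decoder. The real TiBS coder removes pixels adaptively and adds chaotic pixel mixing against packet loss, none of which is modelled here.

import numpy as np

def encode_blocks(img):
    # Keep 3 of 4 pixels per 2x2 block (toy pixel-removal coder).
    # Returns per-block [top-left, top-right, bottom-left] plus the cropped shape.
    h, w = img.shape
    h2, w2 = h - h % 2, w - w % 2
    blocks = img[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).swapaxes(1, 2)
    blocks = blocks.reshape(-1, 4)            # [tl, tr, bl, br] per block
    return blocks[:, :3], (h2, w2)

def decode_blocks(kept, shape):
    # Rebuild the image, concealing each dropped pixel with a neighbour mean.
    h, w = shape
    blocks = np.empty((kept.shape[0], 4), dtype=np.float64)
    blocks[:, :3] = kept
    blocks[:, 3] = kept.mean(axis=1)          # concealment of the missing pixel
    out = blocks.reshape(h // 2, w // 2, 2, 2).swapaxes(1, 2).reshape(h, w)
    return np.clip(out, 0, 255).astype(np.uint8)
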