
Showing papers on "Grayscale" published in 2010


Book ChapterDOI
05 Sep 2010
TL;DR: A spatiotemporal stereo matching approach based on a reformulation of Yoon and Kweon's adaptive support weights algorithm is presented; it incorporates temporal evidence in real time, visibly reduces flickering, and outperforms per-frame approaches in the presence of image noise.
Abstract: We introduce a real-time stereo matching technique based on a reformulation of Yoon and Kweon's adaptive support weights algorithm [1]. Our implementation uses the bilateral grid to achieve a speedup of 200× compared to a straightforward full-kernel GPU implementation, making it the fastest technique on the Middlebury website. We introduce a colour component into our greyscale approach to recover precision and increase discriminability. Using our implementation, we speed up spatial-depth superresolution 100×. We further present a spatiotemporal stereo matching approach based on our technique that incorporates temporal evidence in real time (>14 fps). Our technique visibly reduces flickering and outperforms per-frame approaches in the presence of image noise. We have created five synthetic stereo videos, with ground truth disparity maps, to quantitatively evaluate depth estimation from stereo video. Source code and datasets are available on our project website.
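
The adaptive support weights at the core of this technique are compact enough to sketch: each neighbour's contribution to the matching cost is weighted by its similarity to, and distance from, the window centre. A minimal sketch for a single grayscale patch follows; the gamma values and the function name are illustrative assumptions, and the paper's actual contribution (evaluating such kernels on a bilateral grid, plus the colour component) is not shown.

```python
import numpy as np

def support_weights(patch_gray, gamma_c=10.0, gamma_s=10.0):
    """Adaptive support weights (Yoon & Kweon style) for one square patch.
    Each neighbour's weight is exp(-intensity_diff/gamma_c - distance/gamma_s),
    computed here on grayscale values only as a simple illustration."""
    patch_gray = patch_gray.astype(np.float64)
    r = patch_gray.shape[0] // 2
    center = patch_gray[r, r]
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.sqrt(ys ** 2 + xs ** 2)       # distance from window centre
    intensity = np.abs(patch_gray - center)    # photometric difference
    return np.exp(-intensity / gamma_c - spatial / gamma_s)
```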

262 citations


Journal ArticleDOI
TL;DR: A resolution progressive compression scheme which compresses an encrypted image progressively in resolution, such that the decoder can observe a low-resolution version of the image, study local statistics based on it, and use the statistics to decode the next resolution level.
Abstract: Lossless compression of encrypted sources can be achieved through Slepian-Wolf coding. For encrypted real-world sources, such as images, the key to improving compression efficiency is how the source dependency is exploited. Approaches in the literature that make use of Markov properties in the Slepian-Wolf decoder do not work well for grayscale images. In this correspondence, we propose a resolution-progressive compression scheme which compresses an encrypted image progressively in resolution, such that the decoder can observe a low-resolution version of the image, study local statistics based on it, and use the statistics to decode the next resolution level. Good performance is observed both theoretically and experimentally.

217 citations


Proceedings ArticleDOI
19 Mar 2010
TL;DR: The grayscale images generated using the algorithm in the experiment confirm that the algorithm preserves the salient features of the color image, such as contrasts, sharpness, shadow, and image structure.
Abstract: Conversion of a color image into a grayscale image that retains its salient features is a complicated process. The converted grayscale image may lose the contrasts, sharpness, shadow, and structure of the color image. To preserve the contrasts, sharpness, shadow, and structure of the color image, a new algorithm is proposed. To convert the color image into a grayscale image, the new algorithm performs RGB approximation, reduction, and addition of chrominance and luminance. The grayscale images generated using the algorithm in the experiment confirm that it preserves the salient features of the color image, such as contrasts, sharpness, shadow, and image structure.
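
The abstract names the steps (RGB approximation, reduction, addition of chrominance and luminance) only at a high level. As a point of reference, here is a minimal sketch of the conventional luminance-weighted conversion that such algorithms aim to improve on; the weights are the standard Rec. 601 luma coefficients, not the authors' procedure.

```python
import numpy as np

def luminance_grayscale(rgb):
    """Baseline color-to-grayscale conversion with Rec. 601 luma weights.

    rgb: HxWx3 array with values in [0, 255]. Returns an HxW uint8 image.
    This is the conventional weighted sum that feature-preserving
    conversion algorithms are compared against, not the paper's method."""
    rgb = rgb.astype(np.float64)
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(gray, 0, 255).astype(np.uint8)
```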

183 citations


Journal ArticleDOI
TL;DR: The use of color in image processing is motivated by two principal factors: first, color is a powerful descriptor that often simplifies object identification and extraction from a scene; second, humans can discern thousands of color shades and intensities, compared with only about two dozen shades of gray.
Abstract: The use of color in image processing is motivated by two principal factors. First, color is a powerful descriptor that often simplifies object identification and extraction from a scene. Second, humans can discern thousands of color shades and intensities, compared with only about two dozen shades of gray. In the RGB model, each color appears in its primary spectral components of red, green, and blue. This model is based on a Cartesian coordinate system. Images represented in the RGB color model consist of three component images, one for each primary; when fed into an RGB monitor, these three images combine on the phosphor screen to produce a composite color image. The number of bits used to represent each pixel in RGB space is called the pixel depth. Consider an RGB image in which each of the red, green, and blue images is an 8-bit image. Under these conditions, each RGB color pixel is said to have a depth of 24 bits. MATLAB 7.0 2007b was used for the implementation of all results.
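
The pixel-depth arithmetic described above is easy to make concrete. A short sketch (in Python rather than the MATLAB environment the abstract mentions) illustrates the 24-bit depth of a three-channel, 8-bit-per-channel RGB image; the tiny example array is purely illustrative.

```python
import numpy as np

# Pixel depth of an RGB image: bits per channel times number of channels.
bits_per_channel = 8
channels = 3                               # red, green, blue component images
pixel_depth = bits_per_channel * channels  # 24 bits per pixel
n_colors = 2 ** pixel_depth                # 16,777,216 representable colors

# A 2x2 24-bit RGB image: one 8-bit plane per primary, stacked along axis 2.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 0, 0)                    # a pure red pixel
print(pixel_depth, n_colors)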

182 citations


Journal ArticleDOI
TL;DR: A new image-based approach to tracking the six-degree-of-freedom trajectory of a stereo camera pair is described, which directly uses all grayscale information available within the stereo pair (or stereo region), leading to very robust and precise results.
Abstract: In this paper we describe a new image-based approach to tracking the six-degree-of-freedom trajectory of a stereo camera pair. The proposed technique estimates the pose and subsequently the dense pixel matching between temporal image pairs in a sequence by performing dense spatial matching between images of a stereo reference pair. In this way a minimization approach is employed which directly uses all grayscale information available within the stereo pair (or stereo region) leading to very robust and precise results. Metric 3D structure constraints are imposed by consistently warping corresponding stereo images to generate novel viewpoints at each stereo acquisition. An iterative non-linear trajectory estimation approach is formulated based on a quadrifocal relationship between the image intensities within adjacent views of the stereo pair. A robust M-estimation technique is used to reject outliers corresponding to moving objects within the scene or other outliers such as occlusions and illumination changes. The technique is applied to recovering the trajectory of a moving vehicle in long and difficult sequences of images.

138 citations


Journal ArticleDOI
TL;DR: A new variational functional with constraints is proposed by introducing fuzzy membership functions which represent several different regions in an image by proposing three methods for handling the constraints of membership functions in the minimization.
Abstract: The goal of this paper is to develop a multiphase image segmentation method based on fuzzy region competition. A new variational functional with constraints is proposed by introducing fuzzy membership functions which represent several different regions in an image. The existence of a minimizer of this functional is established. We propose three methods for handling the constraints of membership functions in the minimization. We also add auxiliary variables to approximate the membership functions in the functional such that Chambolle's fast dual projection method can be used. An alternate minimization method can be employed to find the solution, in which the region parameters and the membership functions have closed form solutions. Numerical examples using grayscale and color images are given to demonstrate the effectiveness of the proposed methods.

126 citations


Journal ArticleDOI
TL;DR: A new learning-based approach for super-resolving an image captured at low spatial resolution using a regularization framework, which can be used in applications such as wildlife sensor networks and remote surveillance, where the memory, the transmission bandwidth, and the camera cost are the main constraints.
Abstract: In this paper, we propose a new learning-based approach for super-resolving an image captured at low spatial resolution. Given the low spatial resolution test image and a database consisting of low and high spatial resolution images, we obtain super-resolution for the test image. We first obtain an initial high-resolution (HR) estimate by learning the high-frequency details from the available database. A new discrete wavelet transform (DWT) based approach is proposed for learning that uses a set of low-resolution (LR) images and their corresponding HR versions. Since super-resolution is an ill-posed problem, we obtain the final solution using a regularization framework. The LR image is modeled as the aliased and noisy version of the corresponding HR image, and the aliasing matrix entries are estimated using the test image and the initial HR estimate. The prior model for the super-resolved image is chosen as an inhomogeneous Gaussian Markov random field (IGMRF), and the model parameters are estimated using the same initial HR estimate. A maximum a posteriori (MAP) estimation is used to arrive at the cost function, which is minimized using a simple gradient descent approach. We demonstrate the effectiveness of the proposed approach by conducting experiments on grayscale as well as color images. The method is compared with the standard interpolation technique and also with existing learning-based approaches. The proposed approach can be used in applications such as wildlife sensor networks and remote surveillance, where the memory, the transmission bandwidth, and the camera cost are the main constraints.
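
The regularized refinement step lends itself to a compact illustration. The sketch below (the function name and all hyperparameters are assumptions) runs gradient descent on a simplified MAP-style cost, with 2× average down-sampling as the degradation model and a quadratic smoothness term standing in for the paper's IGMRF prior and learned aliasing matrix.

```python
import numpy as np

def sr_gradient_descent(lr, hr_init, lam=0.05, step=0.1, iters=200):
    """Sketch of MAP-style super-resolution refinement (assumed form):
    minimize ||y - D(x)||^2 + lam * smoothness(x) by gradient descent,
    where D is 2x down-sampling by block averaging. lr has half the
    height/width of hr_init; hr_init is a learned initial HR estimate."""
    x = hr_init.astype(np.float64).copy()
    for _ in range(iters):
        # Data term: down-sample x, compare with the LR observation,
        # and spread the residual back over each 2x2 block.
        down = x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))
        resid = down - lr
        grad_data = np.kron(resid, np.ones((2, 2))) / 4.0
        # Prior term: discrete Laplacian from a quadratic smoothness prior.
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= step * (grad_data - lam * lap)
    return x
```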

104 citations


Journal ArticleDOI
TL;DR: A new algorithm based on a probabilistic graphical model with the assumption that the image is defined over a Markov random field is proposed and it is demonstrated that the proposed approach outperforms representative conventional algorithms in terms of effectiveness and efficiency.
Abstract: Both commercial and scientific applications often need to transform color images into gray-scale images, e.g., to reduce the publication cost of printing color images or to help color-blind people see the visual cues of color images. However, conventional color-to-gray algorithms are not ready for practical applications because they encounter the following problems: 1) visual cues are not well defined, so it is unclear how to preserve important cues in the transformed gray-scale images; 2) some algorithms have extremely high computational cost; and 3) some require human-computer interaction to produce a reasonable transformation. To solve or at least reduce these problems, we propose a new algorithm based on a probabilistic graphical model with the assumption that the image is defined over a Markov random field. Thus, the color-to-gray procedure can be regarded as a labeling process that preserves the newly well-defined visual cues of a color image in the transformed gray-scale image. Visual cues are measurements that can be extracted from a color image by a perceiver. They indicate the state of some properties of the image that the perceiver is interested in perceiving. Different people may perceive different cues from the same color image; three cues are defined in this paper, namely, color spatial consistency, image structure information, and color channel perception priority. We cast color-to-gray as a visual cue preservation procedure based on a probabilistic graphical model and optimize the model based on an integral minimization problem. We apply the new algorithm to both natural color images and artificial pictures, and demonstrate that the proposed approach outperforms representative conventional algorithms in terms of effectiveness and efficiency. In addition, it requires no human-computer interaction.

104 citations


Journal ArticleDOI
TL;DR: This methodology shows 92.22% accuracy in automatically identifying the textures of basaltic rock using digitized images of thin sections of 140 rock samples, and can be used in geosciences to identify rock texture quickly and accurately.
Abstract: A new approach to identifying texture based on image processing of thin sections of different basalt rock samples is proposed here. This methodology takes an RGB or grayscale image of a thin section of a rock sample as input and extracts 27 numerical parameters. A multilayer perceptron neural network takes these parameters as input and provides, as output, the estimated texture class of the rock. For this purpose, we used 300 different thin sections and extracted 27 parameters from each one to train the neural network, which identifies the texture of the input image according to a previously defined classification. To test the methodology, 90 images (30 in each section) from different thin sections of different areas are used. This methodology has shown 92.22% accuracy in automatically identifying the textures of basaltic rock using digitized images of thin sections of 140 rock samples. The present technique is therefore promising in geosciences and can be used to identify rock texture quickly and accurately.
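
A hedged sketch of the "27 parameters in, texture class out" pipeline, using scikit-learn's MLPClassifier on placeholder feature vectors; the network size, solver defaults, the three-class labelling, and the random data are assumptions for illustration, not the authors' configuration or dataset.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in data: 300 training thin sections, 27 extracted
# parameters each, and (for illustration) one of 3 texture classes.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 27))
y_train = rng.integers(0, 3, size=300)

# Multilayer perceptron on standardized features, mirroring the paper's setup.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                  random_state=0))
clf.fit(X_train, y_train)

X_test = rng.normal(size=(90, 27))          # e.g. 90 held-out thin sections
predicted_classes = clf.predict(X_test)
```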

89 citations


Patent
09 Sep 2010
TL;DR: In this patent, raw grayscale image data representing images to be displayed in successive frames is used to drive a display having pixels that include a drive transistor and an organic light emitting device, by dividing each frame into at least first and second sub-frames and supplying each pixel with a drive current that is higher in the first sub-frame than in the second for raw grayscale values in a first preselected range, and higher in the second sub-frame than in the first for raw grayscale values in a second preselected range.
Abstract: Raw grayscale image data, representing images to be displayed in successive frames, is used to drive a display having pixels that include a drive transistor and an organic light emitting device by dividing each frame into at least first and second sub-frames, and supplying each pixel with a drive current that is higher in the first sub-frame than in the second sub-frame for raw grayscale values in a first preselected range, and higher in the second sub-frame than in the first sub-frame for raw grayscale values in a second preselected range. The display may be an active matrix display, such as an AMOLED display.

86 citations


Proceedings ArticleDOI
07 Jul 2010
TL;DR: This paper investigates the relationship, if any, between the PSNR and the subjective quality of stego images, using an adapted double stimulus continuous quality scale (DSCQS) method.
Abstract: Digital image steganography is the art of hiding information in other digital images. Image quality evaluation faces many difficulties, such as quantifying the amount of degradation or distortion induced in the reconstructed image. The peak signal-to-noise ratio (PSNR) is the most common metric used to evaluate stego image quality. However, subjective evaluation is the most reliable method of measuring image quality. Therefore, we try to answer the following question: “does the PSNR value of a stego image reflect its actual quality?” In JPEG steganography, the embedding represents a distortion source in addition to the image compression itself. This paper therefore investigates the relationship, if any, between the PSNR and the subjective quality of stego images. Four steganography methods and five grayscale images are used in this paper, and an adapted double stimulus continuous quality scale (DSCQS) method has been adopted. As a result, PSNR cannot be reliably used because it has poor correlation with the mean opinion score (MOS). Moreover, conclusions derived from only the PSNR values of different stego images are quite different from those derived from the MOS values. Additionally, the MOS shows that a particular steganography method modifies the quality of different test images in different ways.
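
Since the study hinges on how well PSNR tracks subjective scores, the metric itself is worth stating precisely. A short sketch of the standard definition for 8-bit images follows (the function name is illustrative).

```python
import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio between an 8-bit cover image and its stego
    version: 10 * log10(peak^2 / MSE). Returns inf for identical images."""
    cover = cover.astype(np.float64)
    stego = stego.astype(np.float64)
    mse = np.mean((cover - stego) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```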

Journal ArticleDOI
TL;DR: Combining a suitable number of training sets using a subset of the input videos was shown to be possible and not only reduced computation costs but also produced better detection accuracies by minimizing visual-selection errors, especially when processing large numbers of WCE videos.

Journal ArticleDOI
TL;DR: A new approach for fast computation of the 2-D Tchebichef moments is presented, using the image block representation for binary images and intensity slice representation for grayscale images, and deriving some properties of TcheBichef polynomials.
Abstract: Discrete orthogonal moments have recently been introduced in the field of image analysis. It has been shown that they have better image representation capability than continuous orthogonal moments. One problem concerning the use of moments as feature descriptors is their high computational cost, which may limit their application to problems where online computation is required. In this paper, we present a new approach for fast computation of the 2-D Tchebichef moments. By deriving some properties of Tchebichef polynomials, and using the image block representation for binary images and the intensity slice representation for grayscale images, a fast algorithm is proposed for computing the moments of binary and grayscale images. The theoretical analysis shows that the computational complexity of the proposed method depends upon the number of blocks in the image; thus, it improves computational efficiency as long as the number of blocks is smaller than the image size.

Proceedings ArticleDOI
03 Dec 2010
TL;DR: A novel feature representation based on color-based Local Binary Pattern (LBP) texture analysis for face recognition (FR) that exploits both color and texture discriminative features of a face image for FR purpose is proposed.
Abstract: In this paper, we propose a novel feature representation based on color-based Local Binary Pattern (LBP) texture analysis for face recognition (FR). The proposed method exploits both the color and texture discriminative features of a face image for FR purposes. We evaluate the proposed feature using three public face databases: CMU-PIE, Color FERET, and XM2VTSDB. Experimental results show that the proposed feature performs impressively better than grayscale LBP and color features alone. In particular, it is shown that the proposed feature is highly robust against severe variations in illumination and spatial resolution.
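
For reference, the grayscale LBP baseline that the colour-based descriptor is compared against can be sketched in a few lines; the colour variant applies such codes across colour channels, which is not reproduced here.

```python
import numpy as np

def lbp_8neighbour(gray):
    """Basic 8-neighbour LBP codes for a 2-D uint8 image (borders excluded).
    Each neighbour >= centre contributes one bit; this is the plain grayscale
    LBP baseline, not the paper's colour-based descriptor."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                              # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise neighbours
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.int32) << bit
    return codes
```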

Journal ArticleDOI
TL;DR: This work proposes an extension of the standard GMM for image segmentation, which utilizes a novel approach to incorporate the spatial relationships between neighboring pixels into the standard GMM, and proposes a new method to estimate the model parameters in order to minimize the upper bound on the data negative log-likelihood, based on the gradient method.
Abstract: Standard Gaussian mixture modeling (GMM) is a well-known method for image segmentation. However, the pixels themselves are considered independent of each other, making the segmentation result sensitive to noise. To reduce the sensitivity of the segmentation result to noise, Markov random field (MRF) models provide a powerful way to account for spatial dependences between image pixels. However, their main drawback is that they are computationally expensive to implement and require large numbers of parameters. Based on these considerations, we propose an extension of the standard GMM for image segmentation, which utilizes a novel approach to incorporate the spatial relationships between neighboring pixels into the standard GMM. The proposed model is easy to implement and, compared with MRF models, requires fewer parameters. We also propose a new method to estimate the model parameters in order to minimize the upper bound on the data negative log-likelihood, based on the gradient method. Experimental results obtained on noisy synthetic and real-world grayscale images demonstrate the robustness, accuracy and effectiveness of the proposed model in image segmentation, as compared with other methods based on standard GMM and MRF models.
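
A hedged sketch of the overall idea: fit a standard GMM to pixel intensities, then let neighbouring pixels influence each other's class posteriors. The paper builds the spatial term into the model and its parameter estimation; the simple post-hoc posterior smoothing below (function name and parameters are assumptions) only illustrates where the spatial relationship enters.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.mixture import GaussianMixture

def gmm_segment(gray, k=3, smooth=3):
    """Sketch of spatially regularized GMM segmentation: fit a standard GMM
    to pixel intensities, average the per-pixel posteriors over a small
    neighbourhood, then take the arg-max label per pixel."""
    h, w = gray.shape
    x = gray.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=k, random_state=0).fit(x)
    post = gmm.predict_proba(x).reshape(h, w, k)
    for j in range(k):
        # Crude stand-in for the paper's built-in spatial prior.
        post[..., j] = uniform_filter(post[..., j], size=smooth)
    return np.argmax(post, axis=-1)
```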

Journal ArticleDOI
TL;DR: It was observed that performing principal component analysis (PCA) calculations on multidimensional or multispectral information not only provides the combination of variables that explain most of the variance at a certain time instance but also decreases the autocorrelation of the resulting time series.

Journal ArticleDOI
TL;DR: A simple three-dimensional approach to numerically correct for image artifacts using sequential segmentation is developed, which leads to a significant improvement of grayscale data as well as final segmentation results with reasonable computational demand.
Abstract: Nondestructive imaging methods such as x-ray computed tomography (CT) yield high-resolution, grayscale, three-dimensional visualizations of pore structures and fluid interfaces in porous media. To separate solid and fluid phases for quantitative analysis and fluid dynamics modeling, segmentation is applied to convert grayscale CT volumes to discrete representations of media pore space. Unfortunately, x-ray CT is not free of artifacts, which complicates segmentation and quantitative image analysis due to obscuration of significant features or misinterpretation of attenuation values of a single material in different image sections. Images or volumes emanating from polychromatic (industrial) scanners are especially prone to high noise levels, beam hardening, scattered x-rays, or ring artifacts. These problems can be alleviated to a certain extent through application of metal filters, careful detector calibration, and sample centering, but they cannot be completely avoided. We have developed a simple three-dimensional approach to numerically correct for image artifacts using sequential segmentation. This procedure leads to a significant improvement of grayscale data as well as final segmentation results with reasonable computational demand.

Proceedings ArticleDOI
03 Dec 2010
TL;DR: A forensic scheme for identifying and reconstructing gamma correction operations in digital images is proposed, and the validity of the proposed gamma estimation algorithm is shown.
Abstract: In the digital era, digital photographs have become pervasive and are frequently used to record events. The authenticity and integrity of such photos can be ascertained by discovering more information about the previously applied operations. In this paper, we propose a forensic scheme for identifying and reconstructing gamma correction operations in digital images. The statistical abnormality in image grayscale histograms caused by the contrast enhancement is analyzed theoretically and measured effectively. Gray-level mapping functions involved in gamma correction can then be estimated blindly. Experiments on both globally and locally gamma-corrected images show the validity of the proposed gamma estimation algorithm.
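
Gamma correction of 8-bit data leaves characteristic gaps and peaks in the grayscale histogram, which is the kind of statistical abnormality such detectors measure. The sketch below applies a gamma curve and counts empty histogram bins as a crude symptom; the paper's actual statistic and blind estimator are not reproduced, and the gamma value is illustrative.

```python
import numpy as np

def gamma_correct(gray, gamma):
    """Apply gamma correction to an 8-bit grayscale image."""
    x = gray.astype(np.float64) / 255.0
    return np.clip(255.0 * np.power(x, gamma), 0, 255).astype(np.uint8)

def empty_bin_count(gray):
    """Count empty 8-bit histogram bins. Gamma-corrected images typically
    show many more empty bins (gaps) than untouched camera output."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    return int(np.sum(hist == 0))

# Usage sketch on synthetic data:
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
print(empty_bin_count(img), empty_bin_count(gamma_correct(img, 0.6)))
```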

Journal ArticleDOI
TL;DR: It is demonstrated that under certain conditions visual quality of compressed images can be slightly better than quality of original noisy images due to image filtering through lossy compression.
Abstract: This paper concerns lossy compression of images corrupted by additive noise. The main contribution of the paper is that the analysis is carried out from the viewpoint of compressed image visual quality. Several coders for which the compression ratio is controlled in different manners are considered. Visual quality metrics that are the most adequate for the considered application (WSNR, MSSIM, PSNR-HVS-M, and PSNR-HVS) are used. It is demonstrated that under certain conditions the visual quality of compressed images can be slightly better than the quality of the original noisy images, due to image filtering through lossy compression. The "optimal" parameters of the coders for which this positive effect can be observed depend upon the standard deviation of the noise. This allows proposing an automatic procedure for compressing noisy images in the neighborhood of the optimal operation point, that is, where visual quality either improves or degrades insignificantly. Comparison results for a set of grayscale test images and several variances of noise are presented.

Journal ArticleDOI
TL;DR: Results show that the performance of the proposed fusion method is better than that of other methods in terms of several frequently-used metrics, such as the structural similarity, peak signal-to-noise ratio and cross-entropy, as well as in the visual quality, even in the case of correlated noise.
Abstract: Development of efficient fusion algorithms is becoming increasingly important for obtaining a more informative image from several source images captured by different modes of imaging systems or multiple sensors. Since noise is inherent in practical imaging systems or sensors, an integrated approach of image fusion and noise reduction is essential. The discrete wavelet transform has been significantly successful in the development of fusion algorithms for noise-free images as well as in image denoising algorithms. A novel contrast-based image fusion algorithm is proposed in the wavelet domain for noisy source images. Novel features of the proposed fusion method are the noise reduction taking into consideration the linear dependency among the noisy source images and introducing an appropriate modification of the magnitude of the wavelet coefficients depending on the noise strength. Experiments are carried out on a number of commonly-used greyscale and colour test images to evaluate the performance of the proposed method. Results show that the performance of the proposed fusion method is better than that of other methods in terms of several frequently-used metrics, such as the structural similarity, peak signal-to-noise ratio and cross-entropy, as well as in the visual quality, even in the case of correlated noise.
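
A minimal wavelet-fusion sketch using PyWavelets, with the common average-approximation / maximum-absolute-detail rule; the paper's contribution replaces that detail rule with a contrast-based, noise-aware modification of the coefficient magnitudes, which would substitute for the max-abs choice below. Wavelet choice and decomposition level are illustrative assumptions.

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered grayscale images in the wavelet domain: average
    the approximation bands and keep the larger-magnitude detail coefficient
    at each position (a generic rule, not the paper's contrast-based one)."""
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                  # approximation band
    for da, db in zip(ca[1:], cb[1:]):               # detail bands per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```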

Patent
09 Aug 2010
TL;DR: In this article, a luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image, followed by application of a contrast sensitivity filter.
Abstract: The present invention relates to devices and methods for the measurement and/or for the specification of the perceptual intensity of a visual image, or the perceptual distance between a pair of images. Grayscale test and reference images are processed to produce test and reference luminance images. A luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image. Test and reference contrast images are produced from the local mean luminance reference image and the test and reference luminance images respectively, followed by application of a contrast sensitivity filter. The resulting images are combined according to mathematical prescriptions to produce a Just Noticeable Difference, JND value, indicative of a Spatial Standard Observer, SSO. Some embodiments include masking functions, window functions, special treatment for images lying on or near borders and pre-processing of test images.

Proceedings ArticleDOI
09 Sep 2010
TL;DR: A novel method to classify insects by analyzing color histogram and GLCM (Gray-Level Co-occurrence Matrices) of wing images is proposed and the winner-take-all policy is adopted in deciding most matched species in k nearest neighbors.
Abstract: Aims to provide general technicians who manage pects in production with a convenient way to recognize them, a novel method to classify insects by analyzing color histogram and GLCM (Gray-Level Co-occurrence Matrices) of wing images is proposed. The wing image of lepidopteran insect is preprocessed to get the ROI (Region of Interest); then the color image is first converted from RGB(Red-Green-Blue) to HSV (Hue-Saturation-Value) space, and the 1D color histograms of ROI are generated from hue and saturation distributions. Afterward, the color image is converted to grayscale one, rotated and transformed to a standard position, and their GLCM features are extracted. Matching is first undergone by computing the correlation of the histogram vectors between testing and template images; if the correlation is higher than certain threshold, then their GLCM features are further matched. The winner-take-all policy is adopted in deciding most matched species in k nearest neighbors. The method is tested at the lepidopteran insect database with 100 species. The recognition rate is as high as 71.1%. An ideal time performance is also achieved. The experimental results testify the efficiency of proposed method.
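
A hedged sketch of the described feature extraction using scikit-image: hue/saturation histograms of the ROI plus a few GLCM statistics of a grayscale version. Bin counts, GLCM offsets, the chosen properties, and the use of the V channel as the grayscale image are assumptions; ROI extraction and rotation to a standard position are omitted.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def wing_features(rgb_roi):
    """Sketch of a combined colour-histogram + GLCM feature vector for an
    RGB wing ROI (illustrative settings, not the paper's exact ones)."""
    hsv = rgb2hsv(rgb_roi)                                   # channels in [0, 1]
    hue_hist, _ = np.histogram(hsv[..., 0], bins=32, range=(0, 1), density=True)
    sat_hist, _ = np.histogram(hsv[..., 1], bins=32, range=(0, 1), density=True)

    gray = (255 * hsv[..., 2]).astype(np.uint8)              # V channel as grayscale
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [float(graycoprops(glcm, p).mean())
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([hue_hist, sat_hist, texture])
```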

Journal ArticleDOI
TL;DR: This work proposes new transformation models and optimization methods for directly and robustly registering images (including color ones) of rigid and deformable objects, all in a unified manner, and shows that widely adopted models are in fact particular cases of the proposed ones.
Abstract: The fundamental task of visual tracking is considered in this work as an incremental direct image registration problem. Direct methods refer to those that exploit the pixel intensities without resorting to image features. We propose new transformation models and optimization methods for directly and robustly registering images (including color ones) of rigid and deformable objects, all in a unified manner. We also show that widely adopted models are in fact particular cases of the proposed ones. Indeed, the proposed general models combine various classes of image warps and ensure robustness to generic lighting changes. Finally, the proposed optimization method together with the exploitation of all possible image information allow the algorithm to achieve high levels of accuracy. Extensive experiments are reported to demonstrate that visual tracking can indeed be highly accurate and robust despite deforming objects and severe illumination changes.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A background model that differentiates between background motion and foreground objects is presented, and changes in intensity/color histograms of pixel neighborhoods can be used to discriminate foreground and background regions.
Abstract: We present a background model that differentiates between background motion and foreground objects. Unlike most models that represent the variability of pixel intensity at a particular location in the image, we model the underlying warping of pixel locations arising from background motion. The background is modeled as a set of warping layers, where at any given time, different layers may be visible due to the motion of an occluding layer. Foreground regions are thus defined as those that cannot be modeled by some composition of some warping of these background layers. We illustrate this concept by first reducing the possible warps to those where the pixels are restricted to displacements within a spatial neighborhood, and then learning the appropriate size of that spatial neighborhood. Then, we show how changes in intensity/color histograms of pixel neighborhoods can be used to discriminate foreground and background regions. We find that this approach compares favorably with the state of the art, while requiring less computation.

Journal ArticleDOI
TL;DR: Celiac videocapsule images have textural properties that vary linearly along the small intestine; quantitative markers can assist in screening for celiac disease and in localizing the extent and degree of pathology throughout the small intestine.
Abstract: Quantitative disease markers were developed to assess videocapsule images acquired from celiac disease patients with villous atrophy and from control patients. Capsule endoscopy videoclip images (576 × 576 pixels) were acquired at a 2/second frame rate (11 celiacs, 10 controls) at five regions: 1. bulb, 2. duodenum, 3. jejunum, 4. ileum, and 5. distal ileum. Each of the 200 images per videoclip (= 100 s) was subdivided into 10 × 10 pixel subimages, for which the mean grayscale brightness level and its standard deviation (texture) were calculated. Pooled subimage values were grouped into low, intermediate, and high texture bands, and the mean brightness, texture, and number of subimages in each band (nine features in all) were used to quantify regions 1-5 and to determine the three best features for threshold and incremental learning classification. Classifiers were developed using 6 celiac and 5 control patients' data as exemplars, and tested on 5 celiacs and 5 controls. Pooled from all regions, the threshold classifier had 80% sensitivity and 96% specificity, and the incremental classifier had 88% sensitivity and 80% specificity, for predicting celiac versus control videoclips in the test set. Trends of increasing texture from regions 1 to 5 occurred in the low and high texture bands in celiacs, and the number of subimages in the low texture band diminished (r2 > 0.5). No trends occurred in controls. Celiac videocapsule images have textural properties that vary linearly along the small intestine. Quantitative markers can assist in screening for celiac disease and in localizing the extent and degree of pathology throughout the small intestine.
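
The core measurement is spelled out in the abstract: 10 × 10-pixel subimages, each summarized by its mean brightness and standard deviation (texture), pooled into low/intermediate/high texture bands. A short sketch of those statistics follows; the band cut-offs are placeholders, not the study's values.

```python
import numpy as np

def subimage_stats(gray, block=10):
    """Mean brightness and standard deviation (texture) of each non-overlapping
    block x block subimage of a grayscale frame, as described in the paper."""
    h, w = gray.shape
    g = gray[:h - h % block, :w - w % block].astype(np.float64)
    blocks = g.reshape(g.shape[0] // block, block, g.shape[1] // block, block)
    means = blocks.mean(axis=(1, 3))
    stds = blocks.std(axis=(1, 3))
    return means.ravel(), stds.ravel()

def texture_bands(stds, low_hi=5.0, high_lo=15.0):
    """Split subimage textures into low / intermediate / high bands.
    The cut-offs here are illustrative placeholders, not the study's."""
    low = stds[stds < low_hi]
    mid = stds[(stds >= low_hi) & (stds < high_lo)]
    high = stds[stds >= high_lo]
    return low, mid, high
```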

Proceedings ArticleDOI
10 Aug 2010
TL;DR: A parallel processing structure for a Sobel edge detection enhancement algorithm is presented, which can produce the result for one pixel in only one clock period.
Abstract: A Field Programmable Gate Array (FPGA) is an effective device for realizing real-time parallel processing of vast amounts of video data because of its fine-grained reconfigurable structure. This paper presents a parallel processing structure for a Sobel edge detection enhancement algorithm, which can produce the result for one pixel in only one clock period. The design is implemented on an FPGA chip (XC3S200-5ft256) and can successfully process 1024×1024 8-bit grayscale images. The design can locate the edges of a gray image quickly and efficiently.
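
A software reference (Python, not HDL) for the per-pixel computation such an FPGA pipeline parallelizes: Sobel gradients, gradient magnitude, and a threshold. The threshold value is an illustrative choice, and this is not the paper's hardware design.

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_edges(gray, threshold=128):
    """Sobel edge detection: horizontal/vertical gradients, magnitude, threshold."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    g = gray.astype(np.float64)
    gx = convolve(g, kx, mode="nearest")   # horizontal gradient
    gy = convolve(g, ky, mode="nearest")   # vertical gradient
    mag = np.hypot(gx, gy)                 # gradient magnitude
    return (mag >= threshold).astype(np.uint8) * 255
```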

Posted Content
TL;DR: Experimental results show that the proposed scheme is robust against converting the watermarked image to grayscale, image cropping, and JPEG compression.
Abstract: In this paper, a new watermarking scheme based on log-average luminance is presented. A color image is divided into blocks after converting the RGB image to the YCbCr color space. A monochrome image of 1024 bytes is used as the watermark. To embed the watermark, 16 blocks of size 8×8 are selected and used to embed the watermark image into the original image. The selected blocks are chosen spirally (beginning from the center of the image) among the blocks whose log-average luminance is higher than or equal to the log-average luminance of the entire image. Each byte of the monochrome watermark is embedded by updating the luminance value of a pixel of the image: if the byte of the watermark image represents white (255), a value is added to the pixel's luminance value; if it is black (0), the value is subtracted from the luminance value. To extract the watermark, the selected blocks are chosen as above; if the difference between the luminance value of the watermarked image pixel and that of the original image pixel is greater than 0, the watermark pixel is taken to be white, otherwise it is taken to be black. Experimental results show that the proposed scheme is robust against converting the watermarked image to grayscale, image cropping, and JPEG compression.
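
The block-selection criterion rests on log-average luminance. A short sketch of that quantity and of the eligibility test against the whole-image value follows; the spiral ordering from the image centre and the embedding update itself are omitted, and the function names and the delta constant are assumptions.

```python
import numpy as np

def log_average_luminance(y, delta=1e-4):
    """Log-average luminance of a luma (Y) array:
    exp(mean(log(delta + Y))), with delta guarding against log(0)."""
    y = y.astype(np.float64)
    return float(np.exp(np.mean(np.log(delta + y))))

def eligible_blocks(y, block=8):
    """Top-left coordinates of 8x8 blocks whose log-average luminance is
    >= that of the whole image -- the candidate set from which the scheme
    picks 16 blocks spirally from the image centre (ordering not shown)."""
    global_lal = log_average_luminance(y)
    h, w = y.shape
    coords = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            if log_average_luminance(y[i:i + block, j:j + block]) >= global_lal:
                coords.append((i, j))
    return coords
```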

Journal ArticleDOI
TL;DR: Simulation results show that the proposed wavelet-based watermarking scheme for color images is more robust than the existing scheme while retaining the watermark transparency and performance in terms of robustness and transparency is obtained.
Abstract: In this paper, a wavelet-based watermarking scheme for color images is proposed. The scheme is based on the design of a color visual model that is a modification of a perceptual model used in the coding of grayscale images. The model estimates the noise detection threshold of each wavelet coefficient in the luminance and chrominance components of color images, in order to satisfy the transparency and robustness required by color image watermarking. The noise detection thresholds of the coefficients in each color component are derived in a locally adaptive fashion based on the wavelet decomposition, by which perceptually significant coefficients are selected and a perceptually lossless quantization matrix is constructed for embedding watermarks. Performance in terms of robustness and transparency is obtained by embedding the maximum-strength watermark while maintaining the perceptually lossless quality of the watermarked color image. Simulation results show that the proposed scheme is more robust than the existing scheme while retaining watermark transparency.

Journal ArticleDOI
TL;DR: The role of intensity standardization in registration tasks is investigated through systematic and analytic evaluations involving clinical MR images, implying that the accuracy of image registration depends not only on spatial and geometric similarity but also on the similarity of the intensity values for the same tissues in different images.