
Showing papers in "Journal of Electronic Imaging in 2008"


Journal ArticleDOI
TL;DR: A finite mixture model of generalized Gaussian distributions (GGD) for robust segmentation and data modeling in the presence of noise and outliers is proposed, together with an information-theoretic approach for the selection of the number of classes.
Abstract: We propose a new finite mixture model based on the formalism of the generalized Gaussian distribution (GGD). Because it has the flexibility to adapt to the shape of the data better than the Gaussian, the GGD is less prone to overfitting the number of mixture classes when dealing with noisy data. In the first part of this work, we derive the maximum likelihood estimation of the parameters of the new mixture model and elaborate an information-theoretic approach for the selection of the number of classes. In the second part, we validate the proposed model by comparing it to the Gaussian mixture in applications related to image and video foreground segmentation.
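As a point of reference, a commonly used one-dimensional form of the generalized Gaussian density (our notation; the paper's mixture formulation may be parameterized differently) is

\[
p(x \mid \mu, \alpha, \beta) \;=\; \frac{\beta}{2\,\alpha\,\Gamma(1/\beta)} \exp\!\left[-\left(\frac{|x-\mu|}{\alpha}\right)^{\beta}\right],
\]

where \(\beta = 2\) recovers the Gaussian and \(\beta = 1\) the Laplacian; the extra shape parameter is what lets each mixture component adapt to heavy-tailed, noisy data instead of spawning spurious classes.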

102 citations


Journal ArticleDOI
TL;DR: High efficiency in the detection of blood vessels with the area under the receiver operating characteristics curve of up to 0.96 was achieved with a combination of Gabor filters at three scales.
Abstract: We propose image processing techniques for the detection of blood vessels in fundus images of the retina. The methods include the design of a bank of directionally sensitive Gabor filters with tunable scale and elongation parameters. Forty images of the retina from the Digital Retinal Images for Vessel Extraction database were used to evaluate the performance of the methods. The results of blood vessel detection using inverted green-channel images were compared with the corresponding manually segmented blood vessels. High efficiency in the detection of blood vessels, with an area under the receiver operating characteristics curve of up to 0.96, was achieved with a combination of Gabor filters at three scales.
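A minimal sketch of the kind of directional Gabor filter bank the abstract describes, using OpenCV; the kernel size, scales, and orientation count below are illustrative placeholders, not the paper's tuned parameters.

```python
import cv2
import numpy as np

def gabor_vessel_response(green_channel, scales=(2.0, 4.0, 8.0), n_orientations=18):
    """Maximum response over a bank of oriented Gabor filters (illustrative parameters)."""
    img = 255.0 - green_channel.astype(np.float32)    # vessels appear bright in the inverted green channel
    response = np.zeros_like(img)
    for sigma in scales:                              # three scales, echoing the abstract
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kern = cv2.getGaborKernel(ksize=(31, 31), sigma=sigma, theta=theta,
                                       lambd=4.0 * sigma, gamma=0.5, psi=0)
            kern -= kern.mean()                       # zero mean so flat regions give no response
            response = np.maximum(response, cv2.filter2D(img, cv2.CV_32F, kern))
    return response
```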

76 citations


Journal ArticleDOI
TL;DR: A new liveness detection method based on noise analysis along the valleys in the ridge-valley structure of fingerprint images that can provide antispoofing protection for fingerprint scanners.
Abstract: Recent research has shown that it is possible to spoof a variety of fingerprint scanners using some simple techniques with molds made from plastic, clay, Play-Doh, silicon, or gelatin materials. To protect against spoofing, methods of liveness detection measure physiological signs of life from fingerprints, ensuring that only live fingers are captured for enrollment or authentication. We propose a new liveness detection method based on noise analysis along the valleys in the ridge-valley structure of fingerprint images. Unlike live fingers, which have a clear ridge-valley structure, artificial fingers have a distinct noise distribution due to the material’s properties when placed on a fingerprint scanner. Statistical features are extracted in multiresolution scales using the wavelet decomposition technique. Based on these features, liveness separation (live/nonlive) is performed using classification trees and neural networks. We test this method on a data set that contains about 58 live, 80 spoof (50 made from Play-Doh and 30 made from gelatin), and 25 cadaver subjects for 3 different scanners. We also test this method on a second data set that contains 28 live and 28 spoof (made from silicon) subjects. Results show that we can get approximately 90.9–100% classification of spoof and live fingerprints. The proposed liveness detection method is purely software-based, and application of this method can provide antispoofing protection for fingerprint scanners.
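A hedged sketch of the multiresolution statistical features the abstract mentions, using PyWavelets on a 1-D gray-level profile sampled along a valley; the wavelet, level count, and statistics are assumptions for illustration, not the authors' exact feature set.

```python
import numpy as np
import pywt

def wavelet_noise_features(valley_profile, wavelet="db4", levels=3):
    """Noise statistics of the detail coefficients at several scales."""
    coeffs = pywt.wavedec(np.asarray(valley_profile, dtype=float), wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:                      # skip the approximation, keep detail bands
        feats += [detail.std(),
                  np.mean(np.abs(detail)),
                  np.percentile(np.abs(detail), 95)]
    return np.array(feats)                         # feed to a classification tree / neural network
```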

72 citations


Journal ArticleDOI
TL;DR: This work provides a survey of hand biometric techniques in the literature and incorporates several novel results of hand-based personal identification and verification and compares several feature sets in the shape-only and shape-plus-texture categories to assess the relevance of a proper hand normalization scheme in the success of any biometric scheme.
Abstract: We provide a survey of hand biometric techniques in the literature and incorporate several novel results of hand-based personal identification and verification. We compare several feature sets in the shape-only and shape-plus-texture categories, emphasizing the relevance of a proper hand normalization scheme in the success of any biometric scheme. The preference of the left and right hands or of ambidextrous access control is explored. Since the business case of a biometric device partly hinges on the longevity of its features and the generalization ability of its database, we have tested our scheme with time-lapse data as well as with subjects that were unseen during the training stage. Our experiments were conducted on a hand database that is an order of magnitude larger than any existing one in the literature. © 2008 SPIE and IS&T.

72 citations


Journal ArticleDOI
TL;DR: An order-statistics-based vector filter for the removal of impulsive noise from color images by switching between the identity (no filtering) operation and the vector median filter operation based on the robust univariate median operator is presented.
Abstract: We present an order-statistics-based vector filter for the removal of impulsive noise from color images. The filter preserves the edges and fine image details by switching between the identity (no filtering) operation and the vector median filter operation based on the robust univariate median operator. Experiments on a diverse set of images and comparisons with state-of-the-art filters show that the proposed filter combines simplicity, flexibility, excellent filtering quality, and low computational requirements.
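The switching idea can be illustrated with a small sketch: filter a pixel with the vector median only when it deviates strongly from the channel-wise median of its window. The threshold and window size are placeholders, and the paper's exact decision rule may differ.

```python
import numpy as np

def switching_vector_median(img, threshold=40.0):
    """Replace a pixel by the 3x3 window's vector median only when it deviates strongly
    from the channel-wise (univariate) median of that window. Threshold and window size
    are illustrative; the loop is written for clarity, not speed."""
    h, w, _ = img.shape
    out = img.astype(np.float64)
    padded = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode="edge")
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 3, x:x + 3].reshape(-1, 3)
            center = out[y, x]
            chan_med = np.median(win, axis=0)                 # robust univariate median per channel
            if np.abs(center - chan_med).sum() > threshold:   # pixel looks impulsive -> filter it
                dists = np.abs(win[:, None, :] - win[None, :, :]).sum(axis=(1, 2))
                out[y, x] = win[np.argmin(dists)]             # vector median of the window
    return out.astype(img.dtype)
```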

51 citations


Journal ArticleDOI
TL;DR: The role of morphological shape decomposition as an efficient image decomposition tool is extended to the interpolation of images by means of generalized morphological shape decomposition.
Abstract: One of the main image representations in mathematical morphology is the shape decomposition representation, which is useful for image compression and pattern recognition. The morphological shape decomposition representation can be generalized to extend the scope of its algebraic characteristics as much as possible. With these generalizations, the role of morphological shape decomposition (MSD) as an efficient image decomposition tool is extended to the interpolation of images. We address binary and grayscale interframe interpolation by means of generalized morphological shape decomposition. Computer simulations illustrate the results.

49 citations


Journal ArticleDOI
TL;DR: This work employs cryptographic techniques to protect dynamic signature features, making it impossible to derive the original biometrics from the stored templates, while maintaining good recognition performances.
Abstract: Biometrics is rapidly becoming the principal technology for automatic people authentication. The main advantage of using biometrics over traditional recognition approaches lies in the difficulty of losing, stealing, or copying individual behavioral or physical traits. The major weakness of biometrics-based systems lies in their security: in order to avoid data stealing or corruption, storing raw biometric data is not advised. The same problem occurs when biometric templates are employed, since they can be used to recover the original biometric data. We employ cryptographic techniques to protect dynamic signature features, making it impossible to derive the original biometrics from the stored templates, while maintaining good recognition performances. Together with protection, we also guarantee template cancellability and renewability. Moreover, the proposed authentication scheme is tailored to the signature variability of each user, thus obtaining a user-adaptive system with enhanced performances with respect to a nonadaptive one. Experimental results show the effectiveness of our approach when compared to both traditional nonsecure classifiers and other, already proposed protection schemes. © 2008 SPIE and IS&T.

48 citations


Journal ArticleDOI
TL;DR: The experiments confirm that no single state-of-the-art algorithm is the best or the worst for all scenes, and show that simple combining strategies improve illuminant estimation.
Abstract: Several algorithms have been proposed in the literature to recover the illuminant chromaticity of the original scene. These algorithms work well only when their prior assumptions are satisfied, and the best and the worst algorithms may be different for different scenes. We investigate the idea of not relying on a single method but instead considering a consensus decision that takes into account the responses of several algorithms and adaptively chooses the algorithms to be combined. We investigate different combining strategies of state-of-the-art algorithms to improve the results of illuminant chromaticity estimation. Single algorithms and combined ones are evaluated for both synthetic and real image databases using the angular error between the RGB triplets of the measured illuminant and the estimated one. Being interested in comparing the performance of the methods over large data sets, we also evaluate the experimental results using the Wilcoxon signed rank test. Our experiments confirm that no single state-of-the-art algorithm is the best or the worst for all scenes and show that simple combining strategies improve the illuminant estimation. © 2008 SPIE
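For concreteness, the evaluation metric named in the abstract, plus one trivially simple combining strategy (the function names and the plain averaging rule are ours; the paper studies several strategies, including adaptive ones):

```python
import numpy as np

def angular_error_deg(estimate, ground_truth):
    """Angle (degrees) between the estimated and measured illuminant RGB triplets."""
    e = np.asarray(estimate, dtype=float)
    t = np.asarray(ground_truth, dtype=float)
    cos = np.dot(e, t) / (np.linalg.norm(e) * np.linalg.norm(t))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def combine_estimates(estimates):
    """One simple consensus: average the unit-norm estimates of several algorithms."""
    units = [np.asarray(e, dtype=float) / np.linalg.norm(e) for e in estimates]
    return np.mean(units, axis=0)
```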

47 citations


Journal ArticleDOI
TL;DR: This work investigates the potential of foot biometric features based on geometry, shape, and texture and presents algorithms for a prototype rotation invariant verification system and an introduction to origins and fields of application for footprint-based personal recognition.
Abstract: We investigate the potential of foot biometric features based on geometry, shape, and texture and present algorithms for a prototype rotation invariant verification system. An introduction to origins and fields of application for footprint-based personal recognition is accompanied by a comparison with traditional hand biometry systems. Image enhancement and feature extraction steps emphasizing specific characteristics of foot geometry and their permanence and distinctiveness properties, respectively, are discussed. Collectability and universality issues are considered as well. A visualization of various test results comparing discriminative power of foot shape and texture is given. The impact on real-world scenarios is pointed out, and a summary of results is presented.

46 citations


Journal ArticleDOI
TL;DR: A removable visible watermarking scheme, which operates in the discrete cosine transform (DCT) domain, is proposed for combating copyright piracy and test results show that the introduced scheme succeeds in preventing the embedded watermark from illegal removal.
Abstract: A removable visible watermarking scheme, which operates in the discrete cosine transform (DCT) domain, is proposed for combating copyright piracy. First, the original watermark image is divided into 16×16 blocks and the preprocessed watermark to be embedded is generated by performing element-by-element matrix multiplication on the DCT coefficient matrix of each block and a key-based matrix. The intention of generating the preprocessed watermark is to guarantee the infeasibility of the illegal removal of the embedded watermark by the unauthorized users. Then, adaptive scaling and embedding factors are computed for each block of the host image and the preprocessed watermark according to the features of the corresponding blocks to better match the human visual system characteristics. Finally, the significant DCT coefficients of the preprocessed watermark are adaptively added to those of the host image to yield the watermarked image. The watermarking system is robust against compression to some extent. The performance of the proposed method is verified, and the test results show that the introduced scheme succeeds in preventing the embedded watermark from illegal removal. Moreover, experimental results demonstrate that legally recovered images can achieve superior visual effects, and peak signal-to-noise ratio values of these images are >50 dB.
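A per-block sketch of the embedding pipeline described above, with fixed scaling and embedding factors standing in for the paper's adaptive, HVS-driven ones; the block size, key matrix, and factor values are assumptions.

```python
import numpy as np
import cv2

def embed_visible_block(host_block, wm_block, key_matrix, alpha=0.9, beta=0.1):
    """Embed one 16x16 watermark block into one 16x16 host block in the DCT domain.
    The key-based element-by-element multiplication scrambles the watermark's DCT
    coefficients; alpha and beta are fixed here, whereas the paper computes them
    adaptively per block from visual features."""
    H = cv2.dct(host_block.astype(np.float32))
    W = cv2.dct(wm_block.astype(np.float32)) * key_matrix.astype(np.float32)
    Y = alpha * H + beta * W
    return np.clip(cv2.idct(Y), 0, 255).astype(np.uint8)
```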

45 citations


Journal ArticleDOI
TL;DR: A new class of top-hat transformation, built through structuring element construction and operation reorganization based on the properties of infrared small target images, can greatly improve the performance of small target enhancement.
Abstract: To improve the performance of the top-hat transformation for infrared small target enhancement, a new class of top-hat transformation through structuring element construction and operation reorganization is proposed. The structuring element construction and operation reorganization are based on the properties of the infrared small target image and thus can greatly improve the performance of small target enhancement. Experimental results verify that the proposed transformation is very efficient.
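As a baseline for comparison, the classical white top-hat that the paper generalizes can be written in a few lines with OpenCV; the proposed method replaces the single structuring element below with a constructed pair and reorders the morphological operations.

```python
import cv2

def classical_tophat_enhance(ir_img, se_size=9):
    """White top-hat: opening suppresses small bright targets, and subtracting the opening
    from the original recovers them. se_size is an illustrative value chosen larger than
    the expected target diameter."""
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_size, se_size))
    return cv2.morphologyEx(ir_img, cv2.MORPH_TOPHAT, se)
```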

Journal ArticleDOI
TL;DR: Experimental results show that the proposed method for watermarking stereo images is a semifragile one that is robust toward JPEG and JPEG2000 compression and fragile with respect to other signal manipulations.
Abstract: We present an object-oriented method for watermarking stereo images. Since stereo images are characterized by the perception of depth, the watermarking scheme we propose relies on the extraction of a depth map from the stereo pairs to embed the mark. The watermark embedding is performed in the wavelet domain using the quantization index modulation method. Experimental results show that the proposed method is a semifragile one that is robust toward JPEG and JPEG2000 compression and fragile with respect to other signal manipulations.
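Quantization index modulation itself is compact enough to sketch. This generic scalar version (the step size delta is a placeholder) is what would be applied to the selected wavelet coefficients; it is not the paper's full depth-map-driven scheme.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Quantize each coefficient onto one of two interleaved lattices, chosen by the bit."""
    c = np.asarray(coeffs, dtype=float)
    offsets = np.where(np.asarray(bits) == 0, 0.0, delta / 2.0)
    return np.round((c - offsets) / delta) * delta + offsets

def qim_detect(coeffs, delta=8.0):
    """Recover bits by deciding which of the two lattices each coefficient is nearer to."""
    c = np.asarray(coeffs, dtype=float)
    d0 = np.abs(c - np.round(c / delta) * delta)
    d1 = np.abs(c - (np.round((c - delta / 2.0) / delta) * delta + delta / 2.0))
    return (d1 < d0).astype(int)
```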

Journal ArticleDOI
TL;DR: Results demonstrate that compression up to 50:1 can be used with minimal effect on the performance of an iris recognition system; the imagery used includes both the CASIA iris database and the University of Bath database.
Abstract: The human iris is perhaps the most accurate biometric for use in identification. Commercial iris recognition systems currently can be found in several types of settings where a person’s true identity is required: to allow passengers in some airports to be rapidly processed through security; for access to secure areas; and for secure access to computer networks. The growing employment of iris recognition systems and the associated research to develop new algorithms will require large databases of iris images. If the required storage space is not adequate for these databases, image compression is an alternative. Compression allows a reduction in the storage space needed to store these iris images. This may, however, come at a cost: some amount of information may be lost in the process. We investigate the effects of image compression on the performance of an iris recognition system. Compression is performed using JPEG-2000 and JPEG, and the iris recognition algorithm used is an implementation of the Daugman algorithm. The imagery used includes both the CASIA iris database and the iris database collected by the University of Bath. Results demonstrate that compression up to 50:1 can be used with minimal effects on recognition.
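A minimal way to reproduce the 50:1 JPEG 2000 operating point with Pillow (assuming a Pillow build with OpenJPEG support; the paths and default ratio are placeholders):

```python
from PIL import Image

def compress_iris_jp2(in_path, out_path, ratio=50):
    """Save a grayscale iris image as JPEG 2000 at a fixed compression ratio."""
    img = Image.open(in_path).convert("L")
    img.save(out_path, format="JPEG2000", quality_mode="rates", quality_layers=[ratio])
```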

Journal ArticleDOI
TL;DR: 3D Object Processing: Compression, Indexing and Watermarking is an invaluable resource for graduate students and researchers working in signal and image processing, computer-aided design, animation, and imaging systems; practising engineers who want to expand their knowledge of 3D video objects will also find this book of great use.
Abstract: The arrival, and continuing evolution, of high quality 3D objects has been made possible by recent progress in 3D scanner acquisition and 3D graphics rendering. With this increasing quality comes a corresponding increase in the size and complexity of the data files and the necessity for advances in compression techniques. Effective indexing to facilitate the retrieval of the 3D data is then required to efficiently store, search and recapture the objects that have been compressed. The application of 3D images in fields such as communications, medicine and the military also calls for copyright protection, or watermarking, to secure the data for transmission. Written by expert contributors, this timely text brings together the three important and complementary topics of compression, retrieval and watermarking techniques for 3D objects. 3D object processing applications are developing rapidly and this book tackles the challenges and opportunities presented, focusing on the secure transmission, sharing and searching of 3D objects on networks, and includes: an introduction to the commonly used 3D representation schemes; the characteristics, advantages and limitations of polygonal meshes, surface based models and volumetric models; 3D compression techniques; the 3D coding and decoding schemes for reducing the size of 3D data to reduce transmission time and minimize distortion; state of the art responses to the intrinsic challenges of building a 3D-model search engine, considering view-based, structural and full-3D approaches; watermarking techniques for ensuring intellectual property protection and content security without altering the visual quality of the 3D object. 3D Object Processing: Compression, Indexing and Watermarking is an invaluable resource for graduate students and researchers working in signal and image processing, computer aided design, animation and imaging systems. Practising engineers who want to expand their knowledge of 3D video objects, including data compression, indexing, security, and copyrighting of information, will also find this book of great use.

Journal ArticleDOI
TL;DR: A novel denoising method is presented that outperforms its wavelet-based counterpart and produces results that are close to those of state-of-the-art denoisers.
Abstract: We perform a statistical analysis of curvelet coefficients, distinguishing between two classes of coefficients: those that contain a significant noise-free component, which we call the “signal of interest,” and those that do not. By investigating the marginal statistics, we develop a prior model for curvelet coefficients. The analysis of the joint intra- and inter-band statistics enables us to develop an appropriate local spatial activity indicator for curvelets. Finally, based on our findings, we present a novel denoising method, inspired by a recent wavelet domain method called ProbShrink. The new method outperforms its wavelet-based counterpart and produces results that are close to those of state-of-the-art denoisers.

Journal ArticleDOI
TL;DR: A technique is proposed for viewpoint- and illumination-independent digital archiving of art paintings in which the painting surface is regarded as a 2-D rough surface with gloss and shading; experiments using oil paintings confirm the feasibility of the proposed technique.
Abstract: We propose a technique for viewpoint and illumination-independent digital archiving of art paintings in which the painting surface is regarded as a 2-D rough surface with gloss and shading. Surface materials like oil paints are inhomogeneously dielectric with the dichromatic reflection property. The procedure for total digital archiving is divided into three main steps: acquisition, analysis, and rendering. In the first stage, we acquire images of a painting using a multiband imaging system with six spectral channels at different illumination directions. In the second stage, we estimate the surface properties of surface-spectral reflectance functions, surface normal vectors, and 3-D reflection model parameters. The principal component analysis suggests that the estimated spectral reflectances have the potential for high data compression. In the third stage, we combine all the estimates for rendering the painting under arbitrary illumination and viewing conditions. We confirm the feasibility of the proposed technique in experiments using oil paintings.

Journal ArticleDOI
TL;DR: This work reviews recently published work dealing with industrial applications of the wavelet transform and, more generally, multiresolution analysis, and presents more than 190 recent papers.
Abstract: Twenty-five years after the seminal work of Jean Morlet, the wavelet transform, multiresolution analysis, and other space-frequency or space-scale approaches are considered standard tools by researchers in image processing. Many applications that point out the interest of these techniques have been proposed. We review recently published work dealing with industrial applications of the wavelet transform and, more generally speaking, multiresolution analysis. We present more than 190 recent papers.

Journal ArticleDOI
TL;DR: Visual perception experiments undertaken with 192 normally sighted viewers to simulate artificially induced vision expected from emerging electronic visual prosthesis designs show that ROI processing improves scene understanding for low-quality images when used in a zoom application.
Abstract: Electronic visual prostheses, or “bionic eyes,” are likely to provide some coarse visual sensations to blind patients who have these systems implanted. The quality of artificially induced vision is anticipated to be very poor initially. Research described explores image processing techniques that improve perception for users of visual prostheses. We describe visual perception experiments undertaken with 192 normally sighted viewers to simulate artificially induced vision expected from emerging electronic visual prosthesis designs. Several variations of region-of-interest (ROI) processing were applied to images that were presented to subjects as low-resolution 25×25 binary images. Several additional processing methods were compared to determine their suitability for use in automatically controlling a zoom-type function for visual prostheses. The experiments show that ROI processing improves scene understanding for low-quality images when used in a zoom application.
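A toy illustration of the 25×25 binary presentation with an optional region-of-interest zoom; the cropping rule and the mean threshold are assumptions, not the experimental protocol.

```python
import cv2
import numpy as np

def phosphene_preview(img_gray, roi=None, size=25):
    """Crude simulation of a 25x25 binary phosphene image. If roi = (x, y, w, h) is
    given, zoom into that region first, mirroring the ROI/zoom idea."""
    if roi is not None:
        x, y, w, h = roi
        img_gray = img_gray[y:y + h, x:x + w]
    small = cv2.resize(img_gray, (size, size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).astype(np.uint8)
```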

Journal ArticleDOI
TL;DR: A comparison of image quality measures for fingerprints across different capture devices shows how differences among capture devices impact the image quality.
Abstract: Although many image quality measures have been proposed for fingerprints, few works have taken into account how differences among capture devices impact the image quality. Several representative me ...

Journal ArticleDOI
TL;DR: Different computational strategies for colorimetric characterization of scanners using multidimensional polynomials are presented, and it is shown how genetic programming could be used to generate the best polynomial.
Abstract: We present different computational strategies for colorimetric characterization of scanners using multidimensional polynomials. The designed strategies allow us to determine the coefficients of an a priori fixed polynomial, taking into account different color error statistics. Moreover, since there is no clear relationship between the polynomial chosen for the characterization and the intrinsic characteristics of the scanner, we show how genetic programming could be used to generate the best polynomial. Experimental results on different devices are reported to confirm the effectiveness of our methods with respect to others in the state of the art.
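A minimal sketch of fitting one a priori fixed polynomial by least squares; the second-order term set below is chosen for illustration, whereas the paper additionally searches term sets with genetic programming and optimizes other color-error statistics.

```python
import numpy as np

def fit_polynomial_characterization(rgb, xyz):
    """Least-squares fit of a fixed second-order polynomial mapping scanner RGB to XYZ.
    rgb: (N, 3) device responses, xyz: (N, 3) measured tristimulus values."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    terms = np.stack([np.ones_like(r), r, g, b, r * g, r * b, g * b, r * r, g * g, b * b], axis=1)
    coeffs, *_ = np.linalg.lstsq(terms, xyz, rcond=None)
    return coeffs          # shape (10, 3); predict with terms @ coeffs
```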

Journal ArticleDOI
TL;DR: Experimental results show that the noise in digital image sequences is efficiently removed by the proposed fuzzy motion and detail adaptive video filter (FMDAF), and that the method outperforms other state-of-the-art filters of comparable complexity on different video sequences.
Abstract: A new fuzzy-rule-based algorithm for the denoising of video sequences corrupted with additive Gaussian noise is presented. The proposed method constitutes a fuzzy-logic-based improvement of a recent detail and motion adaptive multiple class averaging filter (MCA). The method is first explained in the pixel domain for grayscale sequences, and is later extended to the wavelet domain and to color sequences. Experimental results show that the noise in digital image sequences is efficiently removed by the proposed fuzzy motion and detail adaptive video filter (FMDAF), and that the method outperforms other state-of-the-art filters of comparable complexity on different video sequences.
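A crude two-frame stand-in for the motion- and detail-adaptive idea: temporal averaging is applied only where the interframe difference is small. The hard threshold below replaces the fuzzy membership functions of the actual FMDAF filter.

```python
import numpy as np

def motion_adaptive_average(prev_frame, curr_frame, motion_thresh=15.0, alpha=0.5):
    """Average with the previous frame only where the interframe difference is small;
    elsewhere keep the current frame untouched (no temporal smearing of moving content)."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    weight = np.where(diff < motion_thresh, alpha, 0.0)
    return (weight * prev_frame + (1.0 - weight) * curr_frame).astype(curr_frame.dtype)
```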

Journal ArticleDOI
TL;DR: A comparative study shows the relative performance of invariant descriptors used in both a global and a local context and identifies the different situations for which they are best suited.
Abstract: Although many object invariant descriptors have been proposed in the literature, putting them into practice to obtain a robust recognition system able to cope with several perturbations is still a studied problem. After presenting the most commonly used global invariant descriptors, a comparative study shows their ability to discriminate between objects with little training. The Columbia Object Image Library database (COIL-100), which presents the same objects translated, rotated, and scaled, is used to test the invariance of the features to geometrical transforms. Images with partial object occlusion or complex backgrounds are used to test the robustness of the studied descriptors. We compare the descriptors in both a global and a local context (computed on the neighborhood of a pixel). The scale-invariant feature transform descriptor is used as a reference for local invariant descriptors. This study shows the relative performance of invariant descriptors used in both a global and a local context and identifies the different situations for which they are best suited.

Journal ArticleDOI
TL;DR: The automatic clustering approach separates the faces into gender and morphology groups, consistent with the other race effect reported in the psychology literature.
Abstract: The accuracy of a three-dimensional (3-D) face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a costly one-to-all registration approach, which requires the registration of each facial surface to all faces in the gallery. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the preregistered gallery faces. We propose a new algorithm for constructing an AFM and show that it works better than a recent approach. We inspect thin-plate spline and iterative closest-point-based registration schemes under manual or automatic landmark detection prior to registration. Extending the single-AFM approach, we consider employing category-specific alternative AFMs for registration and evaluate the effect on subsequent classification. We perform simulations with multiple AFMs that correspond to different clusters in the face shape space and compare these with gender- and morphology-based groupings. We show that the automatic clustering approach separates the faces into gender and morphology groups, consistent with the other-race effect reported in the psychology literature. Last, we describe and analyze a regular resampling method that significantly increases the accuracy of registration. © 2008 SPIE and IS&T.

Journal ArticleDOI
TL;DR: This work analyzes some of the most frequently cited binary skin classifiers based on explicit color cluster definition and presents possible strategies to improve their performance and shows that the fitness function can be tuned to favor either recall or precision in pixel classification.
Abstract: Skin detection is a preliminary step in many applications. We analyze some of the most frequently cited binary skin classifiers based on explicit color cluster definition and present possible strategies to improve their performance. In particular, we demonstrate how this can be accomplished by using genetic algorithms to redefine the cluster boundaries. We also show that the fitness function can be tuned to favor either recall or precision in pixel classification. Some combining strategies are then proposed to further improve the performance of these binary classifiers in terms of recall or precision. Finally, we show that, whatever the method or the strategy employed, the performance can be enhanced by preprocessing the images with a white balance algorithm. All the experiments reported here have been run on a large and heterogeneous image database.
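One frequently cited explicit RGB cluster rule of the kind the paper starts from is sketched below; its fixed thresholds are exactly the sort of boundaries a genetic algorithm can re-tune toward recall or precision.

```python
import numpy as np

def skin_mask_rgb_rule(img_rgb):
    """Explicit RGB cluster rule with fixed thresholds (uint8 RGB image in, boolean mask out)."""
    r = img_rgb[..., 0].astype(int)
    g = img_rgb[..., 1].astype(int)
    b = img_rgb[..., 2].astype(int)
    spread = img_rgb.max(axis=-1).astype(int) - img_rgb.min(axis=-1).astype(int)
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))
```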

Journal ArticleDOI
TL;DR: A MIE encryption algorithm based on an elliptic curve cryptosystem (ECC), which extends the single image encryption algorithm to a multiple images encryption algorithm and can accomplish a high level of security concerning information interaction on the network platform.
Abstract: By defining block image element, mixed image element (MIE), and composite image element (CIE), we propose a MIE encryption algorithm based on an elliptic curve cryptosystem (ECC), which extends the single image encryption algorithm to a multiple images encryption algorithm. The recipient’s (Bob’s) and the sender’s (Alice’s) detailed encryption and decryption steps and the core technology on the network platform are discussed. The correctness of this algorithm is verified. Experimental results and theoretical analysis show that the algorithm possesses large enough key space and can accomplish a high level of security concerning information interaction on the network platform. It can be particularly applicable to the highly confidential fields of information interaction. Finally, some problems in algorithm implementation are analyzed, and the prospects concerning the anomalous image element, rotary image element, and CIE encryption algorithms based on ECC are briefly described.

Journal ArticleDOI
TL;DR: The characteristics of a prototype dual-layer HDR display are described, a complex, spatially adaptive algorithm is necessary to generate the images used to drive the two panels, and the issues involved in the image-splitting algorithms are discussed.
Abstract: Liquid crystal displays (LCDs) are replacing analog film in radiology and reducing diagnosis times. Their typical dynamic range, however, can be too low for some applications, and their poor ability to reproduce low-luminance areas represents a critical drawback. The black level of an LCD can be drastically improved by stacking two liquid crystal panels in series. In this way the global transmittance is the pointwise product of the transmittances of the two panels and the theoretical dynamic range is squared. Such a high dynamic range (HDR) display also permits the reproduction of a larger number of gray levels, increasing the bit depth of the device. The two panels, however, are placed at a small distance from each other due to mechanical constraints, and this introduces a parallax error when the display is observed off-axis. A complex, spatially adaptive algorithm is therefore necessary to generate the images used to drive the two panels. We describe the characteristics of a prototype dual-layer HDR display and discuss the issues involved in the image-splitting algorithms. We propose some solutions and analyze their performance, giving a measure of the capabilities and limitations of the device.
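The naive, non-adaptive split makes the squared-dynamic-range argument concrete: drive both panels with the square root of the target transmittance so that their pointwise product reproduces it. The gamma value is a placeholder, and the prototype's real algorithm is spatially adaptive to cope with off-axis parallax.

```python
import numpy as np

def split_dual_layer(target_8bit, gamma=2.2):
    """Return the (identical) 8-bit drive image for both stacked panels: each shows the
    square root of the target transmittance, so their pointwise product reproduces it."""
    t = np.clip(target_8bit.astype(float) / 255.0, 0.0, 1.0) ** gamma   # to linear transmittance
    panel = np.sqrt(t)                                                  # product of two panels = t
    return np.round(panel ** (1.0 / gamma) * 255.0).astype(np.uint8)
```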

Journal Article
TL;DR: The automatic clustering approach separates the faces into gender and morphology groups, consistent with the other race effect reported in the psychology literature, and a regular re-sampling method is described and analysed that significantly increases the accuracy of registration.
Abstract: The accuracy of a 3D face recognition system depends on a correct registration that aligns the facial surfaces and makes a comparison possible. The best results obtained so far use a costly one-to-all registration approach, which requires the registration of each facial surface to all faces in the gallery. We explore the approach of registering the new facial surface to an average face model (AFM), which automatically establishes correspondence to the pre-registered gallery faces. We propose a new algorithm for constructing an AFM, and show that it works better than a recent approach. Extending the single-AFM approach, we propose to employ category-specific alternative AFMs for registration, and evaluate the effect on subsequent classification. We perform simulations with multiple AFMs that correspond to different clusters in the face shape space and compare these with gender and morphology based groupings. We show that the automatic clustering approach separates the faces into gender and morphology groups, consistent with the other race effect reported in the psychology literature. We inspect thin-plate spline and iterative closest point based registration schemes under manual or automatic landmark detection prior to registration. Finally, we describe and analyse a regular re-sampling method that significantly increases the accuracy of registration.

Journal ArticleDOI
TL;DR: It is shown that the watermarking algorithm has a very high probability of producing a false-positive detection and is of limited practical use; the intrinsic reasons are that the basis space of the singular value decomposition is image-content dependent and that singular value vectors carry no information about image structure, so there is no one-to-one correspondence between singular value vectors and image content.
Abstract: It is shown that the watermarking algorithm presented in another paper [Ganic and Eskicioglu, J. Electron. Imaging 14, 043004 (2005)] has a very high probability of a false-positive answer and is of limited use in practice. Furthermore, the intrinsic reasons for the high false-alarm probability are as follows: the basis space of the singular value decomposition is image-content dependent, and there is no one-to-one correspondence between a singular value vector and image content, because singular value vectors carry no information about the structure of the image. The most important reason is therefore the misconception of embedding the watermark's singular values alone, without information on the structure of the watermark. Finally, some examples are given to support the results of our theoretical analysis.
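The failure mode can be illustrated with a toy detector in the style of SVD-based schemes that store the watermark's singular vectors (a sketch of the flawed concept, not the exact algorithm of the cited paper): because reconstruction uses the claimed watermark's own U and V, almost any claimed mark yields a high correlation.

```python
import numpy as np

def svd_false_positive_demo(watermarked, original, claimed_watermark, alpha=0.1):
    """All images are assumed to be the same square size for simplicity."""
    Sw = np.linalg.svd(watermarked, compute_uv=False)
    So = np.linalg.svd(original, compute_uv=False)
    Uc, _, Vc = np.linalg.svd(claimed_watermark)
    extracted = (Uc * ((Sw - So) / alpha)) @ Vc     # reconstruction borrows the claimed mark's structure
    return np.corrcoef(extracted.ravel(), claimed_watermark.ravel())[0, 1]
```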

Journal ArticleDOI
TL;DR: This work exploits the framework in a hybrid analytical-numerical simulation that allows it to obtain quantitative estimates of the color shifts due to misregistration, thereby providing a characterization for these shifts as a function of the optical dot gain, halftone periodicities, spot shapes, and interseparation misregistration amounts.
Abstract: Halftoned separations of individual colorants, typically cyan, magenta, yellow, and black, are overlaid on a print substrate in typical color printing systems. Displacements between these separations, commonly referred to as "interseparation misregistration", can cause objectionable color shifts in the prints. We study this misregistration-induced color shift for periodic clustered-dot halftones using a spatiospectral model for the printed output that combines the Neugebauer model with a periodic lattice representation for the individual halftones. Using Fourier analysis in the framework of this model, we obtain an analytic characterization for the conditions for misregistration invariance in terms of colorant spectra, periodicity of the individual separation halftones, dot shapes, and misregistration displacements. We further exploit the framework in a hybrid analytical-numerical simulation that allows us to obtain quantitative estimates of the color shifts due to misregistration, thereby providing a characterization for these shifts as a function of the optical dot gain, halftone periodicities, spot shapes, and interseparation misregistration amounts. We present simulation results that demonstrate the impact of each of these parameters on the color shift and demonstrate qualitative agreement between our approximation and experimental data. © 2008 SPIE and IS&T.

Journal ArticleDOI
TL;DR: It is demonstrated that a motion-blurred image acquired in the vicinity of the disc by a low-cost imaging system can provide the three-dimensional components of the outlet velocity of the particles.
Abstract: The management of mineral fertilization using centrifugal spreaders calls for the development of spread pattern characterization devices to improve the quality of fertilizer spreading. In order to predict spread pattern deposition using a ballistic flight model, several parameters need to be determined, in particular, the velocity of the granules when they leave the spinning disc. We demonstrate that a motion-blurred image acquired in the vicinity of the disc by a low-cost imaging system can provide the three-dimensional components of the outlet velocity of the particles. A binary image is first obtained using a recursive linear filter. Then an original method based on the Hough transform is developed to identify the particle trajectories and to measure their horizontal outlet angles, not only in the case of horizontal motion but also in the case of three-dimensional motion. The method combines a geometric approach and mechanical knowledge derived from spreading analysis. The outlet velocities are deduced from outlet angle measurements using kinematic relationships. Experimental results provide preliminary validations of the technique.
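A hedged sketch of the trajectory-angle measurement using a probabilistic Hough transform on the binarized streak image (the paper develops its own Hough-based method combined with mechanical knowledge of the spreader; the thresholds below are illustrative):

```python
import cv2
import numpy as np

def trajectory_angles(binary_img):
    """Detect straight particle streaks in a binarized motion-blurred image and return
    their horizontal angles in degrees."""
    lines = cv2.HoughLinesP(binary_img, rho=1, theta=np.pi / 180.0, threshold=50,
                            minLineLength=40, maxLineGap=5)
    angles = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angles.append(float(np.degrees(np.arctan2(y2 - y1, x2 - x1))))
    return angles
```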