
Showing papers in "IEEE Transactions on Information Forensics and Security in 2006"


Journal ArticleDOI
TL;DR: A new method is proposed for identifying a digital camera from its images based on the sensor's pattern noise: a reference pattern, obtained by averaging the noise extracted from multiple images with a denoising filter, serves as a unique identification fingerprint for each camera under investigation.
Abstract: In this paper, we propose a new method for the problem of digital camera identification from its images based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark, whose presence in the image is established by using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false alarm rates and false rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction.
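
A minimal NumPy sketch of this pipeline, with a Gaussian filter standing in for the paper's wavelet-based denoiser and an illustrative decision threshold (both are assumptions, not the paper's exact choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image: np.ndarray) -> np.ndarray:
    """Residual = image minus its denoised version (Gaussian stand-in denoiser)."""
    image = image.astype(float)
    return image - gaussian_filter(image, sigma=1.0)

def reference_pattern(images) -> np.ndarray:
    """Camera fingerprint: average the residuals of many images from one camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def same_camera(image, fingerprint, threshold=0.01) -> bool:
    """Spread-spectrum-style detection: correlate the image's residual against
    the reference pattern; the threshold is set from false-alarm experiments."""
    return correlation(noise_residual(image), fingerprint) > threshold
```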

1,195 citations


Journal ArticleDOI
TL;DR: An overview of biometrics is provided and some of the salient research issues that need to be addressed for making biometric technology an effective tool for providing information security are discussed.
Abstract: Establishing identity is becoming critical in our vastly interconnected society. Questions such as "Is she really who she claims to be?," "Is this person authorized to use this facility?," or "Is he in the watchlist posted by the government?" are routinely being posed in a variety of scenarios ranging from issuing a driver's license to gaining entry into a country. The need for reliable user authentication techniques has increased in the wake of heightened concerns about security and rapid advancements in networking, communication, and mobility. Biometrics, described as the science of recognizing an individual based on his or her physical or behavioral traits, is beginning to gain acceptance as a legitimate method for determining an individual's identity. Biometric systems have now been deployed in various commercial, civilian, and forensic applications as a means of establishing identity. In this paper, we provide an overview of biometrics and discuss some of the salient research issues that need to be addressed for making biometric technology an effective tool for providing information security. The primary contributions of this overview are: 1) examining applications where biometrics can solve issues pertaining to information security; 2) enumerating the fundamental challenges encountered by biometric systems in real-world applications; and 3) discussing solutions to address the problems of scalability and security in large-scale authentication systems.

1,067 citations


Journal ArticleDOI
TL;DR: A novel algorithm for generating an image hash based on Fourier transform features and controlled randomization is developed and it is shown that the proposed hash function is resilient to content-preserving modifications, such as moderate geometric and filtering distortions.
Abstract: Image hash functions find extensive applications in content authentication, database search, and watermarking. This paper develops a novel algorithm for generating an image hash based on Fourier transform features and controlled randomization. We formulate the robustness of image hashing as a hypothesis testing problem and evaluate the performance under various image processing operations. We show that the proposed hash function is resilient to content-preserving modifications, such as moderate geometric and filtering distortions. We introduce a general framework to study and evaluate the security of image hashing systems. Under this new framework, we model the hash values as random variables and quantify their uncertainty in terms of differential entropy. Using this security framework, we analyze the security of the proposed schemes and several existing representative methods for image hashing. We then examine the security versus robustness tradeoff and show that the proposed hashing methods can provide excellent security and robustness.
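
A toy illustration of a key-dependent Fourier-feature hash; the region size, random weighting, and median binarization are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def fourier_hash(image: np.ndarray, key: int, n_bits: int = 64) -> np.ndarray:
    """Key-dependent random weightings of low-frequency FFT magnitudes,
    binarized around their median. Assumes the image is at least 32x32."""
    rng = np.random.default_rng(key)              # secret key drives randomization
    mag = np.abs(np.fft.fftshift(np.fft.fft2(image.astype(float))))
    cy, cx = mag.shape[0] // 2, mag.shape[1] // 2
    low = mag[cy - 16:cy + 16, cx - 16:cx + 16]   # robust low-frequency block
    feats = np.array([(rng.random(low.shape) * low).sum() for _ in range(n_bits)])
    return (feats > np.median(feats)).astype(np.uint8)

# Matching compares the Hamming distance between two hashes to a threshold.
```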

542 citations


Journal ArticleDOI
Siwei Lyu, Hany Farid
TL;DR: A universal approach to steganalysis for detecting the presence of hidden messages embedded within digital images, which shows that, within multiscale, multiorientation image decompositions (e.g., wavelets), first- and higher-order magnitude and phase statistics are relatively consistent across a broad range of images, but are disturbed by the existence of embedded hidden messages.
Abstract: Techniques for information hiding (steganography) are becoming increasingly more sophisticated and widespread. With high-resolution digital images as carriers, detecting hidden messages is also becoming considerably more difficult. We describe a universal approach to steganalysis for detecting the presence of hidden messages embedded within digital images. We show that, within multiscale, multiorientation image decompositions (e.g., wavelets), first- and higher-order magnitude and phase statistics are relatively consistent across a broad range of images, but are disturbed by the presence of embedded hidden messages. We show the efficacy of our approach on a large collection of images, and on eight different steganographic embedding algorithms.
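
A sketch of the feature-extraction half of such a detector, computing first- and higher-order statistics of wavelet subbands with PyWavelets; the classifier that would consume these features (e.g., an SVM trained on cover vs. stego images) is omitted:

```python
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

def wavelet_features(image: np.ndarray, levels: int = 3) -> np.ndarray:
    """First- and higher-order statistics of each detail subband."""
    coeffs = pywt.wavedec2(image.astype(float), "db4", level=levels)
    feats = []
    for detail in coeffs[1:]:            # (horizontal, vertical, diagonal) per scale
        for band in detail:
            c = band.ravel()
            feats += [c.mean(), c.var(), skew(c), kurtosis(c)]
    return np.array(feats)               # feed to a trained cover/stego classifier
```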

398 citations


Journal ArticleDOI
TL;DR: Two new approaches to matrix embedding for large payloads suitable for practical steganographic schemes are presented: one based on a family of codes constructed from simplex codes and the other based on random linear codes of small dimension.
Abstract: Matrix embedding is a previously introduced coding method that is used in steganography to improve the embedding efficiency (increase the number of bits embedded per embedding change). Higher embedding efficiency translates into better steganographic security. This gain is more important for long messages than for shorter ones because longer messages are, in general, easier to detect. In this paper, we present two new approaches to matrix embedding for large payloads suitable for practical steganographic schemes: one based on a family of codes constructed from simplex codes and the other based on random linear codes of small dimension. The embedding efficiency of the proposed methods is evaluated with respect to theoretically achievable bounds.
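
The paper's codes target large payloads, but the syndrome-coding principle behind matrix embedding is easiest to see with the classic binary [7,4] Hamming code, which embeds 3 message bits into 7 cover bits with at most one change. The following shows that small classic case, not the paper's construction:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is j+1 in binary.
H = np.array([[int(b) for b in format(j, "03b")] for j in range(1, 8)]).T

def embed(cover_bits: np.ndarray, msg_bits: np.ndarray) -> np.ndarray:
    """Force H @ stego = msg (mod 2) by flipping at most one of 7 cover bits."""
    syndrome = (H @ cover_bits + msg_bits) % 2
    stego = cover_bits.copy()
    if syndrome.any():
        # The syndrome, read as a binary number, names the column to flip.
        stego[int("".join(map(str, syndrome)), 2) - 1] ^= 1
    return stego

def extract(stego_bits: np.ndarray) -> np.ndarray:
    return (H @ stego_bits) % 2

# Example: 3 bits ride on 7 cover LSBs at the cost of at most 1 changed bit.
cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([1, 0, 1])
assert np.array_equal(extract(embed(cover, msg)), msg)
```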

282 citations


Journal ArticleDOI
TL;DR: The proposed multistream HMM facial expression system, which utilizes stream reliability weights, achieves a 44% relative reduction in facial expression recognition error compared to the single-stream HMM system.
Abstract: The performance of an automatic facial expression recognition system can be significantly improved by modeling the reliability of different streams of facial expression information utilizing multistream hidden Markov models (HMMs). In this paper, we present an automatic multistream HMM facial expression recognition system and analyze its performance. The proposed system utilizes facial animation parameters (FAPs), supported by the MPEG-4 standard, as features for facial expression classification. Specifically, the FAPs describing the movement of the outer-lip contours and eyebrows are used as observations. Experiments are first performed employing single-stream HMMs under several different scenarios, utilizing outer-lip and eyebrow FAPs individually and jointly. A multistream HMM approach is proposed for introducing facial expression and FAP group dependent stream reliability weights. The stream weights are determined based on the facial expression recognition results obtained when FAP streams are utilized individually. The proposed multistream HMM facial expression system, which utilizes stream reliability weights, achieves a 44% relative reduction in facial expression recognition error compared to the single-stream HMM system.
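
The multistream combination itself reduces to a weighted sum of per-stream HMM log-likelihoods. A minimal sketch, assuming per-class log-likelihoods already computed by trained stream HMMs and illustrative weights:

```python
import numpy as np

def classify(stream_loglikes: dict, weights: dict) -> int:
    """stream_loglikes: stream name -> per-class log-likelihood vector."""
    total = sum(w * stream_loglikes[name] for name, w in weights.items())
    return int(np.argmax(total))        # index of the winning expression class

# e.g., weights derived from each stream's standalone recognition rate:
# classify({"outer_lip": ll_lip, "eyebrow": ll_brow},
#          {"outer_lip": 0.7, "eyebrow": 0.3})
```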

244 citations


Journal ArticleDOI
TL;DR: The experimental results indicate that the new skin-distortion-based approach for discriminating fake fingers from real ones is a very promising technique for making fingerprint recognition systems more robust against fake-finger-based spoofing attempts.
Abstract: Attacking fingerprint-based biometric systems by presenting fake fingers at the sensor could be a serious threat for unattended applications. This work introduces a new approach for discriminating fake fingers from real ones, based on the analysis of skin distortion. The user is required to move the finger while pressing it against the scanner surface, thus deliberately exaggerating the skin distortion. Novel techniques for extracting, encoding, and comparing skin distortion information are formally defined and systematically evaluated over a test set of real and fake fingers. The proposed approach is privacy friendly and does not require additional expensive hardware besides a fingerprint scanner capable of capturing and delivering frames at the proper rate. The experimental results indicate the new approach to be a very promising technique for making fingerprint recognition systems more robust against fake-finger-based spoofing attempts.

241 citations


Journal ArticleDOI
TL;DR: The results show that in addition to its capability of handling bitewing and periapical dental radiographic views, the approach exhibits the lowest failure rate among all approaches studied.
Abstract: Automating the process of postmortem identification of individuals using dental records is receiving increased attention. Teeth segmentation from dental radiographic films is an essential step for achieving highly automated postmortem identification. In this paper, we offer a mathematical morphology approach to the problem of teeth segmentation. We also propose a grayscale contrast stretching transformation to improve the performance of teeth segmentation. We compare and contrast our approach with other approaches proposed in the literature on both theoretical and empirical grounds. The results show that in addition to its capability of handling bitewing and periapical dental radiographic views, our approach exhibits the lowest failure rate among all approaches studied.
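
As a small concrete example of the preprocessing step, a linear percentile-based contrast stretch might look as follows; the percentile choices are illustrative, and the paper defines its own transformation:

```python
import numpy as np

def stretch(image: np.ndarray, lo_pct: float = 2, hi_pct: float = 98) -> np.ndarray:
    """Map the [lo_pct, hi_pct] intensity range onto the full 8-bit range,
    sharpening the contrast between teeth and background in a radiograph."""
    lo, hi = np.percentile(image, [lo_pct, hi_pct])
    out = np.clip((image.astype(float) - lo) / max(hi - lo, 1e-12), 0, 1)
    return (255 * out).astype(np.uint8)
```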

147 citations


Journal ArticleDOI
TL;DR: This paper proposes a new approach to wet paper codes using random linear codes of small codimension that at the same time improves the embedding efficiency (number of random message bits embedded per embedding change).
Abstract: Wet paper codes were previously proposed as a tool for construction of steganographic schemes with arbitrary (nonshared) selection channels. In this paper, we propose a new approach to wet paper codes using random linear codes of small codimension that at the same time improves the embedding efficiency (number of random message bits embedded per embedding change). Practical algorithms are given and their performance is evaluated experimentally and compared to theoretically achievable bounds. An approximate formula for the embedding efficiency of the proposed scheme is derived. The proposed coding method can be modularly combined with most steganographic schemes to improve their security.
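
A naive sketch of the wet paper idea with a random binary matrix shared via a key: the sender may only flip "dry" positions, yet the receiver extracts the message without knowing which positions were dry. The Gaussian elimination here is deliberately unoptimized; the paper's contribution is making this step efficient with codes of small codimension:

```python
import numpy as np

def wet_paper_embed(cover: np.ndarray, wet: np.ndarray,
                    msg: np.ndarray, key: int) -> np.ndarray:
    """cover: LSB-plane bits (0/1); wet: boolean mask of unchangeable bits."""
    n, q = cover.size, msg.size
    D = np.random.default_rng(key).integers(0, 2, (q, n))  # shared via the key
    dry = np.flatnonzero(~wet)
    # Find changes v on dry positions with D[:, dry] @ v = msg + D @ cover (mod 2);
    # solvable with high probability when dry positions comfortably exceed q.
    A = np.column_stack([D[:, dry], (msg + D @ cover) % 2])
    row, piv = 0, []
    for col in range(dry.size):                   # Gauss-Jordan over GF(2)
        hits = np.flatnonzero(A[row:, col])
        if hits.size == 0:
            continue
        A[[row, row + hits[0]]] = A[[row + hits[0], row]]
        for r in np.flatnonzero(A[:, col]):
            if r != row:
                A[r] ^= A[row]
        piv.append(col)
        row += 1
        if row == q:
            break
    stego = cover.copy()
    for r, col in enumerate(piv):
        stego[dry[col]] ^= A[r, -1]               # apply the solved changes
    return stego                                  # receiver: (D @ stego) % 2 == msg
```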

127 citations


Journal ArticleDOI
TL;DR: Simulations of semi-fragile authentication methods on real images demonstrate the effectiveness of the MSB-LSB approach in simultaneously achieving security, robustness, and fragility objectives.
Abstract: This paper focuses on a coding approach for effective analysis and design of secure watermark-based multimedia authentication systems. We provide a design framework for semi-fragile watermark-based authentication such that both objectives of robustness and fragility are effectively controlled and achieved. Robustness and fragility are characterized as two types of authentication errors. The authentication embedding and verification structures of the semi-fragile schemes are derived and implemented using lattice codes to minimize these errors. Based on the specific security requirements of authentication, cryptographic techniques are incorporated to design a secure authentication code structure. Using nested lattice codes, a new approach, called MSB-LSB decomposition, is proposed which we show to be more secure than previous methods. Tradeoffs between authentication distortion and implementation efficiency of the secure authentication code are also investigated. Simulations of semi-fragile authentication methods on real images demonstrate the effectiveness of the MSB-LSB approach in simultaneously achieving security, robustness, and fragility objectives.

126 citations


Journal ArticleDOI
TL;DR: This paper proposes a polynomial-time heuristic clustering algorithm that automatically determines the final hash length needed to satisfy a specified distortion and proves that the decision version of the clustering problem is NP-complete.
Abstract: A perceptual image hash function maps an image to a short binary string based on an image's appearance to the human eye. Perceptual image hashing is useful in image databases, watermarking, and authentication. In this paper, we decouple image hashing into feature extraction (intermediate hash) followed by data clustering (final hash). For any perceptually significant feature extractor, we propose a polynomial-time heuristic clustering algorithm that automatically determines the final hash length needed to satisfy a specified distortion. We prove that the decision version of our clustering problem is NP-complete. Based on the proposed algorithm, we develop two variations to facilitate perceptual robustness versus fragility tradeoffs. We validate the perceptual significance of our hash by testing under Stirmark attacks. Finally, we develop randomized clustering algorithms for the purposes of secure image hashing.

Journal ArticleDOI
TL;DR: This paper investigates detection-theoretic performance benchmarks for steganalysis when the cover data are modeled as a Markov chain and provides an analytically tractable framework whose predictions are consistent with the performance of practical Steganalysis algorithms that account for spatial dependencies.
Abstract: The difficult task of steganalysis, or the detection of the presence of hidden data, can be greatly aided by exploiting the correlations inherent in typical host or cover signals. In particular, several effective image steganalysis techniques are based on the strong interpixel dependencies exhibited by natural images. Thus, existing theoretical benchmarks based on independent and identically distributed (i.i.d.) models for the cover data underestimate attainable steganalysis performance and, hence, overestimate the security of the steganography technique used for hiding the data. In this paper, we investigate detection-theoretic performance benchmarks for steganalysis when the cover data are modeled as a Markov chain. The main application explored here is steganalysis of data hidden in images. While the Markov chain model does not completely capture the spatial dependencies, it provides an analytically tractable framework whose predictions are consistent with the performance of practical steganalysis algorithms that account for spatial dependencies. Numerical results are provided for image steganalysis of spread-spectrum and perturbed quantization data hiding.
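
An illustrative version of the Markov-chain viewpoint: estimate transition matrices for cover and stego sequences and detect with a log-likelihood ratio. The state count and smoothing below are arbitrary illustrative choices, not the paper's benchmark setup:

```python
import numpy as np

N_STATES = 16  # coarse pixel-intensity bins; illustrative

def transition_matrix(seq: np.ndarray) -> np.ndarray:
    """Row-stochastic transition matrix of a quantized pixel sequence."""
    s = np.clip((seq.astype(float) * N_STATES / 256).astype(int), 0, N_STATES - 1)
    T = np.ones((N_STATES, N_STATES))             # Laplace smoothing
    np.add.at(T, (s[:-1], s[1:]), 1)
    return T / T.sum(axis=1, keepdims=True)

def log_likelihood(seq: np.ndarray, T: np.ndarray) -> float:
    s = np.clip((seq.astype(float) * N_STATES / 256).astype(int), 0, N_STATES - 1)
    return float(np.log(T[s[:-1], s[1:]]).sum())

def is_stego(seq, T_cover, T_stego) -> bool:
    """Likelihood-ratio test between the two fitted chain models."""
    return log_likelihood(seq, T_stego) > log_likelihood(seq, T_cover)
```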

Journal ArticleDOI
TL;DR: This work jointly considers the coding and embedding issues for coded fingerprinting systems and examines their performance in terms of collusion resistance, detection computational complexity, and distribution efficiency, and proposes a permuted subsegment embedding technique and a group-based joint coding and embedding technique.
Abstract: Digital fingerprinting protects multimedia content from illegal redistribution by uniquely marking every copy of the content distributed to each user. The collusion attack is a powerful attack where several different fingerprinted copies of the same content are combined together to attenuate or even remove the fingerprints. One major category of collusion-resistant fingerprinting employs an explicit step of coding. Most existing works on coded fingerprinting mainly focus on the code-level issues and treat the embedding issues through abstract assumptions without examining the overall performance. In this paper, we jointly consider the coding and embedding issues for coded fingerprinting systems and examine their performance in terms of collusion resistance, detection computational complexity, and distribution efficiency. Our studies show that coded fingerprinting has efficient detection but rather low collusion resistance. Taking advantage of joint coding and embedding, we propose a permuted subsegment embedding technique and a group-based joint coding and embedding technique to improve the collusion resistance of coded fingerprinting while maintaining its efficient detection. Experimental results show that the number of colluders that the proposed methods can resist is more than three times as many as that of the conventional coded fingerprinting approaches.

Journal ArticleDOI
TL;DR: A reversible embedding scheme for VQ-compressed images based on side matching and relocation is proposed, which achieves reversibility without using the location map.
Abstract: The reversible steganographic method allows an original image to be completely reconstructed from the stegoimage after the extraction of the embedded data. The traditional reversible embedding schemes are not suitable for images compressed using vector quantization (VQ) and usually require the use of the location map for reversibility. In this paper, we propose a reversible embedding scheme for VQ-compressed images that is based on side matching and relocation. The new method achieves reversibility without using the location map. The experimental results show that the proposed method is practical for VQ-compressed images and provides high image quality and embedding capacity.

Journal ArticleDOI
TL;DR: A new distance measure is proposed that better quantifies the similarity between two orientation fields than the conventional Euclidean and Manhattan distance measures and is applicable to large databases.
Abstract: This paper presents a front-end filtering algorithm for fingerprint identification, which uses orientation field and dominant ridge distance as retrieval features. We propose a new distance measure that better quantifies the similarity between two orientation fields than the conventional Euclidean and Manhattan distance measures. Furthermore, fingerprints in the database are clustered to facilitate a fast retrieval process that avoids exhaustive comparisons of an input fingerprint with all fingerprints in the database. This makes the proposed approach applicable to large databases. Experimental results on the National Institute of Standards and Technology Database 4 show consistently better retrieval performance of the proposed approach compared to other continuous and exclusive fingerprint classification methods as well as minutia-based indexing schemes.
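
To see why a plain Euclidean or Manhattan distance is a poor fit here, note that ridge orientations are axial, defined modulo pi, so 5 and 175 degrees should count as close. A minimal angle-aware measure in that spirit (an illustrative measure, not the paper's exact definition):

```python
import numpy as np

def orientation_distance(theta_a: np.ndarray, theta_b: np.ndarray) -> float:
    """Mean angular difference between two orientation fields (radians, mod pi)."""
    d = np.abs(theta_a - theta_b) % np.pi
    d = np.minimum(d, np.pi - d)      # orientations are axial: 0 equals pi
    return float(d.mean())
```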

Journal ArticleDOI
TL;DR: Methods are proposed to hide information in images with robustness against printing and scanning under blind decoding, along with a novel approach for estimating the rotation undergone by the image during the scanning process.
Abstract: Print-scan resilient data hiding finds important applications in document security and image copyright protection. This paper proposes methods to hide information into images that achieve robustness against printing and scanning with blind decoding. The selective embedding in low frequencies scheme hides information in the magnitude of selected low-frequency discrete Fourier transform coefficients. The differential quantization index modulation scheme embeds information in the phase spectrum of images by quantizing the difference in phase of adjacent frequency locations. A significant contribution of this paper is analytical and experimental modeling of the print-scan process, which forms the basis of the proposed embedding schemes. A novel approach for estimating the rotation undergone by the image during the scanning process is also proposed, which specifically exploits the knowledge of the digital halftoning scheme employed by the printer. Using the proposed methods, several hundred information bits can be embedded into images with perfect recovery against the print-scan operation. Moreover, the hidden images also survive several other attacks, such as Gaussian or median filtering, scaling or aspect ratio change, heavy JPEG compression, and rows and/or columns removal.
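
A hedged sketch of magnitude embedding in low-frequency DFT coefficients via parity quantization; the step size, coefficient selection, and quantization rule are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

DELTA = 50.0  # magnitude quantization step; illustrative

def embed_dft(image: np.ndarray, bits, coords):
    """Quantize |F(u, v)| to an even/odd multiple of DELTA to carry one bit.
    coords: low-frequency (u, v) pairs, assumed to avoid DC/Nyquist lines."""
    F = np.fft.fft2(image.astype(float))
    h, w = F.shape
    for (u, v), b in zip(coords, bits):
        mag, ph = np.abs(F[u, v]), np.angle(F[u, v])
        q = (2 * np.round((mag / DELTA - b) / 2) + b) * DELTA
        F[u, v] = q * np.exp(1j * ph)
        F[-u % h, -v % w] = np.conj(F[u, v])   # keep the image real-valued
    return np.real(np.fft.ifft2(F))

def extract_dft(image: np.ndarray, coords):
    F = np.fft.fft2(image.astype(float))
    return [int(np.round(np.abs(F[u, v]) / DELTA)) % 2 for u, v in coords]
```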

Journal ArticleDOI
TL;DR: Experimental results confirm that the proposed FFM based on the local triangle feature set is a reliable and effective algorithm for fingerprint matching with nonlinear distortions.
Abstract: Coping with nonlinear distortions in fingerprint matching is a challenging task. This paper proposes a novel method, a fuzzy feature match (FFM) based on a local triangle feature set to match the deformed fingerprints. The fingerprint is represented by the fuzzy feature set: the local triangle feature set. The similarity between the fuzzy feature sets is used to characterize the similarity between fingerprints. A fuzzy similarity measure for two triangles is introduced and extended to construct a similarity vector including the triangle-level similarities for all triangles in two fingerprints. Accordingly, a similarity vector pair is defined to illustrate the similarities between two fingerprints. The FFM method maps the similarity vector pair to a normalized value which quantifies the overall image-to-image similarity. The proposed algorithm has been evaluated with NIST 24 and FVC2004 fingerprint databases. Experimental results confirm that the proposed FFM based on the local triangle feature set is a reliable and effective algorithm for fingerprint matching with nonlinear distortions.

Journal ArticleDOI
TL;DR: The proposed steganalysis methods are successful in detecting hidden watermarks bearing low energy with high accuracy, and the simulation results show the improved performance of the proposed temporal-based methods over purely spatial methods.
Abstract: In this paper, we present effective steganalysis techniques for digital video sequences based on interframe collusion that exploits the temporal statistical visibility of a hidden message. Steganalysis is the process of detecting, with high probability, the presence of covert data in multimedia. Present image steganalysis algorithms, when applied directly to video sequences on a frame-by-frame basis, are suboptimal; we present methods that overcome this limitation by using redundant information present in the temporal domain to detect covert messages embedded via spread-spectrum steganography. Our performance gains are achieved by exploiting the collusion attack that has recently been studied in the field of digital video watermarking, together with pattern recognition tools. Through analysis and simulations, we evaluate the effectiveness of the video steganalysis based on linear collusion approaches. The proposed steganalysis methods are successful in detecting hidden watermarks bearing low energy with high accuracy. The simulation results also show the improved performance of the proposed temporal-based methods over purely spatial methods.
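
The linear-collusion core can be sketched in a few lines: average temporally adjacent frames to estimate the cover, subtract to expose a spread-spectrum residual, and flag frames with anomalous residual energy. The window size and decision test are illustrative:

```python
import numpy as np

def collusion_residual(frames: np.ndarray, k: int = 2) -> np.ndarray:
    """frames: (T, H, W). Each frame minus the average of its 2k neighbors."""
    T = frames.shape[0]
    res = np.empty(frames.shape)
    for t in range(T):
        idx = [s for s in range(t - k, t + k + 1) if 0 <= s < T and s != t]
        res[t] = frames[t].astype(float) - frames[idx].astype(float).mean(axis=0)
    return res

def suspicious_frames(frames: np.ndarray, factor: float = 2.0) -> np.ndarray:
    """Flag frames whose residual energy is well above the typical level."""
    energy = (collusion_residual(frames) ** 2).mean(axis=(1, 2))
    return energy > factor * np.median(energy)
```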

Journal ArticleDOI
TL;DR: The improved performance of the proposed detection scheme is justified theoretically for the case of a linear filtering plus noise attack and through extensive simulations; it is shown that in most cases the detector performs better if the proposed mask is employed.
Abstract: The aim of this paper is to improve the performance of spatial domain watermarking. To this end, a new perceptual mask and a new detection scheme are proposed. The proposed spatial perceptual mask is based on the cover image prediction error sequence and matches very well with the properties of the human visual system. It exhibits superior performance compared to existing spatial masking schemes. Moreover, it allows for a significantly increased strength of the watermark while, at the same time, the watermark visibility is decreased. The new blind detection scheme comprises an efficient prewhitening process and a correlation-based detector. The prewhitening process is based on the least-squares prediction error filter and substantially improves the detector's performance. The correlation-based detector that was selected is shown to be the most suitable for the problem at hand. The improved performance of the proposed detection scheme has been justified theoretically for the case of a linear filtering plus noise attack and through extensive simulations. The theoretical analysis is independent of the proposed mask and the derived expressions can be used for any watermarking technique based on spatial masking. It is shown, though, that in most cases the detector performs better if the proposed mask is employed.
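
A sketch of the detector side, with a fixed four-neighbor least-squares predictor standing in for the paper's least-squares prediction error filter (edge handling and the threshold are illustrative):

```python
import numpy as np

def prediction_error(img: np.ndarray) -> np.ndarray:
    """Residual of a least-squares four-neighbor predictor (edge wrap ignored)."""
    x = img.astype(float)
    nbrs = np.stack([np.roll(x, 1, 0), np.roll(x, -1, 0),
                     np.roll(x, 1, 1), np.roll(x, -1, 1)], axis=-1).reshape(-1, 4)
    w, *_ = np.linalg.lstsq(nbrs, x.ravel(), rcond=None)
    return x - (nbrs @ w).reshape(x.shape)

def detect(received: np.ndarray, watermark: np.ndarray, thresh: float) -> bool:
    """Normalized correlation applied after prewhitening both inputs."""
    e, v = prediction_error(received), prediction_error(watermark)
    rho = (e * v).sum() / (np.linalg.norm(e) * np.linalg.norm(v) + 1e-12)
    return rho > thresh
```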

Journal ArticleDOI
TL;DR: This paper introduces a framework for evaluating the performance limits of high-rate multilevel 2-D bar codes by studying an intersymbol-interference (ISI)-free, synchronous, and noiseless print-and-scan channel, and adapts the theory of multilevel coding with multistage decoding (MLC/MSD) to the paper's channel.
Abstract: In this paper, we deal with the design of high-rate multilevel 2-D bar codes for the print-and-scan channel. First, we introduce a framework for evaluating the performance limits of these codes by studying an intersymbol-interference (ISI)-free, synchronous, and noiseless print-and-scan channel, where the input and output alphabets are finite and the printer device uses halftoning to simulate multiple gray levels. Second, we present a new model for the print-and-scan channel specifically adapted to the problem of communications via multilevel 2-D bar codes. This model, inspired by our experimental work, assumes perfect synchronization and absence of ISI, but independence between the channel input and the noise is not assumed. We adapt the theory of multilevel coding with multistage decoding (MLC/MSD) to the print-and-scan channel. Finally, we present experimental results confirming the utility of our channel model, and showing that multilevel 2-D bar codes using MLC/MSD can reliably achieve the high-capacity storage requirements of many multimedia security and management applications.

Journal ArticleDOI
TL;DR: This paper generalizes Blonder's graphical passwords to arbitrary images and solves a robustness problem that this generalization entails and introduces a robust discretization, based on multigrid discretizations.
Abstract: This paper generalizes Blonder's graphical passwords to arbitrary images and solves a robustness problem that this generalization entails. The password consists of user-chosen click points in a displayed image. In order to store passwords in cryptographically hashed form, we need to prevent small uncertainties in the click points from having any effect on the password. We achieve this by introducing a robust discretization, based on multigrid discretization.
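
A minimal sketch of the idea, assuming an illustrative grid spacing and click tolerance: each coordinate is quantized with one of three offset grids chosen so the click lands safely inside a cell, the offsets are stored in the clear, and only the cell indices are hashed:

```python
import hashlib

SPACING, TOL = 30, 5   # grid cell size and click tolerance in pixels; illustrative

def safe_offset(v: int) -> int:
    """Pick a grid offset whose cell boundaries are at least TOL away from v."""
    for off in (0, SPACING // 3, 2 * SPACING // 3):
        pos = (v - off) % SPACING
        if TOL <= pos <= SPACING - TOL:
            return off
    raise AssertionError("unreachable when SPACING >= 6 * TOL")

def discretize(x: int, y: int):
    """Return clear-text offsets plus the cell indices that get hashed."""
    ox, oy = safe_offset(x), safe_offset(y)
    return (ox, oy), ((x - ox) // SPACING, (y - oy) // SPACING)

def enroll(clicks: list) -> tuple:
    offsets, cells = zip(*(discretize(x, y) for x, y in clicks))
    tag = hashlib.sha256(repr(list(cells)).encode()).hexdigest()
    return list(offsets), tag          # store offsets and hash; never the cells

def verify(clicks: list, offsets: list, tag: str) -> bool:
    cells = [((x - ox) // SPACING, (y - oy) // SPACING)
             for (x, y), (ox, oy) in zip(clicks, offsets)]
    return hashlib.sha256(repr(cells).encode()).hexdigest() == tag
```

Any re-click within TOL of the enrolled point falls in the same cell of the chosen offset grid, so the recomputed hash matches exactly.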

Journal ArticleDOI
TL;DR: This paper evaluates the performance of many easy-to-compute short-time Fourier transform features, such as Shannon entropy, Renyi entropy, spectral centroid, spectral bandwidth, spectral flatness measure, spectral crest factor, and Mel-frequency cepstral coefficients in modeling audio clips using GMM for fingerprinting.
Abstract: In audio fingerprinting, an audio clip must be recognized by matching an extracted fingerprint to a database of previously computed fingerprints. The fingerprints should reduce the dimensionality of the input significantly, provide discrimination among different audio clips, and, at the same time, be invariant to distorted versions of the same audio clip. In this paper, we design fingerprints addressing the above issues by modeling an audio clip by Gaussian mixture models (GMM). We evaluate the performance of many easy-to-compute short-time Fourier transform features, such as Shannon entropy, Renyi entropy, spectral centroid, spectral bandwidth, spectral flatness measure, spectral crest factor, and Mel-frequency cepstral coefficients in modeling audio clips using GMM for fingerprinting. We test the robustness of the fingerprints under a large number of distortions. To make the system robust, we use some of the distorted versions of the audio for training. However, we show that the audio fingerprints modeled using GMM are not only robust to the distortions used in training but also to distortions not used in training. Among the features tested, spectral centroid performs best, with an identification rate of 99.2% at a false positive rate of 10^-4. All of the features give an identification rate of more than 90% at a false positive rate of 10^-3.
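
The best-performing feature, the spectral centroid, is cheap to compute from the short-time Fourier transform: the magnitude-weighted mean frequency of each frame. A sketch with typical (illustrative) frame and hop sizes:

```python
import numpy as np

def spectral_centroid(signal: np.ndarray, sr: int,
                      frame: int = 1024, hop: int = 512) -> np.ndarray:
    """One centroid value (in Hz) per windowed frame of the signal."""
    freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
    window = np.hanning(frame)
    out = []
    for start in range(0, len(signal) - frame + 1, hop):
        mag = np.abs(np.fft.rfft(window * signal[start:start + frame]))
        out.append(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
    return np.array(out)   # this per-frame trajectory is then modeled by a GMM
```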

Journal ArticleDOI
TL;DR: Efficient, close to real-time algorithms for hand segmentation, localization and 3-D feature measurement are described and tested on an image database simulating a variety of working conditions and are shown to be similar to state-of-the-art hand geometry authentication techniques but without sacrificing the convenience of the user.
Abstract: In this paper, a biometric authentication system based on measurements of the user's three-dimensional (3-D) hand geometry is proposed. The system relies on a novel real-time and low-cost 3-D sensor that generates a dense range image of the scene. By exploiting 3-D information we are able to limit the constraints usually posed on the environment and the placement of the hand, and this greatly contributes to the unobtrusiveness of the system. Efficient, close to real-time algorithms for hand segmentation, localization and 3-D feature measurement are described and tested on an image database simulating a variety of working conditions. The performance of the system is shown to be similar to state-of-the-art hand geometry authentication techniques but without sacrificing the convenience of the user.

Journal ArticleDOI
TL;DR: This paper uses an empirical approach, the Chernoff bound, and a large-deviations approach to predict the performance of the iris-based identification system, and shows that the log-likelihood ratio with well-estimated maximum-likelihood parameters often outperforms the average Hamming distance statistic.
Abstract: Practical iris-based identification systems are easily accessible for data collection at the matching score level. In a typical setting, a video camera is used to collect a single frontal view image of good quality. The image is then preprocessed, encoded, and compared with all entries in the biometric database, resulting in a single highest matching score. In this paper, we assume that multiple scans from the same iris are available and design the decision rules based on this assumption. We consider the cases where vectors of matching scores may be described by a Gaussian model with dependent components under both genuine and impostor hypotheses. Two test statistics are designed: the plug-in log-likelihood ratio and the average Hamming distance. We further analyze the performance of filter-based iris recognition systems. The model fit is verified using the Shapiro-Wilk test for normality. We show that the log-likelihood ratio with well-estimated maximum-likelihood parameters often outperforms the average Hamming distance statistic. The problem of identification with M iris classes is further stated as an (M+1)-ary hypothesis testing problem. We use an empirical approach, the Chernoff bound, and a large-deviations approach to predict the performance of the iris-based identification system. The bound on the probability of error is evaluated as a function of the number of classes and the number of iris scans per class.
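
The two test statistics can be sketched directly, assuming a vector of matching scores from multiple scans and Gaussian model parameters estimated from genuine and impostor training data:

```python
import numpy as np
from scipy.stats import multivariate_normal

def average_hamming(scores: np.ndarray) -> float:
    """Mean of the per-scan Hamming-distance scores; accept if below a threshold."""
    return float(scores.mean())

def plugin_llr(scores, mu_gen, cov_gen, mu_imp, cov_imp) -> float:
    """Plug-in Gaussian log-likelihood ratio with dependent components;
    accept as genuine when the ratio exceeds a threshold."""
    return float(multivariate_normal.logpdf(scores, mu_gen, cov_gen)
                 - multivariate_normal.logpdf(scores, mu_imp, cov_imp))
```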

Journal ArticleDOI
TL;DR: The Noise Tolerant Message Authentication Code can tolerate a small number of errors, such as might be caused by a noisy communications channel, and gives an indication of the number and locations of the errors.
Abstract: This paper introduces a new construct, called the Noise Tolerant Message Authentication Code (NTMAC), for noisy message authentication. The NTMAC can tolerate a small number of errors, such as might be caused by a noisy communications channel. The NTMAC uses a conventional Message Authentication Code (MAC) in its constructions and it inherits the conventional MAC's resistance to forgeries. Furthermore, the NTMAC gives an indication of the number and locations of the errors.
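
A hedged sketch of the flavor of such a construction (not the paper's exact NTMAC): MAC each message block separately with truncated HMAC sub-tags, so a few corrupted blocks only invalidate, and thereby localize, their own sub-tags:

```python
import hashlib
import hmac

BLOCKS, SUBTAG_LEN = 8, 4  # illustrative parameters

def ntmac(key: bytes, msg: bytes) -> list:
    """One truncated HMAC sub-tag per block; the block index is MACed too,
    so blocks cannot be silently reordered."""
    size = -(-len(msg) // BLOCKS)  # ceiling division
    return [hmac.new(key, bytes([i]) + msg[i * size:(i + 1) * size],
                     hashlib.sha256).digest()[:SUBTAG_LEN]
            for i in range(BLOCKS)]

def verify(key: bytes, msg: bytes, tag: list, max_bad: int = 1):
    """Accept if at most max_bad sub-tags fail; also report error locations."""
    bad = [i for i, t in enumerate(ntmac(key, msg))
           if not hmac.compare_digest(t, tag[i])]
    return len(bad) <= max_bad, bad
```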

Journal ArticleDOI
TL;DR: Current QIM watermarking schemes have a relatively low security level against this scenario because a small number of observed watermarked signals yields a sufficiently accurate estimate of the secret dither.
Abstract: Security of quantization index modulation (QIM) watermarking methods is usually sought through a pseudorandom dither signal which randomizes the codebook. This dither plays the role of the secret key (i.e., a parameter only shared by the watermarking embedder and decoder), which prevents unauthorized embedding and/or decoding. However, if the same dither signal is reused, the observation of several watermarked signals can provide sufficient information for an attacker to estimate the dither signal. This paper focuses on the cases when the embedded messages are either known or constant. In the first part of this paper, a theoretical security analysis of QIM data hiding measures the information leakage about the secret dither as the mutual information between the dither and the watermarked signals. In the second part, we show how set-membership estimation techniques successfully provide accurate estimates of the dither from observed watermarked signals. The conclusion of this twofold study is that current QIM watermarking schemes have a relatively low security level against this scenario because a small number of observed watermarked signals yields a sufficiently accurate estimate of the secret dither. The analysis presented in this paper also serves as the basis for more involved scenarios.
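
For concreteness, basic dithered QIM embedding and decoding look as follows; each clean watermarked sample constrains the secret dither to a lattice coset, which is the leakage the paper's set-membership estimator exploits (step size is illustrative):

```python
import numpy as np

STEP = 8.0  # quantization step; illustrative

def qim_embed(x: np.ndarray, bits: np.ndarray, dither: np.ndarray) -> np.ndarray:
    """Quantize each sample onto the coset selected by its bit and the dither."""
    shift = dither + bits * (STEP / 2)
    return np.round((x - shift) / STEP) * STEP + shift

def qim_decode(y: np.ndarray, dither: np.ndarray) -> np.ndarray:
    """Pick the bit whose shifted lattice lies closest to each sample."""
    dists = []
    for b in (0, 1):
        shift = dither + b * (STEP / 2)
        dists.append(np.abs(y - (np.round((y - shift) / STEP) * STEP + shift)))
    return (dists[1] < dists[0]).astype(int)

# With a reused dither, every clean watermarked sample y satisfies
# (y - dither - b * STEP / 2) % STEP == 0, so each new observation
# shrinks the attacker's feasible set for the dither.
```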

Journal ArticleDOI
TL;DR: A new algorithm is proposed that performs Monte Carlo-based Bayesian online signature verification, achieving an EER of 1.2% against the MCYT signature corpus when random forgeries are used for learning and skilled forgeries for evaluation.
Abstract: Authentication of handwritten signatures is becoming increasingly important. With a rapid increase in the number of people who access Tablet PCs and PDAs, online signature verification is one of the most promising techniques for signature verification. This paper proposes a new algorithm that performs Monte Carlo-based Bayesian online signature verification. The new algorithm consists of a learning phase and a testing phase. In the learning phase, semi-parametric models are trained using the Markov Chain Monte Carlo (MCMC) technique to draw posterior samples of the parameters involved. In the testing phase, these samples are used to evaluate the probability that a signature is genuine. The proposed algorithm achieved an EER of 1.2% against the MCYT signature corpus where random forgeries are used for learning and skilled forgeries are used for evaluation. An experimental result is also reported with skilled forgery data for learning.

Journal ArticleDOI
TL;DR: A compressed-domain informed embedding algorithm, which incorporates the Lagrangian multiplier optimization approach and an adjustment procedure, is developed to achieve the functionality of dual protection of JPEG images.
Abstract: In this paper, the authors propose a watermarking scheme that embeds both image-dependent and fixed-part marks for dual protection (content authentication and copyright claim) of JPEG images. To achieve the goals of efficiency, imperceptibility, and robustness, a compressed-domain informed embedding algorithm, which incorporates the Lagrangian multiplier optimization approach and an adjustment procedure, is developed. A two-stage watermark extraction procedure is devised to achieve the functionality of dual protection. In the first stage, the semifragile watermark in each local channel is extracted for content authentication. Then, in the second stage, a weighted soft-decision decoder, which weights the signal detected in each channel according to the estimated channel condition, is used to improve the recovery rate of the fixed-part watermark for copyright protection. The experimental results show that the proposed scheme not only achieves dual protection of the image content, but also maintains higher visual quality (an average of 6.69 dB better than a comparable approach) for a specified level of watermark robustness. In addition, the overall computing load is low enough to be practical in real-time applications.

Journal ArticleDOI
TL;DR: The results of this study demonstrate that the proposed algorithm can blindly and successfully remove the visible watermarks without knowing the watermarking methods in advance.
Abstract: A novel image recovery algorithm for removing visible watermarks is presented. Independent component analysis (ICA) is utilized to separate source images from watermarked and reference images. Three independent component analysis approaches are examined in the proposed algorithm, which includes joint approximate diagonalization of eigenmatrices, second-order blind identification, and FastICA. Moreover, five different visible watermarking methods to embed uniform and linear-gradient watermarks are implemented. The experimental results show that visible watermarks are successfully removed, and that the proposed algorithm is independent of both the adopted ICA approach and the visible watermarking method. In the final experiment, several public domain images sourced from various websites are tested. The results of this study demonstrate that the proposed algorithm can blindly and successfully remove the visible watermarks without knowing the watermarking methods in advance.
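
The separation step can be sketched with scikit-learn's FastICA, treating the watermarked and reference images as two mixtures of two sources (host and watermark); the reshaping and sign/scale conventions here are illustrative:

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate(watermarked: np.ndarray, reference: np.ndarray):
    """Treat each pixel as a sample of two mixed observations and unmix."""
    X = np.stack([watermarked.ravel(), reference.ravel()], axis=1)
    sources = FastICA(n_components=2, random_state=0).fit_transform(X)
    # One recovered component approximates the host image, the other the
    # visible watermark; which is which (and their scale/sign) must be
    # resolved afterwards, e.g., by correlating against the reference.
    return [s.reshape(watermarked.shape) for s in sources.T]
```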

Journal ArticleDOI
TL;DR: The generalized N-ary AMAC algorithm and its probabilistic model are developed, and a statistical analysis characterizing the behavior of N-ary AMACs is provided along with simulations illustrating their properties.
Abstract: Approximate message authentication codes (AMACs) for binary alphabets have been introduced recently as noise-tolerant authenticators. Different from conventional "hard" message authentication schemes that are designed to detect even the slightest changes in messages, AMACs are designed to tolerate a small amount of noise in messages for applications where slight noise is acceptable, such as in multimedia communications. Binary AMACs, however, have several limitations. First, they do not naturally deal with messages having N-ary alphabets (N > 2). AMACs are distance-preserving codes; i.e., the distance between two authentication tags reflects the distance between two messages. Binary representation of N-ary alphabets, however, may destroy the original distance information between N-ary messages. Second, binary AMACs lack a means to adjust authentication sensitivity. Different applications may require different sensitivities against noise. AMACs for N-ary alphabets are designed as a cryptographic primitive to overcome the limitations of binary AMACs. N-ary AMACs not only directly process messages having N-ary alphabets but also provide sensitivity control on the authentication of binary and of N-ary messages. The generalized N-ary AMAC algorithm and its probabilistic model are developed. A statistical analysis characterizing the behavior of N-ary AMACs is provided along with simulations illustrating their properties. Security analysis under chosen message attack is also developed.