Showing papers in "IEEE Transactions on Information Forensics and Security in 2008"


Journal ArticleDOI
TL;DR: A unified framework is provided for identifying the source digital camera of an image and for revealing digitally altered images using photo-response nonuniformity noise (PRNU), a unique stochastic fingerprint of imaging sensors.
Abstract: In this paper, we provide a unified framework for identifying the source digital camera from its images and for revealing digitally altered images using photo-response nonuniformity noise (PRNU), which is a unique stochastic fingerprint of imaging sensors. The PRNU is obtained using a maximum-likelihood estimator derived from a simplified model of the sensor output. Both digital forensics tasks are then achieved by detecting the presence of sensor PRNU in specific regions of the image under investigation. The detection is formulated as a hypothesis testing problem. The statistical distribution of the optimal test statistics is obtained using a predictor of the test statistics on small image blocks. The predictor enables more accurate and meaningful estimation of the probabilities of false rejection of a correct camera and missed detection of a tampered region. We also include a benchmark implementation of this framework and detailed experimental validation. The robustness of the proposed forensic methods is tested against common image processing operations, such as JPEG compression, gamma correction, resizing, and denoising.

850 citations
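
As a rough illustration of the sensor-fingerprint idea in this abstract (not the paper's exact estimator or test statistic), the sketch below estimates a PRNU-like fingerprint from flat-field images of one camera and correlates it with the noise residual of a query image; the Gaussian-filter denoiser, the flat-field shots, and the plain normalized-correlation detector are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Noise residual W = I - F(I), with a simple Gaussian blur standing in for the denoiser F."""
    return img - gaussian_filter(img, sigma)

def estimate_prnu(images):
    """ML-style fingerprint estimate K ~ sum(W_i * I_i) / sum(I_i^2) over same-camera images."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img in images:
        img = img.astype(np.float64)
        num += noise_residual(img) * img
        den += img ** 2
    return num / (den + 1e-12)

def prnu_correlation(query, fingerprint):
    """Normalized correlation between the query's residual and I*K; large values
    suggest the query was taken with the fingerprinted camera."""
    query = query.astype(np.float64)
    w = noise_residual(query).ravel()
    s = (query * fingerprint).ravel()
    w, s = w - w.mean(), s - s.mean()
    return float(np.dot(w, s) / (np.linalg.norm(w) * np.linalg.norm(s) + 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k_true = 0.02 * rng.standard_normal((64, 64))        # synthetic PRNU pattern

    def shoot(brightness, k):                            # flat-field shot plus readout noise
        return brightness * (1.0 + k) + rng.standard_normal((64, 64))

    k_hat = estimate_prnu([shoot(b, k_true) for b in rng.uniform(80, 200, 20)])
    same_cam = shoot(150.0, k_true)
    other_cam = shoot(150.0, 0.02 * rng.standard_normal((64, 64)))
    print("same camera:  %.3f" % prnu_correlation(same_cam, k_hat))
    print("other camera: %.3f" % prnu_correlation(other_cam, k_hat))
```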


Journal ArticleDOI
TL;DR: A new adaptive least-significant-bit (LSB) steganographic method using pixel-value differencing (PVD) that provides a larger embedding capacity and imperceptible stegoimages; compared with the earlier PVD and LSB replacement method of Wu et al., the experimental results show that the proposed approach provides both larger embedding capacity and higher image quality.
Abstract: This paper proposes a new adaptive least-significant-bit (LSB) steganographic method using pixel-value differencing (PVD) that provides a larger embedding capacity and imperceptible stegoimages. The method exploits the difference value of two consecutive pixels to estimate how many secret bits will be embedded into the two pixels. Pixels located in edge areas are embedded by a k-bit LSB substitution method with a larger value of k than that used for pixels located in smooth areas. The range of difference values is adaptively divided into lower, middle, and higher levels. For any pair of consecutive pixels, both pixels are embedded by the k-bit LSB substitution method; the value of k is adaptive and is decided by the level to which the difference value belongs. A delicate readjusting phase is used so that the difference value of the two consecutive pixels remains in the same level before and after embedding. Compared with the earlier PVD and LSB replacement method of Wu et al., our experimental results show that the proposed approach provides both larger embedding capacity and higher image quality.

429 citations
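
A minimal sketch of the pixel-pair mechanism described above, with made-up level boundaries and k values (the paper's exact ranges and readjusting rules differ): the difference of two consecutive pixels selects k, and both pixels then receive k-bit LSB substitution.

```python
def choose_k(diff):
    """Map the pixel-pair difference to an embedding depth k (illustrative thresholds)."""
    d = abs(diff)
    if d < 16:        # lower level: smooth area
        return 2
    if d < 64:        # middle level
        return 3
    return 4          # higher level: edge area

def embed_pair(p1, p2, bits):
    """Embed bits into one pixel pair by k-bit LSB substitution; k is chosen adaptively.
    Returns the modified pair and the number of payload bits consumed."""
    k = choose_k(p1 - p2)
    need = 2 * k
    chunk = bits[:need].ljust(need, "0")
    q1 = (p1 & ~((1 << k) - 1)) | int(chunk[:k], 2)
    q2 = (p2 & ~((1 << k) - 1)) | int(chunk[k:], 2)
    # Readjusting phase (simplified): the real method nudges q1/q2 so that the new
    # difference stays in the same level and the extractor recovers the same k.
    if choose_k(q1 - q2) != k:
        pass  # a full implementation would readjust q1 and q2 here
    return q1, q2, need

if __name__ == "__main__":
    p1, p2, used = embed_pair(98, 205, "1011001110")  # large difference -> edge area, larger k
    print(p1, p2, "bits embedded:", used)
```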


Journal ArticleDOI
TL;DR: Performance of the proposed scheme is shown to be better than the original difference expansion scheme by Tian and its improved version by Kamstra and Heijmans; this improvement is made possible by exploiting the quasi-Laplace distribution of the difference values.
Abstract: Reversible data embedding theory has marked a new epoch for data hiding and information security. In a reversible scheme, both the original data and the embedded data must be completely restorable. The difference expansion transform is a remarkable breakthrough in reversible data-hiding schemes. The difference expansion method achieves high embedding capacity and keeps distortion low. This paper shows that the difference expansion method with the simplified location map and new expandability can achieve more embedding capacity while keeping the distortion at the same level as the original expansion method. Performance of the proposed scheme is shown to be better than the original difference expansion scheme by Tian and its improved version by Kamstra and Heijmans. This improvement is made possible by exploiting the quasi-Laplace distribution of the difference values.

330 citations
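
For context, here is a small sketch of the classic difference expansion transform by Tian that the paper improves on (the simplified location map and the new expandability test are not reproduced): one payload bit is expanded into the difference of a pixel pair and can later be removed with exact recovery of the pair.

```python
def de_embed(x, y, bit):
    """Embed one bit into a pixel pair via difference expansion.
    Real schemes also check that the output stays in [0, 255] (expandability) and keep a location map."""
    l = (x + y) // 2            # integer average (invariant under embedding)
    h = x - y                   # difference
    h2 = 2 * h + bit            # expanded difference carrying the payload bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the payload bit and restore the original pixel pair exactly."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 // 2
    return l + (h + 1) // 2, l - h // 2, bit

if __name__ == "__main__":
    x2, y2 = de_embed(206, 201, 1)
    print((x2, y2), de_extract(x2, y2))   # (209, 198) (206, 201, 1)
```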


Journal ArticleDOI
TL;DR: It is shown that interpolated signals and their derivatives contain specific detectable periodic properties, and a blind, efficient, and automatic method capable of finding traces of resampling and interpolation is proposed.
Abstract: In this paper, we analyze and analytically describe the specific statistical changes brought into the covariance structure of a signal by the interpolation process. We show that interpolated signals and their derivatives contain specific detectable periodic properties. Based on this, we propose a blind, efficient, and automatic method capable of finding traces of resampling and interpolation. The proposed method can be very useful in many areas, especially in image security and authentication. For instance, when two or more images are spliced together to create high-quality and consistent image forgeries, geometric transformations such as scaling, rotation, or skewing are almost always needed. These procedures are typically based on a resampling and interpolation step. By having a method capable of detecting the traces of resampling, we can significantly reduce the successful usage of such forgeries. Among other points, the presented method is also very useful for estimating the factors of the applied geometric transformations.

304 citations
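
The toy sketch below illustrates the periodicity cue exploited above under simplifying assumptions (a 1-D signal, linear interpolation, and a second difference standing in for the derivative): the variance of an interpolated signal's derivative oscillates with the resampling phase, which shows up as a peak in the Fourier magnitude of the squared derivative.

```python
import numpy as np

def periodicity_spectrum(signal):
    """Magnitude spectrum of the mean-removed squared second difference; resampled
    signals show strong peaks at frequencies tied to the resampling factor."""
    d2 = np.diff(signal, n=2)
    p = d2 ** 2
    return np.abs(np.fft.rfft(p - p.mean()))

def resampling_score(signal):
    """Crude 'traces of resampling' score: peak-to-median ratio of the spectrum."""
    s = periodicity_spectrum(signal)[1:]
    return float(s.max() / np.median(s))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    original = rng.standard_normal(4000).cumsum()                  # random-walk test signal
    grid = np.arange(0, len(original) - 1, 3 / 4)                  # upsample by a factor of 4/3
    resampled = np.interp(grid, np.arange(len(original)), original)
    print("original score:  %6.1f" % resampling_score(original))
    print("resampled score: %6.1f" % resampling_score(resampled))  # much larger peak
```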


Journal ArticleDOI
TL;DR: A method for the detection of double JPEG compression and a maximum-likelihood estimator of the primary quality factor are presented, which are essential for constructing accurate targeted and blind steganalysis methods for JPEG images.
Abstract: This paper presents a method for the detection of double JPEG compression and a maximum-likelihood estimator of the primary quality factor. These methods are essential for the construction of accurate targeted and blind steganalysis methods for JPEG images. The proposed methods use support vector machine classifiers with feature vectors formed by histograms of low-frequency discrete cosine transform coefficients. The performance of the algorithms is compared to selected prior art.

284 citations
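
A hedged sketch of the kind of feature vector described above: histograms of a few low-frequency block-DCT coefficients that would then be fed to a support vector machine. The specific DCT modes, histogram range, and quantization step below are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.fft import dctn

def block_dct(gray, block=8):
    """8x8 block DCT of a grayscale image (pixel values roughly in [0, 255])."""
    h, w = (d - d % block for d in gray.shape)
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block).swapaxes(1, 2)
    return dctn(blocks, axes=(-2, -1), norm="ortho")

def dct_histogram_features(gray, modes=((0, 1), (1, 0), (1, 1), (0, 2), (2, 0)), step=10.0):
    """Concatenate per-mode histograms of low-frequency DCT coefficients; double JPEG
    compression leaves characteristic peaks and gaps in exactly these histograms."""
    coeffs = block_dct(gray.astype(np.float64) - 128.0)
    feats = []
    for u, v in modes:
        c = np.round(coeffs[:, :, u, v] / step)          # crude re-quantization (assumed)
        hist, _ = np.histogram(c, bins=np.arange(-15.5, 16.5))
        feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)   # feature vector for an SVM classifier (e.g., sklearn.svm.SVC)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.integers(0, 256, size=(128, 128)).astype(np.float64)
    print(dct_histogram_features(img).shape)
```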


Journal ArticleDOI
TL;DR: A framework for nonintrusive digital image forensics in which the absence of camera-imposed fingerprints from a test image indicates that the test image is not a camera output and was possibly generated by other image production processes.
Abstract: Digital imaging has experienced tremendous growth in recent decades, and digital camera images have been used in a growing number of applications. With such increasing popularity and the availability of low-cost image editing software, the integrity of digital image content can no longer be taken for granted. This paper introduces a new methodology for the forensic analysis of digital camera images. The proposed method is based on the observation that many processing operations, both inside and outside acquisition devices, leave distinct intrinsic traces on digital images, and these intrinsic fingerprints can be identified and employed to verify the integrity of digital data. The intrinsic fingerprints of the various in-camera processing operations can be estimated through a detailed imaging model and its component analysis. Further processing applied to the camera-captured image is modeled as a manipulation filter, for which a blind deconvolution technique is applied to obtain a linear time-invariant approximation and to estimate the intrinsic fingerprints associated with these postcamera operations. The absence of camera-imposed fingerprints from a test image indicates that the test image is not a camera output and is possibly generated by other image production processes. Any changes or inconsistencies among the estimated camera-imposed fingerprints, or the presence of new types of fingerprints, suggest that the image has undergone some kind of processing after the initial capture, such as tampering or steganographic embedding. Through analysis and extensive experimental studies, this paper demonstrates the effectiveness of the proposed framework for nonintrusive digital image forensics.

281 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that using 28 small regions on the face allows for the highest level of 3D face recognition, and show the robustness of the algorithm by simulating large holes and artifacts in images.
Abstract: In this paper, we introduce a new system for 3D face recognition based on the fusion of results from a committee of regions that have been independently matched. Experimental results demonstrate that using 28 small regions on the face allows for the highest level of 3D face recognition. Score-based fusion is performed on the individual region match scores, and experimental results show that the Borda count and consensus voting methods yield higher performance than the standard sum, product, and min fusion rules. In addition, results are reported that demonstrate the robustness of our algorithm by simulating large holes and artifacts in images. To our knowledge, no other work has been published that uses a large number of 3D face regions for high-performance face matching. Rank-one recognition rates of 97.2% and verification rates of 93.2% at a 0.1% false accept rate are reported and compared to other methods published on the face recognition grand challenge v2 data set.

259 citations
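
To make the rank-based fusion concrete, here is a small sketch of Borda count and consensus voting over per-region match scores; the region count, gallery size, and score model are illustrative stand-ins, not the paper's experimental setup.

```python
import numpy as np

def borda_count(scores):
    """scores: (n_regions, n_gallery) similarity matrix, one row per matched face region.
    Each region ranks the gallery; a gallery face earns (n_gallery - rank) points per region."""
    n_regions, n_gallery = scores.shape
    points = np.zeros(n_gallery)
    for region_scores in scores:
        for rank, gallery_id in enumerate(np.argsort(-region_scores)):   # best match first
            points[gallery_id] += n_gallery - rank
    return int(np.argmax(points))

def consensus_voting(scores):
    """Each region votes only for its single best match; the most-voted face wins."""
    votes = np.bincount(np.argmax(scores, axis=1), minlength=scores.shape[1])
    return int(np.argmax(votes))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    scores = rng.random((28, 100))      # 28 regions matched against a 100-face gallery
    scores[:, 42] += 0.3                # make gallery face 42 the true identity
    print(borda_count(scores), consensus_voting(scores))    # expected: 42 42
```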


Journal ArticleDOI
TL;DR: A general analysis and design framework for authentication at the physical layer where the authentication information is transmitted concurrently with the data by superimposing a carefully designed secret modulation on the waveforms is introduced.
Abstract: Authentication is the process where claims of identity are verified. Most mechanisms of authentication (e.g., digital signatures and certificates) exist above the physical layer, though some (e.g., spread-spectrum communications) exist at the physical layer, often with an additional cost in bandwidth. This paper introduces a general analysis and design framework for authentication at the physical layer where the authentication information is transmitted concurrently with the data. By superimposing a carefully designed secret modulation on the waveforms, authentication is added to the signal without requiring the additional bandwidth that spread-spectrum methods do. The authentication is designed to be stealthy to the uninformed user, robust to interference, and secure for identity verification. The tradeoffs between these three goals are identified and analyzed in block fading channels. The use of the authentication for channel estimation is also considered, and an improved bit-error rate is demonstrated for time-varying channels. Finally, simulation results are given that demonstrate the potential application of this authentication technique.

236 citations
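
A toy baseband sketch of the superposition idea (not the paper's tag construction, channel model, or detector): a low-power tag derived from the message and a shared key is added to the data waveform, so an informed receiver can verify it while the tag stays buried in the noise for everyone else.

```python
import hashlib
import numpy as np

def make_tag(message: bytes, key: bytes, length: int) -> np.ndarray:
    """Pseudorandom +/-1 tag derived from the message and the shared secret key."""
    seed = int.from_bytes(hashlib.sha256(key + message).digest()[:8], "big")
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=length)

def transmit(data_symbols, message, key, rho=0.1):
    """Superimpose the tag at power fraction rho^2, without using extra bandwidth."""
    tag = make_tag(message, key, len(data_symbols))
    return np.sqrt(1 - rho ** 2) * data_symbols + rho * tag

def authenticate(received, data_estimate, message, key, rho=0.1, threshold=0.5):
    """Correlate the residual with the expected tag; genuine signals correlate strongly."""
    residual = received - np.sqrt(1 - rho ** 2) * data_estimate
    tag = make_tag(message, key, len(received))
    stat = float(np.dot(residual, tag) / (rho * len(received)))
    return stat > threshold, stat

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    data = rng.choice([-1.0, 1.0], size=2000)            # BPSK data symbols
    msg, key = b"hello", b"shared-secret"
    rx = transmit(data, msg, key) + 0.2 * rng.standard_normal(2000)   # AWGN channel
    print(authenticate(rx, data, msg, key))               # expected: (True, ~1.0)
    print(authenticate(rx, data, b"forged", key))          # expected: (False, ~0.0)
```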


Journal ArticleDOI
TL;DR: This paper presents two vector watermarking schemes that are based on the use of complex and quaternion Fourier transforms and demonstrates, for the first time, how to embed watermarks into the frequency domain that is consistent with the human visual system.
Abstract: This paper presents two vector watermarking schemes that are based on the use of complex and quaternion Fourier transforms and demonstrates, for the first time, how to embed watermarks into a frequency domain that is consistent with our human visual system. Watermark casting is performed by estimating the just-noticeable distortion of the images to ensure watermark invisibility. The first method encodes the chromatic content of a color image into the CIE chromaticity coordinates, while the achromatic content is encoded as a CIE tristimulus value. Color watermarks (yellow and blue) are embedded in the frequency domain of the chromatic channels by using the spatiochromatic discrete Fourier transform, which first encodes the two chromaticity coordinates as complex values, followed by a single discrete Fourier transform. The most interesting characteristic of the scheme is the possibility of performing watermarking in the frequency domain of the chromatic components. The second method encodes the components of color images, and watermarks are embedded as vectors in the frequency domain of the channels by using the quaternion Fourier transform. Robustness is achieved by embedding a watermark in the coefficient with positive frequency, which spreads it to all color components in the spatial domain, and invisibility is satisfied by modifying the coefficient with negative frequency, such that the combined effects of the two are insensitive to human eyes. Experimental results demonstrate that the two proposed algorithms perform better than two existing algorithms (AC- and discrete cosine transform-based schemes).

210 citations
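
As a rough illustration of the first scheme's core trick, two chromatic channels are packed into one complex image before a single 2-D DFT, whose positive- and negative-frequency coefficients are then independent; the coefficient choice, the additive rule, and the non-blind detector below are simplifying assumptions (the CIE conversion and the just-noticeable-distortion model are omitted).

```python
import numpy as np

def embed_chromatic_watermark(chroma_a, chroma_b, positions, strength=2.0):
    """Pack two chromatic channels as a complex image, take one 2-D DFT, and add the
    watermark to selected positive-frequency coefficients; a full scheme would also
    adjust the negative-frequency counterparts to control visibility."""
    z = chroma_a.astype(np.float64) + 1j * chroma_b.astype(np.float64)
    Z = np.fft.fft2(z)
    for (u, v) in positions:
        Z[u, v] += strength * abs(Z[u, v]) * (1 + 1j)     # simple additive mark (assumed)
    z_marked = np.fft.ifft2(Z)
    return z_marked.real, z_marked.imag                   # watermarked chromatic channels

def detect_chromatic_watermark(chroma_a, chroma_b, ref_a, ref_b, positions):
    """Non-blind check: compare marked and reference coefficient magnitudes at the marked positions."""
    Z = np.fft.fft2(chroma_a + 1j * chroma_b)
    R = np.fft.fft2(ref_a + 1j * ref_b)
    return float(np.mean([abs(Z[u, v]) - abs(R[u, v]) for (u, v) in positions]))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    a, b = rng.random((64, 64)), rng.random((64, 64))
    pos = [(5, 7), (9, 3), (12, 12)]
    wa, wb = embed_chromatic_watermark(a, b, pos)
    print("detector response:", detect_chromatic_watermark(wa, wb, a, b, pos))
```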


Journal ArticleDOI
TL;DR: Early findings on "counter-forensic" techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
Abstract: Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on "counter-forensic" techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.

201 citations


Journal ArticleDOI
TL;DR: An elastically deformable model algorithm that establishes correspondence among a set of faces is proposed first and then bilinear models that decouple the identity and facial expression factors are constructed, enabling face recognition invariant to facial expressions and facialexpression recognition with unknown identity.
Abstract: In this paper, we explore bilinear models for jointly addressing 3D face and facial expression recognition. An elastically deformable model algorithm that establishes correspondence among a set of faces is proposed first and then bilinear models that decouple the identity and facial expression factors are constructed. Fitting these models to unknown faces enables us to perform face recognition invariant to facial expressions and facial expression recognition with unknown identity. A quantitative evaluation of the proposed technique is conducted on the publicly available BU-3DFE face database in comparison with our previous work on face recognition and other state-of-the-art algorithms for facial expression recognition. Experimental results demonstrate an overall 90.5% facial expression recognition rate and an 86% rank-1 face recognition rate.

Journal ArticleDOI
TL;DR: The 1-SVM-based multiclass classification approach outperforms a conventional hidden Markov model-based system in the experiments conducted; the improvement in the error rate can reach 50%.
Abstract: This paper presents a method aimed at recognizing environmental sounds for surveillance and security applications. We propose to apply one-class support vector machines (1-SVMs) together with a sophisticated dissimilarity measure in order to address audio classification, and more specifically, sound recognition. We illustrate the performance of this method on an audio database, which consists of 1015 sounds belonging to nine classes. The database used presents high intraclass diversity in terms of signal properties and some interclass similarities. A large discrepancy in the number of items in each class implies nonuniform probability of sound appearances. The method proceeds as follows: first, the use of a set of state-of-the-art audio features is studied. Then, we introduce a set of novel features obtained by combining elementary features. Experiments conducted on a nine-class classification problem show the superiority of this novel sound recognition method. The best recognition accuracy (96.89%) is obtained when combining wavelet-based features, MFCCs, and individual temporal and frequency features. Our 1-SVM-based multiclass classification approach outperforms a conventional hidden Markov model-based system in the experiments conducted; the improvement in the error rate can reach 50%. Besides, we provide empirical results showing that the single-class SVM outperforms a combination of binary SVMs. Additional experiments demonstrate that our method is robust to environmental noise.
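
A minimal sketch of the one-1-SVM-per-class idea using scikit-learn on generic feature vectors; the paper's dissimilarity measure, wavelet/MFCC features, and parameter choices are not reproduced, so the RBF kernel, nu, and class names below are placeholders.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def train_one_class_models(features_by_class, nu=0.1, gamma="scale"):
    """Fit one 1-SVM per sound class on that class's feature vectors only."""
    return {label: OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(X)
            for label, X in features_by_class.items()}

def classify(models, x):
    """Assign the sample to the class whose 1-SVM returns the largest decision score."""
    scores = {label: m.decision_function(x.reshape(1, -1))[0] for label, m in models.items()}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    # Synthetic stand-ins for per-class audio feature vectors (e.g., MFCC / wavelet statistics).
    data = {"glass_break": rng.normal(0.0, 1.0, (100, 20)),
            "gunshot":     rng.normal(3.0, 1.0, (100, 20)),
            "scream":      rng.normal(-3.0, 1.0, (100, 20))}
    models = train_one_class_models(data)
    probe = rng.normal(3.0, 1.0, 20)
    print(classify(models, probe)[0])     # expected: gunshot
```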

Journal ArticleDOI
TL;DR: Experimental results show that the proposed detection scheme can be used in identification of the source digital single lens reflex camera at low false positive rates, even under heavy compression and downsampling.
Abstract: Digital single lens reflex cameras suffer from a well-known sensor dust problem due to interchangeable lenses that they deploy. The dust particles that settle in front of the imaging sensor create a persistent pattern in all captured images. In this paper, we propose a novel source camera identification method based on detection and matching of these dust-spot characteristics. Dust spots in the image are detected based on a (Gaussian) intensity loss model and shape properties. To prevent false detections, lens parameter-dependent characteristics of dust spots are also taken into consideration. Experimental results show that the proposed detection scheme can be used in identification of the source digital single lens reflex camera at low false positive rates, even under heavy compression and downsampling.

Journal ArticleDOI
TL;DR: This paper demonstrates that the camera model identification algorithm achieves more accurate identification, and that it can be made robust to a host of image manipulations.
Abstract: The various image-processing stages in a digital camera pipeline leave telltale footprints, which can be exploited as forensic signatures. These footprints consist of pixel defects, unevenness of the responses in the charge-coupled device sensor, and dark current noise, and may also originate from the proprietary interpolation algorithms involved in the color filter array. Various imaging device (camera, scanner, etc.) identification methods are based on the analysis of these artifacts. In this paper, we set out to explore three sets of forensic features, namely binary similarity measures, image-quality measures, and higher order wavelet statistics, in conjunction with SVM classifiers to identify the originating camera. We demonstrate that our camera model identification algorithm achieves more accurate identification, and that it can be made robust to a host of image manipulations. The algorithm has the potential to discriminate camera units within the same model.

Journal ArticleDOI
TL;DR: It is shown that it is possible to compress iris images to as little as 2000 bytes with minimal impact on recognition performance, approaching a convergence of image data size and template size.
Abstract: We investigate three schemes for severe compression of iris images in order to assess what their impact would be on recognition performance of the algorithms deployed today for identifying people by this biometric feature. Currently, standard iris images are 600 times larger than the IrisCode templates computed from them for database storage and search; but it is administratively desired that iris data should be stored, transmitted, and embedded in media in the form of images rather than as templates computed with proprietary algorithms. To reconcile that goal with its implications for bandwidth and storage, we present schemes that combine region-of-interest isolation with JPEG and JPEG2000 compression at severe levels, and we test them using a publicly available database of iris images. We show that it is possible to compress iris images to as little as 2000 bytes with minimal impact on recognition performance. Only some 2% to 3% of the bits in the IrisCode templates are changed by such severe image compression, and we calculate the entropy per code bit introduced by each compression scheme. Error tradeoff curve metrics document very good recognition performance despite this reduction in data size by a net factor of 150, approaching a convergence of image data size and template size.
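
As a small how-to in the spirit of the scheme above, the sketch below crops a region of interest and searches the JPEG quality setting so the encoded image fits within roughly 2000 bytes; the ROI coordinates and the use of baseline JPEG via Pillow (rather than JPEG2000 with ROI coding) are assumptions.

```python
import io
from PIL import Image

def compress_to_budget(img: Image.Image, max_bytes=2000, roi=None):
    """Optionally crop to an iris region of interest, then binary-search the JPEG quality
    so the encoded image fits the byte budget. Returns (jpeg_bytes, quality) or None."""
    if roi is not None:
        img = img.crop(roi)                    # (left, upper, right, lower)
    img = img.convert("L")                     # iris matching operates on grayscale
    lo, hi, best = 1, 95, None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        if buf.tell() <= max_bytes:
            best = (buf.getvalue(), q)
            lo = q + 1                          # fits: try higher quality
        else:
            hi = q - 1
    return best

if __name__ == "__main__":
    eye = Image.new("L", (640, 480), color=128)          # placeholder for a real iris image
    out = compress_to_budget(eye, max_bytes=2000, roi=(160, 80, 480, 400))
    if out is not None:
        data, quality = out
        print(len(data), "bytes at quality", quality)
```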

Journal ArticleDOI
TL;DR: A fast search algorithm for a large fuzzy database that stores iris codes or data with a similar binary structure, Beacon Guided Search (BGS) is proposed, showing a substantial improvement in search speed with a negligible loss of accuracy.
Abstract: In this paper, we propose a fast search algorithm for a large fuzzy database that stores iris codes or data with a similar binary structure. The fuzzy nature of iris codes and their high dimensionality render many modern search algorithms, mainly relying on sorting and hashing, inadequate. The algorithm that is used in all current public deployments of iris recognition is based on a brute force exhaustive search through a database of iris codes, looking for a match that is close enough. Our new technique, Beacon Guided Search (BGS), tackles this problem by dispersing a multitude of "beacons" in the search space. Despite random bit errors, iris codes from the same eye are more likely to collide with the same beacons than those from different eyes. By counting the number of collisions, BGS shrinks the search range dramatically with a negligible loss of precision. We evaluate this technique using 632,500 iris codes enrolled in the United Arab Emirates (UAE) border control system, showing a substantial improvement in search speed with a negligible loss of accuracy. In addition, we demonstrate that the empirical results match theoretical predictions.
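
A toy sketch of the beacon idea, collision counting used to prune an exhaustive Hamming search; the beacon construction here simply keys on fixed bit slices of the code, which is an assumption rather than the paper's indexing scheme.

```python
import numpy as np
from collections import defaultdict

BEACON_BITS = 10           # each beacon space keys on a 10-bit slice (illustrative)
N_SPACES = 16              # number of independent beacon spaces
MIN_COLLISIONS = 3         # candidates must collide in at least this many spaces

def beacons(code):
    """Map an iris-code bit vector to one beacon per beacon space."""
    return [tuple(code[i * BEACON_BITS:(i + 1) * BEACON_BITS]) for i in range(N_SPACES)]

def build_index(database):
    index = [defaultdict(list) for _ in range(N_SPACES)]
    for ident, code in database.items():
        for s, b in enumerate(beacons(code)):
            index[s][b].append(ident)
    return index

def bgs_search(index, database, probe, max_hamming=0.32):
    """Count beacon collisions, then run exact Hamming comparison only on frequent colliders."""
    counts = defaultdict(int)
    for s, b in enumerate(beacons(probe)):
        for ident in index[s][b]:
            counts[ident] += 1
    best = None
    for ident, c in counts.items():
        if c >= MIN_COLLISIONS:
            hd = float(np.mean(database[ident] != probe))
            if hd <= max_hamming and (best is None or hd < best[1]):
                best = (ident, hd)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    db = {i: rng.integers(0, 2, 256) for i in range(5000)}
    idx = build_index(db)
    probe = db[1234].copy()
    probe[rng.choice(256, size=20, replace=False)] ^= 1   # ~8% random bit errors
    print(bgs_search(idx, db, probe))                     # expected: (1234, ~0.08)
```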

Journal ArticleDOI
TL;DR: Experiments on the face recognition grand challenge (FRGC) and the biometric experimentation environment (BEE) show that for the most challenging FRGC version 2 Experiment 4, the ICS, DCS, and UCS achieve the face verification rate (ROC III) of 73.69%, 71.42%, and 69.92%, respectively.
Abstract: This paper presents learning the uncorrelated color space (UCS), the independent color space (ICS), and the discriminating color space (DCS) for face recognition. The new color spaces are derived from the RGB color space that defines the tristimuli R, G, and B component images. While the UCS decorrelates its three component images using principal component analysis (PCA), the ICS derives three independent component images by means of blind source separation, such as independent component analysis (ICA). The DCS, which applies discriminant analysis, defines three new component images that are effective for face recognition. Effective color image representation is formed in these color spaces by concatenating their component images, and efficient color image classification is achieved using the effective color image representation and an enhanced Fisher model (EFM). Experiments on the face recognition grand challenge (FRGC) and the biometric experimentation environment (BEE) show that for the most challenging FRGC version 2 Experiment 4, which contains 12 776 training images, 16 028 controlled target images, and 8014 uncontrolled query images, the ICS, DCS, and UCS achieve the face verification rate (ROC III) of 73.69%, 71.42%, and 69.92%, respectively, at the false accept rate of 0.1%, compared to the RGB color space, the 2-D Karhunen-Loeve (KL) color space, and the FRGC baseline algorithm with the face verification rate of 67.13%, 59.16%, and 11.86%, respectively, with the same false accept rate.
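
To illustrate the uncorrelated color space (UCS) step only, this sketch learns a PCA decorrelation of the R, G, B components from training pixels and applies it to an image; the ICS/DCS variants and the enhanced Fisher model classifier are outside its scope.

```python
import numpy as np

def learn_ucs(training_pixels):
    """PCA on RGB triplets: returns the mean and a 3x3 decorrelating transform."""
    X = training_pixels.reshape(-1, 3).astype(np.float64)
    mean = X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X - mean, rowvar=False))
    order = np.argsort(eigvals)[::-1]          # components ordered by decreasing variance
    return mean, eigvecs[:, order]

def to_ucs(image, mean, transform):
    """Project an HxWx3 RGB image into its three uncorrelated component images."""
    flat = image.reshape(-1, 3).astype(np.float64) - mean
    return (flat @ transform).reshape(image.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    luminance = rng.normal(128, 40, 1000)       # strongly correlated synthetic R, G, B samples
    train = np.stack([luminance + rng.normal(0, 10, 1000) for _ in range(3)], axis=1)
    mean, T = learn_ucs(train)
    components = (train - mean) @ T
    print(np.round(np.corrcoef(components, rowvar=False), 3))   # off-diagonals near zero
    face = rng.integers(0, 256, (32, 32, 3))
    print(to_ucs(face, mean, T).shape)                          # (32, 32, 3) component images
```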

Journal ArticleDOI
TL;DR: A method that models discrepancies between biometric measurements as an erasure and error channel is proposed, and it is shown that two-dimensional iterative min-sum decoding of properly chosen product codes almost reaches the capacity of this channel.
Abstract: Fuzzy commitment schemes, introduced as a link between biometrics and cryptography, are a way to handle biometric data matching as an error-correction issue. We focus here on finding the best error-correcting code with respect to a given database of biometric data. We propose a method that models discrepancies between biometric measurements as an erasure and error channel, and we estimate its capacity. We then show that two-dimensional iterative min-sum decoding of properly chosen product codes almost reaches the capacity of this channel. This leads to practical fuzzy commitment schemes that are close to theoretical limits. We test our techniques on public iris and fingerprint databases and validate our findings.
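
For orientation, here is a sketch of the underlying fuzzy commitment primitive with a simple repetition code standing in for the product codes and two-dimensional min-sum decoding studied in the paper; the code length, repetition factor, and hash choice are illustrative.

```python
import hashlib
import numpy as np

REP = 9    # toy repetition code: each secret bit is repeated 9 times

def commit(biometric_bits, rng):
    """Fuzzy commitment: draw a random codeword c and store (hash(secret), c XOR biometric)."""
    k = len(biometric_bits) // REP
    secret = rng.integers(0, 2, k)
    codeword = np.repeat(secret, REP)
    offset = codeword ^ biometric_bits[:k * REP]
    return hashlib.sha256(secret.tobytes()).hexdigest(), offset

def verify(commitment, offset, probe_bits):
    """XOR a fresh measurement with the offset, decode the noisy codeword, compare hashes."""
    noisy = offset ^ probe_bits[:len(offset)]
    decoded = (noisy.reshape(-1, REP).sum(axis=1) > REP // 2).astype(noisy.dtype)  # majority vote
    return hashlib.sha256(decoded.tobytes()).hexdigest() == commitment

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    enroll = rng.integers(0, 2, 2048)                       # enrollment biometric bits
    probe = enroll.copy()
    probe[rng.choice(2048, size=80, replace=False)] ^= 1    # ~4% measurement noise
    h, offset = commit(enroll, rng)
    print(verify(h, offset, probe), verify(h, offset, rng.integers(0, 2, 2048)))  # expected: True False
```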

Journal ArticleDOI
TL;DR: It is shown that circular watermarking has robustness comparable to that of the insecure classical spread spectrum, and information leakage measures are proposed to highlight the security level of the new spread-spectrum modulations.
Abstract: It has recently been discovered that using pseudorandom sequences as carriers in spread-spectrum techniques for data-hiding is not at all a sufficient condition for ensuring data-hiding security. Using proper and realistic a priori hypotheses on the message distribution, it is possible to accurately estimate the secret carriers by casting this estimation problem into a blind source separation problem. After reviewing relevant works on spread-spectrum security for watermarking, we further develop this topic to introduce the concept of security classes, which broadens previous notions in watermarking security and fills the gap with steganography security as defined by Cachin. We define four security classes, namely, in order of increasing security: insecurity, key security, subspace security, and stegosecurity. To illustrate these views, we present two new modulations for truly secure watermarking in the watermark-only-attack (WOA) framework. The first one is called natural watermarking and can be made either stegosecure or subspace secure. The second is called circular watermarking and is key secure. We show that circular watermarking has robustness comparable to that of the insecure classical spread spectrum. We shall also propose information leakage measures to highlight the security level of our new spread-spectrum modulations.

Journal ArticleDOI
TL;DR: An encrypted wireless sensor network (eWSN) concept is introduced in which stochastic enciphers operating on binary sensor outputs disguise those outputs from unauthorized observers.
Abstract: We consider decentralized estimation of a noise-corrupted deterministic signal in a bandwidth-constrained sensor network communicating through an insecure medium. Each sensor collects a noise-corrupted version, performs a local quantization, and transmits a 1-bit message to an ally fusion center through a wireless medium where the sensor outputs are vulnerable to unauthorized observation from enemy/third-party fusion centers. In this paper, we introduce an encrypted wireless sensor network (eWSN) concept in which stochastic enciphers operating on binary sensor outputs are introduced to disguise the sensor outputs. Noting that the plaintext (original) and ciphertext (disguised) messages are constrained to a single bit due to bandwidth constraints, we consider a binary channel-like scheme to probabilistically encipher (i.e., flip) the sensor outputs. We first consider a symmetric key encryption case where the "0" and "1" enciphering probabilities are equal. The key is represented by the bit enciphering probability. Specifically, we derive the optimal estimator of the deterministic signal approached from a maximum-likelihood perspective and the Cramer-Rao lower bound for the estimation problem utilizing the key. Furthermore, we analyze the effect of the considered cryptosystem on enemy fusion centers that are unaware of the fact that the WSN is encrypted (i.e., we derive the bias, variance, and mean square error (MSE) of the enemy fusion center). We then extend the cryptosystem to admit unequal enciphering schemes for "0" and "1", and analyze the estimation problem from the perspectives of both the ally fusion center (which has access to the enciphering keys) and (third-party) enemy fusion centers. The results show that, when designed properly, a significant amount of bias and MSE can be introduced to an enemy fusion center, while the cost to the ally fusion center is only a marginal increase in the estimation variance by a factor of (1 - Omega_1 - Omega_0)^(-2), where 1 - Omega_j, j = 0, 1, is the "j" enciphering probability (compared to the variance of a fusion center estimate operating in a vulnerable WSN).
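
A simulation sketch of the stochastic enciphering idea under simplifying assumptions (Gaussian sensing noise, a known quantization threshold, and the symmetric-key case): the ally fusion center inverts the known flipping probability before inverting the quantizer, while an unaware enemy center applying the same estimator obtains a biased result.

```python
import numpy as np
from scipy.stats import norm

def simulate_ewsn(theta=1.0, sigma=1.0, tau=0.0, n=20000, keep_prob=0.8, seed=10):
    """Sensors quantize noisy observations to 1 bit and flip each bit with
    probability 1 - keep_prob (the shared key) before transmission."""
    rng = np.random.default_rng(seed)
    bits = (theta + sigma * rng.standard_normal(n) > tau).astype(int)
    flips = rng.random(n) < (1 - keep_prob)
    return np.where(flips, 1 - bits, bits)

def estimate_theta(received, sigma=1.0, tau=0.0, keep_prob=1.0):
    """Invert the symmetric encipher, then invert the Gaussian quantizer:
    P(bit = 1) = 1 - Phi((tau - theta) / sigma)."""
    q = received.mean()
    p = (q - (1 - keep_prob)) / (2 * keep_prob - 1)       # de-cipher the '1'-probability
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return tau - sigma * norm.ppf(1 - p)

if __name__ == "__main__":
    rx = simulate_ewsn()
    print("ally (knows key): %.3f" % estimate_theta(rx, keep_prob=0.8))   # close to 1.0
    print("enemy (unaware):  %.3f" % estimate_theta(rx, keep_prob=1.0))   # biased estimate
```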

Journal ArticleDOI
TL;DR: The experimental results show that the proposed quality score is highly correlated with the recognition accuracy and is capable of predicting the recognition results.
Abstract: Poor quality images can significantly affect the accuracy of iris-recognition systems because they do not contain enough feature information. However, existing quality measures have focused on parameters or factors other than feature information. The quality of the features available for measurement is a combination of the distinctiveness of the iris region and the amount of iris region available. Some irises may only have a small area of changing patterns. Due to this, the proposed approach automatically selects the portions of the iris with the most distinguishable changing patterns to measure the feature information. The combination of occlusion and dilation determines the amount of iris region available and is considered in the proposed quality measure. The quality score is the fused result of the feature information score, the occlusion score, and the dilation score. The relationship between the quality score and recognition accuracy is evaluated using 2-D Gabor and 1-D Log-Gabor wavelet approaches and validated using a diverse data set. In addition, the proposed method is compared with the convolution matrix, spectrum energy, and Mexican hat wavelet methods. These three methods represent a variety of approaches for iris-quality measurement. The experimental results show that the proposed quality score is highly correlated with the recognition accuracy and is capable of predicting the recognition results.

Journal ArticleDOI
TL;DR: This work designs an FPGA-based architecture for anomaly detection in network transmissions and demonstrates the use of principal component analysis as an outlier detection method for NIDSs.
Abstract: Network intrusion detection systems (NIDSs) monitor network traffic for suspicious activity and alert the system or network administrator. With the onset of gigabit networks, current-generation networking components for NIDS will soon be insufficient for numerous reasons, most notably because the existing methods cannot support high-performance demands. Field-programmable gate arrays (FPGAs) are an attractive medium to handle both high throughput and adaptability to the dynamic nature of intrusion detection. In this work, we design an FPGA-based architecture for anomaly detection in network transmissions. We first develop a feature extraction module (FEM) which aims to summarize network information to be used at a later stage. Our FPGA implementation shows that we can achieve significant performance improvements compared to existing software and application-specific integrated-circuit implementations. Then, we go one step further and demonstrate the use of principal component analysis as an outlier detection method for NIDSs. The results show that our architecture correctly classifies attacks with detection rates exceeding 99% and false alarm rates as low as 1.95%. Moreover, using extensive pipelining and hardware parallelism, it can be shown that, for realistic workloads, our architectures for FEM and outlier analysis achieve 21.25- and 23.76-Gb/s core throughput, respectively.
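
The paper implements its detectors on an FPGA; purely to illustrate the principal-component outlier test it describes, here is a software sketch that scores feature vectors by their residual energy outside the principal subspace learned from normal traffic. The synthetic features and the 99th-percentile threshold are assumptions.

```python
import numpy as np

class PCAOutlierDetector:
    """Learn a principal subspace from normal traffic features and flag records whose
    residual energy outside that subspace exceeds a threshold."""

    def __init__(self, n_components=3):
        self.n_components = n_components

    def fit(self, X_normal):
        self.mean_ = X_normal.mean(axis=0)
        self.std_ = X_normal.std(axis=0) + 1e-9
        Z = (X_normal - self.mean_) / self.std_
        _, _, vt = np.linalg.svd(Z, full_matrices=False)
        self.components_ = vt[: self.n_components]
        self.threshold_ = np.percentile(self._residual(X_normal), 99)  # ~1% training false alarms
        return self

    def _residual(self, X):
        Z = (X - self.mean_) / self.std_
        proj = Z @ self.components_.T @ self.components_
        return np.sum((Z - proj) ** 2, axis=1)

    def predict(self, X):
        return self._residual(X) > self.threshold_     # True = anomaly

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    W = rng.normal(0, 1, (3, 8))                       # normal traffic lives near a 3-D subspace
    normal = rng.normal(0, 1, (5000, 3)) @ W + 0.1 * rng.normal(0, 1, (5000, 8))
    attacks = rng.normal(0, 1, (50, 8))                # attacks break that correlation structure
    det = PCAOutlierDetector(n_components=3).fit(normal)
    print("detection rate:", det.predict(attacks).mean(),
          " false alarm rate:", det.predict(normal).mean())
```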

Journal ArticleDOI
TL;DR: It is shown that by limiting the watermark to nonzero-quantized AC residuals in P-frames, the video bit-rate increase can be held to reasonable values, and a watermark detection algorithm is developed that has controllable performance.
Abstract: Most video watermarking algorithms embed the watermark in I-frames, but refrain from embedding in P- and B-frames, which are highly compressed by motion compensation. However, P-frames appear more frequently in the compressed video and their watermarking capacity should be exploited, despite the fact that embedding the watermark in P-frames can increase the video bit rate significantly. This paper gives a detailed overview of a common approach for embedding the watermark in I-frames. This common approach is adapted to use P-frames for video watermarking. We show that by limiting the watermark to nonzero-quantized AC residuals in P-frames, the video bit-rate increase can be held to reasonable values. Since the nonzero-quantized AC residuals in P-frames correspond to nonflat areas that are in motion, temporal and texture masking are exploited at the same time. We also propose embedding the watermark in nonzero-quantized AC residuals with spatial masking capacity in I-frames. Since the locations of the nonzero-quantized AC residuals are lost after decoding, we develop a watermark detection algorithm that does not depend on this knowledge. Our video watermark detection algorithm has controllable performance. We demonstrate the robustness of our proposed algorithm to several different attacks.

Journal ArticleDOI
TL;DR: A new criterion based on the concept of the k-nearest-neighbor simplex (kNNS), constructed from the k nearest neighbors, is presented to determine the class label of a new datum.
Abstract: Techniques for classification and feature extraction are often intertwined. In this paper, we contribute to these two aspects via the shared philosophy of simplexizing the sample set. For general classification, we present a new criterion based on the concept of the k-nearest-neighbor simplex (kNNS), which is constructed from the k nearest neighbors, to determine the class label of a new datum. For feature extraction, we develop a novel subspace learning algorithm, called discriminant simplex analysis (DSA), in which the intraclass compactness and interclass separability are both measured by kNNS distances. Comprehensive experiments on face recognition and lipreading validate the effectiveness of the DSA as well as the kNNS-based classification approach.
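
A hedged sketch of nearest-neighbor-simplex classification: for each class, the query's distance to the affine hull of its k nearest same-class neighbors (a relaxation of the simplex used here for brevity, since the true simplex also constrains the combination coefficients) decides the label; the DSA subspace learning step is not included.

```python
import numpy as np

def affine_hull_distance(query, points):
    """Distance from the query to the affine hull of the given points (a relaxation
    of the k-nearest-neighbor simplex, which would also require nonnegative,
    sum-to-one combination coefficients)."""
    base = points[0]
    A = (points[1:] - base).T                       # directions spanning the hull
    if A.size == 0:
        return float(np.linalg.norm(query - base))
    coef, *_ = np.linalg.lstsq(A, query - base, rcond=None)
    return float(np.linalg.norm(query - base - A @ coef))

def knn_simplex_classify(query, X, y, k=5):
    """For each class, build the simplex-like model from its k nearest samples and
    assign the query to the class with the smallest distance."""
    best_label, best_dist = None, np.inf
    for label in np.unique(y):
        Xc = X[y == label]
        d = np.linalg.norm(Xc - query, axis=1)
        neighbors = Xc[np.argsort(d)[:k]]
        dist = affine_hull_distance(query, neighbors)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

if __name__ == "__main__":
    rng = np.random.default_rng(12)
    X = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(4, 1, (100, 10))])
    y = np.array([0] * 100 + [1] * 100)
    print(knn_simplex_classify(rng.normal(4, 1, 10), X, y, k=5))   # expected: 1
```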

Journal ArticleDOI
TL;DR: A practical forensic steganalysis tool for JPEG images that can properly analyze single- and double-compressed stego images and classify them to selected current steganographic methods is constructed.
Abstract: The aim of this paper is to construct a practical forensic steganalysis tool for JPEG images that can properly analyze single- and double-compressed stego images and classify them to selected current steganographic methods. Although some of the individual modules of the steganalyzer were previously published by the authors, they were never tested as a complete system. The fusion of the modules brings its own challenges and problems, whose analysis and solution is one of the goals of this paper. By determining the stego-algorithm, this tool provides the first step needed for extracting the secret message. Given a JPEG image, the detector assigns it to six popular steganographic algorithms. The detection is based on feature extraction and supervised training of two banks of multiclassifiers realized using support vector machines. For accurate classification of single-compressed images, a separate multiclassifier is trained for each JPEG quality factor from a certain range. Another bank of multiclassifiers is trained for double-compressed images for the same range of primary quality factors. The image under investigation is first analyzed using a preclassifier that detects selected cases of double compression and estimates the primary quantization table. It then sends the image to the appropriate single- or double-compression multiclassifier. The error is estimated from more than 2.6 million images. The steganalyzer is also tested on two previously unseen methods to examine its ability to generalize.

Journal ArticleDOI
TL;DR: A variable-length Markov model (VLMM) is presented that captures the sequential properties of attack tracks, allowing for the prediction of likely future actions on ongoing attacks.
Abstract: Previous works in the area of network security have emphasized the creation of intrusion detection systems (IDSs) to flag malicious network traffic and computer usage, and the development of algorithms to analyze IDS alerts. One possible byproduct of correlating raw IDS data is attack tracks, which consist of ordered collections of alerts belonging to a single multistage attack. This paper presents a variable-length Markov model (VLMM) that captures the sequential properties of attack tracks, allowing for the prediction of likely future actions in ongoing attacks. The proposed approach is able to adapt to newly observed attack sequences without requiring specific network information. Simulation results are presented to demonstrate the performance of VLMM predictors and their adaptiveness to new attack scenarios.
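
To make the variable-length Markov idea concrete, this toy sketch learns context counts up to a maximum order from alert sequences and predicts the next action from the longest matching context; smoothing, escape probabilities, and the paper's alert-to-symbol mapping are omitted, and the alert names are invented.

```python
from collections import defaultdict

class VLMM:
    """Variable-length Markov model over symbol sequences (e.g., attack-track alert types)."""

    def __init__(self, max_order=3):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))   # context -> next symbol -> count

    def train(self, sequences):
        for seq in sequences:
            for i, sym in enumerate(seq):
                for order in range(0, self.max_order + 1):
                    if i - order < 0:
                        break
                    self.counts[tuple(seq[i - order:i])][sym] += 1
        return self

    def predict(self, history):
        """Back off from the longest matching context to shorter ones."""
        for order in range(min(self.max_order, len(history)), -1, -1):
            context = tuple(history[len(history) - order:])
            if context in self.counts:
                nxt = self.counts[context]
                total = sum(nxt.values())
                return {s: c / total for s, c in sorted(nxt.items(), key=lambda kv: -kv[1])}
        return {}

if __name__ == "__main__":
    tracks = [["scan", "exploit", "escalate", "exfiltrate"],
              ["scan", "exploit", "escalate", "pivot"],
              ["scan", "bruteforce", "escalate", "exfiltrate"]]
    model = VLMM(max_order=3).train(tracks)
    print(model.predict(["scan", "exploit"]))   # "escalate" is the most likely next action
```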

Journal ArticleDOI
Stefan Katzenbeisser, Aweke N. Lemma, Mehmet U. Celik, M. van der Veen, M. Maas
TL;DR: In this correspondence, it is shown that the same functionality can be achieved efficiently using recently proposed secure watermark embedding algorithms.
Abstract: In a forensic watermarking architecture, a buyer-seller protocol protects the watermark secrets from the buyer and prevents false infringement accusations by the seller. Existing protocols encrypt the watermark and the content with a homomorphic public-key cipher and perform embedding under encryption. When used for multimedia data, these protocols create a large computation and bandwidth overhead. In this correspondence, we show that the same functionality can be achieved efficiently using recently proposed secure watermark embedding algorithms.

Journal ArticleDOI
TL;DR: The orientation tensor of fingerprint images is studied, with the help of symmetry descriptors, to quantify signal impairments such as noise, lack of structure, and blur; the resulting quality measures boost recognition rates and allow differently skilled experts to be fused efficiently as well as effectively.
Abstract: Signal-quality awareness has been found to increase recognition rates and to support decisions in multisensor environments significantly. Nevertheless, automatic quality assessment is still an open issue. Here, we study the orientation tensor of fingerprint images to quantify signal impairments, such as noise, lack of structure, and blur, with the help of symmetry descriptors. A strongly reduced reference is especially favorable in biometrics, but less information is not sufficient for the approach. This is also supported by numerous experiments involving a simpler quality estimator, a trained method (NFIQ), as well as the human perception of fingerprint quality on several public databases. Furthermore, quality measurements are extensively reused to adapt fusion parameters in a monomodal, multialgorithm fingerprint recognition environment. In this study, several trained and nontrained score-level fusion schemes are investigated. Besides simple fusion rules, a Bayes-based strategy for incorporating experts' past performances and current quality conditions, as well as a novel cascaded scheme for computational efficiency, is presented. The quantitative results favor quality awareness under all aspects, boosting recognition rates and fusing differently skilled experts efficiently as well as effectively (by training).

Journal ArticleDOI
TL;DR: It is shown that the distribution of the explainability of the collisions is very sensitive to changes in the network, even with a changing number of competing terminals, making it an excellent candidate to serve as a jamming attack indicator.
Abstract: Carrier-sensing multiple-access with collision avoidance (CSMA/CA)-based networks, such as those using the IEEE 802.11 distributed coordination function protocol, have experienced widespread deployment due to their ease of implementation. The terminals accessing these networks are not owned or controlled by the network operators (such as in the case of cellular networks) and, thus, terminals may not abide by the protocol rules in order to gain unfair access to the network (selfish misbehavior), or simply to disturb the network operations (denial-of-service attack). This paper presents a robust nonparametric detection mechanism for CSMA/CA media-access control layer denial-of-service attacks that does not require any modification to the existing protocols. This technique, based on truncated sequential Kolmogorov-Smirnov statistics, monitors the successful transmissions and the collisions of the terminals in the network, and determines how "explainable" the collisions are, given such observations. We show that the distribution of the explainability of the collisions is very sensitive to changes in the network, even with a changing number of competing terminals, making it an excellent candidate to serve as a jamming attack indicator. Ns-2 simulation results show that the proposed method has a very short detection latency and high detection accuracy.
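
A rough sketch of the monitoring idea only: a window of per-interval collision ratios is compared against a reference profile collected under normal contention with a two-sample Kolmogorov-Smirnov test; the traffic model below is invented, and the paper's truncated sequential variant and its "explainability" statistic are not reproduced.

```python
import numpy as np
from scipy.stats import ks_2samp

def collision_ratios(n_windows, n_terminals, jammer=False, rng=None):
    """Toy contention model: per-window collision ratios out of 200 observed transmissions.
    A jammer inflates collisions beyond what the number of competing terminals explains."""
    if rng is None:
        rng = np.random.default_rng()
    base = 1 - (1 - 1 / 16) ** (n_terminals - 1)      # rough per-attempt collision probability
    p = min(0.95, base + (0.35 if jammer else 0.0))
    return rng.binomial(200, p, size=n_windows) / 200.0

def jamming_alarm(reference, observed, alpha=0.01):
    """Raise an alarm if the observed collision distribution deviates from the reference."""
    res = ks_2samp(reference, observed)
    return res.pvalue < alpha, float(res.statistic), float(res.pvalue)

if __name__ == "__main__":
    rng = np.random.default_rng(13)
    reference = collision_ratios(500, n_terminals=8, rng=rng)
    normal = collision_ratios(60, n_terminals=8, rng=rng)
    attacked = collision_ratios(60, n_terminals=8, jammer=True, rng=rng)
    print("normal window:", jamming_alarm(reference, normal))     # expected: no alarm
    print("jammed window:", jamming_alarm(reference, attacked))   # expected: alarm
```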

Journal ArticleDOI
TL;DR: This work introduces a new video watermarking algorithm for playback control that takes advantage of the properties of the dual-tree complex wavelet transform that offers the advantages of the regular and the complex wavelets.
Abstract: A watermarking scheme that discourages theater camcorder piracy through the enforcement of playback control is presented. In this method, the video is watermarked so that its display is not permitted if a compliant video player detects the watermark. A watermark that is robust to geometric distortions (rotation, scaling, cropping) and lossy compression is required in order to block access to media content that has been re-recorded with a camera inside a movie theater. We introduce a new video watermarking algorithm for playback control that takes advantage of the properties of the dual-tree complex wavelet transform. This transform offers the advantages of the regular and the complex wavelets (perfect reconstruction, shift invariance, and good directional selectivity). Our method relies on these characteristics to create a watermark that is robust to geometric distortions and lossy compression. The proposed scheme is simple to implement and outperforms comparable methods when tested against geometric distortions.