Showing papers in "IEEE Transactions on Information Forensics and Security in 2009"


Journal ArticleDOI
Hany Farid
TL;DR: A technique is described to detect whether part of an image was initially compressed at a lower quality than the rest of the image; the approach is applicable to images of high and low quality as well as resolution.
Abstract: When creating a digital forgery, it is often necessary to combine several images, for example, when compositing one person's head onto another person's body. If these images were originally of different JPEG compression quality, then the digital composite may contain a trace of the original compression qualities. To this end, we describe a technique to detect whether part of an image was initially compressed at a lower quality than the rest of the image. This approach is applicable to images of high and low quality as well as resolution.
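To make the idea concrete, the following is a minimal sketch (not Farid's published implementation) of the recompression-difference approach: the image is resaved at a range of JPEG qualities and, per block, the quality at which the difference with the observed image is smallest is recorded. The file name, block size, and quality range are illustrative assumptions.

```python
# Hedged sketch: recompress an image at several JPEG qualities and map, per
# block, how close each recompression gets to the observed image. Regions
# originally saved at a lower quality tend to show a difference minimum
# ("ghost") near that quality. Assumes Pillow and NumPy.
import io
import numpy as np
from PIL import Image

def jpeg_difference_maps(img, qualities=range(30, 95, 5), block=16):
    """Block-averaged squared differences between img and recompressed copies."""
    x = np.asarray(img.convert("L"), dtype=np.float64)
    h, w = (x.shape[0] // block) * block, (x.shape[1] // block) * block
    x = x[:h, :w]
    maps = []
    for q in qualities:
        buf = io.BytesIO()
        img.convert("L").save(buf, format="JPEG", quality=q)
        buf.seek(0)
        y = np.asarray(Image.open(buf), dtype=np.float64)[:h, :w]
        d = (x - y) ** 2
        # average the squared difference over non-overlapping blocks
        d = d.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        maps.append(d)
    return np.stack(maps), list(qualities)

if __name__ == "__main__":
    img = Image.open("suspect.jpg")            # illustrative file name
    maps, qs = jpeg_difference_maps(img)
    # blocks whose difference minimum sits at a low quality are candidates
    # for having been compressed at a lower quality than the rest
    print(np.take(qs, np.argmin(maps, axis=0)))
```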

427 citations


Journal ArticleDOI
TL;DR: This paper describes a novel iris segmentation scheme employing geodesic active contours (GACs) to extract the iris from the surrounding structures and demonstrates the efficacy of the proposed technique on the CASIA v3.0 and WVU nonideal iris databases.
Abstract: The richness and apparent stability of the iris texture make it a robust biometric trait for personal authentication. The performance of an automated iris recognition system is affected by the accuracy of the segmentation process used to localize the iris structure. Most segmentation models in the literature assume that the pupillary, limbic, and eyelid boundaries are circular or elliptical in shape. Hence, they focus on determining model parameters that best fit these hypotheses. However, it is difficult to segment iris images acquired under nonideal conditions using such conic models. In this paper, we describe a novel iris segmentation scheme employing geodesic active contours (GACs) to extract the iris from the surrounding structures. Since active contours can 1) assume any shape and 2) segment multiple objects simultaneously, they mitigate some of the concerns associated with traditional iris segmentation models. The proposed scheme elicits the iris texture in an iterative fashion and is guided by both local and global properties of the image. The matching accuracy of an iris recognition system is observed to improve upon application of the proposed segmentation algorithm. Experimental results on the CASIA v3.0 and WVU nonideal iris databases indicate the efficacy of the proposed technique.

277 citations


Journal ArticleDOI
TL;DR: HVC construction methods based on error diffusion are proposed, where the secret image is concurrently embedded into binary valued shares while these shares are halftoned by error diffusion-the workhorse standard of halftoning algorithms.
Abstract: Halftone visual cryptography (HVC) enlarges the area of visual cryptography by the addition of digital halftoning techniques. In particular, in visual secret sharing schemes, a secret image can be encoded into halftone shares carrying meaningful visual information. In this paper, HVC construction methods based on error diffusion are proposed. The secret image is concurrently embedded into binary valued shares while these shares are halftoned by error diffusion-the workhorse standard of halftoning algorithms. Error diffusion has low complexity and provides halftone shares with good image quality. A reconstructed secret image, obtained by stacking qualified shares together, does not suffer from cross interference of share images. Factors affecting the share image quality and the contrast of the reconstructed image are discussed. Simulation results show several illustrative examples.
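For background, a minimal sketch of plain Floyd-Steinberg error diffusion, the halftoning building block the HVC construction works with, is given below. It shows only the baseline halftoner; the secret-embedding step of HVC itself is not reproduced, and the implementation details are assumptions, not the authors' code.

```python
# Hedged sketch of Floyd-Steinberg error diffusion: each pixel is quantized
# to 0/1 and the quantization error is spread to unprocessed neighbours.
import numpy as np

def floyd_steinberg(gray):
    """Halftone a grayscale image with values in [0, 1] to a binary image."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = int(new)
            err = old - new
            # diffuse the quantization error to unprocessed neighbours
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# Example: halftone a random grayscale patch
halftone = floyd_steinberg(np.random.default_rng(0).random((64, 64)))
print(halftone.mean())   # roughly preserves the average gray level
```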

257 citations


Journal ArticleDOI
TL;DR: The experimental results from the proposed approach are promising and confirm the usefulness of such an approach for personal authentication using peg-free imaging.
Abstract: This paper investigates a new approach for personal authentication using fingerback surface imaging. The texture pattern produced by the finger knuckle bending is highly unique and makes the surface a distinctive biometric identifier. The finger geometry features can be simultaneously acquired from the same image and integrated to further improve the user-identification accuracy of such a system. The fingerback surface images from each user are normalized to minimize the scale, translation, and rotational variations in the knuckle images. This paper details the development of such an approach using peg-free imaging. The experimental results from the proposed approach are promising and confirm the usefulness of such an approach for personal authentication.

242 citations


Journal ArticleDOI
TL;DR: The key elements of the approach are presented, along with the evolution of the design, its suitability in various contexts, the voter experience, and the security properties that the schemes provide.
Abstract: ??????Pre?t a? Voter provides a practical approach to end-to-end verifiable elections with a simple, familiar voter-experience. It assures a high degree of transparency while preserving secrecy of the ballot. Assurance arises from the auditability of the election itself, rather than the need to place trust in the system components. The original idea has undergone several revisions and enhancements since its inception in 2004, driven by the identification of threats, the availability of improved cryptographic primitives, and the desire to make the scheme as flexible as possible. This paper presents the key elements of the approach and describes the evolution of the design and their suitability in various contexts. We also describe the voter experience, and the security properties that the schemes provide.

195 citations


Journal ArticleDOI
TL;DR: This paper addresses privacy leakage in biometric secrecy systems by investigating four settings in which two terminals observe two correlated sequences and determining the fundamental balance for both unconditional and conditional privacy leakage.
Abstract: This paper addresses privacy leakage in biometric secrecy systems. Four settings are investigated. The first one is the standard Ahlswede-Csiszar secret-generation setting in which two terminals observe two correlated sequences. They form a common secret by interchanging a public message. This message should only contain a negligible amount of information about the secret, but here, in addition, we require it to leak as little information as possible about the biometric data. For this first case, the fundamental tradeoff between secret-key and privacy-leakage rates is determined. Also for the second setting, in which the secret is not generated but independently chosen, the fundamental secret-key versus privacy-leakage rate balance is found. Settings three and four focus on zero-leakage systems. Here the public message should only contain a negligible amount of information on both the secret and the biometric sequence. To achieve this, a private key is needed, which can only be observed by the terminals. For both the generated-secret and the chosen-secret model, the regions of achievable secret-key versus private-key rate pairs are determined. For all four settings, the fundamental balance is determined for both unconditional and conditional privacy leakage.
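For orientation, the kind of single-letter region such an analysis yields for the first (generated-secret, unconditional-leakage) setting is sketched below. The notation and the exact form are a hedged recollection of the standard formulation, with X and Y the enrollment and authentication biometric sequences, and should be checked against the paper.

```latex
% Hedged sketch of the generated-secret, unconditional-leakage region
% (assumed form; U is an auxiliary variable with U -- X -- Y a Markov chain):
\mathcal{R} \;=\; \bigcup_{P_{U|X}}
  \Bigl\{ (R_K, R_L) \,:\, 0 \le R_K \le I(U;Y),\;\;
          R_L \ge I(U;X) - I(U;Y) \Bigr\}
```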

194 citations


Journal ArticleDOI
TL;DR: This paper investigates the implementation of the discrete Fourier transform (DFT) in the encrypted domain by using the homomorphic properties of the underlying cryptosystem, and shows that the radix-4 fast Fourier transform is best suited for an encrypted domain implementation in the proposed scenarios.
Abstract: Signal-processing modules working directly on encrypted data provide an elegant solution to application scenarios where valuable signals must be protected from a malicious processing device. In this paper, we investigate the implementation of the discrete Fourier transform (DFT) in the encrypted domain by using the homomorphic properties of the underlying cryptosystem. Several important issues are considered for the direct DFT: the radix-2 and the radix-4 fast Fourier algorithms, including the error analysis and the maximum size of the sequence that can be transformed. We also provide computational complexity analyses and comparisons. The results show that the radix-4 fast Fourier transform is best suited for an encrypted domain implementation in the proposed scenarios.
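As a rough illustration of how an additively homomorphic cryptosystem supports a DFT on ciphertexts, here is a sketch using the python-paillier (`phe`) package, whose keypair/encrypt/decrypt interface is assumed: ciphertexts can be added and multiplied by plaintext constants, so a naive DFT with integer-quantized twiddle factors can be evaluated without decrypting. The scaling choice is illustrative, not the quantization analyzed in the paper, and the paper's radix-2/radix-4 FFT structure is not reproduced.

```python
# Hedged sketch: DFT over Paillier ciphertexts. With an additively homomorphic
# scheme, E(a)+E(b)=E(a+b) and c*E(a)=E(c*a) for a plaintext constant c.
import math
from phe import paillier   # python-paillier, assumed available

def encrypted_dft(ciphertexts, scale=1000):
    """Naive O(N^2) DFT; twiddle factors are quantized to integers by `scale`,
    so the decrypted outputs are scaled by `scale`."""
    n = len(ciphertexts)
    out = []
    for k in range(n):
        re = ciphertexts[0] * int(round(scale * math.cos(0.0)))
        im = ciphertexts[0] * int(round(-scale * math.sin(0.0)))
        for t in range(1, n):
            ang = 2 * math.pi * k * t / n
            re = re + ciphertexts[t] * int(round(scale * math.cos(ang)))
            im = im + ciphertexts[t] * int(round(-scale * math.sin(ang)))
        out.append((re, im))
    return out

if __name__ == "__main__":
    pub, priv = paillier.generate_paillier_keypair(n_length=1024)
    signal = [3, 1, 4, 1]                       # toy integer signal
    enc = [pub.encrypt(v) for v in signal]
    spec = encrypted_dft(enc)
    # decrypt and undo the twiddle-factor scaling
    print([(priv.decrypt(re) / 1000, priv.decrypt(im) / 1000) for re, im in spec])
```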

186 citations


Journal ArticleDOI
TL;DR: The proposed framework first reversely classifies the demosaiced samples into several categories and then estimates the underlying demosaicing formulas for each category based on partial second-order derivative correlation models, which detect both the intrachannel and the cross-channel demosaicing correlation.
Abstract: In this paper, we propose a novel accurate detection framework of demosaicing regularity from different source images. The proposed framework first reversely classifies the demosaiced samples into several categories and then estimates the underlying demosaicing formulas for each category based on partial second-order derivative correlation models, which detect both the intrachannel and the cross-channel demosaicing correlation. An expectation-maximization reverse classification scheme is used to iteratively resolve the ambiguous demosaicing axes in order to best reveal the implicit grouping adopted by the underlying demosaicing algorithm. Comparison results based on synthetic images show that our proposed formulation significantly improves the accuracy of the regenerated demosaiced samples from the sensor samples for a large number of diversified demosaicing algorithms. By running sequential forward feature selection, our reduced feature sets used in conjunction with the probabilistic support vector machine classifier achieve superior performance in identifying 16 demosaicing algorithms in the presence of common in-camera post-demosaicing processing. When applied to real applications, including camera model and RAW-tool identification, our selected features achieve nearly perfect classification performances based on large sets of cropped image blocks.

173 citations


Journal ArticleDOI
TL;DR: The proposed scheme fully preserves the privacy of the biometric data of every user, that is, the scheme does not reveal the biometric data to anyone else, including the remote servers, through the GNY (Gong, Needham, and Yahalom) logic.
Abstract: A three-factor authentication scheme combines biometrics with passwords and smart cards to provide high-security remote authentication. Most existing schemes, however, rely on smart cards to verify biometric characteristics. The advantage of this approach is that the user's biometric data is not shared with the remote server. But the disadvantage is that the remote server must trust the smart card to perform proper authentication, which leads to various vulnerabilities. To achieve truly secure three-factor authentication, a method must keep the user's biometrics secret while still allowing the server to perform its own authentication. Our method achieves this. The proposed scheme fully preserves the privacy of the biometric data of every user, that is, the scheme does not reveal the biometric data to anyone else, including the remote servers. We demonstrate the completeness of the proposed scheme through the GNY (Gong, Needham, and Yahalom) logic. Furthermore, the security of our proposed scheme is proven through Bellare and Rogaway's model. As a further benefit, we point out that our method reduces the computation cost for the smart card.

158 citations


Journal ArticleDOI
TL;DR: This paper addresses the intrusion detection problem in heterogeneous networks consisting of nodes with different noncorrelated security assets by formulating the network intrusion detection as a noncooperative game and performing an in-depth analysis on the Nash equilibrium.
Abstract: Due to the dynamic, distributed, and heterogeneous nature of today's networks, intrusion detection systems (IDSs) have become a necessary addition to the security infrastructure and are widely deployed as a complementary line of defense to classical security approaches. In this paper, we address the intrusion detection problem in heterogeneous networks consisting of nodes with different noncorrelated security assets. In our study, two crucial questions are: What are the expected behaviors of rational attackers? What is the optimal strategy of the defenders (IDSs)? We answer the questions by formulating the network intrusion detection as a noncooperative game and performing an in-depth analysis on the Nash equilibrium and the engineering implications behind it. Based on our game theoretical analysis, we derive the expected behaviors of rational attackers, the minimum monitor resource requirement, and the optimal strategy of the defenders. We then provide guidelines for IDS design and deployment. We also show how our game theoretical framework can be applied to configure the intrusion detection strategies in realistic scenarios via a case study. Finally, we evaluate the proposed game theoretical framework via simulations. The simulation results show both the correctness of the analytical results and the effectiveness of the proposed guidelines.
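The game formulation can be illustrated with a deliberately tiny example (not the paper's model): a 2x2 attacker/defender monitoring game whose mixed-strategy Nash equilibrium is found from the standard indifference conditions. The payoff values are invented purely for the demonstration.

```python
# Hedged illustration: mixed-strategy equilibrium of a 2x2 monitoring game.
# Rows: attacker attacks node 1 / node 2. Columns: defender monitors node 1 / node 2.
import numpy as np

A = np.array([[-1.0, 2.0],     # attacker payoffs (caught -> negative)
              [ 3.0, -1.0]])
D = -A                         # treat the game as zero-sum for the illustration

# Attacker mixes (p, 1-p) over rows so the defender is indifferent between columns:
p = (D[1, 1] - D[1, 0]) / (D[0, 0] - D[1, 0] - D[0, 1] + D[1, 1])
# Defender mixes (q, 1-q) over columns so the attacker is indifferent between rows:
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
print(f"attacker attacks node 1 with prob {p:.3f}; "
      f"defender monitors node 1 with prob {q:.3f}")   # 0.571 and 0.429 here
```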

144 citations


Journal ArticleDOI
TL;DR: An enhanced physical-layer authentication scheme is proposed to detect Sybil attacks, exploiting the spatial variability of radio channels in environments with rich scattering, as is typical in indoor and urban environments.
Abstract: Due to the broadcast nature of the wireless medium, wireless networks are especially vulnerable to Sybil attacks, where a malicious node illegitimately claims a large number of identities and thus depletes system resources. We propose an enhanced physical-layer authentication scheme to detect Sybil attacks, exploiting the spatial variability of radio channels in environments with rich scattering, as is typical in indoor and urban environments. We build a hypothesis test to detect Sybil clients for both wideband and narrowband wireless systems, such as WiFi and WiMax systems. Based on the existing channel estimation mechanisms, our method can be easily implemented with low overhead, either independently or combined with other physical-layer security methods, e.g., spoofing attack detection. The performance of our Sybil detector is verified, via both propagation modeling software and field measurements using a vector network analyzer, for typical indoor environments. Our evaluation examines numerous combinations of system parameters, including bandwidth, signal power, number of channel estimates, number of total clients, number of Sybil clients, and number of access points. For instance, both the false alarm rate and the miss rate of Sybil attacks are usually below 0.01, with three tones, pilot power of 10 mW, and a system bandwidth of 20 MHz.
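A minimal sketch of the kind of channel-based hypothesis test described here (not the paper's exact statistic or threshold): two claimed identities whose per-tone channel estimates are too similar are flagged as a Sybil pair, since distinct devices in a rich-scattering environment should observe nearly independent channels. The noise model, tone count, and threshold are assumptions for the demonstration.

```python
# Hedged sketch of a channel-similarity hypothesis test for Sybil detection.
import numpy as np

def sybil_test(h1, h2, noise_var, threshold):
    """h1, h2: complex channel estimates over several tones.
    Returns True if the two identities are declared a Sybil pair."""
    stat = np.sum(np.abs(h1 - h2) ** 2) / noise_var
    return stat < threshold            # small distance -> same physical device

rng = np.random.default_rng(0)
tones, noise_var = 3, 0.01
h_true = rng.standard_normal(tones) + 1j * rng.standard_normal(tones)
# Sybil pair: the same physical channel observed twice, plus estimation noise
noise = lambda: np.sqrt(noise_var / 2) * (rng.standard_normal(tones)
                                          + 1j * rng.standard_normal(tones))
h_a, h_b = h_true + noise(), h_true + noise()
# honest pair: an independent channel
h_c = rng.standard_normal(tones) + 1j * rng.standard_normal(tones)
print(sybil_test(h_a, h_b, noise_var, threshold=20.0))   # expected True
print(sybil_test(h_a, h_c, noise_var, threshold=20.0))   # expected False
```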

Journal ArticleDOI
TL;DR: Experimental results show that the proposed derivative-based and wavelet-based approaches remarkably improve the detection accuracy.
Abstract: To improve a recently developed mel-cepstrum audio steganalysis method, we present in this paper a method based on Fourier spectrum statistics and mel-cepstrum coefficients, derived from the second-order derivative of the audio signal. Specifically, the statistics of the high-frequency spectrum and the mel-cepstrum coefficients of the second-order derivative are extracted for use in detecting audio steganography. We also design a wavelet-based spectrum and mel-cepstrum audio steganalysis. By applying support vector machines to these features, unadulterated carrier signals (without hidden data) and the steganograms (carrying covert data) are successfully discriminated. Experimental results show that the proposed derivative-based and wavelet-based approaches remarkably improve the detection accuracy. Between the two new methods, the derivative-based approach generally delivers a better performance.
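A hedged sketch of the feature pipeline outlined in the abstract, assuming librosa and scikit-learn are available: second-order derivative of the signal, simple high-frequency spectrum statistics plus mel-cepstral coefficients, and an SVM. The parameter choices and the `cover_files`/`stego_files` inputs are illustrative, not the paper's configuration.

```python
# Hedged sketch: derivative-based spectral/mel-cepstral features + SVM.
import numpy as np
import librosa
from sklearn.svm import SVC

def second_derivative_features(y, sr, n_mfcc=13):
    d2 = np.diff(y, n=2)                              # second-order derivative
    spec = np.abs(np.fft.rfft(d2))
    high = spec[len(spec) // 2:]                      # high-frequency half
    stats = np.array([high.mean(), high.std(), high.max()])
    mfcc = librosa.feature.mfcc(y=d2, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
    return np.concatenate([stats, mfcc])

def train_detector(cover_files, stego_files):
    """cover_files / stego_files: assumed lists of paths to carriers / steganograms."""
    X, labels = [], []
    for label, files in ((0, cover_files), (1, stego_files)):
        for path in files:
            y, sr = librosa.load(path, sr=None)
            X.append(second_derivative_features(y, sr))
            labels.append(label)
    return SVC(kernel="rbf").fit(np.array(X), np.array(labels))
```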

Journal ArticleDOI
TL;DR: The spectral minutiae representation introduced in this paper is a novel method to represent a minutiae set as a fixed-length feature vector, which is invariant to translation, and in which rotation and scaling become translations, so that they can be easily compensated for.
Abstract: Most fingerprint recognition systems are based on the use of a minutiae set, which is an unordered collection of minutiae locations and orientations suffering from various deformations such as translation, rotation, and scaling. The spectral minutiae representation introduced in this paper is a novel method to represent a minutiae set as a fixed-length feature vector, which is invariant to translation, and in which rotation and scaling become translations, so that they can be easily compensated for. These characteristics enable the combination of fingerprint recognition systems with template protection schemes that require a fixed-length feature vector. This paper introduces algorithms for two representation methods: the location-based spectral minutiae representation and the orientation-based spectral minutiae representation. Both algorithms are evaluated using two correlation-based spectral minutiae matching algorithms. We present the performance of our algorithms on three fingerprint databases. We also show how the performance can be improved by using a fusion scheme and singular points.
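The translation-invariance property can be illustrated with a sketch of the location-based idea under stated assumptions: the magnitude of the Fourier transform of the minutiae point set, sampled on a polar frequency grid, is unchanged by translation, while rotation becomes a shift along the angular axis. Grid sizes, frequency range, and the Gaussian envelope below are illustrative, not the paper's parameters.

```python
# Hedged sketch of a location-based, fixed-length spectral representation.
import numpy as np

def spectral_minutiae(minutiae, n_radial=32, n_angular=64,
                      f_min=0.02, f_max=0.6, sigma2=9.0):
    """minutiae: (N, 2) array of (x, y) locations in pixels.
    Returns an (n_radial, n_angular) fixed-length magnitude spectrum."""
    radii = np.linspace(f_min, f_max, n_radial)
    angles = np.linspace(0, np.pi, n_angular, endpoint=False)
    fx = np.outer(radii, np.cos(angles))          # polar frequency grid
    fy = np.outer(radii, np.sin(angles))
    x, y = minutiae[:, 0], minutiae[:, 1]
    # magnitude of the continuous FT of Gaussian-smoothed point masses
    phase = -2j * np.pi * (fx[..., None] * x + fy[..., None] * y)
    spectrum = np.abs(np.exp(phase).sum(axis=-1))
    envelope = np.exp(-2 * (np.pi ** 2) * sigma2 * (fx ** 2 + fy ** 2))
    return envelope * spectrum

pts = np.array([[120.0, 80.0], [200.0, 150.0], [90.0, 210.0]])
shifted = pts + np.array([15.0, -7.0])
s1, s2 = spectral_minutiae(pts), spectral_minutiae(shifted)
print(np.allclose(s1, s2))                        # translation invariance -> True
```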

Journal ArticleDOI
TL;DR: This paper reports a benchmarking study carried out within the framework of the BioSecure DS2 (Access Control) evaluation campaign organized by the University of Surrey, involving face, fingerprint, and iris biometrics for person authentication, targeting the application of physical access control in a medium-size establishment with some 500 persons.
Abstract: Automatically verifying the identity of a person by means of biometrics (e.g., face and fingerprint) is an important application in our day-to-day activities such as accessing banking services and security control in airports. To increase the system reliability, several biometric devices are often used. Such a combined system is known as a multimodal biometric system. This paper reports a benchmarking study carried out within the framework of the BioSecure DS2 (Access Control) evaluation campaign organized by the University of Surrey, involving face, fingerprint, and iris biometrics for person authentication, targeting the application of physical access control in a medium-size establishment with some 500 persons. While multimodal biometrics is a well-investigated subject in the literature, there exists no benchmark for a fusion algorithm comparison. Working towards this goal, we designed two sets of experiments: quality-dependent and cost-sensitive evaluation. The quality-dependent evaluation aims at assessing how well fusion algorithms can perform under changing quality of raw biometric images principally due to change of devices. The cost-sensitive evaluation, on the other hand, investigates how well a fusion algorithm can perform given restricted computation and in the presence of software and hardware failures, resulting in errors such as failure-to-acquire and failure-to-match. Since multiple capturing devices are available, a fusion algorithm should be able to handle this nonideal but nevertheless realistic scenario. In both evaluations, each fusion algorithm is provided with scores from each biometric comparison subsystem as well as the quality measures of both the template and the query data. The response to the call of the evaluation campaign proved very encouraging, with the submission of 22 fusion systems. To the best of our knowledge, this campaign is the first attempt to benchmark quality-based multimodal fusion algorithms. In the presence of changing image quality which may be due to a change of acquisition devices and/or device capturing configurations, we observe that the top performing fusion algorithms are those that exploit automatically derived quality measurements. Our evaluation also suggests that while using all the available biometric sensors can definitely increase the fusion performance, this comes at the expense of increased cost in terms of acquisition time, computation time, the physical cost of hardware, and its maintenance cost. As demonstrated in our experiments, a promising solution which minimizes the composite cost is sequential fusion, where a fusion algorithm sequentially uses match scores until a desired confidence is reached, or until all the match scores are exhausted, before outputting the final combined score.

Journal ArticleDOI
TL;DR: A novel perception-inspired nonmetric partial similarity measure is introduced, which is potentially useful in dealing with the concerned problems because it can help capture the prominent partial similarities that are dominant in human perception.
Abstract: Recognition in uncontrolled situations is one of the most important bottlenecks for practical face recognition systems. In particular, few researchers have addressed the challenge to recognize noncooperative or even uncooperative subjects who try to cheat the recognition system by deliberately changing their facial appearance through such tricks as variant expressions or disguise (e.g., by partial occlusions). This paper addresses these problems within the framework of similarity matching. A novel perception-inspired nonmetric partial similarity measure is introduced, which is potentially useful in dealing with the concerned problems because it can help capture the prominent partial similarities that are dominant in human perception. Two methods, based on the general golden section rule and the maximum margin criterion, respectively, are proposed to automatically set the similarity threshold. The effectiveness of the proposed method in handling large expressions, partial occlusions, and other distortions is demonstrated on several well-known face databases.

Journal ArticleDOI
TL;DR: Practical and theoretical analyses of the security offered by watermarking and data hiding methods based on spread spectrum reveal fundamental limits and bounds on security and provide insight into other properties, such as the impact of the embedding parameters, and the tradeoff between robustness and security.
Abstract: This paper presents both theoretical and practical analyses of the security offered by watermarking and data hiding methods based on spread spectrum. In this context, security is understood as the difficulty of estimating the secret parameters of the embedding function based on the observation of watermarked signals. On the theoretical side, the security is quantified from an information-theoretic point of view by means of the equivocation about the secret parameters. The main results reveal fundamental limits and bounds on security and provide insight into other properties, such as the impact of the embedding parameters, and the tradeoff between robustness and security. On the practical side, workable estimators of the secret parameters are proposed and theoretically analyzed for a variety of scenarios, providing a comparison with previous approaches, and showing that the security of many schemes used in practice can be fairly low.

Journal ArticleDOI
TL;DR: An image source coding forensic detector is constructed that identifies which source encoder was applied and what the coding parameters are, along with confidence measures of the result; simulations show that the proposed system provides trustworthy performance.
Abstract: Recent development in multimedia processing and network technologies has facilitated the distribution and sharing of multimedia through networks, and increased the security demands of multimedia contents. Traditional image content protection schemes use extrinsic approaches, such as watermarking or fingerprinting. However, under many circumstances, extrinsic content protection is not possible. Therefore, there is great interest in developing forensic tools via intrinsic fingerprints to solve these problems. Source coding is a common step of natural image acquisition, so in this paper, we focus on fundamental research on digital image source coder forensics via intrinsic fingerprints. First, we investigate the unique intrinsic fingerprint of many popular image source encoders, including transform-based coding (both discrete cosine transform and discrete wavelet transform based), subband coding, differential image coding, and also block processing as the traces of evidence. Based on the intrinsic fingerprint of image source encoders, we construct an image source coding forensic detector that identifies which source encoder was applied and what the coding parameters are, along with confidence measures of the result. Our simulation results show that the proposed system provides trustworthy performance: for most test cases, the probability of detecting the correct source encoder is over 90%.

Journal ArticleDOI
TL;DR: Scantegrity II is an enhancement for existing paper ballot systems that allows voters to verify election integrity - from their selections on the ballot all the way to the final tally - by noting codes and checking for them online.
Abstract: Scantegrity II is an enhancement for existing paper ballot systems. It allows voters to verify election integrity - from their selections on the ballot all the way to the final tally - by noting codes and checking for them online. Voters mark Scantegrity II ballots just as with conventional optical scan, but using a special ballot marking pen. Marking a selection with this pen makes legible an otherwise invisible preprinted confirmation code. Confirmation codes are independent and random for each potential selection on each ballot. To verify that their individual votes are recorded correctly, voters can look up their ballot serial numbers online and verify that their confirmation codes are posted correctly. The confirmation codes do not allow voters to prove how they voted. However, the confirmation codes constitute convincing evidence of error or malfeasance in the event that incorrect codes are posted online. Correctness of the final tally with respect to the published codes is proven by election officials in a manner that can be verified by any interested party. Thus, compromise of either ballot chain of custody or the software systems cannot undetectably affect election integrity. Scantegrity II has been implemented and tested in small elections in which ballots were scanned either at the polling place or centrally. Preparations for its use in a public sector election have commenced.

Journal ArticleDOI
TL;DR: A novel method for OF estimation that uses traced ridge and valley lines is presented; it provides robustness against disturbances caused, e.g., by scars, contamination, moisture, or dryness of the finger.
Abstract: Orientation field (OF) estimation is a crucial preprocessing step in fingerprint image processing. In this paper, we present a novel method for OF estimation that uses traced ridge and valley lines. This approach provides robustness against disturbances caused, e.g., by scars, contamination, moisture, or dryness of the finger. It considers pieces of flow information from a larger region and makes good use of fingerprint inherent properties like continuity of ridge flow perpendicular to the flow. The performance of the line-sensor method is compared with the gradients-based method and a multiscale directional operator. Its robustness is tested in experiments with simulated scar noise which is drawn on top of good quality fingerprint images from the FVC2000 and FVC2002 databases. Finally, the effectiveness of the line-sensor-based approach is demonstrated on 60 naturally poor quality fingerprint images from the FVC2004 database. All orientations marked by a human expert are made available at the journal's and the authors' Website for comparative tests.
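For context, here is a sketch of the classical gradients-based baseline the line-sensor method is compared against (not the proposed method itself): block-wise averaging of squared gradients, with the ridge orientation taken perpendicular to the dominant gradient direction. The block size is an assumption.

```python
# Hedged sketch of the gradients-based orientation field baseline.
import numpy as np

def gradient_orientation_field(img, block=16):
    """img: 2-D grayscale array. Returns block-wise ridge orientations in radians."""
    gy, gx = np.gradient(img.astype(np.float64))
    # "doubled angle" products so opposite gradient directions reinforce each other
    gxx, gyy, gxy = gx * gx, gy * gy, gx * gy
    h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block

    def block_sum(a):
        return a[:h, :w].reshape(h // block, block, w // block, block).sum(axis=(1, 3))

    vx = block_sum(gxx) - block_sum(gyy)
    vy = 2.0 * block_sum(gxy)
    theta = 0.5 * np.arctan2(vy, vx)      # dominant gradient direction per block
    return theta + np.pi / 2.0            # ridge orientation is perpendicular to it
```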

Journal ArticleDOI
TL;DR: This work takes advantage of the temporal continuity in an iris video to improve matching performance using signal-level fusion, and finds that this method performs better than Ma's or Krichen's score-level fusion methods of N Hamming distance scores.
Abstract: We take advantage of the temporal continuity in an iris video to improve matching performance using signal-level fusion. From multiple frames of a frontal iris video, we create a single average image. For comparison, we reimplement three score-level fusion methods (Ma, Krichen, and Schmid). We find that our signal-level fusion of N images performs better than Ma's or Krichen's score-level fusion methods of N Hamming distance scores. Our signal-level fusion performs comparably to Schmid's log-likelihood method of score-level fusion, and our method achieves this performance using less computation time. We compare our signal fusion method with another new method: a multigallery, multiprobe method involving score-level fusion of N² Hamming distances. The multigallery, multiprobe score fusion has slightly better recognition performance, while the signal fusion has significant advantages in memory and computation requirements. No published prior work has shown any advantage of the use of video over still images in iris biometrics.
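A minimal sketch of the signal-level fusion step, under the assumption that the frames are already aligned: N frames are averaged into one image, and iris codes are later compared with the usual masked fractional Hamming distance. The encoding and alignment details of the paper are not reproduced.

```python
# Hedged sketch: frame averaging (signal-level fusion) and masked Hamming distance.
import numpy as np

def fuse_frames(frames):
    """frames: list of equally sized, aligned 2-D arrays from one iris video."""
    return np.mean(np.stack(frames, axis=0), axis=0)

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between binary iris codes under validity masks."""
    valid = mask_a & mask_b
    return np.count_nonzero((code_a ^ code_b) & valid) / max(np.count_nonzero(valid), 1)
```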

Journal ArticleDOI
TL;DR: This paper presents methods for authenticating images that have been acquired using flatbed desktop scanners; the methods use scanner fingerprints based on statistics of imaging sensor pattern noise and achieve high classification accuracy.
Abstract: Digital images can be obtained through a variety of sources including digital cameras and scanners. In many cases, the ability to determine the source of a digital image is important. This paper presents methods for authenticating images that have been acquired using flatbed desktop scanners. These methods use scanner fingerprints based on statistics of imaging sensor pattern noise. To capture different types of sensor noise, a denoising filterbank consisting of four different denoising filters is used for obtaining the noise patterns. To identify the source scanner, a support vector machine classifier based on these fingerprints is used. These features are shown to achieve high classification accuracy. Furthermore, the selected fingerprints based on statistical properties of the sensor noise are shown to be robust under postprocessing operations, such as JPEG compression, contrast stretching, and sharpening.
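A reduced sketch of the pipeline shape, assuming scikit-image and scikit-learn: a single wavelet-based denoising filter supplies a noise residual (the paper uses a filterbank of four filters), simple row/column statistics summarize it, and an SVM classifies the source scanner. The feature choices here are illustrative, not the paper's.

```python
# Hedged sketch: sensor-noise residual statistics + SVM scanner classifier.
import numpy as np
from skimage.restoration import denoise_wavelet
from sklearn.svm import SVC

def noise_features(img):
    """img: 2-D grayscale array in [0, 1]."""
    residual = img - denoise_wavelet(img)
    row_avg = residual.mean(axis=1)            # along-scan-line average noise
    col_avg = residual.mean(axis=0)
    return np.array([residual.std(),
                     row_avg.std(), col_avg.std(),
                     np.abs(row_avg).mean(), np.abs(col_avg).mean()])

# Illustrative usage, assuming `images` and `scanner_labels` already exist:
# X = np.array([noise_features(im) for im in images])
# clf = SVC(kernel="rbf").fit(X, scanner_labels)
```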

Journal ArticleDOI
TL;DR: New techniques for nonintrusive scanner forensics that utilize intrinsic sensor noise features are proposed to verify the source and integrity of digital scanned images, and the scope of acquisition forensics is extended to differentiating scanned images from camera-taken photographs and computer-generated graphics.
Abstract: A large portion of digital images available today are acquired using digital cameras or scanners. While cameras provide digital reproduction of natural scenes, scanners are often used to capture hard-copy art in a more controlled environment. In this paper, new techniques for nonintrusive scanner forensics that utilize intrinsic sensor noise features are proposed to verify the source and integrity of digital scanned images. Scanning noise is analyzed from several aspects using only scanned image samples, including through image denoising, wavelet analysis, and neighborhood prediction, and statistical features are then obtained from each characterization. Based on the proposed statistical features of scanning noise, a robust scanner identifier is constructed to determine the model/brand of the scanner used to capture a scanned image. Utilizing these noise features, we extend the scope of acquisition forensics to differentiating scanned images from camera-taken photographs and computer-generated graphics. The proposed noise features also enable tampering forensics to detect postprocessing operations on scanned images. Experimental results are presented to demonstrate the effectiveness of employing the proposed noise features for performing various forensic analyses on scanners and scanned images.

Journal ArticleDOI
TL;DR: This paper proposes a mathematical model for the LoRDAS attack that allows its performance to be evaluated by relating it to the configuration parameters of the attack and the dynamics of the network and victim, and makes some recommendations for the challenging task of building defense techniques against this attack.
Abstract: In recent years, variants of denial of service (DoS) attacks that use low-rate traffic have been proposed, including the Shrew attack, reduction of quality attacks, and low-rate DoS attacks against application servers (LoRDAS). All of these are flooding attacks that take advantage of a vulnerability in the victims in order to reduce the rate of the traffic. Although their implications and impact have been comprehensively studied, mainly by means of simulation, there is a need for mathematical models by which the behaviour of these sometimes complex processes can be described. In this paper, we propose a mathematical model for the LoRDAS attack. This model allows us to evaluate its performance by relating it to the configuration parameters of the attack and the dynamics of network and victim. The model is validated by comparing the performance values given against those obtained from a simulated environment. In addition, some applicability issues for the model are contributed, together with interpretation guidelines to the model's behaviour. Finally, experience with the model enables us to make some recommendations for the challenging task of building defense techniques against this attack.

Journal ArticleDOI
TL;DR: The success of detecting YASS by the proposed method indicates that a properly selected SO-domain is beneficial for steganalysis and confirms that the embedding locations are of great importance in designing a secure steganographic scheme.
Abstract: A promising steganographic method-yet another steganography scheme (YASS)-was designed to resist blind steganalysis via embedding data in randomized locations. In addition to a concrete realization which is named the YASS algorithm in this paper, a few strategies were proposed to work with the YASS algorithm in order to enhance the data embedding rate and security. In this work, the YASS algorithm and these strategies, together referred to as YASS, have been analyzed from a warden's perspective. It is observed that the embedding locations chosen by YASS are not randomized enough and the YASS embedding scheme causes detectable artifacts. We present a steganalytic method to attack the YASS algorithm, which is facilitated by a specifically selected steganalytic observation domain (SO-domain), a term defined as the domain from which steganalytic features are extracted. The proposed SO-domain does not exactly coincide with, but partially accesses, the domain where the YASS algorithm embeds data. Statistical features generated from the SO-domain have demonstrated high effectiveness in detecting the YASS algorithm and identifying some embedding parameters. In addition, we discuss how to defeat the above-mentioned strategies of YASS and demonstrate a countermeasure to a new case in which the randomness of the embedding locations is enhanced. The success of detecting YASS by the proposed method indicates that a properly selected SO-domain is beneficial for steganalysis and confirms that the embedding locations are of great importance in designing a secure steganographic scheme.

Journal ArticleDOI
TL;DR: Two simple-but-effective fast subspace learning and image projection methods, fast Haar transform (FHT)-based principal component analysis and FHT-based spectral regression discriminant analysis, are proposed.
Abstract: Subspace learning is the process of finding a proper feature subspace and then projecting high-dimensional data onto the learned low-dimensional subspace. The projection operation requires many floating-point multiplications and additions, which makes the projection process computationally expensive. To tackle this problem, this paper proposes two simple-but-effective fast subspace learning and image projection methods, fast Haar transform (FHT)-based principal component analysis and FHT-based spectral regression discriminant analysis. The advantages of these two methods result from employing both the FHT for subspace learning and the integral vector for feature extraction. Experimental results on three face databases demonstrate their effectiveness and efficiency.

Journal ArticleDOI
TL;DR: Shannon entropy analysis shows that even if the biometric ciphertexts and some biometric traits are disclosed, the new constructions can still consistently achieve data security and biometric privacy.
Abstract: Single biometric cryptosystems were developed to obtain win-win scenarios for security and privacy. They are seriously threatened by spoof attacks, in which a forged biometric copy or artificially recreated biometric data of a legitimate user may be used to spoof a system. Meanwhile, feature alignment and quantization greatly degrade the accuracy of single biometric cryptosystems. In this paper, by trying to bind multiple biometrics to cryptography, a cryptosystem named the multibiometric cryptosystem (MBC) is demonstrated from a theoretical point of view. First, an MBC with two fusion levels, fusion at the biometric level and fusion at the cryptographic level, is formally defined. Then four models, namely the biometric fusion model, the MN-split model, the nonsplit model, and the package model, adopted at those two levels for fusion are presented. Shannon entropy analysis shows that even if the biometric ciphertexts and some biometric traits are disclosed, the new constructions can still consistently achieve data security and biometric privacy. In addition, the achievable accuracy is analyzed in terms of false acceptance rate/false rejection rate for each model. Finally, a comparison of the relative advantages and disadvantages of the proposed models is discussed.

Journal ArticleDOI
TL;DR: This paper theoretically models the fusion of IDSs for the purpose of demonstrating the improvement in performance, supplemented with an empirical evaluation, and illustrates that the performance of the detector is better when the fusion threshold is determined according to the Chebyshev inequality.
Abstract: Various intrusion detection systems (IDSs) reported in the literature have shown distinct preferences for detecting a certain class of attack with improved accuracy, while performing moderately on the other classes. In view of the enormous computing power available in the present-day processors, deploying multiple IDSs in the same network to obtain best-of-breed solutions has been attempted earlier. The paper presented here addresses the problem of optimizing the performance of IDSs using sensor fusion with multiple sensors. The trade-off between the detection rate and false alarms with multiple sensors is highlighted. It is illustrated that the performance of the detector is better when the fusion threshold is determined according to the Chebyshev inequality. In the proposed data-dependent decision (DD) fusion method, the performance optimization of individual IDSs is first addressed. A neural network supervised learner has been designed to determine the weights of individual IDSs depending on their reliability in detecting a certain attack. The final stage of this DD fusion architecture is a sensor fusion unit which does the weighted aggregation in order to make an appropriate decision. This paper theoretically models the fusion of IDSs for the purpose of demonstrating the improvement in performance, supplemented with the empirical evaluation.
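The Chebyshev-based thresholding idea can be made concrete with a small numerical illustration: since P(|S - mu| >= k*sigma) <= 1/k^2 for any score distribution, setting the fusion threshold at mu + sigma/sqrt(alpha) bounds the false-alarm rate by alpha. The score distribution below is invented purely for the demonstration and is not the paper's data.

```python
# Hedged illustration of a Chebyshev-based fusion threshold.
import numpy as np

rng = np.random.default_rng(1)
benign_scores = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # toy fused scores
mu, sigma = benign_scores.mean(), benign_scores.std()

alpha = 0.01                                   # target false-alarm bound
threshold = mu + sigma / np.sqrt(alpha)        # Chebyshev guarantees P(S > thr) <= alpha
empirical_fa = np.mean(benign_scores > threshold)
print(f"threshold={threshold:.2f}, empirical false-alarm rate={empirical_fa:.4f} <= {alpha}")
```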

Journal ArticleDOI
TL;DR: A joint digital watermarking scheme using the Chinese remainder theorem is proposed for the multiparty multilevel DRM architecture; it takes care of the security concerns of all parties involved.
Abstract: Multiparty multilevel digital rights management (DRM) architecture involving several levels of distributors in between an owner and a consumer has been suggested as an alternative business model to the traditional two-party (buyer-seller) DRM architecture for digital content delivery. In the two-party DRM architecture, cryptographic techniques are used for secure delivery of the content, and watermarking techniques are used for protecting the rights of the seller and the buyer. The cryptographic protocols used in the two-party case for secure content delivery can be directly applied to the multiparty multilevel case. However, the watermarking protocols used in the two-party case may not directly carry over to the multiparty multilevel case, as they need to address the simultaneous security concerns of multiple parties such as the owner, multiple levels of distributors, and consumers. Towards this, in this paper, we propose a joint digital watermarking scheme using the Chinese remainder theorem for the multiparty multilevel DRM architecture. In the proposed scheme, watermark information is jointly created by all the parties involved; then a watermark signal is generated out of it and embedded into the content. This scheme takes care of the security concerns of all parties involved. Further, in the event of finding an illegal copy of the content, the violator(s) can be traced back.
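A hedged sketch of the Chinese-remainder packing idea only (not the paper's full watermarking protocol): each party contributes a watermark component modulo its own pairwise-coprime modulus, CRT combines the components into one joint value, and each component is recovered by reduction modulo that party's modulus. The moduli and components are illustrative; Python 3.8+ is assumed for pow(x, -1, m) and math.prod.

```python
# Hedged sketch: combining per-party watermark components with the CRT.
from math import prod

def crt_combine(residues, moduli):
    """Solve x = residues[i] (mod moduli[i]) for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)      # pow(Mi, -1, m): modular inverse
    return x % M

moduli = [251, 257, 263]                  # one coprime modulus per party (illustrative)
parts = [17, 200, 45]                     # each party's watermark component
joint = crt_combine(parts, moduli)
print(joint)
print([joint % m for m in moduli])        # recovers [17, 200, 45]
```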

Journal ArticleDOI
TL;DR: This work presents a more direct and parallel processing alternative using field-programmable gate arrays (FPGAs), offering an opportunity to increase speed and potentially alter the form factor of the resulting system.
Abstract: Iris recognition is one of the most accurate biometric methods in use today. However, the iris recognition algorithms are currently implemented on general purpose sequential processing systems, such as generic central processing units (CPUs). In this work, we present a more direct and parallel processing alternative using field-programmable gate arrays (FPGAs), offering an opportunity to increase speed and potentially alter the form factor of the resulting system. Within the means of this project, the most time-consuming operations of a modern iris recognition algorithm are deconstructed and directly parallelized. In particular, portions of iris segmentation, template creation, and template matching are parallelized on an FPGA-based system, with a demonstrated speedup of 9.6, 324, and 19 times, respectively, when compared to a state-of-the-art CPU-based version. Furthermore, the parallel algorithm on our FPGA also greatly outperforms our calculated theoretical best Intel CPU design. Finally, on a state-of-the-art FPGA, we conclude that a full implementation of a very fast iris recognition algorithm is more than feasible, providing a potential small form-factor solution.

Journal ArticleDOI
TL;DR: The novel iris recognition method shows good performance when applied to a large database of irises, provides reliable identification and verification, and allows for quick analysis and comparison of iris samples.
Abstract: A novel iris recognition method is presented. In the method, the iris features are extracted using the oriented separable wavelet transforms (directionlets) and they are compared in terms of a weighted Hamming distance. The feature extraction and comparison are shift-, size-, and rotation-invariant to the location of iris in the acquired image. The generated iris code is binary, whose length is fixed (and therefore commensurable), independent of the iris image, and comparatively short. The novel method shows a good performance when applied to a large database of irises and provides reliable identification and verification. At the same time, it preserves conceptual and computational simplicity and allows for a quick analysis and comparison of iris samples.
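A minimal sketch of the comparison step only, under assumed inputs: a weighted Hamming distance between two fixed-length binary iris codes, with per-bit weights. The directionlet-based feature extraction and the actual weighting scheme of the paper are not reproduced; the codes and weights below are illustrative.

```python
# Hedged sketch: weighted Hamming distance between binary iris codes.
import numpy as np

def weighted_hamming(code_a, code_b, weights):
    """code_a, code_b: binary arrays of equal length; weights: nonnegative."""
    disagree = (code_a != code_b).astype(np.float64)
    return np.sum(weights * disagree) / np.sum(weights)

rng = np.random.default_rng(2)
a = rng.integers(0, 2, size=256)
b = a.copy()
b[:32] ^= 1                               # flip a block of bits
w = rng.random(256)
print(weighted_hamming(a, b, w))          # small distance for similar codes
```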