
Showing papers in "IEEE Transactions on Information Forensics and Security in 2012"


Journal ArticleDOI
TL;DR: A novel general strategy for building steganography detectors for digital images by assembling a rich model of the noise component as a union of many diverse submodels formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and nonlinear high-pass filters.
Abstract: We describe a novel general strategy for building steganography detectors for digital images. The process starts with assembling a rich model of the noise component as a union of many diverse submodels formed by joint distributions of neighboring samples from quantized image noise residuals obtained using linear and nonlinear high-pass filters. In contrast to previous approaches, we make the model assembly a part of the training process driven by samples drawn from the corresponding cover- and stego-sources. Ensemble classifiers are used to assemble the model as well as the final steganalyzer due to their low computational complexity and ability to efficiently work with high-dimensional feature spaces and large training sets. We demonstrate the proposed framework on three steganographic algorithms designed to hide messages in images represented in the spatial domain: HUGO, the edge-adaptive algorithm by Luo, and optimally coded ternary ±1 embedding. For each algorithm, we apply a simple submodel-selection technique to increase the detection accuracy per model dimensionality and show how the detection saturates with increasing complexity of the rich model. By observing the differences between how different submodels engage in detection, an interesting interplay between the embedding and detection is revealed. Steganalysis built around rich image models combined with ensemble classifiers is a promising direction towards automatizing steganalysis for a wide spectrum of steganographic schemes.
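
As a rough illustration of the kind of submodel the rich model is built from (not the authors' exact feature set), the sketch below computes one spatial-domain submodel: a first-order horizontal noise residual, quantized and truncated, followed by a joint histogram of neighboring residual samples. The quantization step, truncation threshold, and co-occurrence order are illustrative choices.

```python
import numpy as np

def residual_cooccurrence(img, q=1.0, T=2, order=4):
    """Toy single submodel: first-order horizontal noise residual, quantized and
    truncated to [-T, T], then a joint histogram of `order` adjacent samples."""
    x = img.astype(np.float64)
    r = x[:, 1:] - x[:, :-1]                        # linear high-pass residual
    r = np.clip(np.round(r / q), -T, T).astype(int)
    bins = 2 * T + 1
    hist = np.zeros((bins,) * order)
    # Groups of `order` horizontally adjacent residual samples index the histogram.
    idx = [r[:, k:r.shape[1] - order + 1 + k] + T for k in range(order)]
    np.add.at(hist, tuple(i.ravel() for i in idx), 1)
    return hist.ravel() / hist.sum()                # normalized feature vector

# Example on a random 8-bit grayscale image: (2T+1)**order = 625 dimensions.
feat = residual_cooccurrence(np.random.randint(0, 256, (256, 256)))
print(feat.shape)
```

In the paper, many such submodels obtained from different linear and nonlinear filters are combined, and the ensemble classifier selects among them during training.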

1,553 citations


Journal ArticleDOI
TL;DR: This paper proposes an alternative and well-known machine learning tool, ensemble classifiers implemented as random forests, and argues that they are ideally suited for steganalysis.
Abstract: Today, the most accurate steganalysis methods for digital media are built as supervised classifiers on feature vectors extracted from the media. The tool of choice for machine learning seems to be the support vector machine (SVM). In this paper, we propose an alternative and well-known machine learning tool, ensemble classifiers implemented as random forests, and argue that they are ideally suited for steganalysis. Ensemble classifiers scale much more favorably w.r.t. the number of training examples and the feature dimensionality with performance comparable to the much more complex SVMs. The significantly lower training complexity opens up the possibility for the steganalyst to work with rich (high-dimensional) cover models and train on larger training sets, two key elements that appear necessary to reliably detect modern steganographic algorithms. Ensemble classification is portrayed here as a powerful developer tool that allows fast construction of steganography detectors with markedly improved detection accuracy across a wide range of embedding methods. The power of the proposed framework is demonstrated on three steganographic methods that hide messages in JPEG images.
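
The abstract does not spell out the base learners, but the ensemble it describes is commonly instantiated with simple Fisher linear discriminants trained on random feature subspaces and fused by majority voting. The sketch below is a minimal, hypothetical version of that idea (class name, dimensions, and the synthetic data are illustrative, not the authors' implementation).

```python
import numpy as np

class FLDEnsemble:
    """Illustrative ensemble: L Fisher linear discriminants, each trained on a
    random subspace of a high-dimensional feature space, fused by voting."""
    def __init__(self, n_learners=51, subspace_dim=200, seed=0):
        self.L, self.d_sub = n_learners, subspace_dim
        self.rng = np.random.default_rng(seed)
        self.learners = []          # (feature indices, weight vector, threshold)

    def fit(self, X_cover, X_stego):
        d = X_cover.shape[1]
        for _ in range(self.L):
            idx = self.rng.choice(d, size=min(self.d_sub, d), replace=False)
            C, S = X_cover[:, idx], X_stego[:, idx]
            mC, mS = C.mean(0), S.mean(0)
            Sw = np.cov(C, rowvar=False) + np.cov(S, rowvar=False)
            Sw += 1e-6 * np.eye(Sw.shape[0])          # regularization
            w = np.linalg.solve(Sw, mS - mC)          # FLD direction
            thr = 0.5 * (mC + mS) @ w                 # midpoint threshold
            self.learners.append((idx, w, thr))
        return self

    def predict(self, X):
        votes = np.zeros(len(X))
        for idx, w, thr in self.learners:
            votes += (X[:, idx] @ w > thr)
        return (votes > self.L / 2).astype(int)       # 1 = stego

# Usage with synthetic high-dimensional features (cover vs. slightly shifted stego).
rng = np.random.default_rng(1)
cover = rng.normal(size=(500, 1000))
stego = rng.normal(loc=0.15, size=(500, 1000))
clf = FLDEnsemble().fit(cover[:400], stego[:400])
acc = 0.5 * ((clf.predict(cover[400:]) == 0).mean() + (clf.predict(stego[400:]) == 1).mean())
print(f"test accuracy ~ {acc:.2f}")
```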

967 citations


Journal ArticleDOI
TL;DR: This work proposes a novel scheme for separable reversible data hiding in encrypted images, in which the original content can be recovered without error by exploiting the spatial correlation in natural images when the amount of additional data is not too large.
Abstract: This work proposes a novel scheme for separable reversible data hiding in encrypted images. In the first phase, a content owner encrypts the original uncompressed image using an encryption key. Then, a data-hider may compress the least significant bits of the encrypted image using a data-hiding key to create a sparse space to accommodate some additional data. With an encrypted image containing additional data, if a receiver has the data-hiding key, he can extract the additional data though he does not know the image content. If the receiver has the encryption key, he can decrypt the received data to obtain an image similar to the original one, but cannot extract the additional data. If the receiver has both the data-hiding key and the encryption key, he can extract the additional data and recover the original content without any error by exploiting the spatial correlation in natural images when the amount of additional data is not too large.

626 citations


Journal ArticleDOI
TL;DR: This paper created a challenging real-world copy-move dataset, and a software framework for systematic image manipulation, and examined the 15 most prominent feature sets, finding that the keypoint-based features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA, and Zernike features, perform very well.
Abstract: A copy-move forgery is created by copying and pasting content within the same image, and potentially postprocessing it. In recent years, the detection of copy-move forgeries has become one of the most actively researched topics in blind image forensics. A considerable number of different algorithms have been proposed focusing on different types of postprocessed copies. In this paper, we aim to answer which copy-move forgery detection algorithms and processing steps (e.g., matching, filtering, outlier detection, affine transformation estimation) perform best in various postprocessing scenarios. The focus of our analysis is to evaluate the performance of previously proposed feature sets. We achieve this by casting existing algorithms in a common pipeline. In this paper, we examined the 15 most prominent feature sets. We analyzed the detection performance on a per-image basis and on a per-pixel basis. We created a challenging real-world copy-move dataset, and a software framework for systematic image manipulation. Experiments show that the keypoint-based features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA, and Zernike features, perform very well. These feature sets exhibit the best robustness against various noise sources and downsampling, while reliably identifying the copied regions.
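
A minimal sketch of the common block-based pipeline evaluated here (overlapping blocks, truncated DCT features, lexicographic sorting, matching with a minimum spatial offset). Block size, stride, and thresholds are illustrative, and real detectors add filtering, outlier removal, and affine-transformation estimation.

```python
import numpy as np
from scipy.fft import dctn

def copy_move_candidates(img, B=16, n_coeffs=16, min_offset=32, feat_tol=2.0):
    """Block-based copy-move sketch: low-frequency DCT features of overlapping
    blocks, lexicographic sorting, and matching of adjacent entries whose
    spatial distance exceeds `min_offset`."""
    x = img.astype(np.float64)
    H, W = x.shape
    feats, coords = [], []
    for i in range(0, H - B + 1, 2):               # stride 2 to limit cost
        for j in range(0, W - B + 1, 2):
            c = dctn(x[i:i+B, j:j+B], norm='ortho')
            feats.append(np.round(c[:4, :4].ravel()[:n_coeffs]))  # low-frequency corner
            coords.append((i, j))
    feats, coords = np.array(feats), np.array(coords)
    order = np.lexsort(feats.T[::-1])              # lexicographic sort of feature rows
    matches = []
    for a, b in zip(order[:-1], order[1:]):        # compare neighbors in sorted order
        if np.abs(feats[a] - feats[b]).max() <= feat_tol:
            if np.hypot(*(coords[a] - coords[b])) >= min_offset:
                matches.append((tuple(coords[a]), tuple(coords[b])))
    return matches

# Usage: paste a region of a random image onto itself and look for matched blocks.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128)).astype(float)
img[64:96, 64:96] = img[16:48, 16:48]              # simulated copy-move
print(len(copy_move_candidates(img)), "candidate block pairs")
```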

623 citations


Journal ArticleDOI
TL;DR: The security of HASBE is formally proved based on security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt and its performance and computational complexity are formally analyzed.
Abstract: Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns on outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper, we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility and fine-grained access control in supporting compound attributes of ASBE. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt and analyze its performance and computational complexity. We implement our scheme and show that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing with comprehensive experiments.

497 citations


Journal ArticleDOI
TL;DR: It is shown that an alternative to dynamic face matcher selection is to train face recognition algorithms on datasets that are evenly distributed across demographics, as this approach offers consistently high accuracy across all cohorts.
Abstract: This paper studies the influence of demographics on the performance of face recognition algorithms. The recognition accuracies of six different face recognition algorithms (three commercial, two nontrainable, and one trainable) are computed on a large scale gallery that is partitioned so that each partition consists entirely of specific demographic cohorts. Eight total cohorts are isolated based on gender (male and female), race/ethnicity (Black, White, and Hispanic), and age group (18-30, 30-50, and 50-70 years old). Experimental results demonstrate that both commercial and the nontrainable algorithms consistently have lower matching accuracies on the same cohorts (females, Blacks, and age group 18-30) than the remaining cohorts within their demographic. Additional experiments investigate the impact of the demographic distribution in the training set on the performance of a trainable face recognition algorithm. We show that the matching accuracy for race/ethnicity and age cohorts can be improved by training exclusively on that specific cohort. Operationally, this leads to a scenario, called dynamic face matcher selection, where multiple face recognition algorithms (each trained on different demographic cohorts) are available for a biometric system operator to select based on the demographic information extracted from a probe image. This procedure should lead to improved face recognition accuracy in many intelligence and law enforcement face recognition scenarios. Finally, we show that an alternative to dynamic face matcher selection is to train face recognition algorithms on datasets that are evenly distributed across demographics, as this approach offers consistently high accuracy across all cohorts.

426 citations


Journal ArticleDOI
TL;DR: A forensic algorithm to discriminate between original and forged regions in JPEG images, under the hypothesis that the tampered image presents a double JPEG compression, either aligned (A-DJPG) or nonaligned (NA-DJPG).
Abstract: In this paper, we propose a forensic algorithm to discriminate between original and forged regions in JPEG images, under the hypothesis that the tampered image presents a double JPEG compression, either aligned (A-DJPG) or nonaligned (NA-DJPG). Unlike previous approaches, the proposed algorithm does not need to manually select a suspect region in order to test the presence or the absence of double compression artifacts. Based on an improved and unified statistical model characterizing the artifacts that appear in the presence of either A-DJPG or NA-DJPG, the proposed algorithm automatically computes a likelihood map indicating the probability for each 8 × 8 discrete cosine transform block of being doubly compressed. The validity of the proposed approach has been assessed by evaluating the performance of a detector based on thresholding the likelihood map, considering different forensic scenarios. The effectiveness of the proposed method is also confirmed by tests carried out on realistic tampered images. An interesting property of the proposed Bayesian approach is that it can be easily extended to work with traces left by other kinds of processing.

402 citations


Journal ArticleDOI
TL;DR: A forensic tool able to discriminate between original and forged regions in an image captured by a digital camera is presented, based on a new feature measuring the presence of demosaicking artifacts at a local level and a new statistical model allowing the tampering probability of each 2 × 2 image block to be derived without requiring a priori knowledge of the position of the forged region.
Abstract: In this paper, a forensic tool able to discriminate between original and forged regions in an image captured by a digital camera is presented. We make the assumption that the image is acquired using a Color Filter Array, and that tampering removes the artifacts due to the demosaicking algorithm. The proposed method is based on a new feature measuring the presence of demosaicking artifacts at a local level, and on a new statistical model allowing the tampering probability of each 2 × 2 image block to be derived without requiring a priori knowledge of the position of the forged region. Experimental results on different cameras equipped with different demosaicking algorithms demonstrate both the validity of the theoretical model and the effectiveness of our scheme.
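
The sketch below illustrates the underlying intuition rather than the paper's exact feature: after bilinear demosaicking from a Bayer CFA, the prediction error of a simple interpolation kernel is systematically smaller on the interpolated lattice than on the acquired one, so a per-block log-ratio of the two error energies reacts to the presence (or removal) of demosaicking artifacts. The Bayer layout, kernel, and block size are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

# Bilinear cross kernel: predicts a pixel from its 4 horizontal/vertical neighbors.
KERNEL = np.array([[0, .25, 0], [.25, 0, .25], [0, .25, 0]])

def demosaicking_feature(green, block=32):
    """Per-block log-ratio of squared prediction errors on the two diagonal lattices
    of an assumed Bayer pattern: far from 0 -> demosaicking traces present,
    near 0 -> traces absent (e.g., erased by tampering)."""
    g = green.astype(np.float64)
    err = (g - convolve(g, KERNEL, mode='reflect')) ** 2
    ii, jj = np.indices(g.shape)
    lattice = (ii + jj) % 2                     # 0: assumed acquired, 1: interpolated
    H, W = g.shape
    feat = np.zeros((H // block, W // block))
    for bi in range(H // block):
        for bj in range(W // block):
            sl = (slice(bi*block, (bi+1)*block), slice(bj*block, (bj+1)*block))
            e, m = err[sl], lattice[sl]
            feat[bi, bj] = np.log((e[m == 0].mean() + 1e-9) / (e[m == 1].mean() + 1e-9))
    return feat

# Usage: a synthetic green plane whose "interpolated" lattice really was produced by
# bilinear interpolation, versus plain noise carrying no CFA trace.
rng = np.random.default_rng(0)
raw = rng.normal(size=(128, 128))
ii, jj = np.indices(raw.shape)
demo = raw.copy()
interp = convolve(raw, KERNEL, mode='reflect')
demo[(ii + jj) % 2 == 1] = interp[(ii + jj) % 2 == 1]
f_demo, f_raw = demosaicking_feature(demo), demosaicking_feature(raw)
print("demosaicked, median |feature|:", round(float(np.median(np.abs(f_demo))), 2))
print("no CFA trace, median |feature|:", round(float(np.median(np.abs(f_raw))), 2))
```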

357 citations


Journal ArticleDOI
TL;DR: Only a few of the proposed ECG recognition algorithms appear to be able to support performance improvement due to multiple training sessions, and only three of these algorithms produced equal error rates (EERs) in the single digits, including an EER of 5.5% using a method proposed by us.
Abstract: The electrocardiogram (ECG) is an emerging biometric modality that has seen about 13 years of development in peer-reviewed literature, and as such deserves a systematic review and discussion of the associated methods and findings. In this paper, we review most of the techniques that have been applied to the use of the electrocardiogram for biometric recognition. In particular, we categorize the methodologies based on the features and the classification schemes. Finally, a comparative analysis of the authentication performance of a few of the ECG biometric systems is presented, using our in-house database. The comparative study includes the cases where training and testing data come from the same and different sessions (days). The authentication results show that most of the algorithms that have been proposed for ECG-based biometrics perform well when the training and testing data come from the same session. However, when training and testing data come from different sessions, a performance degradation occurs. Multiple training sessions were incorporated to diminish the loss in performance. That notwithstanding, only a few of the proposed ECG recognition algorithms appear to be able to support performance improvement due to multiple training sessions. Only three of these algorithms produced equal error rates (EERs) in the single digits, including an EER of 5.5% using a method proposed by us.

321 citations


Journal ArticleDOI
TL;DR: The world's largest gait database, the “OU-ISIR Gait Database, Large Population Dataset”, is described, together with its application to a statistically reliable performance evaluation of vision-based gait recognition.
Abstract: This paper describes the world's largest gait database, the “OU-ISIR Gait Database, Large Population Dataset”, and its application to a statistically reliable performance evaluation of vision-based gait recognition. Whereas existing gait databases include at most 185 subjects, we construct a larger gait database that includes 4007 subjects (2135 males and 1872 females) with ages ranging from 1 to 94 years. The dataset allows us to determine statistically significant performance differences between currently proposed gait features. In addition, the dependences of gait-recognition performance on gender and age group are investigated, and the results provide several novel insights, such as the gradual change in recognition performance with human growth.

313 citations


Journal ArticleDOI
TL;DR: This paper investigates joint relay and jammer selection in two-way cooperative networks, consisting of two sources, a number of intermediate nodes, and one eavesdropper, with the constraints of physical-layer security and introduces a hybrid scheme to switch between jamming and nonjamming modes.
Abstract: In this paper, we investigate joint relay and jammer selection in two-way cooperative networks, consisting of two sources, a number of intermediate nodes, and one eavesdropper, with the constraints of physical-layer security. Specifically, the proposed algorithms select two or three intermediate nodes to enhance security against the malicious eavesdropper. The first selected node operates in the conventional relay mode and assists the sources to deliver their data to the corresponding destinations using an amplify-and-forward protocol. The second and third nodes are used in different communication phases as jammers in order to create intentional interference upon the malicious eavesdropper. First, we find that in a topology where the intermediate nodes are randomly and sparsely distributed, the proposed schemes with cooperative jamming outperform the conventional nonjamming schemes within a certain transmitted power regime. We also find that, in the scenario where the intermediate nodes gather as a close cluster, the jamming schemes may be less effective than their nonjamming counterparts. Therefore, we introduce a hybrid scheme to switch between jamming and nonjamming modes. Simulation results validate our theoretical analysis and show that the hybrid switching scheme further improves the secrecy rate.
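
A toy version of the selection criterion, under heavy simplifications (one-way transmission, a crude amplify-and-forward SNR proxy, no jamming power split): pick the relay maximizing an instantaneous secrecy-rate estimate and a jammer with the strongest channel to the eavesdropper. All channel gains and formulas are illustrative, not the paper's exact expressions.

```python
import numpy as np

def select_relay_and_jammer(h_sr, h_rd, h_re, P=1.0, sigma2=1.0):
    """Toy selection: relay = node maximizing a simplified secrecy rate
    log2(1+SNR_destination) - log2(1+SNR_eavesdropper); jammer = another node
    with the strongest channel to the eavesdropper.
    h_sr, h_rd, h_re: per-node gains source->relay, relay->destination,
    relay->eavesdropper (magnitudes squared)."""
    snr_d = P * np.minimum(h_sr, h_rd) / sigma2     # crude AF bottleneck proxy
    snr_e = P * np.minimum(h_sr, h_re) / sigma2
    rs = np.maximum(0.0, np.log2(1 + snr_d) - np.log2(1 + snr_e))
    relay = int(np.argmax(rs))
    h_je = h_re.copy()
    h_je[relay] = -np.inf                           # jammer must differ from the relay
    jammer = int(np.argmax(h_je))
    return relay, jammer, rs[relay]

# Usage with random Rayleigh-fading gains for 10 intermediate nodes.
rng = np.random.default_rng(0)
gains = {k: rng.exponential(size=10) for k in ("sr", "rd", "re")}
relay, jammer, rate = select_relay_and_jammer(gains["sr"], gains["rd"], gains["re"])
print(f"relay node {relay}, jammer node {jammer}, secrecy rate ~ {rate:.2f} b/s/Hz")
```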

Journal ArticleDOI
TL;DR: A feature-level fusion framework to simultaneously protect multiple templates of a user as a single secure sketch, evaluated on two different databases, each containing the three most popular biometric modalities, namely, fingerprint, iris, and face.
Abstract: Multibiometric systems are being increasingly deployed in many large-scale biometric applications (e.g., FBI-IAFIS, UIDAI system in India) because they have several advantages such as lower error rates and larger population coverage compared to unibiometric systems. However, multibiometric systems require storage of multiple biometric templates (e.g., fingerprint, iris, and face) for each user, which results in increased risk to user privacy and system security. One method to protect individual templates is to store only the secure sketch generated from the corresponding template using a biometric cryptosystem. This requires storage of multiple sketches. In this paper, we propose a feature-level fusion framework to simultaneously protect multiple templates of a user as a single secure sketch. Our main contributions include: (1) practical implementation of the proposed feature-level fusion framework using two well-known biometric cryptosystems, namely, fuzzy vault and fuzzy commitment, and (2) detailed analysis of the trade-off between matching accuracy and security in the proposed multibiometric cryptosystems based on two different databases (one real and one virtual multimodal database), each containing the three most popular biometric modalities, namely, fingerprint, iris, and face. Experimental results show that both the multibiometric cryptosystems proposed here have higher security and matching performance compared to their unibiometric counterparts.
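
For concreteness, here is a toy version of one of the two biometric cryptosystems the framework builds on, fuzzy commitment: a random key is encoded with an error-correcting code, XORed with the (binarized, possibly fused) biometric string to form helper data, and only a hash of the key is stored. A simple repetition code stands in for the stronger codes used in practice; all sizes and error rates are illustrative.

```python
import hashlib
import numpy as np

R = 15  # repetition factor: corrects up to 7 bit errors per key bit

def commit(key_bits, bio_bits):
    """Fuzzy commitment: helper = repetition_code(key) XOR biometric,
    plus a hash of the key for verification."""
    codeword = np.repeat(key_bits, R)
    helper = codeword ^ bio_bits
    return helper, hashlib.sha256(key_bits.tobytes()).hexdigest()

def decommit(helper, bio_bits_noisy, key_hash):
    """Recover the key from a fresh (noisy) biometric reading and verify it."""
    codeword_noisy = helper ^ bio_bits_noisy
    key_rec = (codeword_noisy.reshape(-1, R).sum(axis=1) > R // 2).astype(np.uint8)
    ok = hashlib.sha256(key_rec.tobytes()).hexdigest() == key_hash
    return key_rec, ok

# Usage: 128-bit key, fused biometric string of 128*R bits, 5% intra-user noise.
rng = np.random.default_rng(0)
key = rng.integers(0, 2, 128, dtype=np.uint8)
bio = rng.integers(0, 2, 128 * R, dtype=np.uint8)            # enrollment reading
helper, key_hash = commit(key, bio)
noise = (rng.random(bio.size) < 0.05).astype(np.uint8)       # fresh noisy reading
print("genuine accepted:", decommit(helper, bio ^ noise, key_hash)[1])
imposter = rng.integers(0, 2, 128 * R, dtype=np.uint8)
print("imposter accepted:", decommit(helper, imposter, key_hash)[1])
```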

Journal ArticleDOI
TL;DR: Experimental results reveal that the proposed method offers lower distortion than DE by providing more compact neighborhood sets and allowing digits in any notational system to be embedded, and that it is secure under the detection of some well-known steganalysis techniques.
Abstract: This paper proposes a new data-hiding method based on pixel pair matching (PPM). The basic idea of PPM is to use the values of a pixel pair as a reference coordinate, and search a coordinate in the neighborhood set of this pixel pair according to a given message digit. The pixel pair is then replaced by the searched coordinate to conceal the digit. Exploiting modification direction (EMD) and diamond encoding (DE) are two data-hiding methods proposed recently based on PPM. The maximum capacity of EMD is 1.161 bpp and DE extends the payload of EMD by embedding digits in a larger notational system. The proposed method offers lower distortion than DE by providing more compact neighborhood sets and allowing digits in any notational system to be embedded. Compared with the optimal pixel adjustment process (OPAP) method, the proposed method always has lower distortion for various payloads. Experimental results reveal that the proposed method not only provides better performance than those of OPAP and DE, but also is secure under the detection of some well-known steganalysis techniques.
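
As background for PPM, the EMD baseline mentioned above is easy to state: for a pixel pair (x, y), the extraction function f(x, y) = (x + 2y) mod 5 carries one base-5 digit, and at most one of the two pixels is changed by ±1 to embed it. The sketch below implements that baseline (not the proposed PPM method); pixel clamping at the range boundaries is ignored in this toy.

```python
import numpy as np

def emd_embed_pair(x, y, digit):
    """Embed one base-5 digit into the pixel pair (x, y) using EMD:
    f(x, y) = (x + 2y) mod 5; change at most one pixel by +-1."""
    d = (digit - (x + 2 * y)) % 5
    if d == 0:
        return x, y          # nothing to change
    if d == 1:
        return x + 1, y      # +1 on x shifts f by +1
    if d == 2:
        return x, y + 1      # +1 on y shifts f by +2
    if d == 3:
        return x, y - 1      # -1 on y shifts f by -2 = +3 (mod 5)
    return x - 1, y          # d == 4: -1 on x shifts f by -1 = +4 (mod 5)

def emd_extract_pair(x, y):
    return (x + 2 * y) % 5

# Usage: embed the base-5 digits of a message into consecutive pixel pairs.
rng = np.random.default_rng(0)
pixels = rng.integers(1, 255, size=10)            # avoid under/overflow in the toy
digits = rng.integers(0, 5, size=5)
stego = pixels.copy()
for k, d in enumerate(digits):
    stego[2*k], stego[2*k + 1] = emd_embed_pair(pixels[2*k], pixels[2*k + 1], int(d))
recovered = [int(emd_extract_pair(stego[2*k], stego[2*k + 1])) for k in range(5)]
print("embedded:", list(digits), "recovered:", recovered)
print("max per-pixel change:", int(np.abs(stego - pixels).max()))
```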

Journal ArticleDOI
TL;DR: This paper derives a suboptimal beamforming scheme based on a Markov bound, which performs reasonably well; the results generalize the cases of perfect channel state information of the eavesdropper channel as well as of no such information.
Abstract: Secrecy on the physical layer is a promising technique to simplify the overall cross-layer secrecy concept. In many recent works on the multiple antenna wiretap channel, perfect channel state information to the intended receiver as well as to the passive eavesdropper is assumed. In this paper, the transmitter has only partial information about the channel to the eavesdropper, but full information on the main channel to the intended receiver. The applied channel model is the flat-fading multiple-input single-output wiretap channel. We minimize the outage probability of secure transmission under single-stream beamforming and the use of artificial noise in the null space of the main channel. Furthermore, we derive a suboptimal beamforming scheme based on a Markov bound, which performs reasonably well. The results generalize the cases of perfect channel state information of the eavesdropper channel as well as of no such information. Numerical simulations illustrate the secrecy outage probability over the degree of channel knowledge and confirm the theoretical results.

Journal ArticleDOI
TL;DR: Theoretical analysis shows that the proposed CI method can remove the interference and raise the CCN value of a positive sample, and thus achieve greater CI performance; CCN values of the negative sample class with the proposed method follow the normal distribution N(0,1), so the false positive rate can be calculated.
Abstract: Sensor pattern noise (SPN) extracted from digital images has been proved to be a unique fingerprint of digital cameras. However, SPN can be contaminated largely in the frequency domain by image content and nonunique artefacts of JPEG compression, on-sensor signal transfer, sensor design, and color interpolation. The source camera identification (CI) performance based on SPN needs to be improved for small sizes of images and especially in resisting JPEG compression. Because the SPN is modelled as an additive white Gaussian noise (AWGN) in its extraction process from an image, it is reasonable to assume the camera reference SPN to be a white noise signal in order to remove the interference mentioned above. The noise residues (SPN) extracted from the original images are whitened first, then they are averaged to generate the camera reference SPN. Motivated by Goljan's test statistic peak to correlation energy (PCE), we propose to use correlation to circular correlation norm (CCN) as the test statistic, which can lower the false positive rate to be a half of that with PCE. Theoretical analysis shows that the proposed CI method can remove the interference and raise the CCN value of a positive sample and thus achieve greater CI performance; CCN values of the negative sample class with the proposed method follow the normal distribution N(0,1), so the false positive rate can be calculated. Compared with the existing state of the art on seven cameras and 1400 photos in total (200 for each camera), the experimental results show that the proposed CI method achieves the best receiver operating characteristic (ROC) performance among all CI methods in all cases and especially achieves much better resistance to JPEG compression than all of the existing state-of-the-art CI methods.
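
The sketch below computes the classic PCE statistic that motivates the paper's CCN (the CCN itself is a related circular-correlation statistic with a different normalization): a noise residual of the test image is circularly cross-correlated with the camera reference SPN, and the peak energy is compared with the off-peak energy. The Gaussian-filter residual extractor and the synthetic fingerprint are stand-ins for the real SPN pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Stand-in SPN extractor: image minus a Gaussian-denoised version."""
    x = img.astype(np.float64)
    return x - gaussian_filter(x, sigma)

def pce(residual, reference, exclude=5):
    """Peak-to-correlation-energy of the circular cross-correlation between a
    test residual and a camera reference SPN (both the same size)."""
    a = residual - residual.mean()
    b = reference - reference.mean()
    cc = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    pi, pj = np.unravel_index(np.argmax(np.abs(cc)), cc.shape)
    peak = cc[pi, pj]
    mask = np.ones_like(cc, dtype=bool)                    # off-peak region
    mask[max(0, pi-exclude):pi+exclude+1, max(0, pj-exclude):pj+exclude+1] = False
    return peak**2 / np.mean(cc[mask]**2)

# Usage: a shared "sensor pattern" raises PCE for images from the same camera.
rng = np.random.default_rng(0)
spn = rng.normal(size=(256, 256))                 # synthetic camera fingerprint
ref = spn / np.linalg.norm(spn)                   # (whitened/averaged in practice)
same_cam = rng.normal(size=(256, 256)) * 20 + 3 * spn + 128
other_cam = rng.normal(size=(256, 256)) * 20 + 128
print("same camera  PCE ~", round(pce(noise_residual(same_cam), ref), 1))
print("other camera PCE ~", round(pce(noise_residual(other_cam), ref), 1))
```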

Journal ArticleDOI
TL;DR: This work proposes encrypting private data and processing them under encryption to generate recommendations by introducing a semitrusted third party and using data packing, and presents a comparison protocol, the first to the best of our knowledge, that compares multiple values packed in one encryption.
Abstract: Recommender systems have become an important tool for personalization of online services. Generating recommendations in online services depends on privacy-sensitive data collected from the users. Traditional data protection mechanisms focus on access control and secure transmission, which provide security only against malicious third parties, but not the service provider. This creates a serious privacy risk for the users. In this paper, we aim to protect the private data against the service provider while preserving the functionality of the system. We propose encrypting private data and processing them under encryption to generate recommendations. By introducing a semitrusted third party and using data packing, we construct a highly efficient system that does not require the active participation of the user. We also present a comparison protocol, the first to the best of our knowledge, that compares multiple values packed in one encryption. Experiments show that this approach opens the door to generating recommendations in a privacy-preserving manner.

Journal ArticleDOI
TL;DR: Systematic experimentations show that the new approach compares favorably with state-of-the-art methods in terms of accuracy and, at the same time, provides a good protection of minutiae information and is robust against masquerade attacks.
Abstract: Although several fingerprint template protection methods have been proposed in the literature, the problem is still unsolved, since enforcing nonreversibility tends to produce an excessive drop in accuracy. Furthermore, unlike fingerprint verification, whose performance is assessed today with public benchmarks and protocols, performance of template protection approaches is often evaluated in heterogeneous scenarios, thus making it very difficult to compare existing techniques. In this paper, we propose a novel protection technique for Minutia Cylinder-Code (MCC), which is a well-known local minutiae representation. A sophisticated algorithm is designed to reverse MCC (i.e., recovering original minutiae positions and angles). Systematic experimentations show that the new approach compares favorably with state-of-the-art methods in terms of accuracy and, at the same time, provides a good protection of minutiae information and is robust against masquerade attacks.

Journal ArticleDOI
TL;DR: The proposed scheme is able to accurately estimate the grid shift and the quantization step of the DC coefficient of the primary JPEG compression, allowing one to perform a more detailed analysis of possibly forged images.
Abstract: In this paper, a simple yet reliable algorithm to detect the presence of nonaligned double JPEG compression (NA-JPEG) in compressed images is proposed. The method evaluates a single feature based on the integer periodicity of the blockwise discrete cosine transform (DCT) coefficients when the DCT is computed according to the grid of the previous JPEG compression. Even if the proposed feature is computed relying only on DC coefficient statistics, a simple threshold detector can classify NA-JPEG images with improved accuracy with respect to existing methods and on smaller image sizes, without resorting to a properly trained classifier. Moreover, the proposed scheme is able to accurately estimate the grid shift and the quantization step of the DC coefficient of the primary JPEG compression, allowing one to perform a more detailed analysis of possibly forged images.
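
A simplified version of the underlying periodicity test (not the paper's exact feature or threshold detector): for a candidate 8 × 8 grid shift and DC quantization step, measure how closely the blockwise DCT DC coefficients cluster around integer multiples of the step; a previously compressed image stands out at the true shift and step. All parameters below are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dc_periodicity(img, shift, q):
    """Score in roughly [0, 1]: how closely the blockwise DCT DC coefficients, on an
    8x8 grid anchored at `shift`, cluster around integer multiples of step `q`.
    ~1: strong periodicity (prior compression on this grid); ~0: none."""
    x = img.astype(np.float64) - 128.0
    r0, c0 = shift
    H, W = x.shape
    dcs = np.array([dctn(x[i:i+8, j:j+8], norm='ortho')[0, 0]
                    for i in range(r0, H - 8 + 1, 8)
                    for j in range(c0, W - 8 + 1, 8)])
    frac = np.abs(dcs / q - np.round(dcs / q))       # distance to nearest multiple, in [0, 0.5]
    return 1.0 - 4.0 * frac.mean()

# Simulate a previous JPEG-like DC quantization on a shifted grid, then scan shifts.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128)).astype(np.float64)
q_true, true_shift = 16.0, (3, 5)
for i in range(true_shift[0], 128 - 8 + 1, 8):
    for j in range(true_shift[1], 128 - 8 + 1, 8):
        block = dctn(img[i:i+8, j:j+8] - 128.0, norm='ortho')
        block[0, 0] = np.round(block[0, 0] / q_true) * q_true   # quantize the DC only
        img[i:i+8, j:j+8] = idctn(block, norm='ortho') + 128.0
for shift in [(0, 0), (3, 5), (4, 2)]:
    print(shift, round(dc_periodicity(img, shift, q_true), 2))  # (3, 5) stands out
```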

Journal ArticleDOI
TL;DR: This paper develops a theoretical model of the forensically detectable fingerprints that frame deletion or addition leaves behind, and develops a game theoretic framework for analyzing the interplay between a forensic investigator and a forger.
Abstract: Due to the ease with which digital information can be altered, many digital forensic techniques have been developed to authenticate multimedia content. Similarly, a number of anti-forensic operations have recently been designed to make digital forgeries undetectable by forensic techniques. However, like the digital manipulations they are designed to hide, many anti-forensic operations leave behind their own forensically detectable traces. As a result, a digital forger must balance the trade-off between completely erasing evidence of their forgery and introducing new evidence of anti-forensic manipulation. Because a forensic investigator is typically bound by a constraint on their probability of false alarm (P_fa), they must also balance a trade-off between the accuracy with which they detect forgeries and the accuracy with which they detect the use of anti-forensics. In this paper, we analyze the interaction between a forger and a forensic investigator by examining the problem of authenticating digital videos. Specifically, we study the problem of adding or deleting a sequence of frames from a digital video. We begin by developing a theoretical model of the forensically detectable fingerprints that frame deletion or addition leaves behind, then use this model to improve upon the video frame deletion or addition detection technique proposed by Wang and Farid. Next, we propose an anti-forensic technique designed to fool video forensic techniques and develop a method for detecting the use of anti-forensics. We introduce a new set of techniques for evaluating the performance of anti-forensic operations and develop a game theoretic framework for analyzing the interplay between a forensic investigator and a forger. We use these new techniques to evaluate the performance of each of our proposed forensic and anti-forensic techniques, and identify the optimal actions of both the forger and forensic investigator.

Journal ArticleDOI
TL;DR: This paper formulated the traffic analysis attack and defense problem, and defined a metric, cost coefficient of anonymization (CCA), to measure the performance of anonymization, and concluded that the proposed strategy is better than the current dummy packet padding strategy in theory.
Abstract: Anonymous communication has become a hot research topic in order to meet the increasing demand for web privacy protection. However, there are few such systems which can provide high level anonymity for web browsing. The reason is the current dominant dummy packet padding method for anonymization against traffic analysis attacks. This method inherits huge delay and bandwidth waste, which inhibits its use for web browsing. In this paper, we propose a predicted packet padding strategy to replace the dummy packet padding method for anonymous web browsing systems. The proposed strategy mitigates delay and bandwidth waste significantly on average. We formulated the traffic analysis attack and defense problem, and defined a metric, cost coefficient of anonymization (CCA), to measure the performance of anonymization. We thoroughly analyzed the problem with the characteristics of web browsing and concluded that the proposed strategy is better than the current dummy packet padding strategy in theory. We have conducted extensive experiments on two real world data sets, and the results confirmed the advantage of the proposed method.

Journal ArticleDOI
TL;DR: The intrinsically secure communications graph (iS-graph), a random graph which describes the connections that can be securely established over a large-scale network, is defined and results help clarify how the spatial density of eavesdroppers can compromise the intrinsic security of wireless networks.
Abstract: The ability to exchange secret information is critical to many commercial, governmental, and military networks. Information-theoretic security-widely accepted as the strictest notion of security-relies on channel coding techniques that exploit the inherent randomness of the propagation channels to strengthen the security of digital communications systems. Motivated by recent developments in the field, we aim to characterize the fundamental secrecy limits of wireless networks. The paper is comprised of two separate parts. In Part I, we define the intrinsically secure communications graph (iS-graph), a random graph which describes the connections that can be securely established over a large-scale network. We provide conclusive results for the local connectivity of the Poisson iS-graph, in terms of node degrees and isolation probabilities. We show how the secure connectivity of the network varies with the wireless propagation effects, the secrecy rate threshold of each link, and the noise powers of legitimate nodes and eavesdroppers. We then propose sectorized transmission and eavesdropper neutralization as viable strategies for improving the secure connectivity. Our results help clarify how the spatial density of eavesdroppers can compromise the intrinsic security of wireless networks. In Part II of the paper, we study the achievable secrecy rates and the effect of eavesdropper collusion.
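
For the simplest path-loss-only formulation with a zero secrecy-rate threshold, a node of the Poisson iS-graph is out-isolated exactly when its nearest eavesdropper is closer than its nearest legitimate neighbor, which in that special case gives the closed form λ_E / (λ_L + λ_E). The Monte Carlo sketch below checks that special case; it is a sanity check under stated assumptions, not the paper's general analysis (which also covers noise, rate thresholds, and colluding eavesdroppers).

```python
import numpy as np

def out_isolation_probability(lam_l, lam_e, radius=10.0, trials=10000, seed=0):
    """Monte Carlo estimate of P(out-isolation) for a typical node of the Poisson
    iS-graph (path-loss only, zero secrecy-rate threshold): the node is out-isolated
    iff its nearest eavesdropper is closer than its nearest legitimate neighbor."""
    rng = np.random.default_rng(seed)
    area = np.pi * radius ** 2
    isolated = 0
    for _ in range(trials):
        n_l = rng.poisson(lam_l * area)
        n_e = rng.poisson(lam_e * area)
        # Distances of uniformly placed points in a disc around the origin.
        d_l = (radius * np.sqrt(rng.random(n_l))).min() if n_l else np.inf
        d_e = (radius * np.sqrt(rng.random(n_e))).min() if n_e else np.inf
        isolated += d_e < d_l
    return isolated / trials

lam_l, lam_e = 1.0, 0.25                        # legitimate / eavesdropper densities
print("simulated :", out_isolation_probability(lam_l, lam_e))
print("analytical:", lam_e / (lam_l + lam_e))   # = 0.2 for this special case
```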

Journal ArticleDOI
TL;DR: Results show that a basic version of local binary patterns (LBP) or local derivative and directional patterns are more robust than rotation invariant uniform LBP or GLCM features to the gray level distortion when using a support vector machine with histogram oriented kernels as a classifier.
Abstract: Several papers have recently appeared in the literature which propose pseudo-dynamic features for automatic static handwritten signature verification based on the use of gray level values from signature stroke pixels. Good results have been obtained using rotation invariant uniform local binary patterns LBP_{8,1}^{riu2} plus LBP_{16,2}^{riu2} and statistical measures from gray level co-occurrence matrices (GLCM) with the MCYT and GPDS offline signature corpuses. In these studies the corpuses contain signatures written on a uniform white “nondistorting” background; however, the gray level distribution of signature strokes changes when a signature is written on a complex background, such as a check or an invoice. The aim of this paper is to measure the robustness of gray level features when they are distorted by a complex background and also to propose more stable features. A set of different checks and invoices with varying background complexity is blended with the MCYT and GPDS signatures. The blending model is based on multiplication. The signature models are trained with genuine signatures on a white background and tested with other genuine signatures and forgeries mixed with different backgrounds. Results show that a basic version of local binary patterns (LBP) or local derivative and directional patterns are more robust than rotation invariant uniform LBP or GLCM features to the gray level distortion when using a support vector machine with histogram-oriented kernels as a classifier.
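
For reference, the basic (non-rotation-invariant) LBP_{8,1} operator is simple to compute; the sketch below builds its 256-bin histogram restricted to stroke pixels of a synthetic signature blended multiplicatively with a textured background, mimicking the blending model described above. The stroke mask, image sizes, and thresholds are illustrative; the paper feeds such histograms to an SVM with histogram-oriented kernels.

```python
import numpy as np

def lbp_histogram(gray, stroke_mask):
    """Basic 256-bin LBP(8,1) histogram computed only over stroke pixels."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # 8 neighbors at radius 1, in a fixed order; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    for bit, (di, dj) in enumerate(offsets):
        neigh = g[1 + di: g.shape[0] - 1 + di, 1 + dj: g.shape[1] - 1 + dj]
        code |= ((neigh >= center).astype(np.int32) << bit)
    mask = stroke_mask[1:-1, 1:-1]
    hist = np.bincount(code[mask].ravel(), minlength=256).astype(np.float64)
    return hist / max(hist.sum(), 1.0)

# Usage: a synthetic "signature" (dark stroke) blended multiplicatively with a
# textured background, in the spirit of the paper's blending model.
rng = np.random.default_rng(0)
background = rng.integers(150, 256, (200, 400)).astype(np.float64)
signature = np.full((200, 400), 255.0)
signature[90:110, 40:360] = 60.0                   # a crude horizontal stroke
blended = background * signature / 255.0           # multiplicative blending
stroke_mask = signature < 128                      # stroke locations known in the toy
feat = lbp_histogram(blended, stroke_mask)
print("nonzero LBP bins over stroke pixels:", int((feat > 0).sum()))
```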

Journal ArticleDOI
TL;DR: A comprehensive description of the first known active hardware metering method is provided and new formal security proofs are introduced and an automatic synthesis method for low overhead hardware implementation is devised.
Abstract: In the horizontal semiconductor business model where the designer's intellectual property (IP) is transparent to foundry and to other entities on the production chain, integrated circuit (IC) overbuilding and IP piracy are prevalent problems. Active metering is a suite of methods enabling the designers to control their chips postfabrication. We provide a comprehensive description of the first known active hardware metering method and introduce new formal security proofs. The active metering method uniquely and automatically locks each IC upon manufacturing, such that the IP rights owner is the only entity that can provide the specific key to unlock or otherwise control each chip. The IC control mechanism exploits: 1) the functional description of the design, and 2) unique and unclonable IC identifiers. The locks are embedded by modifying the structure of the hardware computation model, in the form of a finite state machine (FSM). We show that, for each IC, hiding the locking states within the modified FSM structure can be constructed as an instance of a general output multipoint function that can be provably efficiently obfuscated. The hidden locks within the FSM may also be used for remote enabling and disabling of chips by the IP rights owner during the IC's normal operation. An automatic synthesis method for low overhead hardware implementation is devised. Attacks and countermeasures are addressed. Experimental evaluations demonstrate the low overhead of the method. Proof-of-concept implementation on the H.264 MPEG decoder automatically synthesized on a Xilinx Virtex-5 field-programmable gate array (FPGA) further shows the practicality, security, and the low overhead of the new method.

Journal ArticleDOI
TL;DR: The proposed shape-contexts-based image hashing approach using robust local feature points yields better identification performances under geometric attacks such as rotation attacks and brightness changes, and provides comparable performances under classical distortions such as additive noise, blurring, and compression.
Abstract: Local feature points have been widely investigated in solving problems in computer vision, such as robust matching and object detection. However, their investigation in the area of image hashing is still limited. In this paper, we propose a novel shape-contexts-based image hashing approach using robust local feature points. The contributions are twofold: 1) The robust SIFT-Harris detector is proposed to select the most stable SIFT keypoints under various content-preserving distortions. 2) Compact and robust image hashes are generated by embedding the detected local features into shape-contexts-based descriptors. Experimental results show that the proposed image hashing is robust to a wide range of distortions and attacks, due to the benefits of robust salient keypoints detection and the shape-contexts-based feature descriptors. When compared with the current state-of-the-art schemes, the proposed scheme yields better identification performances under geometric attacks such as rotation attacks and brightness changes, and provides comparable performances under classical distortions such as additive noise, blurring, and compression. Also, we demonstrate that the proposed approach could be applied for image tampering detection.

Journal ArticleDOI
TL;DR: The synopsis diffusion approach is made secure against attacks in which compromised nodes contribute false subaggregate values, and a novel lightweight verification algorithm by which the base station can determine if the computed aggregate includes any false contribution.
Abstract: In a large sensor network, in-network data aggregation significantly reduces the amount of communication and energy consumption. Recently, the research community has proposed a robust aggregation framework called synopsis diffusion which combines multipath routing schemes with duplicate-insensitive algorithms to accurately compute aggregates (e.g., predicate Count, Sum) in spite of message losses resulting from node and transmission failures. However, this aggregation framework does not address the problem of false subaggregate values contributed by compromised nodes resulting in large errors in the aggregate computed at the base station, which is the root node in the aggregation hierarchy. This is an important problem since sensor networks are highly vulnerable to node compromises due to the unattended nature of sensor nodes and the lack of tamper-resistant hardware. In this paper, we make the synopsis diffusion approach secure against attacks in which compromised nodes contribute false subaggregate values. In particular, we present a novel lightweight verification algorithm by which the base station can determine if the computed aggregate (predicate Count or Sum) includes any false contribution. Thorough theoretical analysis and extensive simulation study show that our algorithm outperforms other existing approaches. Irrespective of the network size, the per-node communication overhead in our algorithm is O(1).
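
For context, the sketch below shows the (unsecured) synopsis diffusion building block itself rather than the paper's verification algorithm: a Flajolet-Martin-style Count synopsis in which each node sets one bit chosen geometrically, intermediate nodes fuse synopses with bitwise OR so that duplicates arising from multipath routing do not inflate the count, and the base station estimates the count from the lowest unset bit. Constants and sizes are illustrative, and a single synopsis gives only a coarse estimate.

```python
import numpy as np

BITS = 32
PHI = 0.7735                      # Flajolet-Martin correction constant

def node_synopsis(rng):
    """Each live node sets exactly one bit; bit i is chosen with probability 2^-(i+1)."""
    i = min(int(rng.geometric(0.5)) - 1, BITS - 1)
    return 1 << i

def fuse(s1, s2):
    """Duplicate-insensitive fusion used along the multipath aggregation hierarchy."""
    return s1 | s2

def estimate_count(synopsis):
    """Count estimate from the index of the lowest unset bit of the fused synopsis."""
    z = 0
    while (synopsis >> z) & 1:
        z += 1
    return (2 ** z) / PHI

rng = np.random.default_rng(0)
synopses = [node_synopsis(rng) for _ in range(1000)]
fused_once = 0
for s in synopses:
    fused_once = fuse(fused_once, s)
fused_dup = fused_once
for s in synopses[:300]:          # 300 duplicate deliveries over alternative paths
    fused_dup = fuse(fused_dup, s)
print("duplicates change nothing:", fused_once == fused_dup)
print("true count 1000, single-synopsis estimate:", round(estimate_count(fused_once)))
```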

Journal ArticleDOI
TL;DR: A novel framework for facilitating the acquisition of provably trustworthy hardware intellectual property (IP) that draws upon research in the field of proof-carrying code (PCC) to allow for formal yet computationally straightforward validation of security-related properties by the IP consumer.
Abstract: We present a novel framework for facilitating the acquisition of provably trustworthy hardware intellectual property (IP). The proposed framework draws upon research in the field of proof-carrying code (PCC) to allow for formal yet computationally straightforward validation of security-related properties by the IP consumer. These security-related properties, agreed upon a priori by the IP vendor and consumer and codified in a temporal logic, outline the boundaries of trusted operation, without necessarily specifying the exact IP functionality. A formal proof of these properties is then crafted by the vendor and presented to the consumer alongside the hardware IP. The consumer, in turn, can easily and automatically check the correctness of the proof and, thereby, validate compliance of the hardware IP with the agreed-upon properties. We implement the proposed framework using a synthesizable subset of Verilog and a series of pertinent definitions in the Coq theorem-proving language. Finally, we demonstrate the application of this framework on a simple IP acquisition scenario, including specification of security-related properties, Verilog code for two alternative circuit implementations, as well as proofs of their security compliance.

Journal ArticleDOI
TL;DR: A method to extract secret keys from the randomness inherent to wireless channels is designed and analyzed: a multipath wireless channel model is studied, channel diversity is exploited in generating secret key bits, and key extraction based on entire channel state information (CSI) is compared with extraction based on a single channel parameter such as the received signal strength indicator (RSSI).
Abstract: We design and analyze a method to extract secret keys from the randomness inherent to wireless channels. We study a channel model for a multipath wireless channel and exploit the channel diversity in generating secret key bits. We compare the key extraction methods based both on entire channel state information (CSI) and on a single channel parameter such as the received signal strength indicator (RSSI). Due to the reduction in the degree-of-freedom when going from CSI to RSSI, the rate of key extraction based on CSI is far higher than that based on RSSI. This suggests that exploiting channel diversity and making CSI information available to higher layers would greatly benefit the secret key generation. We propose a key generation system based on low-density parity-check (LDPC) codes and describe the design and performance of two systems: one based on binary LDPC codes and the other (useful at higher signal-to-noise ratios) based on four-ary LDPC codes.
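
A minimal sketch of the RSSI-based extraction the paper uses as a point of comparison: Alice and Bob quantize their (reciprocal but independently noisy) RSSI traces with upper/lower thresholds and a guard band, discard ambiguous samples, and agree on most of the remaining bits, while an independently fading eavesdropper does not. Information reconciliation (the paper's LDPC stage) and privacy amplification are omitted; the fading model and thresholds are illustrative.

```python
import numpy as np

def quantize_rssi(rssi, alpha=0.5):
    """Guard-band quantizer: above mean + alpha*std -> 1, below mean - alpha*std -> 0,
    samples inside the guard band are discarded (marked -1)."""
    m, s = rssi.mean(), rssi.std()
    bits = np.full(rssi.shape, -1, dtype=int)
    bits[rssi > m + alpha * s] = 1
    bits[rssi < m - alpha * s] = 0
    return bits

rng = np.random.default_rng(0)
n = 2000
smooth = np.ones(25) / 25                       # crude correlated-fading generator
fading = np.convolve(rng.normal(size=n + 24), smooth, mode='valid')
alice = quantize_rssi(fading + 0.02 * rng.normal(size=n))   # reciprocal channel + noise
bob = quantize_rssi(fading + 0.02 * rng.normal(size=n))
eve = quantize_rssi(np.convolve(rng.normal(size=n + 24), smooth, mode='valid'))
keep = (alice >= 0) & (bob >= 0)                # positions both parties kept
keep_e = keep & (eve >= 0)
print("kept bits:", int(keep.sum()),
      "| Alice-Bob agreement:", round(float((alice[keep] == bob[keep]).mean()), 3),
      "| Alice-Eve agreement:", round(float((alice[keep_e] == eve[keep_e]).mean()), 3))
# Residual disagreements would be removed by information reconciliation (e.g., the
# paper's LDPC-based stage) followed by privacy amplification.
```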

Journal ArticleDOI
TL;DR: A user authentication protocol named oPass is designed which leverages a user's cellphone and short message service to thwart password stealing and password reuse attacks and is believed to be efficient and affordable compared with the conventional web authentication mechanisms.
Abstract: Text password is the most popular form of user authentication on websites due to its convenience and simplicity. However, users' passwords are prone to be stolen and compromised under different threats and vulnerabilities. First, users often select weak passwords and reuse the same passwords across different websites. Routinely reusing passwords causes a domino effect; when an adversary compromises one password, she will exploit it to gain access to more websites. Second, typing passwords into untrusted computers exposes users to password theft. An adversary can launch several password stealing attacks to snatch passwords, such as phishing, keyloggers and malware. In this paper, we design a user authentication protocol named oPass which leverages a user's cellphone and short message service to thwart password stealing and password reuse attacks. oPass only requires that each participating website possess a unique phone number, and involves a telecommunication service provider in registration and recovery phases. Through oPass, users only need to remember a long-term password for login on all websites. After evaluating the oPass prototype, we believe oPass is efficient and affordable compared with the conventional web authentication mechanisms.

Journal ArticleDOI
TL;DR: This work proposes an identity-mapping function that expands the set of CRPs of a ring-oscillator PUF (RO-PUF) with low area cost and shows the enhanced CRP generation capability of the new function using a statistical hypothesis test.
Abstract: A Physical Unclonable Function (PUF) is a promising solution to many security issues due to its ability to generate a die-unique identifier that can resist cloning attempts as well as physical tampering. However, the efficiency of a PUF depends on its implementation cost, its reliability, its resiliency to attacks, and the amount of entropy in it. PUF entropy is used to construct cryptographic keys, chip identifiers, or challenge-response pairs (CRPs) in a chip authentication mechanism. The amount of entropy in a PUF is limited by the circuit resources available to build a PUF. As a result, generating longer keys or larger sets of CRPs may increase PUF circuit cost. We address this limitation in a PUF by proposing an identity-mapping function that expands the set of CRPs of a ring-oscillator PUF (RO-PUF) with low area cost. The CRPs generated through this function exhibit strong PUF qualities in terms of uniqueness and reliability. To introduce the identity-mapping function, we formulate a novel PUF system model that uncouples PUF measurement from PUF identifier formation. We show the enhanced CRP generation capability of the new function using a statistical hypothesis test. An implementation of our technique on a low-cost FPGA platform shows at least 2 times savings in area compared to the traditional RO-PUF. The proposed technique is validated using a population of 125 chips, and its reliability over varying environmental conditions is shown.
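
As background, a toy model of the conventional RO-PUF that the identity-mapping function extends: each chip is an array of ring oscillators whose frequencies are offset by process variation, a challenge names a pair of oscillators, and the response bit records which one is faster. The sketch estimates intra-chip reliability and inter-chip uniqueness; frequencies, noise levels, and the challenge set are illustrative assumptions.

```python
import numpy as np

def make_chip(n_ros=64, process_sigma=1.0, seed=None):
    """A 'chip' is just the vector of ring-oscillator frequencies, offset by
    chip-specific process variation (nominal 100 MHz, purely illustrative)."""
    rng = np.random.default_rng(seed)
    return 100.0 + process_sigma * rng.normal(size=n_ros)

def respond(chip_freqs, challenges, noise_sigma=0.05, rng=None):
    """Response bit for challenge (i, j): 1 if RO i measures faster than RO j.
    Measurement noise models supply/temperature fluctuation between readouts."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = chip_freqs + noise_sigma * rng.normal(size=chip_freqs.shape)
    return np.array([int(noisy[i] > noisy[j]) for i, j in challenges])

# Usage: uniqueness across chips and reliability across re-measurements.
rng = np.random.default_rng(0)
challenges = [(i, j) for i in range(64) for j in range(i + 1, 64)][:256]
chip_a, chip_b = make_chip(seed=1), make_chip(seed=2)
r_a1 = respond(chip_a, challenges, rng=rng)
r_a2 = respond(chip_a, challenges, rng=rng)        # same chip, new measurement
r_b = respond(chip_b, challenges, rng=rng)
print("intra-chip bit error rate  :", round(float(np.mean(r_a1 != r_a2)), 3))  # small
print("inter-chip Hamming distance:", round(float(np.mean(r_a1 != r_b)), 3))   # ~0.5
```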

Journal ArticleDOI
TL;DR: When Eve's channel has a quality equal to or better than that of Bob's channel, it is shown that the use of a hybrid automatic repeat-request protocol with authentication still allows achieving a sufficient level of security.
Abstract: This paper examines the use of nonsystematic channel codes to obtain secure transmissions over the additive white Gaussian noise wire-tap channel. Unlike the previous approaches, we propose to implement nonsystematic coded transmission by scrambling the information bits, and characterize the bit error rate of scrambled transmissions through theoretical arguments and numerical simulations. We have focused on some examples of Bose-Chaudhuri-Hocquenghem and low-density parity-check codes to estimate the security gap, which we have used as a measure of physical layer security, in addition to the bit error rate. Based on a number of numerical examples, we found that such a transmission technique can outperform alternative solutions. In fact, when an eavesdropper (Eve) has a worse channel than the authorized user (Bob), the security gap required to reach a given level of security is very small. The amount of degradation of Eve's channel with respect to Bob's that is needed to achieve sufficient security can be further reduced by implementing scrambling and descrambling operations on blocks of frames, rather than on single frames. When Eve's channel has a quality equal to or better than that of Bob's channel, we have shown that the use of a hybrid automatic repeat-request protocol with authentication still allows achieving a sufficient level of security. Finally, the secrecy performance of some practical schemes has also been measured in terms of the equivocation rate about the message at the eavesdropper and compared with that of ideal codes.