
Showing papers in "IEEE Transactions on Information Forensics and Security in 2015"


Journal ArticleDOI
TL;DR: The main difference to the traditional methods is that the proposed scheme first segments the test image into semantically independent patches prior to keypoint extraction, and the copy-move regions can be detected by matching between these patches.
Abstract: In this paper, we propose a scheme to detect the copy-move forgery in an image, mainly by extracting the keypoints for comparison. The main difference from traditional methods is that the proposed scheme first segments the test image into semantically independent patches prior to keypoint extraction. As a result, the copy-move regions can be detected by matching between these patches. The matching process consists of two stages. In the first stage, we find the suspicious pairs of patches that may contain copy-move forgery regions, and we roughly estimate an affine transform matrix. In the second stage, an Expectation-Maximization-based algorithm is designed to refine the estimated matrix and to confirm the existence of copy-move forgery. Experimental results demonstrate the good performance of the proposed scheme by comparing it with state-of-the-art schemes on public databases.
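The first matching stage above ends with a rough affine estimate between a pair of patches. As a hedged illustration (not the paper's actual estimator), the affine transform mapping three matched keypoints onto their counterparts can be recovered by solving two small linear systems; all function names here are illustrative:

```python
# Sketch: recover the affine map x' = a*x + b*y + c, y' = d*x + e*y + f
# from three matched keypoint pairs. Illustrative only; the paper refines
# a rough estimate like this with an EM-based algorithm.

def solve3(m, v):
    """Solve a 3x3 linear system m @ x = v by Cramer's rule."""
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    xs = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = v[r]
        xs.append(det(mc) / d)
    return xs

def estimate_affine(src, dst):
    """Fit the six affine parameters from 3 (x, y) -> (x', y') pairs."""
    m = [[x, y, 1.0] for (x, y) in src]
    abc = solve3(m, [p[0] for p in dst])   # parameters for x'
    def_ = solve3(m, [p[1] for p in dst])  # parameters for y'
    return abc + def_                      # [a, b, c, d, e, f]
```

With more than three matches, the same fit is usually done by least squares inside a RANSAC loop to reject outlier correspondences.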

780 citations


Journal ArticleDOI
TL;DR: An efficient and rather robust face spoof detection algorithm based on image distortion analysis (IDA) that outperforms the state-of-the-art methods in spoof detection and highlights the difficulty in separating genuine and spoof faces, especially in cross-database and cross-device scenarios.
Abstract: Automatic face recognition is now widely used in applications ranging from deduplication of identity to authentication of mobile payment. This popularity of face recognition has raised concerns about face spoof attacks (also known as biometric sensor presentation attacks), where a photo or video of an authorized person’s face could be used to gain access to facilities or services. While a number of face spoof detection techniques have been proposed, their generalization ability has not been adequately addressed. We propose an efficient and rather robust face spoof detection algorithm based on image distortion analysis (IDA). Four different features (specular reflection, blurriness, chromatic moment, and color diversity) are extracted to form the IDA feature vector. An ensemble classifier, consisting of multiple SVM classifiers trained for different face spoof attacks (e.g., printed photo and replayed video), is used to distinguish between genuine (live) and spoof faces. The proposed approach is extended to multiframe face spoof detection in videos using a voting-based scheme. We also collect a face spoof database, MSU mobile face spoofing database (MSU MFSD), using two mobile devices (Google Nexus 5 and MacBook Air) with three types of spoof attacks (printed photo, replayed video with iPhone 5S, and replayed video with iPad Air). Experimental results on two public-domain face spoof databases (Idiap REPLAY-ATTACK and CASIA FASD), and the MSU MFSD database show that the proposed approach outperforms the state-of-the-art methods in spoof detection. Our results also highlight the difficulty in separating genuine and spoof faces, especially in cross-database and cross-device scenarios.
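One of the four IDA features is blurriness, motivated by the observation that recaptured (spoof) faces tend to lose high-frequency detail. A hedged sketch of that idea, using Laplacian variance as the blurriness proxy (the paper's exact measure may differ):

```python
# Sketch of a blurriness-style feature: variance of a discrete Laplacian
# response over the face region. Sharp, live captures keep more
# high-frequency energy than recaptured photos or replayed videos.
# Illustrative only; not the paper's exact blurriness definition.

def blurriness(img):
    """img: 2D list of grayscale values; returns the Laplacian variance."""
    h, w = len(img), len(img[0])
    resp = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            resp.append(lap)
    mean = sum(resp) / len(resp)
    return sum((r - mean) ** 2 for r in resp) / len(resp)
```

A feature like this would be one entry of the IDA vector, alongside specular reflection, chromatic moment, and color diversity statistics.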

716 citations


Journal ArticleDOI
TL;DR: This work proposes a CPPA scheme for VANETs that does not use bilinear pairing and demonstrates that it supports both mutual authentication and privacy protection simultaneously, while yielding a better performance in terms of computation cost and communication cost.
Abstract: By broadcasting messages about traffic status to vehicles wirelessly, a vehicular ad hoc network (VANET) can improve traffic safety and efficiency. To guarantee secure communication in VANETs, security and privacy issues must be addressed before their deployment. The conditional privacy-preserving authentication (CPPA) scheme is suitable for solving security and privacy-preserving problems in VANETs, because it supports both mutual authentication and privacy protection simultaneously. Many identity-based CPPA schemes for VANETs using bilinear pairings have been proposed over the last few years to enhance security or to improve performance. However, it is well known that the bilinear pairing operation is one of the most complex operations in modern cryptography. To achieve better performance and reduce the computational complexity of information processing in VANETs, designing a CPPA scheme for the VANET environment that does not use bilinear pairing becomes a challenge. To address this challenge, we propose a CPPA scheme for VANETs that does not use bilinear pairing and we demonstrate that it supports both mutual authentication and privacy protection simultaneously. Our proposed CPPA scheme retains most of the benefits obtained with the previously proposed CPPA schemes. Moreover, the proposed CPPA scheme yields a better performance in terms of computation cost and communication cost, making it suitable for use by VANET safety-related applications.

625 citations


Journal ArticleDOI
TL;DR: This work assumes a very limited knowledge about biometric spoofing at the sensor to derive outstanding spoofing detection systems for iris, face, and fingerprint modalities based on two deep learning approaches based on convolutional networks.
Abstract: Biometrics systems have significantly improved person identification and authentication, playing an important role in personal, national, and global security. However, these systems might be deceived (or spoofed) and, despite the recent advances in spoofing detection, current solutions often rely on domain knowledge, specific biometric reading systems, and attack types. We assume a very limited knowledge about biometric spoofing at the sensor to derive outstanding spoofing detection systems for iris, face, and fingerprint modalities based on two deep learning approaches. The first approach consists of learning suitable convolutional network architectures for each domain, whereas the second approach focuses on learning the weights of the network via back propagation. We consider nine biometric spoofing benchmarks—each one containing real and fake samples of a given biometric modality and attack type—and learn deep representations for each benchmark by combining and contrasting the two learning approaches. This strategy not only provides better comprehension of how these approaches interplay, but also creates systems that exceed the best known results in eight out of the nine benchmarks. The results strongly indicate that spoofing detection systems based on convolutional networks can be robust to attacks already known and possibly adapted, with little effort, to image-based attacks that are yet to come.

353 citations


Journal ArticleDOI
TL;DR: A novel feature set for steganalysis of JPEG images engineered as first-order statistics of quantized noise residuals obtained from the decompressed JPEG image using 64 kernels of the discrete cosine transform (DCT) (the so-called undecimated DCT).
Abstract: This paper introduces a novel feature set for steganalysis of JPEG images. The features are engineered as first-order statistics of quantized noise residuals obtained from the decompressed JPEG image using 64 kernels of the discrete cosine transform (DCT) (the so-called undecimated DCT). This approach can be interpreted as a projection model in the JPEG domain, thus forming a counterpart to the projection spatial rich model. The most appealing aspects of the proposed steganalysis feature set are its low computational complexity, lower dimensionality in comparison with other rich models, and competitive performance with respect to previously proposed JPEG-domain steganalysis features.
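The core operation described above, filtering the decompressed image with the 64 DCT basis kernels at every pixel shift (undecimated), then quantizing, truncating, and histogramming the residuals, can be sketched as follows. The quantization step and truncation threshold here are illustrative, not the paper's exact configuration:

```python
import math

# Sketch of undecimated-DCT residual features: slide an 8x8 DCT basis
# kernel over the image one pixel at a time, quantize and truncate the
# responses, and collect a first-order histogram per kernel (mode).

def dct_kernel(u, v, n=8):
    """The (u, v) 2D DCT basis kernel of size n x n."""
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    return [[c(u) * c(v)
             * math.cos(math.pi * u * (2 * i + 1) / (2 * n))
             * math.cos(math.pi * v * (2 * j + 1) / (2 * n))
             for j in range(n)] for i in range(n)]

def residual_histogram(img, u, v, q=4, t=2):
    """Histogram of quantized, truncated responses to kernel (u, v)."""
    k = dct_kernel(u, v)
    h, w = len(img), len(img[0])
    hist = [0] * (2 * t + 1)
    for y in range(h - 7):           # undecimated: stride of one pixel
        for x in range(w - 7):
            r = sum(k[i][j] * img[y + i][x + j]
                    for i in range(8) for j in range(8))
            b = max(-t, min(t, round(r / q)))   # quantize + truncate
            hist[b + t] += 1
    return hist
```

Concatenating such histograms over all 64 modes gives a low-dimensional feature vector in the spirit of the paper's projection model.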

350 citations


Journal ArticleDOI
TL;DR: This paper first analyzes He-Wang's scheme, then proposes a new secure multi-server authentication protocol using biometric-based smart card and ECC with more security functionalities and shows that the proposed scheme provides secure authentication.
Abstract: Recently, in 2014, He and Wang proposed a robust and efficient multi-server authentication scheme using biometrics-based smart card and elliptic curve cryptography (ECC). In this paper, we first analyze He–Wang’s scheme and show that their scheme is vulnerable to a known session-specific temporary information attack and impersonation attack. In addition, we show that their scheme does not provide strong user’s anonymity. Furthermore, He–Wang’s scheme cannot provide the user revocation facility when the smart card is lost/stolen or user’s authentication parameter is revealed. Apart from these, He–Wang’s scheme has some design flaws, such as wrong password login and its consequences, and wrong password update during password change phase. We then propose a new secure multi-server authentication protocol using biometric-based smart card and ECC with more security functionalities. Using the Burrows–Abadi–Needham logic, we show that our scheme provides secure authentication. In addition, we simulate our scheme for the formal security verification using the widely accepted and used automated validation of Internet security protocols and applications tool, and show that our scheme is secure against passive and active attacks. Our scheme provides high security along with low communication cost, computational cost, and variety of security features. As a result, our scheme is very suitable for battery-limited mobile devices as compared with He–Wang’s scheme.

335 citations


Journal ArticleDOI
TL;DR: A new algorithm for the accurate detection and localization of copy-move forgeries, based on rotation-invariant features computed densely on the image, is proposed, using a fast approximate nearest-neighbor search algorithm, PatchMatch, especially suited for the computation of dense fields over images.
Abstract: We propose a new algorithm for the accurate detection and localization of copy–move forgeries, based on rotation-invariant features computed densely on the image. Dense-field techniques proposed in the literature guarantee a superior performance with respect to their keypoint-based counterparts, at the price of a much higher processing time, mostly due to the feature matching phase. To overcome this limitation, we resort here to a fast approximate nearest-neighbor search algorithm, PatchMatch, especially suited for the computation of dense fields over images. We adapt the matching algorithm to deal efficiently with invariant features, so as to achieve higher robustness with respect to rotations and scale changes. Moreover, leveraging the smoothness of the output field, we implement a simplified and reliable postprocessing procedure. The experimental analysis, conducted on databases available online, proves the proposed technique to be at least as accurate, generally more robust, and typically much faster than the state-of-the-art dense-field references.
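Why PatchMatch makes the dense matching phase fast can be seen in a minimal sketch: after random initialization, each pixel's candidate match comes from its neighbors (propagation) plus a random probe, instead of an exhaustive search. This is a generic, simplified PatchMatch on raw grayscale patches; the paper further adapts it to rotation- and scale-invariant features:

```python
import random

def ssd(a, b, ay, ax, by, bx, p=3):
    """Sum of squared differences between p x p patches of a and b."""
    return sum((a[ay + i][ax + j] - b[by + i][bx + j]) ** 2
               for i in range(p) for j in range(p))

def patchmatch(a, b, iters=5, p=3, seed=0):
    """Approximate dense nearest-neighbor field from patches of a to b."""
    rng = random.Random(seed)
    h, w = len(a) - p + 1, len(a[0]) - p + 1
    bh, bw = len(b) - p + 1, len(b[0]) - p + 1
    nnf = [[(rng.randrange(bh), rng.randrange(bw)) for _ in range(w)]
           for _ in range(h)]
    cost = [[ssd(a, b, y, x, nnf[y][x][0], nnf[y][x][1], p)
             for x in range(w)] for y in range(h)]
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                cands = []
                if x > 0:   # propagate the left neighbor's match, shifted right
                    by, bx = nnf[y][x - 1]
                    cands.append((by, min(bx + 1, bw - 1)))
                if y > 0:   # propagate the top neighbor's match, shifted down
                    by, bx = nnf[y - 1][x]
                    cands.append((min(by + 1, bh - 1), bx))
                cands.append((rng.randrange(bh), rng.randrange(bw)))  # random search
                for by, bx in cands:
                    c = ssd(a, b, y, x, by, bx, p)
                    if c < cost[y][x]:
                        nnf[y][x], cost[y][x] = (by, bx), c
    return nnf, cost
```

Each sweep costs a handful of patch comparisons per pixel, which is what keeps dense-field matching tractable compared with exhaustive search.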

331 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed RDH method outperforms the conventional PEE and its miscellaneous extensions, including both one- and two-dimensional PEH-based ones.
Abstract: Prediction-error expansion (PEE) is the most successful reversible data hiding (RDH) technique, and existing PEE-based RDH methods are mainly based on the modification of one- or two-dimensional prediction-error histograms (PEHs). The two-dimensional PEH-based methods generally perform better than those based on one-dimensional PEHs; however, their performance is still unsatisfactory since the PEH modification manner is fixed and independent of image content. In this paper, we propose a new RDH method based on PEE for multiple histograms. Unlike the previous methods, we consider in this paper a sequence of histograms and devise a new embedding mechanism based on multiple histograms modification (MHM). A complexity measurement is computed for each pixel according to its context, and the pixels with a given complexity are collected together to generate a PEH. By varying the complexity to cover the whole image, a sequence of histograms can be generated. Then, two expansion bins are selected in each generated histogram and data embedding is realized based on MHM. Here, the expansion bins are adaptively selected considering the image content such that the embedding distortion is minimized. With such selected expansion bins, the proposed MHM-based RDH method works well. Experimental results show that the proposed method outperforms the conventional PEE and its miscellaneous extensions, including both one- and two-dimensional PEH-based ones.
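The baseline the paper extends, plain PEE with histogram shifting, is easy to sketch. Here is a hedged, minimal version using the previous pixel as predictor and a single expansion bin (the paper's method instead selects two bins adaptively in each of many complexity-conditioned histograms):

```python
# Minimal prediction-error expansion sketch (one-sided, one expansion
# bin). Prediction error e = x[i] - x[i-1]; errors equal to 0 carry one
# message bit, positive errors are shifted by 1 to make room.

def pee_embed(pixels, bits):
    y, k = [pixels[0]], 0
    for i in range(1, len(pixels)):
        e = pixels[i] - pixels[i - 1]
        if e == 0 and k < len(bits):   # expansion bin: embed one bit
            e = bits[k]; k += 1
        elif e > 0:                    # shift to keep decoding unambiguous
            e += 1
        y.append(pixels[i - 1] + e)
    return y, k                        # k = number of bits embedded

def pee_extract(y):
    x, bits = [y[0]], []
    for i in range(1, len(y)):
        e = y[i] - x[i - 1]
        if e in (0, 1):                # came from the expansion bin
            bits.append(e); e = 0
        elif e > 1:                    # undo the shift
            e -= 1
        x.append(x[i - 1] + e)
    return x, bits
```

Extraction exactly restores the cover pixels, which is the "reversible" in RDH; the embedding capacity of this sketch equals the height of the zero bin.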

307 citations


Journal ArticleDOI
TL;DR: Investigating the secrecy performance of full-duplex relay (FDR) networks shows that FDR networks have better secrecy performance than half-duplex relay networks, if the self-interference can be well suppressed.
Abstract: This paper investigates the secrecy performance of full-duplex relay (FDR) networks. The resulting analysis shows that FDR networks have better secrecy performance than half-duplex relay networks, if the self-interference can be well suppressed. We also propose a full-duplex jamming relay network, in which the relay node transmits jamming signals while receiving the data from the source. While the full-duplex jamming scheme has the same data rate as the half-duplex scheme, the secrecy performance can be significantly improved, making it an attractive scheme when network secrecy is a primary concern. A mathematical model is developed to analyze the secrecy outage probabilities of the half-duplex, full-duplex, and full-duplex jamming schemes, and simulation results are also presented to verify the analysis.
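The metric being analyzed, secrecy outage probability, is the probability that the instantaneous secrecy capacity falls below a target secrecy rate. A hedged Monte Carlo sketch under Rayleigh fading (exponential channel power gains) illustrates the quantity itself; the paper derives closed-form expressions per scheme, and all parameters here are illustrative:

```python
import math
import random

# Monte Carlo sketch of secrecy outage probability: outage occurs when
# log2(1 + snr_d * g_d) - log2(1 + snr_e * g_e) < target_rate, with
# g_d, g_e ~ Exp(1) modeling Rayleigh-faded channel power gains.

def secrecy_outage(snr_d, snr_e, target_rate, trials=20000, seed=1):
    rng = random.Random(seed)
    out = 0
    for _ in range(trials):
        gd = rng.expovariate(1.0)    # legitimate-channel power gain
        ge = rng.expovariate(1.0)    # eavesdropper-channel power gain
        cs = max(0.0, math.log2(1 + snr_d * gd) - math.log2(1 + snr_e * ge))
        out += cs < target_rate
    return out / trials
```

Jamming the eavesdropper effectively lowers snr_e, which in this sketch drives the outage probability down, the same direction of improvement the full-duplex jamming scheme achieves.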

265 citations


Journal ArticleDOI
TL;DR: A copy-move forgery detection scheme combining adaptive oversegmentation and feature point matching; the proposed forgery region extraction algorithm replaces the matched feature points with small superpixels as feature blocks and merges neighboring blocks with similar local color features to generate the detected forgery regions.
Abstract: A novel copy–move forgery detection scheme using adaptive oversegmentation and feature point matching is proposed in this paper. The proposed scheme integrates both block-based and keypoint-based forgery detection methods. First, the proposed adaptive oversegmentation algorithm segments the host image into nonoverlapping and irregular blocks adaptively. Then, the feature points are extracted from each block as block features, and the block features are matched with one another to locate the labeled feature points; this procedure can approximately indicate the suspected forgery regions. To detect the forgery regions more accurately, we propose the forgery region extraction algorithm, which replaces the feature points with small superpixels as feature blocks and then merges the neighboring blocks that have similar local color features into the feature blocks to generate the merged regions. Finally, it applies the morphological operation to the merged regions to generate the detected forgery regions. The experimental results indicate that the proposed copy–move forgery detection scheme can achieve much better detection results even under various challenging conditions compared with the existing state-of-the-art copy–move forgery detection methods.

238 citations


Journal ArticleDOI
TL;DR: This work advances the state of the art in facial antispoofing by applying a recently developed algorithm called dynamic mode decomposition (DMD) as a general purpose, entirely data-driven approach to capture the above liveness cues.
Abstract: Rendering a face recognition system robust is vital in order to safeguard it against spoof attacks carried out using printed pictures of a victim (also known as print attack) or a replayed video of the person (replay attack). A key property in distinguishing a live, valid access from printed media or replayed videos is by exploiting the information dynamics of the video content, such as blinking eyes, moving lips, and facial dynamics. We advance the state of the art in facial antispoofing by applying a recently developed algorithm called dynamic mode decomposition (DMD) as a general purpose, entirely data-driven approach to capture the above liveness cues. We propose a classification pipeline consisting of DMD, local binary patterns (LBPs), and support vector machines (SVMs) with a histogram intersection kernel. A unique property of DMD is its ability to conveniently represent the temporal information of the entire video as a single image with the same dimensions as those images contained in the video. The pipeline of DMD + LBP + SVM proves to be efficient, convenient to use, and effective. In fact only the spatial configuration for LBP needs to be tuned. The effectiveness of the methodology was demonstrated using three publicly available databases: 1) print-attack; 2) replay-attack; and 3) CASIA-FASD, attaining comparable results with the state of the art, following the respective published experimental protocols.
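The LBP stage of the DMD + LBP + SVM pipeline can be sketched concisely: each pixel is coded by thresholding its eight neighbors against it, and the image (here, the single DMD image) is summarized by a 256-bin code histogram. This is the basic 3x3 LBP; per the abstract, only the LBP spatial configuration needs tuning in the actual pipeline:

```python
# Sketch of basic 3x3 local binary patterns (LBP): threshold the 8
# neighbors of each interior pixel against the center, pack the results
# into an 8-bit code, and histogram the codes over the image.

def lbp_histogram(img):
    h, w = len(img), len(img[0])
    hist = [0] * 256
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise neighbors
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for k, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= img[y][x]:
                    code |= 1 << k
            hist[code] += 1
    return hist
```

The resulting histogram is what would be fed to the SVM with a histogram intersection kernel.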

Journal ArticleDOI
TL;DR: This paper utilizes the sparse matrix to propose a new secure outsourcing algorithm of large-scale linear equations in the fully malicious model and shows that the proposed algorithm is superior in both efficiency and checkability.
Abstract: With the rapid development in the availability of cloud services, techniques for securely outsourcing prohibitively expensive computations to untrusted servers are attracting more and more attention in the scientific community. In this paper, we investigate secure outsourcing for large-scale systems of linear equations, which are among the most popular problems in various engineering disciplines. For the first time, we utilize the sparse matrix to propose a new secure outsourcing algorithm for large-scale linear equations in the fully malicious model. Compared with the state-of-the-art algorithm, the proposed algorithm requires only (optimal) one round of communication (while the state-of-the-art algorithm requires L rounds of interaction between the client and the cloud server, where L denotes the number of iterations in iterative methods). Furthermore, the client in our algorithm can detect the misbehavior of the cloud server with the (optimal) probability 1. Therefore, our proposed algorithm is superior in both efficiency and checkability. We also provide an experimental evaluation that demonstrates the efficiency and effectiveness of our algorithm.
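Two ingredients of such protocols can be illustrated with a hedged sketch: the client masks the system with cheap sparse transforms before sending it out, and checks the returned solution against the original system, catching any misbehavior. The masking here (row permutation plus diagonal scaling) is purely illustrative; the paper's sparse transformation and one-round protocol differ in detail:

```python
import random

# Sketch: mask a linear system A x = b with a random row permutation and
# positive diagonal scaling (both sparse, both solution-preserving), and
# verify a returned solution by checking the residual of the original
# system. Illustrative only; not the paper's actual transformation.

def mask(A, b, seed=7):
    n = len(A)
    rng = random.Random(seed)
    perm = list(range(n)); rng.shuffle(perm)
    scale = [rng.choice([1, 2, 3, 5]) for _ in range(n)]
    A2 = [[A[perm[i]][j] * scale[i] for j in range(n)] for i in range(n)]
    b2 = [b[perm[i]] * scale[i] for i in range(n)]
    return A2, b2        # the masked system has the same solution x

def verify(A, b, x, tol=1e-9):
    """Client-side check: the residual must vanish, so a wrong answer
    from a cheating server is detected with probability 1."""
    return all(abs(sum(A[i][j] * x[j] for j in range(len(x))) - b[i]) <= tol
               for i in range(len(b)))
```

Because row operations preserve the solution set, the server solves the masked system without learning A or b directly, and the client's residual check costs only one matrix-vector product.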

Journal ArticleDOI
TL;DR: The proposed UERD gains a significant performance improvement in terms of secure embedding capacity when compared with the original UED, and rivals the current state-of-the-art with much reduced computational complexity.
Abstract: Uniform embedding was first introduced in 2012 for non-side-informed JPEG steganography, and then extended to the side-informed JPEG steganography in 2014. The idea behind uniform embedding is that, by uniformly spreading the embedding modifications to the quantized discrete cosine transform (DCT) coefficients of all possible magnitudes, the average changes of the first-order and the second-order statistics can be possibly minimized, which leads to less statistical detectability. The purpose of this paper is to refine uniform embedding by considering the relative changes of the statistical model for digital images, aiming to make the embedding modifications proportional to the coefficient of variation. Such a new strategy can be regarded as generalized uniform embedding in a substantial sense. Compared with the original uniform embedding distortion (UED), the proposed method uses all the DCT coefficients (including the DC, zero, and non-zero AC coefficients) as the cover elements. We call the corresponding distortion function uniform embedding revisited distortion (UERD), which incorporates the complexities of both the DCT block and the DCT mode of each DCT coefficient (i.e., selection channel), and can be directly derived from the DCT domain. The effectiveness of the proposed scheme is verified with the evidence obtained from the exhaustive experiments using a popular steganalyzer with rich models on the BOSSbase database. The proposed UERD gains a significant performance improvement in terms of secure embedding capacity when compared with the original UED, and rivals the current state-of-the-art with much reduced computational complexity.

Journal ArticleDOI
TL;DR: Results demonstrate that the stacked supervised autoencoders-based face representation significantly outperforms the commonly used image representations in single sample per person face recognition, and it achieves higher recognition accuracy compared with other deep learning models, including the deep Lambertian network.
Abstract: This paper targets learning robust image representation for single training sample per person face recognition. Motivated by the success of deep learning in image representation, we propose a supervised autoencoder, which is a new type of building block for deep architectures. Two features distinguish our supervised autoencoder from the standard autoencoder. First, we enforce faces with variations to be mapped to the canonical face of the person, for example, a frontal face with neutral expression and normal illumination; second, we enforce features corresponding to the same person to be similar. As a result, our supervised autoencoder extracts features which are robust to variations in illumination, expression, occlusion, and pose, and facilitates face recognition. We stack such supervised autoencoders to get the deep architecture and use it for extracting features in image representation. Experimental results on the AR, Extended Yale B, CMU-PIE, and Multi-PIE data sets demonstrate that by coupling with the commonly used sparse representation-based classification, our stacked supervised autoencoders-based face representation significantly outperforms the commonly used image representations in single sample per person face recognition, and it achieves higher recognition accuracy compared with other deep learning models, including the deep Lambertian network, in spite of much less training data and without any domain information. Moreover, the supervised autoencoder can also be used for face verification, which further demonstrates its effectiveness for face representation.

Journal ArticleDOI
Bin Li1, Ming Wang1, Xiaolong Li2, Shunquan Tan1, Jiwu Huang1 
TL;DR: Experimental results show that the proposed CMD strategy, incorporated into existing steganographic schemes, can effectively overcome the challenges posed by the modern steganalyzers with high-dimensional features.
Abstract: Most of the recently proposed steganographic schemes are based on minimizing an additive distortion function defined as the sum of embedding costs for individual pixels. In such an approach, mutual embedding impacts are often ignored. In this paper, we present an approach that can exploit the interactions among embedding changes in order to reduce the risk of detection by steganalysis. It employs a novel strategy, called clustering modification directions (CMDs), based on the assumption that when embedding modifications in heavily textured regions are locally heading toward the same direction, the steganographic security might be improved. To implement the strategy, a cover image is decomposed into several subimages, in which message segments are embedded with well-known schemes using additive distortion functions. The costs of pixels are updated dynamically to take mutual embedding impacts into account. Specifically, when neighboring pixels are changed toward a positive/negative direction, the cost of the considered pixel is biased toward the same direction. Experimental results show that our proposed CMD strategy, incorporated into existing steganographic schemes, can effectively overcome the challenges posed by the modern steganalyzers with high-dimensional features.
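The cost-update rule at the heart of CMD can be sketched directly from the description above: once neighboring pixels have been modified toward +1 or -1, the cost of modifying the current pixel in the same direction is reduced. The neighborhood and scaling factor below are illustrative, not the paper's exact parameters:

```python
# Sketch of the CMD dynamic cost update: if the already-made changes in
# the 8-neighborhood lean positive (negative), cheapen the +1 (-1)
# modification of the current pixel so embedding changes cluster in the
# same direction. alpha < 1 is an illustrative bias factor.

def cmd_update(cost_plus, cost_minus, changes, y, x, alpha=0.5):
    """changes: 2D grid of -1/0/+1 modifications already made nearby."""
    h, w = len(changes), len(changes[0])
    s = sum(changes[y + dy][x + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0) and 0 <= y + dy < h and 0 <= x + dx < w)
    if s > 0:                       # neighbors lean positive: cheapen +1
        return cost_plus * alpha, cost_minus
    if s < 0:                       # neighbors lean negative: cheapen -1
        return cost_plus, cost_minus * alpha
    return cost_plus, cost_minus
```

In the full scheme, the image is first split into subimages embedded sequentially with an additive-distortion coder, and this update couples their costs so the mutual embedding impacts are no longer ignored.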

Journal ArticleDOI
TL;DR: A novel hybrid approach to copy-move forgery detection that compares triangles rather than blocks or single points: objects are modeled as sets of connected triangles built onto extracted interest points, and triangles are matched by their shapes, their content, and local features at their vertices.
Abstract: Copy–move forgery is one of the most common types of tampering for digital images. Detection methods generally use block-matching approaches, which first divide the image into overlapping blocks and then extract and compare features to find similar ones, or point-based approaches, in which relevant keypoints are extracted and matched to each other to find similar areas. In this paper, we present a novel hybrid approach, which compares triangles rather than blocks or single points. Interest points are extracted from the image, and objects are modeled as a set of connected triangles built onto these points. Triangles are matched according to their shapes (inner angles), their content (color information), and the local feature vectors extracted at the vertices of the triangles. Our methods are designed to be robust to geometric transformations. Results are compared with a state-of-the-art block-matching method and a point-based method. Furthermore, our data set is available for use by academic researchers.
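The shape cue above, inner angles, works because the sorted angle triple of a triangle is invariant to rotation, translation, and uniform scaling. A hedged sketch of that comparison (color and descriptor cues from the paper are omitted; names are illustrative):

```python
import math

# Sketch of triangle shape matching via sorted inner angles, which are
# invariant to rotation, translation, and uniform scaling, so triangles
# over copied-and-moved objects still match by shape.

def inner_angles(p1, p2, p3):
    def angle(a, b, c):              # angle at vertex a
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - a[0], c[1] - a[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, dot / n)))
    return sorted([angle(p1, p2, p3), angle(p2, p1, p3), angle(p3, p1, p2)])

def same_shape(t1, t2, tol=1e-6):
    return all(abs(a - b) < tol
               for a, b in zip(inner_angles(*t1), inner_angles(*t2)))
```

In the full method a shape match like this is only a first filter; color content and vertex descriptors then confirm or reject the candidate pair.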

Journal ArticleDOI
TL;DR: This paper assesses the potential of local descriptors, based on the analysis of microtextural features, for the liveness detection task in authentication systems based on various biometric traits, and points out possible lines of development toward further improvements.
Abstract: Biometric authentication systems are quite vulnerable to sophisticated spoofing attacks. To keep a good level of security, reliable spoofing detection tools are necessary, preferably implemented as software modules. The research in this field is very active, with local descriptors, based on the analysis of microtextural features, gaining more and more popularity, because of their excellent performance and flexibility. This paper aims at assessing the potential of these descriptors for the liveness detection task in authentication systems based on various biometric traits: fingerprint, iris, and face. Besides compact descriptors based on the independent quantization of features, already considered for some liveness detection tasks, we will study promising descriptors based on the joint quantization of rich local features. The experimental analysis, conducted on publicly available data sets and in fully reproducible modality, confirms the potential of these tools for biometric applications, and points out possible lines of development toward further improvements.

Journal ArticleDOI
TL;DR: This paper investigates how to reduce the damage of the client's key exposure in cloud storage auditing and gives the first practical solution for this new problem setting; it formalizes the definition and the security model of an auditing protocol with key-exposure resilience and proposes such a protocol.
Abstract: Cloud storage auditing is viewed as an important service to verify the integrity of the data in public cloud. Current auditing protocols are all based on the assumption that the client’s secret key for auditing is absolutely secure. However, such an assumption may not always hold, due to the possibly weak sense of security and/or low security settings at the client. If such a secret key for auditing is exposed, most of the current auditing protocols would inevitably become unable to work. In this paper, we focus on this new aspect of cloud storage auditing. We investigate how to reduce the damage of the client’s key exposure in cloud storage auditing, and give the first practical solution for this new problem setting. We formalize the definition and the security model of auditing protocol with key-exposure resilience and propose such a protocol. In our design, we employ the binary tree structure and the preorder traversal technique to update the secret keys for the client. We also develop a novel authenticator construction to support the forward security and the property of blockless verifiability. The security proof and the performance analysis show that our proposed protocol is secure and efficient.
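The property being engineered here is forward security: keys evolve one-way across time periods, so exposing the current key reveals nothing about earlier ones. A hedged sketch conveys only that one-wayness with a plain hash chain; the paper instead organizes periods as the preorder traversal of a binary tree so that updates are much cheaper than walking a chain:

```python
import hashlib

# Sketch of forward-secure key evolution: each period's key is a one-way
# hash of the previous one, so a key exposed at period t cannot be run
# backward to recover keys of periods < t. The domain-separation prefix
# is illustrative.

def next_key(key: bytes) -> bytes:
    return hashlib.sha256(b"update" + key).digest()

def key_at(initial: bytes, period: int) -> bytes:
    k = initial
    for _ in range(period):
        k = next_key(k)
    return k
```

Authenticators produced under old keys remain verifiable after an exposure, which is exactly the damage-limiting behavior the protocol formalizes.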

Journal ArticleDOI
TL;DR: A searchable attribute-based proxy reencryption system that enables a data owner to efficiently share his data with a specified group of users matching a sharing policy; meanwhile, the data will not only maintain its searchable property but the corresponding search keyword(s) can also be updated after the data sharing.
Abstract: To date, the growth of electronic personal data has led to a trend in which data owners prefer to remotely outsource their data to clouds to enjoy high-quality retrieval and storage services without the burden of local data management and maintenance. However, secure sharing and searching of the outsourced data is a formidable task, which may easily incur the leakage of sensitive personal information. Efficient data sharing and searching with security is of critical importance. This paper, for the first time, proposes a searchable attribute-based proxy reencryption system. When compared with existing systems supporting only either searchable attribute-based functionality or attribute-based proxy reencryption, our new primitive supports both abilities and provides flexible keyword update service. In particular, the system enables a data owner to efficiently share his data with a specified group of users matching a sharing policy; meanwhile, the data will not only maintain its searchable property but the corresponding search keyword(s) can also be updated after the data sharing. The new mechanism is applicable to many real-world applications, such as electronic health record systems. It is also proved chosen-ciphertext secure in the random oracle model.

Journal ArticleDOI
TL;DR: A map-based provable multicopy dynamic data possession (MB-PMDDP) scheme that provides evidence to the customers that the CSP is not cheating by storing fewer copies, and supports outsourcing of dynamic data, i.e., block-level operations.
Abstract: More and more organizations are opting to outsource data to remote cloud service providers (CSPs). Customers can rent the CSPs' storage infrastructure to store and retrieve almost unlimited amounts of data by paying fees metered in gigabytes/month. For an increased level of scalability, availability, and durability, some customers may want their data to be replicated on multiple servers across multiple data centers. The more copies the CSP is asked to store, the more fees the customers are charged. Therefore, customers need to have a strong guarantee that the CSP is storing all data copies that are agreed upon in the service contract, and that all these copies are consistent with the most recent modifications issued by the customers. In this paper, we propose a map-based provable multicopy dynamic data possession (MB-PMDDP) scheme that has the following features: 1) it provides evidence to the customers that the CSP is not cheating by storing fewer copies; 2) it supports outsourcing of dynamic data, i.e., it supports block-level operations, such as block modification, insertion, deletion, and append; and 3) it allows authorized users to seamlessly access the file copies stored by the CSP. We give a comparative analysis of the proposed MB-PMDDP scheme with a reference model obtained by extending existing provable possession of dynamic single-copy schemes. The theoretical analysis is validated through experimental results on a commercial cloud platform. In addition, we show security against colluding servers, and discuss how to identify corrupted copies by slightly modifying the proposed scheme.
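Two ingredients behind multicopy possession proofs can be sketched in a hedged way: each copy is made distinct by masking blocks with a keyed, copy-indexed pad (so one stored copy cannot impersonate many), and the owner spot-checks a copy by challenging a block and comparing a digest. The real MB-PMDDP scheme additionally uses authenticated structures to support dynamic block operations; everything below is illustrative:

```python
import hashlib

# Sketch: derive distinct, owner-recomputable copies of a file and
# spot-check that the CSP actually stores a given copy.

def make_copy(blocks, key: bytes, copy_id: int):
    """XOR-mask each block with a keyed pad tied to (copy_id, block index)."""
    out = []
    for i, blk in enumerate(blocks):
        pad = hashlib.sha256(key + bytes([copy_id, i])).digest()[:len(blk)]
        out.append(bytes(a ^ b for a, b in zip(blk, pad)))
    return out

def prove(copy, index):
    """CSP side: respond to a challenge on one block of one copy."""
    return hashlib.sha256(copy[index]).digest()

def check(blocks, key, copy_id, index, proof):
    """Owner side: recompute the challenged masked block from the key
    alone and compare digests; a missing or swapped copy fails."""
    expected = make_copy(blocks, key, copy_id)[index]
    return proof == hashlib.sha256(expected).digest()
```

Because the pads are keyed, the owner needs to keep only the key, not the copies, to run the check, which matches the storage asymmetry such protocols aim for.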

Journal ArticleDOI
TL;DR: Two practical large universe CP-ABE systems supporting white-box traceability are proposed and have two advantages: 1) the number of attributes is not polynomially bounded and 2) malicious users who leak their decryption keys could be traced.
Abstract: Ciphertext-policy attribute-based encryption (CP-ABE) enables fine-grained access control to encrypted data for commercial applications. There has been significant progress in CP-ABE over recent years because of two properties called traceability and large universe, which greatly enrich the commercial applications of CP-ABE. Traceability is the ability of ABE to trace the malicious users or traitors who intentionally leak partial or modified decryption keys for profit. Nevertheless, due to the nature of CP-ABE, it is difficult to identify the original key owner from an exposed key, since the decryption privilege is shared by multiple users who have the same attributes. On the other hand, the property of large universe in ABE enlarges the practical applications by supporting a flexible number of attributes. Several systems have been proposed to obtain either of the above properties. However, none of them achieves the two properties simultaneously in practice, which limits the commercial applications of CP-ABE to a certain extent. In this paper, we propose two practical large universe CP-ABE systems supporting white-box traceability. Compared with existing systems, both proposed systems have two advantages: 1) the number of attributes is not polynomially bounded and 2) malicious users who leak their decryption keys can be traced. Moreover, another remarkable advantage of the second proposed system is that the storage overhead for traitor tracing is constant, which makes it suitable for commercial applications.
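The essence of white-box traceability is that a well-formed decryption key carries its owner's identity in a way the key holder cannot strip or alter. A minimal sketch of that idea, assuming a MAC-based binding as a stand-in for the paper's algebraic construction (all names here are hypothetical):

```python
import hashlib
import hmac

MSK = b"master-secret"  # hypothetical authority master key

def keygen(user_id: str, attrs: frozenset) -> dict:
    # Bind the identity and attribute set into the key with a MAC under the
    # master key, so any well-formed leaked key reveals (and proves) its owner.
    body = user_id.encode() + b"|" + ",".join(sorted(attrs)).encode()
    return {"id": user_id, "attrs": attrs,
            "binding": hmac.new(MSK, body, hashlib.sha256).hexdigest()}

def trace(leaked_key: dict):
    # Tracing is just validation plus readout: check the binding, return the
    # embedded identity if it verifies, or None for a forged/tampered key.
    body = leaked_key["id"].encode() + b"|" + ",".join(sorted(leaked_key["attrs"])).encode()
    ok = hmac.compare_digest(leaked_key["binding"],
                             hmac.new(MSK, body, hashlib.sha256).hexdigest())
    return leaked_key["id"] if ok else None

key = keygen("alice", frozenset({"doctor", "cardiology"}))
assert trace(key) == "alice"
assert trace(dict(key, id="bob")) is None  # re-labeling the key breaks the binding
```

In the actual systems the binding lives inside the pairing-based key components rather than a MAC, which is what allows tracing without the tracer learning the master key; the sketch only shows the tamper-evident identity embedding.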

Journal ArticleDOI
TL;DR: Experimental results show that the proposed solution outperforms traditional differential privacy in terms of mean square error on large groups of queries, which suggests that correlated differential privacy can successfully retain utility while preserving privacy.
Abstract: Privacy preservation in data mining and data release has attracted increasing research interest over a number of decades. Differential privacy is one influential privacy notion that offers a rigorous and provable privacy guarantee for data mining and data release. Existing studies on differential privacy assume that the records in a data set are sampled independently. However, in real-world applications, records in a data set are rarely independent. The relationships among records are referred to as correlated information, and such a data set is defined as a correlated data set. A differential privacy technique performed on a correlated data set will disclose more information than expected, which is a serious privacy violation. Although recent research has been concerned with this new privacy violation, it still calls for a solid solution for the correlated data set. Moreover, how to decrease the large amount of noise incurred by differential privacy on a correlated data set is yet to be explored. To fill the gap, this paper proposes an effective correlated differential privacy solution by defining the correlated sensitivity and designing a correlated data releasing mechanism. By taking the correlation levels between records into consideration, the proposed correlated sensitivity can significantly decrease the noise compared with the traditional global sensitivity. The correlated data releasing mechanism, the correlated iteration mechanism, is designed based on an iterative method to answer a large number of queries. Compared with the traditional method, the proposed correlated differential privacy solution enhances the privacy guarantee for a correlated data set at a lower accuracy cost. Experimental results show that the proposed solution outperforms traditional differential privacy in terms of mean square error on large groups of queries. This also suggests that correlated differential privacy can successfully retain utility while preserving privacy.
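The noise saving can be illustrated numerically. The sketch below is a hedged simplification of the paper's correlated sensitivity, not its exact definition: changing record i is assumed to perturb each correlated record j in proportion to a correlation degree delta[i, j], so the sensitivity of a sum-style query becomes the worst-case correlated row sum, which is then used as the Laplace noise scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_sensitivity(delta: np.ndarray, record_sens: np.ndarray) -> float:
    # delta[i, j] in [0, 1] is the correlation degree between records i and j,
    # with delta[i, i] = 1. Worst-case correlated effect of changing one record.
    return float(np.max(delta @ record_sens))

n = 5
record_sens = np.ones(n)          # each record changes a count query by at most 1
delta = np.eye(n)
delta[0, 1] = delta[1, 0] = 0.5   # only records 0 and 1 are partially correlated

cs = correlated_sensitivity(delta, record_sens)   # 1.5 for this data set
naive = float(np.sum(record_sens))                # 5.0: bound treating all records as fully correlated

# Laplace mechanism answering a count query with the tighter scale:
eps = 1.0
noisy_answer = 3.0 + rng.laplace(scale=cs / eps)
print(cs, naive)
```

With sparse correlations the correlated sensitivity (1.5) stays close to the independent-records sensitivity (1.0), whereas the naive fully-correlated bound (5.0) would inflate the noise by the data set size.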

Journal ArticleDOI
TL;DR: This paper formalizes a security model of ABE with verifiable outsourced decryption by introducing a verification key in the output of the encryption algorithm, and presents a simple, general, and almost optimal approach to convert any ABE scheme with outsourced decryption into an ABE scheme with verifiable outsourced decryption.
Abstract: Attribute-based encryption (ABE) with outsourced decryption not only enables fine-grained sharing of encrypted data, but also overcomes the efficiency drawback (in terms of ciphertext size and decryption cost) of the standard ABE schemes. In particular, an ABE scheme with outsourced decryption allows a third party (e.g., a cloud server) to transform an ABE ciphertext into a (short) El Gamal-type ciphertext using a public transformation key provided by a user, so that the latter can be decrypted much more efficiently than the former by the user. However, a shortcoming of the original outsourced ABE scheme is that the correctness of the cloud server's transformation cannot be verified by the user. That is, an end user could be cheated into accepting a wrong or maliciously transformed output. In this paper, we first formalize a security model of ABE with verifiable outsourced decryption by introducing a verification key in the output of the encryption algorithm. Then, we present an approach to convert any ABE scheme with outsourced decryption into an ABE scheme with verifiable outsourced decryption. The new approach is simple, general, and almost optimal. Compared with the original outsourced ABE, our verifiable outsourced ABE neither increases the user's and the cloud server's computation costs except for some nondominant operations (e.g., hash computations), nor expands the ciphertext size except for adding a hash value. We instantiate our approach with an existing ciphertext-policy ABE scheme with outsourced decryption, and provide a detailed performance evaluation to demonstrate the advantages of our approach.
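The verification idea can be sketched end to end with stand-ins for the heavy cryptography. This is a hypothetical toy, not the paper's transformation: the "ABE ciphertext" is a trivially reversible encoding, and the point is only the flow, namely that the ciphertext carries a hash of the message fixed at encryption time, so the user can reject any wrong or malicious cloud transformation.

```python
import hashlib
from dataclasses import dataclass

def H(m: bytes) -> str:
    return hashlib.sha256(m).hexdigest()

@dataclass
class Ciphertext:
    abe_part: bytes   # stand-in for the heavy ABE ciphertext
    check: str        # verification value bound to the message at encryption time

def encrypt(m: bytes) -> Ciphertext:
    return Ciphertext(abe_part=m[::-1], check=H(m))  # toy "ABE": byte reversal

def cloud_transform(ct: Ciphertext, cheat: bool = False):
    # The cloud turns the heavy ciphertext into a short, cheap-to-decrypt one.
    partial = ct.abe_part[::-1]
    if cheat:
        partial = b"garbage"      # a malicious or faulty transformation
    return partial, ct.check

def user_decrypt(partial: bytes, check: str):
    m = partial                   # toy final decryption step
    return m if H(m) == check else None  # reject wrong transformations

m = b"secret report"
assert user_decrypt(*cloud_transform(encrypt(m))) == m
assert user_decrypt(*cloud_transform(encrypt(m), cheat=True)) is None
```

The extra cost matches the abstract's claim: one hash value in the ciphertext and one hash computation at decryption, both nondominant next to the pairing operations of real ABE.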

Journal ArticleDOI
TL;DR: The results show that security can be introduced at a negligible cost, particularly for a large number of files and users, and it is shown that the rate achieved by the proposed caching scheme with secure delivery is within a constant multiplicative factor of the information-theoretically optimal rate.
Abstract: Caching is emerging as a vital tool for alleviating the severe capacity crunch in modern content-centric wireless networks. The main idea behind caching is to store parts of the popular content in end-users' memory and leverage the locally stored content to reduce peak data rates. By jointly designing content placement and delivery mechanisms, recent works have shown order-wise reductions in transmission rates in contrast to traditional methods. In this paper, we consider the secure caching problem with the additional goal of minimizing information leakage to an external wiretapper. The fundamental cache memory versus transmission rate tradeoff for the secure caching problem is characterized. Rather surprisingly, these results show that security can be introduced at a negligible cost, particularly for a large number of files and users. It is also shown that the rate achieved by the proposed caching scheme with secure delivery is within a constant multiplicative factor of the information-theoretically optimal rate for almost all parameter values of practical interest.
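The memory-rate tradeoff the abstract refers to can be illustrated numerically. The formula below is the well-known non-secure coded-caching delivery rate for K users and N files with per-user cache size M (in file units), used here only as a hedged stand-in; the paper's secure scheme additionally spends cache memory on keys, which is what makes its cost negligible for large K and N.

```python
def coded_caching_rate(K: int, N: int, M: float) -> float:
    # Delivery rate (in file units) of the classic coded-caching scheme:
    # local caching gain (1 - M/N) times the global coded gain 1/(1 + KM/N).
    return K * (1 - M / N) / (1 + K * M / N)

K, N = 10, 10
for M in (0, 2, 5, 10):
    print(M, round(coded_caching_rate(K, N, M), 3))
```

Even a modest cache (M = 2 out of N = 10 files) cuts the peak rate from 10 file transmissions to about 2.67, which is the kind of order-wise reduction the placement/delivery co-design achieves.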

Journal ArticleDOI
TL;DR: A remote authentication protocol featuring nonrepudiation, client anonymity, key escrow resistance, and revocability for extra-body communication in the WBANs; a certificateless anonymous remote authentication scheme with revocation is constructed by incorporating the proposed encryption scheme and signature scheme.
Abstract: To ensure the security and privacy of the patient's health status in the wireless body area networks (WBANs), it is critical to secure the extra-body communication between the smart portable device held by the WBAN client and the application providers, such as the hospital, physician, or medical staff. Based on certificateless cryptography, this paper proposes a remote authentication protocol featuring nonrepudiation, client anonymity, key escrow resistance, and revocability for extra-body communication in the WBANs. First, we present a certificateless encryption scheme and a certificateless signature scheme with efficient revocation against short-term key exposure, which we believe are of independent interest. Then, a certificateless anonymous remote authentication scheme with revocation is constructed by incorporating the proposed encryption scheme and signature scheme. Our revocation mechanism is highly scalable and especially suitable for large-scale WBANs, in the sense that the key-update overhead on the side of the trusted party increases logarithmically in the number of users. As far as we know, this is the first work to consider the revocation functionality of anonymous remote authentication for the WBANs. Both theoretical analysis and experimental simulations show that the proposed authentication protocol is provably secure in the random oracle model and highly practical.
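The logarithmic key-update overhead is characteristic of tree-based revocation: users sit at the leaves of a complete binary tree and each holds the keys on its root-to-leaf path, so the trusted party only refreshes the siblings of a revoked path. The sketch below assumes this standard complete-subtree layout as an illustration; the paper's mechanism may differ in detail.

```python
def path_nodes(leaf: int, n_leaves: int):
    # Root-to-leaf path in a complete binary tree with n_leaves leaves,
    # heap-style indexing (root = 1, leaves = n_leaves .. 2*n_leaves - 1).
    # Each user stores the log2(n) + 1 keys of these nodes.
    node = n_leaves + leaf
    path = []
    while node >= 1:
        path.append(node)
        node //= 2
    return path

def nodes_to_rekey(revoked_leaf: int, n_leaves: int):
    # Siblings of the revoked path (root excluded): fresh keys at these
    # O(log n) nodes cover every non-revoked user and no revoked one.
    return [p ^ 1 for p in path_nodes(revoked_leaf, n_leaves)[:-1]]

n = 1024
assert len(path_nodes(0, n)) == 11      # log2(1024) + 1 keys per user
assert len(nodes_to_rekey(0, n)) == 10  # only 10 key updates to revoke a user
```

For a million-user WBAN deployment this means roughly 20 key updates per revocation instead of one per remaining user, which is the scalability claim in the abstract.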

Journal ArticleDOI
TL;DR: This paper studies the authorization mechanism for PKEET, proposes four types of authorization policies to enhance the privacy of users' data, and proves the scheme's security based on the computational Diffie-Hellman assumption in the random oracle model.
Abstract: We reformalize and recast the notion of public key encryption with equality test (PKEET), which was proposed in CT-RSA 2010 and supports checking whether two ciphertexts encrypted under different public keys contain the same message. PKEET has many interesting applications, for example, in constructing searchable encryption and partitioning encrypted data. However, the original PKEET scheme lacks an authorization mechanism for a user to control the comparison of its ciphertexts with others'. In this paper, we study the authorization mechanism for PKEET, and propose four types of authorization policies to enhance the privacy of users' data. We give the definitions of the policies, propose a PKEET scheme supporting these four types of authorization at the same time, and prove its security based on the computational Diffie-Hellman assumption in the random oracle model. To the best of our knowledge, it is the only PKEET scheme supporting flexible authorization.

Journal ArticleDOI
TL;DR: The proposed CL-EKM protocol supports efficient key updates when a node leaves or joins a cluster, ensures forward and backward key secrecy, and minimizes the impact of a node compromise on the security of other communication links.
Abstract: Recently, wireless sensor networks (WSNs) have been deployed for a wide variety of applications, including military sensing and tracking, patient status monitoring, and traffic flow monitoring, where sensory devices often move between different locations. Securing data and communications requires suitable encryption key protocols. In this paper, we propose a certificateless-effective key management (CL-EKM) protocol for secure communication in dynamic WSNs characterized by node mobility. The CL-EKM supports efficient key updates when a node leaves or joins a cluster and ensures forward and backward key secrecy. The protocol also supports efficient key revocation for compromised nodes and minimizes the impact of a node compromise on the security of other communication links. A security analysis of our scheme shows that our protocol is effective in defending against various attacks. We implement CL-EKM in Contiki OS and simulate it using the Cooja simulator to assess its time, energy, communication, and memory performance.
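The forward/backward secrecy requirement on join and leave can be shown with a minimal rekeying sketch. This is a hypothetical simplification, not the CL-EKM protocol: the cluster head simply draws a fresh group key on every membership change and (in the real protocol) distributes it over pairwise secure channels. A fresh key on join denies the newcomer past traffic (backward secrecy); a fresh key on leave denies the departed node future traffic (forward secrecy).

```python
import secrets

class Cluster:
    def __init__(self):
        self.members = {}                      # node_id -> pairwise key with cluster head
        self.group_key = secrets.token_bytes(32)
        self.history = []                      # retired group keys

    def _rekey(self):
        # Every membership change retires the current group key.
        self.history.append(self.group_key)
        self.group_key = secrets.token_bytes(32)

    def join(self, node_id):
        self.members[node_id] = secrets.token_bytes(32)
        self._rekey()                          # newcomer cannot read old traffic

    def leave(self, node_id):
        del self.members[node_id]
        self._rekey()                          # departed node cannot read new traffic

c = Cluster()
c.join("n1")
k_before = c.group_key
c.join("n2")                                   # backward secrecy check
assert c.group_key != k_before and k_before in c.history
c.leave("n1")                                  # forward secrecy check
assert c.group_key not in c.history
```

Keeping a distinct pairwise key per member is also what confines the damage of a single node compromise to that node's own links, mirroring the compromise-containment property claimed in the abstract.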

Journal ArticleDOI
TL;DR: A privacy-preserving decentralized CP-ABE (PPDCP-ABE) is proposed to reduce the trust on the central authority and protect users' privacy; both the identifiers and the attributes can be protected from being known by the authorities.
Abstract: In previous privacy-preserving multiauthority attribute-based encryption (PPMA-ABE) schemes, a user can acquire secret keys from multiple authorities only with them knowing his/her attributes, and furthermore, a central authority is required. Notably, a user's identity information can be extracted from some of his/her sensitive attributes. Hence, existing PPMA-ABE schemes cannot fully protect users' privacy, as multiple authorities can collaborate to identify a user by collecting and analyzing his/her attributes. Moreover, ciphertext-policy ABE (CP-ABE) is a more efficient public-key encryption scheme, in which the encryptor can select flexible access structures to encrypt messages. Therefore, a challenging and important task is to construct a PPMA-ABE scheme in which there is no need for a central authority and, furthermore, both the identifiers and the attributes can be kept hidden from the authorities. In this paper, a privacy-preserving decentralized CP-ABE (PPDCP-ABE) scheme is proposed to reduce the trust on the central authority and protect users' privacy. In our PPDCP-ABE scheme, each authority can work independently, without any collaboration, to initialize the system and issue secret keys to users. Furthermore, a user can obtain secret keys from multiple authorities without them knowing anything about his/her global identifier and attributes.

Journal ArticleDOI
TL;DR: DRA benefits and rogue device rejection performance are demonstrated using discrete Gabor transform features extracted from experimentally collected orthogonal frequency division multiplexing-based wireless fidelity (WiFi) and worldwide interoperability for microwave access (WiMAX) signals.
Abstract: Unauthorized network access and spoofing attacks at wireless access points (WAPs) have traditionally been addressed using bit-centric security measures and remain a major information technology security concern. This has recently been addressed using RF fingerprinting methods within the physical layer to augment WAP security. This paper extends the RF fingerprinting knowledge base by: 1) identifying and removing less-relevant features through dimensional reduction analysis (DRA) and 2) providing a first-look assessment of device identification (ID) verification that enables the detection of rogue devices attempting to gain network access by presenting false bit-level credentials of authorized devices. DRA benefits and rogue device rejection performance are demonstrated using discrete Gabor transform features extracted from experimentally collected orthogonal frequency division multiplexing-based wireless fidelity (WiFi) and worldwide interoperability for microwave access (WiMAX) signals. Relative to empirically selected full-dimensional feature sets, performance using DRA-reduced feature sets containing only 10% of the highest-ranked features (a 90% reduction) includes: 1) maintaining the desired device classification accuracy and 2) improving authorized device ID verification for both WiFi and WiMAX signals. Reliable burst-by-burst rogue device rejection of better than 93% is achieved for 72 unique spoofing attacks, and improvement to 100% is demonstrated when an accurate sample of the overall device population is employed. DRA-reduced feature set efficiency is reflected in DRA models requiring only one-tenth the number of features and processing time.
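The "rank features, keep the top 10%" step can be sketched with a generic per-feature Fisher score on synthetic RF-fingerprint data. This is a hedged stand-in for the paper's DRA ranking (the paper's exact relevance measure and the discrete Gabor transform features are not reproduced here); the point is that with a strong discriminative signal, a 90% reduction keeps exactly the informative dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fisher_scores(X, y):
    # One-dimensional Fisher score per feature: between-class scatter
    # divided by within-class scatter. Higher = more class-relevant.
    scores = np.empty(X.shape[1])
    classes = np.unique(y)
    mu = X.mean(axis=0)
    for j in range(X.shape[1]):
        num = sum(np.sum(y == c) * (X[y == c, j].mean() - mu[j]) ** 2 for c in classes)
        den = sum(np.sum((X[y == c, j] - X[y == c, j].mean()) ** 2) for c in classes)
        scores[j] = num / (den + 1e-12)
    return scores

# Synthetic stand-in: 2 devices x 100 bursts x 50 features;
# only the first 5 features actually carry device-specific signal.
X = rng.normal(size=(200, 50))
y = np.repeat([0, 1], 100)
X[y == 1, :5] += 3.0

keep = int(0.10 * X.shape[1])                 # DRA-style 90% reduction
top = np.argsort(fisher_scores(X, y))[::-1][:keep]
assert set(top) == set(range(5))              # the informative features survive
```

A classifier trained on `X[:, top]` then works in a 5-dimensional space instead of 50, mirroring the paper's observation that the reduced model needs a tenth of the features and processing time while maintaining accuracy.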

Journal ArticleDOI
TL;DR: This paper presents a semianonymous privilege control scheme AnonyControl, which decentralizes the central authority to limit the identity leakage and thus achieves semianonymity, and also generalizes the file access control to the privilege control, by which privileges of all operations on the cloud data can be managed in a fine-grained manner.
Abstract: Cloud computing is a revolutionary computing paradigm, which enables flexible, on-demand, and low-cost usage of computing resources, but because the data is outsourced to cloud servers, various privacy concerns emerge. Various schemes based on attribute-based encryption have been proposed to secure cloud storage. However, most work focuses on data content privacy and access control, while less attention is paid to privilege control and identity privacy. In this paper, we present a semianonymous privilege control scheme, AnonyControl, to address not only the data privacy but also the user identity privacy in existing access control schemes. AnonyControl decentralizes the central authority to limit identity leakage and thus achieves semianonymity. It also generalizes file access control to privilege control, by which privileges of all operations on the cloud data can be managed in a fine-grained manner. Subsequently, we present AnonyControl-F, which fully prevents identity leakage and achieves full anonymity. Our security analysis shows that both AnonyControl and AnonyControl-F are secure under the decisional bilinear Diffie-Hellman assumption, and our performance evaluation exhibits the feasibility of our schemes.