
Showing papers in "IEEE Transactions on Information Forensics and Security in 2016"


Journal ArticleDOI
TL;DR: A unique watermark is directly embedded into the encrypted images by the cloud server before the images are sent to the query user, so that when an image copy is found, the unlawful query user who distributed it can be traced by watermark extraction.
Abstract: With the increasing importance of images in people’s daily life, content-based image retrieval (CBIR) has been widely studied. Compared with text documents, images consume much more storage space; hence, their maintenance is a typical candidate for cloud storage outsourcing. For privacy-preserving purposes, sensitive images, such as medical and personal images, need to be encrypted before outsourcing, which renders plaintext-domain CBIR technologies unusable. In this paper, we propose a scheme that supports CBIR over encrypted images without leaking sensitive information to the cloud server. First, feature vectors are extracted to represent the corresponding images. After that, pre-filter tables are constructed by locality-sensitive hashing to increase search efficiency. Moreover, the feature vectors are protected by the secure kNN algorithm, and image pixels are encrypted by a standard stream cipher. In addition, considering that authorized query users may illegally copy and distribute the retrieved images to unauthorized parties, we propose a watermark-based protocol to deter such illegal distribution. In our watermark-based protocol, a unique watermark is directly embedded into the encrypted images by the cloud server before the images are sent to the query user. Hence, when an image copy is found, the unlawful query user who distributed the image can be traced by watermark extraction. The security analysis and the experiments show the security and efficiency of the proposed scheme.
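
The pre-filter tables above are built with locality-sensitive hashing so that only a small candidate set ever reaches the (more expensive) secure kNN comparison. A minimal random-hyperplane LSH sketch of that bucketing idea, with all names and parameters our own illustration rather than the paper's implementation:

```python
import numpy as np

def lsh_signature(vec, hyperplanes):
    """Random-hyperplane LSH: one bit per hyperplane. Similar vectors
    tend to share signatures and therefore land in the same bucket."""
    return tuple((hyperplanes @ vec) > 0)

rng = np.random.default_rng(0)
dim, n_planes = 128, 16          # feature dimension / signature length (illustrative)
hyperplanes = rng.standard_normal((n_planes, dim))

# Build a pre-filter table: bucket id -> list of image ids.
features = {f"img{i}": rng.standard_normal(dim) for i in range(1000)}
table = {}
for img_id, vec in features.items():
    table.setdefault(lsh_signature(vec, hyperplanes), []).append(img_id)

# At query time only the colliding bucket is searched, not the whole corpus.
query = features["img42"] + 0.05 * rng.standard_normal(dim)  # near-duplicate query
candidates = table.get(lsh_signature(query, hyperplanes), [])
print(len(candidates), "candidates instead of", len(features))
```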

563 citations


Journal ArticleDOI
TL;DR: A new method of keyword transformation based on the uni-gram is developed, which simultaneously improves accuracy and creates the ability to handle other spelling mistakes, and the keyword weight is considered when selecting an adequate matching file set.
Abstract: Keyword-based search over encrypted outsourced data has become an important tool in the current cloud computing scenario. The majority of the existing techniques focus on multi-keyword exact match or single-keyword fuzzy search; these techniques have less practical significance in real-world applications than a multi-keyword fuzzy search technique over encrypted data. The first attempt to construct such a multi-keyword fuzzy search scheme was reported by Wang et al. , who used locality-sensitive hashing functions and Bloom filtering to meet the goal of multi-keyword fuzzy search. Nevertheless, Wang’s scheme was effective only for a one-letter mistake in a keyword and not for other common spelling mistakes. Moreover, Wang’s scheme was vulnerable to server out-of-order problems during the ranking process and did not consider the keyword weight. In this paper, we propose an efficient multi-keyword fuzzy ranked search scheme, built on Wang et al. ’s scheme, that is able to address the aforementioned problems. First, we develop a new method of keyword transformation based on the uni-gram, which simultaneously improves the accuracy and creates the ability to handle other spelling mistakes. In addition, keywords with the same root can be queried using the stemming algorithm. Furthermore, we consider the keyword weight when selecting an adequate matching file set. Experiments using real-world data show that our scheme is practically efficient and achieves high accuracy.
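
How a uni-gram transformation tolerates misspellings can be seen in a few lines; the real scheme hashes such uni-grams into a Bloom-filter index over encrypted data, and the function names below are ours:

```python
from collections import Counter

def unigram_set(keyword):
    """Transform a keyword into indexed uni-grams, e.g. 'google' ->
    {'g1','o1','o2','g2','l1','e1'}. Misspellings change only a few
    elements, so set overlap stays high."""
    seen = Counter()
    grams = set()
    for ch in keyword.lower():
        seen[ch] += 1
        grams.add(f"{ch}{seen[ch]}")
    return grams

def similarity(a, b):
    """Jaccard overlap between uni-gram sets as a fuzzy-match score."""
    sa, sb = unigram_set(a), unigram_set(b)
    return len(sa & sb) / len(sa | sb)

print(similarity("network", "netwrok"))   # transposition: overlap stays 1.0
print(similarity("network", "datebase"))  # unrelated keyword: low overlap
```

Swapped letters ("netwrok") leave the indexed uni-gram set unchanged, which is exactly why this transformation handles more spelling mistakes than a one-letter-error model.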

464 citations


Journal ArticleDOI
TL;DR: This paper introduces a novel and appealing approach for detecting face spoofing using a colour texture analysis that exploits the joint colour-texture information from the luminance and the chrominance channels by extracting complementary low-level feature descriptions from different colour spaces.
Abstract: Research on non-intrusive software-based face spoofing detection schemes has been mainly focused on the analysis of the luminance information of the face images, hence discarding the chroma component, which can be very useful for discriminating fake faces from genuine ones. This paper introduces a novel and appealing approach for detecting face spoofing using a colour texture analysis. We exploit the joint colour-texture information from the luminance and the chrominance channels by extracting complementary low-level feature descriptions from different colour spaces. More specifically, the feature histograms are computed over each image band separately. Extensive experiments on the three most challenging benchmark data sets, namely, the CASIA face anti-spoofing database, the replay-attack database, and the MSU mobile face spoof database, showed excellent results compared with the state of the art. More importantly, unlike most of the methods proposed in the literature, our proposed approach is able to achieve stable performance across all the three benchmark data sets. The promising results of our cross-database evaluation suggest that the facial colour texture representation is more stable in unknown conditions compared with its gray-scale counterparts.
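
The core of the approach is computing texture histograms per color band rather than on luminance alone. A minimal sketch, assuming uniform LBP on a single YCbCr conversion (the paper combines several color spaces and descriptor variants; the function name is ours):

```python
import numpy as np
from skimage.color import rgb2ycbcr
from skimage.feature import local_binary_pattern

def color_texture_descriptor(rgb, P=8, R=1):
    """Concatenate LBP histograms computed separately on the luminance
    and the two chrominance channels (YCbCr here for illustration)."""
    ycbcr = rgb2ycbcr(rgb)
    hists = []
    for c in range(3):                       # Y, Cb, Cr bands
        lbp = local_binary_pattern(ycbcr[..., c], P, R, method="uniform")
        h, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        hists.append(h)
    return np.concatenate(hists)             # fed to a binary classifier

rng = np.random.default_rng(1)
fake_frame = rng.random((64, 64, 3))         # stand-in for a face crop
print(color_texture_descriptor(fake_frame).shape)  # (30,)
```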

449 citations


Journal ArticleDOI
TL;DR: An alternative approach based on a locally estimated multivariate Gaussian cover image model that is sufficiently simple to derive a closed-form expression for the power of the most powerful detector of content-adaptive least significant bit matching but, at the same time, complex enough to capture the non-stationary character of natural images.
Abstract: Most current steganographic schemes embed the secret payload by minimizing a heuristically defined distortion. Similarly, their security is evaluated empirically using classifiers equipped with rich image models. In this paper, we pursue an alternative approach based on a locally estimated multivariate Gaussian cover image model that is sufficiently simple to derive a closed-form expression for the power of the most powerful detector of content-adaptive least significant bit matching but, at the same time, complex enough to capture the non-stationary character of natural images. We show that when the cover model estimator is properly chosen, the state-of-the-art performance can be obtained. The closed-form expression for detectability within the chosen model is used to obtain new fundamental insight regarding the performance limits of empirical steganalysis detectors built as classifiers. In particular, we consider a novel detectability limited sender and estimate the secure payload of individual images.
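
For readers unfamiliar with this style of analysis: with independent, locally Gaussian residuals, the power of an optimal detector typically reduces to a Gaussian tail expression. The following is our hedged paraphrase of the general form such closed-form expressions take, not a verbatim formula from the paper:

```latex
% Pixels are modelled as independent zero-mean Gaussians with estimated
% variances \sigma_n^2; \beta_n is the probability that pixel n is
% changed by LSB matching. The power of the most powerful
% (likelihood-ratio) detector at false-alarm rate \alpha_0 then has the form
P_D = Q\!\left( Q^{-1}(\alpha_0) - \sqrt{\textstyle\sum_n \beta_n^2 I_n} \right),
% where Q is the standard normal tail and I_n is the per-pixel Fisher
% information, which in this model scales as 1/\sigma_n^4: flat regions
% (small \sigma_n) are far more detectable than textured ones, which is
% what lets a "detectability-limited sender" budget payload per image.
```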

406 citations


Journal ArticleDOI
TL;DR: An efficient face spoof detection system on an Android smartphone, based on the analysis of image distortion in spoof face images, is developed, and an unconstrained smartphone spoof attack database containing more than 1000 subjects is built.
Abstract: With the wide deployment of the face recognition systems in applications from deduplication to mobile device unlocking, security against the face spoofing attacks requires increased attention; such attacks can be easily launched via printed photos, video replays, and 3D masks of a face. We address the problem of face spoof detection against the print (photo) and replay (photo or video) attacks based on the analysis of image distortion ( e.g. , surface reflection, moire pattern, color distortion, and shape deformation) in spoof face images (or video frames). The application domain of interest is smartphone unlock, given that a growing number of smartphones have face unlock and mobile payment capabilities. We build an unconstrained smartphone spoof attack database (MSU USSA) containing more than 1000 subjects. Both the print and replay attacks are captured using the front and rear cameras of a Nexus 5 smartphone. We analyze the image distortion of the print and replay attacks using different: 1) intensity channels (R, G, B, and grayscale); 2) image regions (entire image, detected face, and facial component between nose and chin); and 3) feature descriptors. We develop an efficient face spoof detection system on an Android smartphone. Experimental results on the public-domain Idiap Replay-Attack, CASIA FASD, and MSU-MFSD databases, and the MSU USSA database show that the proposed approach is effective in face spoof detection for both the cross-database and intra-database testing scenarios. User studies of our Android face spoof detection system involving 20 participants show that the proposed approach works very well in real application scenarios.

375 citations


Journal ArticleDOI
TL;DR: An overview of soft biometrics is provided and some of the techniques that have been proposed to extract them from the image and the video data are discussed, a taxonomy for organizing and classifying soft biometric attributes is introduced, and the strengths and limitations are enumerated.
Abstract: Recent research has explored the possibility of extracting ancillary information from primary biometric traits viz., face, fingerprints, hand geometry, and iris. This ancillary information includes personal attributes, such as gender, age, ethnicity, hair color, height, weight, and so on. Such attributes are known as soft biometrics and have applications in surveillance and indexing biometric databases. These attributes can be used in a fusion framework to improve the matching accuracy of a primary biometric system (e.g., fusing face with gender information), or can be used to generate qualitative descriptions of an individual (e.g., young Asian female with dark eyes and brown hair). The latter is particularly useful in bridging the semantic gap between human and machine descriptions of the biometric data. In this paper, we provide an overview of soft biometrics and discuss some of the techniques that have been proposed to extract them from the image and the video data. We also introduce a taxonomy for organizing and classifying soft biometric attributes, and enumerate the strengths and limitations of these attributes in the context of an operational biometric system. Finally, we discuss open research problems in this field. This survey is intended for researchers and practitioners in the field of biometrics.

355 citations


Journal ArticleDOI
TL;DR: The results suggest that this is due to the ability of HMOG features to capture distinctive body movements caused by walking, in addition to the hand-movement dynamics from taps.
Abstract: We introduce hand movement, orientation, and grasp (HMOG), a set of behavioral features to continuously authenticate smartphone users. HMOG features unobtrusively capture subtle micro-movement and orientation dynamics resulting from how a user grasps, holds, and taps on the smartphone. We evaluated authentication and biometric key generation (BKG) performance of HMOG features on data collected from 100 subjects typing on a virtual keyboard. Data were collected under two conditions: 1) sitting and 2) walking. We achieved authentication equal error rates (EERs) as low as 7.16% (walking) and 10.05% (sitting) when we combined HMOG, tap, and keystroke features. We performed experiments to investigate why HMOG features perform well during walking. Our results suggest that this is due to the ability of HMOG features to capture distinctive body movements caused by walking, in addition to the hand-movement dynamics from taps. With BKG, we achieved the EERs of 15.1% using HMOG combined with taps. In comparison, BKG using tap, key hold, and swipe features had EERs between 25.7% and 34.2%. We also analyzed the energy consumption of HMOG feature extraction and computation. Our analysis shows that HMOG features extracted at a 16-Hz sensor sampling rate incurred a minor overhead of 7.9% without sacrificing authentication accuracy. Two points distinguish our work from current literature: 1) we present the results of a comprehensive evaluation of three types of features (HMOG, keystroke, and tap) and their combinations under the same experimental conditions and 2) we analyze the features from three perspectives (authentication, BKG, and energy consumption on smartphones).
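
A toy version of the feature pipeline clarifies what "micro-movement and orientation dynamics" means in practice. The feature set below is our simplified illustration, not the authors' exact HMOG set:

```python
import numpy as np

def hmog_style_features(acc, gyr, tap_idx, w=8):
    """Simplified HMOG-flavoured features: statistics of accelerometer
    and gyroscope magnitude in a window around each tap, capturing grasp
    resistance and micro-movement. acc/gyr are (T, 3) arrays sampled at
    e.g. 16 Hz; tap_idx are sample indices of detected taps."""
    feats = []
    for sig in (acc, gyr):
        mag = np.linalg.norm(sig, axis=1)
        for t in tap_idx:
            win = mag[max(0, t - w): t + w]
            feats += [win.mean(), win.std(), win.max() - win.min()]
    return np.array(feats)

rng = np.random.default_rng(2)
acc = rng.standard_normal((160, 3))   # 10 s at 16 Hz (illustrative)
gyr = rng.standard_normal((160, 3))
print(hmog_style_features(acc, gyr, tap_idx=[40, 90, 140]).shape)  # (18,)
```

The 16 Hz sampling rate mirrors the paper's energy-efficiency finding: HMOG-style statistics remain informative at low rates, which keeps the sensing overhead small.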

319 citations


Journal ArticleDOI
TL;DR: It is shown that pretrained CNNs can yield state-of-the-art results with no need for architecture or hyperparameter selection, and data set augmentation is used to increase the classifiers' performance, not only for deep architectures but also for shallow ones.
Abstract: With the growing use of biometric authentication systems in the recent years, spoof fingerprint detection has become increasingly important. In this paper, we use convolutional neural networks (CNNs) for fingerprint liveness detection. Our system is evaluated on the data sets used in the liveness detection competition of the years 2009, 2011, and 2013, which comprise almost 50 000 real and fake fingerprint images. We compare four different models: two CNNs pretrained on natural images and fine-tuned with the fingerprint images, a CNN with random weights, and a classical local binary pattern approach. We show that pretrained CNNs can yield state-of-the-art results with no need for architecture or hyperparameter selection. Data set augmentation is used to increase the classifiers' performance, not only for deep architectures but also for shallow ones. We also report good accuracy on very small training sets (400 samples) using these large pretrained networks. Our best model achieves an overall rate of 97.1% of correctly classified samples—a relative improvement of 16% in test error when compared with the best previously published results. This model won the first prize in the fingerprint liveness detection competition 2015 with an overall accuracy of 95.5%.
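
The recipe "pretrained CNN, fine-tuned on fingerprints, no architecture search" translates directly into a few lines in a modern framework. This PyTorch re-statement is ours, not the authors' code, and all hyperparameters are illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone, re-headed for live-vs-fake fingerprints.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)

# Fine-tune everything with a small learning rate, as in transfer learning.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 3, 224, 224) fingerprint crops; labels: 0=fake, 1=live."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test on random tensors standing in for a mini-batch.
print(train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0])))
```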

314 citations


Journal ArticleDOI
TL;DR: In this paper, a discriminant correlation analysis (DCA) is proposed for feature fusion by maximizing the pairwise correlations across the two feature sets and eliminating the between-class correlations and restricting the correlations to be within the classes.
Abstract: Information fusion is a key step in multimodal biometric systems. The fusion of information can occur at different levels of a recognition system, i.e., at the feature level, matching-score level, or decision level. However, feature level fusion is believed to be more effective owing to the fact that a feature set contains richer information about the input biometric data than the matching score or the output decision of a classifier. The goal of feature fusion for recognition is to combine relevant information from two or more feature vectors into a single one with more discriminative power than any of the input feature vectors. In pattern recognition problems, we are also interested in separating the classes. In this paper, we present discriminant correlation analysis (DCA), a feature level fusion technique that incorporates the class associations into the correlation analysis of the feature sets. DCA performs an effective feature fusion by maximizing the pairwise correlations across the two feature sets and, at the same time, eliminating the between-class correlations and restricting the correlations to be within the classes. Our proposed method can be used in pattern recognition applications for fusing the features extracted from multiple modalities or combining different feature vectors extracted from a single modality. It is noteworthy that DCA is the first technique that considers class structure in feature fusion. Moreover, it has a very low computational complexity and it can be employed in real-time applications. Multiple sets of experiments performed on various biometric databases using different feature extraction techniques show the effectiveness of our proposed method, which outperforms other state-of-the-art approaches.

310 citations


Journal ArticleDOI
TL;DR: The most important innovation of ActiveTrust is that it avoids black holes through the active creation of a number of detection routes to quickly detect and obtain nodal trust and thus improve the data route security.
Abstract: Wireless sensor networks (WSNs) are increasingly being deployed in security-critical applications. Because of their inherent resource-constrained characteristics, they are prone to various security attacks, and a black hole attack is a type of attack that seriously affects data collection. To conquer that challenge, an active detection-based security and trust routing scheme named ActiveTrust is proposed for WSNs. The most important innovation of ActiveTrust is that it avoids black holes through the active creation of a number of detection routes to quickly detect and obtain nodal trust and thus improve the data route security. More importantly, the generation and the distribution of detection routes are given in the ActiveTrust scheme, which can fully use the energy in non-hotspots to create as many detection routes as needed to achieve the desired security and energy efficiency. Both comprehensive theoretical analysis and experimental results indicate that the performance of the ActiveTrust scheme is better than that of previous studies. ActiveTrust can significantly improve the data route success probability and ability against black hole attacks and can optimize network lifetime.

290 citations


Journal ArticleDOI
TL;DR: An efficient file-hierarchy attribute-based encryption scheme for cloud computing is proposed that integrates layered access structures into a single access structure, with which the hierarchical files are then encrypted.
Abstract: Ciphertext-policy attribute-based encryption (CP-ABE) has been a preferred encryption technology to solve the challenging problem of secure data sharing in cloud computing. The shared data files generally have the characteristic of multilevel hierarchy, particularly in the areas of healthcare and the military. However, the hierarchy structure of shared files has not been explored in CP-ABE. In this paper, an efficient file-hierarchy attribute-based encryption scheme for cloud computing is proposed. The layered access structures are integrated into a single access structure, and the hierarchical files are then encrypted with the integrated access structure. The ciphertext components related to attributes can be shared by the files. Therefore, both ciphertext storage and the time cost of encryption are saved. Moreover, the proposed scheme is proved to be secure under the standard assumption. Experimental simulation shows that the proposed scheme is highly efficient in terms of encryption and decryption. As the number of files increases, the advantages of our scheme become increasingly conspicuous.

Journal ArticleDOI
TL;DR: This paper investigates to what extent an external attacker can identify the specific actions that a user is performing on her mobile apps, designs a system that achieves this goal using advanced machine learning techniques, and compares the solution with three state-of-the-art algorithms.
Abstract: Mobile devices can be maliciously exploited to violate the privacy of people. In most attack scenarios, the adversary takes local or remote control of the mobile device by leveraging a vulnerability of the system, and then sends the collected information back to some remote web service. In this paper, we consider a different adversary, who does not interact actively with the mobile device but is able to eavesdrop on the network traffic of the device from the network side (e.g., by controlling a Wi-Fi access point). The fact that the network traffic is often encrypted makes the attack even more challenging. In this paper, we investigate to what extent such an external attacker can identify the specific actions that a user is performing on her mobile apps. We design a system that achieves this goal using advanced machine learning techniques. We built a complete implementation of this system and ran a thorough set of experiments, which show that our attack can achieve accuracy and precision higher than 95% for most of the considered actions. We compared our solution with three state-of-the-art algorithms and confirmed that our system outperforms all of these direct competitors.
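
The attack pipeline is standard supervised learning over side-channel features of encrypted flows (packet sizes, directions, timing). A toy sketch, with a random forest standing in for the paper's more advanced learners and a made-up feature set of our own:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def flow_features(pkt_sizes, directions):
    """Features observable even when payloads are encrypted: per-direction
    packet counts and size statistics. A simplified stand-in for the
    paper's feature set; names and choices here are ours."""
    sizes = np.asarray(pkt_sizes, dtype=float)
    dirs = np.asarray(directions)
    def stats(x):
        return [x.size, x.sum(), x.mean() if x.size else 0.0,
                x.std() if x.size else 0.0]
    return stats(sizes[dirs > 0]) + stats(sizes[dirs < 0])

# Toy corpus: each row is the traffic burst of one labelled user action.
rng = np.random.default_rng(3)
bursts = [(rng.integers(60, 1500, 30), rng.choice([-1, 1], 30)) for _ in range(200)]
X = [flow_features(s, d) for s, d in bursts]
y = rng.integers(0, 4, 200)   # fake labels: 4 hypothetical app actions

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict([flow_features(*bursts[0])]))
```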

Journal ArticleDOI
TL;DR: It is proved that the proposed EPOM achieves the goal of secure integer number processing without resulting in privacy leakage of data to unauthorized parties.
Abstract: In this paper, we propose a toolkit for efficient and privacy-preserving outsourced calculation under multiple encrypted keys (EPOM). Using EPOM, a large scale of users can securely outsource their data to a cloud server for storage. Moreover, encrypted data belonging to multiple users can be processed without compromising on the security of the individual user’s (original) data and the final computed results. To reduce the associated key management cost and private key exposure risk in EPOM, we present a distributed two-trapdoor public-key cryptosystem, the core cryptographic primitive. We also present the toolkit to ensure that the commonly used integer operations can be securely handled across different encrypted domains. We then prove that the proposed EPOM achieves the goal of secure integer number processing without resulting in privacy leakage of data to unauthorized parties. Last, we demonstrate the utility and the efficiency of EPOM using simulations.
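
EPOM's core primitive is a distributed two-trapdoor public-key cryptosystem; the additive homomorphism such toolkits build on is easy to demonstrate with an ordinary single-key Paillier cryptosystem. This demo uses the python-paillier library as background illustration, not EPOM itself:

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

a, b = 15, 27
ca, cb = public_key.encrypt(a), public_key.encrypt(b)

# The server can add ciphertexts (and scale by plaintext constants)
# without ever seeing a or b.
c_sum = ca + cb
c_scaled = ca * 3

print(private_key.decrypt(c_sum))     # 42
print(private_key.decrypt(c_scaled))  # 45
```

EPOM's contribution sits on top of this kind of arithmetic: splitting the decryption trapdoor between parties so that data encrypted under multiple users' keys can be processed jointly without any single party holding a full private key.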

Journal ArticleDOI
TL;DR: This work incorporates ring partition and invariant vector distance into an image hashing algorithm to enhance rotation robustness and discriminative capability, and demonstrates that the proposed hashing algorithm is robust against commonly used digital operations on images.
Abstract: Robustness and discrimination are two of the most important objectives in image hashing. We incorporate ring partition and invariant vector distance into an image hashing algorithm to enhance rotation robustness and discriminative capability. As the ring partition is unrelated to image rotation, the statistical features extracted from image rings in a perceptually uniform color space, i.e., the CIE L*a*b* color space, are rotation invariant and stable. In particular, the Euclidean distance between vectors of these perceptual features is invariant to commonly used digital operations on images (e.g., JPEG compression, gamma correction, and brightness/contrast adjustment), which helps make the image hash compact and discriminative. We conduct experiments to evaluate the efficiency with 250 color images and demonstrate that the proposed hashing algorithm is robust against commonly used digital operations on images. In addition, with the receiver operating characteristic curve, we illustrate that our hashing is much better than existing popular hashing algorithms in terms of robustness and discrimination.
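
A stripped-down sketch of why ring partition buys rotation invariance: statistics are pooled over concentric rings, and rotating the image permutes pixels within each ring without changing the pooled values. The feature set below (per-ring L*a*b* means) is our simplification of the paper's richer statistics:

```python
import numpy as np
from skimage.color import rgb2lab

def ring_partition_hash(rgb, n_rings=8):
    """Illustrative ring-partition hash: mean of each L*a*b* channel over
    concentric rings around the image centre. Rotation permutes pixels
    inside each ring, so the pooled means (and Euclidean distances
    between hashes) are rotation invariant."""
    lab = rgb2lab(rgb)
    h, w = lab.shape[:2]
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, min(h, w) / 2, n_rings + 1)
    feats = []
    for i in range(n_rings):
        mask = (r >= edges[i]) & (r < edges[i + 1])
        feats.extend(lab[mask].mean(axis=0))
    return np.array(feats)

def hash_distance(h1, h2):
    return np.linalg.norm(h1 - h2)   # the invariant vector distance

rng = np.random.default_rng(4)
img = rng.random((128, 128, 3))
print(ring_partition_hash(img).shape)   # (24,) for 8 rings x 3 channels
```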

Journal ArticleDOI
TL;DR: A new, simple yet effective framework for RDH in the encrypted domain, in which the server manager does not need to design a new RDH scheme according to the encryption algorithm used by the content owner, and most previously proposed RDH schemes can be applied to the encrypted image directly.
Abstract: In the past decade and more, hundreds of reversible data hiding (RDH) algorithms have been reported. By exploring the correlation between neighboring pixels (or coefficients), extra information can be embedded into the host image reversibly. However, these RDH algorithms cannot be applied in the encrypted domain directly, since the correlation between neighboring pixels disappears after encryption. In order to accomplish RDH in the encrypted domain, specific RDH schemes have been designed according to the encryption algorithm utilized. In this paper, we propose a new, simple yet effective framework for RDH in the encrypted domain. In the proposed framework, the pixels in a plain image are first divided into sub-blocks of size $m\times n$ . Then, with an encryption key, a key stream (a stream of random or pseudorandom bits/bytes that are combined with a plaintext message to produce the encrypted message) is generated, and the pixels in the same sub-block are encrypted with the same key stream byte. After the stream encryption, the encrypted $m\times n$ sub-blocks are randomly permutated with a permutation key. Since the correlation between the neighboring pixels in each sub-block is well preserved in the encrypted domain, most previously proposed RDH schemes can be applied to the encrypted image directly. One of the main merits of the proposed framework is that the RDH scheme is independent of the image encryption algorithm. That is, the server manager (or channel administrator) does not need to design a new RDH scheme according to the encryption algorithm used by the content owner; instead, he/she can accomplish the data hiding by applying the numerous previously proposed RDH algorithms to the encrypted domain directly.
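
A few lines make the framework's key property concrete: XORing every pixel of a sub-block with the same keystream byte keeps the relative structure inside the block intact, since (x ^ k) ^ (y ^ k) = x ^ y. A sketch under those assumptions (illustrative, not the paper's exact pseudocode):

```python
import numpy as np

def block_stream_encrypt(img, m=4, n=4, key_seed=1, perm_seed=2):
    """All pixels in an m x n sub-block are XORed with the SAME keystream
    byte, then the encrypted blocks are permuted. The relative XOR
    structure inside each block survives encryption, which is what lets
    existing RDH schemes run on the encrypted image."""
    H, W = img.shape
    blocks = (img.reshape(H // m, m, W // n, n)
                 .swapaxes(1, 2).reshape(-1, m, n))
    ks = np.random.default_rng(key_seed).integers(0, 256, len(blocks)).astype(np.uint8)
    enc = blocks ^ ks[:, None, None]           # one keystream byte per block
    perm = np.random.default_rng(perm_seed).permutation(len(blocks))
    return enc[perm], perm, ks

rng = np.random.default_rng(5)
img = rng.integers(0, 256, (8, 8), dtype=np.uint8)
enc_blocks, perm, ks = block_stream_encrypt(img)

orig_blocks = img.reshape(2, 4, 2, 4).swapaxes(1, 2).reshape(-1, 4, 4)
b = orig_blocks[perm[0]]                       # the block now sitting first
# Intra-block structure is identical before and after encryption:
print(np.array_equal(b ^ b[0, 0], enc_blocks[0] ^ enc_blocks[0][0, 0]))  # True
```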

Journal ArticleDOI
TL;DR: This paper designs an efficient homomorphic encryption scheme and a secure comparison scheme, uses them to build an association rule mining solution, and demonstrates that the run time of each solution is only one order of magnitude higher than that of the best non-privacy-preserving data mining algorithms.
Abstract: Association rule mining and frequent itemset mining are two popular and widely studied data analysis techniques for a range of applications. In this paper, we focus on privacy-preserving mining on vertically partitioned databases. In such a scenario, data owners wish to learn the association rules or frequent itemsets from a collective data set and disclose as little information about their (sensitive) raw data as possible to other data owners and third parties. To ensure data privacy, we design an efficient homomorphic encryption scheme and a secure comparison scheme. We then propose a cloud-aided frequent itemset mining solution, which is used to build an association rule mining solution. Our solutions are designed for outsourced databases that allow multiple data owners to efficiently share their data securely without compromising on data privacy. Our solutions leak less information about the raw data than most existing solutions. In comparison to the only known solution achieving a privacy level similar to that of our proposed solutions, the performance of our proposed solutions is three to five orders of magnitude higher. Based on our experimental findings using different parameters and data sets, we demonstrate that the run time of each of our solutions is only one order of magnitude higher than that of the best non-privacy-preserving data mining algorithms. Since both data and computing work are outsourced to the cloud servers, the resource consumption at the data owner end is very low.

Journal ArticleDOI
TL;DR: It is proved that in all permutation-only image ciphers, regardless of the cipher structure, the correct permutation mapping is recovered completely by a chosen-plaintext attack, which significantly outperforms the state-of-the-art cryptanalytic methods.
Abstract: Permutation is a commonly used primitive in multimedia (image/video) encryption schemes, and many permutation-only algorithms have been proposed in recent years for the protection of multimedia data. In permutation-only image ciphers, the entries of the image matrix are scrambled using a permutation mapping matrix which is built by a pseudo-random number generator. The literature on the cryptanalysis of image ciphers indicates that the permutation-only image ciphers are insecure against ciphertext-only attacks and/or known/chosen-plaintext attacks. However, the previous studies have not been able to ensure the correct retrieval of the complete plaintext elements. In this paper, we revisited the previous works on cryptanalysis of permutation-only image encryption schemes and made the cryptanalysis work on chosen-plaintext attacks complete and more efficient. We proved that in all permutation-only image ciphers, regardless of the cipher structure, the correct permutation mapping is recovered completely by a chosen-plaintext attack. To the best of our knowledge, for the first time, this paper gives a chosen-plaintext attack that completely determines the correct plaintext elements using a deterministic method. When the plain-images are of size $M \times N$ and with $L$ different color intensities, the number $n$ of required chosen plain-images to break the permutation-only image encryption algorithm is $n = \lceil \log_{L}(MN) \rceil$ . The complexity of the proposed attack is $O(n \cdot MN)$ , which indicates its feasibility in a polynomial amount of computation time. To validate the performance of the proposed chosen-plaintext attack, numerous experiments were performed on two recently proposed permutation-only image/video ciphers. Both theoretical and experimental results showed that the proposed attack outperforms the state-of-the-art cryptanalytic methods.
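
The bound $n = \lceil \log_{L}(MN) \rceil$ has a simple reading: each chosen plain-image assigns one of $L$ values to every pixel position, so $n$ images give each position an $L$-ary label of length $n$, and $L^n \ge MN$ labels are needed to distinguish all positions. A quick integer-safe calculation of the bound:

```python
def chosen_plaintexts_needed(M, N, L=256):
    """Smallest n with L**n >= M*N, i.e. n = ceil(log_L(M*N)): each of
    the n chosen plain-images contributes one L-ary digit of a label
    that must be unique per pixel position."""
    n, capacity = 0, 1
    while capacity < M * N:
        n += 1
        capacity *= L
    return n

print(chosen_plaintexts_needed(256, 256))         # 2: 256**2 = 65536 labels suffice
print(chosen_plaintexts_needed(512, 512))         # 3: 262144 positions need 3 digits
print(chosen_plaintexts_needed(1024, 1024, L=2))  # 20 for binary images
```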

Journal ArticleDOI
TL;DR: This paper proposes a new PEKS framework named dual-server PEKS (DS-PEKS) and defines a new variant of the smooth projective hash functions (SPHFs), referred to as linear and homomorphic SPHF (LH-SPHF), which can achieve strong security against the inside KGA.
Abstract: Searchable encryption is of increasing interest for protecting the data privacy in secure searchable cloud storage. In this paper, we investigate the security of a well-known cryptographic primitive, namely, public key encryption with keyword search (PEKS) which is very useful in many applications of cloud storage. Unfortunately, it has been shown that the traditional PEKS framework suffers from an inherent insecurity called inside keyword guessing attack (KGA) launched by the malicious server. To address this security vulnerability, we propose a new PEKS framework named dual-server PEKS (DS-PEKS). As another main contribution, we define a new variant of the smooth projective hash functions (SPHFs) referred to as linear and homomorphic SPHF (LH-SPHF). We then show a generic construction of secure DS-PEKS from LH-SPHF. To illustrate the feasibility of our new framework, we provide an efficient instantiation of the general framework from a Decision Diffie–Hellman-based LH-SPHF and show that it can achieve strong security against the inside KGA.

Journal ArticleDOI
TL;DR: A new malware detection method, named ICCDetector, that detects and classifies malwares into five newly defined malware categories, which help understand the relationship between malicious behaviors and ICC characteristics, and provides a systemic analysis of ICC patterns of benign apps and malWares.
Abstract: Most existing mobile malware detection methods (e.g., Kirin and DroidMat) are designed based on the resources required by malwares (e.g., permissions, application programming interface (API) calls, and system calls). These methods capture the interactions between mobile apps and Android system, but ignore the communications among components within or cross application boundaries. As a consequence, the majority of the existing methods are less effective in identifying many typical malwares, which require a few or no suspicious resources, but leverage on inter-component communication (ICC) mechanism when launching stealthy attacks. To address this challenge, we propose a new malware detection method, named ICCDetector. ICCDetector outputs a detection model after training with a set of benign apps and a set of malwares, and employs the trained model for malware detection. The performance of ICCDetector is evaluated with 5264 malwares, and 12026 benign apps. Compared with our benchmark, which is a permission-based method proposed by Peng et al. in 2012 with an accuracy up to 88.2%, ICCDetector achieves an accuracy of 97.4%, roughly 10% higher than the benchmark, with a lower false positive rate of 0.67%, which is only about a half of the benchmark. After manually analyzing false positives, we discover 43 new malwares from the benign data set, and reduce the number of false positives to seven. More importantly, ICCDetector discovers 1708 more advanced malwares than the benchmark, while it misses 220 obvious malwares, which can be easily detected by the benchmark. For the detected malwares, ICCDetector further classifies them into five newly defined malware categories, which help understand the relationship between malicious behaviors and ICC characteristics. We also provide a systemic analysis of ICC patterns of benign apps and malwares.

Journal ArticleDOI
TL;DR: A system model and a security model are formulated for the proposed Re-dtPECK scheme to show that it is an efficient scheme, proved secure in the standard model, with low computation and storage overhead.
Abstract: An electronic health (e-health) record system is a novel application that will bring great convenience to healthcare. The privacy and security of the sensitive personal information are the major concerns of users, which could hinder further development and wide adoption of these systems. The searchable encryption (SE) scheme is a technology that incorporates security protection and favorable operability functions together, and it can play an important role in the e-health record system. In this paper, we introduce a novel cryptographic primitive named conjunctive keyword search with designated tester and timing enabled proxy re-encryption function (Re-dtPECK), which is a kind of time-dependent SE scheme. It enables patients to delegate partial access rights to others to operate search functions over their records in a limited time period. The length of the time period for the delegatee to search and decrypt the delegator’s encrypted documents can be controlled. Moreover, the delegatee can be automatically deprived of the access and search authority after a specified period of effective time. The scheme can also support conjunctive keyword search and resist keyword guessing attacks. In this solution, only the designated tester is able to test the existence of certain keywords. We formulate a system model and a security model for the proposed Re-dtPECK scheme to show that it is an efficient scheme proved secure in the standard model. The comparison and extensive simulations demonstrate that it has a low computation and storage overhead.

Journal ArticleDOI
TL;DR: The proposed selection-channel-aware features can be efficiently computed and provide a substantial detection gain across all the tested algorithms especially for small payloads.
Abstract: All the modern steganographic algorithms for digital images are content adaptive in the sense that they restrict the embedding modifications to complex regions of the cover, which are difficult to model for the steganalyst. The probabilities with which the individual cover elements are modified (the selection channel) are jointly determined by the size of the embedded payload and the content complexity. The most accurate detection of content-adaptive steganography is currently achieved with the detectors built as classifiers trained on cover and stego features that incorporate the knowledge of the selection channel. While the selection-channel-aware features have been proposed for detection of spatial domain steganography, an equivalent for the JPEG domain does not exist. Since modern steganographic algorithms for JPEG images are currently best detected with the features formed by the histograms of the noise residuals split by their JPEG phase, we use such feature sets as a starting point in this paper and extend their design to incorporate the knowledge of the selection channel. This is achieved by accumulating in the histograms a quantity that bounds the expected absolute distortion of the residual. The proposed features can be efficiently computed and provide a substantial detection gain across all the tested algorithms especially for small payloads.

Journal ArticleDOI
TL;DR: A novel EEG-based authentication system is presented, which is based on the rapid serial visual presentation paradigm and uses a knowledge-based approach for authentication.
Abstract: Lately, electroencephalography (EEG)-based authentication has received considerable attention from the scientific community. However, the limited usability of wet EEG electrodes as well as low accuracy for large numbers of users have so far prevented this new technology from becoming commonplace. In this study, a novel EEG-based authentication system is presented, which is based on the rapid serial visual presentation paradigm and uses a knowledge-based approach for authentication. Twenty-nine subjects’ data were recorded and analyzed with wet EEG electrodes as well as dry ones. A true acceptance rate of 100% can be reached for all subjects, with an average required login time of 13.5 s for wet and 27 s for dry electrodes. The average false acceptance rate for the dry electrode setup was estimated to be $3.33 \times 10^{-5}$ .

Journal ArticleDOI
TL;DR: GuardOL is a combined approach using processor and field-programmable gate array (FPGA) to perform online malware detection and aims to capture the malicious behavior (i.e., high-level semantics) of malware.
Abstract: Recently, malware has increasingly become a critical threat to embedded systems, while conventional software solutions, such as antivirus and patches, have not been so successful in defending against ever-evolving and advanced malicious programs. In this paper, we propose a hardware-enhanced architecture, GuardOL, to perform online malware detection. GuardOL is a combined approach using a processor and a field-programmable gate array (FPGA). Our approach aims to capture the malicious behavior (i.e., high-level semantics) of malware. To this end, we first propose the frequency-centric model for feature construction using system call patterns of known malware and benign samples. We then develop a machine learning approach (using a multilayer perceptron) in FPGA to train a classifier using these features. At runtime, the trained classifier is used to classify unknown samples as malware or benign, with early prediction. The experimental results show that our solution achieves high classification accuracy, fast detection, low power consumption, and flexibility for easy functionality upgrades to adapt to new malware samples. One of the main advantages of our design is the support of early prediction: detecting 46% of malware within the first 30% of their execution and 97% of the samples by 100% of their execution, with <3% false positives.
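
The "frequency-centric model" amounts to turning a window of system calls into a normalized count vector before the classifier (an MLP, implemented in FPGA in the paper) sees it. A software toy with invented call names and synthetic traces, purely to show the shape of the pipeline:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

SYSCALLS = ["read", "write", "open", "mmap", "fork", "connect", "exec", "kill"]

def frequency_features(trace):
    """Normalised counts of each monitored system call inside an
    execution window; GuardOL builds a richer hardware variant of this."""
    v = np.array([trace.count(s) for s in SYSCALLS], dtype=float)
    return v / max(v.sum(), 1.0)

# Synthetic data: 'benign' traces are read/write heavy, 'malware' fork/exec heavy.
rng = np.random.default_rng(6)
benign = [list(rng.choice(SYSCALLS[:4], 50)) for _ in range(100)]
mal = [list(rng.choice(SYSCALLS[4:], 50)) for _ in range(100)]
X = [frequency_features(t) for t in benign + mal]
y = [0] * 100 + [1] * 100

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
print(clf.predict([frequency_features(list(rng.choice(SYSCALLS[4:], 50)))]))  # [1]
```

Early prediction falls out naturally: because the feature vector exists after any prefix of the trace, the classifier can be queried long before execution completes.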

Journal ArticleDOI
TL;DR: The security analysis of the proposed AMUA protocol demonstrates that it satisfies the security requirements in practical applications and is provably secure in the novel security model and is more practical for various mobile applications.
Abstract: Rapid advances in wireless communication technologies have paved the way for a wide range of mobile devices to become increasingly ubiquitous and popular. Mobile devices enable anytime, anywhere access to the Internet. The fast growth of many types of mobile services used by various users has made the traditional single-server architecture inefficient in terms of its functional requirements. To ensure the availability of various mobile services, there is a need to deploy multi-server architectures. To ensure the security of various mobile service applications, the anonymous mobile user authentication (AMUA) protocol without online registration using the self-certified public key cryptography (SCPKC) for multi-server architectures was proposed in the past. However, most of the past AMUA solutions suffer from malicious attacks or have unacceptable computation and communication costs. To address these drawbacks, we propose a new AMUA protocol that uses the SCPKC for multi-server architectures. In contrast to the existing AMUA protocols, our proposed AMUA protocol incurs lower computation and communication costs. Compared with two of the latest AMUA protocols, the computation and communication costs of our protocol are at least 74.93% and 37.43% lower, respectively. Moreover, the security analysis of our AMUA protocol demonstrates that it satisfies the security requirements in practical applications and is provably secure in the novel security model. By maintaining security at various levels, our AMUA protocol is more practical for various mobile applications.

Journal ArticleDOI
TL;DR: This paper investigates the specific emitter identification (SEI) problem, which distinguishes different emitters using features generated by the nonlinearity of the power amplifiers of emitters, and three algorithms based on the Hilbert spectrum are proposed that show effectiveness in both single-hop and relaying scenarios, as well as under different channel conditions.
Abstract: In this paper, we investigate the specific emitter identification (SEI) problem, which distinguishes different emitters using features generated by the nonlinearity of the power amplifiers of emitters. SEI is performed by measuring the features representing the individual specifications of emitters and making a decision based on their differences. In this paper, the SEI problem is considered in both single-hop and relaying scenarios, and three algorithms based on the Hilbert spectrum are proposed. The first employs the entropy and the first- and second-order moments as identification features, which describe the uniformity of the Hilbert spectrum. The second uses the correlation coefficient as an identification feature, by evaluating the similarity between different Hilbert spectra. The third exploits Fisher’s discriminant ratio to obtain the identification features by selecting the Hilbert spectrum elements with strong class separability. When compared with the existing literature, we further consider the identification problem in a relaying scenario, in which the fingerprint of different emitters is contaminated by the relay’s fingerprints. Moreover, we explore the identification performance under various channel conditions, such as additive white Gaussian noise, non-Gaussian noise, and fading. Extensive simulation experiments are performed to evaluate the identification performance of the proposed algorithms, and results show their effectiveness in both single-hop and relaying scenarios, as well as under different channel conditions.

Journal ArticleDOI
TL;DR: Experimental results have demonstrated that the CDM-based RDH scheme can achieve the best performance at the moderate-to-high embedding capacity compared with other state-of-the-art schemes.
Abstract: In this paper, a novel code division multiplexing (CDM) algorithm-based reversible data hiding (RDH) scheme is presented. The covert data are denoted by different orthogonal spreading sequences and embedded into the cover image. The original image can be completely recovered after the data have been extracted exactly. The Walsh-Hadamard matrix is employed to generate the orthogonal spreading sequences, by which the data can be overlappingly embedded without interfering with each other, and multilevel data embedding can be utilized to enlarge the embedding capacity. Furthermore, most elements of different spreading sequences mutually cancel when they are overlappingly embedded, which maintains the image in good quality even with a high embedding payload. A location-map-free method is presented in this paper to save more space for data embedding, and the overflow/underflow problem is solved by shrinking the distribution of the image histogram at both ends. This further improves the embedding performance. Experimental results have demonstrated that the CDM-based RDH scheme achieves the best performance at moderate-to-high embedding capacity compared with other state-of-the-art schemes.
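
The multiplexing trick is ordinary CDMA mathematics: Walsh-Hadamard rows are mutually orthogonal, so overlapped embeddings separate exactly under correlation. A sketch of just that spreading/despreading step (the pixel-domain embedding, histogram shrinking, and capacity mechanics of the paper are not reproduced here):

```python
import numpy as np
from scipy.linalg import hadamard

# Rows of a Walsh-Hadamard matrix are mutually orthogonal, so several
# covert bit-streams can be embedded on top of each other and separated
# again by correlation.
H = hadamard(8)                      # 8 orthogonal length-8 sequences of +-1
bit_1, bit_2 = -1, +1                # one data bit per spreading sequence
carrier = bit_1 * H[1] + bit_2 * H[2]   # overlapped embedding

# Despreading: correlate with each sequence; cross-terms cancel exactly.
print(carrier @ H[1] / 8)            # -1.0 -> recovers bit_1
print(carrier @ H[2] / 8)            # +1.0 -> recovers bit_2
print(carrier @ H[3] / 8)            #  0.0 -> unused sequence sees nothing
```

The exact cancellation of cross-terms is also why overlapped embedding degrades image quality so little: contributions from different sequences largely cancel sample by sample.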

Journal ArticleDOI
TL;DR: This paper leverages the third-party auditor (TPA) found in many existing public auditing designs, lets it play the role of the authorized party in this case, and makes it in charge of both the storage auditing and the secure key updates for key-exposure resistance.
Abstract: Key-exposure resistance has always been an important issue for in-depth cyber defence in many security applications. Recently, how to deal with the key exposure problem in the settings of cloud storage auditing has been proposed and studied. To address the challenge, existing solutions all require the client to update his secret keys in every time period, which may inevitably bring new local burdens to the client, especially those with limited computation resources, such as mobile phones. In this paper, we focus on how to make the key updates as transparent as possible for the client and propose a new paradigm called cloud storage auditing with verifiable outsourcing of key updates. In this paradigm, key updates can be safely outsourced to some authorized party, and thus the key-update burden on the client is kept minimal. In particular, we leverage the third party auditor (TPA) in many existing public auditing designs, let it play the role of the authorized party in our case, and make it in charge of both the storage auditing and the secure key updates for key-exposure resistance. In our design, the TPA only needs to hold an encrypted version of the client’s secret key while doing all these burdensome tasks on behalf of the client. The client only needs to download the encrypted secret key from the TPA when uploading new files to the cloud. Besides, our design also equips the client with the capability to further verify the validity of the encrypted secret keys provided by the TPA. All these salient features are carefully designed to make the whole auditing procedure with key-exposure resistance as transparent as possible for the client. We formalize the definition and the security model of this paradigm. The security proof and the performance simulation show that our detailed design instantiations are secure and efficient.

Journal ArticleDOI
TL;DR: This paper revisits the attribute-based data sharing scheme to not only solve the key escrow issue but also improve the expressiveness of attributes, so that the resulting scheme is friendlier to cloud computing applications.
Abstract: Ciphertext-policy attribute-based encryption (CP-ABE) is a very promising encryption technique for secure data sharing in the context of cloud computing. The data owner is allowed to fully control the access policy associated with the data to be shared. However, CP-ABE suffers from a potential security risk known as the key escrow problem, whereby the secret keys of users have to be issued by a trusted key authority. Besides, most of the existing CP-ABE schemes cannot support attributes with arbitrary state. In this paper, we revisit the attribute-based data sharing scheme to not only solve the key escrow issue but also improve the expressiveness of attributes, so that the resulting scheme is friendlier to cloud computing applications. We propose an improved two-party key issuing protocol that guarantees that neither the key authority nor the cloud service provider can compromise the whole secret key of a user individually. Moreover, we introduce the concept of attributes with weights, provided to enhance the expressiveness of attributes, which can not only extend the expression from binary to arbitrary state but also lighten the complexity of the access policy. Therefore, both the storage cost and the encryption complexity for a ciphertext are reduced. The performance analysis and the security proof show that the proposed scheme is able to achieve efficient and secure data sharing in cloud computing.

Journal ArticleDOI
TL;DR: The results indicate that CS is in general not secure according to cryptographic standards, but may provide a useful built-in data obfuscation layer.
Abstract: In this paper, the security of the compressed sensing (CS) framework as a form of data confidentiality is analyzed. Two important properties of one-time random linear measurements acquired using a Gaussian independent identically distributed matrix are outlined: 1) the measurements reveal only the energy of the sensed signal and 2) only the energy of the measurements leaks information about the signal. An important consequence of the above facts is that CS provides information theoretic secrecy in a particular setting. Namely, a simple strategy based on the normalization of the Gaussian measurements achieves, at least in theory, perfect secrecy, enabling the use of CS as an additional security layer in privacy preserving applications. In the generic setting in which CS does not provide information theoretic secrecy, two alternative security notions linked to the difficulty of estimating the energy of the signal and distinguishing equal-energy signals are introduced. Useful bounds on the mean square error of any possible estimator and the probability of error of any possible detector are provided and compared with the simulations. The results indicate that CS is in general not secure according to cryptographic standards, but may provide a useful built-in data obfuscation layer.
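
Both outlined properties are easy to see numerically: the distribution of Gaussian random measurements depends on the signal only through its energy, and normalizing the measurement vector removes that remaining leak. A minimal numpy sketch of the normalization strategy (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 256, 64
x = rng.standard_normal(n) * 5.0        # secret signal with large energy

Phi = rng.standard_normal((m, n))       # one-time Gaussian i.i.d. sensing matrix
y = Phi @ x                             # random linear measurements

# Property 1: the measurements' distribution depends on x only through
# its energy, so ||y||^2 is the only leak. Normalising the measurement
# vector erases that last leak (the paper's route to perfect secrecy).
y_secure = y / np.linalg.norm(y)

print(np.linalg.norm(x) ** 2)           # signal energy: what plain CS leaks
print(np.linalg.norm(y_secure))         # always 1.0: energy hidden
```

The receiver who knows Phi loses only the signal's scale, which is why this works as a built-in obfuscation layer rather than a general-purpose cipher.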

Journal ArticleDOI
TL;DR: Experimental results show the feasibility and effectiveness of the proposed approach to detect the hidden data exchange between colluding applications, based on artificial intelligence tools, such as neural networks and decision trees.
Abstract: Modern malware uses advanced techniques to hide from static and dynamic analysis tools. To achieve stealthiness when attacking a mobile device, an effective approach is the use of a covert channel built by two colluding applications to exchange data locally. Since this process is tightly coupled with the used hiding method, its detection is a challenging task, also worsened by the very low transmission rates. As a consequence, it is important to investigate how to reveal the presence of malicious software using general indicators, such as the energy consumed by the device. In this perspective, this paper aims to spot malware covertly exchanging data using two detection methods based on artificial intelligence tools, such as neural networks and decision trees. To verify their effectiveness, seven covert channels have been implemented and tested over a measurement framework using Android devices. Experimental results show the feasibility and effectiveness of the proposed approach to detect the hidden data exchange between colluding applications.