
Showing papers in "IEEE Transactions on Information Forensics and Security in 2013"


Journal ArticleDOI
TL;DR: A classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touch screen is proposed.
Abstract: We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smart phone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test how this behavioral pattern exhibits consistency over time, we collected touch data from users interacting with a smart phone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touch screen. The classifier achieves a median equal error rate of 0% for intrasession authentication, 2%-3% for intersession authentication, and below 4% when the authentication test was carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as a part of a multimodal biometric authentication system.

804 citations
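
The paper above reports median equal error rates for intrasession, intersession, and one-week-later authentication. As a minimal, self-contained illustration (not the authors' evaluation code), the sketch below estimates an EER from synthetic genuine and impostor score distributions by sweeping a decision threshold; the score distributions are assumptions for demonstration only.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the EER by sweeping a threshold over all observed scores.

    Higher scores are assumed to indicate the enrolled (genuine) user.
    """
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(genuine_scores < t)    # genuine attempts rejected
        far = np.mean(impostor_scores >= t)  # impostor attempts accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy example with synthetic score distributions (illustrative only).
rng = np.random.default_rng(0)
genuine = rng.normal(1.0, 0.3, 500)
impostor = rng.normal(0.0, 0.3, 500)
print(f"EER ~ {equal_error_rate(genuine, impostor):.3f}")
```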


Journal ArticleDOI
TL;DR: This paper proposes a novel method by reserving room before encryption with a traditional RDH algorithm, and thus it is easy for the data hider to reversibly embed data in the encrypted image.
Abstract: Recently, more and more attention has been paid to reversible data hiding (RDH) in encrypted images, since it maintains the excellent property that the original cover can be losslessly recovered after the embedded data are extracted while the confidentiality of the image content is protected. All previous methods embed data by reversibly vacating room from the encrypted images, which may be subject to errors on data extraction and/or image restoration. In this paper, we propose a novel method that reserves room before encryption with a traditional RDH algorithm, making it easy for the data hider to reversibly embed data in the encrypted image. The proposed method achieves real reversibility, that is, data extraction and image recovery are free of any error. Experiments show that this novel method can embed payloads more than 10 times as large as those of previous methods for the same image quality, e.g., at PSNR = 40 dB.

610 citations


Journal ArticleDOI
TL;DR: Numerical modeling attacks on several proposed strong physical unclonable functions (PUFs) are discussed, leading to new design requirements for secure electrical Strong PUFs, and will be useful to PUF designers and attackers alike.
Abstract: We discuss numerical modeling attacks on several proposed strong physical unclonable functions (PUFs). Given a set of challenge-response pairs (CRPs) of a Strong PUF, the goal of our attacks is to construct a computer algorithm which behaves indistinguishably from the original PUF on almost all CRPs. If successful, this algorithm can subsequently impersonate the Strong PUF, and can be cloned and distributed arbitrarily. It breaks the security of any applications that rest on the Strong PUF's unpredictability and physical unclonability. Our method is less relevant for other PUF types such as Weak PUFs. The Strong PUFs that we could attack successfully include standard Arbiter PUFs of essentially arbitrary sizes, and XOR Arbiter PUFs, Lightweight Secure PUFs, and Feed-Forward Arbiter PUFs up to certain sizes and complexities. We also investigate the hardness of certain Ring Oscillator PUF architectures in typical Strong PUF applications. Our attacks are based upon various machine learning techniques, including a specially tailored variant of logistic regression and evolution strategies. Our results are mostly obtained on CRPs from numerical simulations that use established digital models of the respective PUFs. For a subset of the considered PUFs, namely standard Arbiter PUFs and XOR Arbiter PUFs, we also present proofs of concept on silicon data from both FPGAs and ASICs. Over four million silicon CRPs are used in this process. The performance on silicon CRPs is very close to that on simulated CRPs, confirming a conjecture from earlier versions of this work. Our findings lead to new design requirements for secure electrical Strong PUFs, and will be useful to PUF designers and attackers alike.

463 citations
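
For the modeling-attack paper above, the hedged sketch below trains a logistic-regression model on CRPs from a simulated additive-delay Arbiter PUF using the standard parity feature transform. The stage count, CRP count, and training split are illustrative assumptions, not the authors' setup, which also covers XOR, Lightweight Secure, and Feed-Forward variants as well as silicon data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_stages, n_crps = 64, 20000

# Simulated Arbiter PUF: additive linear delay model with a random weight vector.
w_true = rng.normal(size=n_stages + 1)

def parity_features(challenges):
    # phi_i = product of (1 - 2*c_j) for j >= i, plus a constant bias feature.
    signs = 1 - 2 * challenges                      # {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

challenges = rng.integers(0, 2, size=(n_crps, n_stages))
X = parity_features(challenges)
y = (X @ w_true > 0).astype(int)                    # responses of the simulated PUF

# The "attack": learn a model of the PUF from a training subset of CRPs.
clf = LogisticRegression(max_iter=1000).fit(X[:15000], y[:15000])
print("prediction accuracy on unseen CRPs:", clf.score(X[15000:], y[15000:]))
```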


Journal ArticleDOI
TL;DR: This paper presents an information-theoretic framework that promises an analytical model guaranteeing tight bounds of how much utility is possible for a given level of privacy and vice-versa.
Abstract: Ensuring the usefulness of electronic data sources while providing necessary privacy guarantees is an important unsolved problem. This problem drives the need for an analytical framework that can quantify the privacy of personally identifiable information while still providing a quantifiable benefit (utility) to multiple legitimate information consumers. This paper presents an information-theoretic framework that promises an analytical model guaranteeing tight bounds of how much utility is possible for a given level of privacy and vice-versa. Specific contributions include: 1) stochastic data models for both categorical and numerical data; 2) utility-privacy tradeoff regions and the encoding (sanitization) schemes achieving them for both classes and their practical relevance; and 3) modeling of prior knowledge at the user and/or data source and optimal encoding schemes for both cases.

393 citations


Journal ArticleDOI
TL;DR: This paper gives the formal model of ABE with verifiable outsourced decryption and proposes a concrete scheme that is both secure and verifiable, without relying on random oracles, and shows an implementation of the scheme and results of performance measurements, which indicate a significant reduction in the computing resources imposed on users.
Abstract: Attribute-based encryption (ABE) is a public-key-based one-to-many encryption that allows users to encrypt and decrypt data based on user attributes. A promising application of ABE is flexible access control of encrypted data stored in the cloud, using access policies and ascribed attributes associated with private keys and ciphertexts. One of the main efficiency drawbacks of the existing ABE schemes is that decryption involves expensive pairing operations and the number of such operations grows with the complexity of the access policy. Recently, Green et al. proposed an ABE system with outsourced decryption that largely eliminates the decryption overhead for users. In such a system, a user provides an untrusted server, say a cloud service provider, with a transformation key that allows the cloud to translate any ABE ciphertext satisfied by that user's attributes or access policy into a simple ciphertext, and it only incurs a small computational overhead for the user to recover the plaintext from the transformed ciphertext. Security of an ABE system with outsourced decryption ensures that an adversary (including a malicious cloud) will not be able to learn anything about the encrypted message; however, it does not guarantee the correctness of the transformation done by the cloud. In this paper, we consider a new requirement of ABE with outsourced decryption: verifiability. Informally, verifiability guarantees that a user can efficiently check if the transformation is done correctly. We give the formal model of ABE with verifiable outsourced decryption and propose a concrete scheme. We prove that our new scheme is both secure and verifiable, without relying on random oracles. Finally, we show an implementation of our scheme and results of performance measurements, which indicate a significant reduction in the computing resources imposed on users.

385 citations


Journal ArticleDOI
TL;DR: This paper proposes a role-based encryption (RBE) scheme that integrates the cryptographic techniques with RBAC, and presents a secure RBE-based hybrid cloud storage architecture that allows an organization to store data securely in a public cloud, while maintaining the sensitive information related to the organization's structure in a private cloud.
Abstract: With the rapid developments occurring in cloud computing and services, there has been a growing trend to use the cloud for large-scale data storage. This has raised the important security issue of how to control and prevent unauthorized access to data stored in the cloud. One well known access control model is the role-based access control (RBAC), which provides flexible controls and management by having two mappings, users to roles and roles to privileges on data objects. In this paper, we propose a role-based encryption (RBE) scheme that integrates the cryptographic techniques with RBAC. Our RBE scheme allows RBAC policies to be enforced for the encrypted data stored in public clouds. Based on the proposed scheme, we present a secure RBE-based hybrid cloud storage architecture that allows an organization to store data securely in a public cloud, while maintaining the sensitive information related to the organization's structure in a private cloud. We describe a practical implementation of the proposed RBE-based architecture and discuss the performance results. We demonstrate that users only need to keep a single key for decryption, and system operations are efficient regardless of the complexity of the role hierarchy and user membership in the system.

353 citations


Journal ArticleDOI
TL;DR: A novel reversible data hiding (RDH) scheme based on difference-pair-mapping (DPM) is proposed; it exploits image redundancy more effectively than one-dimensional histogram-based methods and outperforms some state-of-the-art RDH works.
Abstract: In this paper, based on two-dimensional difference-histogram modification, a novel reversible data hiding (RDH) scheme is proposed by using difference-pair-mapping (DPM). First, by considering each pixel-pair and its context, a sequence consisting of pairs of difference values is computed. Then, a two-dimensional difference-histogram is generated by counting the frequency of the resulting difference-pairs. Finally, reversible data embedding is implemented according to a specifically designed DPM. Here, the DPM is an injective mapping defined on difference-pairs. It is a natural extension of the expansion embedding and shifting techniques used in current histogram-based RDH methods. By the proposed approach, compared with the conventional one-dimensional difference-histogram and one-dimensional prediction-error-histogram-based RDH methods, the image redundancy can be better exploited and an improved embedding performance is achieved. Moreover, a pixel-pair-selection strategy is also adopted to preferentially use the pixel-pairs located in smooth image regions to embed data. This can further enhance the embedding performance. Experimental results demonstrate that the proposed scheme outperforms some state-of-the-art RDH works.

292 citations
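
The abstract above describes computing difference-pairs from pixel-pairs and counting them in a two-dimensional histogram. The sketch below illustrates only that first stage under simplifying assumptions (horizontally adjacent pairs, a single right-hand context pixel); the paper's context modeling, the DPM itself, and the pixel-pair-selection strategy are not reproduced.

```python
import numpy as np

def difference_pair_histogram(img):
    """Build a 2-D histogram of difference-pairs from horizontally adjacent pixel pairs.

    For each pixel pair (x1, x2) with a right-hand context pixel x3, the pair
    of differences (d1, d2) = (x2 - x1, x3 - x2) is counted. This is a
    simplified illustration; the paper's context and mapping are richer.
    """
    img = img.astype(np.int32)
    x1, x2, x3 = img[:, 0:-2:2], img[:, 1:-1:2], img[:, 2::2]
    d1, d2 = (x2 - x1).ravel(), (x3 - x2).ravel()
    hist, d1_edges, d2_edges = np.histogram2d(d1, d2, bins=np.arange(-255.5, 256.5))
    return hist, d1_edges, d2_edges

rng = np.random.default_rng(0)
toy = rng.integers(0, 256, size=(64, 64))
hist, _, _ = difference_pair_histogram(toy)
print("most frequent difference-pair count:", int(hist.max()))
```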


Journal ArticleDOI
TL;DR: This paper constructs a new multiauthority CP-ABE scheme with efficient decryption, and design an efficient attribute revocation method that can achieve both forward security and backward security, and proposes an extensive data access control scheme (EDAC-MACS), which is secure under weaker security assumptions.
Abstract: Data access control is an effective way to ensure data security in the cloud. However, due to data outsourcing and untrusted cloud servers, the data access control becomes a challenging issue in cloud storage systems. Existing access control schemes are no longer applicable to cloud storage systems, because they either produce multiple encrypted copies of the same data or require a fully trusted cloud server. Ciphertext-policy attribute-based encryption (CP-ABE) is a promising technique for access control of encrypted data. However, due to the inefficiency of decryption and revocation, existing CP-ABE schemes cannot be directly applied to construct a data access control scheme for multiauthority cloud storage systems, where users may hold attributes from multiple authorities. In this paper, we propose data access control for multiauthority cloud storage (DAC-MACS), an effective and secure data access control scheme with efficient decryption and revocation. Specifically, we construct a new multiauthority CP-ABE scheme with efficient decryption, and also design an efficient attribute revocation method that can achieve both forward security and backward security. We further propose an extensive data access control scheme (EDAC-MACS), which is secure under weaker security assumptions.

257 citations


Journal ArticleDOI
TL;DR: This work presents the first empirical study and evaluation of the evasion tactics utilized by Twitter spammers, and designs new detection features that remain effective against them.
Abstract: To date, as one of the most popular online social networks (OSNs), Twitter is paying its dues as more and more spammers set their sights on this microblogging site. Twitter spammers can achieve their malicious goals such as sending spam, spreading malware, hosting botnet command and control (C&C) channels, and launching other underground illicit activities. Due to the significance and indispensability of detecting and suspending those spam accounts, many researchers along with the engineers at Twitter Inc. have devoted themselves to keeping Twitter a spam-free online community. Most of the existing studies utilize machine learning techniques to detect Twitter spammers. “While the priest climbs a post, the devil climbs ten.” Twitter spammers are evolving to evade existing detection features. In this paper, we first make a comprehensive and empirical analysis of the evasion tactics utilized by Twitter spammers. We further design several new detection features to detect more Twitter spammers. In addition, to deeply understand the effectiveness and difficulties of using machine learning features to detect spammers, we analyze the robustness of 24 detection features that are commonly utilized in the literature as well as our proposed ones. Through our experiments, we show that our newly designed features are much more effective at detecting (even evasive) Twitter spammers. According to our evaluation, while keeping an even lower false positive rate, the detection rate using our new feature set is also significantly higher than that of existing work. To the best of our knowledge, this work is the first empirical study and evaluation of the effect of evasion tactics utilized by Twitter spammers and is a valuable supplement to this line of research.

246 citations


Journal ArticleDOI
TL;DR: Zernike moments of small image blocks are used in this article to detect duplicated image regions by exploiting rotation invariance properties to reliably unveil duplicated regions after arbitrary rotations.
Abstract: This paper proposes a forensic technique to localize duplicated image regions based on Zernike moments of small image blocks. We exploit rotation invariance properties to reliably unveil duplicated regions after arbitrary rotations. We devise a novel block matching procedure based on locality sensitive hashing and reduce false positives by examining the moments' phase. A massive experimental test setup benchmarks our algorithm against state-of-the-art methods under various perspectives, examining both pixel-level localization and image-level detection performance. By taking signal characteristics into account and distinguishing between “textured” and “smooth” duplicated regions, we find that the proposed method outperforms prior art in particular when duplicated regions are smooth. Experiments indicate high robustness against JPEG compression, blurring, additive white Gaussian noise, and moderate scaling.

245 citations


Journal ArticleDOI
TL;DR: An alternative statistical representation for feature-based steganalysis, the projection spatial rich model (PSRM), is proposed: neighboring residual samples are projected onto random vectors and the histograms of the projections are used as features in place of co-occurrence matrices.
Abstract: The traditional way to represent digital images for feature-based steganalysis is to compute a noise residual from the image using a pixel predictor and then form the feature as a sample joint probability distribution of neighboring quantized residual samples, the so-called co-occurrence matrix. In this paper, we propose an alternative statistical representation: instead of forming the co-occurrence matrix, we project neighboring residual samples onto a set of random vectors and take the first-order statistic (histogram) of the projections as the feature. When multiple residuals are used, this representation is called the projection spatial rich model (PSRM). On selected modern steganographic algorithms embedding in the spatial, JPEG, and side-informed JPEG domains, we demonstrate that the PSRM achieves more accurate detection as well as a substantially improved performance versus dimensionality trade-off compared with state-of-the-art feature sets.
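
As a rough illustration of the projection idea described above (not the authors' PSRM feature set), the sketch below computes a simple first-order residual, projects residual neighborhoods onto random kernels, and collects histograms of the quantized projections; the pixel predictor, kernel size, quantization step, and bin range are all assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def projection_residual_features(img, n_projections=8, patch=3, q=1.0,
                                 bins=np.arange(-5.5, 6.5), seed=0):
    """Toy variant of projection-based residual features.

    The residual is a horizontal first-order difference (one of many possible
    pixel predictors); each residual neighborhood is projected onto random
    unit-norm kernels and a histogram of the quantized projections is kept.
    """
    rng = np.random.default_rng(seed)
    residual = np.diff(img.astype(np.float64), axis=1)   # x[i, j+1] - x[i, j]
    feats = []
    for _ in range(n_projections):
        kernel = rng.normal(size=(patch, patch))
        kernel /= np.linalg.norm(kernel)
        proj = convolve2d(residual, kernel, mode='valid')
        quantized = np.clip(np.round(proj / q), bins[0] + 0.5, bins[-1] - 0.5)
        hist, _ = np.histogram(quantized, bins=bins)
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

rng = np.random.default_rng(1)
toy = rng.integers(0, 256, size=(128, 128))
print("feature vector length:", projection_residual_features(toy).shape[0])
```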

Journal ArticleDOI
TL;DR: This paper proposes a forgery detection method that exploits subtle inconsistencies in the color of the illumination of images that is applicable to images containing two or more people and requires no expert interaction for the tampering decision.
Abstract: For decades, photographs have been used to document space-time events and they have often served as evidence in courts. Although photographers are able to create composites of analog pictures, this process is very time consuming and requires expert knowledge. Today, however, powerful digital image editing software makes image modifications straightforward. This undermines our trust in photographs and, in particular, questions pictures as evidence for real-world events. In this paper, we analyze one of the most common forms of photographic manipulation, known as image composition or splicing. We propose a forgery detection method that exploits subtle inconsistencies in the color of the illumination of images. Our approach is machine-learning-based and requires minimal user interaction. The technique is applicable to images containing two or more people and requires no expert interaction for the tampering decision. To achieve this, we incorporate information from physics- and statistical-based illuminant estimators on image regions of similar material. From these illuminant estimates, we extract texture- and edge-based features which are then provided to a machine-learning approach for automatic decision-making. The classification performance using an SVM meta-fusion classifier is promising. It yields detection rates of 86% on a new benchmark dataset consisting of 200 images, and 83% on 50 images that were collected from the Internet.

Journal ArticleDOI
TL;DR: A penalty function method that incorporates the rank-1 constraint into the objective function is proposed, together with an efficient iterative algorithm in which each iteration is a convex SDP problem that can be solved with the interior point method.
Abstract: In this paper, we propose a hybrid cooperative beamforming and jamming scheme to enhance the physical-layer security of a single-antenna-equipped two-way relay network in the presence of an eavesdropper. The basic idea is that in both cooperative transmission phases, some intermediate nodes help to relay signals to the legitimate destination adopting distributed beamforming, while the remaining nodes jam the eavesdropper simultaneously, which takes the data transmissions in both phases under protection. Two different schemes are proposed, with and without the instantaneous channel state information of the eavesdropper, respectively, and both are subject to the more practical individual power constraint of each cooperative node. Under the general channel model, it is shown that both problems can be transformed into a semi-definite programming (SDP) problem with an additional rank-1 constraint. The current state-of-the-art technique for handling such a problem is semi-definite relaxation (SDR) combined with randomization. In this paper, however, we propose a penalty function method that incorporates the rank-1 constraint into the objective function. Although the resulting problem is not convex, we develop an efficient iterative algorithm to solve it: each iteration is a convex SDP problem, and thus can be efficiently solved using the interior point method. When the channels are reciprocal, such as in TDD mode, we show that the problems become second-order cone programming ones. Numerical evaluation results are provided and analyzed to show the properties and efficiency of the proposed hybrid security scheme, and also demonstrate that our optimization algorithms outperform the SDR technique.
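
To make the penalty idea concrete, here is a hedged sketch, under assumed notation rather than the paper's exact formulation, of how a rank-1 constraint on a beamforming matrix can be folded into the objective and handled iteratively.

```latex
% With W = w w^H >= 0 and feasible set C, rank(W) = 1 holds exactly when
% tr(W) - lambda_max(W) = 0, so the rank-constrained SDP
%     min_{W >= 0, W in C} f(W)   s.t. rank(W) = 1
% can be replaced by the penalized problem below. It is a difference-of-convex
% program: linearizing lambda_max(W) around the current iterate's principal
% eigenvector makes each iteration a convex SDP, matching the iterative scheme
% described in the abstract.
\begin{equation}
  \min_{\mathbf{W}\succeq 0,\ \mathbf{W}\in\mathcal{C}}\;
      f(\mathbf{W}) + \rho\bigl(\operatorname{tr}(\mathbf{W})
      - \lambda_{\max}(\mathbf{W})\bigr),
  \qquad \rho > 0 .
\end{equation}
```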

Journal ArticleDOI
TL;DR: A secure scheme that can achieve the security and privacy requirements, and overcome the weaknesses of SPECS is provided, and the efficiency merits of the scheme are shown through performance evaluations in terms of verification delay and transmission overhead.
Abstract: The security and privacy preservation issues are prerequisites for vehicular ad hoc networks. Recently, secure and privacy enhancing communication schemes (SPECS) was proposed and focused on intervehicle communications. SPECS provided a software-based solution to satisfy the privacy requirement and gave lower message overhead and a higher success rate than previous solutions in the message verification phase. SPECS also presented the first group communication protocol to allow vehicles to authenticate and securely communicate with others in a group of known vehicles. Unfortunately, we find out that SPECS is vulnerable to an impersonation attack. SPECS has a flaw such that a malicious vehicle can force arbitrary vehicles to broadcast fake messages to other vehicles, or even a malicious vehicle in the group can counterfeit another group member to send fake messages securely among themselves. In this paper, we provide a secure scheme that can achieve the security and privacy requirements, and overcome the weaknesses of SPECS. Moreover, we show the efficiency merits of our scheme through performance evaluations in terms of verification delay and transmission overhead.

Journal ArticleDOI
TL;DR: A new reversible watermarking scheme that, for capacities smaller than 0.4 bpp, can insert more data with lower distortion than existing schemes, achieving a peak signal-to-noise ratio (PSNR) about 1-2 dB greater than the scheme of Hwang, currently the most efficient approach.
Abstract: In this paper, we propose a new reversible watermarking scheme. One first contribution is a histogram shifting modulation which adaptively takes care of the local specificities of the image content. By applying it to the image prediction-errors and by considering their immediate neighborhood, the scheme we propose inserts data in textured areas where other methods fail to do so. Furthermore, our scheme makes use of a classification process for identifying parts of the image that can be watermarked with the most suited reversible modulation. This classification is based on a reference image derived from the image itself, a prediction of it, which has the property of being invariant to the watermark insertion. In that way, the watermark embedder and extractor remain synchronized for message extraction and image reconstruction. The experiments conducted so far, on some natural images and on medical images from different modalities, show that for capacities smaller than 0.4 bpp, our method can insert more data with lower distortion than any existing schemes. For the same capacity, we achieve a peak signal-to-noise ratio (PSNR) about 1-2 dB greater than with the scheme of Hwang, currently the most efficient approach.
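
As background for the histogram-shifting modulation mentioned above, the sketch below implements a minimal 1-D prediction-error histogram-shifting embedder and extractor (expansion at the zero bin, shifting of positive errors). The paper's adaptive, classification-driven scheme and overflow handling are deliberately omitted, and the function names are hypothetical.

```python
import numpy as np

def hs_embed(pixels, bits):
    """Minimal prediction-error histogram shifting on a 1-D pixel sequence.

    The original left neighbor serves as the predictor. Prediction errors equal
    to 0 are expanded to carry one payload bit; positive errors are shifted by
    +1 so the mapping stays invertible. Overflow handling is omitted.
    """
    x = pixels.astype(np.int32)
    out, k = x.copy(), 0
    for i in range(1, len(x)):
        e = x[i] - x[i - 1]
        if e == 0 and k < len(bits):
            out[i] = x[i - 1] + bits[k]   # expand the zero bin: embed 0 or 1
            k += 1
        elif e > 0:
            out[i] = x[i] + 1             # shift positive errors out of the way
    return out, k

def hs_extract(watermarked):
    """Recover the payload bits and the original pixels (inverse of hs_embed)."""
    w = watermarked.astype(np.int32)
    restored, bits = w.copy(), []
    for i in range(1, len(w)):
        e = w[i] - restored[i - 1]        # predictor is the already-restored neighbor
        if e in (0, 1):
            bits.append(e)
            restored[i] = restored[i - 1]
        elif e > 1:
            restored[i] = w[i] - 1
    return np.array(bits), restored

original = np.array([100, 100, 101, 100, 100, 102, 100])
payload = [1, 0, 1]
marked, n = hs_embed(original, payload)
recovered_bits, recovered = hs_extract(marked)
assert list(recovered) == list(original) and list(recovered_bits[:n]) == payload[:n]
```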

Journal ArticleDOI
TL;DR: A new, robust median filtering forensic technique that operates by analyzing the statistical properties of the median filter residual (MFR), which is defined as the difference between an image in question and a median filtered version of itself.
Abstract: In order to verify the authenticity of digital images, researchers have begun developing digital forensic techniques to identify image editing. One editing operation that has recently received increased attention is median filtering. While several median filtering detection techniques have recently been developed, their performance is degraded by JPEG compression. These techniques suffer similar degradations in performance when a small window of the image is analyzed, as is done in localized filtering or cut-and-paste detection, rather than the image as a whole. In this paper, we propose a new, robust median filtering forensic technique. It operates by analyzing the statistical properties of the median filter residual (MFR), which we define as the difference between an image in question and a median filtered version of itself. To capture the statistical properties of the MFR, we fit it to an autoregressive (AR) model. We then use the AR coefficients as features for median filter detection. We test the effectiveness of our proposed median filter detection techniques through a series of experiments. These results show that our proposed forensic technique can achieve important performance gains over existing methods, particularly at low false-positive rates, with a very small dimension of features.
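
A hedged sketch of the residual-plus-AR idea described above: the median filter residual is computed with SciPy and a 1-D autoregressive model is fitted to row scans by ordinary least squares. The paper uses a richer AR formulation; the window size and model order here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def mfr_ar_features(img, order=10, window=3):
    """Median filter residual (MFR) followed by a row-wise AR(order) fit.

    The MFR is the difference between the image and its median-filtered
    version; the AR coefficients, estimated jointly over all rows, serve as a
    compact feature vector for median-filtering detection.
    """
    img = img.astype(np.float64)
    residual = img - median_filter(img, size=window)
    X, y = [], []
    for row in residual:
        for t in range(order, len(row)):
            X.append(row[t - order:t])
            y.append(row[t])
    coeffs, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)
    return coeffs

rng = np.random.default_rng(0)
toy = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
print("AR feature vector:", np.round(mfr_ar_features(toy), 3))
```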

Journal ArticleDOI
TL;DR: The impact of antenna correlation on the secrecy performance of multiple-input multiple-output wiretap channels is analyzed, where the transmitter employs transmit antenna selection while the receiver and eavesdropper perform maximal-ratio combining with arbitrary correlation.
Abstract: We analyze the impact of antenna correlation on the secrecy performance of multiple-input multiple-output wiretap channels where the transmitter employs transmit antenna selection while the receiver and eavesdropper perform maximal-ratio combining with arbitrary correlation. New closed-form expressions are derived for the exact and asymptotic (high signal-to-noise ratio in the transmitter-receiver channel) secrecy outage probability.

Journal ArticleDOI
TL;DR: A robust hashing method is developed for detecting image forgery including removal, insertion, and replacement of objects, and abnormal color modification, and for locating the forged area.
Abstract: A robust hashing method is developed for detecting image forgery including removal, insertion, and replacement of objects, and abnormal color modification, and for locating the forged area. Both global and local features are used in forming the hash sequence. The global features are based on Zernike moments representing luminance and chrominance characteristics of the image as a whole. The local features include position and texture information of salient regions in the image. Secret keys are introduced in feature extraction and hash construction. While being robust against content-preserving image processing, the hash is sensitive to malicious tampering and, therefore, applicable to image authentication. The hash of a test image is compared with that of a reference image. When the hash distance is greater than a threshold τ1 and less than τ2, the received image is judged as a fake. By decomposing the hashes, the type of image forgery and location of forged areas can be determined. Probability of collision between hashes of different images approaches zero. Experimental results are presented to show effectiveness of the method.
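
The two-threshold decision rule described above can be sketched as follows; the threshold values, distance value, and verdict strings are purely illustrative.

```python
def classify_image(hash_distance, tau1, tau2):
    """Two-threshold decision on the distance between test and reference hashes.

    Distances at or below tau1 are treated as content-preserving variants of
    the same image, distances between tau1 and tau2 as tampered (fake)
    versions, and larger distances as different images. Threshold values are
    application-dependent and chosen empirically.
    """
    if hash_distance <= tau1:
        return "same image (possibly content-preserving processing)"
    if hash_distance < tau2:
        return "fake (tampered version of the reference)"
    return "different image"

print(classify_image(0.12, tau1=0.2, tau2=0.6))
print(classify_image(0.35, tau1=0.2, tau2=0.6))
```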

Journal ArticleDOI
TL;DR: A lightweight and dependable trust system (LDTS) is proposed for clustered WSNs, together with a self-adaptive weighted method for trust aggregation at the CH level that overcomes the limitations of traditional weighting methods for trust factors.
Abstract: The resource efficiency and dependability of a trust system are the most fundamental requirements for any wireless sensor network (WSN). However, existing trust systems developed for WSNs are incapable of satisfying these requirements because of their high overhead and low dependability. In this work, we propose a lightweight and dependable trust system (LDTS) for WSNs that employ clustering algorithms. First, a lightweight trust decision-making scheme is proposed based on the nodes' identities (roles) in the clustered WSNs, which is suitable for such WSNs because it facilitates energy saving. By canceling feedback between cluster members (CMs) or between cluster heads (CHs), this approach can significantly improve system efficiency while reducing the effect of malicious nodes. More importantly, considering that CHs take on large amounts of data forwarding and communication tasks, a dependability-enhanced trust evaluating approach is defined for cooperation between CHs. This approach can effectively reduce networking consumption while limiting the influence of malicious, selfish, and faulty CHs. Moreover, a self-adaptive weighted method is defined for trust aggregation at the CH level. This approach overcomes the limitations of traditional weighting methods for trust factors, in which weights are assigned subjectively. Theory as well as simulation results show that LDTS demands less memory and communication overhead compared with the current typical trust systems for WSNs.

Journal ArticleDOI
TL;DR: This work adds traceability to an existing expressive, efficient, and secure CP-ABE scheme without weakening its security or setting any particular trade-off on its performance.
Abstract: In a ciphertext-policy attribute-based encryption (CP-ABE) system, decryption keys are defined over attributes shared by multiple users. Given a decryption key, it may not be always possible to trace to the original key owner. As a decryption privilege could be possessed by multiple users who own the same set of attributes, malicious users might be tempted to leak their decryption privileges to some third parties, for financial gain for example, without the risk of being caught. This problem severely limits the applications of CP-ABE. Several traceable CP-ABE (T-CP-ABE) systems have been proposed to address this problem, but the expressiveness of policies in those systems is limited, where only AND gates with wildcards are currently supported. In this paper we propose a new T-CP-ABE system that supports policies expressed in any monotone access structures. Also, the proposed system is as efficient and secure as one of the best (non-traceable) CP-ABE systems currently available, that is, this work adds traceability to an existing expressive, efficient, and secure CP-ABE scheme without weakening its security or setting any particular trade-off on its performance.

Journal ArticleDOI
TL;DR: This paper proposes a component-based representation (CBR) approach to measure the similarity between a composite sketch and mugshot photograph and believes its prototype system will be of great value to law enforcement agencies in apprehending suspects in a timely fashion.
Abstract: The problem of automatically matching composite sketches to facial photographs is addressed in this paper. Previous research on sketch recognition focused on matching sketches drawn by professional artists who either looked directly at the subjects (viewed sketches) or used a verbal description of the subject's appearance as provided by an eyewitness (forensic sketches). Unlike sketches hand drawn by artists, composite sketches are synthesized using one of the several facial composite software systems available to law enforcement agencies. We propose a component-based representation (CBR) approach to measure the similarity between a composite sketch and mugshot photograph. Specifically, we first automatically detect facial landmarks in composite sketches and face photos using an active shape model (ASM). Features are then extracted for each facial component using multiscale local binary patterns (MLBPs), and per component similarity is calculated. Finally, the similarity scores obtained from individual facial components are fused together, yielding a similarity score between a composite sketch and a face photo. Matching performance is further improved by filtering the large gallery of mugshot images using gender information. Experimental results on matching 123 composite sketches against two galleries with 10,123 and 1,316 mugshots show that the proposed method achieves promising performance (rank-100 accuracies of 77.2% and 89.4%, respectively) compared to a leading commercial face recognition system (rank-100 accuracies of 22.8% and 52.0%) and densely sampled MLBP on holistic faces (rank-100 accuracies of 27.6% and 10.6%). We believe our prototype system will be of great value to law enforcement agencies in apprehending suspects in a timely fashion.
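
As a toy illustration of per-component matching and score fusion (not the authors' pipeline, which uses ASM landmarking and MLBP features), the sketch below computes cosine similarities per facial component and fuses them with a weighted sum; the component names, feature dimensions, and weights are assumptions.

```python
import numpy as np

def fused_similarity(sketch_feats, photo_feats, weights=None):
    """Fuse per-component similarities between a composite sketch and a photo.

    `sketch_feats` and `photo_feats` map component names (e.g. 'eyes', 'nose')
    to feature vectors such as texture histograms; cosine similarity is
    computed per component and combined with a weighted sum. Landmark
    detection and feature extraction are assumed to have happened upstream.
    """
    components = sorted(sketch_feats)
    if weights is None:
        weights = {c: 1.0 / len(components) for c in components}
    score = 0.0
    for c in components:
        a = np.asarray(sketch_feats[c], dtype=float)
        b = np.asarray(photo_feats[c], dtype=float)
        score += weights[c] * float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return score

rng = np.random.default_rng(0)
sketch = {c: rng.random(59) for c in ("eyes", "nose", "mouth")}
photo = {c: v + 0.05 * rng.random(59) for c, v in sketch.items()}
print("fused similarity:", round(fused_similarity(sketch, photo), 3))
```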

Journal ArticleDOI
Jun Zhang, Chao Chen, Yang Xiang, Wanlei Zhou, Yong Xiang
TL;DR: The experimental results show that the proposed traffic classification scheme can achieve much better classification performance than existing state-of-the-art traffic classification methods.
Abstract: This paper presents a novel traffic classification scheme to improve classification performance when few training data are available. In the proposed scheme, traffic flows are described using the discretized statistical features and flow correlation information is modeled by bag-of-flow (BoF). We solve the BoF-based traffic classification in a classifier combination framework and theoretically analyze the performance benefit. Furthermore, a new BoF-based traffic classification method is proposed to aggregate the naive Bayes (NB) predictions of the correlated flows. We also present an analysis on prediction error sensitivity of the aggregation strategies. Finally, a large number of experiments are carried out on two large-scale real-world traffic datasets to evaluate the proposed scheme. The experimental results show that the proposed scheme can achieve much better classification performance than existing state-of-the-art traffic classification methods.
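
A hedged sketch of one simple bag-of-flows aggregation strategy, summing per-flow naive Bayes log-posteriors, using scikit-learn; the synthetic features, class structure, and this particular aggregation rule are illustrative assumptions rather than the paper's exact method.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Train a flow classifier on labelled statistical features (synthetic here).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
y_train = np.array([0] * 200 + [1] * 200)
nb = GaussianNB().fit(X_train, y_train)

def classify_bag(flow_features):
    """Assign one label to a bag of correlated flows.

    Per-flow log-posteriors from the naive Bayes model are summed across the
    bag (one simple aggregation strategy among several one could use) and the
    class with the largest aggregate score wins.
    """
    log_post = nb.predict_log_proba(flow_features)  # shape: (n_flows, n_classes)
    return nb.classes_[np.argmax(log_post.sum(axis=0))]

bag = rng.normal(2, 1, (6, 5))      # six correlated flows from the same application
print("predicted class for the bag:", classify_bag(bag))
```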

Journal ArticleDOI
TL;DR: A simple and efficient user authentication approach based on a fixed mouse-operation task is presented, achieving a false-acceptance rate of 8.74% and a false-rejection rate of 7.69% with a corresponding authentication time of 11.8 seconds.
Abstract: Behavior-based user authentication with pointing devices, such as mice or touchpads, has been gaining attention. As an emerging behavioral biometric, mouse dynamics aims to address the authentication problem by verifying computer users on the basis of their mouse operating styles. This paper presents a simple and efficient user authentication approach based on a fixed mouse-operation task. For each sample of the mouse-operation task, both traditional holistic features and newly defined procedural features are extracted for accurate and fine-grained characterization of a user's unique mouse behavior. Distance-measurement and eigenspace-transformation techniques are applied to obtain feature components for efficiently representing the original mouse feature space. Then a one-class learning algorithm is employed in the distance-based feature eigenspace for the authentication task. The approach is evaluated on a dataset of 5550 mouse-operation samples from 37 subjects. Extensive experimental results are included to demonstrate the efficacy of the proposed approach, which achieves a false-acceptance rate of 8.74%, and a false-rejection rate of 7.69% with a corresponding authentication time of 11.8 seconds. Two additional experiments are provided to compare the current approach with other approaches in the literature. Our dataset is publicly available to facilitate future research.
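
As a rough stand-in for the eigenspace-plus-one-class-learning pipeline described above (not the authors' distance-based eigenspace method), the sketch below projects synthetic mouse-feature vectors with PCA and trains a one-class SVM on the legitimate user's enrollment samples; all data and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
legit_train = rng.normal(0.0, 1.0, (300, 40))     # enrollment samples of the legitimate user
legit_test = rng.normal(0.0, 1.0, (50, 40))
impostor_test = rng.normal(1.5, 1.0, (50, 40))

# Project the raw mouse features into a low-dimensional eigenspace, then learn
# a one-class boundary around the legitimate user's samples.
model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      OneClassSVM(nu=0.1, gamma='scale'))
model.fit(legit_train)

frr = np.mean(model.predict(legit_test) == -1)    # legitimate samples rejected
far = np.mean(model.predict(impostor_test) == 1)  # impostor samples accepted
print(f"FRR ~ {frr:.2f}, FAR ~ {far:.2f}")
```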

Journal ArticleDOI
TL;DR: This work proposes a novel technique based on additively homomorphic encryption that is efficient, requires no user interaction whatsoever (except for data upload and download), and allows evaluating any dynamically chosen function on inputs encrypted under different public keys.
Abstract: Secure multiparty computation enables a set of users to evaluate certain functionalities on their respective inputs while keeping these inputs encrypted throughout the computation. In many applications, however, outsourcing these computations to an untrusted server is desirable, so that the server can perform the computation on behalf of the users. Unfortunately, existing solutions are either inefficient, rely heavily on user interaction, or require the inputs to be encrypted under the same public key, drawbacks that make their employment in practice very limited. We propose a novel technique based on additively homomorphic encryption that avoids all these drawbacks. This method is efficient, requires no user interaction whatsoever (except for data upload and download), and allows evaluating any dynamically chosen function on inputs encrypted under different public keys. Our solution assumes the existence of two non-colluding but untrusted servers that jointly perform the computation by means of a cryptographic protocol. This protocol is proven to be secure in the semi-honest model. By developing application-tailored variants of our approach, we demonstrate its versatility and apply it in two real-world scenarios from different domains, privacy-preserving face recognition and private smart metering. We also give a proof-of-concept implementation to highlight its feasibility.
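
To illustrate the additive homomorphism such constructions rely on, here is a minimal sketch assuming the python-paillier (`phe`) package; the actual two-server protocol, the handling of inputs encrypted under different public keys, and the security argument are far beyond this toy example.

```python
# Minimal demonstration of additive homomorphism with the `phe`
# (python-paillier) package; illustrative only, not the paper's protocol.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Two users encrypt their private inputs under the same key (for illustration;
# the paper's construction supports inputs under *different* public keys).
enc_a = public_key.encrypt(17)
enc_b = public_key.encrypt(25)

# An untrusted server can combine ciphertexts without seeing the plaintexts.
enc_sum = enc_a + enc_b            # homomorphic addition
enc_scaled = enc_a * 3             # multiplication by a public constant

print(private_key.decrypt(enc_sum))     # 42
print(private_key.decrypt(enc_scaled))  # 51
```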

Journal ArticleDOI
TL;DR: This paper proposes a trajectory privacy-preserving framework, named TrPF, for participatory sensing; it improves the theoretical mix-zones model by taking the time factor into account from the perspective of graph theory, and its effectiveness is evaluated on the basis of information entropy.
Abstract: The ubiquity of various cheap embedded sensors on mobile devices, for example cameras, microphones, accelerometers, and so on, is enabling the emergence of participatory sensing applications. While participatory sensing can benefit individuals and communities greatly, the collection and analysis of participants' location and trajectory data may jeopardize their privacy. However, the existing proposals mostly focus on participants' location privacy, and few address participants' trajectory privacy. Effective analysis of trajectories that contain spatial-temporal history information will reveal participants' whereabouts and the relevant personal privacy. In this paper, we propose a trajectory privacy-preserving framework, named TrPF, for participatory sensing. Based on the framework, we improve the theoretical mix-zones model by taking the time factor into account from the perspective of graph theory. Finally, we analyze the threat models with different background knowledge and evaluate the effectiveness of our proposal on the basis of information entropy, and then compare the performance of our proposal with previous trajectory privacy protections. The analysis and simulation results prove that our proposal can protect participants' trajectory privacy effectively with lower information loss and costs than the other proposals.
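
The abstract above mentions evaluating trajectory privacy via information entropy. A minimal sketch of that kind of metric: the Shannon entropy of the adversary's distribution over candidate trajectory links in a mix-zone (how that distribution is obtained depends on the threat model and is assumed here).

```python
import numpy as np

def trajectory_privacy_entropy(link_probabilities):
    """Shannon entropy of the adversary's guess over candidate trajectory links.

    `link_probabilities` is the adversary's probability distribution over which
    outgoing trajectory corresponds to a given incoming one in a mix-zone;
    higher entropy means less certainty, i.e. better trajectory privacy.
    """
    p = np.asarray(link_probabilities, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

print(trajectory_privacy_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: best case for 4 candidates
print(trajectory_privacy_entropy([0.9, 0.05, 0.03, 0.02]))   # low entropy: adversary nearly certain
```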

Journal ArticleDOI
TL;DR: It is shown that selection of features together with fusion of LBP features significantly improved gender classification accuracy compared to previously published results, and a significant reduction in processing time is shown, which makes real-time applications of gender classification feasible.
Abstract: In this paper, we extend the use of feature selection based on mutual information and feature fusion to improve gender classification of face images. We compare the results of fusing three groups of features, three spatial scales, and four different mutual information measures to select features. We also show improved results from fusing LBP features with different radii and spatial scales, and from selecting features using mutual information. As measures of mutual information we use minimum redundancy and maximal relevance (mRMR), normalized mutual information feature selection (NMIFS), conditional mutual information feature selection (CMIFS), and conditional mutual information maximization (CMIM). We tested the results on four databases: FERET and UND, under controlled conditions, the LFW database under unconstrained scenarios, and AR for occlusions. It is shown that selection of features together with fusion of LBP features significantly improved gender classification accuracy compared to previously published results. We also show a significant reduction in processing time because of the feature selection, which makes real-time applications of gender classification feasible.
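
As a simplified stand-in for the mutual-information-based feature selection discussed above (plain MI ranking rather than mRMR/NMIFS/CMIFS/CMIM, and synthetic features instead of fused LBP histograms), the sketch below ranks features with scikit-learn and compares classification accuracy with and without selection.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for fused LBP features computed at several radii/scales.
X, y = make_classification(n_samples=600, n_features=200, n_informative=20,
                           random_state=0)

# Rank features by mutual information with the class label and keep the top k.
mi = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(mi)[::-1][:30]

acc_all = cross_val_score(SVC(), X, y, cv=5).mean()
acc_sel = cross_val_score(SVC(), X[:, top_k], y, cv=5).mean()
print(f"all features: {acc_all:.3f}, top-30 by MI: {acc_sel:.3f}")
```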

Journal ArticleDOI
TL;DR: This study investigates physical-layer security in wireless ad hoc networks and two types of multi-antenna transmission schemes for providing secrecy enhancements, and indicates that, under transmit power optimization, the beamforming scheme outperforms the sectoring scheme except when the number of transmit antennas is sufficiently large.
Abstract: We study physical-layer security in wireless ad hoc networks and investigate two types of multi-antenna transmission schemes for providing secrecy enhancements. To establish secure transmission against malicious eavesdroppers, we consider the generation of artificial noise with either sectoring or beamforming. For both approaches, we provide a statistical characterization and tradeoff analysis of the outage performance of the legitimate communication and the eavesdropping links. We then investigate the network-wide secrecy throughput performance of both schemes in terms of the secrecy transmission capacity, and study the optimal power allocation between the information signal and the artificial noise. Our analysis indicates that, under transmit power optimization, the beamforming scheme outperforms the sectoring scheme, except for the case where the number of transmit antennas is sufficiently large. Our study also reveals some interesting differences between the optimal power allocation for the sectoring and beamforming schemes.
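
For context on the artificial-noise transmission the paper builds on, here is a standard, textbook-style signal model given as a hedged sketch; the sectoring variant and the network-wide secrecy transmission capacity analysis are the paper's contributions and are not captured by this equation.

```latex
% h is the channel to the legitimate receiver, N_t the number of transmit
% antennas, P the total power, phi the fraction allocated to the information
% symbol s, and Z an orthonormal basis of the null space of h, so the
% legitimate link sees no artificial noise v while the eavesdropper does. The
% power split phi is the quantity whose optimization the paper studies.
\begin{equation}
  \mathbf{x} \;=\; \sqrt{\phi P}\,\frac{\mathbf{h}^{H}}{\lVert\mathbf{h}\rVert}\,s
  \;+\; \sqrt{\frac{(1-\phi)P}{N_t-1}}\,\mathbf{Z}\,\mathbf{v}.
\end{equation}
```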

Journal ArticleDOI
TL;DR: This paper designs a cloud-assisted privacy preserving mobile health monitoring system to protect the privacy of the involved parties and their data and adapts the outsourcing decryption technique and a newly proposed key private proxy reencryption to shift the computational complexity to the cloud without compromising clients' privacy and service providers' intellectual property.
Abstract: Cloud-assisted mobile health (mHealth) monitoring, which applies the prevailing mobile communications and cloud computing technologies to provide feedback decision support, has been considered as a revolutionary approach to improving the quality of healthcare service while lowering the healthcare cost. Unfortunately, it also poses a serious risk to both clients' privacy and the intellectual property of monitoring service providers, which could deter the wide adoption of mHealth technology. This paper addresses this important problem by designing a cloud-assisted privacy-preserving mobile health monitoring system to protect the privacy of the involved parties and their data. Moreover, the outsourcing decryption technique and a newly proposed key private proxy reencryption are adapted to shift the computational complexity of the involved parties to the cloud without compromising clients' privacy and service providers' intellectual property. Finally, our security and performance analysis demonstrates the effectiveness of our proposed design.

Journal ArticleDOI
TL;DR: It is suggested that it is possible to construct a gait-based identification system for arbitrary probe views, by incorporating the information of gallery data with sufficient viewing angles, and ViDP performs even better than the state-of-the-art view transformation methods.
Abstract: Existing methods for multi-view gait-based identification mainly focus on transforming the features of one view to the features of another view, which is technically sound but has limited practical utility. In this paper, we propose a view-invariant discriminative projection (ViDP) method, to improve the discriminative ability of multi-view gait features by a unitary linear projection. It is implemented by iteratively learning the low dimensional geometry and finding the optimal projection according to the geometry. By virtue of ViDP, the multi-view gait features can be directly matched without knowing or estimating the viewing angles. The ViDP feature projected from gait energy image achieves promising performance in the experiments of multi-view gait-based identification. We suggest that it is possible to construct a gait-based identification system for arbitrary probe views, by incorporating the information of gallery data with sufficient viewing angles. In addition, ViDP performs even better than the state-of-the-art view transformation methods, which are trained for the combination of gallery and probe viewing angles in every evaluation.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a statistical framework for rigorously analyzing honeypot-captured cyber attack data, which is built on the concept of stochastic cyber attack process.
Abstract: Rigorously characterizing the statistical properties of cyber attacks is an important problem. In this paper, we propose the first statistical framework for rigorously analyzing honeypot-captured cyber attack data. The framework is built on the novel concept of stochastic cyber attack process, a new kind of mathematical objects for describing cyber attacks. To demonstrate use of the framework, we apply it to analyze a low-interaction honeypot dataset, while noting that the framework can be equally applied to analyze high-interaction honeypot data that contains richer information about the attacks. The case study finds, for the first time, that long-range dependence (LRD) is exhibited by honeypot-captured cyber attacks. The case study confirms that by exploiting the statistical properties (LRD in this case), it is feasible to predict cyber attacks (at least in terms of attack rate) with good accuracy. This kind of prediction capability would provide sufficient early-warning time for defenders to adjust their defense configurations or resource allocations. The idea of “gray-box” (rather than “black-box”) prediction is central to the utility of the statistical framework, and represents a significant step towards ultimately understanding (the degree of) the predictability of cyber attacks.
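
The case study above reports long-range dependence in honeypot-captured attack rates. As an illustration of how LRD can be checked on an attack-rate time series (using the aggregated-variance Hurst estimator, one of several standard estimators and not necessarily the one used in the paper), consider the sketch below.

```python
import numpy as np

def hurst_aggregated_variance(series, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Aggregated-variance estimate of the Hurst exponent H.

    The series is averaged over non-overlapping blocks of increasing size m;
    for a long-range dependent process the variance of the block means decays
    like m^(2H - 2), so H is read off a log-log regression. Values of H
    clearly above 0.5 indicate LRD.
    """
    series = np.asarray(series, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(series) // m
        if n_blocks < 2:
            continue
        block_means = series[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(block_means.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)
    return 1 + slope / 2

# Illustrative only: i.i.d. noise has no LRD, so H should be close to 0.5.
rng = np.random.default_rng(0)
print("H for i.i.d. noise ~", round(hurst_aggregated_variance(rng.normal(size=4096)), 2))
```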