
Showing papers in "IEEE Transactions on Dependable and Secure Computing in 2018"


Journal ArticleDOI
TL;DR: This paper presents a proof-of-concept for a decentralized energy trading system using blockchain technology, multi-signatures, and anonymous encrypted messaging streams, enabling peers to anonymously negotiate energy prices and securely perform trading transactions.
Abstract: Smart grids equipped with bi-directional communication flow are expected to provide more sophisticated consumption monitoring and energy trading. However, the issues related to the security and privacy of consumption and trading data present serious challenges. In this paper, we address the problem of providing transaction security in decentralized smart grid energy trading without reliance on trusted third parties. We have implemented a proof-of-concept for a decentralized energy trading system using blockchain technology, multi-signatures, and anonymous encrypted messaging streams, enabling peers to anonymously negotiate energy prices and securely perform trading transactions. We conducted case studies to perform security analysis and performance evaluation within the context of the elicited security and privacy requirements.

837 citations


Journal ArticleDOI
TL;DR: This paper explores how to launch an inference attack exploiting social networks with a mixture of non-sensitive attributes and social relationships, and proposes a data sanitization method collectively manipulating user profile and friendship relations to protect against inference attacks in social networks.
Abstract: Releasing social network data could seriously breach user privacy. User profile and friendship relations are inherently private. Unfortunately, sensitive information may be predicted from released data through data mining techniques. Therefore, sanitizing network data prior to release is necessary. In this paper, we explore how to launch an inference attack exploiting social networks with a mixture of non-sensitive attributes and social relationships. We map this issue to a collective classification problem and propose a collective inference model. In our model, an attacker utilizes user profiles and social relationships in a collective manner to predict sensitive information of related victims in a released social network dataset. To protect against such attacks, we propose a data sanitization method that collectively manipulates user profiles and friendship relations. Besides sanitizing friendship relations, the proposed method can take advantage of various data-manipulating methods. We show that we can easily reduce the adversary's prediction accuracy on sensitive information while incurring a smaller accuracy decrease on non-sensitive information across three social network datasets. This is the first work to employ collective methods involving various data-manipulating methods and social relationships to protect against inference attacks in social networks.

437 citations


Journal ArticleDOI
TL;DR: MADAM is a novel host-based malware detection system for Android devices which simultaneously analyzes and correlates features at four levels: kernel, application, user and package, to detect and stop malicious behaviors.
Abstract: Android users are constantly threatened by an increasing number of malicious applications (apps), generically called malware. Malware constitutes a serious threat to user privacy, money, device, and file integrity. In this paper we note that, by studying their actions, we can classify malware into a small number of behavioral classes, each of which performs a limited set of misbehaviors that characterize them. These misbehaviors can be defined by monitoring features belonging to different Android levels. In this paper we present MADAM, a novel host-based malware detection system for Android devices which simultaneously analyzes and correlates features at four levels: kernel, application, user, and package, to detect and stop malicious behaviors. MADAM has been specifically designed to take into account those behaviors that are characteristic of almost every real malware found in the wild. MADAM detects and effectively blocks more than 96 percent of malicious apps, drawn from three large datasets with about 2,800 apps, by exploiting the cooperation of two parallel classifiers and a behavioral signature-based detector. Extensive experiments, which also include the analysis of a testbed of 9,804 genuine apps, have been conducted to show the low false alarm rate, the negligible performance overhead, and the limited battery consumption.

343 citations


Journal ArticleDOI
Ding Wang1, Ping Wang1
TL;DR: In this paper, a security model that can accurately capture the practical capabilities of an adversary is defined and a broad set of twelve properties framed as a systematic methodology for comparative evaluation, allowing schemes to be rated across a common spectrum.
Abstract: As the most prevailing two-factor authentication mechanism, smart-card-based password authentication has been a subject of intensive research in the past two decades, and hundreds of such schemes have been proposed wave upon wave. In most of these studies, there is no comprehensive and systematic metric available for schemes to be assessed objectively, and the authors present new schemes with assertions of superiority over previous ones while overlooking dimensions on which their schemes fare poorly. Unsurprisingly, most of them are far from satisfactory, either falling short of important security goals or lacking critical properties, especially being stuck with the security-usability tension. To overcome this issue, in this work we first explicitly define a security model that can accurately capture the practical capabilities of an adversary and then suggest a broad set of twelve properties framed as a systematic methodology for comparative evaluation, allowing schemes to be rated across a common spectrum. As our main contribution, a new scheme is advanced to resolve the various issues arising from user corruption and server compromise, and it is formally proved secure under the harshest adversary model to date. In particular, by integrating "honeywords", traditionally the purview of system security, with a "fuzzy-verifier", our scheme hits "two birds": it not only eliminates the long-standing security-usability conflict that is considered intractable in the literature, but also achieves security guarantees beyond the conventional optimal security bound.

323 citations


Journal ArticleDOI
TL;DR: A new authentication scheme for multi-server environments using the Chebyshev chaotic map that provides strong authentication, and also supports a biometrics and password change phase that a legitimate user can perform locally at any time, as well as a dynamic server addition phase.
Abstract: A multi-server environment is the most common scenario for a large number of enterprise-class applications. In this environment, user registration at each server is not recommended. Using a multi-server authentication architecture, a user can manage authentication to various servers using a single identity and password. We introduce a new authentication scheme for multi-server environments using the Chebyshev chaotic map. In our scheme, we use the Chebyshev chaotic map and biometric verification along with password verification for authorization and access to various application servers. The proposed scheme is lightweight compared to other related schemes. We only use the Chebyshev chaotic map, a cryptographic hash function, and symmetric key encryption-decryption in the proposed scheme. Our scheme provides strong authentication, and also supports a biometrics and password change phase that a legitimate user can invoke locally at any time, as well as a dynamic server addition phase. We perform formal security verification using the broadly-accepted Automated Validation of Internet Security Protocols and Applications (AVISPA) tool to show that the presented scheme is secure. In addition, we conduct a formal security analysis using Burrows-Abadi-Needham (BAN) logic along with random oracle models and prove that our scheme is secure against different known attacks. High security and significantly low computation and communication costs make our scheme very suitable for multi-server environments as compared to other existing related schemes.
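Key agreement in Chebyshev-chaotic-map schemes typically rests on the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_s(T_r(x)) = T_rs(x). A minimal numeric sketch of that property (illustrative only; this is not the paper's protocol, and practical schemes compute over finite fields to avoid the precision and security pitfalls of floating point):

```python
import math

def chebyshev(n, x):
    # T_n(x) = cos(n * arccos(x)) for x in [-1, 1] and integer n
    return math.cos(n * math.acos(x))

# Diffie-Hellman-style exchange: both sides derive the same value
x = 0.53           # public parameter
r, s = 7, 11       # private exponents of the two parties
key_a = chebyshev(r, chebyshev(s, x))   # party A applies r to B's value
key_b = chebyshev(s, chebyshev(r, x))   # party B applies s to A's value
# both equal T_{r*s}(x) by the semigroup property
```

The commutativity holds because, for integer n, cos(n * (2πk ± θ)) = cos(nθ), so the arccos/cos round trip cannot change the result.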

171 citations


Journal ArticleDOI
TL;DR: This paper presents the system architecture of POCR and the associated toolkits required for the privacy-preserving calculation of integers and rational numbers, ensuring that commonly used outsourced operations can be handled on-the-fly.
Abstract: In this paper, we propose a framework for efficient and privacy-preserving outsourced calculation of rational numbers, which we refer to as POCR. Using POCR, a user can securely outsource the storing and processing of rational numbers to a cloud server without compromising the security of the (original) data and the computed results. We present the system architecture of POCR and the associated toolkits required in the privacy-preserving calculation of integers and rational numbers to ensure that commonly used outsourced operations can be handled on-the-fly. We then prove that the proposed POCR achieves the goal of secure integer and rational number calculation without resulting in privacy leakage to unauthorized parties, and demonstrate the utility and the efficiency of POCR using simulations.

161 citations


Journal ArticleDOI
Qian Wang1, Yan Zhang1, Xiao Lu1, Zhibo Wang1, Zhan Qin2, Kui Ren2 
TL;DR: Experimental results show that the proposed schemes outperform the existing methods and improve the utility of real-time data sharing with strong privacy guarantee.
Abstract: Nowadays, gigantic amounts of crowd-sourced data from mobile devices have become widely available in social networks, enabling many important data mining applications that improve the quality of our daily lives. While providing tremendous benefits, the release of crowd-sourced social network data to the public poses considerable threats to mobile users' privacy. In this paper, we investigate the problem of real-time spatio-temporal data publishing in social networks with privacy preservation. Specifically, we consider continuous publication of population statistics and design RescueDP, an online aggregate monitoring framework over infinite streams with a $w$-event privacy guarantee. Its key components, including adaptive sampling, adaptive budget allocation, dynamic grouping, perturbation, and filtering, are seamlessly integrated as a whole to provide privacy-preserving statistics publishing over infinite time stamps. Moreover, we further propose an enhanced RescueDP with neural networks to accurately predict the values of statistics and improve the utility of released data. Both RescueDP and the enhanced RescueDP are proven to satisfy $w$-event privacy. We evaluate the proposed schemes with real-world as well as synthetic datasets and compare them with two representative $w$-event privacy-assured methods. Experimental results show that the proposed schemes outperform the existing methods and improve the utility of real-time data sharing with a strong privacy guarantee.
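The budget-allocation idea behind $w$-event privacy can be sketched with the simplest (uniform) allocation: every timestamp spends ε/w of the privacy budget, so any sliding window of w consecutive releases spends at most ε. This is only an illustration of the principle; RescueDP's adaptive sampling, grouping, and filtering are more elaborate:

```python
import math
import random

def laplace(scale, rng):
    # sample Laplace(0, scale) via the inverse-CDF transform
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def publish_stream(counts, eps, w, sensitivity=1.0, seed=0):
    """Release a noisy version of a count stream: each timestamp gets
    budget eps/w, so any w consecutive releases spend at most eps."""
    rng = random.Random(seed)
    scale = sensitivity * w / eps   # Laplace scale = sensitivity / (eps / w)
    return [c + laplace(scale, rng) for c in counts]

noisy = publish_stream([120, 131, 98, 104], eps=1.0, w=2)
```

The adaptive variants spend less budget on timestamps whose statistics are predictable, which is where the utility gains of the paper's approach come from.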

148 citations


Journal ArticleDOI
TL;DR: A parallel log parser (namely POP) is designed and implemented on top of Spark, a large-scale data processing platform, to address the limitations of existing log parsers when applying them in practice.
Abstract: Logs are widely used in system management for dependability assurance because they are often the only data available that record detailed system runtime behaviors in production. Because the size of logs is constantly increasing, developers (and operators) intend to automate their analysis by applying data mining methods, which require structured input data (e.g., matrices). This has triggered a number of studies on log parsing, which aims to transform free-text log messages into structured events. However, due to the lack of open-source implementations of these log parsers and benchmarks for performance comparison, developers are unlikely to be aware of the effectiveness of existing log parsers and their limitations when applying them in practice. They must often reimplement or redesign one, which is time-consuming and redundant. In this paper, we first present a characterization study of the current state-of-the-art log parsers and evaluate their efficacy on five real-world datasets with over ten million log messages. We determine that, although the overall accuracy of these parsers is high, they are not robust across all datasets. When logs grow to a large scale (e.g., 200 million log messages), which is common in practice, these parsers are not efficient enough to handle such data on a single computer. To address the above limitations, we design and implement a parallel log parser (namely POP) on top of Spark, a large-scale data processing platform. Comprehensive experiments have been conducted to evaluate POP on both synthetic and real-world datasets. The evaluation results demonstrate the capability of POP in terms of accuracy, efficiency, and effectiveness on subsequent log mining tasks.
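At its core, log parsing recovers a constant event template by abstracting away the variable fields of each message. A toy regex-masking sketch of the idea (the log lines are hypothetical; POP itself uses a more sophisticated, Spark-parallelized algorithm):

```python
import re

def to_template(line):
    # mask common variable fields so messages collapse to their event template
    line = re.sub(r'\d+\.\d+\.\d+\.\d+', '<*>', line)  # IPv4 addresses
    line = re.sub(r'\b\d+\b', '<*>', line)             # standalone integers
    return line

logs = [
    "Connection from 10.0.0.5 port 5204",
    "Connection from 10.0.0.9 port 5310",
]
templates = {to_template(l) for l in logs}
# both free-text messages map to a single structured event:
# "Connection from <*> port <*>"
```

Grouping messages by template like this is what turns raw logs into the matrix-style input that downstream data mining methods expect.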

146 citations


Journal ArticleDOI
TL;DR: This paper utilizes a system theoretic approach, based on prior research on system safety, that takes both physical and cyber components into account to analyze the threats exploited by Stuxnet and concludes that such an approach is capable of identifying cyber threats towards CPSs at the design level.
Abstract: Cyber physical systems (CPSs) are increasingly being adopted in a wide range of industries such as smart power grids. Even though the rapid proliferation of CPSs brings huge benefits to our society, it also provides potential attackers with many new opportunities to affect the physical world, such as disrupting the services controlled by CPSs. Stuxnet is an example of such an attack that was designed to interrupt the Iranian nuclear program. In this paper, we show how the vulnerabilities exploited by Stuxnet could have been addressed at the design level. We utilize a system theoretic approach, based on prior research on system safety, that takes both physical and cyber components into account to analyze the threats exploited by Stuxnet. We conclude that such an approach is capable of identifying cyber threats towards CPSs at the design level, and we provide practical recommendations that CPS designers can utilize to design a more secure CPS.

145 citations


Journal ArticleDOI
TL;DR: This work tackles the challenge of supporting large-scale similarity search over encrypted feature-rich multimedia data by considering the search criteria as a high-dimensional feature vector instead of a keyword, with solutions built on carefully-designed fuzzy Bloom filters that utilize locality sensitive hashing to encode an index associating the file identifiers and feature vectors.
Abstract: Storage services allow data owners to store their huge amounts of potentially sensitive data, such as audio, images, and videos, on remote cloud servers in encrypted form. To enable retrieval of encrypted files of interest, searchable symmetric encryption (SSE) schemes have been proposed. However, many schemes construct indexes based on keyword-file pairs and focus on boolean expressions of exact keyword matches. Moreover, most dynamic SSE schemes cannot achieve forward privacy and reveal unnecessary information when updating the encrypted databases. We tackle the challenge of supporting large-scale similarity search over encrypted feature-rich multimedia data by considering the search criteria as a high-dimensional feature vector instead of a keyword. Our solutions are built on carefully-designed fuzzy Bloom filters which utilize locality sensitive hashing (LSH) to encode an index associating the file identifiers and feature vectors. Our schemes are proven to be secure against adaptively chosen query attacks and forward private in the standard model. We have evaluated the performance of our scheme on real-world high-dimensional datasets, and achieved a search quality of 99 percent recall with only a small number of hash tables for LSH. This shows that our index is compact and searching is not only efficient but also accurate.
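The locality sensitive hashing underpinning such fuzzy indexes can be illustrated with random-hyperplane LSH for cosine similarity: each hyperplane contributes one sign bit, and nearby feature vectors receive identical or nearly identical signatures, so they populate the same index positions with high probability. A plain, unencrypted sketch of the hashing step only (the paper's construction additionally encrypts the index):

```python
import random

def lsh_signature(vec, planes):
    # one bit per hyperplane: the sign of the dot product with the vector
    return tuple(1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
                 for plane in planes)

rng = random.Random(42)
dim, n_planes = 8, 6
planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_planes)]

a = [1.0] * dim
b = [1.0] * (dim - 1) + [0.99]   # a near-duplicate of a

sig_a = lsh_signature(a, planes)
sig_b = lsh_signature(b, planes)
matching_bits = sum(x == y for x, y in zip(sig_a, sig_b))
```

For two vectors at angle θ, each bit differs with probability θ/π, so near-duplicates agree on almost all bits while unrelated vectors do not.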

129 citations


Journal ArticleDOI
TL;DR: An efficient Cross-Domain HandShake (CDHS) scheme is constructed that allows symptoms-matching within MHSNs; the security of the scheme is proven, and a comparative summary demonstrates that the proposed CDHS scheme incurs lower computation and communication costs.
Abstract: With rapid developments of sensor, wireless, and mobile communication technologies, Mobile Healthcare Social Networks (MHSNs) have emerged as a popular means of communication in healthcare services. Within MHSNs, patients can use their mobile devices to securely share their experiences, broaden their understanding of the illness or symptoms, form a supportive network, and transmit information (e.g., state of health and new symptoms) between users and other stakeholders (e.g., medical centers). Despite the benefits afforded by MHSNs, there are underlying security and privacy issues (e.g., due to the transmission of messages via a wireless channel). The handshake scheme is an important cryptographic mechanism, which can provide secure communication in MHSNs (e.g., anonymity and mutual authentication between users, such as patients). In this paper, we present a new framework for the handshake scheme in MHSNs, which is based on hierarchical identity-based cryptography. We then construct an efficient Cross-Domain HandShake (CDHS) scheme that allows symptoms-matching within MHSNs. For example, using the proposed CDHS scheme, two patients registered with different healthcare centers can achieve mutual authentication and generate a session key for future secure communications. We then prove the security of the scheme, and a comparative summary demonstrates that the proposed CDHS scheme incurs lower computation and communication costs. We also implement the proposed CDHS scheme and three related schemes in a proof-of-concept Android app to demonstrate the utility of the scheme. Findings from the evaluations demonstrate that the proposed CDHS scheme achieves reductions of 18.14 percent in computation cost and 5.41 percent in communication cost in comparison to three other related handshake schemes.

Journal ArticleDOI
TL;DR: A JPEG encryption algorithm is proposed, which enciphers an image to a smaller size and keeps the format compliant with JPEG decoders; it outperforms a previous work in terms of separation capability, embedding capacity, and security.
Abstract: While most techniques of reversible data hiding in encrypted images (RDH-EI) are developed for uncompressed images, this paper provides a separable reversible data hiding protocol for encrypted JPEG bitstreams. We first propose a JPEG encryption algorithm, which enciphers an image to a smaller size and keeps the format compliant with JPEG decoders. After a content owner uploads the encrypted JPEG bitstream to a remote server, a data hider embeds an additional message into the encrypted copy without changing the bitstream size. On the recipient side, the original bitstream can be reconstructed losslessly using an iterative recovery algorithm based on the blocking artifact. Since message extraction and image recovery are separable, anyone who has the embedding key can extract the message from the marked encrypted copy. Experimental results show that the proposed method outperforms a previous work in terms of separation capability, embedding capacity, and security.

Journal ArticleDOI
TL;DR: In this paper, the authors used digital DNA and the similarity between groups of users to characterize both genuine accounts and spambots, and designed the Social Fingerprinting technique, which is able to discriminate among spammers and genuine accounts in both a supervised and an unsupervised fashion.
Abstract: Spambot detection in online social networks is a long-lasting challenge involving the study and design of detection techniques capable of efficiently identifying ever-evolving spammers. Recently, a new wave of social spambots has emerged, with advanced human-like characteristics that allow them to go undetected even by current state-of-the-art algorithms. In this paper, we show that efficient spambot detection can be achieved via an in-depth analysis of their collective behaviors, exploiting the digital DNA technique for modeling the behaviors of social network users. Inspired by its biological counterpart, the digital DNA representation encodes the behavioral lifetime of a digital account in a sequence of characters. Then, we define a similarity measure for such digital DNA sequences. We build upon digital DNA and the similarity between groups of users to characterize both genuine accounts and spambots. Leveraging such a characterization, we design the Social Fingerprinting technique, which is able to discriminate between spambots and genuine accounts in both a supervised and an unsupervised fashion. We also evaluate the effectiveness of Social Fingerprinting and compare it with three state-of-the-art detection techniques, showing the superiority of our solution. Finally, among the peculiarities of our approach is the possibility to apply off-the-shelf DNA analysis techniques to study online users' behaviors and to efficiently rely on a limited number of lightweight account characteristics.
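The sequence-similarity intuition can be sketched with a longest-common-substring measure over toy behavioral strings. The character encoding below is hypothetical (the paper defines its own alphabets and compares groups of accounts rather than pairs), but it shows why automated accounts stand out:

```python
def longest_common_substring(a, b):
    # classic dynamic programming: dp[i][j] = length of the common
    # substring ending at a[i-1] and b[j-1]
    best = 0
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                best = max(best, dp[i][j])
    return best

# hypothetical encoding: 'A' = tweet, 'C' = reply, 'T' = retweet
genuine = "ACTACCTA"
bot1 = "AAAAACTA"
bot2 = "AAAAACTT"

# automated accounts share much longer behavioral substrings
sim_bots = longest_common_substring(bot1, bot2) / max(len(bot1), len(bot2))
sim_mixed = longest_common_substring(genuine, bot1) / max(len(genuine), len(bot1))
```

High within-group similarity is exactly the collective fingerprint the technique exploits: bots coordinated by one botmaster behave alike, while genuine users' sequences are far more heterogeneous.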

Journal ArticleDOI
TL;DR: A privacy-preserving access control scheme for ICN and its corresponding attribute management solution are presented and the proposed approach is compatible with existing flat name based ICN architectures.
Abstract: Information Centric Networking (ICN) is a new network architecture that aims to overcome the weaknesses of the existing IP-based networking architecture. Instead of establishing a connection between the communicating hosts, ICN focuses on the content, i.e., the data transmitted in the network. Content copies in ICN can be cached at different locations. The content is out of its owner's control once it is published. Thus, enforcing access control policies on distributed content copies is crucial in ICN. Attribute-Based Encryption (ABE) is a feasible approach to enforce such control mechanisms in this environment. However, applying ABE in ICN faces two challenges: from a management perspective, it is complicated to manage attributes in a distributed manner; from a privacy protection perspective, unlike in traditional networks, the enforced content access policies are public to all ICN users. Thus, it is desirable that unauthorized content viewers are not able to retrieve the access policy. To this end, a privacy-preserving access control scheme for ICN and its corresponding attribute management solution are presented in this paper. The proposed approach is compatible with existing flat-name-based ICN architectures.

Journal ArticleDOI
Jiaojiao Jiang1, Sheng Wen1, Shui Yu1, Yang Xiang1, Wanlei Zhou1 
TL;DR: The proposed method is the first that can be used to identify rumor sources in time-varying social networks; it addresses the scalability issue of source identification problems and therefore dramatically improves the efficiency of rumor source identification.
Abstract: Identifying rumor sources in social networks plays a critical role in limiting the damage caused by them through the timely quarantine of the sources. However, the temporal variation in the topology of social networks and the ongoing dynamic processes challenge our traditional source identification techniques, which were designed for static networks. In this paper, we borrow an idea from criminology and propose a novel method to overcome these challenges. First, we reduce the time-varying networks to a series of static networks by introducing a time-integrating window. Second, instead of inspecting every individual as traditional techniques do, we adopt a reverse dissemination strategy to specify a set of suspects of the real rumor source. This process addresses the scalability issue of source identification problems, and therefore dramatically improves the efficiency of rumor source identification. Third, to determine the real source from the suspects, we employ a novel microscopic rumor spreading model to calculate the maximum likelihood (ML) for each suspect. The one who provides the largest ML estimate is considered the real source. The evaluations are carried out on real social networks with time-varying topology. The experimental results show that our method can reduce the source-seeking area by 60-90 percent in various time-varying social networks. The results further indicate that our method can accurately identify the real source, or an individual who is very close to the real source. To the best of our knowledge, the proposed method is the first that can be used to identify rumor sources in time-varying social networks.
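The reverse dissemination step can be illustrated on a single static snapshot: the source must lie within a bounded number of hops of every node observed to carry the rumor, so intersecting the observers' reachable sets shrinks the search to a small suspect set. A toy sketch with a hypothetical network (the paper applies this per time-integrated window and then ranks suspects by maximum likelihood):

```python
from collections import deque

def reachable_within(graph, start, hops):
    # BFS: all nodes within `hops` steps of `start`
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if dist[u] == hops:
            continue
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

def suspects(graph, observers, hops):
    # the source must be reachable from every observer
    sets = [reachable_within(graph, o, hops) for o in observers]
    return set.intersection(*sets)

# toy undirected network (adjacency lists)
graph = {1: [2], 2: [1, 3, 4], 3: [2, 5], 4: [2], 5: [3]}
# the rumor is observed at nodes 1 and 5; suspects within 2 hops of both
suspect_set = suspects(graph, [1, 5], hops=2)
```

Only the surviving suspects need the expensive per-candidate likelihood computation, which is why the strategy scales where exhaustive inspection does not.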

Journal ArticleDOI
TL;DR: This paper first proposes two kinds of non-interactive commitments for traitor tracing, and presents a fully secure traceable CP-ABE system for cloud storage service built from the proposed commitments.
Abstract: Ciphertext-policy attribute-based encryption (CP-ABE) has been proposed to enable fine-grained access control on encrypted data for cloud storage services. In the context of CP-ABE, since the decryption privilege is shared by multiple users who have the same attributes, it is difficult to identify the original key owner when given an exposed key. This leaves malicious cloud users a chance to leak their access credentials to outsourced data in clouds for profit without the risk of being caught, which severely damages data security. To address this problem, we add the property of traceability to conventional CP-ABE. To effectively catch people who leak their access credentials to outsourced data in clouds for profit, in this paper, we first propose two kinds of non-interactive commitments for traitor tracing. Then we present a fully secure traceable CP-ABE system for cloud storage service built from the proposed commitments. Our proposed commitments for traitor tracing may be of independent interest, as they are both pairing-friendly and homomorphic. We also provide extensive experimental results to confirm the feasibility and efficiency of the proposed solution.

Journal ArticleDOI
TL;DR: A new credibility analysis system for assessing information credibility on Twitter to prevent the proliferation of fake or malicious information is proposed and reveals that a significant balance between recall and precision was achieved for the tested dataset.
Abstract: Information credibility on Twitter has been a topic of interest among researchers in the fields of both computer and social sciences, primarily because of the recent growth of this platform as a tool for information dissemination. Twitter has made it increasingly possible to offer near-real-time transfer of information in a very cost-effective manner. It is now being used as a source of news among a wide array of users around the globe. The beauty of this platform is that it delivers timely content in a tailored manner that makes it possible for users to obtain news regarding their topics of interest. Consequently, the development of techniques that can verify information obtained from Twitter has become a challenging and necessary task. In this paper, we propose a new credibility analysis system for assessing information credibility on Twitter to prevent the proliferation of fake or malicious information. The proposed system consists of four integrated components: a reputation-based component, a credibility classifier engine, a user experience component, and a feature-ranking algorithm. The components operate together in an algorithmic form to analyze and assess the credibility of Twitter tweets and users. We tested the performance of our system on two different datasets from 489,330 unique Twitter accounts. We applied 10-fold cross-validation over four machine learning algorithms. The results reveal that a significant balance between recall and precision was achieved for the tested dataset.

Journal ArticleDOI
TL;DR: The credibility of the CVSS scoring data found in five leading databases (NVD, X-Force, OSVDB, CERT-VN, and Cisco) is assessed, and it is concluded that, with the exception of a few dimensions, the CVSS is quite trustworthy.
Abstract: The Common Vulnerability Scoring System (CVSS) is the state-of-the-art system for assessing software vulnerabilities. However, it has been criticized for lack of validity and practitioner relevance. In this paper, the credibility of the CVSS scoring data found in five leading databases (NVD, X-Force, OSVDB, CERT-VN, and Cisco) is assessed. A Bayesian method is used to infer the most probable true values underlying the imperfect assessments of the databases, thus circumventing the problem that ground truth is not known. It is concluded that, with the exception of a few dimensions, the CVSS is quite trustworthy. The databases are relatively consistent, but some are better than others. The expected accuracy of each database for a given dimension can be found by marginalizing confusion matrices. By this measure, NVD is the best and OSVDB is the worst of the assessed databases.
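The expected-accuracy measure is a direct marginalization: weight each diagonal entry of a database's confusion matrix by the prior probability of the corresponding true value. A small sketch with hypothetical numbers for a three-level dimension (the paper's matrices are inferred from real scoring data):

```python
def expected_accuracy(confusion, prior):
    # confusion[i][j] = P(database reports level j | true level i)
    # prior[i]        = P(true level i)
    # expected accuracy = sum_i prior[i] * confusion[i][i]
    return sum(p * row[i] for i, (p, row) in enumerate(zip(prior, confusion)))

# hypothetical CVSS-style dimension with levels Low / Medium / High
confusion = [
    [0.90, 0.08, 0.02],   # true Low
    [0.10, 0.80, 0.10],   # true Medium
    [0.05, 0.15, 0.80],   # true High
]
prior = [0.5, 0.3, 0.2]

acc = expected_accuracy(confusion, prior)   # 0.45 + 0.24 + 0.16 = 0.85
```

Comparing this scalar across databases for the same dimension is what yields the NVD-best, OSVDB-worst ranking reported in the abstract.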

Journal ArticleDOI
TL;DR: This work proposes PassMatrix, a novel authentication system based on graphical passwords that resists shoulder surfing attacks; a PassMatrix prototype was implemented on Android, and real user experiments were carried out to evaluate its memorability and usability.
Abstract: Authentication based on passwords is used widely in applications for computer security and privacy. However, human actions such as choosing bad passwords and inputting passwords in an insecure way are regarded as "the weakest link" in the authentication chain. Rather than arbitrary alphanumeric strings, users tend to choose passwords that are either short or meaningful for easy memorization. With web applications and mobile apps piling up, people can access these applications anytime and anywhere with various devices. This evolution brings great convenience but also increases the probability of exposing passwords to shoulder surfing attacks. Attackers can observe directly or use external recording devices to collect users' credentials. To overcome this problem, we propose PassMatrix, a novel authentication system based on graphical passwords to resist shoulder surfing attacks. With a one-time valid login indicator and circulative horizontal and vertical bars covering the entire scope of pass-images, PassMatrix offers no hint for attackers to figure out or narrow down the password even if they conduct multiple camera-based attacks. We also implemented a PassMatrix prototype on Android and carried out real user experiments to evaluate its memorability and usability. The experimental results show that the proposed system achieves better resistance to shoulder surfing attacks while maintaining usability.

Journal ArticleDOI
TL;DR: Theoretical analysis and experimental results show that the proposed scheme for securely obtaining the sign of encrypted numbers not only fixes the soundness and security problems, but also achieves higher efficiency.
Abstract: Recently, Rahulamathavan et al. proposed a privacy-preserving scheme for outsourcing SVM classification. Their core contribution is a secure protocol to obtain the sign of numbers in encrypted form. In this paper, we observe that Rahulamathavan et al.'s protocol suffers from some soundness and security problems. We then propose a new scheme to securely obtain the sign of encrypted numbers. Theoretical analysis and experimental results show that our proposed scheme not only fixes the soundness and security problems, but also achieves higher efficiency.

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate the effectiveness and advantages of exploiting a social botnet for spam distribution and digital-influence manipulation through real experiments on Twitter and also trace-driven simulations.
Abstract: Online social networks (OSNs) are increasingly threatened by social bots, which are software-controlled OSN accounts that mimic human users with malicious intentions. A social botnet refers to a group of social bots under the control of a single botmaster, which collaborate to conduct malicious behavior while mimicking the interactions among normal OSN users to reduce their individual risk of being detected. We demonstrate the effectiveness and advantages of exploiting a social botnet for spam distribution and digital-influence manipulation through real experiments on Twitter as well as trace-driven simulations. We also propose corresponding countermeasures and evaluate their effectiveness. Our results can help in understanding the potentially detrimental effects of social botnets and help OSNs improve their bot(net) detection systems.

Journal ArticleDOI
TL;DR: This paper designs and develops SAFe-PC (Semi-Automated Feature generation for Phish Classification), a system that extracts features, elevating some to higher-level features, meant to defeat common phishing email detection strategies.
Abstract: Phishing attacks continue to pose a major threat for computer system defenders, often forming the first step in a multi-stage attack. There have been great strides made in phishing detection; however, some phishing emails appear to pass through filters by making simple structural and semantic changes to the messages. We tackle this problem through the use of a machine learning classifier operating on a large corpus of phishing and legitimate emails. We design SAFe-PC (Semi-Automated Feature generation for Phish Classification), a system to extract features, elevating some to higher-level features, that are meant to defeat common phishing email detection strategies. To evaluate SAFe-PC, we collect a large corpus of phishing emails from the central IT organization at a tier-1 university. The execution of SAFe-PC on the dataset exposes hitherto unknown insights into phishing campaigns directed at university users. SAFe-PC detects more than 70 percent of the emails that had eluded our production deployment of Sophos, a state-of-the-art email filtering tool. It also outperforms SpamAssassin, a commonly used email filtering tool. We also developed an online version of SAFe-PC that can be incrementally retrained with new samples. Its detection performance improves over time as new samples are collected, while the time to retrain the classifier stays constant.
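As a rough illustration of the kind of structural and semantic email features such a classifier might consume, here is a toy extractor. The feature set is hypothetical, not SAFe-PC's actual one:

```python
import re


def extract_features(email_text, subject=""):
    """Toy structural/semantic phishing features (illustrative only)."""
    urls = re.findall(r'https?://[^\s"\']+', email_text)
    return {
        # raw URL count and whether any URL points at a bare IP address
        "num_urls": len(urls),
        "has_ip_url": any(
            re.match(r'https?://\d{1,3}(\.\d{1,3}){3}', u) for u in urls
        ),
        # crude semantic signal: count of urgency keywords
        "urgency_words": sum(
            w in email_text.lower()
            for w in ("urgent", "verify", "suspended", "immediately")
        ),
        # structural signals often abused by phishers
        "subject_all_caps": bool(subject) and subject.isupper(),
        "html_form": "<form" in email_text.lower(),
    }


feats = extract_features(
    "URGENT: verify your account at http://192.168.0.9/login", "ACT NOW"
)
```

A real system would feed such feature vectors into a trained classifier; the abstract notes that SAFe-PC additionally elevates raw features into higher-level ones and supports incremental retraining.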

Journal ArticleDOI
TL;DR: This work proposes a new “Scale Inside-out” approach which, during attacks, reduces the “Resource Utilization Factor” to a minimal value for quick absorption of the attack.
Abstract: Distributed denial of service (DDoS) attacks in cloud computing require quick absorption of attack data. DDoS attack mitigation is usually achieved by dynamically scaling cloud resources so as to quickly identify the onslaught features and combat the attack. This resource scaling comes with an additional cost, which may prove hugely disruptive in the case of longer, sophisticated, and repetitive attacks. In this work, we address an important question: does resource scaling during an attack always result in rapid DDoS mitigation? For this purpose, we conduct real-time DDoS attack experiments to study attack absorption and attack mitigation for various target services in the presence of dynamic cloud resource scaling. We found that activities such as attack absorption, which provide timely attack data input to attack analytics, are adversely compromised by the heavy resource usage generated by the attack. We show that operating-system-level local resource contention, if reduced during attacks, can expedite the overall attack mitigation, which would otherwise not be completed by dynamic scaling of resources alone. We conceive a novel metric, the “Resource Utilization Factor” of each incoming request, as the major component of resource contention. To overcome these issues, we propose a new “Scale Inside-out” approach which, during attacks, reduces the Resource Utilization Factor to a minimal value for quick absorption of the attack. The proposed approach sacrifices victim service resources and provides them to the mitigation service, in addition to other co-located services, to ensure resource availability during the attack. Experimental evaluation shows up to a 95 percent reduction in the total attack downtime of the victim service, in addition to considerable improvements in attack detection time, service reporting time, and downtime of co-located services.
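Since the abstract does not give the exact formulation, here is a hypothetical sketch of a per-request Resource Utilization Factor and of the throttling intent behind “Scale Inside-out”; the weighted-sum form and the weights are invented for illustration:

```python
def resource_utilization_factor(cpu, mem, io, weights=(0.5, 0.3, 0.2)):
    """Hypothetical per-request Resource Utilization Factor: a weighted
    combination of the request's normalized CPU, memory, and I/O shares.
    The paper's actual formulation is not given in the abstract."""
    assert all(0.0 <= x <= 1.0 for x in (cpu, mem, io))
    w_cpu, w_mem, w_io = weights
    return w_cpu * cpu + w_mem * mem + w_io * io


# "Scale inside-out" intent: during an attack, throttle victim-service
# requests so their RUF stays minimal, freeing local resources for the
# co-located mitigation service instead of scaling out.
normal = resource_utilization_factor(0.6, 0.4, 0.3)
throttled = resource_utilization_factor(0.05, 0.05, 0.05)
assert throttled < normal
```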

Journal ArticleDOI
TL;DR: This paper explores the use of Bigraphical Reactive Systems to model the topology of cyber and physical spaces and their dynamics and proposes an automatic planning technique to identify an adaptation strategy enacting security policies at runtime to prevent, circumvent, or mitigate possible security requirements violations.
Abstract: Ubiquitous computing is resulting in a proliferation of cyber-physical systems that host or manage valuable physical and digital assets. These assets can be harmed by malicious agents through both cyber-enabled and physically-enabled attacks, particularly ones that exploit the often-ignored interplay between the cyber and physical worlds. The explicit representation of spatial topology is key to supporting adaptive security policies. In this paper, we explore the use of Bigraphical Reactive Systems to model the topology of cyber and physical spaces and their dynamics. We utilise such models to perform speculative threat analysis through model checking to reason about the consequences of the evolution of topological configurations on the satisfaction of security requirements. We further propose an automatic planning technique to identify an adaptation strategy enacting security policies at runtime to prevent, circumvent, or mitigate possible security requirements violations. We evaluate our approach using a case study concerned with countering insider threats in a building automation system.

Journal ArticleDOI
TL;DR: This work designs VPSearch, a verifiable privacy-preserving keyword search scheme that integrates an adapted homomorphic MAC technique with a privacy-preserving multi-keyword search scheme, enabling the client to verify search results efficiently without storing a local copy of the outsourced data.
Abstract: Although cloud computing offers elastic computation and storage resources, it poses challenges to the verifiability of computations and to data privacy. In this work we investigate verifiability for privacy-preserving multi-keyword search over outsourced documents. As the cloud server may return incorrect results due to system faults or an incentive to reduce computation cost, it is critical to offer verifiability of search results and privacy protection for outsourced data at the same time. To fulfill these requirements, we design a Verifiable Privacy-preserving keyword Search scheme, called VPSearch, by integrating an adapted homomorphic MAC technique with a privacy-preserving multi-keyword search scheme. The proposed scheme enables the client to verify search results efficiently without storing a local copy of the outsourced data. We also propose a random challenge technique with ordering for verifying top-$k$ search results, which can detect incorrect top-$k$ results with probability close to 1. We provide detailed analysis of the security, verifiability, privacy, and efficiency of the proposed scheme. Finally, we implement VPSearch using Matlab and evaluate its performance over three UCI bag-of-words data sets. Experimental results show that authentication tag generation incurs only about 3 percent overhead and that a search query over 300,000 documents takes about 0.98 seconds on a laptop. To verify 300,000 similarity scores for one query, VPSearch costs only 0.29 seconds.

Journal ArticleDOI
TL;DR: A dynamic defense framework that selects an optimal countermeasure against different attack damage costs is proposed and a novel defense-centric model based on a service dependency graph is proposed that minimizes the security deployment cost with respect to the attack damage cost.
Abstract: Designing an efficient defense framework is challenging with respect to a network's complexity, widespread sophisticated attacks, attackers’ ability, and the diversity of security appliances. The Intrusion Response System (IRS) is intended to respond automatically to incidents by balancing the attack damage and countermeasure costs. The existing approaches inherit some limitations, such as using static countermeasure effectiveness, static countermeasure deployment cost, or neglecting the countermeasures’ negative impact on quality of service (QoS). These limitations may lead the IRS to select inappropriate countermeasures and deployment locations, which in turn may reduce network performance and disconnect legitimate users. In this paper, we propose a dynamic defense framework that selects an optimal countermeasure against different attack damage costs. To measure the attack damage cost, we propose a novel defense-centric model based on a service dependency graph. To select the optimal countermeasure dynamically, we formulate the problem at hand using a multi-objective optimization concept that maximizes the security benefit, minimizes the negative impact on users and services, and minimizes the security deployment cost with respect to the attack damage cost.
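A minimal sketch of countermeasure selection, collapsing the three stated objectives into a single weighted score. The paper itself formulates this as a multi-objective optimization; the weights, candidate names, and values below are invented for illustration:

```python
def select_countermeasure(candidates, weights=(1.0, 1.0, 1.0)):
    """Toy weighted-sum scalarization of the three objectives in the
    abstract: maximize security benefit, minimize QoS impact, and
    minimize deployment cost."""
    w_benefit, w_qos, w_cost = weights

    def score(cm):
        return (w_benefit * cm["benefit"]
                - w_qos * cm["qos_impact"]
                - w_cost * cm["deploy_cost"])

    return max(candidates, key=score)


# hypothetical candidate countermeasures with normalized attributes
cms = [
    {"name": "block_ip",     "benefit": 0.90, "qos_impact": 0.6, "deploy_cost": 0.2},
    {"name": "rate_limit",   "benefit": 0.70, "qos_impact": 0.2, "deploy_cost": 0.1},
    {"name": "isolate_host", "benefit": 0.95, "qos_impact": 0.9, "deploy_cost": 0.5},
]
best = select_countermeasure(cms)
```

With equal weights the mild countermeasure wins here, matching the abstract's point that a high-benefit response can still be inappropriate once its QoS impact and deployment cost are accounted for.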

Journal ArticleDOI
TL;DR: Experimental results show that negative iris recognition can achieve a highly promising recognition performance on the typical database CASIA-IrisV3-Interval, and that negative iris recognition supports several important strategies in iris recognition, e.g., shifting and masking.
Abstract: Elements of a person's biometrics are typically stable over the duration of a lifetime, and thus, it is highly important to protect biometric data while supporting recognition (this is also called secure biometric recognition). However, the biometric data that are derived from a person usually vary slightly due to a variety of reasons, such as distortion during picture capture, and it is difficult to use traditional techniques, such as classical encryption algorithms, in secure biometric recognition. The negative database (NDB) is a new technique for privacy preservation. Reversing the NDB has been demonstrated to be an NP-hard problem, and several algorithms for generating hard-to-reverse NDBs have been proposed. In this paper, first, we propose negative iris recognition, which is a novel secure iris recognition scheme that is based on the NDB. We show that negative iris recognition supports several important strategies in iris recognition, e.g., shifting and masking. Next, we analyze the security and efficiency of negative iris recognition. Experimental results show that negative iris recognition is an effective and secure iris recognition scheme. Specifically, negative iris recognition can achieve a highly promising recognition performance (i.e., GAR = 98.94% at FAR = 0.01%, EER = 0.60%) on the typical database CASIA-IrisV3-Interval.
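The shifting and masking strategies mentioned above come from standard iris matching, where two binary iris codes are compared with a masked, shift-tolerant normalized Hamming distance. A toy sketch with short bit-lists (real iris codes are thousands of bits, and this is the plain matching step, not the NDB-protected variant):

```python
def iris_distance(code_a, code_b, mask_a, mask_b, max_shift=2):
    """Masked, shift-tolerant normalized Hamming distance between two
    binary iris codes, taking the best score over circular shifts."""
    n = len(code_a)
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted = code_b[s % n:] + code_b[:s % n]   # circular shift of the code
        smask = mask_b[s % n:] + mask_b[:s % n]     # shift its mask in lockstep
        valid = [i for i in range(n) if mask_a[i] and smask[i]]
        if not valid:
            continue
        hd = sum(code_a[i] != shifted[i] for i in valid) / len(valid)
        best = min(best, hd)                        # shifting absorbs rotation
    return best


a = [1, 0, 1, 1, 0, 0, 1, 0]
b = a[-1:] + a[:-1]        # same code, rotated by one position
full = [1] * 8             # all bits valid (no occlusion)
d = iris_distance(a, b, full, full)
```

Masking discards bit positions occluded by eyelids or eyelashes, and shifting compensates for eye rotation between captures; the abstract's point is that the NDB-based scheme still supports both.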

Journal ArticleDOI
TL;DR: A provenance-based trust framework, namely PROVEST (PROVEnance-baSed Trust model), that aims to achieve accurate peer-to-peer trust assessment and maximize the delivery of correct messages received by destination nodes while minimizing message delay and communication cost under resource-constrained network environments is proposed.
Abstract: Delay tolerant networks (DTNs) are often encountered in military network environments where end-to-end connectivity is not guaranteed due to frequent disconnection or delay. This work proposes a provenance-based trust framework, namely PROVEST (PROVEnance-baSed Trust model), that aims to achieve accurate peer-to-peer trust assessment and maximize the delivery of correct messages received by destination nodes while minimizing message delay and communication cost under resource-constrained network environments. Provenance refers to the history of ownership of a valued object or piece of information. In PROVEST, we leverage the interdependency between the trustworthiness of an information source and that of the information itself. PROVEST takes a data-driven approach to reduce resource consumption in the presence of selfish or malicious nodes while estimating a node's trust dynamically in response to changes in environmental and node conditions. This work adopts a model-based method to evaluate the performance of PROVEST (i.e., trust accuracy and routing performance) using Stochastic Petri Nets. We conduct a comparative performance analysis of PROVEST against existing trust-based and non-trust-based DTN routing protocols to analyze the benefits of PROVEST. We validate PROVEST using a real dataset of DTN mobility traces.

Journal ArticleDOI
TL;DR: This paper describes the design and implementation of PrivateZone, develops an Android application based on the PrivateZone framework, and evaluates the performance overhead imposed on the OS in the REE and on SCLs in the PrEE.
Abstract: ARM TrustZone is widely used to provide a Trusted Execution Environment (TEE) for mobile devices. However, the use of TrustZone is limited because TrustZone resources are only available for some pre-authorized applications. In other words, only alliances of the TrustZone OS vendors and device manufacturers can use TrustZone to secure their services. To help overcome this problem, we designed the PrivateZone framework to enable individual developers to utilize TrustZone resources. Using PrivateZone, developers can run Security Critical Logics (SCL) in a Private Execution Environment (PrEE). The advantage of PrivateZone is its leveraging of TrustZone resources without undermining the security of existing services in the TEE. To guarantee this, PrivateZone creates a PrEE using a memory region that is isolated from both the Rich Execution Environment (REE) and the TEE. In this paper, we describe the design and implementation of PrivateZone. The prototype of PrivateZone was implemented on an Arndale board with a Cortex-A15 dual-core processor. We built PrivateZone by exploring both the security and virtualization extensions of the ARM architecture. To illustrate the usage and efficacy of PrivateZone, we developed an Android application based on the PrivateZone framework and evaluated the performance overhead imposed on the OS in the REE and on SCLs in the PrEE.

Journal ArticleDOI
TL;DR: This paper proposes a privacy-preserving data query method based on conditional oblivious transfer to guarantee that only the authorized entities can query users’ social data and the social cloud server cannot infer anything during the query.
Abstract: Human-to-human infection, as a type of fatal public health threat, can spread rapidly, resulting in a large amount of labor and health cost for treatment, control, and prevention. To slow down the spread of infection, social networks are envisioned to provide detailed contact statistics to isolate susceptible people who have frequent contact with infected patients. In this paper, we propose a novel human-to-human infection analysis approach that exploits social network data and health data collected by social network and e-healthcare technologies. We enable the social cloud server and the health cloud server to exchange social contact information of infected patients and users’ health conditions in a privacy-preserving way. Specifically, we propose a privacy-preserving data query method based on conditional oblivious transfer to guarantee that only authorized entities can query users’ social data and that the social cloud server cannot infer anything during the query. In addition, we propose a privacy-preserving classification-based infection analysis method that can be performed by untrusted cloud servers without accessing the users’ health data. The performance evaluation shows that the proposed approach achieves higher infection analysis accuracy with acceptable computational overhead.