
Showing papers in "IEEE Transactions on Parallel and Distributed Systems in 2014"


Journal ArticleDOI
TL;DR: This paper proposes a basic idea for the MRSE based on secure inner product computation, and gives two significantly improved MRSE schemes to achieve various stringent privacy requirements in two different threat models and further extends these two schemes to support more search semantics.
Abstract: With the advent of cloud computing, data owners are motivated to outsource their complex data management systems from local sites to the commercial public cloud for great flexibility and economic savings. But for protecting data privacy, sensitive data have to be encrypted before outsourcing, which obsoletes traditional data utilization based on plaintext keyword search. Thus, enabling an encrypted cloud data search service is of paramount importance. Considering the large number of data users and documents in the cloud, it is necessary to allow multiple keywords in the search request and return documents in the order of their relevance to these keywords. Related works on searchable encryption focus on single keyword search or Boolean keyword search, and rarely sort the search results. In this paper, for the first time, we define and solve the challenging problem of privacy-preserving multi-keyword ranked search over encrypted data in cloud computing (MRSE). We establish a set of strict privacy requirements for such a secure cloud data utilization system. Among various multi-keyword semantics, we choose the efficient similarity measure of "coordinate matching," i.e., as many matches as possible, to capture the relevance of data documents to the search query. We further use "inner product similarity" to quantitatively evaluate this similarity measure. We first propose a basic idea for the MRSE based on secure inner product computation, and then give two significantly improved MRSE schemes to achieve various stringent privacy requirements in two different threat models. To improve the search experience of the data search service, we further extend these two schemes to support more search semantics. A thorough analysis of the privacy and efficiency guarantees of the proposed schemes is given. Experiments on a real-world data set further show that the proposed schemes indeed introduce low overhead on computation and communication.
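
A minimal plaintext sketch of the "coordinate matching" scoring the abstract describes: a document's relevance is the inner product of its binary keyword vector with the query vector, i.e., the number of matched keywords. The dictionary and documents below are hypothetical, and the actual MRSE schemes evaluate this product over encrypted, blinded vectors rather than in the clear.

```python
# Illustrative sketch (not the paper's encrypted protocol): "coordinate
# matching" scores a document by the inner product of its binary keyword
# vector with the query vector, i.e., the number of matched keywords.
dictionary = ["cloud", "privacy", "search", "ranking", "encryption"]

def to_vector(keywords):
    return [1 if w in keywords else 0 for w in dictionary]

def inner_product(u, v):
    return sum(a * b for a, b in zip(u, v))

docs = {
    "doc1": {"cloud", "privacy", "encryption"},
    "doc2": {"search", "ranking"},
}
query = to_vector({"privacy", "search", "encryption"})

# Rank documents by the number of matched query keywords.
scores = {d: inner_product(to_vector(kw), query) for d, kw in docs.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```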

979 citations


Journal ArticleDOI
TL;DR: This paper proposes Dekey, a new construction in which users do not need to manage any keys on their own but instead securely distribute the convergent key shares across multiple servers and demonstrates that Dekey incurs limited overhead in realistic environments.
Abstract: Data deduplication is a technique for eliminating duplicate copies of data, and has been widely used in cloud storage to reduce storage space and upload bandwidth. Promising as it is, an arising challenge is to perform secure deduplication in cloud storage. Although convergent encryption has been extensively adopted for secure deduplication, a critical issue of making convergent encryption practical is to efficiently and reliably manage a huge number of convergent keys. This paper makes the first attempt to formally address the problem of achieving efficient and reliable key management in secure deduplication. We first introduce a baseline approach in which each user holds an independent master key for encrypting the convergent keys and outsourcing them to the cloud. However, such a baseline key management scheme generates an enormous number of keys with the increasing number of users and requires users to dedicatedly protect the master keys. To this end, we propose Dekey , a new construction in which users do not need to manage any keys on their own but instead securely distribute the convergent key shares across multiple servers. Security analysis demonstrates that Dekey is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement Dekey using the Ramp secret sharing scheme and demonstrate that Dekey incurs limited overhead in realistic environments.
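
For intuition, here is a toy illustration of why convergent encryption enables deduplication: the key is derived from the content itself, so identical files encrypt to identical ciphertexts. The toy keystream below is a stand-in, not production cryptography, and Dekey's additional step of splitting each convergent key into shares across key servers via Ramp secret sharing is not shown.

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    # Convergent encryption derives the key from the data itself,
    # so identical files produce identical keys and ciphertexts.
    return hashlib.sha256(data).digest()

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy deterministic keystream for illustration only; a real system
    # would use a proper block cipher in a deterministic mode.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

f1 = b"same file contents"
f2 = b"same file contents"
c1 = toy_encrypt(f1, convergent_key(f1))
c2 = toy_encrypt(f2, convergent_key(f2))
assert c1 == c2  # equal ciphertexts let the cloud deduplicate
```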

511 citations


Journal ArticleDOI
Hamid Arabnejad1, Jorge G. Barbosa1
TL;DR: The analysis and experiments show that the PEFT algorithm outperforms the state-of-the-art list-based algorithms for heterogeneous systems in terms of schedule length ratio, efficiency, and frequency of best results.
Abstract: Efficient application scheduling algorithms are important for obtaining high performance in heterogeneous computing systems. In this paper, we present a novel list-based scheduling algorithm called Predict Earliest Finish Time (PEFT) for heterogeneous computing systems. The algorithm has the same time complexity as the state-of-the-art algorithm for the same purpose, that is, $O(v^2 \cdot p)$ for $v$ tasks and $p$ processors, but offers significant makespan improvements by introducing a look-ahead feature without increasing the time complexity associated with computation of an optimistic cost table (OCT). The calculated value is an optimistic cost because processor availability is not considered in the computation. Our algorithm is based only on an OCT, which is used both to rank tasks and for processor selection. The analysis and experiments based on randomly generated graphs with various characteristics and graphs of real-world applications show that the PEFT algorithm outperforms the state-of-the-art list-based algorithms for heterogeneous systems in terms of schedule length ratio, efficiency, and frequency of best results.
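
A compact sketch of the OCT recursion as the abstract characterizes it: an exit task's OCT is zero, and each task's OCT on a processor is the maximum over its successors of the cheapest optimistic continuation, ignoring processor availability. The toy DAG, execution times, and communication costs below are hypothetical.

```python
# succ[t]: successors; w[t][p]: execution time of task t on processor p;
# comm[(t1, t2)]: average communication cost. All values are hypothetical.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
w = {"A": [3, 5], "B": [4, 2], "C": [6, 3], "D": [2, 2]}
comm = {("A", "B"): 2, ("A", "C"): 1, ("B", "D"): 3, ("C", "D"): 2}
procs = [0, 1]

oct_table = {}

def oct_value(t, p):
    if (t, p) in oct_table:
        return oct_table[(t, p)]
    best = 0
    for s in succ[t]:  # optimistic: each successor picks its best processor
        cand = min(
            oct_value(s, q) + w[s][q] + (comm[(t, s)] if q != p else 0)
            for q in procs
        )
        best = max(best, cand)
    oct_table[(t, p)] = best
    return best

# Rank tasks by average OCT over processors (higher rank scheduled earlier).
rank = {t: sum(oct_value(t, p) for p in procs) / len(procs) for t in w}
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```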

460 citations


Journal ArticleDOI
TL;DR: This work proposes a new Secure Outsourced ABE system, which supports both secure outsourced key-issuing and decryption and proposes an outsourced ABE construction which provides checkability of the outsourced computation results in an efficient way.
Abstract: Attribute-Based Encryption (ABE) is a promising cryptographic primitive which significantly enhances the versatility of access control mechanisms. Due to the high expressiveness of ABE policies, the computational complexities of ABE key-issuing and decryption are becoming prohibitively high. Although the existing Outsourced ABE solutions are able to offload some intensive computing tasks to a third party, the verifiability of results returned from the third party has yet to be addressed. Aiming at tackling the challenge above, we propose a new Secure Outsourced ABE system, which supports both secure outsourced key-issuing and decryption. Our new method offloads all access policy and attribute related operations in the key-issuing process or decryption to a Key Generation Service Provider (KGSP) and a Decryption Service Provider (DSP), respectively, leaving only a constant number of simple operations for the attribute authority and eligible users to perform locally. In addition, for the first time, we propose an outsourced ABE construction which provides checkability of the outsourced computation results in an efficient way. Extensive security and performance analysis shows that the proposed schemes are secure and practical.

403 citations


Journal ArticleDOI
TL;DR: This work studies the problem of finding the optimal attack strategy--i.e., a data-injection attacking strategy that selects a set of meters to manipulate so as to cause the maximum damage and formalizes the problem and develops efficient algorithms to identify the optimal meter set.
Abstract: It is critical for a power system to estimate its operation state based on meter measurements in the field and the configuration of power grid networks. Recent studies show that the adversary can bypass the existing bad data detection schemes, posing dangerous threats to the operation of power grid systems. Nevertheless, two critical issues remain open: 1) how can an adversary choose the meters to compromise to cause the most significant deviation of the system state estimation, and 2) how can a system operator defend against such attacks? To address these issues, we first study the problem of finding the optimal attack strategy--i.e., a data-injection attacking strategy that selects a set of meters to manipulate so as to cause the maximum damage. We formalize the problem and develop efficient algorithms to identify the optimal meter set. We implement and test our attack strategy on various IEEE standard bus systems, and demonstrate its superiority over a baseline strategy of random selections. To defend against false data-injection attacks, we propose a protection-based defense and a detection-based defense, respectively. For the protection-based defense, we identify and protect critical sensors and make the system more resilient to attacks. For the detection-based defense, we develop the spatial-based and temporal-based detection schemes to accurately identify data-injection attacks.
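
The residual test that the abstract's adversary bypasses can be sketched in a few lines: under a linearized (DC) model z = Hx + e, an injected vector of the form a = Hc shifts the estimated state by c while leaving the measurement residual untouched. The Jacobian and noise levels below are hypothetical.

```python
import numpy as np

# Sketch of DC state estimation with residual-based bad data detection,
# and a stealthy injection a = H @ c that the residual test cannot see.
rng = np.random.default_rng(0)
H = rng.normal(size=(8, 3))          # 8 meters, 3 state variables
x_true = np.array([1.0, 0.5, -0.2])
z = H @ x_true + rng.normal(scale=0.01, size=8)

def residual_norm(z, H):
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

c = np.array([0.3, -0.1, 0.2])       # attacker's chosen state shift
z_attacked = z + H @ c               # stealthy: lies in the column space of H

print(residual_norm(z, H))           # small
print(residual_norm(z_attacked, H))  # equally small, yet state is off by c
```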

353 citations


Journal ArticleDOI
TL;DR: New public-key cryptosystems are described that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible, giving the first public-key patient-controlled encryption for flexible hierarchy.
Abstract: Data sharing is an important functionality in cloud storage. In this paper, we show how to securely, efficiently, and flexibly share data with others in cloud storage. We describe new public-key cryptosystems that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, but encompassing the power of all the keys being aggregated. In other words, the secret key holder can release a constant-size aggregate key for flexible choices of ciphertext set in cloud storage, but the other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or be stored in a smart card with very limited secure storage. We provide formal security analysis of our schemes in the standard model. We also describe other applications of our schemes. In particular, our schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was previously unknown.

314 citations


Journal ArticleDOI
TL;DR: This paper proposes a new secure outsourcing algorithm for (variable-exponent, variable-base) exponentiation modulo a prime in the two untrusted program model and proposes the first efficient outsource-secure algorithm for simultaneous modular exponentiations.
Abstract: With the rapid development of cloud services, the techniques for securely outsourcing the prohibitively expensive computations to untrusted servers are getting more and more attention in the scientific community. Exponentiations modulo a large prime have been considered the most expensive operations in discrete-logarithm-based cryptographic protocols, and they may be burdensome for the resource-limited devices such as RFID tags or smartcards. Therefore, it is important to present an efficient method to securely outsource such operations to (untrusted) cloud servers. In this paper, we propose a new secure outsourcing algorithm for (variable-exponent, variable-base) exponentiation modulo a prime in the two untrusted program model. Compared with the state-of-the-art algorithm, the proposed algorithm is superior in both efficiency and checkability. Based on this algorithm, we show how to achieve outsource-secure Cramer-Shoup encryptions and Schnorr signatures. We then propose the first efficient outsource-secure algorithm for simultaneous modular exponentiations. Finally, we provide the experimental evaluation that demonstrates the efficiency and effectiveness of the proposed outsourcing algorithms and schemes.
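
The following is only a schematic illustration of two ingredients mentioned in the abstract, exponent splitting across two untrusted servers and probabilistic checking of their answers; it is not the paper's algorithm, which also blinds the base and precomputes its test pairs offline.

```python
import random

# Illustrative only: NOT the paper's construction. To compute u^a mod p
# without revealing the exponent a to either server, split a = a1 + a2
# (mod p-1) and combine the two partial powers; probabilistically check
# each server against a query with a known answer.
p = 2**61 - 1          # a prime; small for illustration
g = 3

def server(base, exp):  # an untrusted server just computes a power
    return pow(base, exp, p)

def outsourced_pow(u, a):
    a1 = random.randrange(p - 1)
    a2 = (a - a1) % (p - 1)            # a = a1 + a2 (mod p-1), by Fermat
    # Checkability: a test query whose answer the client knows. Real
    # schemes precompute such (t, g^t) pairs offline; computing it here
    # inline is purely for the sake of a self-contained example.
    t = random.randrange(p - 1)
    if server(g, t) != pow(g, t, p):
        raise ValueError("server cheated on test query")
    return (server(u, a1) * server(u, a2)) % p

u, a = 123456789, 987654321
assert outsourced_pow(u, a) == pow(u, a, p)
```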

296 citations


Journal ArticleDOI
TL;DR: Security analysis indicates that EPPDR can achieve privacy-preservation of electricity demand, forward secrecy of users' session keys, and evolution of users' private keys.
Abstract: Smart grid has recently emerged as the next generation of power grid due to its distinguished features, such as distributed energy control, robustness to load fluctuations, and close user-grid interactions. As a vital component of smart grid, demand response can maintain supply-demand balance and reduce users' electricity bills. Furthermore, it is also critical to preserve user privacy and cyber security in smart grid. In this paper, we propose an efficient privacy-preserving demand response (EPPDR) scheme which employs homomorphic encryption to achieve privacy-preserving demand aggregation and efficient response. In addition, an adaptive key evolution technique is further investigated to ensure that users' session keys are forward secure. Security analysis indicates that EPPDR can achieve privacy-preservation of electricity demand, forward secrecy of users' session keys, and evolution of users' private keys. In comparison with an existing scheme which also achieves forward secrecy, EPPDR has better efficiency in terms of computation and communication overheads and can adaptively control the key evolution to balance the trade-off between communication efficiency and security level.
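
A self-contained toy of the homomorphic aggregation step: with an additively homomorphic scheme such as Paillier, multiplying users' ciphertexts yields an encryption of the summed demand, so the operator learns the aggregate without seeing individual readings. The abstract does not name the concrete scheme, so treat this textbook Paillier (with deliberately tiny parameters) as a generic stand-in.

```python
import random
from math import gcd

# Minimal textbook Paillier (toy parameters!) illustrating aggregation:
# the product of ciphertexts decrypts to the sum of plaintext demands.
p, q = 293, 433                    # toy primes; real keys are ~2048-bit
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # Python 3.8+ modular inverse

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

demands = [17, 42, 8]              # individual electricity demands
agg = 1
for d in demands:
    agg = (agg * encrypt(d)) % n2  # additive homomorphism: product -> sum
assert decrypt(agg) == sum(demands)
```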

275 citations


Journal ArticleDOI
TL;DR: A pair of efficient and lightweight authentication protocols enable remote WBAN users to anonymously enjoy healthcare service and outperform the existing schemes in terms of a better trade-off between desirable security properties and computational overhead, nicely meeting the needs of WBANs.
Abstract: Wireless body area network (WBAN) has been recognized as one of the promising wireless sensor technologies for improving healthcare service, thanks to its capability of seamlessly and continuously exchanging medical information in real time. However, the lack of a clear in-depth defense line in such a new networking paradigm would make its potential users worry about the leakage of their private information, especially to those unauthenticated or even malicious adversaries. In this paper, we present a pair of efficient and lightweight authentication protocols to enable remote WBAN users to anonymously enjoy healthcare service. In particular, our authentication protocols are rooted in a novel certificateless signature (CLS) scheme, which is computationally efficient and provably secure against existential forgery under adaptively chosen message attacks in the random oracle model. Also, our designs ensure that application or service providers have no privilege to disclose the real identities of users. Even the network manager, which serves as private key generator in the authentication protocols, is prevented from impersonating legitimate users. The performance of our designs is evaluated through both theoretic analysis and experimental simulations, and the comparative studies demonstrate that they outperform the existing schemes in terms of a better trade-off between desirable security properties and computational overhead, nicely meeting the needs of WBANs.

271 citations


Journal ArticleDOI
TL;DR: This paper presents a DoS attack detection system that uses multivariate correlation analysis (MCA) to characterize network traffic accurately by extracting the geometrical correlations between network traffic features, detecting both known and unknown attacks by learning the patterns of legitimate network traffic only.
Abstract: Interconnected systems, such as Web servers, database servers, cloud computing servers and so on, are now under threats from network attackers. As one of the most common and aggressive means, denial-of-service (DoS) attacks cause serious impact on these computing systems. In this paper, we present a DoS attack detection system that uses multivariate correlation analysis (MCA) for accurate network traffic characterization by extracting the geometrical correlations between network traffic features. Our MCA-based DoS attack detection system employs the principle of anomaly-based detection in attack recognition. This makes our solution capable of detecting known and unknown DoS attacks effectively by learning the patterns of legitimate network traffic only. Furthermore, a triangle-area-based technique is proposed to enhance and to speed up the process of MCA. The effectiveness of our proposed detection system is evaluated using the KDD Cup 99 data set, and the influences of both non-normalized data and normalized data on the performance of the proposed detection system are examined. The results show that our system outperforms two other previously developed state-of-the-art approaches in terms of detection accuracy.
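
A hedged sketch of the triangle-area idea: for every pair of traffic features, the triangle formed with the origin has area proportional to the product of the two feature values, and a profile of these areas is learned from legitimate traffic only. The feature distributions and the simple k-sigma test below are illustrative simplifications, not the paper's exact detector.

```python
import numpy as np

# Each record's feature pairs (i, j) form a triangle with the origin whose
# area 0.5*|xi|*|xj| captures the correlation between the two features.
def triangle_area_map(x):
    areas = []
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            areas.append(0.5 * abs(x[i]) * abs(x[j]))
    return np.array(areas)

rng = np.random.default_rng(1)
normal = rng.normal([5, 2, 1], 0.3, size=(500, 3))   # legitimate traffic
profile = np.array([triangle_area_map(x) for x in normal])
mean, std = profile.mean(axis=0), profile.std(axis=0)

def is_attack(x, k=3.0):
    tam = triangle_area_map(x)
    return bool(np.any(np.abs(tam - mean) > k * std))  # simple k-sigma test

print(is_attack(np.array([5, 2, 1])))    # False: looks legitimate
print(is_attack(np.array([50, 2, 9])))   # True: anomalous correlations
```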

244 citations


Journal ArticleDOI
TL;DR: This paper presents a verifiable privacy-preserving multi-keyword text search (MTS) scheme with similarity-based ranking and proposes two secure index schemes to meet the stringent privacy requirements under strong threat models.
Abstract: With the growing popularity of cloud computing, huge amounts of documents are outsourced to the cloud for reduced management cost and ease of access. Although encryption helps protect user data confidentiality, it makes well-functioning yet practically efficient secure search over encrypted data a challenging problem. In this paper, we present a verifiable privacy-preserving multi-keyword text search (MTS) scheme with similarity-based ranking to address this problem. To support multi-keyword search and search result ranking, we propose to build the search index based on term frequency and the vector space model with cosine similarity measure to achieve higher search result accuracy. To improve the search efficiency, we propose a tree-based index structure and various adaptive methods for the multi-dimensional (MD) algorithm so that the practical search efficiency is much better than that of linear search. To further enhance the search privacy, we propose two secure index schemes to meet the stringent privacy requirements under strong threat models, i.e., the known ciphertext model and the known background model. In addition, we devise a scheme upon the proposed index tree structure to enable authenticity checks over the returned search results. Finally, we demonstrate the effectiveness and efficiency of the proposed schemes through extensive experimental evaluation.
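
The plaintext ranking model underlying the index can be sketched directly: term-frequency vectors compared by cosine similarity. The documents below are hypothetical, and the scheme itself stores this index encrypted inside a tree structure rather than evaluating it in the clear.

```python
import math
from collections import Counter

docs = {
    "d1": "cloud storage search over encrypted cloud data",
    "d2": "ranked keyword search with similarity ranking",
}

def tf_vector(text):
    # term-frequency vector of a whitespace-tokenized document
    return Counter(text.split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

query = tf_vector("encrypted cloud search")
ranked = sorted(docs, key=lambda d: -cosine(tf_vector(docs[d]), query))
print(ranked)
```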

Journal ArticleDOI
TL;DR: A new decentralized access control scheme for secure data storage in clouds that supports anonymous authentication and access control and has the added feature of access control in which only valid users are able to decrypt the stored information.
Abstract: We propose a new decentralized access control scheme for secure data storage in clouds that supports anonymous authentication. In the proposed scheme, the cloud verifies the authenticity of the series without knowing the user's identity before storing data. Our scheme also has the added feature of access control in which only valid users are able to decrypt the stored information. The scheme prevents replay attacks and supports creation, modification, and reading data stored in the cloud. We also address user revocation. Moreover, our authentication and access control scheme is decentralized and robust, unlike other access control schemes designed for clouds which are centralized. The communication, computation, and storage overheads are comparable to centralized approaches.

Journal ArticleDOI
TL;DR: This paper designs an expressive, efficient and revocable data access control scheme for multi-authority cloud storage systems, where multiple authorities co-exist and each authority is able to issue attributes independently.
Abstract: Data access control is an effective way to ensure the data security in the cloud. Due to data outsourcing and untrusted cloud servers, data access control becomes a challenging issue in cloud storage systems. Ciphertext-Policy Attribute-based Encryption (CP-ABE) is regarded as one of the most suitable technologies for data access control in cloud storage, because it gives data owners more direct control on access policies. However, it is difficult to directly apply existing CP-ABE schemes to data access control for cloud storage systems because of the attribute revocation problem. In this paper, we design an expressive, efficient and revocable data access control scheme for multi-authority cloud storage systems, where multiple authorities co-exist and each authority is able to issue attributes independently. Specifically, we propose a revocable multi-authority CP-ABE scheme, and apply it as the underlying technique to design the data access control scheme. Our attribute revocation method can efficiently achieve both forward security and backward security. The analysis and simulation results show that our proposed data access control scheme is secure in the random oracle model and is more efficient than previous works.

Journal ArticleDOI
TL;DR: The design and development of the automata processor is presented, a massively parallel non-von Neumann semiconductor architecture that is purpose-built for automata processing that exceeds the capabilities of high-performance FPGA-based implementations of regular expression processors.
Abstract: We present the design and development of the automata processor, a massively parallel non-von Neumann semiconductor architecture that is purpose-built for automata processing. This architecture can directly implement non-deterministic finite automata in hardware and can be used to implement complex regular expressions, as well as other types of automata which cannot be expressed as regular expressions. We demonstrate that this architecture exceeds the capabilities of high-performance FPGA-based implementations of regular expression processors. We report on the development of an XML-based language for describing automata for easy compilation targeted to the hardware. The automata processor can be effectively utilized in a diverse array of applications driven by pattern matching, such as cyber security and computational biology.
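
A small software analogue of what the automata processor parallelizes in hardware: simulating an NFA by advancing the whole set of active states on each input symbol. The transition table below is a hypothetical NFA for the pattern "ab*c" occurring anywhere in the input.

```python
# transitions[state][symbol] -> set of next states (hypothetical NFA)
transitions = {
    0: {"a": {0, 1}, "b": {0}, "c": {0}},   # self-loop: scan anywhere
    1: {"b": {1}, "c": {2}},                # after 'a', consume b* then 'c'
}
accepting = {2}

def matches(text):
    active = {0}
    for ch in text:
        # all active states step in parallel, as the hardware would
        active = set().union(
            *(transitions.get(s, {}).get(ch, set()) for s in active)
        )
        if active & accepting:
            return True
        active.add(0)  # keep scanning for later occurrences
    return False

print(matches("xxabbbcyy"))  # True
print(matches("xxabby"))     # False
```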

Journal ArticleDOI
TL;DR: This paper solves the open problem of collaborative learning by utilizing the power of cloud computing, and adopts and tailors the BGN "doubly homomorphic" encryption algorithm for the multiparty setting to support flexible operations over ciphertexts.
Abstract: To improve the accuracy of learning result, in practice multiple parties may collaborate through conducting joint Back-Propagation neural network learning on the union of their respective data sets. During this process no party wants to disclose her/his private data to others. Existing schemes supporting this kind of collaborative learning are either limited in the way of data partition or just consider two parties. There lacks a solution that allows two or more parties, each with an arbitrarily partitioned data set, to collaboratively conduct the learning. This paper solves this open problem by utilizing the power of cloud computing. In our proposed scheme, each party encrypts his/her private data locally and uploads the ciphertexts into the cloud. The cloud then executes most of the operations pertaining to the learning algorithms over ciphertexts without knowing the original private data. By securely offloading the expensive operations to the cloud, we keep the computation and communication costs on each party minimal and independent to the number of participants. To support flexible operations over ciphertexts, we adopt and tailor the BGN "doubly homomorphic" encryption algorithm for the multiparty setting. Numerical analysis and experiments on commodity cloud show that our scheme is secure, efficient, and accurate.

Journal ArticleDOI
TL;DR: This paper proposes a dynamic resource allocation strategy to counter DDoS attacks against individual cloud customers and establishes a mathematical model to approximate the needs of the resource investment based on queueing theory.
Abstract: Cloud is becoming a dominant computing platform. Naturally, a question that arises is whether we can beat notorious DDoS attacks in a cloud environment. Researchers have demonstrated that the essential issue of DDoS attack and defense is resource competition between defenders and attackers. A cloud usually possesses profound resources and has full control and dynamic allocation capability of its resources. Therefore, cloud offers us the potential to overcome DDoS attacks. However, individual cloud hosted servers are still vulnerable to DDoS attacks if they still run in the traditional way. In this paper, we propose a dynamic resource allocation strategy to counter DDoS attacks against individual cloud customers. When a DDoS attack occurs, we employ the idle resources of the cloud to clone sufficient intrusion prevention servers for the victim in order to quickly filter out attack packets and guarantee the quality of the service for benign users simultaneously. We establish a mathematical model to approximate the needs of our resource investment based on queueing theory. Through careful system analysis and real-world data set experiments, we conclude that we can defeat DDoS attacks in a cloud environment.
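
One way to make the queueing-theory step concrete, under the assumption of an M/M/c model: compute the Erlang C probability of queueing and clone the smallest number of intrusion-prevention servers that keeps it below a target. The arrival and service rates below are hypothetical, and the paper's actual model may differ.

```python
from math import factorial

def erlang_c(lam, mu, c):
    # Probability an arriving job must queue in an M/M/c system.
    a = lam / mu                      # offered load
    if a >= c:
        return 1.0                    # unstable: certain queueing
    top = a**c / factorial(c) * c / (c - a)
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

def servers_needed(lam, mu, target=0.05):
    # Smallest number of cloned filtering servers meeting the target.
    c = 1
    while erlang_c(lam, mu, c) > target:
        c += 1
    return c

# e.g., attack traffic arriving at 900 batches/s, each server filters 100/s
print(servers_needed(lam=900, mu=100))
```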

Journal ArticleDOI
TL;DR: This article proposes an elastic auto-parallelization solution that can dynamically adjust the number of channels used to achieve high throughput without unnecessarily wasting resources and can handle partitioned stateful operators via run-time state migration, which is fully transparent to the application developers.
Abstract: This article addresses the profitability problem associated with auto-parallelization of general-purpose distributed data stream processing applications. Auto-parallelization involves locating regions in the application's data flow graph that can be replicated at run-time to apply data partitioning, in order to achieve scale. In order to make auto-parallelization effective in practice, the profitability question needs to be answered: How many parallel channels provide the best throughput? The answer to this question changes depending on the workload dynamics and resource availability at run-time. In this article, we propose an elastic auto-parallelization solution that can dynamically adjust the number of channels used to achieve high throughput without unnecessarily wasting resources. Most importantly, our solution can handle partitioned stateful operators via run-time state migration, which is fully transparent to the application developers. We provide an implementation and evaluation of the system on an industrial-strength data stream processing platform to validate our solution.
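
A minimal sketch of an elastic profitability controller in the spirit of the abstract: expand the number of parallel channels while measured throughput keeps improving, and stop at the plateau. The measure_throughput callback and the epsilon threshold are assumptions for illustration, not the paper's actual control algorithm.

```python
def elastic_controller(measure_throughput, max_channels=16, epsilon=0.05):
    channels = 1
    best = measure_throughput(channels)
    while channels < max_channels:
        t = measure_throughput(channels + 1)
        if t > best * (1 + epsilon):     # expanding still helps
            channels += 1
            best = t
        else:                            # plateau: stop wasting resources
            break
    return channels

# Toy workload: throughput saturates around 5 channels.
print(elastic_controller(lambda c: min(c, 5) * 100.0))
```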

Journal ArticleDOI
TL;DR: This work proposes a programming framework called Medusa which enables developers to leverage the capabilities of GPUs by writing sequential C/C++ code and develops a series of graph-centric optimizations based on the architecture features of GPUs for efficiency.
Abstract: Graphs are common data structures for many applications, and efficient graph processing is a must for application performance. Recently, the graphics processing unit (GPU) has been adopted to accelerate various graph processing algorithms such as BFS and shortest paths. However, it is difficult to write correct and efficient GPU programs and even more difficult for graph processing due to the irregularities of graph structures. To simplify graph processing on GPUs, we propose a programming framework called Medusa which enables developers to leverage the capabilities of GPUs by writing sequential C/C++ code. Medusa offers a small set of user-defined APIs and embraces a runtime system to automatically execute those APIs in parallel on the GPU. We develop a series of graph-centric optimizations based on the architecture features of GPUs for efficiency. Additionally, Medusa is extended to execute on multiple GPUs within a machine. Our experiments show that 1) Medusa greatly simplifies implementation of GPGPU programs for graph processing, with many fewer lines of source code written by developers and 2) the optimization techniques significantly improve the performance of the runtime system, making its performance comparable with or better than manually tuned GPU graph operations.

Journal ArticleDOI
TL;DR: Simulation experiments show that the proposed algorithm increases the likelihood of deadlines being met and reduces the total execution time of applications as the budget available for replication increases.
Abstract: The elasticity of Cloud infrastructures makes them a suitable platform for execution of deadline-constrained workflow applications, because resources available to the application can be dynamically increased to enable application speedup. Existing research in execution of scientific workflows in Clouds either try to minimize the workflow execution time ignoring deadlines and budgets or focus on the minimization of cost while trying to meet the application deadline. However, they implement limited contingency strategies to correct delays caused by underestimation of tasks execution time or fluctuations in the delivered performance of leased public Cloud resources. To mitigate effects of performance variation of resources on soft deadlines of workflow applications, we propose an algorithm that uses idle time of provisioned resources and budget surplus to replicate tasks. Simulation experiments with four well-known scientific workflows show that the proposed algorithm increases the likelihood of deadlines being met and reduces the total execution time of applications as the budget available for replication increases.

Journal ArticleDOI
TL;DR: Three online incentive mechanisms based on online reverse auction, named TBA, TOIM and TOIM-AD, are designed; TBA pursues platform utility maximization, while TOIM and TOIM-AD achieve the crucial property of truthfulness.
Abstract: Off-the-shelf smartphones have boosted large-scale participatory sensing applications as they are equipped with various functional sensors, possess powerful computation and communication capabilities, and proliferate at a breathtaking pace. Yet the low participation level of smartphone users due to various resource consumptions, such as time and power, remains a hurdle that prevents the enjoyment brought by sensing applications. Recently, some researchers have done pioneering work in motivating users to contribute their resources by designing incentive mechanisms, which are able to provide certain rewards for participation. However, none of these works considered smartphone users' nature of opportunistically appearing in the area of interest. Specifically, for a general smartphone sensing application, the platform distributes tasks to each user on her arrival and has to make an immediate decision according to the user's reply. To accommodate this general setting, we design three online incentive mechanisms, named TBA, TOIM and TOIM-AD, based on online reverse auction. TBA is designed to pursue platform utility maximization, while TOIM and TOIM-AD achieve the crucial property of truthfulness. All mechanisms possess the desired properties of computational efficiency, individual rationality, and profitability. Besides, they are highly competitive compared to the optimal offline solution. The extensive simulation results reveal the impact of the key parameters and show good approximation to the state-of-the-art offline mechanism.

Journal ArticleDOI
TL;DR: This paper proposes a scalable two-phase top-down specialization (TDS) approach to anonymize large-scale data sets using the MapReduce framework on cloud and demonstrates that the scalability and efficiency of TDS can be significantly improved over existing approaches.
Abstract: A large number of cloud services require users to share private data like electronic health records for data analysis or mining, bringing privacy concerns. Anonymizing data sets via generalization to satisfy certain privacy requirements such as k-anonymity is a widely used category of privacy preserving techniques. At present, the scale of data in many cloud applications increases tremendously in accordance with the Big Data trend, thereby making it a challenge for commonly used software tools to capture, manage, and process such large-scale data within a tolerable elapsed time. As a result, it is a challenge for existing anonymization approaches to achieve privacy preservation on privacy-sensitive large-scale data sets due to their insufficiency of scalability. In this paper, we propose a scalable two-phase top-down specialization (TDS) approach to anonymize large-scale data sets using the MapReduce framework on cloud. In both phases of our approach, we deliberately design a group of innovative MapReduce jobs to concretely accomplish the specialization computation in a highly scalable way. Experimental evaluation results demonstrate that with our approach, the scalability and efficiency of TDS can be significantly improved over existing approaches.
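
For concreteness, the privacy target that TDS generalizes toward can be checked in a few lines: a data set is k-anonymous when every quasi-identifier combination occurs at least k times. The records and the age-bucketing generalization below are hypothetical.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    # every quasi-identifier combination must occur at least k times
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

def generalize_age(r):
    # one specialization level up: exact age -> decade range
    r = dict(r)
    r["age"] = f"{(r['age'] // 10) * 10}-{(r['age'] // 10) * 10 + 9}"
    return r

records = [
    {"age": 23, "zip": "530**", "disease": "flu"},
    {"age": 27, "zip": "530**", "disease": "cold"},
    {"age": 25, "zip": "530**", "disease": "flu"},
]
qi = ["age", "zip"]
print(is_k_anonymous(records, qi, k=2))                        # False
print(is_k_anonymous([generalize_age(r) for r in records], qi, k=2))  # True
```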

Journal ArticleDOI
TL;DR: In this article, the authors present an analytical model based on stochastic reward nets (SRNs) that is both scalable to model systems composed of thousands of resources and flexible to represent different policies and cloud-specific strategies.
Abstract: Cloud data center management is a key problem due to the numerous and heterogeneous strategies that can be applied, ranging from the VM placement to the federation with other clouds. Performance evaluation of cloud computing infrastructures is required to predict and quantify the cost-benefit of a strategy portfolio and the corresponding quality of service (QoS) experienced by users. Such analyses are not feasible by simulation or on-the-field experimentation, due to the great number of parameters that have to be investigated. In this paper, we present an analytical model, based on stochastic reward nets (SRNs), that is both scalable to model systems composed of thousands of resources and flexible to represent different policies and cloud-specific strategies. Several performance metrics are defined and evaluated to analyze the behavior of a cloud data center: utilization, availability, waiting time, and responsiveness. A resiliency analysis is also provided to take into account load bursts. Finally, a general approach is presented that, starting from the concept of system capacity, can help system managers to opportunely set the data center parameters under different working conditions.

Journal ArticleDOI
TL;DR: A dynamic trust management protocol for secure routing optimization in DTN environments in the presence of well-behaved, selfish and malicious nodes is designed and validated and can effectively trade off message overhead and message delay for a significant gain in delivery ratio.
Abstract: Delay tolerant networks (DTNs) are characterized by high end-to-end latency, frequent disconnection, and opportunistic communication over unreliable wireless links. In this paper, we design and validate a dynamic trust management protocol for secure routing optimization in DTN environments in the presence of well-behaved, selfish and malicious nodes. We develop a novel model-based methodology for the analysis of our trust protocol and validate it via extensive simulation. Moreover, we address dynamic trust management, i.e., determining and applying the best operational settings at runtime in response to dynamically changing network conditions to minimize trust bias and to maximize the routing application performance. We perform a comparative analysis of our proposed routing protocol against Bayesian trust-based and non-trust based (PROPHET and epidemic) routing protocols. The results demonstrate that our protocol is able to deal with selfish behaviors and is resilient against trust-related attacks. Furthermore, our trust-based routing protocol can effectively trade off message overhead and message delay for a significant gain in delivery ratio. Our trust-based routing protocol operating under identified best settings outperforms Bayesian trust-based routing and PROPHET, and approaches the ideal performance of epidemic routing in delivery ratio and message delay without incurring high message or protocol maintenance overhead.

Journal ArticleDOI
TL;DR: Theoretical analysis and experimental results demonstrate that the proposed scheme can offer not only enhanced security and flexibility, but also significantly lower overhead for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.
Abstract: Cloud computing opens a new era in IT as it can provide various elastic and scalable IT services in a pay-as-you-go fashion, where its users can reduce the huge capital investments in their own IT infrastructure. In this philosophy, users of cloud storage services no longer physically maintain direct control over their data, which makes data security one of the major concerns of using cloud. Existing research work already allows data integrity to be verified without possession of the actual data file. When the verification is done by a trusted third party, this verification process is also called data auditing, and this third party is called an auditor. However, such schemes in existence suffer from several common drawbacks. First, a necessary authorization/authentication process is missing between the auditor and the cloud service provider, i.e., anyone can challenge the cloud service provider for a proof of integrity of a certain file, which potentially puts the quality of the so-called ‘auditing-as-a-service’ at risk. Second, although some of the recent work based on BLS signatures can already support fully dynamic data updates, it only supports updates with fixed-size blocks as the basic unit, which we call coarse-grained updates. As a result, every small update will cause re-computation and updating of the authenticator for an entire file block, which in turn causes higher storage and communication overheads. In this paper, we provide a formal analysis of possible types of fine-grained data updates and propose a scheme that can fully support authorized auditing and fine-grained update requests. Based on our scheme, we also propose an enhancement that can dramatically reduce communication overheads for verifying small updates. Theoretical analysis and experimental results demonstrate that our scheme can offer not only enhanced security and flexibility, but also significantly lower overhead for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.
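
To see why block granularity matters, here is a plain Merkle hash tree over file blocks, the family of authenticated structures such auditing schemes build on (the paper's construction is more elaborate than this): verifying or updating one block touches only a logarithmic number of hashes, yet with coarse fixed-size blocks even a one-byte edit forces recomputing the authenticator for the whole block.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # build the tree bottom-up; the root authenticates every block
    level = [h(b) for b in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
root = merkle_root(blocks)

# A small update to one block changes only that leaf's path to the root.
blocks[2] = b"block-2 (edited)"
assert merkle_root(blocks) != root
```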

Journal ArticleDOI
TL;DR: This work proposes a heuristic energy-aware stochastic task scheduling algorithm called ESTS, which can achieve high scheduling performance for BoT applications with low time complexity $O(n(M+\log n))$, where $n$ is the number of tasks and $M$ is the total number of processor frequencies.
Abstract: In the past few years, with the rapid development of heterogeneous computing systems (HCS), the issue of energy consumption has attracted a great deal of attention. How to reduce energy consumption is currently a critical issue in designing HCS. In response to this challenge, many energy-aware scheduling algorithms have been developed primarily using the dynamic voltage-frequency scaling (DVFS) capability which has been incorporated into recent commodity processors. However, these techniques are unsatisfactory in minimizing both schedule length and energy consumption. Furthermore, most algorithms schedule tasks according to their average-case execution times and do not consider task execution times with probability distributions in the real-world. In realizing this, we study the problem of scheduling a bag-of-tasks (BoT) application, made of a collection of independent stochastic tasks with normal distributions of task execution times, on a heterogeneous platform with deadline and energy consumption budget constraints. We build execution time and energy consumption models for stochastic tasks on a single processor. We derive the expected value and variance of schedule length on HCS by Clark's equations. We formulate our stochastic task scheduling problem as a linear programming problem, in which we maximize the weighted probability of combined schedule length and energy consumption metric under deadline and energy consumption budget constraints. We propose a heuristic energy-aware stochastic task scheduling algorithm called ESTS to solve this problem. Our algorithm can achieve high scheduling performance for BoT applications with low time complexity $O(n(M+\log n))$ , where $n$ is the number of tasks and $M$ is the total number of processor frequencies. Our extensive simulations for performance evaluation based on randomly generated stochastic applications and real-world applications clearly demonstrate that our proposed heuristic algorithm can improve the weighted probability that both the deadline and the energy consumption budget constraints can be met, and has the capability of balancing between schedule length and energy consumption.
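
For reference, Clark's equations, which the abstract uses to derive the expected value and variance of the schedule length, give the first two moments of $\max(X_1, X_2)$ for jointly normal $X_1 \sim N(\mu_1, \sigma_1^2)$, $X_2 \sim N(\mu_2, \sigma_2^2)$ with correlation $\rho$ (stated here in generic notation, which may differ from the paper's):

$$a^2 = \sigma_1^2 + \sigma_2^2 - 2\rho\,\sigma_1\sigma_2, \qquad \alpha = \frac{\mu_1 - \mu_2}{a},$$
$$\mathbb{E}[\max(X_1, X_2)] = \mu_1\Phi(\alpha) + \mu_2\Phi(-\alpha) + a\,\varphi(\alpha),$$
$$\mathbb{E}[\max(X_1, X_2)^2] = (\mu_1^2 + \sigma_1^2)\Phi(\alpha) + (\mu_2^2 + \sigma_2^2)\Phi(-\alpha) + (\mu_1 + \mu_2)\,a\,\varphi(\alpha),$$

where $\varphi$ and $\Phi$ are the standard normal density and distribution function. Applying these pairwise propagates the moments of the schedule length through the maximum over processors.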

Journal ArticleDOI
TL;DR: This paper proposes a cyber-physical codesign approach to structural health monitoring based on wireless sensor networks that closely integrates flexibility-based damage localization methods that allow a tradeoff between the number of sensors and the resolution of damage localization, and an energy-efficient, multilevel computing architecture specifically designed to leverage the multiresolution feature of the flexibility-based approach.
Abstract: Our deteriorating civil infrastructure faces the critical challenge of long-term structural health monitoring for damage detection and localization. In contrast to existing research that often separates the designs of wireless sensor networks and structural engineering algorithms, this paper proposes a cyber-physical codesign approach to structural health monitoring based on wireless sensor networks. Our approach closely integrates 1) flexibility-based damage localization methods that allow a tradeoff between the number of sensors and the resolution of damage localization, and 2) an energy-efficient, multilevel computing architecture specifically designed to leverage the multiresolution feature of the flexibility-based approach. The proposed approach has been implemented on the Intel Imote2 platform. Experiments on a simulated truss structure and a real full-scale truss structure demonstrate the system's efficacy in damage localization and energy efficiency.

Journal ArticleDOI
TL;DR: This paper proposes a novel collaborative filtering-based Web service recommender system to help users select services with optimal Quality-of-Service (QoS) performance, and achieves considerable improvement on the recommendation accuracy.
Abstract: Web services are integrated software components for the support of interoperable machine-to-machine interaction over a network. Web services have been widely employed for building service-oriented applications in both industry and academia in recent years. The number of publicly available Web services is steadily increasing on the Internet. However, this proliferation makes it hard for a user to select a proper Web service among a large amount of service candidates. An inappropriate service selection may cause many problems (e.g., ill-suited performance) to the resulting applications. In this paper, we propose a novel collaborative filtering-based Web service recommender system to help users select services with optimal Quality-of-Service (QoS) performance. Our recommender system employs the location information and QoS values to cluster users and services, and makes personalized service recommendation for users based on the clustering results. Compared with existing service recommendation methods, our approach achieves considerable improvement on the recommendation accuracy. Comprehensive experiments are conducted involving more than 1.5 million QoS records of real-world Web services to demonstrate the effectiveness of our approach.
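
A minimal sketch of the neighborhood-based collaborative filtering step, omitting the paper's location-aware clustering: predict a user's missing QoS value for a service as a similarity-weighted average over users who have invoked it. The QoS matrix below is hypothetical.

```python
import math

# qos[user][service]: observed QoS value (e.g., response time); None = missing
qos = {
    "u1": {"s1": 0.8, "s2": 1.9, "s3": None},
    "u2": {"s1": 0.9, "s2": 2.0, "s3": 0.4},
    "u3": {"s1": 3.1, "s2": 0.2, "s3": 2.5},
}

def similarity(a, b):
    # cosine similarity over services both users have invoked
    common = [s for s in qos[a] if qos[a][s] is not None and qos[b][s] is not None]
    if not common:
        return 0.0
    dot = sum(qos[a][s] * qos[b][s] for s in common)
    na = math.sqrt(sum(qos[a][s] ** 2 for s in common))
    nb = math.sqrt(sum(qos[b][s] ** 2 for s in common))
    return dot / (na * nb)

def predict(user, service):
    neighbors = [(similarity(user, v), v) for v in qos
                 if v != user and qos[v][service] is not None]
    num = sum(sim * qos[v][service] for sim, v in neighbors)
    den = sum(abs(sim) for sim, v in neighbors)
    return num / den if den else None

print(predict("u1", "s3"))  # dominated by u2, the most similar user
```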

Journal ArticleDOI
TL;DR: The basic idea of iTrust is to introduce a periodically available Trusted Authority that judges a node's behavior based on collected routing evidence and probabilistic checking; detection probability is correlated with a node's reputation, allowing it to vary dynamically with the trust placed in each user.
Abstract: Malicious and selfish behaviors represent a serious threat against routing in delay/disruption tolerant networks (DTNs). Due to the unique network characteristics, designing a misbehavior detection scheme in DTNs is regarded as a great challenge. In this paper, we propose iTrust, a probabilistic misbehavior detection scheme, for secure DTN routing toward efficient trust establishment. The basic idea of iTrust is to introduce a periodically available Trusted Authority (TA) to judge a node's behavior based on collected routing evidence and probabilistic checking. We model iTrust as an inspection game and use game-theoretical analysis to demonstrate that, by setting an appropriate investigation probability, the TA can ensure the security of DTN routing at a reduced cost. To further improve the efficiency of the proposed scheme, we correlate detection probability with a node's reputation, which allows a dynamic detection probability determined by the trust of the users. The extensive analysis and simulation results demonstrate the effectiveness and efficiency of the proposed scheme.
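
A toy rendering of the reputation-correlated checking described above: the TA inspects a node's routing evidence with a probability that decreases as its reputation grows. The update constants and the evidence flag are assumptions for illustration, not the paper's exact rules.

```python
import random

def inspection_probability(reputation, p_min=0.05, p_max=1.0):
    # reputation in [0, 1]; new or untrusted nodes are (almost) always checked
    return max(p_min, p_max * (1.0 - reputation))

def maybe_inspect(node):
    if random.random() < inspection_probability(node["reputation"]):
        honest = node["forwarded_as_claimed"]      # from routing evidence
        node["reputation"] = min(1.0, node["reputation"] + 0.1) if honest \
            else max(0.0, node["reputation"] - 0.5)
        return True
    return False

node = {"reputation": 0.2, "forwarded_as_claimed": True}
print(maybe_inspect(node), node["reputation"])
```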

Journal ArticleDOI
TL;DR: The results show that the proposed protocols have better performance than the existing secure protocols for CWSNs, in terms of security overhead and energy consumption.
Abstract: Secure data transmission is a critical issue for wireless sensor networks (WSNs). Clustering is an effective and practical way to enhance the system performance of WSNs. In this paper, we study a secure data transmission for cluster-based WSNs (CWSNs), where the clusters are formed dynamically and periodically. We propose two secure and efficient data transmission (SET) protocols for CWSNs, called SET-IBS and SET-IBOOS, by using the identity-based digital signature (IBS) scheme and the identity-based online/offline digital signature (IBOOS) scheme, respectively. In SET-IBS, security relies on the hardness of the Diffie-Hellman problem in the pairing domain. SET-IBOOS further reduces the computational overhead for protocol security, which is crucial for WSNs, while its security relies on the hardness of the discrete logarithm problem. We show the feasibility of the SET-IBS and SET-IBOOS protocols with respect to the security requirements and security analysis against various attacks. The calculations and simulations are provided to illustrate the efficiency of the proposed protocols. The results show that the proposed protocols have better performance than the existing secure protocols for CWSNs, in terms of security overhead and energy consumption.

Journal ArticleDOI
TL;DR: The proposed probabilistic approach to in-network caching exhibits ideal performance both in terms of network resource utilization and in terms of resource allocation fairness among competing content flows.
Abstract: We introduce the concept of resource management for in-network caching environments. We argue that in Information-Centric Networking environments, deterministically caching content messages at predefined places along the content delivery path results in unfair and inefficient content multiplexing between different content flows, as well as in significant caching redundancy. Instead, allocating resources along the path according to content flow characteristics results in better use of network resources and therefore, higher overall performance. The design principles of our proposed in-network caching scheme, which we call ProbCache, target these two outcomes, namely reduction of caching redundancy and fair content flow multiplexing along the delivery path. In particular, ProbCache approximates the caching capability of a path and caches contents probabilistically to: 1) leave caching space for other flows sharing (part of) the same path, and 2) fairly multiplex contents in caches along the path from the server to the client. We elaborate on the content multiplexing fairness of ProbCache and find that it sometimes behaves in favor of content flows connected far away from the source, that is, it gives higher priority to flows travelling longer paths, leaving little space to shorter-path flows. We introduce an enhanced version of the main algorithm that guarantees fair behavior to all participating content flows. We evaluate the proposed schemes in both homogeneous and heterogeneous cache size environments and formulate a framework for resource allocation in in-network caching environments. The proposed probabilistic approach to in-network caching exhibits ideal performance both in terms of network resource utilization and in terms of resource allocation fairness among competing content flows. Finally, and in contrast to the expected behavior, we find that the efficient design of ProbCache results in fast convergence to caching of popular content items.
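
The following sketch conveys the flavor of path-aware probabilistic caching, though it is not the paper's exact ProbCache formula: a node x hops from the server on a c-hop path caches with a weight that favors the client side, scaled by how much caching capacity the path offers relative to a target.

```python
import random

def cache_probability(x, c, path_capacity, target_capacity):
    # favor nodes near the client (larger x), leave space for other flows
    cache_weight = x / c
    capacity_factor = min(1.0, path_capacity / target_capacity)
    return cache_weight * capacity_factor

def deliver(path_len, path_capacity, target_capacity=10):
    # each on-path node independently decides whether to cache the content
    cached_at = [
        hop for hop in range(1, path_len + 1)
        if random.random() < cache_probability(hop, path_len,
                                               path_capacity, target_capacity)
    ]
    return cached_at  # hop counts (from the server) that chose to cache

random.seed(7)
print(deliver(path_len=6, path_capacity=8))
```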