
Showing papers in "IEEE Transactions on Parallel and Distributed Systems in 2012"


Journal ArticleDOI
TL;DR: This paper proposes an efficient and privacy-preserving aggregation scheme, named EPPA, for smart grid communications that resists various security threats, preserves user privacy, and has significantly less computation and communication overhead than existing competing approaches.
Abstract: The concept of smart grid has emerged as a convergence of traditional power system engineering and information and communication technology. It is vital to the success of the next generation of power grid, which is expected to be reliable, efficient, flexible, clean, friendly, and secure. In this paper, we propose an efficient and privacy-preserving aggregation scheme, named EPPA, for smart grid communications. EPPA uses a superincreasing sequence to structure multidimensional data and encrypts the structured data with the homomorphic Paillier cryptosystem. For data communications from user to smart grid operation center, data aggregation is performed directly on ciphertext at local gateways without decryption, and the aggregation result of the original data can be obtained at the operation center. EPPA also adopts the batch verification technique to reduce authentication cost. Through extensive analysis, we demonstrate that EPPA resists various security threats, preserves user privacy, and has significantly less computation and communication overhead than existing competing approaches.
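
The packing trick is the heart of EPPA: a superincreasing sequence (here, powers of a base larger than any per-dimension aggregate) lets one ciphertext carry a whole multidimensional report, and additive homomorphic aggregation then sums every dimension at once. Below is a minimal sketch of that structuring step; plain integer addition stands in for Paillier ciphertext operations, and all names and bounds are illustrative assumptions.

```python
import random

N_USERS = 50          # number of smart meters
N_DIMS = 4            # dimensions in each user's report
MAX_VAL = 1000        # upper bound on a single reading
BASE = N_USERS * MAX_VAL + 1   # every aggregated "digit" stays below BASE

def pack(readings):
    """Structure one multidimensional report into a single integer."""
    assert len(readings) == N_DIMS and all(0 <= r <= MAX_VAL for r in readings)
    return sum(r * BASE**i for i, r in enumerate(readings))

def unpack(aggregate):
    """Recover per-dimension sums from the aggregated integer."""
    sums = []
    for _ in range(N_DIMS):
        aggregate, digit = divmod(aggregate, BASE)
        sums.append(digit)
    return sums

reports = [[random.randint(0, MAX_VAL) for _ in range(N_DIMS)]
           for _ in range(N_USERS)]
# The gateway would multiply Paillier ciphertexts; adding plaintexts models
# the same additive aggregation without the cryptography.
aggregated = sum(pack(r) for r in reports)
assert unpack(aggregated) == [sum(r[i] for r in reports) for i in range(N_DIMS)]
```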

682 citations


Journal ArticleDOI
TL;DR: This paper defines and solves the problem of secure ranked keyword search over encrypted cloud data, explores the statistical measure approach from information retrieval to build a secure searchable index, and develops a one-to-many order-preserving mapping technique to properly protect that sensitive score information.
Abstract: Cloud computing economically enables the paradigm of data service outsourcing. However, to protect data privacy, sensitive cloud data have to be encrypted before being outsourced to the commercial public cloud, which makes effective data utilization a very challenging task. Although traditional searchable encryption techniques allow users to securely search over encrypted data through keywords, they support only Boolean search and are not yet sufficient to meet the effective data utilization need that is inherently demanded by the large number of users and the huge amount of data files in the cloud. In this paper, we define and solve the problem of secure ranked keyword search over encrypted cloud data. Ranked search greatly enhances system usability by enabling search result relevance ranking instead of sending undifferentiated results, and further ensures file retrieval accuracy. Specifically, we explore the statistical measure approach, i.e., relevance score, from information retrieval to build a secure searchable index, and develop a one-to-many order-preserving mapping technique to properly protect that sensitive score information. The resulting design is able to facilitate efficient server-side ranking without losing keyword privacy. Thorough analysis shows that our proposed solution enjoys an “as-strong-as-possible” security guarantee compared to previous searchable encryption schemes, while correctly realizing the goal of ranked keyword search. Extensive experimental results demonstrate the efficiency of the proposed solution.
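
To give a feel for the one-to-many idea (the paper's actual mapping is more elaborate and probabilistically tuned), here is a toy sketch: each plaintext relevance score owns a disjoint output interval, and every occurrence of the score maps to a fresh random point inside it, so the server can still rank ciphertexts while equal scores no longer collide. The bucket width and names are hypothetical.

```python
import random

G = 1 << 20   # disjoint bucket width per plaintext score (hypothetical)

def opm_encrypt(score, rng=random):
    # Every occurrence of `score` lands on a fresh random point in its own
    # bucket [score*G, score*G + G), flattening the ciphertext frequencies.
    return score * G + rng.randrange(G)

scores = [3, 7, 3, 3, 9]
cts = [opm_encrypt(s) for s in scores]
# Order is preserved across buckets, so the server ranks by ciphertext alone:
ranked = sorted(range(len(scores)), key=lambda i: cts[i], reverse=True)
print(ranked)                          # file with plaintext score 9 comes first
assert scores[ranked[0]] == max(scores)
```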

526 citations


Journal ArticleDOI
TL;DR: This paper addresses the construction of an efficient PDP scheme for distributed cloud storage that supports scalability of service and data migration, considering the existence of multiple cloud service providers that cooperatively store and maintain the clients' data.
Abstract: Provable data possession (PDP) is a technique for ensuring the integrity of data in storage outsourcing. In this paper, we address the construction of an efficient PDP scheme for distributed cloud storage that supports scalability of service and data migration, in which we consider the existence of multiple cloud service providers that cooperatively store and maintain the clients' data. We present a cooperative PDP (CPDP) scheme based on homomorphic verifiable responses and a hash index hierarchy. We prove the security of our scheme based on a multiprover zero-knowledge proof system, which satisfies the completeness, knowledge soundness, and zero-knowledge properties. In addition, we articulate performance optimization mechanisms for our scheme, and in particular present an efficient method for selecting optimal parameter values to minimize the computation costs of clients and storage service providers. Our experiments show that our solution introduces lower computation and communication overheads than noncooperative approaches.

473 citations


Journal ArticleDOI
TL;DR: A novel approximate analytical model is described that allows cloud operators to determine the relationship between the number of servers and the input buffer size, on one side, and performance indicators such as the mean number of tasks in the system, the blocking probability, and the probability that a task will obtain immediate service, on the other.
Abstract: Successful development of the cloud computing paradigm necessitates accurate performance evaluation of cloud data centers. As exact modeling of cloud centers is not feasible due to their scale and the diversity of user requests, we describe a novel approximate analytical model for performance evaluation of cloud server farms and solve it to obtain an accurate estimation of the complete probability distribution of the request response time and other important performance indicators. The model allows cloud operators to determine the relationship between the number of servers and the input buffer size, on one side, and performance indicators such as the mean number of tasks in the system, the blocking probability, and the probability that a task will obtain immediate service, on the other.
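
As a concrete, much-simplified companion to the model, the snippet below computes the three quoted indicators for an M/M/m queue with a finite input buffer of size r; the paper's analysis is an approximate M/G/m/m+r model, so treat this as a sketch under an exponential-service assumption with made-up parameters.

```python
from math import factorial

def mmm_r(lam, mu, m, r):
    """Steady-state metrics of an M/M/m/m+r queue (m servers, buffer r)."""
    a = lam / mu                                   # offered load
    unnorm = [a**k / factorial(k) for k in range(m + 1)]
    unnorm += [a**k / (factorial(m) * m**(k - m))
               for k in range(m + 1, m + r + 1)]
    p0 = 1.0 / sum(unnorm)
    p = [u * p0 for u in unnorm]
    return {
        "mean_tasks": sum(k * pk for k, pk in enumerate(p)),
        "blocking_prob": p[-1],              # arrival finds the system full
        "immediate_service": sum(p[:m]),     # arrival finds an idle server
    }

print(mmm_r(lam=90.0, mu=1.0, m=100, r=20))
```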

387 citations


Journal ArticleDOI
TL;DR: StreamCloud is presented, a scalable and elastic stream processing engine for processing large data stream volumes that uses a novel parallelization technique that splits queries into subqueries that are allocated to independent sets of nodes in a way that minimizes the distribution overhead.
Abstract: Many applications in domains such as telecommunications, network security, and large-scale sensor networks require online processing of continuous data flows. They produce very high loads that require aggregating the processing capacity of many nodes. Current Stream Processing Engines do not scale with the input load due to single-node bottlenecks. Additionally, they are based on static configurations that lead to either underprovisioning or overprovisioning. In this paper, we present StreamCloud, a scalable and elastic stream processing engine for processing large data stream volumes. StreamCloud uses a novel parallelization technique that splits queries into subqueries that are allocated to independent sets of nodes in a way that minimizes the distribution overhead. Its elastic protocols exhibit low intrusiveness, enabling effective adjustment of resources to the incoming load. Elasticity is combined with dynamic load balancing to minimize the computational resources used. The paper presents the system design, implementation, and a thorough evaluation of the scalability and elasticity of the fully implemented system.
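
The essence of the parallelization technique is that each subquery runs on its own node set and tuples are routed by a hash of the grouping key, so every node holds complete state for the keys it owns. A toy illustration of that routing step (the four-node setup and names are assumptions; a real engine would use a stable hash rather than Python's per-run salted hash()):

```python
from collections import defaultdict

NODES = 4   # hypothetical subcluster size for one stateful subquery

def route(key):
    # The same key always reaches the same node, so per-key aggregation
    # state never has to be shared across nodes. Note: Python's hash() is
    # salted per process; a real system would use a stable hash function.
    return hash(key) % NODES

partitions = defaultdict(lambda: defaultdict(int))
stream = [("alice", 3), ("bob", 5), ("alice", 2), ("carol", 1)]
for key, value in stream:
    partitions[route(key)][key] += value   # each node aggregates locally

print({node: dict(agg) for node, agg in partitions.items()})
```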

329 citations


Journal ArticleDOI
TL;DR: A survey of the different parallel programming models and tools available today with special consideration to their suitability for high-performance computing finds that hybrid parallel programming is the current way of harnessing the capabilities of computer clusters with multi-core nodes.
Abstract: In this work, we present a survey of the different parallel programming models and tools available today, with special consideration of their suitability for high-performance computing. Thus, we review the shared and distributed memory approaches, as well as the current heterogeneous parallel programming model. In addition, we analyze how the partitioned global address space (PGAS) and hybrid parallel programming models are used to combine the advantages of shared and distributed memory systems. The work is completed by considering languages with specific parallel support and the distributed programming paradigm. In all cases, we present characteristics, strengths, and weaknesses. The study shows that the availability of multi-core CPUs has given new impetus to the shared memory parallel programming approach. In addition, we find that hybrid parallel programming is the current way of harnessing the capabilities of computer clusters with multi-core nodes. On the other hand, heterogeneous programming is found to be an increasingly popular paradigm, as a consequence of the availability of systems combining multi-core CPUs and GPUs. The use of open industry standards like OpenMP, MPI, or OpenCL, as opposed to proprietary solutions, seems to be the way to standardize and extend the use of parallel programming models.

257 citations


Journal ArticleDOI
TL;DR: This work mathematically formulates this problem as a stochastic optimization problem, approximately solves it using the Lyapunov optimization approach, and finds a good tradeoff between cost saving and storage capacity.
Abstract: Recently, intensive efforts have been made to transform the world's largest physical system, the power grid, into a “smart grid” by incorporating extensive information and communication infrastructures. Key features of such a “smart grid” include high penetration of renewable and distributed energy sources, large-scale energy storage, market-based online electricity pricing, and widespread demand response programs. From the perspective of residential customers, we investigate how to minimize the expected electricity cost under real-time electricity pricing, which is the focus of this paper. By jointly considering energy storage, local distributed generation such as photovoltaic (PV) modules or small wind turbines, and inelastic or elastic energy demands, we mathematically formulate this problem as a stochastic optimization problem and approximately solve it using the Lyapunov optimization approach. From the theoretical analysis, we have also found a good tradeoff between cost saving and storage capacity. A salient feature of our proposed approach is that it can operate without any future knowledge of the related stochastic models (e.g., the distribution) and is easy to implement in real time. We have also evaluated our proposed solution with practical data sets and validated its effectiveness.
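
The flavor of the Lyapunov approach fits in a few lines: each slot, the controller picks the battery action minimizing a drift-plus-penalty expression that combines the electricity cost (weighted by a parameter V) with a shifted battery-level "queue", using no knowledge of future prices. The sketch below is a heavily simplified stand-in for the paper's algorithm; every parameter and the discrete action set are made up.

```python
import random

CAP, RATE, V = 10.0, 2.0, 8.0       # battery capacity, max rate, cost weight
level = 5.0                          # current battery level (kWh)
for t in range(24):
    price = random.uniform(0.1, 0.5)     # real-time electricity price
    demand = random.uniform(0.5, 2.0)    # inelastic household demand (kWh)
    Q = level - CAP / 2                  # shifted battery "queue"
    best = None
    for r in (-RATE, 0.0, RATE):         # discharge, idle, charge
        if not (0.0 <= level + r <= CAP) or demand + r < 0:
            continue                     # infeasible: battery bounds / no selling
        grid = demand + r                # energy bought from the grid this slot
        score = V * price * grid + Q * r # drift-plus-penalty objective
        if best is None or score < best[0]:
            best = (score, r)
    level += best[1]
    print(f"t={t:2d} price={price:.2f} action={best[1]:+.1f} level={level:.1f}")
```

The effect is a price-and-state threshold policy: the battery charges when it is low and the price is cheap, and discharges to cover demand when the price is high.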

233 citations


Journal ArticleDOI
TL;DR: Based on a study of the size and organization of current botnets, a discrimination algorithm is proposed that uses the flow correlation coefficient as a similarity metric among suspicious flows, and its effectiveness is demonstrated in practice.
Abstract: The Distributed Denial of Service (DDoS) attack is a critical threat to the Internet, and botnets are usually the engines behind it. Sophisticated botmasters attempt to disable detectors by mimicking the traffic patterns of flash crowds. This poses a critical challenge to those who defend against DDoS attacks. In our study of the size and organization of current botnets, we found that current attack flows are usually more similar to each other than the flows of flash crowds are. Based on this, we proposed a discrimination algorithm using the flow correlation coefficient as a similarity metric among suspicious flows. We formulated the problem and presented theoretical proofs of the feasibility of the proposed discrimination method. Our extensive experiments confirmed the theoretical analysis and demonstrated the effectiveness of the proposed method in practice.
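
A minimal sketch of the discrimination idea: sample each suspicious flow as a time series of packet counts, compute pairwise Pearson correlation coefficients, and flag an attack when the average similarity exceeds a threshold, since bots running the same program produce highly correlated flows while flash-crowd users do not. The threshold and toy traces below are assumptions.

```python
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def looks_like_ddos(flows, threshold=0.7):
    pairs = [(i, j) for i in range(len(flows)) for j in range(i + 1, len(flows))]
    avg = mean(pearson(flows[i], flows[j]) for i, j in pairs)
    return avg > threshold   # bot flows follow one program -> high similarity

attack = [[10, 12, 11, 13, 12, 10]] * 3                      # near-identical bots
flash = [[3, 9, 1, 14, 2, 7], [8, 2, 12, 1, 9, 4], [5, 5, 13, 2, 1, 11]]
print(looks_like_ddos(attack), looks_like_ddos(flash))       # True False
```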

221 citations


Journal ArticleDOI
TL;DR: A new performance metric, the accumulated bandwidth-distance product (ABDP), is introduced to represent the total communication resource usage, and it is demonstrated that the total cost for the centralized architecture scales linearly as O(λN), with N being the number of smart meters and λ being the average traffic rate on a smart meter.
Abstract: In this paper, we investigate the scalability of three communication architectures for advanced metering infrastructure (AMI) in the smart grid. AMI in the smart grid is a typical cyber-physical system (CPS) example, in which large amounts of data from hundreds of thousands of smart meters are collected and processed through an AMI communication infrastructure. Scalability is one of the most important issues for AMI deployment in the smart grid. In this study, we introduce a new performance metric, the accumulated bandwidth-distance product (ABDP), to represent the total communication resource usage. For each distributed communication architecture, we formulate an optimization problem and obtain the solutions for minimizing the total cost of the system, considering both the ABDP and the deployment cost of the meter data management system (MDMS). The simulation results indicate the significant benefits of the distributed communication architectures over the traditional centralized one. More importantly, we analyze the scalability of the total cost of the communication system (including the MDMS) with regard to the traffic load on the smart meters for both the centralized and the distributed communication architectures. Through the closed-form expressions obtained in our analysis, we demonstrate that the total cost for the centralized architecture scales linearly as O(λN), with N being the number of smart meters and λ being the average traffic rate on a smart meter. In contrast, the total cost for the fully distributed communication architecture is O(λ^(2/3) N^(2/3)), which is significantly lower.

216 citations


Journal ArticleDOI
TL;DR: This work proposes a threshold proxy re-encryption scheme and integrates it with a decentralized erasure code to form a secure distributed storage system that fully integrates encrypting, encoding, and forwarding.
Abstract: A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet. Storing data in a third party's cloud system causes serious concern over data confidentiality. General encryption schemes protect data confidentiality, but also limit the functionality of the storage system because only a few operations are supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code such that a secure distributed storage system is formed. The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back. The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding. We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server. These parameters allow more flexible adjustment between the number of storage servers and robustness.

213 citations


Journal ArticleDOI
TL;DR: A new metric is introduced that accurately measures the quality of friendships between nodes and defines the community of each node as the set of nodes having close friendship relations with this node, either directly or indirectly.
Abstract: Routing in delay tolerant networks is a challenging problem due to the intermittent connectivity between nodes, which results in the frequent absence of an end-to-end path for any source-destination pair at any given time. Recently, this problem has attracted a great deal of interest and several approaches have been proposed. Since Mobile Social Networks (MSNs) are an increasingly popular type of Delay Tolerant Network (DTN), accurate analysis of the social network properties of these networks is essential for designing efficient routing protocols. In this paper, we introduce a new metric that accurately measures the quality of friendships between nodes. Utilizing this metric, we define the community of each node as the set of nodes having close friendship relations with this node, either directly or indirectly. We also present Friendship-Based Routing, in which periodically differentiated friendship relations are used in the forwarding of messages. Extensive simulations on both real and synthetic traces show that the introduced algorithm is more efficient than existing algorithms.
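
One way to make "quality of friendship" concrete, in the spirit of the paper's metric (the exact definition differs in detail), is to average how long a message generated at a uniformly random time would wait for the next encounter: frequent, long, and regular contacts all drive that wait down. A sketch with hypothetical contact traces:

```python
def friendship_quality(contacts, horizon):
    """contacts: sorted (start, end) encounter intervals within [0, horizon)."""
    waiting_area, cursor = 0.0, 0.0
    for start, end in contacts:
        gap = start - cursor
        waiting_area += gap * gap / 2   # integral of time-to-next-encounter
        cursor = end
    gap = horizon - cursor              # trailing gap: approximate the tail as
    waiting_area += gap * gap / 2       # if an encounter occurs at the horizon
    avg_wait = waiting_area / horizon
    return 1.0 / avg_wait if avg_wait > 0 else float("inf")

frequent = [(10, 12), (40, 41), (70, 75)]   # regular, repeated encounters
rare = [(80, 81)]                           # a single brief encounter
print(friendship_quality(frequent, 100))    # higher quality
print(friendship_quality(rare, 100))        # lower quality
```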

Journal ArticleDOI
TL;DR: It is shown in this paper that the most damaging attack can be identified through a max-min attacker-defender model, which provides in-depth insight into effective attack prevention when resource budgets are limited.
Abstract: Cyber security is becoming an area of growing concern in the electric power industry with the development of the smart grid. The false data injection attack, which targets state estimation through the SCADA network, has recently attracted wide research interest. This paper further develops the concept of the load redistribution (LR) attack, a special type of false data injection attack. The damage of LR attacks to power system operations can manifest in an immediate or a delayed fashion. For the immediate attacking goal, we show in this paper that the most damaging attack can be identified through a max-min attacker-defender model. Benders decomposition within a restart framework is used to solve the bilevel immediate LR attack problem with a moderate computational effort. Its effectiveness has been validated by the Karush-Kuhn-Tucker (KKT)-based method solution in our previous work. For the delayed attacking goal, we propose a trilevel model to identify the most damaging attack and transform the model into an equivalent single-level mixed-integer problem for the final solution. In summary, this paper enables quantitative analysis of the damage of LR attacks to power system operations and security, and hence provides in-depth insight into effective attack prevention when resource budgets are limited. A 14-bus system is used to test the correctness of the proposed model and algorithm.

Journal ArticleDOI
TL;DR: This paper characterizes the optimum inter-link allocation strategy against random attacks in the case where the topology of each individual network is unknown, and shows that this strategy yields better performance than all other possible strategies, including strategies using random allocation, unidirectional interlinks, etc.
Abstract: We consider a cyber-physical system consisting of two interacting networks, i.e., a cyber network overlaying a physical network. It is envisioned that these systems are more vulnerable to attacks since node failures in one network may result in (due to the interdependence) failures in the other network, causing a cascade of failures that would potentially lead to the collapse of the entire infrastructure. The robustness of interdependent systems against this sort of catastrophic failure hinges heavily on the allocation of the (interconnecting) links that connect nodes in one network to nodes in the other network. In this paper, we characterize the optimum inter-link allocation strategy against random attacks in the case where the topology of each individual network is unknown. In particular, we analyze the “regular” allocation strategy that allots exactly the same number of bidirectional internetwork links to all nodes in the system. We show, both analytically and experimentally, that this strategy yields better performance (from a network resilience perspective) compared to all possible strategies, including strategies using random allocation, unidirectional interlinks, etc.

Journal ArticleDOI
TL;DR: This paper proposes a new QoS-based workflow scheduling algorithm based on a novel concept called Partial Critical Paths (PCP), which tries to minimize the cost of workflow execution while meeting a user-defined deadline.
Abstract: Recently, utility Grids have emerged as a new model of service provisioning in heterogeneous distributed systems. In this model, users negotiate with service providers on their required Quality of Service and on the corresponding price to reach a Service Level Agreement. One of the most challenging problems in utility Grids is workflow scheduling, i.e., the problem of satisfying the QoS of the users as well as minimizing the cost of workflow execution. In this paper, we propose a new QoS-based workflow scheduling algorithm based on a novel concept called Partial Critical Paths (PCP), which tries to minimize the cost of workflow execution while meeting a user-defined deadline. The PCP algorithm has two phases: in the deadline distribution phase, it recursively assigns subdeadlines to the tasks on the partial critical paths ending at previously assigned tasks; in the planning phase, it assigns the cheapest service to each task while meeting its subdeadline. The simulation results show that the performance of the PCP algorithm is very promising.
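
The two phases are easy to illustrate on a single path; the full algorithm recursively extracts partial critical paths from a workflow DAG, and its deadline-distribution policy is more refined than the proportional split used here. Service options and all numbers are made up.

```python
# services[task] = list of (duration, cost) options; cheaper is usually slower.
services = {
    "A": [(10, 1), (5, 3), (2, 8)],
    "B": [(8, 2), (4, 5)],
    "C": [(6, 1), (3, 4)],
}
path, deadline = ["A", "B", "C"], 15.0

# Phase 1 (deadline distribution): split the deadline over tasks in
# proportion to their fastest possible durations.
fastest = {t: min(d for d, _ in services[t]) for t in path}
total = sum(fastest.values())
subdeadline = {t: deadline * fastest[t] / total for t in path}

# Phase 2 (planning): pick the cheapest service meeting each subdeadline.
plan, spent = {}, 0.0
for t in path:
    cost, dur = min((c, d) for d, c in services[t] if d <= subdeadline[t])
    plan[t] = (dur, cost)
    spent += cost
print(plan, "total cost:", spent)
```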

Journal ArticleDOI
TL;DR: By developing a practical fault-tolerant method, this work offsets the noise of RF tag data and mines frequent trajectory patterns as models of regular activities; an empirical study verifies the feasibility and effectiveness of this design.
Abstract: Activity monitoring, a crucial task in many applications, is often conducted expensively using video cameras. Effectively monitoring a large field by analyzing images from multiple cameras remains a challenging issue. Other approaches generally require the tracked objects to carry special devices, which is infeasible in many scenarios. To address the issue, we propose to use RF tag arrays for activity monitoring, where data mining techniques play a critical role. The RFID technology provides an economically attractive solution due to the low cost of RF tags and readers. Another novelty of this design is that the tracked objects do not need to be equipped with any RF transmitters or receivers. By developing a practical fault-tolerant method, we offset the noise of RF tag data and mine frequent trajectory patterns as models of regular activities. Our empirical study using real RFID systems and data sets verifies the feasibility and the effectiveness of this design.

Journal ArticleDOI
TL;DR: This work concerns itself with predicting the parking occupancy of a typical international airport's long-term lot given time-varying arrival and departure rates, and provides closed forms for the probability distribution of the parking lot occupancy as a function of time.
Abstract: Recently, Olariu et al. [3], [7], [18], [19], [20] proposed to refer to a dynamic group of vehicles whose excess computing, sensing, communication, and storage resources can be coordinated and dynamically allocated to authorized users, as a vehicular cloud. One of the characteristics that distinguishes vehicular clouds from conventional clouds is the dynamically changing amount of available resources that, in some cases, may fluctuate rather abruptly. In this work, we envision a vehicular cloud involving cars in the long-term parking lot of a typical international airport. The patrons of such a parking lot are typically on travel for several days, providing a pool of cars that can serve as the basis for a datacenter at the airport. We anticipate a park and plug scenario where the cars that participate in the vehicular cloud are plugged into a standard power outlet and are provided Ethernet connection to a central server at the airport. In order to be able to schedule resources and to assign computational tasks to the various cars in the vehicular cloud, a fundamental prerequisite is to have an accurate picture of the number of vehicles that are expected to be present in the parking lot as a function of time. What makes the problem difficult is the time-varying nature of the arrival and departure rates. In this work, we concern ourselves with predicting the parking occupancy given time-varying arrival and departure rates. Our main contribution is to provide closed forms for the probability distribution of the parking lot occupancy as a function of time, for the expected number of cars in the parking lot and its variance, and for the limiting behavior of these parameters as time increases. In addition to analytical results, we have obtained a series of empirical results that confirm the accuracy of our analytical predictions.
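
The qualitative shape of the analysis can be reproduced numerically: with time-varying Poisson arrivals λ(t) and exponential parking durations with rate μ(t), the expected occupancy m(t) satisfies m'(t) = λ(t) − μ(t)·m(t), which the snippet below integrates with Euler steps. The paper derives closed forms, including the full occupancy distribution; the rate functions here are invented for illustration.

```python
import math

def lam(t):
    # Arrivals per hour, peaking once a day (hypothetical profile).
    return 40 + 30 * math.sin(2 * math.pi * t / 24)

def mu(t):
    # Departure rate per parked car; mean stay of about three days.
    return 1.0 / 72

m, dt = 0.0, 0.01                       # start with an empty lot
for step in range(int(7 * 24 / dt)):    # simulate one week, hour units
    t = step * dt
    m += (lam(t) - mu(t) * m) * dt      # Euler step of m'(t) = lam - mu*m
print(f"expected occupancy after a week: {m:.0f} cars")
```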

Journal ArticleDOI
TL;DR: The proposed approach accounts for link stability and minimum drain-rate energy consumption, and a novel routing protocol called Link-stAbility and Energy aware Routing (LAER) is proposed.
Abstract: Energy awareness for computation and protocol management is becoming a crucial factor in the design of protocols and algorithms. On the other hand, in order to support node mobility, scalable routing strategies have been designed, and these protocols try to consider the path duration in order to respect some QoS constraints and to reduce route discovery procedures. Energy saving and path duration and stability can often be contrasting goals, and trying to satisfy both of them can be very difficult. In this paper, a novel routing strategy is proposed. The proposed approach tries to account for link stability and for minimum drain-rate energy consumption. In order to verify the correctness of the proposed solution, a biobjective optimization formulation has been designed, and a novel routing protocol called Link-stAbility and Energy aware Routing (LAER) is proposed. This novel routing scheme has been compared with three other protocols: PERRA, GPSR, and E-GPSR. The protocol performance has been evaluated in terms of data packet delivery ratio, normalized control overhead, link duration, node lifetime, and average energy consumption.

Journal ArticleDOI
TL;DR: The proposed protocol aims at minimizing the overall network overhead and energy expenditure associated with the multihop data retrieval process while also ensuring balanced energy consumption among SNs and prolonged network lifetime.
Abstract: A large class of Wireless Sensor Network (WSN) applications involve a set of isolated urban areas (e.g., urban parks or building blocks) covered by sensor nodes (SNs) monitoring environmental parameters. Mobile sinks (MSs) mounted upon urban vehicles with fixed trajectories (e.g., buses) provide the ideal infrastructure to effectively retrieve sensory data from such isolated WSN fields. Existing approaches involve either single-hop transfer of data from SNs that lie within the MS's range or heavy involvement of network periphery nodes in data retrieval, processing, buffering, and delivering tasks. These nodes run the risk of rapid energy exhaustion, resulting in loss of network connectivity and decreased network lifetime. Our proposed protocol aims at minimizing the overall network overhead and energy expenditure associated with the multihop data retrieval process while also ensuring balanced energy consumption among SNs and prolonged network lifetime. This is achieved by building cluster structures consisting of member nodes that route their measured data to their assigned cluster head (CH). CHs perform data filtering upon raw data, exploiting potential spatial-temporal data redundancy, and forward the filtered information to appropriate end nodes with sufficient residual energy, located in proximity to the MS's trajectory. Simulation results confirm the effectiveness of our approach as well as its performance gain over alternative methods.

Journal ArticleDOI
TL;DR: This paper describes efficient and flexible RCU implementations based on primitives commonly available to user-level applications and compares them with each other and with standard locking, which enables choosing the best mechanism for a given workload.
Abstract: Read-copy update (RCU) is a synchronization technique that often replaces reader-writer locking because RCU's read-side primitives are both wait-free and an order of magnitude faster than uncontended locking. Although RCU updates are relatively heavyweight, the importance of read-side performance is increasing as computing systems become more responsive to changes in their environments. RCU is heavily used in several kernel-level environments. Unfortunately, kernel-level implementations use facilities that are often unavailable to user applications. The few prior user-level RCU implementations either provided inefficient read-side primitives or restricted the application architecture. This paper fills this gap by describing efficient and flexible RCU implementations based on primitives commonly available to user-level applications. Finally, this paper compares these RCU implementations with each other and with standard locking, which enables choosing the best mechanism for a given workload. This work opens the door to widespread use of RCU in user applications.
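
To convey the structure of a user-level grace period, here is a deliberately naive Python sketch. The paper's implementations are in C with careful memory ordering; this toy ignores barriers, relies on the interpreter for atomicity, and can be fooled by a reader that exits and re-enters its critical section, which real implementations prevent by folding a grace-period phase bit into the per-reader counter.

```python
import threading

readers = {}                          # thread id -> read-side nesting count

def rcu_read_lock():
    tid = threading.get_ident()
    readers[tid] = readers.get(tid, 0) + 1

def rcu_read_unlock():
    readers[threading.get_ident()] -= 1

def synchronize_rcu():
    # Wait for every reader active at the time of the call to finish.
    # Caveat: without a phase bit, a reader that exits and re-enters is
    # indistinguishable from one that never left.
    active = [tid for tid, n in list(readers.items()) if n > 0]
    while any(readers.get(tid, 0) > 0 for tid in active):
        pass                          # production code would sleep or park

shared = {"config": "old"}

def writer():
    global shared
    old = shared
    shared = {"config": "new"}        # publish the new version atomically
    synchronize_rcu()                 # wait out pre-existing readers
    del old                           # now safe to reclaim the old version
```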

Journal ArticleDOI
TL;DR: This paper proposes a jump-stay channel-hopping (CH) algorithm for blind rendezvous and derives upper bounds on the maximum time-to-rendezvous (TTR) and the expected TTR of the algorithm for both 2-user and multiuser scenarios.
Abstract: Cognitive radio networks (CRNs) have emerged as an advanced and promising paradigm to exploit the existing wireless spectrum opportunistically. It is crucial for users in CRNs to search for neighbors via a rendezvous process and thereby establish communication links to exchange the information necessary for spectrum management, channel contention, etc. This paper focuses on the design of algorithms for blind rendezvous, i.e., rendezvous without using any centralized controller or common control channel (CCC). We propose a jump-stay channel-hopping (CH) algorithm for blind rendezvous. The basic idea is to generate the CH sequence in rounds, where each round consists of a jump-pattern and a stay-pattern. Users “jump” on available channels in the jump-pattern and “stay” on a specific channel in the stay-pattern. We prove that two users can achieve rendezvous in one of four possible pattern combinations: jump-stay, stay-jump, jump-jump, and stay-stay. Compared with existing CH algorithms, our algorithm has the overall best performance in various scenarios and is applicable to rendezvous in multiuser and multihop scenarios. We derive upper bounds on the maximum time-to-rendezvous (TTR) and the expected TTR of our algorithm for both 2-user and multiuser scenarios (shown in Table 1). Extensive simulations are conducted to evaluate the performance of our algorithm.
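
The sequence generation is easy to sketch: pick the smallest prime P no less than the number of channels M, then repeat rounds of 3P timeslots, hopping through a jump-pattern for 2P slots and sitting on one channel for the remaining P slots. The index arithmetic below is a simplification of the paper's construction, and the start/step values are arbitrary.

```python
import itertools

def next_prime(n):
    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, int(k**0.5) + 1))
    while not is_prime(n):
        n += 1
    return n

def jump_stay(M, start, step):
    """Yield one channel index per timeslot (start in [0,P), step in [1,P))."""
    P = next_prime(M)
    t = 0
    while True:
        slot = t % (3 * P)
        if slot < 2 * P:
            c = (start + slot * step) % P   # jump-pattern: hop every slot
        else:
            c = step                        # stay-pattern: sit on one channel
        yield c % M                         # fold padded channels into [0, M)
        t += 1

a = jump_stay(M=5, start=0, step=2)
b = jump_stay(M=5, start=3, step=4)
for t, (ca, cb) in enumerate(itertools.islice(zip(a, b), 60)):
    if ca == cb:
        print("rendezvous in slot", t, "on channel", ca)
        break
```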

Journal ArticleDOI
TL;DR: A novel bandwidth-efficient cooperative authentication (BECAN) scheme for filtering injected false data, based on the random graph characteristics of sensor node deployment and a cooperative bit-compressed authentication technique, can save energy by detecting and filtering the majority of injected false data early, with minor extra overheads at the en-route nodes.
Abstract: The false data injection attack is a well-known, serious threat to wireless sensor networks, in which an adversary reports bogus information to the sink, causing erroneous decisions at the upper level and energy waste in en-route nodes. In this paper, we propose a novel bandwidth-efficient cooperative authentication (BECAN) scheme for filtering injected false data. Based on the random graph characteristics of sensor node deployment and a cooperative bit-compressed authentication technique, the proposed BECAN scheme can save energy by detecting and filtering the majority of injected false data early, with minor extra overheads at the en-route nodes. In addition, only a very small fraction of injected false data needs to be checked by the sink, which largely reduces the burden of the sink. Both theoretical and simulation results are given to demonstrate the effectiveness of the proposed scheme in terms of high filtering probability and energy saving.

Journal ArticleDOI
TL;DR: This paper considers congestion caused by power surpluses produced from households' solar units on rooftops or on the ground, and proposes a model for the disconnection process via smart metering communications between smart meters and the utility control center.
Abstract: The operation and control of the existing power grid system, which is challenged by rising demands and peak loads, has been considered passive. Congestion is often discovered in high-demand regions and at locations where abundant renewable energy is generated and injected into the grid; this is attributed to a lack of transmission lines, transfer capability, and transmission capacity. While developing distributed generation (DG) tends to alleviate the traditional congestion problem, employing information and communications technology (ICT) helps manage DG more effectively. ICT involves a vast amount of data to facilitate a broader knowledge of the network status. Data computation and communications are critical elements that can impact the system performance. In this paper, we consider congestion caused by power surpluses produced from households' solar units on rooftops or on the ground. Disconnecting some solar units is required to maintain the reliability of the distribution grid. We propose a model for the disconnection process via smart metering communications between smart meters and the utility control center. By modeling the surplus congestion issue as a knapsack problem, we can solve it with the proposed greedy solutions. Reduced computation time and data traffic in the network can be achieved.
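
Framing the surplus problem as a knapsack suggests a natural greedy heuristic: sort solar units by value per kW of surplus, keep as many high-value units as fit under the line capacity, and disconnect the rest. The objective weights and numbers below are assumptions, not the paper's exact formulation.

```python
def select_disconnections(units, capacity):
    """units: list of (unit_id, surplus_kw, value); capacity: allowed surplus."""
    # Classic knapsack greedy: keep units in decreasing value-per-kW order
    # while their combined surplus still fits under the capacity.
    kept, load = [], 0.0
    for uid, kw, val in sorted(units, key=lambda u: u[2] / u[1], reverse=True):
        if load + kw <= capacity:
            kept.append(uid)
            load += kw
    return [uid for uid, _, _ in units if uid not in kept]   # to disconnect

units = [("u1", 3.0, 9.0), ("u2", 2.0, 8.0), ("u3", 4.0, 5.0), ("u4", 1.0, 4.0)]
print(select_disconnections(units, capacity=6.0))            # -> ['u3']
```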

Journal ArticleDOI
TL;DR: In this paper, the authors propose an innovative public cloud usage model for small-to-medium scale scientific communities to utilize elastic resources on a public cloud site while maintaining their flexible system controls, i.e., to create, activate, suspend, resume, deactivate, and destroy their high-level management entities.
Abstract: The basic idea behind cloud computing is that resource providers offer elastic resources to end users. In this paper, we intend to answer one key question for the success of cloud computing: in the cloud, can small-to-medium scale scientific communities benefit from the economies of scale? Our research contributions are threefold. First, we propose an innovative public cloud usage model for small-to-medium scale scientific communities to utilize elastic resources on a public cloud site while maintaining their flexible system controls, i.e., to create, activate, suspend, resume, deactivate, and destroy their high-level management entities (service management layers) without knowing the details of management. Second, we design and implement an innovative system, DawningCloud, at the core of which are lightweight service management layers running on top of a common management service framework. The common management service framework of DawningCloud not only facilitates building lightweight service management layers for heterogeneous workloads, but also makes their management tasks simple. Third, we evaluate the system comprehensively using both emulation and real experiments. We find that for four traces of two typical scientific workloads, High-Throughput Computing (HTC) and Many-Task Computing (MTC), DawningCloud saves resource consumption by up to 59.5 and 72.6 percent for HTC and MTC service providers, respectively, and saves the total resource consumption by up to 54 percent for the resource provider with respect to the two previous public cloud solutions. We therefore conclude that small-to-medium scale scientific communities can indeed benefit from the economies of scale of public clouds with the support of the enabling system.

Journal ArticleDOI
TL;DR: A cloud-based scheme is proposed for efficiently protecting source nodes' location privacy against the Hotspot-Locating attack by creating a cloud of fake traffic with an irregular shape, to counteract the inconsistency in the traffic pattern and camouflage the source node among the nodes forming the cloud.
Abstract: In wireless sensor networks, adversaries can make use of traffic information to locate the monitored objects, e.g., to hunt endangered animals or kill soldiers. In this paper, we first define a hotspot phenomenon that causes an obvious inconsistency in the network traffic pattern due to the large volume of packets originating from a small area. Second, we develop a realistic adversary model, assuming that the adversary can monitor the network traffic in multiple areas, rather than the entire network or only one area. Using this model, we introduce a novel attack called Hotspot-Locating, where the adversary uses traffic analysis techniques to locate hotspots. Finally, we propose a cloud-based scheme for efficiently protecting source nodes' location privacy against the Hotspot-Locating attack by creating a cloud of fake traffic with an irregular shape, to counteract the inconsistency in the traffic pattern and camouflage the source node among the nodes forming the cloud. To reduce the energy cost, clouds are active only during data transmission, and the intersection of clouds creates a larger merged cloud, reducing the number of fake packets and also boosting privacy preservation. Simulation and analytical results demonstrate that our scheme can provide stronger privacy protection than routing-based schemes and requires much less energy than global-adversary-based schemes.

Journal ArticleDOI
TL;DR: A privacy-preserving decentralized key-policy ABE scheme is proposed in which each authority can issue secret keys to a user independently without knowing anything about his GID; it is the first privacy-preserving decentralized ABE scheme based on standard complexity assumptions.
Abstract: Decentralized attribute-based encryption (ABE) is a variant of a multiauthority ABE scheme in which each authority can issue secret keys to the user independently, without any cooperation or a central authority. This is in contrast to previous constructions, where multiple authorities must be online and set up the system interactively, which is impractical. Hence, it is clear that a decentralized ABE scheme eliminates the heavy communication cost and the need for collaborative computation in the setup stage. Furthermore, every authority can join or leave the system freely without the necessity of reinitializing the system. In contemporary multiauthority ABE schemes, a user's secret keys from different authorities must be tied to his global identifier (GID) to resist collusion attacks. However, this compromises the user's privacy. Multiple authorities can collaborate to trace the user by his GID, collect his attributes, and then impersonate him. Therefore, constructing a privacy-preserving decentralized ABE scheme remains a challenging research problem. In this paper, we propose a privacy-preserving decentralized key-policy ABE scheme where each authority can issue secret keys to a user independently without knowing anything about his GID. Therefore, even if multiple authorities are corrupted, they cannot collect the user's attributes by tracing his GID. Notably, our scheme only requires standard complexity assumptions (e.g., decisional bilinear Diffie-Hellman) and does not require any cooperation between the multiple authorities, in contrast to the previous comparable scheme, which requires nonstandard complexity assumptions (e.g., q-decisional Diffie-Hellman inversion) and interactions among multiple authorities. To the best of our knowledge, it is the first privacy-preserving decentralized ABE scheme based on standard complexity assumptions.

Journal ArticleDOI
TL;DR: The design has been generalized and applied to both homogeneous and heterogeneous wireless sensor networks and can recover all sensing data even after these data have been aggregated, a property called “recoverable.”
Abstract: Recently, several data aggregation schemes based on privacy homomorphism encryption have been proposed and investigated for wireless sensor networks. These data aggregation schemes provide better security compared with traditional aggregation, since cluster heads (aggregators) can directly aggregate the ciphertexts without decryption; consequently, transmission overhead is reduced. However, the base station only retrieves the aggregated result, not individual data, which causes two problems. First, the usage of aggregation functions is constrained. For example, the base station cannot retrieve the maximum value of all sensing data if the aggregated result is the summation of the sensing data. Second, the base station cannot confirm data integrity and authenticity by attaching message digests or signatures to each sensing sample. In this paper, we attempt to overcome the above two drawbacks. In our design, the base station can recover all sensing data even after these data have been aggregated. This property is called “recoverable.” Experimental results demonstrate that the transmission overhead is still reduced even though our approach makes the sensing data recoverable. Furthermore, the design has been generalized and applied to both homogeneous and heterogeneous wireless sensor networks.

Journal ArticleDOI
TL;DR: A novel Sybil attack detection mechanism, Footprint, uses the trajectories of vehicles for identification while still preserving their location privacy, and can recognize and therefore dismiss “communities” of Sybil trajectories.
Abstract: In urban vehicular networks, where privacy, especially the location privacy of anonymous vehicles, is a major concern, anonymous verification of vehicles is indispensable. Consequently, an attacker who succeeds in forging multiple hostile identities can easily launch a Sybil attack, gaining a disproportionately large influence. In this paper, we propose a novel Sybil attack detection mechanism, Footprint, using the trajectories of vehicles for identification while still preserving their location privacy. More specifically, when a vehicle approaches a road-side unit (RSU), it actively demands an authorized message from the RSU as proof of its appearance time at this RSU. We design a location-hidden authorized message generation scheme with two objectives: first, RSU signatures on messages are signer ambiguous so that the RSU location information is concealed from the resulting authorized message; second, two authorized messages signed by the same RSU within the same given period of time (temporarily linkable) are recognizable so that they can be used for identification. With the temporal limitation on the linkability of two authorized messages, authorized messages used for long-term identification are prohibited. With this scheme, vehicles can generate a location-hidden trajectory for location-privacy-preserved identification by collecting a consecutive series of authorized messages. Utilizing the social relationships among trajectories according to the similarity definition of two trajectories, Footprint can recognize and therefore dismiss “communities” of Sybil trajectories. Rigorous security analysis and extensive trace-driven simulations demonstrate the efficacy of Footprint.

Journal ArticleDOI
TL;DR: This paper proposes predict and relay (PER), an efficient routing algorithm for DTNs, where nodes determine the probability distribution of future contact times and choose a proper next-hop in order to improve the end-to-end delivery probability.
Abstract: Routing is one of the most challenging, open problems in disruption-tolerant networks (DTNs) because of the short-lived wireless connectivity environment. To deal with this issue, researchers have investigated routing based on the prediction of future contacts, taking advantage of nodes' mobility history. However, most of the previous work focused on the prediction of whether two nodes would have a contact, without considering the time of the contact. This paper proposes predict and relay (PER), an efficient routing algorithm for DTNs, where nodes determine the probability distribution of future contact times and choose a proper next-hop in order to improve the end-to-end delivery probability. The algorithm is based on two observations: one is that nodes usually move around a set of well-visited landmark points instead of moving randomly; the other is that node mobility behavior is semi-deterministic and could be predicted once there is sufficient mobility history information. Specifically, our approach employs a time-homogeneous semi-Markov process model that describes node mobility as transitions between landmarks. Then, we extend it to handle the scenario where we consider the transition time between two landmarks. A simulation study shows that this approach improves the delivery ratio and also reduces the delivery latency compared to traditional DTN routing schemes.
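
A stripped-down flavor of the forwarding rule: if a candidate relay is more likely than the current carrier to contact the destination before the message expires, hand the message over. In the paper these probabilities come from a time-homogeneous semi-Markov model over landmarks; here each node's future-contact-time distribution is simply given, as an assumption.

```python
def delivery_prob(contact_time_pmf, ttl):
    """contact_time_pmf: {time_until_contact: probability}."""
    return sum(p for t, p in contact_time_pmf.items() if t <= ttl)

# Probability mass over when each node next meets the destination (assumed).
candidates = {
    "self":   {5: 0.1, 30: 0.2, 90: 0.7},
    "relay1": {5: 0.4, 20: 0.3, 90: 0.3},
    "relay2": {60: 0.9, 90: 0.1},
}
TTL = 25   # remaining message lifetime
best = max(candidates, key=lambda n: delivery_prob(candidates[n], TTL))
if best != "self":
    print("forward to", best)   # relay1: P(contact <= 25) = 0.7 vs. self 0.1
```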

Journal ArticleDOI
TL;DR: This paper presents a novel sleep-scheduling technique called Virtual Backbone Scheduling (VBS), designed for WSNs that have redundant sensor nodes, and proposes approximation algorithms based on the Schedule Transition Graph (STG) and the Virtual Scheduling Graph (VSG).
Abstract: Wireless Sensor Networks (WSNs) are key for various applications that involve long-term and low-cost monitoring and actuating. In these applications, sensor nodes use batteries as the sole energy source. Therefore, energy efficiency becomes critical. We observe that many WSN applications require redundant sensor nodes to achieve fault tolerance and Quality of Service (QoS) of the sensing. However, the same redundancy may not be necessary for multihop communication because of the light traffic load and the stable wireless links. In this paper, we present a novel sleep-scheduling technique called Virtual Backbone Scheduling (VBS). VBS is designed for WSNs that have redundant sensor nodes. VBS forms multiple overlapped backbones that work alternately to prolong the network lifetime. In VBS, traffic is only forwarded by backbone sensor nodes, and the rest of the sensor nodes turn off their radios to save energy. The rotation of multiple backbones ensures that the energy consumption of all sensor nodes is balanced, which fully utilizes the energy and achieves a longer network lifetime compared to existing techniques. The scheduling problem of VBS is formulated as the Maximum Lifetime Backbone Scheduling (MLBS) problem. Since the MLBS problem is NP-hard, we propose approximation algorithms based on the Schedule Transition Graph (STG) and the Virtual Scheduling Graph (VSG). We also present an Iterative Local Replacement (ILR) scheme as a distributed implementation. Theoretical analyses and simulation studies verify that VBS is superior to existing techniques.

Journal ArticleDOI
TL;DR: This paper presents a methodology for producing matrix multiplication kernels tuned for a specific architecture, through a canonical process of heuristic autotuning, based on generation of multiple code variants and selecting the fastest ones through benchmarking.
Abstract: In recent years, the use of graphics chips has been recognized as a viable way of accelerating scientific and engineering applications, even more so since the introduction of the Fermi architecture by NVIDIA, with features essential to numerical computing, such as fast double precision arithmetic and memory protected with error correction codes. Being the crucial component of numerical software packages, such as LAPACK and ScaLAPACK, the general dense matrix multiplication routine is one of the more important workloads to be implemented on these devices. This paper presents a methodology for producing matrix multiplication kernels tuned for a specific architecture, through a canonical process of heuristic autotuning, based on generation of multiple code variants and selecting the fastest ones through benchmarking. The key contribution of this work is in the method for generating the search space; specifically, pruning it to a manageable size. Performance numbers match or exceed other available implementations.
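
The autotuning loop itself is simple and worth seeing end to end: enumerate a (pruned) space of kernel variants, benchmark each on the target, and keep the fastest. The miniature below tunes only a tile size for a pure-Python blocked matrix multiply; the paper generates CUDA kernel variants for Fermi GPUs and prunes a far larger search space.

```python
import time
import random

N = 96
A = [[random.random() for _ in range(N)] for _ in range(N)]
B = [[random.random() for _ in range(N)] for _ in range(N)]

def blocked_matmul(A, B, tile):
    """One code variant per tile size: a blocked i-k-j matrix multiply."""
    C = [[0.0] * N for _ in range(N)]
    for ii in range(0, N, tile):
        for kk in range(0, N, tile):
            for jj in range(0, N, tile):
                for i in range(ii, min(ii + tile, N)):
                    Ai, Ci = A[i], C[i]
                    for k in range(kk, min(kk + tile, N)):
                        a, Bk = Ai[k], B[k]
                        for j in range(jj, min(jj + tile, N)):
                            Ci[j] += a * Bk[j]
    return C

search_space = [8, 16, 32, 48, 96]   # a real autotuner prunes this heuristically
timings = {}
for tile in search_space:
    t0 = time.perf_counter()
    blocked_matmul(A, B, tile)       # benchmark the variant
    timings[tile] = time.perf_counter() - t0
best = min(timings, key=timings.get)
print("fastest tile size:", best, f"({timings[best]:.3f}s)")
```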