
Showing papers in "IEEE Transactions on Parallel and Distributed Systems in 2015"


Journal ArticleDOI
Xu Chen
TL;DR: This paper designs a decentralized computation offloading mechanism that achieves a Nash equilibrium of the game, quantifies its efficiency ratio over the centralized optimal solution, and demonstrates that the proposed mechanism can achieve efficient computation offloading performance and scale well as the system size increases.
Abstract: Mobile cloud computing is envisioned as a promising approach to augment computation capabilities of mobile devices for emerging resource-hungry mobile applications. In this paper, we propose a game theoretic approach for achieving efficient computation offloading for mobile cloud computing. We formulate the decentralized computation offloading decision making problem among mobile device users as a decentralized computation offloading game. We analyze the structural property of the game and show that the game always admits a Nash equilibrium. We then design a decentralized computation offloading mechanism that can achieve a Nash equilibrium of the game and quantify its efficiency ratio over the centralized optimal solution. Numerical results demonstrate that the proposed mechanism can achieve efficient computation offloading performance and scale well as the system size increases.
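The decentralized mechanism can be pictured as best-response dynamics: users repeatedly revise their offloading decision until no one benefits from changing, at which point the profile is a Nash equilibrium. The toy model below is our illustration, not the paper's exact cost functions: each user pays a fixed local cost, and offloading costs grow with the number of offloaders, mimicking wireless channel congestion.

```python
# Illustrative best-response dynamics for a simple offloading game.
# Hypothetical cost model: user i pays local[i] when computing locally,
# or base * (#offloaders) when offloading (shared-channel congestion).

def best_response_dynamics(local, base=1.0, max_rounds=100):
    n = len(local)
    offload = [False] * n  # start with everyone computing locally
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            others = sum(offload) - (1 if offload[i] else 0)
            cloud_cost = base * (others + 1)   # i's cost if i offloads
            want = cloud_cost < local[i]
            if want != offload[i]:
                offload[i] = want
                changed = True
        if not changed:    # no user can improve: Nash equilibrium reached
            return offload
    return offload

# Users with high local cost offload until congestion makes it unattractive.
print(best_response_dynamics([0.5, 2.0, 3.0, 4.0]))
```

Note that the equilibrium need not be centrally optimal; the paper's efficiency-ratio analysis bounds exactly this gap.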

759 citations


Journal ArticleDOI
TL;DR: This survey presents a synthesized overview of the current state of research on smart grid development, and identifies the current research problems in the areas of cloud-based energy management, information management, and security in smart grid.
Abstract: The fast-paced development of power systems necessitates smart grids to facilitate real-time control and monitoring with bidirectional communication and electricity flows. Future smart grids are expected to provide reliable, efficient, secure, and cost-effective power management with the implementation of distributed architecture. To address these requirements, we provide a comprehensive survey of cloud computing applications for the smart grid architecture in three areas: energy management, information management, and security. In these areas, the utility of cloud computing applications is discussed, along with directions on future opportunities for the development of the smart grid. We also highlight different challenges existing in the conventional smart grid (without cloud application) that can be overcome using the cloud. In this survey, we present a synthesized overview of the current state of research on smart grid development. We also identify the current research problems in the areas of cloud-based energy management, information management, and security in the smart grid.

398 citations


Journal ArticleDOI
TL;DR: This paper makes the first attempt to formally address the problem of authorized data deduplication, and shows that the proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.
Abstract: Data deduplication is an important data compression technique for eliminating duplicate copies of repeating data, and has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing. To better protect data security, this paper makes the first attempt to formally address the problem of authorized data deduplication. Different from traditional deduplication systems, the differential privileges of users are further considered in the duplicate check besides the data itself. We also present several new deduplication constructions supporting authorized duplicate check in a hybrid cloud architecture. Security analysis demonstrates that our scheme is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement a prototype of our proposed authorized duplicate check scheme and conduct testbed experiments using our prototype. We show that our proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.

394 citations


Journal ArticleDOI
TL;DR: This paper adopts a power-law decaying data model verified by real data sets and proposes a random projection-based estimation algorithm for this data model, which requires fewer compressed measurements and greatly reduces the energy consumption.
Abstract: Data collection is a crucial operation in wireless sensor networks. The design of data collection schemes is challenging due to the limited energy supply and the hot spot problem. Leveraging empirical observations that sensory data possess strong spatiotemporal compressibility, this paper proposes a novel compressive data collection scheme for wireless sensor networks. We adopt a power-law decaying data model verified by real data sets and then propose a random projection-based estimation algorithm for this data model. Our scheme requires fewer compressed measurements and thus greatly reduces the energy consumption. It allows a simple routing strategy without much computation and control overhead, which leads to strong robustness in practical applications. Analytically, we prove that it achieves the optimal estimation error bound. Evaluations on real data sets (from the GreenOrbs, IntelLab and NBDC-CTD projects) show that, compared with existing approaches, this new scheme prolongs the network lifetime by $1.5\times$ to $2\times$ for estimation errors of 5-20 percent.
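The power-law model matters because it makes the data compressible: if the sorted coefficient magnitudes decay like $i^{-p}$ with $p>1$, almost all of the signal energy sits in the few largest terms, which is what lets far fewer random measurements suffice. A quick numerical check (parameter values are illustrative, not taken from the paper):

```python
import numpy as np

# Power-law decaying coefficients: the i-th largest magnitude ~ i^(-p).
n, p, k = 1000, 1.5, 50
mags = np.arange(1, n + 1, dtype=float) ** -p

energy = np.sum(mags ** 2)      # total signal energy
kept = np.sum(mags[:k] ** 2)    # energy captured by the k largest terms
rel_err = np.sqrt((energy - kept) / energy)
print(rel_err)  # keeping 5 percent of the terms loses ~1 percent of the norm
```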

263 citations


Journal ArticleDOI
TL;DR: A novel offloading system to design robust offloading decisions for mobile services is proposed; the approach considers the dependency relations among component services and aims to optimize the execution time and energy consumption of executing mobile services.
Abstract: The development of cloud computing and virtualization techniques enables mobile devices to overcome their scarce resource constraints by allowing them to offload computation and migrate several computation parts of an application to powerful cloud servers. A mobile device should judiciously determine whether to offload computation as well as what portion of an application should be offloaded to the cloud. This paper considers a mobile computation offloading problem where multiple mobile services in workflows can be invoked to fulfill their complex requirements, and makes decisions on whether the services of a workflow should be offloaded. Due to the mobility of portable devices, the unstable connectivity of mobile networks can impact the offloading decision. To address this issue, we propose a novel offloading system to design robust offloading decisions for mobile services. Our approach considers the dependency relations among component services and aims to optimize the execution time and energy consumption of executing mobile services. To this end, we also introduce a mobility model and a trade-off fault-tolerance mechanism for the offloading system. A genetic algorithm (GA) based offloading method is then designed and implemented after carefully modifying parts of a generic GA to match our special needs for the stated problem. Experimental results are promising and show near-optimal solutions for all of our studied cases with almost linear algorithmic complexity with respect to the problem size.

261 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel authentication framework with conditional privacy preservation and non-repudiation (ACPN) for VANETs, and introduces public-key cryptography into the pseudonym generation, which enables legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs.
Abstract: In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities via non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel authentication framework with conditional privacy preservation and non-repudiation (ACPN) for VANETs. In ACPN, we introduce public-key cryptography (PKC) to the pseudonym generation, which enables legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKC-based pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.
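As a rough intuition for self-generated pseudonyms (this is our simplified sketch, not the paper's PKC-based construction, which is more involved): a vehicle can derive a fresh pseudonym from its public key plus a nonce, so outsiders cannot link successive pseudonyms, while a party that knows the key material can recompute the link and support non-repudiation.

```python
import hashlib

# Hypothetical pseudonym derivation (illustrative only): hash the
# vehicle's public key together with a fresh nonce. Updating the nonce
# yields a new, outwardly unlinkable pseudonym; a party knowing the
# public key can recompute and verify the link back to the vehicle.

def make_pseudonym(public_key: bytes, nonce: bytes) -> str:
    return hashlib.sha256(public_key + nonce).hexdigest()[:16]

pk = b"vehicle-42-public-key"
p1 = make_pseudonym(pk, b"nonce-1")
p2 = make_pseudonym(pk, b"nonce-2")
print(p1 != p2)  # a fresh nonce yields a different pseudonym
```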

245 citations


Journal ArticleDOI
TL;DR: The proposed EDTM can evaluate the trustworthiness of sensor nodes more precisely and prevent security breaches more effectively, and it outperforms other similar models, e.g., the NBBTE trust model.
Abstract: Trust models have been recently suggested as an effective security mechanism for Wireless Sensor Networks (WSNs). Considerable research has been done on modeling trust. However, most current research work only takes communication behavior into account to calculate sensor nodes' trust values, which is not sufficient for trust evaluation given the prevalence of malicious attacks. In this paper, we propose an Efficient Distributed Trust Model (EDTM) for WSNs. First, according to the number of packets received by sensor nodes, direct trust and recommendation trust are selectively calculated. Then, communication trust, energy trust and data trust are considered during the calculation of direct trust. Furthermore, trust reliability and familiarity are defined to improve the accuracy of recommendation trust. The proposed EDTM can evaluate the trustworthiness of sensor nodes more precisely and prevent security breaches more effectively. Simulation results show that EDTM outperforms other similar models, e.g., the NBBTE trust model.

232 citations


Journal ArticleDOI
TL;DR: This paper presents a unique method of performance analysis and optimization for sparse matrix-vector multiplication (SpMV) on GPU, which has wide adaptability for different types of sparse matrices and is different from existing methods which only adapt to some particular sparseMatrices.
Abstract: This paper presents a unique method of performance analysis and optimization for sparse matrix-vector multiplication (SpMV) on GPU. This method has wide adaptability for different types of sparse matrices and is different from existing methods which only adapt to some particular sparse matrices. In addition, our method does not need additional benchmarks to get optimized parameters, which are calculated directly through the probability mass function (PMF). We make the following contributions. (1) We present a PMF to analyze precisely the distribution pattern of non-zero elements in a sparse matrix. The PMF can provide theoretical basis for the compression of a sparse matrix. (2) Compression efficiency of COO, CSR, ELL, and HYB can be analyzed precisely through the PMF, and combined with the hardware parameters of GPU, the performance of SpMV based on COO, CSR, ELL, and HYB can be estimated. Furthermore, the most appropriate format for SpMV can be selected according to estimated value of the performance. Experiments prove that the theoretical estimated values and the tested values have high consistency. (3) For HYB, the optimal segmentation threshold can be found through the PMF to achieve the optimal performance for SpMV. Our performance modeling and analysis are very accurate. The order of magnitude of the estimated speedup and that of the tested speedup for each of the ten tested sparse matrices based on the three formats COO, CSR, and ELL are the same. The percentage of relative difference between an estimated value and a tested value is less than 20 percent for over 80 percent cases. The performance improvement of our algorithm is also effective. The average performance improvement of the optimal solution for HYB is over 15 percent compared with that of the automatic solution provided by CUSPARSE lib.
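To give a flavor of how the row-length distribution drives format choice (our simplified illustration, not the paper's full PMF model): ELL pads every row to the length of the longest one, so the PMF of non-zeros per row directly determines how much storage, and hence bandwidth, ELL wastes.

```python
from collections import Counter

# ELL stores max_row_len entries for every row; the gap between the
# maximum and the mean row length is pure padding overhead.

def ell_padding_ratio(row_lengths):
    longest = max(row_lengths)
    stored = longest * len(row_lengths)   # entries ELL would store
    nnz = sum(row_lengths)                # entries that are useful
    return (stored - nnz) / stored        # fraction of storage wasted

def row_length_pmf(row_lengths):
    n = len(row_lengths)
    return {length: c / n for length, c in Counter(row_lengths).items()}

uniform = [4, 4, 4, 4]    # regular matrix: ELL fits with zero padding
skewed  = [1, 1, 1, 97]   # one long row: ELL wastes ~74% of its storage
print(ell_padding_ratio(uniform), ell_padding_ratio(skewed))
```

A skewed PMF is exactly the case where HYB's split (ELL for the regular part, COO for the overflow) pays off, which is the segmentation threshold the paper optimizes.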

195 citations


Journal ArticleDOI
TL;DR: The combination of the solutions to TCOV and NCON offers a promising solution to the original MSD problem that balances the load of different sensors and consequently prolongs the network lifetime.
Abstract: Coverage of interest points and network connectivity are two main challenging and practically important issues of Wireless Sensor Networks (WSNs). Although many studies have exploited the mobility of sensors to improve the quality of coverage and connectivity, little attention has been paid to the minimization of sensors' movement, which often consumes the majority of the limited energy of sensors and thus shortens the network lifetime significantly. To fill this gap, this paper addresses the challenges of the Mobile Sensor Deployment (MSD) problem and investigates how to deploy mobile sensors with minimum movement to form a WSN that provides both target coverage and network connectivity. To this end, the MSD problem is decomposed into two sub-problems: the Target COVerage (TCOV) problem and the Network CONnectivity (NCON) problem. We then solve TCOV and NCON one by one and combine their solutions to address the MSD problem. The NP-hardness of TCOV is proved. For a special case of TCOV where targets disperse from each other farther than double the coverage radius, an exact algorithm based on the Hungarian method is proposed to find the optimal solution. For general cases of TCOV, two heuristic algorithms, i.e., the Basic algorithm based on clique partition and the TV-Greedy algorithm based on Voronoi partition of the deployment region, are proposed to reduce the total movement distance of sensors. For NCON, an efficient solution based on the Steiner minimum tree with constrained edge length is proposed. The combination of the solutions to TCOV and NCON, as demonstrated by extensive simulation experiments, offers a promising solution to the original MSD problem that balances the load of different sensors and consequently prolongs the network lifetime.
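For intuition on the TCOV sub-problem, here is a tiny one-sensor-per-target instance (illustrative only; the paper's exact algorithm uses the Hungarian method, which runs in polynomial time, whereas this brute force is only viable for a handful of sensors):

```python
from itertools import permutations
from math import dist, inf

# Toy TCOV special case: assign each target exactly one sensor so that
# the total movement distance is minimized. Enumerating permutations
# reproduces the optimum the Hungarian method would find, for small n.

def min_total_movement(sensors, targets):
    best, best_assign = inf, None
    for perm in permutations(range(len(sensors))):
        total = sum(dist(sensors[perm[j]], targets[j])
                    for j in range(len(targets)))
        if total < best:
            best, best_assign = total, perm
    return best, best_assign

sensors = [(0, 0), (5, 0), (0, 5)]   # hypothetical positions
targets = [(1, 0), (5, 1), (0, 4)]
cost, assign = min_total_movement(sensors, targets)
print(cost, assign)  # each sensor moves 1 unit to its nearest target
```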

167 citations


Journal ArticleDOI
TL;DR: An improved hybrid version of the CRO method called HCRO (hybrid CRO) is developed for solving the DAG-based task scheduling problem, and a new selection strategy is proposed that reduces the chance of cloning before new molecules are generated.
Abstract: Scheduling for directed acyclic graph (DAG) tasks with the objective of minimizing makespan has become an important problem in a variety of applications on heterogeneous computing platforms, which involves making decisions about the execution order of tasks and task-to-processor mapping. Recently, the chemical reaction optimization (CRO) method has proved to be very effective in many fields. In this paper, an improved hybrid version of the CRO method called HCRO (hybrid CRO) is developed for solving the DAG-based task scheduling problem. In HCRO, the CRO method is integrated with the novel heuristic approaches, and a new selection strategy is proposed. More specifically, the following contributions are made in this paper. (1) A Gaussian random walk approach is proposed to search for optimal local candidate solutions. (2) A left or right rotating shift method based on the theory of maximum Hamming distance is used to guarantee that our HCRO algorithm can escape from local optima. (3) A novel selection strategy based on the normal distribution and a pseudo-random shuffle approach are developed to keep the molecular diversity. Moreover, an exclusive-OR (XOR) operator between two strings is introduced to reduce the chance of cloning before new molecules are generated. Both simulation and real-life experiments have been conducted in this paper to verify the effectiveness of HCRO. The results show that the HCRO algorithm schedules the DAG tasks much better than the existing algorithms in terms of makespan and speed of convergence.

144 citations


Journal ArticleDOI
TL;DR: This paper considers a smart power system in which users are equipped with energy storage devices, and proposes two distributed demand side management algorithms executed by users in which each user tries to minimize its energy payment, while still preserving the privacy of users as well as minimizing the amount of required signaling with the central controller.
Abstract: Demand-side management, together with the integration of distributed energy storage, has an essential role in the process of improving the efficiency and reliability of the power grid. In this paper, we consider a smart power system in which users are equipped with energy storage devices. Users request their energy demands from an energy provider who determines their energy payments based on the load profiles of users. By scheduling the energy consumption and storage of users regulated by a central controller, the energy provider tries to minimize the squared Euclidean distance between the instantaneous energy demand and the average demand of the power system. The users intend to reduce their energy payments by jointly scheduling their appliances and controlling the charging and discharging process for their energy storage devices. We apply game theory to formulate the energy consumption and storage game for the distributed design, in which the players are the users and their strategies are the energy consumption schedules for appliances and storage devices. Based on the game theory setup and proximal decomposition, we also propose two distributed demand-side management algorithms executed by users in which each user tries to minimize its energy payment, while still preserving the privacy of users as well as minimizing the amount of required signaling with the central controller. Simulation results show that the proposed algorithms provide optimality for both the energy provider and the users.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed scheme can significantly reduce communication cost compared to the conventional schemes using dense random projections and sparse random projections, indicating that the scheme can be a more practical alternative for data gathering applications in WSNs.
Abstract: In this paper, we study the problem of data gathering with compressive sensing (CS) in wireless sensor networks (WSNs). Unlike the conventional approaches, which require uniform sampling in the traditional CS theory, we propose a random walk algorithm for data gathering in WSNs. However, such an approach must conform to path constraints in networks and results in the non-uniform selection of measurements. It is still unknown whether such a non-uniform method can be used for CS to recover sparse signals in WSNs. In this paper, from the perspectives of CS theory and graph theory, we provide mathematical foundations to allow random measurements to be collected in a random walk based manner. We find that the random matrix constructed from our random walk algorithm can satisfy the expansion property of expander graphs. The theoretical analysis shows that a $k$-sparse signal can be recovered using an $\ell_1$-minimization decoding algorithm when it takes $m = O(k \log(n/k))$ independent random walks with the length of each walk $t = O(n/k)$ in a random geometric network with $n$ nodes. We also carry out simulations to demonstrate the effectiveness of the proposed scheme. Simulation results show that our proposed scheme can significantly reduce communication cost compared to the conventional schemes using dense random projections and sparse random projections, indicating that our scheme can be a more practical alternative for data gathering applications in WSNs.

Journal ArticleDOI
TL;DR: This paper surveys the architectural approaches proposed for designing memory systems and, specifically, caches with emerging memory technologies, and presents a classification of these technologies and architectural approaches based on their key characteristics.
Abstract: Recent trends of CMOS scaling and the increasing number of on-chip cores have led to a large increase in the size of on-chip caches. Since SRAM has low density and consumes a large amount of leakage power, its use in designing on-chip caches has become more challenging. To address this issue, researchers are exploring the use of several emerging memory technologies, such as embedded DRAM, spin transfer torque RAM, resistive RAM, phase change RAM and domain wall memory. In this paper, we survey the architectural approaches proposed for designing memory systems and, specifically, caches with these emerging memory technologies. To highlight their similarities and differences, we present a classification of these technologies and architectural approaches based on their key characteristics. We also briefly summarize the challenges in using these technologies for architecting caches. We believe that this survey will help the readers gain insights into the emerging memory device technologies, and their potential use in designing future computing systems.

Journal ArticleDOI
TL;DR: Large-scale simulations driven by Google cluster traces show that DRFH significantly outperforms the traditional slot-based scheduler, leading to much higher resource utilization with substantially shorter job completion times.
Abstract: We study the multi-resource allocation problem in cloud computing systems where the resource pool is constructed from a large number of heterogeneous servers, representing different points in the configuration space of resources such as processing, memory, and storage. We design a multi-resource allocation mechanism, called DRFH, that generalizes the notion of Dominant Resource Fairness (DRF) from a single server to multiple heterogeneous servers. DRFH provides a number of highly desirable properties. With DRFH, no user prefers the allocation of another user; no one can improve its allocation without decreasing that of the others; and more importantly, no coalition behavior of misreporting resource demands can benefit all its members. DRFH also ensures some level of service isolation among the users. As a direct application, we design a simple heuristic that implements DRFH in real-world systems. Large-scale simulations driven by Google cluster traces show that DRFH significantly outperforms the traditional slot-based scheduler, leading to much higher resource utilization with substantially shorter job completion times.
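For context, the single-server Dominant Resource Fairness that DRFH generalizes can be sketched in a few lines (this is the textbook progressive-filling idea, not the DRFH mechanism itself; demands below reproduce the classic DRF example):

```python
# Plain DRF on one server: repeatedly grant one task's worth of
# resources to the user with the smallest dominant share, i.e., the
# largest fraction of any single resource that user currently holds.

def drf_allocate(capacity, demands, rounds=100):
    n = len(demands)
    used = [0.0] * len(capacity)
    tasks = [0] * n
    for _ in range(rounds):
        shares = [max(tasks[u] * demands[u][r] / capacity[r]
                      for r in range(len(capacity))) for u in range(n)]
        u = shares.index(min(shares))     # most under-served user next
        need = demands[u]
        if any(used[r] + need[r] > capacity[r] for r in range(len(capacity))):
            break                         # server saturated
        for r in range(len(capacity)):
            used[r] += need[r]
        tasks[u] += 1
    return tasks

# 9 CPUs and 18 GB memory; user A tasks need <1 CPU, 4 GB>,
# user B tasks need <3 CPUs, 1 GB>.
print(drf_allocate([9, 18], [(1, 4), (3, 1)]))  # A runs 3 tasks, B runs 2
```

DRFH's contribution is extending this share definition and its fairness properties (envy-freeness, Pareto efficiency, strategy-proofness) across many heterogeneous servers.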

Journal ArticleDOI
TL;DR: HireSome-II can protect cloud privacy, as a cloud is not required to unveil all its transaction records, and significantly reduces the time complexity of developing a cross-cloud service composition plan, since only representative records are recruited, which is essential for big data processing.
Abstract: Cloud computing promises a scalable infrastructure for processing big data applications such as medical data analysis. Cross-cloud service composition provides a concrete approach to large-scale big data processing. However, the complexity of potential compositions of cloud services calls for new composition and aggregation methods, especially when some private clouds refuse to disclose all details of their service transaction records due to business privacy concerns in cross-cloud scenarios. Moreover, the credibility of cross-cloud and online service compositions becomes questionable if a cloud fails to deliver its services according to its "promised" quality. In view of these challenges, we propose a privacy-aware cross-cloud service composition method, named HireSome-II (History record-based Service optimization method), based on its previous basic version HireSome-I. In our method, to enhance the credibility of a composition plan, the evaluation of a service is based on some of its QoS history records, rather than its advertised QoS values. Besides, the $k$-means algorithm is introduced into our method as a data filtering tool to select representative history records. As a result, HireSome-II can protect cloud privacy, as a cloud is not required to unveil all its transaction records. Furthermore, it significantly reduces the time complexity of developing a cross-cloud service composition plan, as only representative records are recruited, which is essential for big data processing. Simulation and analytical results demonstrate the validity of our method compared to a benchmark.
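The record-filtering step can be illustrated with a toy one-dimensional k-means over response times (our sketch; the paper applies k-means to QoS history records so that composition planning works on k representatives instead of the full history):

```python
# Minimal 1-D k-means: cluster latency records and keep one
# representative (the centroid) per cluster. Values are illustrative.

def kmeans_1d(values, k, iters=50):
    # spread initial centers across the sorted values
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[idx].append(v)
        new = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]
        if new == centers:   # converged
            break
        centers = new
    return centers

latencies = [10, 11, 12, 50, 52, 48, 200, 205]
reps = kmeans_1d(latencies, k=3)
print(sorted(reps))  # one representative per latency regime
```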

Journal ArticleDOI
TL;DR: This work formulates the dynamic VM provisioning and allocation problem for the auction-based model as an integer program considering multiple types of resources, and designs truthful greedy and optimal mechanisms for the problem such that the cloud provider provisions VMs based on the requests of the winning users and determines their payments.
Abstract: A major challenging problem for cloud providers is designing efficient mechanisms for virtual machine (VM) provisioning and allocation. Such mechanisms enable the cloud providers to effectively utilize their available resources and obtain higher profits. Recently, cloud providers have introduced auction-based models for VM provisioning and allocation which allow users to submit bids for their requested VMs. We formulate the dynamic VM provisioning and allocation problem for the auction-based model as an integer program considering multiple types of resources. We then design truthful greedy and optimal mechanisms for the problem such that the cloud provider provisions VMs based on the requests of the winning users and determines their payments. We show that the proposed mechanisms are truthful, that is, the users do not have incentives to manipulate the system by lying about their requested bundles of VM instances and their valuations. We perform extensive experiments using real workload traces in order to investigate the performance of the proposed mechanisms. Our proposed mechanisms achieve promising results in terms of revenue for the cloud provider.
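To give a flavor of greedy winner determination in such auctions (a generic bid-density sketch of our own, not the paper's exact mechanism; the critical-value payments that make the greedy rule truthful are omitted):

```python
# Greedy allocation by bid density: sort requests by bid per unit of
# requested resource and admit while capacity lasts. requests[i] is a
# (bid, demand) pair, e.g., demand = number of VM instances requested.

def greedy_winners(requests, capacity):
    order = sorted(range(len(requests)),
                   key=lambda i: requests[i][0] / requests[i][1],
                   reverse=True)
    winners, used = [], 0
    for i in order:
        bid, demand = requests[i]
        if used + demand <= capacity:
            winners.append(i)
            used += demand
    return sorted(winners)

# Four hypothetical bids for VM bundles, 7 instances available.
print(greedy_winners([(10, 4), (9, 2), (5, 1), (8, 5)], capacity=7))
```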

Journal ArticleDOI
TL;DR: This paper proposes a soft real-time fault-tolerant task allocation algorithm (FTAOA) for WSNs that uses the primary/backup (P/B) technique to support fault tolerance, and shows the feasibility and effectiveness of FTAOA.
Abstract: One of the challenging issues for the task allocation problem in wireless sensor networks (WSNs) is distributing sensing tasks rationally among sensor nodes to reduce overall power consumption and ensure these tasks finish before their deadlines. In this paper, we propose a soft real-time fault-tolerant task allocation algorithm (FTAOA) for WSNs that uses the primary/backup (P/B) technique to support fault tolerance. In the proposed algorithm, the construction process of discrete particle swarm optimization (DPSO) adopts a binary matrix encoding form, minimizing task execution time, saving node energy, balancing network load, and defining a fitness function to improve scheduling effectiveness and system reliability. Furthermore, FTAOA employs passive backup copy overlapping technology and can adaptively determine the mode of backup copies by scheduling primary copies as early as possible and backup copies as late as possible. To improve resource utilization, we allocate tasks to the nodes with high performance in terms of load, energy consumption, and failure ratio. Analysis and simulation results show the feasibility and effectiveness of FTAOA. FTAOA strikes a good balance between local exploitation and global exploration and achieves a satisfactory result within a short period of time.

Journal ArticleDOI
TL;DR: This study shows that the $g$-good-neighbor conditional diagnosability of the $k$-ary $n$-cube, a family of popular networks, is several times larger than its classical diagnosability.
Abstract: The diagnosability of a system is defined as the maximum number of faulty processors that the system can guarantee to identify, which plays an important role in measuring the reliability of multiprocessor systems. In 2012, Peng et al. proposed a new measure for the fault diagnosis of systems, namely, $g$-good-neighbor conditional diagnosability. It is defined as the diagnosability of a multiprocessor system under the assumption that every fault-free node has at least $g$ fault-free neighbors, which can measure the reliability of interconnection networks in heterogeneous environments more accurately than traditional diagnosability. The $k$-ary $n$-cube is a family of popular networks. In this study, we first investigate and determine the $R_g$-connectivity of the $k$-ary $n$-cube for $0\le g\le n$. Based on this, we determine the $g$-good-neighbor conditional diagnosability of the $k$-ary $n$-cube under the PMC model and the MM* model for $k\ge 4$, $n\ge 3$ and $0\le g\le n$. Our study shows that the $g$-good-neighbor conditional diagnosability of the $k$-ary $n$-cube is several times larger than its classical diagnosability.

Journal ArticleDOI
TL;DR: This paper proposes two heuristic algorithms, called energy-aware MapReduce scheduling algorithms (EMRSA-I and EMRSA-II), that find the assignments of map and reduce tasks to the machine slots in order to minimize the energy consumed when executing the application.
Abstract: The majority of large-scale data intensive applications executed by data centers are based on MapReduce or its open-source implementation, Hadoop. Such applications are executed on large clusters requiring large amounts of energy, making the energy costs a considerable fraction of the data center’s overall costs. Therefore minimizing the energy consumption when executing each MapReduce job is a critical concern for data centers. In this paper, we propose a framework for improving the energy efficiency of MapReduce applications, while satisfying the service level agreement (SLA). We first model the problem of energy-aware scheduling of a single MapReduce job as an Integer Program. We then propose two heuristic algorithms, called energy-aware MapReduce scheduling algorithms (EMRSA-I and EMRSA-II), that find the assignments of map and reduce tasks to the machine slots in order to minimize the energy consumed when executing the application. We perform extensive experiments on a Hadoop cluster to determine the energy consumption and execution time for several workloads from the HiBench benchmark suite including TeraSort, PageRank, and K-means clustering, and then use this data in an extensive simulation study to analyze the performance of the proposed algorithms. The results show that EMRSA-I and EMRSA-II are able to find near optimal job schedules consuming approximately 40 percent less energy on average than the schedules obtained by a common practice scheduler that minimizes the makespan.
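To make the flavor of slot assignment concrete, here is a generic energy-greedy placement (our simplified stand-in, not EMRSA-I/II; task energies, slot capacities, and the spread-first ordering are all illustrative assumptions):

```python
# Greedy energy-aware assignment: energy[t][s] is the energy of running
# task t on slot s; each slot holds at most `cap` tasks. Tasks with the
# largest energy spread are placed first, since they lose the most from
# a bad placement.

def greedy_assign(energy, cap):
    n_slots = len(energy[0])
    load = [0] * n_slots
    order = sorted(range(len(energy)),
                   key=lambda t: max(energy[t]) - min(energy[t]),
                   reverse=True)
    assign = {}
    for t in order:
        for s in sorted(range(n_slots), key=lambda s: energy[t][s]):
            if load[s] < cap:      # cheapest slot with room left
                assign[t] = s
                load[s] += 1
                break
    return assign

energy = [[2, 9], [3, 4], [8, 1]]  # 3 tasks, 2 slots (hypothetical)
print(greedy_assign(energy, cap=2))
```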

Journal ArticleDOI
TL;DR: This paper forms the problem as a stochastic program that captures service request distribution, server provisioning, energy storage management, generator scheduling, power transactions between smart microgrids, and main grids, and uses the Lyapunov optimization technique to design an operation algorithm.
Abstract: In this paper, we investigate the problem of minimizing energy cost for distributed Internet data centers (IDCs) in smart microgrids while taking system dynamics into consideration. Specifically, IDC operators expect to minimize the long-term energy cost under uncertainties in electricity price, workload, renewable energy generation, and power outage state. First, we formulate the problem as a stochastic program that captures service request distribution, server provisioning, energy storage management, generator scheduling, and power transactions between smart microgrids and main grids. Second, we use the Lyapunov optimization technique to design an operation algorithm, which enables an explicit tradeoff between energy cost saving and battery investment cost. Finally, the effectiveness of the proposed algorithm is evaluated with practical data.

Journal ArticleDOI
TL;DR: The approach is efficient and outperforms a reference algorithm based on optimal traffic light scheduling and does not rely on traffic light or intersection controller facilities, which makes it flexible and applicable to various kinds of intersections.
Abstract: Traffic control at intersections is a key issue and hot research topic in intelligent transportation systems. Existing approaches, including traffic light scheduling and trajectory maneuver, are either inaccurate and inflexible or complicated and costly. More importantly, due to the dynamics of traffic, it is difficult to obtain the optimal solution in real time. Inspired by the emergence of vehicular ad hoc networks, we propose a novel approach to traffic control at intersections. Via vehicle-to-vehicle or vehicle-to-infrastructure communications, vehicles can compete for the privilege of passing the intersection, i.e., traffic is controlled via coordination among vehicles. Such an approach is flexible and efficient. To realize the coordination among vehicles, we first model the problem as a new variant of the classic mutual exclusion problem, and then design both centralized and distributed algorithms to solve the new problem. We conduct extensive simulations to evaluate the performance of our proposed algorithms. The results show that our approach is efficient and outperforms a reference algorithm based on optimal traffic light scheduling. Moreover, our approach does not rely on traffic light or intersection controller facilities, which makes it flexible and applicable to various kinds of intersections.
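The mutual-exclusion view of intersection access can be illustrated with a classic timestamp-ordered contention rule in the spirit of Ricart-Agrawala; this is not the paper's algorithm (which handles a new variant of the problem), and the request tuple format is an assumption.

```python
# Sketch of timestamp-based contention for the intersection "lock"
# (Ricart-Agrawala flavor; the paper's own variant differs in detail).
# Each vehicle broadcasts (lamport_time, vehicle_id); the lexicographically
# smallest request passes first, which gives a total order on requests and
# hence mutual exclusion among vehicles on conflicting trajectories.

def next_to_pass(requests):
    """requests: iterable of (lamport_time, vehicle_id) tuples."""
    return min(requests)[1]

reqs = [(4, "car-B"), (3, "car-C"), (4, "car-A")]
print(next_to_pass(reqs))  # -> car-C
```

Breaking timestamp ties by vehicle ID (the second tuple element) is what makes the order total, so two vehicles that request at the same logical time never both enter the conflict zone.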

Journal ArticleDOI
TL;DR: This paper develops an outsourced policy updating method that enables efficient access control with dynamic policy updating for big data in the cloud, and proposes an efficient and secure method that allows the data owner to check whether the cloud server has updated the ciphertexts correctly.
Abstract: Due to the high volume and velocity of big data, it is an effective option to store big data in the cloud, as the cloud has capabilities of storing big data and processing high volume of user access requests. Attribute-based encryption (ABE) is a promising technique to ensure the end-to-end security of big data in the cloud. However, the policy updating has always been a challenging issue when ABE is used to construct access control schemes. A trivial implementation is to let data owners retrieve the data and re-encrypt it under the new access policy, and then send it back to the cloud. This method, however, incurs a high communication overhead and heavy computation burden on data owners. In this paper, we propose a novel scheme that enables efficient access control with dynamic policy updating for big data in the cloud. We focus on developing an outsourced policy updating method for ABE systems. Our method can avoid the transmission of encrypted data and minimize the computation work of data owners, by making use of the previously encrypted data with old access policies. Moreover, we also propose policy updating algorithms for different types of access policies. Finally, we propose an efficient and secure method that allows the data owner to check whether the cloud server has updated the ciphertexts correctly. The analysis shows that our policy updating outsourcing scheme is correct, complete, secure and efficient.

Journal ArticleDOI
TL;DR: This work focuses on an existing U2IoT architecture to design an aggregated-proof based hierarchical authentication scheme (APHA) for the layered networks, and performs BAN logic formal analysis to prove that the proposed APHA has no obvious security defects.
Abstract: The Internet of Things (IoT) is becoming an attractive system paradigm to realize interconnections through the physical, cyber, and social spaces. During the interactions among the ubiquitous things, security issues become noteworthy, and it is significant to establish enhanced solutions for security protection. In this work, we focus on an existing U2IoT architecture (i.e., unit IoT and ubiquitous IoT), to design an aggregated-proof based hierarchical authentication scheme (APHA) for the layered networks. Concretely, 1) the aggregated-proofs are established for multiple targets to achieve backward and forward anonymous data transmission; 2) the directed path descriptors, homomorphism functions, and Chebyshev chaotic maps are jointly applied for mutual authentication; 3) different access authorities are assigned to achieve hierarchical access control. Meanwhile, the BAN logic formal analysis is performed to prove that the proposed APHA has no obvious security defects, and it is potentially available for the U2IoT architecture and other IoT applications.
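The Chebyshev chaotic maps mentioned in the abstract support Diffie-Hellman-style key agreement because of the semigroup property T_r(T_s(x)) = T_{rs}(x). The sketch below is a minimal numeric illustration of that property over the reals; deployed protocols (including, presumably, APHA) work with the enhanced maps over a finite field, and the parameter values here are arbitrary.

```python
# Chebyshev polynomials via the recurrence T_n = 2x*T_{n-1} - T_{n-2},
# with T_0(x) = 1 and T_1(x) = x. The semigroup property
# T_r(T_s(x)) = T_{rs}(x) enables a Diffie-Hellman-style exchange.

def chebyshev(n, x):
    """Evaluate T_n(x) iteratively."""
    t_prev, t_cur = 1.0, x          # T_0, T_1
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur

x, r, s = 0.3, 5, 7                  # public x; r, s are private exponents
alice_pub = chebyshev(r, x)          # Alice sends T_r(x)
bob_pub = chebyshev(s, x)            # Bob sends T_s(x)
k_alice = chebyshev(r, bob_pub)      # T_r(T_s(x)) = T_{rs}(x)
k_bob = chebyshev(s, alice_pub)      # T_s(T_r(x)) = T_{rs}(x)
print(abs(k_alice - k_bob) < 1e-9)   # -> True
```

Both sides arrive at T_{rs}(x) without ever revealing r or s, which is the building block the abstract combines with directed path descriptors and homomorphic functions for mutual authentication.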

Journal ArticleDOI
TL;DR: A divide-and-conquer strategy with a parallel computing mechanism is adopted in an algorithm called the Community-based Greedy algorithm for mining top-K influential nodes, and a precision analysis is given to show the approximation guarantees of the models.
Abstract: With the proliferation of mobile devices and wireless technologies, mobile social network systems are increasingly available. A mobile social network plays an essential role in the spread of information and influence in the form of “word-of-mouth”. It is a fundamental issue to find a subset of influential individuals in a mobile social network such that targeting them initially (e.g., to adopt a new product) will maximize the spread of the influence (further adoptions of the new product). The problem of finding the most influential nodes is unfortunately NP-hard. It has been shown that a Greedy algorithm with provable approximation guarantees can give a good approximation; however, it is computationally expensive, if not prohibitive, to run the greedy algorithm on a large mobile social network. In this paper, a divide-and-conquer strategy with a parallel computing mechanism is adopted. We first propose an algorithm called the Community-based Greedy algorithm for mining top-K influential nodes. It encompasses two components: dividing the large-scale mobile social network into several communities by taking into account information diffusion, and selecting communities to find influential nodes by dynamic programming. Then, to further improve the performance, we parallelize the influence propagation based on communities and consider the influence propagation crossing communities. Also, we give a precision analysis to show the approximation guarantees of our models. Experiments on real large-scale mobile social networks show that the proposed methods are much faster than previous algorithms while maintaining high accuracy.
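The Greedy algorithm the abstract builds on repeatedly adds the node with the largest marginal influence gain. The sketch below uses a deterministic reachability-based spread as a stand-in for influence; real influence maximization estimates spread by Monte Carlo simulation of a diffusion model (e.g., Independent Cascade), and the toy graph here is an assumption.

```python
# Greedy seed selection with a deterministic spread proxy (illustrative;
# the paper additionally partitions the network into communities and uses
# dynamic programming to pick seeds per community).

def spread(graph, seeds):
    """Count nodes reachable from the seed set (proxy for influence)."""
    seen, stack = set(seeds), list(seeds)
    while stack:
        for v in graph.get(stack.pop(), []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen)

def greedy_top_k(graph, k):
    seeds = []
    for _ in range(k):
        # Add the node with the largest marginal gain in spread.
        best = max(
            (n for n in graph if n not in seeds),
            key=lambda n: spread(graph, seeds + [n]),
        )
        seeds.append(best)
    return seeds

g = {"a": ["b", "c"], "b": ["d"], "c": [], "d": [], "e": ["f"], "f": []}
print(greedy_top_k(g, 2))  # -> ['a', 'e']
```

Each greedy step calls the spread estimator for every candidate node, which is exactly the cost the community-based decomposition attacks: spread need only be evaluated within (much smaller) communities.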

Journal ArticleDOI
TL;DR: Cura is designed to provide a cost-effective solution to efficiently handle MapReduce production workloads that have a significant amount of interactive jobs and implements a globally efficient resource allocation scheme that significantly reduces the resource usage cost in the cloud.
Abstract: This paper presents a new MapReduce cloud service model, Cura, for provisioning cost-effective MapReduce services in a cloud. In contrast to existing MapReduce cloud services such as a generic compute cloud or a dedicated MapReduce cloud, Cura has a number of unique benefits. First, Cura is designed to provide a cost-effective solution to efficiently handle MapReduce production workloads that have a significant amount of interactive jobs. Second, unlike existing services that require customers to decide the resources to be used for the jobs, Cura leverages MapReduce profiling to automatically create the best cluster configuration for the jobs. While the existing models allow only a per-job resource optimization for the jobs, Cura implements a globally efficient resource allocation scheme that significantly reduces the resource usage cost in the cloud. Third, Cura leverages unique optimization opportunities when dealing with workloads that can withstand some slack. By effectively multiplexing the available cloud resources among the jobs based on the job requirements, Cura achieves significantly lower resource usage costs for the jobs. Cura’s core resource management schemes include cost-aware resource provisioning, VM-aware scheduling and online virtual machine reconfiguration. Our experimental results using Facebook-like workload traces show that our techniques lead to more than 80 percent reduction in the cloud compute infrastructure cost with up to 65 percent reduction in job response times.
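Profile-driven cluster sizing, as opposed to customer-chosen sizing, can be sketched as a simple selection over profiled configurations; this is a hypothetical reading of Cura's provisioning step, and the VM-hours cost model is an assumption.

```python
# Hypothetical sketch of profile-driven provisioning: given profiled
# (num_vms, runtime) options for a job, pick the cheapest configuration
# that still meets the job's deadline. Cost = num_vms * runtime (vm-hours)
# is an assumed model, not Cura's actual pricing.

def cheapest_config(profiles, deadline):
    """profiles: list of (num_vms, runtime) pairs from MapReduce profiling."""
    feasible = [(n * t, n) for n, t in profiles if t <= deadline]
    if not feasible:
        raise ValueError("no profiled configuration meets the deadline")
    return min(feasible)[1]

profiles = [(4, 120), (8, 70), (16, 40)]
print(cheapest_config(profiles, deadline=80))  # -> 8
```

The "slack" the abstract mentions shows up here as a looser deadline: relaxing it admits smaller, cheaper configurations, which is what lets a global scheduler multiplex resources across jobs.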

Journal ArticleDOI
TL;DR: This paper proposes a new cloud-based automation architecture for industrial automation, which includes different functionalities from feedback control and telemetry to plant optimization and enterprise management, and focuses on the feedback control layer as the most time-critical and demanding functionality.
Abstract: New cloud services are being developed to support a wide variety of real-life applications. In this paper, we introduce a new cloud service: industrial automation, which includes different functionalities from feedback control and telemetry to plant optimization and enterprise management. We focus our study on the feedback control layer as the most time-critical and demanding functionality. Today’s large-scale industrial automation projects are expensive and time-consuming. Hence, we propose a new cloud-based automation architecture, and we analyze cost and time savings under the proposed architecture. We show that significant cost and time savings can be achieved, mainly due to the virtualization of controllers and the reduction of hardware cost and associated labor. However, the major difficulties in providing cloud-based industrial automation systems are timeliness and reliability. Offering automation functionalities from the cloud over the Internet puts the controlled processes at risk due to varying communication delays and potential failure of virtual machines and/or links. Thus, we design an adaptive delay compensator and a distributed fault tolerance algorithm to mitigate delays and failures, respectively. We theoretically analyze the performance of the proposed architecture when compared to the traditional systems and prove zero or negligible change in performance. To experimentally evaluate our approach, we implement our controllers on commercial clouds and use them to control: (i) a physical model of a solar power plant, where we show that the fault-tolerance algorithm effectively makes the system unaware of faults, and (ii) industry-standard emulation with large injected delays and disturbances, where we show that the proposed cloud-based controllers perform indistinguishably from the best-known counterparts: local controllers.
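The abstract does not specify the adaptive delay compensator's internals; a standard idea in this setting is Smith-predictor-style compensation, where a plant model rolls forward over the controls already in flight so the controller acts on an undelayed estimate. The first-order model and its coefficients below are assumptions for illustration only.

```python
# Generic model-based delay compensation sketch (not the paper's design).
# Assumed discrete first-order plant: x[k+1] = a*x[k] + b*u[k].

def predict_ahead(x, controls, a=0.9, b=0.1):
    """Roll the plant model forward over the controls still in flight,
    producing an estimate of the current (undelayed) plant state."""
    for u in controls:
        x = a * x + b * u
    return x

# Output measured with a 3-step network delay; controls sent meanwhile:
x_delayed, in_flight = 1.0, [0.5, 0.5, 0.5]
print(round(predict_ahead(x_delayed, in_flight), 4))  # -> 0.8645
```

Feeding the predicted state (rather than the stale measurement) to the controller removes the delay from the feedback loop as long as the model is accurate, which is why such compensators are paired with fault-tolerance mechanisms for the remaining model and network uncertainty.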

Journal ArticleDOI
TL;DR: LIBRA, as discussed by the authors, is a lightweight strategy to address the data skew problem among the reducers of MapReduce applications; it neither requires any pre-run sampling of the input data nor prevents the overlap between the map and the reduce stages.
Abstract: MapReduce is an effective tool for parallel data processing. One significant issue in practical MapReduce applications is data skew: the imbalance in the amount of data assigned to each task. This causes some tasks to take much longer to finish than others and can significantly impact performance. This paper presents LIBRA, a lightweight strategy to address the data skew problem among the reducers of MapReduce applications. Unlike previous work, LIBRA does not require any pre-run sampling of the input data or prevent the overlap between the map and the reduce stages. It uses an innovative sampling method which can achieve a highly accurate approximation to the distribution of the intermediate data by sampling only a small fraction of the intermediate data during the normal map processing. It allows the reduce tasks to start copying as soon as the chosen sample map tasks (only a small fraction of map tasks which are issued first) complete. It supports the split of large keys when application semantics permit and the total order of the output data. It considers the heterogeneity of the computing resources when balancing the load among the reduce tasks appropriately. LIBRA is applicable to a wide range of applications and is transparent to the users. We implement LIBRA in Hadoop and our experiments show that LIBRA has negligible overhead and can speed up the execution of some popular applications by up to a factor of 4.
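Once the intermediate key distribution has been approximated from a sample, balancing reducers reduces to choosing range-partition cut points with roughly equal load. The sketch below is a simplified version of that step; LIBRA additionally splits large keys and weights reducers by heterogeneous capacity, neither of which is modeled here.

```python
# Simplified range-partition construction from a sampled key histogram
# (illustrative of LIBRA's balancing step, not its full algorithm).

def cut_points(sample, num_reducers):
    """sample: dict key -> sampled frequency.
    Returns boundary keys; reducer i takes keys <= cuts[i] (last takes rest)."""
    keys = sorted(sample)
    total = sum(sample.values())
    target = total / num_reducers      # ideal load per reducer
    cuts, acc = [], 0
    for k in keys:
        acc += sample[k]
        if acc >= target and len(cuts) < num_reducers - 1:
            cuts.append(k)             # close the current partition here
            acc = 0
    return cuts

sample = {"a": 5, "b": 1, "c": 1, "d": 5, "e": 2}
print(cut_points(sample, 2))  # -> ['c']
```

With the cut at 'c', one reducer receives keys a..c (sampled load 7) and the other d..e (load 7), instead of the skewed split a hash partitioner might produce.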

Journal ArticleDOI
TL;DR: This article proposes a novel community detection algorithm, and introduces two metrics: intra- centrality and inter-centrality, to characterize nodes in communities, based on which an efficient data forwarding algorithm for DTN and a worm containment strategy for OSN are proposed.
Abstract: Community detection is an important issue due to its wide use in designing network protocols such as data forwarding in Delay Tolerant Networks (DTN) and worm containment in Online Social Networks (OSN). However, most of the existing community detection algorithms focus on binary networks. Since most networks are naturally weighted such as DTN or OSN, in this article, we address the problems of community detection in weighted networks, exploit community for data forwarding in DTN and worm containment in OSN, and demonstrate how community can facilitate these network designs. Specifically, we propose a novel community detection algorithm, and introduce two metrics: intra-centrality and inter-centrality, to characterize nodes in communities, based on which we propose an efficient data forwarding algorithm for DTN and a worm containment strategy for OSN. Extensive trace-driven simulation results show that the proposed community detection algorithm, the data forwarding algorithm, and the worm containment strategy significantly outperform existing works.
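The abstract introduces intra-centrality and inter-centrality without defining them; a natural weighted-degree reading (an assumption, possibly differing from the paper's definitions) is how much edge weight a node has inside versus across its community.

```python
# Weighted intra-/inter-community degree as a simple reading of the
# abstract's two centrality metrics (the paper's exact definitions may
# differ). weights: node -> {neighbor: edge weight}; community: node -> label.

def centralities(weights, community):
    intra, inter = {}, {}
    for u, nbrs in weights.items():
        intra[u] = sum(w for v, w in nbrs.items() if community[v] == community[u])
        inter[u] = sum(w for v, w in nbrs.items() if community[v] != community[u])
    return intra, inter

w = {"a": {"b": 2.0, "x": 0.5}, "b": {"a": 2.0}, "x": {"a": 0.5}}
com = {"a": 1, "b": 1, "x": 2}
intra, inter = centralities(w, com)
print(intra["a"], inter["a"])  # -> 2.0 0.5
```

Under this reading, high-intra nodes are good local relays for data forwarding inside a DTN community, while high-inter nodes are the natural targets to patch first when containing a worm that spreads across OSN communities.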

Journal ArticleDOI
TL;DR: A shared authority based privacy-preserving authentication protocol (SAPA) is proposed to address the above privacy issue for cloud storage, and a universal composability model is established to prove that the SAPA theoretically has the design correctness.
Abstract: Cloud computing is an emerging data interactive paradigm in which users' data are remotely stored in an online cloud server. Cloud services provide great conveniences for the users to enjoy the on-demand cloud applications without considering the local infrastructure limitations. During the data accessing, different users may be in a collaborative relationship, and thus data sharing becomes significant to achieve productive benefits. The existing security solutions mainly focus on the authentication to realize that a user's private data cannot be illegally accessed, but neglect a subtle privacy issue during a user challenging the cloud server to request other users for data sharing. The challenged access request itself may reveal the user's privacy no matter whether or not it can obtain the data access permissions. In this paper, we propose a shared authority based privacy-preserving authentication protocol (SAPA) to address the above privacy issue for cloud storage. In the SAPA, 1) shared access authority is achieved by an anonymous access request matching mechanism with security and privacy considerations (e.g., authentication, data anonymity, user privacy, and forward security); 2) attribute based access control is adopted to realize that the user can only access its own data fields; 3) proxy re-encryption is applied to provide data sharing among the multiple users. Meanwhile, the universal composability (UC) model is established to prove that the SAPA theoretically has the design correctness. It indicates that the proposed protocol is attractive for multi-user collaborative cloud applications.

Journal ArticleDOI
TL;DR: COUPON is a novel cooperative sensing and data forwarding framework that considers packets to be spatial-temporally correlated in the forwarding process; the dissemination law of correlated packets is derived, and the cooperative forwarding schemes are shown to achieve a better tradeoff between delivery delay and transmission overhead.
Abstract: Human-carried or vehicle-mounted sensors can be exploited to collect data ubiquitously for building various sensing maps. Most of existing mobile sensing applications consider users reporting and accessing sensing data through the Internet. However, this approach cannot be applied in the scenarios with poor network coverage or expensive network access. Existing data forwarding schemes for mobile opportunistic networks are not sufficient for sensing applications as spatial-temporal correlation among sensory data has not been explored. In order to build sensing maps satisfying specific sensing quality with low delay and energy consumption, we design COUPON, a novel cooperative sensing and data forwarding framework. We first notice that cooperative sensing scheme can eliminate sampling redundancy and hence save energy. Then we design two cooperative forwarding schemes by leveraging data fusion: Epidemic Routing with Fusion (ERF) and Binary Spray-and-Wait with Fusion (BSWF). Different from previous work assuming that all packets are propagated independently, we consider that packets are spatial-temporal correlated in the forwarding process, and derive the dissemination law of correlated packets. Both the theoretic analysis and simulation results show that our cooperative forwarding schemes can achieve better tradeoff between delivery delay and transmission overhead. We also evaluate our proposed framework and schemes with real mobile traces. Extensive simulations demonstrate that the cooperative sensing scheme can reduce the number of samplings by 93 percent compared with the non-cooperative scheme; ERF can reduce the transmission overhead by 78 percent compared with Epidemic Routing (ER); BSWF can increase the delivery ratio by 16 percent, and reduce the delivery delay and transmission overhead by 5 and 32 percent respectively, compared with Binary Spray-and-Wait (BSW).
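The BSWF scheme in the abstract builds on Binary Spray-and-Wait, whose copy-splitting rule is simple to state: a node holding several copies hands over half on each encounter, then waits for the destination once it holds a single copy. The sketch below shows that base rule only; the fusion of correlated packets that BSWF adds on top is not modeled here.

```python
# Binary Spray-and-Wait copy-splitting rule (the base of the abstract's
# BSWF scheme; the data-fusion extension is not modeled).

def binary_spray(copies):
    """Return (kept, handed_over) when a relay encounters another node."""
    if copies <= 1:
        return copies, 0          # wait phase: deliver directly only
    handed = copies // 2          # spray phase: hand over half the copies
    return copies - handed, handed

print(binary_spray(8))  # -> (4, 4)
print(binary_spray(1))  # -> (1, 0)
```

Starting from L copies, the spray phase halves holdings at each encounter, so copies fan out to at most L relays in O(log L) hops per branch, which bounds the transmission overhead that pure Epidemic Routing lacks.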