Showing papers by "Albert Y. Zomaya" published in 2017


Journal ArticleDOI
TL;DR: A working prototype of the SeDaSC methodology is implemented and its performance is evaluated based on the time consumed during various operations to show that SeDaSC has the potential to be effectively used for secure data sharing in the cloud.
Abstract: Cloud storage is an application of clouds that liberates organizations from establishing in-house data storage systems. However, cloud storage gives rise to security concerns. In case of group-shared data, the data face both cloud-specific and conventional insider threats. Secure data sharing among a group that counters insider threats of legitimate yet malicious users is an important research issue. In this paper, we propose the Secure Data Sharing in Clouds (SeDaSC) methodology that provides: 1) data confidentiality and integrity; 2) access control; 3) data sharing (forwarding) without using compute-intensive reencryption; 4) insider threat security; and 5) forward and backward access control. The SeDaSC methodology encrypts a file with a single encryption key. Two different key shares for each of the users are generated, with the user only getting one share. The possession of a single share of a key allows the SeDaSC methodology to counter the insider threats. The other key share is stored by a trusted third party, which is called the cryptographic server. The SeDaSC methodology is applicable to conventional and mobile cloud computing environments. We implement a working prototype of the SeDaSC methodology and evaluate its performance based on the time consumed during various operations. We formally verify the working of SeDaSC by using high-level Petri nets, the Satisfiability Modulo Theories Library, and a Z3 solver. The results proved to be encouraging and show that SeDaSC has the potential to be effectively used for secure data sharing in the cloud.
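To make the two-share idea concrete, here is a minimal sketch (not the paper's exact construction) of splitting a single symmetric file-encryption key into two XOR shares, one for the user and one for the cryptographic server; the helper names and the 32-byte key size are illustrative assumptions.

```python
# Minimal illustration of splitting one symmetric key into two shares.
# Neither share alone reveals the key; both are needed to reconstruct it.
import os

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a key so that user_share XOR server_share == key."""
    user_share = os.urandom(len(key))
    server_share = bytes(u ^ k for u, k in zip(user_share, key))
    return user_share, server_share

def combine_shares(user_share: bytes, server_share: bytes) -> bytes:
    """Recover the key by XOR-ing the two shares."""
    return bytes(u ^ s for u, s in zip(user_share, server_share))

if __name__ == "__main__":
    key = os.urandom(32)  # hypothetical per-file symmetric key
    user_share, server_share = split_key(key)
    assert combine_shares(user_share, server_share) == key
    print("key reconstructed only when both shares are available")
```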

184 citations


Journal ArticleDOI
TL;DR: A three-tier system architecture is proposed and each tier is mathematically characterized in terms of energy consumption and latency so that the transmission latency and bandwidth burden caused by cloud computing can be effectively reduced.

121 citations


Journal ArticleDOI
TL;DR: A mobile service provisioning architecture named mobile service sharing community is proposed, together with a service composition approach based on the Krill-Herd algorithm, which obtains superior solutions compared with current standard composition methods in mobile environments.
Abstract: The advances in mobile technologies enable mobile devices to perform tasks that are traditionally run by personal computers as well as provide services to others. Mobile users can form a service sharing community within an area by using their mobile devices. This paper highlights several challenges involved in building such service compositions in mobile communities when both service requesters and providers are mobile. To deal with them, we first propose a mobile service provisioning architecture named a mobile service sharing community and then propose a service composition approach by utilizing the Krill-Herd algorithm. To evaluate the effectiveness and efficiency of our approach, we build a simulation tool. The experimental results demonstrate that our approach can obtain superior solutions as compared with current standard composition methods in mobile environments. It can yield near-optimal solutions and has a nearly linear complexity with respect to the problem size.

109 citations


Journal ArticleDOI
TL;DR: A framework of new metrics able to assess performance and energy efficiency of cloud computing communication systems, processes and protocols is proposed and evaluated for the most common data center architectures including fat tree three-tier, BCube, DCell and Hypercube.
Abstract: Cloud computing has become a de facto approach for service provisioning over the Internet. It operates relying on a pool of shared computing resources available on demand and usually hosted in data centers. Assessing performance and energy efficiency of data centers becomes fundamental. Industries use a number of metrics to assess efficiency and energy consumption of cloud computing systems, focusing mainly on the efficiency of IT equipment, cooling and power distribution systems. However, none of the existing metrics is precise enough to distinguish and analyze the performance of data center communication systems from IT equipment. This paper proposes a framework of new metrics able to assess performance and energy efficiency of cloud computing communication systems, processes and protocols. The proposed metrics have been evaluated for the most common data center architectures including fat tree three-tier, BCube, DCell and Hypercube.

104 citations


Journal ArticleDOI
TL;DR: An online algorithm, Lyapunov optimization on time and energy cost (LOTEC), based on the technique of Lyapunov optimization is described, which is able to make control decisions on application offloading by adjusting the two-way tradeoff between average response time and average cost.
Abstract: Cloud computing has become the de facto computing platform for application processing in the era of the Internet of Things (IoT). However, limitations of the cloud model, such as high transmission latency and high costs, are giving birth to a new computing paradigm called edge computing (a.k.a. fog computing). Fog computing aims to move data processing close to the network edge so as to reduce Internet traffic. However, since the servers at the fog layer are not as powerful as the ones in the cloud, there is a need to balance the data processing between the fog and the cloud. Moreover, besides the data offloading issue, the energy efficiency of fog computing nodes has become an increasing concern. Densely deployed fog nodes are a major source of carbon footprint in IoT systems. To reduce the usage of brown energy resources (e.g., energy produced from fossil fuels), green energy is an alternative option. In this paper, we propose employing dual energy sources for supporting the fog nodes, where solar power is the primary energy supply and grid power is the backup supply. Based on that, we present a comprehensive analytic framework for incorporating green energy sources to support the running of IoT and fog computing-based systems, and to handle the tradeoff in terms of average response time and average monetary and energy costs in the IoT. This paper describes an online algorithm, Lyapunov optimization on time and energy cost (LOTEC), based on the technique of Lyapunov optimization. LOTEC provides a quantified near-optimal solution and is able to make control decisions on application offloading by adjusting the two-way tradeoff between average response time and average cost. We evaluate the performance of our proposed algorithm through a number of experiments. Rigorous analysis and simulations demonstrate its performance.
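For context, Lyapunov-optimization algorithms of this kind typically minimize a per-slot drift-plus-penalty bound; the generic form below (symbols are generic, not LOTEC's exact objective) shows how a parameter V trades queue backlog, and hence response time, against average cost.

```latex
\min \;\; \Delta\big(\Theta(t)\big) + V \,\mathbb{E}\big[\, p(t) \mid \Theta(t) \,\big],
\qquad
\Delta\big(\Theta(t)\big) \triangleq \mathbb{E}\big[ L\big(\Theta(t+1)\big) - L\big(\Theta(t)\big) \mid \Theta(t) \big],
\qquad
L\big(\Theta(t)\big) = \tfrac{1}{2} \sum_{i} Q_i(t)^2 ,
```

where the $Q_i(t)$ are queue backlogs, $p(t)$ is the per-slot penalty (here, monetary and energy cost), and a larger $V$ lowers the average cost at the expense of larger backlogs, i.e., longer average response time.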

89 citations


Journal ArticleDOI
TL;DR: A Balanced and file Reuse-Replication Scheduling algorithm for cloud computing environments to optimally schedule scientific application workflows is presented and compared with a state-of-the-art scheduling approach; experiments demonstrate its superior performance.

83 citations


Posted Content
TL;DR: The proposed manifesto identifies the major open challenges in Cloud computing, emerging trends, and impact areas, and offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.
Abstract: The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention of academia, industries, and government bodies. Now, it has emerged as the backbone of modern economy by offering subscription-based services anytime, anywhere following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.

80 citations


Journal ArticleDOI
TL;DR: Follow Me Fog is proposed, a framework supporting a new seamless handover timing scheme among different computation access points when computation offloading is in action so that the offloading service is not interrupted, allowing fog computing to provide interruption-resistant services to mobile IoT devices.
Abstract: Equipped with easy-to-access micro computation access points, the fog computing architecture provides low-latency and ubiquitously available computation offloading services to many simple and cheap Internet of Things devices with limited computing and energy resources. One obstacle, however, is how to seamlessly hand over mobile IoT devices among different computation access points when computation offloading is in action so that the offloading service is not interrupted -- especially for time-sensitive applications. In this article, we propose Follow Me Fog (FMF), a framework supporting a new seamless handover timing scheme among different computation access points. Intrinsically, FMF supports a job pre-migration mechanism, which pre-migrates computation jobs when the handover is expected to happen. Such expectations can be indicated by constantly monitoring received signal strengths. Then we present the design and a prototype implementation of FMF. Our evaluation results demonstrate that FMF can achieve a substantial latency reduction (36.5 percent in our experiment). In conclusion, the FMF design clears a core obstacle, allowing fog computing to provide interruption-resistant services to mobile IoT devices.
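A minimal sketch of the signal-strength-triggered pre-migration idea described above; the threshold value and function names are illustrative assumptions, not FMF's actual API.

```python
# Illustrative trigger: when the serving fog node's RSSI weakens, pre-migrate
# the offloaded job to the strongest candidate node before the handover occurs.
PREMIGRATION_RSSI_DBM = -75.0  # hypothetical trigger threshold

def monitor_and_premigrate(current_node, candidate_nodes, read_rssi, migrate):
    """Poll the serving node's signal strength and pre-migrate when it drops."""
    if read_rssi(current_node) < PREMIGRATION_RSSI_DBM and candidate_nodes:
        target = max(candidate_nodes, key=read_rssi)  # strongest candidate
        migrate(current_node, target)  # copy job state ahead of the handover
        return target
    return current_node
```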

70 citations


Journal ArticleDOI
TL;DR: The proposed rendezvous-based routing protocol is validated through experiments and compared with existing protocols using metrics such as packet delivery ratio, energy consumption, end-to-end latency, and network lifetime.
Abstract: In wireless sensor networks, the sensor nodes find the route towards the sink to transmit data. Data transmission happens either directly to the sink node or through intermediate nodes. As the sensor node has limited energy, it is very important to develop efficient routing techniques to prolong network lifetime. In this paper, we propose a rendezvous-based routing protocol, which creates a rendezvous region in the middle of the network and constructs a tree within that region. There are two different modes of data transmission in the proposed protocol. In Method 1, the tree is directed towards the sink and the source node transmits the data to the sink via this tree, whereas in Method 2, the sink transmits its location to the tree, and the source node gets the sink's location from the tree and transmits the data directly to the sink. The proposed protocol is validated through experiments and compared with existing protocols using metrics such as packet delivery ratio, energy consumption, end-to-end latency, and network lifetime.

67 citations


BookDOI
31 Oct 2017
TL;DR: This paper presents a meta-modelling framework that automates the labor-intensive, time-consuming, and expensive process of manually cataloging and integrating different types of data into a single system.
Abstract: Big Data Storage Models; Big Data Programming Models; Programming Platforms for Big Data Analysis; Big Data Analysis on Clouds; Data Organization and Curation in Big Data; Big Data Query Engines; Unbounded Data Processing; Semantic Data Integration; Linked Data Management; Non-native RDF Storage Engines; Exploratory Ad-hoc Analysis for Big Data; Pattern Matching over Linked Data Streams; Searching the Big Data: Practices and Experiences in Efficiently Querying Knowledge Bases; Management and Analysis of Big Graph Data; Similarity Search in Large-Scale Graph Databases; Big Graphs: Querying, Mining, and Beyond; Link and Graph Mining in the Big Data Era; Granular Social Network Model and Applications; Big Data, IoT and Semantics; SCADA Systems in the Cloud; Quantitative Data Analysis in Finance; Emerging Cost Effective Big Data Architectures; Bringing High Performance Computing to Big Data; Cognitive Computing where Big Data is Driving; Privacy-Preserving Record Linkage for Big Data.

67 citations


Journal ArticleDOI
TL;DR: Numerical and simulation results show that the proposed cooperative communication strategy significantly increases the throughput of vehicular networks, compared with its non-cooperative counterpart, even when the traffic density is low.
Abstract: In this paper, we provide a detailed analysis of the achievable throughput of an infrastructure-based vehicular network with a finite traffic density under a cooperative communication strategy, which explores the combined use of vehicle-to-infrastructure (V2I) communications, vehicle-to-vehicle (V2V) communications, the mobility of vehicles, and cooperation among vehicles and infrastructure to facilitate data transmission. A closed-form expression of the achievable throughput is obtained, which reveals the relationship between the achievable throughput and its major performance-impacting parameters, such as the distance between adjacent infrastructure points, the radio ranges of infrastructure and vehicles, the transmission rates of V2I and V2V communications, and vehicular density. Numerical and simulation results show that the proposed cooperative communication strategy significantly increases the throughput of vehicular networks, compared with its non-cooperative counterpart, even when the traffic density is low. Our results shed light on the optimum deployment of vehicular network infrastructure and the optimum design of cooperative communication strategies in vehicular networks to maximize the throughput.

Journal ArticleDOI
TL;DR: This paper studies how to minimize the inter-DC traffic generated by MapReduce jobs targeting geo-distributed big data, while providing predictable job completion time by applying the chance-constrained optimization technique.
Abstract: Big data analytics has attracted close attention from both industry and academia because of its great benefits in cost reduction and better decision making. With the fast growth of various global services, there is an increasing need for big data analytics across multiple data centers (DCs) located in different countries or regions. This calls for the support of a cross-DC data processing platform optimized for the geo-distributed computing environment. Although some recent efforts have been made for geo-distributed big data analytics, they cannot guarantee predictable job completion time, and would incur excessive traffic over the inter-DC network, which is a scarce resource shared by many applications. In this paper, we study how to minimize the inter-DC traffic generated by MapReduce jobs targeting geo-distributed big data, while providing predictable job completion time. To achieve this goal, we formulate an optimization problem by jointly considering input data movement and task placement. Furthermore, we guarantee predictable job completion time by applying the chance-constrained optimization technique, such that the MapReduce job can finish within a predefined job completion time with high probability. To evaluate the performance of our proposal, we conduct extensive simulations using real traces generated by a set of queries on Hive. The results show that our proposal can reduce inter-DC traffic by 55 percent compared with centralized processing, which aggregates all data at a single data center.
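The chance-constrained formulation mentioned above can be written generically as follows (symbols are illustrative, not the paper's exact model): choose data-movement and task-placement decisions $x$ that minimize inter-DC traffic while the job completion time $T(x)$ stays within the deadline $D$ with probability at least $1-\varepsilon$.

```latex
\min_{x} \;\; \mathrm{Traffic}_{\mathrm{inter\text{-}DC}}(x)
\qquad \text{s.t.} \qquad
\Pr\big[\, T(x) \le D \,\big] \;\ge\; 1 - \varepsilon .
```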

Journal ArticleDOI
TL;DR: An entropic graph-based algorithm that operates in a centralized manner is realized for comparison with the proposed KL divergence-based algorithms, which are shown to achieve comparable performance at a much lower communication cost.
Abstract: In this paper, we focus on detecting a special type of anomaly in wireless sensor network (WSN), which appears simultaneously in a collection of neighboring nodes and lasts for a significant period of time. Existing point-based techniques, in this context, are not very effective and efficient. With the proposed distributed segment-based recursive kernel density estimation, a global probability density function can be tracked and its difference between every two periods of time is continuously measured for decision making. Kullback–Leibler (KL) divergence is employed as the measure and, in order to implement distributed in-network estimation at a lower communication cost, several types of approximated KL divergence are proposed. In the meantime, an entropic graph-based algorithm that operates in a centralized manner is realized for comparison with the proposed KL divergence-based algorithms. Finally, the algorithms are evaluated using a real-world data set, which demonstrates that they are able to achieve comparable performance at a much lower communication cost.
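A minimal, centralized sketch of the underlying idea, assuming NumPy and SciPy are available: kernel density estimates are fitted on two consecutive windows of sensor readings and the KL divergence between them is approximated by Monte Carlo sampling. The distributed recursive estimation and the approximated KL divergences proposed in the paper are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kl_divergence(window_prev, window_curr, n_samples=5000, eps=1e-12):
    """Estimate KL(p_curr || p_prev) between KDEs fitted on two data windows."""
    p_curr = gaussian_kde(window_curr)
    p_prev = gaussian_kde(window_prev)
    samples = p_curr.resample(n_samples)            # draw from the current density
    log_ratio = np.log(p_curr(samples) + eps) - np.log(p_prev(samples) + eps)
    return float(np.mean(log_ratio))

# Flag an anomaly when the divergence between consecutive windows is large.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal_a = rng.normal(0.0, 1.0, 500)
    normal_b = rng.normal(0.0, 1.0, 500)
    shifted = rng.normal(3.0, 1.0, 500)             # simulated anomalous period
    print("normal vs normal :", kl_divergence(normal_a, normal_b))
    print("normal vs shifted:", kl_divergence(normal_a, shifted))
```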

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed method breaks the limitation on the scale of multidimensional data to be factorized and dramatically outperforms the traditional counterparts in terms of both scalability and efficiency.
Abstract: It has long been an important issue in various disciplines to examine massive multidimensional data superimposed with a high level of noise and interference by extracting the embedded multi-way factors. With the quick increases of data scales and dimensions in the big data era, research challenges arise in order to (1) reflect the dynamics of large tensors while introducing no significant distortions in the factorization procedure and (2) handle the influence of noise in sophisticated applications. A hierarchical parallel processing framework over a GPU cluster, namely H-PARAFAC, has been developed to enable scalable factorization of large tensors upon a “divide-and-conquer” theory for Parallel Factor Analysis (PARAFAC). The H-PARAFAC framework incorporates a coarse-grained model for coordinating the processing of sub-tensors and a fine-grained parallel model for computing each sub-tensor and fusing sub-factors. Experimental results indicate that (1) the proposed method breaks the limitation on the scale of multidimensional data to be factorized and dramatically outperforms the traditional counterparts in terms of both scalability and efficiency, e.g., the runtime increases in the order of $n^2$ when the data volume increases in the order of $n^3$, (2) H-PARAFAC has the potential to restrain the influence of significant noise, and (3) H-PARAFAC is far superior to the conventional window-based counterparts in preserving the features of multiple modes of large tensors.

Journal ArticleDOI
TL;DR: The best practices for green computing and the trade-off between green and high-performance policies are debated, and the imminent challenges facing the efficient green operations of emerging IT technologies are discussed.
Abstract: The tremendous increase in global industrial activity has resulted in high utilization of natural energy resources and an increase in global warming over the last few decades. Meanwhile, computing has become a popular utility of modern human lifestyle. With the increased popularity of computing and IT services, the corresponding energy consumption of the IT industry has also increased rapidly. The computing community realizes the importance of green measures and provides technological solutions that lead to its energy-aware operations along with facilitating the same in other IT-enabled industries. Green and sustainable computing practices review the environmental impact of the computing industry to encourage the adoption of practices and technologies for efficient operations. The “Green Computing” paradigm advocates the energy-proportional and efficient usage of computing resources in all emerging technologies, such as Big Data and the Internet of Things (IoT). This article presents a review of green computing techniques amidst the emerging IT technologies that are evident in our society. The best practices for green computing and the trade-off between green and high-performance policies are debated. Further, we discuss the imminent challenges facing the efficient green operations of emerging IT technologies.

Proceedings ArticleDOI
01 Nov 2017
TL;DR: In this paper, the authors propose a novel model for intrusion detection which is based on a dimension reduction algorithm and a classifier, and which can be used as an online machine learning algorithm. The proposed model uses Principal Component Analysis (PCA) to reduce the dimensions of the dataset from a large number of features to a small number.
Abstract: Internet of Things (IoT) devices and services have gained widespread growth in many commercial and mission-critical applications. The devices and services suffer from intrusions, attacks, and malicious activities. To protect valuable data transmitted through IoT networks and users' privacy, intrusion detection systems (IDS) should be developed to match the characteristics of IoT, which require real-time monitoring. This paper proposes a novel model for intrusion detection which is based on a dimension reduction algorithm and a classifier, and which can be used as an online machine learning algorithm. The proposed model uses Principal Component Analysis (PCA) to reduce the dimensions of the dataset from a large number of features to a small number. To develop a classifier, softmax regression and k-nearest-neighbour algorithms are applied and compared. Experimental results using the KDD Cup 99 Data Set show that our proposed model performs optimally in labelling benign behaviours and identifying malicious behaviours. The computing complexity and time performance confirm that the model can be used to detect intrusions in IoT.
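A minimal scikit-learn sketch of the described pipeline, with random placeholder data standing in for KDD Cup 99 (variable names and the number of retained components are illustrative): PCA reduces the feature dimensionality, and softmax regression (multinomial logistic regression) and k-nearest neighbours are trained and compared on the reduced features.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for KDD Cup 99 features (X) and labels (y).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 41))        # 41 features, as in KDD Cup 99
y = rng.integers(0, 2, size=2000)      # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("softmax regression", LogisticRegression(max_iter=1000)),
                  ("k-nearest neighbours", KNeighborsClassifier(n_neighbors=5))]:
    # Scale features, project onto 10 principal components, then classify.
    model = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```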

Journal ArticleDOI
TL;DR: It can be concluded from the experiments that the proposed methods can produce higher quality results than five state-of-the-art methods, which proves the feasibility of incorporating the downsampling process in the spatiotemporal model under the framework of compressed sensing.
Abstract: The fusion of remote sensing images with different spatial and temporal resolutions is needed for diverse Earth observation applications. A small number of spatiotemporal fusion methods that use sparse representation appear to be more promising than weighted- and unmixing-based methods in reflecting abruptly changing terrestrial content. However, none of the existing dictionary-based fusion methods considers the downsampling process explicitly, which is the degradation and sparse observation from high-resolution images to the corresponding low-resolution images. In this paper, the downsampling process is described explicitly under the framework of compressed sensing for reconstruction. With the coupled dictionary to constrain the similarity of sparse coefficients, a new dictionary-based spatiotemporal fusion method is built and named compressed sensing for spatiotemporal fusion, for the spatiotemporal fusion of remote sensing images. To deal with images with a high-resolution difference, typically Landsat-7 and Moderate Resolution Imaging Spectroradiometer (MODIS), the proposed model is performed twice to shorten the gap between the small block size and the large resolution rate. In the experimental procedure, the near-infrared, red, and green bands of Landsat-7 and MODIS are fused with root mean square errors to check the prediction accuracy. It can be concluded from the experiments that the proposed methods can produce higher quality results than five state-of-the-art methods, which proves the feasibility of incorporating the downsampling process in the spatiotemporal model under the framework of compressed sensing.

Journal ArticleDOI
TL;DR: The Two-stage Stochastic Programming Resource Allocator (2SPRA) optimizes resource provisioning for containerized n-tier web services in accordance with fluctuations of incoming workload to accommodate predefined SLOs on response latency.
Abstract: Under today's bursty web traffic, the fine-grained per-container control promises more efficient resource provisioning for web services and better resource utilization in cloud datacenters. In this paper, we present the Two-stage Stochastic Programming Resource Allocator (2SPRA). It optimizes resource provisioning for containerized n-tier web services in accordance with fluctuations of incoming workload to accommodate predefined SLOs on response latency. In particular, 2SPRA is capable of minimizing resource over-provisioning by addressing dynamics of web traffic as workload uncertainty in a native stochastic optimization model. Using the special-purpose OpenOpt optimization framework, we fully implement 2SPRA in Python and evaluate it against three other existing allocation schemes, in Docker-based CoreOS Linux VMs on Amazon EC2. We generate workloads based on four real-world web traces of various traffic variations: AOL, WorldCup98, ClarkNet, and NASA. Our experimental results demonstrate that 2SPRA achieves the minimum resource over-provisioning, outperforming the other schemes. In particular, 2SPRA allocates only 6.16 percent more than the application's actual demand on average and at most 7.75 percent in the worst case. It achieves a 3x further reduction in total resources provisioned compared to other schemes, delivering overall cost savings of 53.6 percent on average and up to 66.8 percent. Furthermore, 2SPRA demonstrates consistency in its provisioning decisions and robust responsiveness against workload fluctuations.
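For reference, two-stage stochastic programs of the kind 2SPRA solves follow the canonical template below (shown generically, not as the paper's exact model): first-stage allocations $x$ are fixed before the workload realisation $\xi$ is known, and second-stage recourse decisions $y$ absorb the mismatch.

```latex
\min_{x \ge 0} \;\; c^{\top} x + \mathbb{E}_{\xi}\big[ Q(x, \xi) \big],
\qquad
Q(x, \xi) \;=\; \min_{y \ge 0} \; q^{\top} y
\quad \text{s.t.} \quad W y \;\ge\; h(\xi) - T x .
```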

Journal ArticleDOI
TL;DR: The efficiency and applicability of the proposed approach are demonstrated via two novel applications: i) predictable auto-scaling policy setting, which highlights the potential of distribution prediction in the consistent definition of cloud elasticity rules; and ii) a distribution-based admission controller which is able to efficiently admit or reject incoming queries based on probabilistic service level agreement compliance goals.
Abstract: Resource usage estimation for managing streaming workload in emerging application domains such as enterprise computing, smart cities, remote healthcare, and astronomy has emerged as a challenging research problem. Such resource estimation for processing continuous queries over streaming data is challenging due to: (i) uncertain stream arrival patterns, (ii) the need to process different mixes of queries, and (iii) varying resource consumption. Existing techniques approximate resource usage for a query as a single point value, which may not be sufficient because it is neither expressive enough nor does it capture the aforementioned nature of streaming workload. In this paper, we present a novel approach of using mixture density networks to estimate the whole spectrum of resource usage as probability density functions. We have evaluated our technique using the Linear Road benchmark and TPC-H in both private and public clouds. The efficiency and applicability of the proposed approach are demonstrated via two novel applications: i) predictable auto-scaling policy setting, which highlights the potential of distribution prediction in the consistent definition of cloud elasticity rules; and ii) a distribution-based admission controller which is able to efficiently admit or reject incoming queries based on probabilistic service level agreement compliance goals.
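A mixture density network of the kind described above outputs, for each workload descriptor $x$, the parameters of a mixture distribution rather than a single point estimate; with Gaussian components (a standard choice, shown here generically) the predicted resource-usage density is

```latex
p(r \mid x) \;=\; \sum_{k=1}^{K} \pi_{k}(x)\, \mathcal{N}\!\big(r \mid \mu_{k}(x), \sigma_{k}^{2}(x)\big),
\qquad \sum_{k=1}^{K} \pi_{k}(x) = 1, \quad \pi_{k}(x) \ge 0 ,
```

where the mixing coefficients $\pi_k(x)$, means $\mu_k(x)$, and variances $\sigma_k^2(x)$ are produced by the network and fitted by maximising the likelihood of observed resource usage.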

Journal ArticleDOI
TL;DR: The impact of smart home scheduling on the electricity market is analyzed with a new smart-home-aware bi-level market model, and a game-theoretic algorithm is proposed to handle the bidirectional influence between both levels.
Abstract: In a smart community infrastructure that consists of multiple smart homes, smart controllers schedule various home appliances to balance energy consumption and reduce electricity bills of customers. In this paper, the impact of smart home scheduling on the electricity market is analyzed with a new smart-home-aware bi-level market model. In this model, the customers schedule home appliances for bill reduction at the community level, whereas aggregators minimize the energy purchasing expense from utilities at the market level, both of which consider the smart home scheduling impacts. A game-theoretic algorithm is proposed to solve this formulation that handles the bidirectional influence between both levels. Compared with the electricity market without smart home scheduling, our proposed infrastructure balances the energy load through reducing the peak-to-average ratio by up to 35.9%, whereas the average customer bill is reduced by up to 34.3%.

Journal ArticleDOI
TL;DR: A quantitative method for abnormal emotion detection on social media is proposed, which automatically captures the correlation between different features of the emotions and saves computation time through batch calculation of the joint probability density of data sets.

Proceedings ArticleDOI
25 Jun 2017
TL;DR: A novel VNF migration algorithm called VNF Real-time Migration (VNF-RM) is developed to lower network latency under dynamically changing resource availability, reducing network latency by up to 70.90% after latency-aware VNF migrations.
Abstract: Network Function Virtualization (NFV) is an emerging network architecture to increase flexibility and agility within operators' networks by placing virtualized services on demand in Cloud data centers (CDCs). One of the main challenges for the NFV environment is how to minimize network latency in rapidly changing network environments. Although many researchers have already studied Virtual Machine (VM) migration and Virtual Network Function (VNF) placement for efficient resource management in CDCs, the VNF migration problem for low network latency among VNFs has not yet been studied, to the best of our knowledge. To address this issue, in this article we i) formulate the VNF migration problem and ii) develop a novel VNF migration algorithm called VNF Real-time Migration (VNF-RM) for lower network latency in dynamically changing resource availability. As a result of experiments, the effectiveness of our algorithm is demonstrated by reducing network latency by up to 70.90% after latency-aware VNF migrations.

BookDOI
01 Jan 2017
TL;DR: Contemporary research efforts mostly focus on health information delivery methods that ensure information exchange within a single BAN, and efforts to interconnect several BANs remotely through servers have been very limited.
Abstract: Conventional healthcare services have seamlessly been integrated with the pervasive computing paradigm, and consequently cost-effective and dependable smart healthcare services and systems have emerged. Currently, the smart healthcare systems use Body Area Networks (BANs) and wearable devices for pervasive health monitoring and Ambient Assisted Living. The BANs use smartphones and several handheld devices to ensure ubiquitous access to healthcare information and services. However, due to the intrinsic architectural limitations in terms of CPU speed, storage, and memory, the mobile and other computing devices seem inadequate to handle the huge volumes of sensor data being generated unceasingly. In addition, the sensor data is highly complex and multi-dimensional. Therefore, integrating the BANs with large-scale and distributed computing paradigms, such as cloud, cluster, and grid computing, is inevitable to handle the processing and storage needs. Moreover, contemporary research efforts mostly focus on health information delivery methods that ensure information exchange within a single BAN; consequently, efforts to interconnect several BANs remotely through servers have been very limited.

Journal ArticleDOI
TL;DR: A centralized approach for the detection of abnormalities, as well as intrusions, such as forgery, insertions, and modifications in the ECG data is presented.
Abstract: The developments and applications of wireless body area networks (WBANs) for healthcare and remote monitoring have brought a revolution in the medical research field. Numerous physiological sensors are integrated in a WBAN architecture in order to monitor any significant changes in normal health conditions. This monitored data are then wirelessly transferred to a centralized personal server (PS). However, this transferred information can be captured and altered by an adversary during communication between the physiological sensors and the PS. Another scenario where changes can occur in the physiological data is an emergency situation, when there is a sudden change in the physiological values, e.g., changes occur in electrocardiogram (ECG) values just before the occurrence of a heart attack. This paper presents a centralized approach for the detection of abnormalities, as well as intrusions, such as forgery, insertions, and modifications in the ECG data. A simplified Markov model-based detection mechanism is used to detect changes in the ECG data. The features are extracted from the ECG data to form a feature set, which is then divided into sequences. The probability of each sequence is calculated, and based on this probability, the system decides whether the change has occurred or not. Our experiments and analyses show that the proposed scheme has a high detection rate for 5% as well as 10% abnormalities in the data set. The proposed scheme also has a higher true negative rate with a significantly reduced running time for both 5% and 10% abnormalities. Similarly, the receiver operating characteristic (ROC) and ROC convex hull have very promising results.
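A minimal sketch of a first-order Markov detection step of the kind described (the feature extraction, model order, and decision threshold used in the paper are not reproduced; the state indices and the threshold below are illustrative): transition probabilities are learned from normal, discretised ECG feature sequences, and a new sequence is flagged when its log-probability under the model falls below a threshold.

```python
import numpy as np

def fit_markov(sequences, n_states, alpha=1.0):
    """Estimate a transition matrix from sequences of discretised ECG features."""
    counts = np.full((n_states, n_states), alpha)   # Laplace smoothing
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_probability(seq, transitions):
    """Log-probability of a sequence under the fitted Markov chain."""
    return float(sum(np.log(transitions[a, b]) for a, b in zip(seq[:-1], seq[1:])))

if __name__ == "__main__":
    normal = [[0, 1, 2, 1, 0, 1, 2, 1], [0, 1, 2, 1, 0, 1, 2, 1]]  # toy "normal" rhythm
    model = fit_markov(normal, n_states=3)
    suspect = [0, 2, 0, 2, 0, 2, 0, 2]                             # toy anomalous sequence
    threshold = -10.0                                              # hypothetical cut-off
    score = log_probability(suspect, model)
    print("anomaly" if score < threshold else "normal", score)
```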

Journal ArticleDOI
TL;DR: This paper models a DC as a cyberphysical system (CPS) to capture the thermal properties exhibited by the DC and proposes a thermal-aware control strategy that uses a high-level centralized controller and a low-level centralized controller to manage and control the thermal status of the cyber components at different levels.
Abstract: Data centers (DCs) contribute toward the prevalent application and adoption of the cloud by providing its architectural and operational foundation. To perform sustainable computation and storage, a DC is equipped with tens of thousands of servers, if not more. It is worth noting that the operational cost of a DC is dominated by the cost spent on energy consumption. In this paper, we model a DC as a cyberphysical system (CPS) to capture the thermal properties exhibited by the DC. All software aspects, such as scheduling, load balancing, and all the computations performed by the devices, are considered the “cyber” component. The supporting infrastructure, such as servers and switches, is modeled as the “physical” component of the CPS. We perform detailed modeling of the thermal characteristics displayed by the major components of the CPS. Moreover, we propose a thermal-aware control strategy that uses a high-level centralized controller and a low-level centralized controller to manage and control the thermal status of the cyber components at different levels. Our proposed strategy is validated and demonstrated by executing it on a real DC workload and comparing it with three existing strategies, i.e., one classical and two thermal-aware strategies. Furthermore, we also perform formal modeling, analysis, and verification of the strategies using high-level Petri nets, the Z language, the Satisfiability Modulo Theories Library (SMT-Lib), and the Z3 solver.

Journal ArticleDOI
TL;DR: The results show that a smaller UL power compensation factor can greatly boost the ASE in the UL of dense SCNs, compared with existing work that does not differentiate LoS and NLoS transmissions.
Abstract: In this paper, we analyze the coverage probability and the area spectral efficiency (ASE) for the uplink (UL) of dense small cell networks (SCNs) considering a practical path loss model incorporating both line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions. Compared with the existing work, we adopt the following novel approaches in this paper: 1) we assume a practical user association strategy (UAS) based on the smallest path loss, or equivalently the strongest received signal strength; 2) we model the positions of both base stations (BSs) and the user equipments (UEs) as two independent homogeneous Poisson point processes; and 3) the correlation of BSs’ and UEs’ positions is considered, thus making our analytical results more accurate. The performance impact of LoS and NLoS transmissions on the ASE for the UL of dense SCNs is shown to be significant, both quantitatively and qualitatively, compared with existing work that does not differentiate LoS and NLoS transmissions. In particular, existing work predicted that a larger UL power compensation factor would always result in a better ASE in the practical range of BS density, i.e., $10^{1}\sim 10^{3}\,\textrm {BSs/km}^{2}$ . However, our results show that a smaller UL power compensation factor can greatly boost the ASE in the UL of dense SCNs, i.e., $10^{2}\sim 10^{3}\,\textrm {BSs/km}^{2}$ , while a larger UL power compensation factor is more suitable for sparse SCNs, i.e., $10^{1}\sim 10^{2}\,\textrm {BSs/km}^{2}$ .
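The UL power compensation factor discussed above is the fractional path-loss compensation exponent in standard uplink power control; in a generic LTE-style formulation (linear scale), a UE experiencing path loss $PL$ transmits at

```latex
P_{\mathrm{tx}} \;=\; \min\!\big\{ P_{\max},\; P_{0} \cdot PL^{\,\epsilon} \big\},
```

where $P_0$ is a target received-power parameter, $P_{\max}$ is the UE's maximum transmit power, and $\epsilon \in [0,1]$ is the compensation factor: $\epsilon = 1$ fully inverts the path loss, while a smaller $\epsilon$ compensates it only partially, which is what the paper finds preferable in dense deployments.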

Journal ArticleDOI
TL;DR: The proposed system, called Delphos, is a decentralized system in which the entire decision-making process is performed within the network, in a collaborative way by the nodes, to accurately predict when the turbine will reach a damage state, thus allowing timely actions on the turbine operation to prevent accidents and reduce maintenance costs and delays in power generation.

Book ChapterDOI
13 Nov 2017
TL;DR: In large-scale evaluations, the hybrid execution model surpasses the performance of the traditional cluster execution model with significantly less execution cost and offers an ideal solution for scientific workflows with complex precedence constraints.
Abstract: In this paper, we present a serverless workflow execution system (DEWE v3) with Function-as-a-Service (FaaS, a.k.a. serverless computing) as the target execution environment. DEWE v3 is designed to address the problems of (1) execution of large-scale scientific workflows and (2) resource underutilization. At its core is our novel hybrid (FaaS and dedicated/local clusters) job dispatching approach, which takes into account the resource consumption patterns of different phases of workflow execution. In particular, the hybrid approach deals with the maximum execution duration limit, memory limit, and storage space limit. DEWE v3 significantly reduces the effort needed to execute large-scale scientific workflow applications on public clouds. We have evaluated DEWE v3 on both AWS Lambda and Google Cloud Functions and demonstrate that FaaS offers an ideal solution for scientific workflows with complex precedence constraints. In our large-scale evaluations, the hybrid execution model surpasses the performance of the traditional cluster execution model with significantly less execution cost.
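A minimal sketch of the hybrid dispatching idea (the limit values, job names, and function names are illustrative assumptions, not DEWE v3's implementation): a workflow job is routed to the FaaS back end only when its estimated duration, memory, and scratch-storage needs fit within per-invocation limits; otherwise it is sent to the dedicated cluster.

```python
from dataclasses import dataclass

# Hypothetical per-invocation limits modelled on typical FaaS quotas.
MAX_DURATION_S = 900       # maximum execution duration
MAX_MEMORY_MB = 3008       # maximum function memory
MAX_TMP_STORAGE_MB = 512   # maximum local scratch space

@dataclass
class Job:
    name: str
    est_duration_s: float
    est_memory_mb: int
    est_tmp_storage_mb: int

def dispatch(job: Job) -> str:
    """Route a workflow job to the FaaS back end or the dedicated cluster."""
    fits_faas = (job.est_duration_s <= MAX_DURATION_S
                 and job.est_memory_mb <= MAX_MEMORY_MB
                 and job.est_tmp_storage_mb <= MAX_TMP_STORAGE_MB)
    return "faas" if fits_faas else "cluster"

if __name__ == "__main__":
    print(dispatch(Job("small_task", 30, 512, 100)))      # -> faas
    print(dispatch(Job("large_task", 1800, 8192, 2048)))  # -> cluster
```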

Journal ArticleDOI
TL;DR: The proposed mode-switched grid-based routing protocol for WSN selects one node per grid as the grid head and builds a routing path to the sink using the active grid heads, improving the network lifetime.
Abstract: A Wireless Sensor Network (WSN) consists of an enormous number of sensor nodes. These sensor nodes sense the changes in physical parameters within the sensing range and forward the information to the sink nodes or the base station. Since sensor nodes are driven by limited-power batteries, prolonging the network lifetime is difficult and very expensive, especially in hostile locations. Therefore, routing protocols for WSN must strategically distribute the dissipation of energy, so as to increase the overall lifetime of the system. Current research trends in areas such as the Internet of Things and fog computing use sensors as the source of data. Therefore, energy-efficient data routing in WSN is still a challenging task for real-time applications. Hierarchical grid-based routing is an energy-efficient method for routing of data packets. This method divides the sensing area into grids and is advantageous in wireless sensor networks to enhance network lifetime. The network is partitioned into virtual equal-sized grids. The proposed mode-switched grid-based routing protocol for WSN selects one node per grid as the grid head. The routing path to the sink is established using grid heads. Grid heads are switched between active and sleep modes alternately. Therefore, not all grid heads take part in the routing process at the same time. This saves energy in grid heads and improves the network lifetime. The proposed method builds a routing path using each active grid head which leads to the sink. For handling the mobile sink movement, the routing path changes only for some grid head nodes which are nearer to the grid in which the mobile sink is currently positioned. Data packets generated at any source node are routed directly through the data-disseminating grid head nodes on the routing path to the sink.