
Showing papers in "The Journal of Supercomputing in 2017"


Journal ArticleDOI
TL;DR: The results show that the proposed hybrid algorithm (H-FSPSOTC) improved the performance of the clustering algorithm by generating a new subset of more informative features; the algorithm is also compared with other algorithms published in the literature.
Abstract: The text clustering technique is an appropriate method used to partition a huge amount of text documents into groups. Document size affects text clustering by decreasing its performance. Furthermore, text documents contain sparse and uninformative features, which reduce the performance of the underlying text clustering algorithm and increase the computational time. Feature selection is a fundamental unsupervised learning technique used to select a new subset of informative text features to improve the performance of text clustering and reduce the computational time. This paper proposes a hybrid of the particle swarm optimization algorithm with genetic operators for the feature selection problem. K-means clustering is used to evaluate the effectiveness of the obtained feature subsets. The experiments were conducted using eight common text datasets with varying characteristics. The results show that the proposed hybrid algorithm (H-FSPSOTC) improved the performance of the clustering algorithm by generating a new subset of more informative features. The proposed algorithm is compared with other algorithms published in the literature. Finally, the feature selection technique encourages the clustering algorithm to obtain accurate clusters.
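
A minimal Python sketch of the hybrid idea: binary PSO positions serve as feature masks, uniform crossover with the global best and bit-flip mutation act as the genetic operators, and k-means quality drives the fitness. This is illustrative only; the silhouette score stands in for the paper's clustering-quality measure, and every parameter value is an assumption.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

def fitness(mask, X, k=3):
    # cluster on the selected features; higher silhouette = better subset
    if mask.sum() < 2:
        return -1.0
    Xs = X[:, mask.astype(bool)]
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
    return silhouette_score(Xs, labels)

def hybrid_pso_fs(X, n_particles=20, iters=30, w=0.7, c1=1.5, c2=1.5, pm=0.02):
    n_feat = X.shape[1]
    pos = rng.integers(0, 2, (n_particles, n_feat))   # binary feature masks
    vel = rng.normal(0.0, 0.1, (n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, X) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        for i in range(n_particles):
            # standard binary-PSO velocity and sigmoid position update
            vel[i] = (w * vel[i]
                      + c1 * rng.random(n_feat) * (pbest[i] - pos[i])
                      + c2 * rng.random(n_feat) * (gbest - pos[i]))
            pos[i] = (rng.random(n_feat) < 1.0 / (1.0 + np.exp(-vel[i]))).astype(int)
            # genetic operators: uniform crossover with gbest, then bit-flip mutation
            cross = rng.random(n_feat) < 0.5
            pos[i][cross] = gbest[cross]
            flip = rng.random(n_feat) < pm
            pos[i][flip] ^= 1
            f = fitness(pos[i], X)
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i].copy(), f
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest

best_mask = hybrid_pso_fs(rng.random((60, 12)), iters=10)   # toy data: 60 docs, 12 features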

366 citations


Journal ArticleDOI
TL;DR: A new firmware update scheme that utilizes a blockchain technology is proposed to securely check a firmware version, validate the correctness of firmware, and download the latest firmware for the embedded devices.
Abstract: Embedded devices are going to be used extensively in Internet of Things (IoT) environments. Small and tiny IoT devices will operate and communicate with each other without the involvement of users, while their operations must be correct and protected against various attacks. In this paper, we focus on a secure firmware update issue, which is a fundamental security challenge for embedded devices in an IoT environment. A new firmware update scheme that utilizes blockchain technology is proposed to securely check a firmware version, validate the correctness of firmware, and download the latest firmware for embedded devices. In the proposed scheme, an embedded device requests its firmware update from nodes in a blockchain network and gets a response to determine whether its firmware is up to date. If not, the embedded device downloads the latest firmware from a peer-to-peer firmware sharing network of the nodes. Even when the version of the firmware is up to date, its integrity, i.e., the correctness of the firmware, is checked. The proposed scheme guarantees that the embedded device's firmware is up to date and has not been tampered with. Attacks targeting known vulnerabilities in the firmware of embedded devices are thus mitigated.
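
A toy Python sketch of the check-validate-download flow. The ledger layout, record fields, and download hook are hypothetical stand-ins, not the paper's protocol messages:

import hashlib

# hypothetical on-chain records: each entry stores (version, SHA-256 of the firmware image)
ledger = [
    {"version": "1.0.0", "fw_hash": hashlib.sha256(b"fw-1.0.0").hexdigest()},
    {"version": "1.1.0", "fw_hash": hashlib.sha256(b"fw-1.1.0").hexdigest()},
]

def check_and_update(device_version, device_image, download):
    # returns a firmware image that is both up to date and integrity-checked
    latest = ledger[-1]                     # blockchain nodes answer with the newest record
    if device_version == latest["version"]:
        # even when up to date, verify integrity against the on-chain hash
        if hashlib.sha256(device_image).hexdigest() == latest["fw_hash"]:
            return device_image
    # otherwise fetch from the P2P sharing network and validate before installing
    image = download(latest["version"])
    if hashlib.sha256(image).hexdigest() != latest["fw_hash"]:
        raise ValueError("downloaded firmware is tampered")
    return image

new_image = check_and_update("1.0.0", b"fw-1.0.0", lambda v: b"fw-" + v.encode())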

276 citations


Journal ArticleDOI
TL;DR: A modified Stable Election Protocol (SEP), named Prolong-SEP (P-SEP), is presented to prolong the stable period of fog-supported sensor networks by maintaining balanced energy consumption.
Abstract: Energy efficiency is one of the main issues that will drive the design of fog-supported wireless sensor networks (WSNs). Indeed, the behavior of such networks becomes very unstable under node heterogeneity and/or node failure. In WSNs, clusters are dynamically built up by neighbor nodes to save energy and prolong the network lifetime. One of the nodes plays the role of Cluster Head (CH), responsible for transferring data among the neighboring sensors. Due to the pervasive use of WSNs, finding an energy-efficient policy to select CHs has become increasingly important. Motivated by this, in this paper a modified Stable Election Protocol (SEP), named Prolong-SEP (P-SEP), is presented to prolong the stable period of fog-supported sensor networks by maintaining balanced energy consumption. P-SEP enables uniform node distribution and a new CH selection policy, and prolongs the stable period of the network, i.e., the time interval before the failure of the first node. P-SEP considers two levels of node heterogeneity: advanced and normal nodes. In P-SEP, both advanced and normal nodes have the opportunity to become CHs. The performance of the proposed approach is evaluated by varying the various parameters of the network in comparison with other state-of-the-art cluster-based routing protocols. The simulation results point out that, by varying the initial energy and node heterogeneity parameters, the network lifetime of P-SEP improves by 31, 29, 20, and 40% in comparison with SEP, Low-Energy Adaptive Clustering Hierarchy with Deterministic Cluster-Head Selection (LEACH-DCHS), Modified SEP (M-SEP), and an efficient modified SEP (EM-SEP), respectively.
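
For context, these are the weighted CH-election probabilities of the original SEP on which P-SEP builds, where m is the fraction of advanced nodes carrying a times more initial energy than normal nodes (this is the standard SEP formulation, not P-SEP's modified policy):

p_{\mathrm{nrm}} = \frac{p_{\mathrm{opt}}}{1 + a\,m}, \qquad
p_{\mathrm{adv}} = \frac{p_{\mathrm{opt}}\,(1 + a)}{1 + a\,m},

and a node s in the eligible set G (nodes not elected in the last 1/p_s rounds) becomes CH in round r when a uniform random draw falls below the threshold

T(s) = \frac{p_s}{1 - p_s \left( r \bmod \frac{1}{p_s} \right)}, \quad s \in G.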

243 citations


Journal ArticleDOI
TL;DR: The paper presents opportunities of the digital transformation of business as the changes associated with the application of digital technology in all aspects of business.
Abstract: The paper presents opportunities of the digital transformation of business, understood as the changes associated with the application of digital technology in all aspects of business. Research on digital business found that maturing digital businesses are focused on integrating digital technologies, such as social, mobile, analytics/big data, and cloud, in the service of transforming how businesses work. The ability to digitally reimagine the business is determined in large part by a clear digital strategy supported by leaders who foster a culture able to change and invent the new. Unique to digital transformation is that risk taking is becoming a cultural norm, as more digitally advanced companies seek new levels of competitive advantage. Companies where big data, cloud, mobile, and social technologies are critical parts of the infrastructure report that these technologies are, or will soon be, profitable; on average, such companies had higher revenues and achieved bigger market valuations than competitors without a strong digital vision. As with any emerging technology, however, there are significant challenges associated with cloud, mobile, social, and big data initiatives. The survey suggests that the primary risks preventing their wider adoption are data security issues, lack of interoperability with existing IT systems, and lack of control.

220 citations


Journal ArticleDOI
TL;DR: This work proposes an ultra-lightweight mutual authentication protocol that uses only bitwise operations and is thus very efficient in terms of storage and communication cost, with very low computation overhead.
Abstract: The Internet of Things (IoT) is an evolving architecture which connects multiple devices to the Internet for communication or for receiving updates from a cloud or a server. In the future, the number of these connected devices will increase immensely, making them an indistinguishable part of our daily lives. Although these devices make our lives more comfortable, they also put our personal information at risk. Therefore, the security of these devices is also a major concern today. In this paper, we propose an ultra-lightweight mutual authentication protocol which uses only bitwise operations and is thus very efficient in terms of storage and communication cost. In addition, the computation overhead is very low. We have also compared our proposed work with existing ones; the obtained results are promising and verify the strength of our protocol. A brief cryptanalysis of our protocol that ensures untraceability is also presented.
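
A toy Python sketch in the spirit of ultra-lightweight designs, using only XOR and rotation. The message flow, constants, and key usage here are invented for illustration; they are not the paper's protocol and offer no real security:

import secrets

def rot(x, n, bits=32):
    # left-rotate a 32-bit word: the kind of bitwise primitive such protocols rely on
    n %= bits
    return ((x << n) | (x >> (bits - n))) & (2**bits - 1)

K, ID = 0x5A5A1234, 0xC0FFEE01          # shared secret and pseudonym from enrolment

def reader_challenge():
    n1 = secrets.randbits(32)
    A = rot(ID ^ n1, K & 31) ^ K        # hide the nonce using only XOR and rotation
    return n1, A

def tag_respond(A):
    n1 = rot(A ^ K, -(K & 31) % 32) ^ ID   # recover n1, proving knowledge of K and ID
    B = rot(K, n1 & 31) ^ n1               # response the reader can recompute
    return n1, B

n1, A = reader_challenge()
n1_tag, B = tag_respond(A)
assert n1_tag == n1 and B == rot(K, n1 & 31) ^ n1   # mutual verification succeeds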

209 citations


Journal ArticleDOI
TL;DR: An energy-efficient cluster-based dynamic routes adjustment approach (EECDRA) is proposed that aims to minimize the route-reconstruction cost of the sensor nodes while maintaining nearly optimal routes to the latest location of the mobile sinks.
Abstract: In wireless sensor networks (WSNs), sensor nodes near a static sink have more traffic load to forward, and the network lifetime is largely reduced. This problem is referred to as the hotspot problem. Recently, adopting sink mobility has been considered a good strategy to overcome the hotspot problem. Despite its many advantages, due to the dynamic network topology caused by sink mobility, data transmission to the mobile sink is a challenging task. To achieve efficient data dissemination, nodes need to reconstruct their routes toward the latest location of the mobile sink, which weakens the energy conservation aim. In this paper, we propose an energy-efficient cluster-based dynamic routes adjustment approach (EECDRA) which aims to minimize the route-reconstruction cost of the sensor nodes while maintaining nearly optimal routes to the latest location of the mobile sinks. The network is divided into several equal clusters, and cluster heads are selected within each cluster. We also set communication rules that manage the route-reconstruction process so that only a limited number of nodes need to readjust their data delivery routes toward the mobile sinks. Simulation results show that reducing route reconstruction for mobile sinks improves the energy efficiency and prolongs the lifetime of the wireless sensor network.

145 citations


Journal ArticleDOI
TL;DR: An intrusion detection system based on a decision tree, using analysis of behavior information to detect APT attacks that intelligently change after intrusion into a system, is proposed.
Abstract: Due to the rapid growth of communications and networks, cyber-attacks with malicious code have emerged as a new paradigm in the information security area over the last few years. In particular, advanced persistent threat (APT) attacks are raising serious social issues. An APT attack uses social engineering methods to target various systems for intrusion. It breaks down the security of the target system to leak information or to destroy the system, inflicting monetary damage on the target. APT attacks use relatively simple techniques such as spear phishing for the initial intrusion, but afterwards a back door is created to leak information over the long term, and the malicious code spreads by analyzing the internal network. In this paper, we propose an intrusion detection system based on a decision tree that uses analysis of behavior information to detect APT attacks that intelligently change after intrusion into a system. Furthermore, it can detect a possible initial intrusion and minimize the damage by quickly responding to APT attacks.

104 citations


Journal ArticleDOI
TL;DR: A queuing mathematical and analytical model is presented to study and analyze the performance of a fog computing system, with derived formulas for key performance metrics including system response time, system loss rate, system throughput, CPU utilization, and the mean number of message requests.
Abstract: It is predicted that by the year 2020 more than 50 billion devices will be connected to the Internet. Traditionally, cloud computing has been used as the preferred platform for aggregating, processing, and analyzing IoT traffic. However, the cloud may not be the preferred platform for IoT devices in terms of responsiveness and immediate processing and analysis of IoT data and requests. For this reason, fog or edge computing has emerged to overcome such problems, whereby fog nodes are placed in close proximity to IoT devices. Fog nodes are primarily responsible for the local aggregation, processing, and analysis of the IoT workload, thereby resulting in notable gains in performance and responsiveness. One of the open issues and challenges in the area of fog computing is efficient scalability, in which a minimal number of fog nodes is allocated based on the IoT workload such that the SLA and QoS parameters are satisfied. To address this problem, we present a queuing mathematical and analytical model to study and analyze the performance of a fog computing system. Our mathematical model determines, under any offered IoT workload, the number of fog nodes needed so that the QoS parameters are satisfied. From the model, we derive formulas for key performance metrics, which include system response time, system loss rate, system throughput, CPU utilization, and the mean number of message requests. Our analytical model is cross-validated using discrete-event simulations.
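
As one hedged example of the closed forms such a model yields, treating a single fog node as an M/M/1/K queue (arrival rate \lambda, service rate \mu, buffer size K; the paper's multi-node model is more elaborate) gives

\rho = \frac{\lambda}{\mu}, \qquad
\pi_k = \frac{(1-\rho)\,\rho^{k}}{1-\rho^{K+1}}, \qquad
P_{\mathrm{loss}} = \pi_K, \qquad
\gamma = \lambda\,(1-P_{\mathrm{loss}}),

so CPU utilization is 1 - \pi_0, throughput is \gamma, and Little's law gives the mean response time W = \left( \sum_{k=0}^{K} k\,\pi_k \right) / \gamma.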

95 citations


Journal ArticleDOI
TL;DR: This paper aims to design a new cloud service selection model under the fuzzy environment by utilizing the analytical hierarchy process (AHP) and fuzzy technique for order preference by similarity to ideal solution (TOPSIS).
Abstract: Cloud service selection plays a crucial role in on-demand service selection on a subscription basis. As a result of the wide availability of cloud services with similar functionalities, it is crucial to determine which service best addresses the user's desires and objectives. This paper aims to design a new cloud service selection model under a fuzzy environment by utilizing the analytical hierarchy process (AHP) and the fuzzy technique for order preference by similarity to ideal solution (TOPSIS). The AHP method is employed to structure the cloud service selection problem and to derive the criteria weights using pairwise comparisons, and the TOPSIS method produces the final ranking of the solutions. In our proposed model, non-functional quality-of-service requirements are taken into consideration for selecting the appropriate service. Furthermore, the proposed model exploits a set of pre-defined linguistic variables, parameterized by triangular fuzzy numbers, for evaluating the criteria weights. The experimental results obtained using real-time cloud service domains prove the efficacy of our proposed model and demonstrate its effectiveness by inducing better performance when compared against other available cloud service selection algorithms. Finally, a sensitivity analysis is performed to confirm the robustness of our proposed model.
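
As a hedged illustration of the TOPSIS ranking step, here is a minimal crisp (non-fuzzy) Python sketch; the paper operates on triangular fuzzy numbers with AHP-derived weights, and every score and weight below is a made-up value:

import numpy as np

def topsis(scores, weights, benefit):
    # rank alternatives (rows) over criteria (columns); benefit[j]=True if higher is better
    v = scores / np.linalg.norm(scores, axis=0) * weights   # normalize, then weight
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)          # relative closeness: higher = better service

# three cloud services scored on (availability %, response time ms, cost $/h)
scores = np.array([[99.90, 120, 0.10],
                   [99.50,  80, 0.08],
                   [99.99, 200, 0.15]])
weights = np.array([0.5, 0.3, 0.2])          # e.g., from AHP pairwise comparisons
closeness = topsis(scores, weights, np.array([True, False, False]))
print(closeness.argsort()[::-1])             # indices from best to worst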

89 citations


Journal ArticleDOI
TL;DR: This special issue highlights the importance of exascale computing for the maintenance of US leadership over the coming decades; it is for this reason that the United States is making strategic investments in HPC to meet increasing computing demands and emerging technological challenges.
Abstract: High-performance computing (HPC) is nowadays an essential tool for the solution of many problems that arise in both scientific and engineering realms. HPC platforms are based on clusters of multicore nodes, and half of these facilities around the world also include some type of accelerator device such as graphics processing units (GPUs) or the Intel Xeon Phi coprocessor. Much research effort is devoted to optimizing applications to get the most out of these configurations. At the same time, research on the HPC ecosystem (hardware, software tools, applications, etc.) is in the spotlight. In particular, exascale computing is receiving major interest. The White House Office of Science and Technology Policy highlights the importance of exascale computing for the maintenance of US leadership over the coming decades, and it is for this reason that the United States is making strategic investments in HPC to meet increasing computing demands and emerging technological challenges. Current and future research faces the natural problems that arise when concurrent resources become very large: huge electrical consumption, heat dissipation, and probability of failure, among others. Many problems arise as we proceed on the way to developing exascale systems. One of them is the increase in failure rates. This special issue presents work addressing these challenges.

87 citations


Journal ArticleDOI
TL;DR: An application-aware cloudlet selection strategy for multi-cloudlet scenario that can balance the load of the system by distributing the processes to be offloaded in various cloudlets, and the mathematical models of total power consumption and delay for the proposed strategy are developed.
Abstract: Latency- and power-aware offloading is a promising issue in the field of mobile cloud computing today. To provide latency-aware offloading, the concept of the cloudlet has evolved. However, offloading an application to the most appropriate cloudlet is still a major challenge. This paper proposes an application-aware cloudlet selection strategy for the multi-cloudlet scenario. Different cloudlets are able to process different types of applications. When a request comes from a mobile device for offloading a task, the application type is verified first. According to the application type, the most suitable cloudlet is selected among the multiple cloudlets present near the mobile device. By offloading computation using the proposed strategy, the energy consumption of mobile terminals can be reduced and the latency of application execution decreased. Moreover, the proposed strategy can balance the load of the system by distributing the processes to be offloaded across various cloudlets. Consequently, the probability of putting the entire load on a single cloudlet is reduced, which balances the load. The proposed algorithm is implemented in the mobile cloud computing laboratory of our university. In the experimental analyses, sorting and searching processes, numerical operations, games, and web services are considered as the tasks to be offloaded to the cloudlets based on the application type. The delays involved in offloading various applications to the cloudlets located at the university laboratory using the proposed algorithm are presented. The mathematical models of total power consumption and delay for the proposed strategy are also developed in this paper.

Journal ArticleDOI
TL;DR: A cloud implementation (developed using Apache Spark) of the popular K-means algorithm for unsupervised hyperspectral image clustering is presented, and the experimental results suggest that cloud architectures allow for the efficient distributed processing of large hyperspectral image data sets.
Abstract: Remotely sensed hyperspectral imaging offers the possibility to collect hundreds of images, at different wavelength channels, for the same area on the surface of the Earth. Hyperspectral images are characterized by their large volume and dimensionality, which makes their processing and storage difficult. As a result, several techniques have been developed in previous years to perform hyperspectral image analysis on high-performance computing architectures. However, the application of cloud computing techniques has not been as widespread. There are many potential advantages in exploiting cloud computing architectures for distributed hyperspectral image analysis. In this paper, we present a cloud implementation (developed using Apache Spark) of the popular K-means algorithm for unsupervised hyperspectral image clustering. The experimental results suggest that cloud architectures allow for the efficient distributed processing of large hyperspectral image data sets.
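
A minimal sketch of this kind of pipeline with the pyspark.ml API; the pixel vectors, band count, and k below are placeholders, and the paper's implementation details may differ:

from pyspark.sql import SparkSession
from pyspark.ml.clustering import KMeans
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("hsi-kmeans").getOrCreate()

# hypothetical input: each row is one pixel's spectral signature (one value per band)
pixels = [(Vectors.dense([0.12, 0.30, 0.45]),),
          (Vectors.dense([0.10, 0.28, 0.43]),),
          (Vectors.dense([0.80, 0.75, 0.20]),)]
df = spark.createDataFrame(pixels, ["features"])

# Spark distributes the distance computations and centroid updates across the cluster
model = KMeans(k=2, seed=1, featuresCol="features").fit(df)
model.transform(df).show()      # adds a "prediction" column with the cluster label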

Journal ArticleDOI
TL;DR: A genetic algorithm was used to achieve global optimization with regard to the service level agreement, service clustering was used to reduce the search space of the problem, and association rules were used to recommend composite services based on their usage histories, enhancing service composition efficiency.
Abstract: One of the requirements of QoS-aware service composition in a cloud computing environment is that it should be executed on the fly. This requires a trade-off between optimality and the execution speed of service composition. In line with this purpose, many researchers have used combinatorial methods in previous works to achieve optimality within the shortest possible time. However, due to the ever-increasing number of services, which leads to the enlargement of the search space of the problem, previous methods do not have adequate efficiency in composing the required services within a reasonable time. In this paper, a genetic algorithm was used to achieve global optimization with regard to the service level agreement. Moreover, service clustering was used to reduce the search space of the problem, and association rules were used to recommend composite services based on their usage histories, to enhance service composition efficiency. The conducted experiments confirmed the higher efficiency of the proposed method in comparison with similar related works.
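
A minimal Python sketch of the GA core: a chromosome picks one concrete service per abstract task, and the fitness rewards aggregate QoS while penalizing SLA violations. The candidate pool, QoS triples, SLA bound, and scoring weights are all made-up values; the paper's clustering and association-rule stages are omitted:

import random

# for each abstract task, candidate services with (response_time, cost, availability)
candidates = [
    [(120, 0.10, 0.99), (90, 0.15, 0.97)],    # task 0
    [(200, 0.05, 0.95), (150, 0.08, 0.99)],   # task 1
]
SLA_MAX_TIME = 320                             # assumed agreed total response time

def fitness(chrom):
    qos = [candidates[t][g] for t, g in enumerate(chrom)]
    time = sum(q[0] for q in qos)
    cost = sum(q[1] for q in qos)
    avail = 1.0
    for q in qos:
        avail *= q[2]
    penalty = 1000 if time > SLA_MAX_TIME else 0   # SLA violation dominates the score
    return avail * 100 - cost * 10 - time * 0.1 - penalty

def ga(pop_size=20, gens=50, pm=0.1):
    pop = [[random.randrange(len(c)) for c in candidates] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]               # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(candidates))     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:                       # mutation: reselect a service
                i = random.randrange(len(candidates))
                child[i] = random.randrange(len(candidates[i]))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

print(ga())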

Journal ArticleDOI
TL;DR: A reliable IoT-based wireless video surveillance system that provides an optimal bandwidth distribution and allocation to minimize the overall surveillance video distortion and achieves high scalability is designed and evaluated.
Abstract: Large-scale video surveillance systems are among the necessities for securing our life these days. The high bandwidth demand and the large storage requirements are the main challenges in such systems. To face these challenges, the system can be deployed as a multi-tier framework that utilizes different technologies. In such a framework, technologies proposed under the umbrella of the Internet of Things (IoT) can play a significant role in facing the challenges. In video surveillance, the cameras can be considered as "the things" that are streaming videos to a central processing and storage server (the cloud) through the Internet. Wireless technologies can be used to connect wireless cameras to the surveillance system more conveniently than wired cameras. Unfortunately, wireless communication in general tends to have limited bandwidth that needs careful management to achieve scalability. In this paper, we design and evaluate a reliable IoT-based wireless video surveillance system that provides an optimal bandwidth distribution and allocation to minimize the overall surveillance video distortion. We evaluate our system using NS-3 simulation. The results show that the proposed framework fully utilizes the available cloud bandwidth budget and achieves high scalability.

Journal ArticleDOI
TL;DR: The proposed protocol, called defending against wormhole attack (DAWA), employs a fuzzy logic system and an artificial immune system to defend against wormhole attacks and outperforms other existing solutions in terms of false negative ratio, false positive ratio, detection ratio, packet delivery ratio, packet loss ratio, and packet drop ratio.
Abstract: Mobile ad hoc networks (MANETs) are mobile networks that are automatically deployed over a geographically limited region, without requiring any preexisting infrastructure. Nodes are mostly both self-governed and self-organized, without requiring central monitoring. Because of their distributed nature, MANETs are vulnerable to a particular routing misbehavior called the wormhole attack. In a wormhole attack, one attacker node tunnels packets from its position to other attacker nodes. Such a wormhole attack results in a fake route with a lower hop count. If the source node selects this fictitious route, the attacker nodes have the option of delivering the packets or dropping them. For this reason, this paper proposes an improvement over the AODV routing protocol to design a wormhole-immune routing protocol. The proposed protocol, called defending against wormhole attack (DAWA), employs a fuzzy logic system and an artificial immune system to defend against wormhole attacks. DAWA is evaluated through extensive simulations in the NS-2 environment. The results show that DAWA outperforms other existing solutions in terms of false negative ratio, false positive ratio, detection ratio, packet delivery ratio, packet loss ratio, and packet drop ratio.

Journal ArticleDOI
TL;DR: This paper presents a novel virtual machine consolidation technique to achieve energy–QoS–temperature balance in the cloud data center and certifies that physical machine temperature, SLA, and migration technique together control the energy consumption and QoS in a cloud data center.
Abstract: Cloud-based data centers consume a significant amount of energy, which is costly. Virtualization technology, which can be regarded as the first step toward the cloud, offers benefits such as virtual machines and live migration and tries to overcome this problem. Virtual machines host the workload, and because of workload variability, virtual machine consolidation is an effective technique to minimize the total number of active servers and unnecessary migrations, and consequently improve energy consumption. Effective virtual machine placement and migration techniques are a key issue in optimizing the consolidation process. In this paper, we present a novel virtual machine consolidation technique to achieve an energy–QoS–temperature balance in the cloud data center. We simulated our proposed technique in CloudSim. The results of the evaluation certify that physical machine temperature, SLA, and migration technique together control the energy consumption and QoS in a cloud data center.

Journal ArticleDOI
TL;DR: Two algorithms are proposed to enhance the adaptive-head clustering algorithm, namely the improved adaptive-head and the improved prediction-based adaptive-head, which use dynamic clustering to achieve impressive tracking quality and energy efficiency by optimally choosing the cluster head that participates in the tracking process.
Abstract: In recent years, there has been a growing interest in wireless sensor networks because of their potential usage in a wide variety of applications such as remote environmental monitoring and target tracking. Target tracking is a typical and substantial application of wireless sensor networks. Generally, target tracking aims at estimating the location of the target while it is moving within an area of interest and reporting it to the base station in a timely manner. However, achieving high tracking accuracy together with energy efficiency in target tracking algorithms is extremely challenging. In this article, we propose two algorithms to enhance the previously introduced adaptive-head clustering algorithm, namely the improved adaptive-head and the improved prediction-based adaptive-head. The first algorithm uses dynamic clustering to achieve impressive tracking quality and energy efficiency by optimally choosing the cluster head that participates in the tracking process. The second algorithm incorporates a prediction mechanism into the first proposed algorithm. Our proposed algorithms are simulated using Matlab under various network conditions. Simulation results show that our proposed algorithms can accurately track a target, even when random moving speeds are considered, and consume much less energy compared with the previous target tracking algorithm, which in turn prolongs the network lifetime.

Journal ArticleDOI
TL;DR: This paper presents two SLA-based task scheduling algorithms, namely SLA-MCT and SLA-Min-Min, for heterogeneous multi-cloud environments, and shows that the proposed algorithms properly balance between makespan and gain cost of the services in comparison with other algorithms.
Abstract: Service-level agreement (SLA) is a major issue in cloud computing because it defines important parameters such as quality of service, uptime, downtime, period of service, pricing, and security. However, the service may vary from one cloud service provider (CSP) to another. The collaboration of CSPs in a heterogeneous multi-cloud environment is very challenging and is not well covered in the recent literature. In this paper, we present two SLA-based task scheduling algorithms, namely SLA-MCT and SLA-Min-Min, for heterogeneous multi-cloud environments. The former algorithm is a single-phase scheduling algorithm, whereas the latter is a two-phase one. The proposed algorithms support three levels of SLA determined by the customers. Furthermore, the algorithms incorporate the SLA gain cost for successful completion of the service and the SLA violation cost for unsuccessful completion of the service. We simulate the proposed algorithms using benchmark and synthetic datasets. The experimental results of the proposed SLA-MCT are compared with three single-phase task scheduling algorithms, namely CLS, Execution-MCT, and Profit-MCT, and the results of the proposed SLA-Min-Min are compared with two two-phase scheduling algorithms, namely Execution-Min-Min and Profit-Min-Min, in terms of four performance metrics: makespan, average cloud utilization, gain, and penalty cost of the services. The results clearly show that the proposed algorithms properly balance between makespan and gain cost of the services in comparison with other algorithms.
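
For reference, a minimal Python sketch of the plain Min-Min baseline that SLA-Min-Min extends; the ETC (expected time to compute) matrix is made up, and the paper's SLA levels and gain/violation costs are omitted:

def min_min(etc):
    # etc[i][j]: expected time of task i on cloud j
    n_tasks, n_clouds = len(etc), len(etc[0])
    ready = [0.0] * n_clouds                 # when each cloud becomes free
    unscheduled = set(range(n_tasks))
    schedule = {}
    while unscheduled:
        # pick the (task, cloud) pair with the overall earliest completion time
        ct, task, cloud = min((ready[j] + etc[i][j], i, j)
                              for i in unscheduled for j in range(n_clouds))
        schedule[task] = cloud
        ready[cloud] = ct
        unscheduled.remove(task)
    return schedule, max(ready)              # assignment and makespan

etc = [[14, 16, 9], [13, 19, 18], [11, 13, 19], [13, 8, 17]]
print(min_min(etc))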

Journal ArticleDOI
TL;DR: The proposals of cloud intrusion detection system (IDS) and intrusion detection and prevention system frameworks are examined, and cloud IDS requirements and research scope are recommended to achieve the desired level of security at the virtualization layer of cloud computing.
Abstract: Virtualization plays a vital role in the construction of cloud computing. However, various vulnerabilities exist in current virtualization implementations, and thus there are various security challenges at the virtualization layer. In this paper, we investigate different vulnerabilities and attacks at the virtualization layer of cloud computing. We examine the proposals of cloud intrusion detection system (IDS) and intrusion detection and prevention system frameworks. We recommend cloud IDS requirements and research scope to achieve the desired level of security at the virtualization layer of cloud computing.

Journal ArticleDOI
TL;DR: The proposed rendezvous-based routing protocol is validated through experiments and compared with existing protocols using metrics such as packet delivery ratio, energy consumption, end-to-end latency, and network lifetime.
Abstract: In wireless sensor networks, the sensor nodes find the route towards the sink to transmit data. Data transmission happens either directly to the sink node or through intermediate nodes. As the sensor node has limited energy, it is very important to develop an efficient routing technique to prolong the network lifetime. In this paper, we propose a rendezvous-based routing protocol, which creates a rendezvous region in the middle of the network and constructs a tree within that region. There are two different modes of data transmission in the proposed protocol. In Method 1, the tree is directed towards the sink and the source node transmits the data to the sink via this tree, whereas in Method 2, the sink transmits its location to the tree, and the source node gets the sink's location from the tree and transmits the data directly to the sink. The proposed protocol is validated through experiments and compared with existing protocols using metrics such as packet delivery ratio, energy consumption, end-to-end latency, and network lifetime.

Journal ArticleDOI
TL;DR: A new list-scheduling algorithm is proposed that schedules the tasks represented in the DAG to the processor that best minimizes the total execution time by taking into consideration the restriction of crossover between processors.
Abstract: Efficient scheduling of tasks in heterogeneous computing systems is of primary importance for high-performance execution of programs. The programs are considered as multiple sequences of tasks presented as directed acyclic graphs (DAGs). Each task has its own execution time on each of the processors, and each edge of the graph represents a precedence constraint between the sequenced tasks. In this paper, we propose a new list-scheduling algorithm that schedules the tasks represented in the DAG to the processor that best minimizes the total execution time, taking into consideration the restriction of crossover between processors. This objective is achieved in two major phases: (a) computing the priority of each task to be executed, and (b) selecting the processor that will handle each task. The first phase, priority computation, focuses on finding the best execution sequence that minimizes the makespan of the overall execution. In a list-scheduling algorithm, the quality of the solution is very sensitive to the priorities assigned to the tasks. Therefore, in this paper, we include an enhanced calculation of the weight used in the ranking equation for determining the priority of tasks. The second phase, processor selection, focuses on allocating the processor that is the best fit for each task to be executed. In this paper, we enhance the processor selection by introducing a randomized decision mechanism based on a threshold which decides whether the task is assigned to the processor with the lowest execution time or to the processor that produces the lowest finish time. This mechanism considers a balanced combination of the local and global optimal results to explore the search space efficiently and optimize the overall makespan. The proposed algorithm is evaluated on different randomly generated DAGs, and the results are compared with well-known existing approaches to show the effectiveness of the proposed algorithm in reducing the makespan of execution. The experimental results show improvement in the makespan of up to 6-7%.
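
As a hedged sketch of the priority-computation phase common to such list schedulers (the classic HEFT-style upward rank; the paper's enhanced weight calculation and threshold-based randomized processor selection are not reproduced here):

def upward_rank(succ, mean_cost, comm):
    # succ: task -> successors; mean_cost: task -> mean execution time over processors;
    # comm: (u, v) -> mean communication cost of edge u->v
    memo = {}
    def rank(t):
        if t not in memo:
            memo[t] = mean_cost[t] + max(
                (comm[(t, s)] + rank(s) for s in succ[t]), default=0.0)
        return memo[t]
    # higher rank = longer remaining path to the exit task = scheduled earlier
    return sorted(succ, key=rank, reverse=True), memo

succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
mean_cost = {0: 10, 1: 8, 2: 12, 3: 6}
comm = {(0, 1): 4, (0, 2): 2, (1, 3): 5, (2, 3): 3}
print(upward_rank(succ, mean_cost, comm))    # priority order: 0, 2, 1, 3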

Journal ArticleDOI
TL;DR: A heuristic workflow scheduling algorithm is proposed that attempts to minimize the execution cost considering a user-defined deadline constraint and demonstrates a great reduction in resource leasing costs while the workflow deadline is met.
Abstract: Workflows are adopted as a powerful modeling technique to represent diverse applications in different scientific fields as a number of loosely coupled tasks. Given the unique features of cloud technology, the issue of cloud workflow scheduling is a critical research topic. Users can utilize services on the cloud in a pay-as-you-go manner and meet their quality of service (QoS) requirements. In the context of the commercial cloud, execution time and especially execution expenses are considered as two of the most important QoS requirements. On the other hand, the remarkable growth of multicore processor technology has led to the use of these processors by Infrastructure as a Service cloud service providers. Therefore, considering the multicore processing resources on the cloud, in addition to time and cost constraints, makes cloud workflow scheduling even more challenging. In this research, a heuristic workflow scheduling algorithm is proposed that attempts to minimize the execution cost considering a user-defined deadline constraint. The proposed algorithm divides the workflow into a number of clusters and then an extendable and flexible scoring approach chooses the best cluster combinations to achieve the algorithm's goals. Experimental results demonstrate a great reduction in resource leasing costs while the workflow deadline is met.

Journal ArticleDOI
TL;DR: A survey on privacy risks and challenges for public cloud computing is provided and the main existing solutions that have made great progress in this area are presented and evaluated.
Abstract: Definitely, cloud computing represents a real evolution in the IT world that provides many advantages for both providers and users. This new paradigm includes several services that allow data storage and processing. However, outsourcing data to the cloud raises many issues related to privacy concerns. In fact, for some organizations and individuals, data privacy presents a crucial aspect of their business. Indeed, their sensitive data (health, finance, personal information, etc.) have very important value, and any infringement of privacy can cause great loss in terms of money and reputation. Therefore, without considering privacy issues, the adoption of cloud computing may be rejected by a large spectrum of users. In this paper, we provide a survey on privacy risks and challenges for public cloud computing. We present and evaluate the main existing solutions that have made great progress in this area. To better address privacy concerns, we point out considerations and guidelines, and give the remaining open issues that require additional investigation efforts to preserve and enhance privacy in the public cloud.

Journal ArticleDOI
TL;DR: Deep-structure auto-encoder neural networks are applied to detect spectrum anomalies; the time–frequency diagram is used as the feature of the learning model, and a threshold is used to distinguish anomalies from normal data.
Abstract: Anomaly detection is a typical task in many fields, including spectrum monitoring in wireless communication. The anomaly detection task for spectrum in wireless communication is quite different from other anomaly detection tasks, mainly in two respects: (a) the variety of anomaly types makes it impossible to obtain labels for abnormal data; (b) the complexity and quantity of the electromagnetic environment data increase the difficulty of manual feature extraction. Therefore, a novel learning model is needed to deal with the task of spectrum anomaly detection in wireless communication. In this paper, we apply deep-structure auto-encoder neural networks to detect spectrum anomalies, with the time–frequency diagram used as the feature of the learning model. Meanwhile, a threshold is used to distinguish anomalies from normal data. Finally, we evaluate the performance of models with different numbers of hidden layers in our experiments. The results of numerical experiments demonstrate that a model with a deeper architecture achieves relatively better performance in our spectrum anomaly detection task.
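
A minimal Keras sketch of the reconstruction-error approach; the layer sizes, input dimension, training data, and 99th-percentile threshold are illustrative assumptions, not the paper's configuration:

import numpy as np
import tensorflow as tf

dim = 256                                    # e.g., one flattened time-frequency frame
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(dim,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),    # bottleneck
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(dim, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(1000, dim).astype("float32")   # stand-in for normal spectra
model.fit(x_train, x_train, epochs=5, batch_size=64, verbose=0)

err = np.mean((x_train - model.predict(x_train, verbose=0)) ** 2, axis=1)
threshold = np.percentile(err, 99)           # tail of the normal-data reconstruction error

def is_anomaly(frame):
    recon = model.predict(frame[None, :], verbose=0)[0]
    return np.mean((frame - recon) ** 2) > threshold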

Journal ArticleDOI
TL;DR: Two standard benchmark problems, in addition to a real-world application, namely the multi-objective shape design of a tubular linear synchronous motor (TLSM), are used to demonstrate the effectiveness of the proposed MOFOA in finding non-dominated solutions.
Abstract: This paper addresses a novel multi-objective fruit fly optimization algorithm (MOFOA) for solving multi-objective optimization problems. The essence of MOFOA lies in its two characteristic features. For the first feature, a population of random fruit flies initializes the algorithm. During this initialization phase, each dominated fruit fly is replaced by the nearest non-dominated one. Subsequently, the fruit flies evolve by flying randomly around the non-dominated solutions or around the reference point, i.e., the best location for the individual objectives. Afterwards, the fruit flies are updated according to the nearest location, whether the reference point or the previous non-dominated location. For the second feature, the weighted sum method is incorporated to update the previous best locations of the fruit flies and the reference point, to emphasize the convergence of the non-dominated solutions. To prove the capability of the proposed MOFOA, two standard benchmark problems, in addition to a real-world application, namely the multi-objective shape design of a tubular linear synchronous motor (TLSM), are considered. The corresponding TLSM objective functions aim to maximize the operating force and minimize flux saturation. The outcomes clearly demonstrate the effectiveness of the proposed algorithm in finding non-dominated solutions.
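
A minimal Python sketch of the non-dominated bookkeeping at the heart of such multi-objective methods (minimization is assumed for every objective; MOFOA's swarm movement and weighted-sum update are omitted):

def dominates(a, b):
    # a dominates b: no worse in all objectives and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# objectives: (negated operating force, flux saturation), both to be minimized
swarm = [(-50.0, 0.8), (-45.0, 0.6), (-50.0, 0.9), (-40.0, 0.5)]
print(non_dominated(swarm))     # the dominated location (-50.0, 0.9) is dropped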

Journal ArticleDOI
TL;DR: This paper presents the design of a novel, real-time, wireless, multisensory, smart surveillance system with 3D-HEVC features; measurements show that the proposed protocol provides superior results compared to existing transport protocols.
Abstract: This paper presents the design of a novel, real-time, wireless, multisensory, smart surveillance system with 3D-HEVC features. The proposed high-level system architecture of the surveillance system is analyzed. The advantages of HEVC encoding are presented. Methods for synchronization between multiple streams are presented. Available wireless standards are presented and compared. A network-adaptive transmission protocol for a reliable, real-time, multisensory surveillance system is proposed. Adaptive packet frame grouping (APFG) and adaptive quantization are deployed to maximize the quality of experience (QoE). Measurements show that the proposed protocol provides superior results compared to existing transport protocols.

Journal ArticleDOI
TL;DR: A novel algorithm named Hybrid Frequent Itemset Mining (HFIM) is introduced, which utilizes the vertical layout of the dataset to solve the problem of scanning the dataset in each iteration and performs better in terms of execution time and space consumption.
Abstract: Frequent itemset mining is one of the data mining techniques applied to discover frequent patterns, used in prediction, association rule mining, classification, etc. The Apriori algorithm is an iterative algorithm used to find frequent itemsets in a transactional dataset. It scans the complete dataset in each iteration to generate the large frequent itemsets of different cardinality, which is acceptable for small data but not feasible for big data. The MapReduce framework provides a distributed environment to run Apriori on big transactional data; however, MapReduce is not suitable for iterative processes and degrades the performance. We introduce a novel algorithm named Hybrid Frequent Itemset Mining (HFIM), which utilizes the vertical layout of the dataset to avoid scanning the dataset in each iteration. The vertical dataset carries the information needed to find the support of each itemset. Moreover, we also include some enhancements to reduce the number of candidate itemsets. The proposed algorithm is implemented over the Spark framework, which incorporates the concept of resilient distributed datasets and performs in-memory processing to optimize the execution time of operations. We compare the performance of HFIM with another Spark-based implementation of the Apriori algorithm on various datasets. Experimental results show that HFIM performs better in terms of execution time and space consumption.
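
A minimal pure-Python sketch of the vertical-layout idea (Eclat-style tid-list intersection): one pass builds the tid-lists, after which the support of any candidate is a set intersection, with no rescanning. The Spark distribution and the paper's candidate-reduction enhancements are omitted:

from itertools import combinations

transactions = {1: {"a", "b", "c"}, 2: {"a", "c"}, 3: {"a", "d"}, 4: {"b", "c"}}
min_sup = 2

vertical = {}
for tid, items in transactions.items():      # the only pass over the dataset
    for item in items:
        vertical.setdefault(item, set()).add(tid)

frequent = {frozenset([i]): t for i, t in vertical.items() if len(t) >= min_sup}
level = list(frequent)
while level:
    next_level = []
    for x, y in combinations(level, 2):
        cand = x | y
        if len(cand) == len(x) + 1 and cand not in frequent:
            tids = frequent[x] & frequent[y]  # support via tid-list intersection
            if len(tids) >= min_sup:
                frequent[cand] = tids
                next_level.append(cand)
    level = next_level

print({tuple(sorted(k)): len(v) for k, v in frequent.items()})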

Journal ArticleDOI
TL;DR: This work proposes a DVFS-based heuristic, TRP-FS, to consolidate virtual clusters on physical servers to save energy while guaranteeing job SLAs, and proves the most efficient frequency that minimizes energy consumption as well as the upper bound of energy saving through DVFS techniques.
Abstract: The energy efficiency of cloud computing has recently attracted a great deal of attention. As a result of raised expectations, cloud providers such as Amazon and Microsoft have started to deploy a new IaaS service, a MapReduce-style virtual cluster, to process data-intensive workloads. Considering that the IaaS provider supports multiple pricing options, we study batch-oriented consolidation and online placement for reserved virtual machines (VMs) and on-demand VMs, respectively. For batch cases, we propose a DVFS-based heuristic, TRP-FS, to consolidate virtual clusters on physical servers to save energy while guaranteeing job SLAs. We prove the most efficient frequency that minimizes the energy consumption, and the upper bound of energy saving achievable through DVFS techniques. More interestingly, this frequency depends only on the type of processor. FS can also be used in combination with other consolidation algorithms. For online cases, a time-balancing heuristic, OTB, is designed for on-demand placement, which can reduce mode switching by balancing server duration and utilization. The experimental results, both in simulation and on a Hadoop testbed, show that our approach achieves greater energy savings than existing algorithms.
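
A hedged sketch of why such a processor-dependent optimum exists, under a standard power model (static power P_s, dynamic power C f^\alpha with \alpha \approx 3); this is an illustrative derivation, not the paper's proof. For W cycles of work executed at frequency f,

E(f) = \left(P_s + C f^{\alpha}\right)\frac{W}{f} = \frac{P_s W}{f} + C W f^{\alpha-1},
\qquad
\frac{\mathrm{d}E}{\mathrm{d}f} = 0 \;\Rightarrow\;
f^{*} = \left(\frac{P_s}{(\alpha-1)\,C}\right)^{1/\alpha},

so f^{*} depends only on the processor constants (P_s, C, \alpha) and not on the workload, consistent with the paper's observation that the most efficient frequency depends only on the processor type.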

Journal ArticleDOI
TL;DR: An enhanced ransomware prevention system based on abnormal behavior analysis and detection in a cloud analysis system, CloudRPS, is proposed; it can defend against ransomware through more in-depth prevention and minimize the possibility of early intrusion.
Abstract: Recently, indiscriminate ransomware attacks targeting a wide range of victims for monetary gain have become a worldwide social issue. In its early years, ransomware used e-mail as its attack method. The most common spreading methods were spam mail and harmful websites. In addition, social networking sites and smartphone messages are used. Ransomware encrypts the user's files, issues a warning message to the user, and requests payment through Bitcoin, a virtual currency that is hard to trace. It is possible to analyze ransomware, but this has its limitations, as new ransomware is continuously created and disseminated. In this paper, we propose an enhanced ransomware prevention system based on abnormal behavior analysis and detection in a cloud analysis system, CloudRPS. The proposed system can defend against ransomware through more in-depth prevention. It monitors the network, files, and servers in real time. Furthermore, it installs a cloud system to collect and analyze various information from the device and log information to defend against attacks. Finally, the goal of the system is to minimize the possibility of early intrusion and to detect attacks quickly, protecting the user's system in case of a ransomware compromise.

Journal ArticleDOI
TL;DR: A comprehensive survey to analyze various cryptographic, biometric and multifactor lightweight solutions for data security in mobile cloud environment and infrastructure and provides a taxonomy of the state-of-the-art data security frameworks.
Abstract: The incessant spurt of research activities to augment capabilities of resource-constrained mobile devices by leveraging heterogeneous cloud resources has created a new research impetus called mobile cloud computing. However, this rapid relocation to the cloud has fueled security and privacy concerns as users' data leave owner's protection sphere and enter the cloud. Significant efforts have been devoted by academia and research community to study and build secure frameworks in cloud environment, but there exists a research gap for comprehensive study of security frameworks in mobile cloud computing environment. Therefore, we aim to conduct a comprehensive survey to analyze various cryptographic, biometric and multifactor lightweight solutions for data security in mobile cloud. This survey highlights the current security issues in mobile cloud environment and infrastructure, investigates various data security frameworks and provides a taxonomy of the state-of-the-art data security frameworks and deep insight into open research issues for ensuring security and privacy of data in mobile cloud computing platform.