
Showing papers in "Computing in 2016"


Journal ArticleDOI
TL;DR: The main aim of this paper is to identify open challenges associated with energy efficient resource allocation and outline the problem and existing hardware and software-based techniques available for this purpose based on the energy-efficient research dimension taxonomy.
Abstract: In a cloud computing paradigm, energy efficient allocation of different virtualized ICT resources (servers, storage disks, and networks, and the like) is a complex problem due to the presence of heterogeneous application (e.g., content delivery networks, MapReduce, web applications, and the like) workloads having contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time, etc.). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications with varying degree of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify open challenges associated with energy efficient resource allocation. In this regard, the study, first, outlines the problem and existing hardware and software-based techniques available for this purpose. Furthermore, available techniques already presented in the literature are summarized based on the energy-efficient research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy namely: resource adaption policy, objective function, allocation method, allocation operation, and interoperability.

303 citations


Journal ArticleDOI
TL;DR: A novel algorithm is proposed that balances the deviations with respect to all these perspectives based on a customizable cost function and may help to circumvent misleading results as generated by classical single-perspective or staged approaches.
Abstract: Organizations maintain process models that describe or prescribe how cases (e.g., orders) are handled. However, reality may not agree with what is modeled. Conformance checking techniques reveal and diagnose differences between the behavior that is modeled and what is observed. Existing conformance checking approaches tend to focus on the control-flow in a process, while abstracting from data dependencies, resource assignments, and time constraints. Even in those situations when other perspectives are considered, the control-flow is aligned first, i.e., priority is given to this perspective. Data dependencies, resource assignments, and time constraints are only considered as "second-class citizens", which may lead to misleading conformance diagnostics. For example, a data attribute may provide strong evidence that the wrong activity was executed. Existing techniques will still diagnose the data-flow as deviating, whereas our approach will indeed point out that the control-flow is deviating. In this paper, a novel algorithm is proposed that balances the deviations with respect to all these perspectives based on a customizable cost function. Evaluations using both synthetic and real data sets show that a multi-perspective approach is indeed feasible and may help to circumvent misleading results as generated by classical single-perspective or staged approaches.
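
To make the idea of a customizable, multi-perspective cost function concrete, the following sketch compares two candidate explanations of the same observed trace under per-perspective deviation weights. The weights, perspective names, and deviation counts are invented for illustration and are not the paper's algorithm, whose alignment search is considerably more involved.

```python
# Assumed per-perspective deviation costs; a multi-perspective conformance checker
# searches for the alignment minimizing this customizable cost. Only the cost
# comparison step is sketched here.
COSTS = {"control_flow": 1.0, "data": 1.0, "resource": 0.5, "time": 0.5}

def alignment_cost(deviations: dict[str, int], weights: dict[str, float] = COSTS) -> float:
    """Total cost of one candidate alignment, summed over all perspectives."""
    return sum(weights[p] * n for p, n in deviations.items())

# Two candidate explanations of the same observed trace:
blame_data = {"control_flow": 0, "data": 3, "resource": 0, "time": 0}      # staged view
blame_control = {"control_flow": 1, "data": 0, "resource": 0, "time": 0}   # balanced view
print("preferred explanation:", min((blame_data, blame_control), key=alignment_cost))
```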

176 citations


Journal ArticleDOI
TL;DR: An overview is presented of a collection of IoT solutions, developed in partnership with other prominent IoT innovators and referred to collectively as the IoT platform, for addressing technical challenges and helping springboard IoT to its potential.
Abstract: The internet of things (IoT) is the latest web evolution that incorporates billions of devices (such as cameras, sensors, RFIDs, smart phones, and wearables) that are owned by different organizations and people who are deploying and using them for their own purposes. Federations of such IoT devices (which we refer to as IoT things) can deliver the information needed to solve internet-scale problems that have been too difficult to obtain and harness before. To realize this unprecedented IoT potential, we need to develop IoT solutions for discovering the IoT devices each application needs, collecting and integrating their data, and distilling the high value information each application needs. We also need to provide solutions that permit doing these tasks in real-time, on the move, in the cloud, and securely. In this paper we present an overview of a collection of IoT solutions (which we have developed in partnership with other prominent IoT innovators and refer to collectively as the IoT platform) for addressing these technical challenges and helping springboard IoT to its potential. We also describe a variety of IoT applications that have utilized the proposed IoT platform to provide smart IoT services in the areas of smart farming, smart grids, and smart manufacturing. Finally, we discuss future research and a vision of the next generation IoT infrastructure.

126 citations


Journal ArticleDOI
TL;DR: A dynamic energy-efficient virtual machine (VM) migration and consolidation algorithm based on a multi-resource energy-efficient model that can minimize energy consumption with a Quality of Service guarantee and shows better energy efficiency in cloud data centers.
Abstract: In this paper, we developed a dynamic energy-efficient virtual machine (VM) migration and consolidation algorithm based on a multi-resource energy-efficient model. It can minimize energy consumption with a Quality of Service guarantee. In our algorithm, we designed a double-threshold method over multi-resource utilization to trigger the migration of VMs. The Modified Particle Swarm Optimization method is introduced into the consolidation of VMs to avoid falling into local optima, a common defect of traditional heuristic algorithms. Compared with the popular traditional heuristic algorithm Modified Best Fit Decreasing, our algorithm reduces the number of active physical nodes and the number of VM migrations, and shows better energy efficiency in cloud data centers.
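
The double-threshold trigger can be illustrated with a short sketch. This is not the paper's implementation; the threshold values, the resource dimensions, and the `Host` structure are placeholders chosen for illustration.

```python
from dataclasses import dataclass

LOWER, UPPER = 0.2, 0.8  # assumed lower/upper utilization thresholds

@dataclass
class Host:
    name: str
    cpu: float   # utilizations in [0, 1]
    mem: float
    bw: float

    def utilizations(self):
        return (self.cpu, self.mem, self.bw)

def migration_action(host: Host) -> str:
    """Decide whether a host should trigger VM migration."""
    if any(u > UPPER for u in host.utilizations()):
        return "offload"      # overloaded: migrate some VMs away
    if all(u < LOWER for u in host.utilizations()):
        return "evacuate"     # underloaded: migrate all VMs and switch the host off
    return "keep"

if __name__ == "__main__":
    for h in (Host("h1", 0.9, 0.5, 0.4), Host("h2", 0.1, 0.15, 0.05), Host("h3", 0.5, 0.6, 0.3)):
        print(h.name, migration_action(h))
```

The consolidation step itself, which the paper solves with Modified Particle Swarm Optimization, would then decide where the selected VMs are placed.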

122 citations


Journal ArticleDOI
TL;DR: A brief overview of distributed systems is provided: what they are, their general design goals, and some of the most common types.
Abstract: Distributed systems are by now commonplace, yet remain an often difficult area of research. This is partly explained by the many facets of such systems and the inherent difficulty to isolate these facets from each other. In this paper we provide a brief overview of distributed systems: what they are, their general design goals, and some of the most common types.

102 citations


Journal ArticleDOI
TL;DR: A multimedia service selection method based on weighted Principal Component Analysis (PCA), i.e., Weighted PCA-based Multimedia Service Selection Method (W_PCA_MSSM), which could reduce the number of QoS criteria for evaluation, by which the service selection process is simplified.
Abstract: Cloud computing has delivered ever-increasing advantages in flexible service provision, which has attracted attention from large-scale enterprise applications to small-scale smart uses. For example, more and more multimedia services are moving towards the cloud to better accommodate people's daily use of various cloud-enabled smart devices; many of these services are similar or equivalent in their functionality (e.g., more than 1,000 video services that share similar "video-play" functionality are present in the App Store). In this situation, it is necessary to discriminate between these functionally equivalent multimedia services based on their Quality of Service (QoS) information. However, due to the abundant information of multimedia content, dozens of QoS criteria are often needed to evaluate a multimedia service, which places a heavy burden on users' multimedia service selection. Besides, the QoS criteria of multimedia services are usually not independent but correlated, which cannot be accommodated very well by traditional selection methods, e.g., traditional simple weighting methods. In view of these challenges, we put forward a multimedia service selection method based on weighted Principal Component Analysis (PCA), i.e., the Weighted PCA-based Multimedia Service Selection Method (W_PCA_MSSM). The advantage of our proposal is two-fold. First, weighted PCA could reduce the number of QoS criteria for evaluation, by which the service selection process is simplified. Second, PCA could eliminate the correlations between different QoS criteria, which may yield a more accurate service selection result. Finally, the feasibility of W_PCA_MSSM is validated by a set of experiments deployed on the real-world QWS service quality dataset.
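
A minimal sketch of the weighted-PCA idea follows: QoS criteria are standardized, scaled by user weights, projected onto a reduced set of uncorrelated components, and the services are ranked on a composite score. The QoS values, weights, and the 90 % variance cut-off are assumptions for illustration, not values from the paper or the QWS dataset.

```python
import numpy as np

# Rows = candidate multimedia services, columns = QoS criteria
# (e.g., response time, throughput, availability, reliability). Values are made up.
qos = np.array([
    [0.12, 0.85, 0.99, 0.40],
    [0.30, 0.60, 0.95, 0.55],
    [0.05, 0.90, 0.97, 0.35],
    [0.25, 0.70, 0.90, 0.60],
])
weights = np.array([0.4, 0.3, 0.2, 0.1])  # assumed user preference per criterion

# Standardize, then apply the user weights before extracting principal components.
z = (qos - qos.mean(axis=0)) / qos.std(axis=0)
zw = z * weights

# PCA via SVD; keep enough components to explain ~90 % of the variance.
u, s, vt = np.linalg.svd(zw, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()
k = int(np.searchsorted(np.cumsum(explained), 0.9)) + 1
scores = zw @ vt[:k].T              # services projected onto k uncorrelated axes

# Rank services by a composite score (components weighted by their variance share).
composite = scores @ explained[:k]
print("kept components:", k, "ranking:", np.argsort(composite)[::-1])
```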

83 citations


Journal ArticleDOI
TL;DR: The proposed lightweight and efficient authentication scheme (LESPP) with strong privacy preservation for secure VANET communication is feasible and has an outstanding performance of nearly 0 ms network delay and 0 % packet loss ratio, which are especially appropriate for real-time emergency event reporting applications.
Abstract: Authentication in vehicular ad-hoc network (VANET) is still a research challenge, as it requires not only secure and efficient authentication, but also privacy preservation. In this paper, we propose a lightweight and efficient authentication scheme (LESPP) with strong privacy preservation for secure VANET communication. The proposed scheme utilizes self-generated pseudo identity to guarantee both privacy preservation and conditional traceability, and it only requires a lightweight symmetric encryption and message authentication code (MAC) generation for message signing and a fast MAC re-generation for verification. Compared with currently existing public key based schemes, the proposed scheme significantly reduces computation cost by 10^2 to 10^3 times and decreases communication overhead by 41.33 to 77.60 %, thus achieving resilience to denial of service (DoS) attacks. In LESPP, only the key management center can expose a vehicle's real identity from its pseudo identity; therefore, LESPP provides strong privacy preservation so that adversaries cannot trace any vehicles, even if all roadside units are compromised. Furthermore, vehicles in LESPP need not maintain a certificate revocation list (CRL), so any CRL related overhead is avoided. Extensive simulations reveal that the novel scheme is feasible and has an outstanding performance of nearly 0 ms network delay and 0 % packet loss ratio, which are especially appropriate for real-time emergency event reporting applications.
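
A toy sketch of the symmetric-key signing and fast MAC re-generation idea follows, using HMAC. The key distribution, pseudo-identity format, and message layout are simplifications invented for illustration and are not the paper's actual protocol.

```python
import hashlib
import hmac
import os
import time

# Assumed: the key management center has pre-shared a symmetric key with the vehicle;
# the pseudo identity here is just a random token standing in for the paper's scheme.
shared_key = os.urandom(32)
pseudo_id = os.urandom(8).hex()

def sign(message: bytes) -> dict:
    """Vehicle side: lightweight signing = pseudo identity + timestamp + MAC."""
    ts = str(int(time.time())).encode()
    mac = hmac.new(shared_key, pseudo_id.encode() + ts + message, hashlib.sha256).digest()
    return {"pid": pseudo_id, "ts": ts, "msg": message, "mac": mac}

def verify(packet: dict) -> bool:
    """Verifier side: re-generate the MAC and compare in constant time."""
    expected = hmac.new(shared_key,
                        packet["pid"].encode() + packet["ts"] + packet["msg"],
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, packet["mac"])

if __name__ == "__main__":
    pkt = sign(b"emergency: obstacle ahead")
    print("valid:", verify(pkt))
    pkt["msg"] = b"tampered"
    print("valid after tampering:", verify(pkt))
```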

69 citations


Journal ArticleDOI
TL;DR: An extensive evaluation shows that the new mechanism to preserve privacy while leveraging user profiles in distributed recommender systems provides a good trade-off between privacy and accuracy, with little overhead and high resilience.
Abstract: We propose a new mechanism to preserve privacy while leveraging user profiles in distributed recommender systems. Our mechanism relies on two contributions: (i) an original obfuscation scheme, and (ii) a randomized dissemination protocol. We show that our obfuscation scheme hides the exact profiles of users without significantly decreasing their utility for recommendation. In addition, we precisely characterize the conditions that make our randomized dissemination protocol differentially private. We compare our mechanism with a non-private as well as with a fully private alternative. We consider a real dataset from a user survey and report on simulations as well as planetlab experiments. We dissect our results in terms of accuracy and privacy trade-offs, bandwidth consumption, as well as resilience to a censorship attack. In short, our extensive evaluation shows that our twofold mechanism provides a good trade-off between privacy and accuracy, with little overhead and high resilience.
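
The randomized dissemination idea can be illustrated with a randomized-response style bit flip, a standard construction for differential privacy. The epsilon value and the bit-vector profile representation are assumptions for this sketch, not the paper's exact obfuscation or dissemination scheme.

```python
import math
import random

EPSILON = 1.0  # assumed privacy budget

def randomize_bit(bit: int, epsilon: float = EPSILON) -> int:
    """Keep a profile bit with probability e^eps / (1 + e^eps), otherwise flip it
    (classic randomized response)."""
    keep_prob = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < keep_prob else 1 - bit

def disseminate(profile: list[int]) -> list[int]:
    """Obfuscate a binary interest profile before gossiping it to neighbours."""
    return [randomize_bit(b) for b in profile]

if __name__ == "__main__":
    profile = [1, 0, 0, 1, 1, 0, 1, 0]
    print("original    :", profile)
    print("disseminated:", disseminate(profile))
```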

64 citations


Journal ArticleDOI
TL;DR: This paper focuses on a WebRTC-based video conferencing system that allows online meetings between remotely located care coordinators and patients at their homes, and which is part of a large tele-home monitoring project that is being carried out at six locations in five different states in Australia.
Abstract: Existing video conferencing systems that are often used in telehealth services have been criticized for a number of reasons: (a) they are often too expensive to purchase and maintain, (b) they use proprietary technologies that are incompatible with each other, and (c) they require fairly skilled IT personnel to maintain the system. There is a need for a less expensive, compatible, and easy-to-use video conferencing system. Web real-time communication (WebRTC) promises to deliver such a solution by enabling web browsers with real-time communications capabilities via simple JavaScript APIs. Utilizing WebRTC, users can conduct video/audio calls and data sharing through web browsers without having to purchase or download extra software. Though the promise and prospects of WebRTC are widely acknowledged, there have not been many real-life applications (in particular in telehealth) that utilize WebRTC. In this paper, we present our practical experience in the design and implementation of a video conferencing system for telehealth based on WebRTC. Our video conferencing system is a part of a large tele-home monitoring project that is being carried out at six locations in five different states in Australia. One of the aims of the project is to evaluate whether high-bandwidth enabled telehealth services, delivered through tele-home monitoring, can be cost effective and improve healthcare outcomes and access to care. This paper, however, focuses on the WebRTC-based video conferencing system, which allows online meetings between remotely located care coordinators and patients at their homes. We discuss the underlying issues, detailed design and implementation, and current limitations of using WebRTC in a real-life application.

63 citations


Journal ArticleDOI
TL;DR: SoCloud as discussed by the authors is a service-oriented component-based Platform as a Service (PaaS) for managing portability, elasticity, provisioning, and high availability across multiple clouds.
Abstract: Multi-cloud computing is a promising paradigm to support very large scale, world-wide distributed applications. Multi-cloud computing is the usage of multiple, independent cloud environments, which assumes no a priori agreement between cloud providers or a third party. However, multi-cloud computing has to face several key challenges such as portability, provisioning, elasticity, and high availability. Developers will not only have to deploy applications to a specific cloud, but will also have to consider application portability from one cloud to another, and to deploy distributed applications spanning multiple clouds. This article presents soCloud, a service-oriented component-based Platform as a Service for managing portability, elasticity, provisioning, and high availability across multiple clouds. soCloud is based on the OASIS Service Component Architecture standard in order to address portability. soCloud provides services for managing provisioning, elasticity, and high availability across multiple clouds. soCloud has been deployed and evaluated on top of ten existing cloud providers: Windows Azure, DELL KACE, Amazon EC2, CloudBees, OpenShift, dotCloud, Jelastic, Heroku, Appfog, and an Eucalyptus private cloud.

54 citations


Journal ArticleDOI
TL;DR: A semantic based cloud environment is proposed to facilitate the analyzing and searching process of surveillance video data, and an architecture integrating ontology building, semantic annotation, and semantic search is proposed to leverage the semantic description of the video data to find them at the concept level.
Abstract: Recently, the amount of surveillance video data has grown enormously with the popularity of smart cities, which makes it difficult for users to search and analyze the content of the videos. These videos are no longer mere data; they provide effective information for criminal investigation systems, intrusion detection systems, and many others. However, as the number of available cloud services increases, the problem of data discovery and selection arises. Semantic technology is an effective choice for enhancing the accuracy of the data searching and analysis process. In this paper, a semantic-based cloud environment is proposed to facilitate the analyzing and searching process of surveillance video data. An architecture integrating ontology building, semantic annotation, and semantic search is proposed to leverage the semantic description of the video data to find them at the concept level. A semantic intermediate layer which organizes the video data based on their semantic relations is given. Moreover, the proposed method is applied in the intelligent transportation field, which shows the bright prospects of the proposed method in real applications.

Journal ArticleDOI
TL;DR: An adaptive fuzzy threshold-based algorithm has been proposed to detect overloaded and under-loaded hosts and results demonstrate that the proposed algorithm significantly outperforms the other competitive algorithms.
Abstract: Dynamic consolidation of virtual machines (VMs) is an effective technique, which can lead to improvement of energy efficiency and resource utilization in cloud data centers. However, due to varying workloads in applications, consolidating the virtual machines can cause a violation of the Service Level Agreement. The main goal of dynamic VM consolidation is to optimize the energy-performance trade-off. Detecting when a host becomes overloaded or underloaded constitutes two substantial sub-problems of dynamic VM consolidation, which directly affect resource utilization, Quality of Service, and energy efficiency. In this paper, an adaptive fuzzy threshold-based algorithm has been proposed to detect overloaded and underloaded hosts. The proposed algorithm generates rules dynamically and updates membership functions to adapt to changes in workload. It is validated with a real-world PlanetLab workload. Simulation results demonstrate that the proposed algorithm significantly outperforms the other competitive algorithms.
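
A minimal sketch of a fuzzy, rather than crisp, overload threshold: utilization is mapped to a fuzzy "overloaded" membership and the host is flagged when the average membership crosses a cut. The membership breakpoints and the alpha-cut are invented; the paper's algorithm additionally generates rules and adapts the membership functions to the workload.

```python
def overload_membership(util: float, lo: float = 0.6, hi: float = 0.9) -> float:
    """Piecewise-linear membership of 'host is overloaded' for a utilization in [0, 1].
    Breakpoints lo/hi are assumed and would be adapted to the observed workload."""
    if util <= lo:
        return 0.0
    if util >= hi:
        return 1.0
    return (util - lo) / (hi - lo)

def is_overloaded(recent_utils: list[float], alpha_cut: float = 0.5) -> bool:
    """Flag a host when the average fuzzy membership of its recent samples exceeds the cut."""
    avg = sum(overload_membership(u) for u in recent_utils) / len(recent_utils)
    return avg > alpha_cut

if __name__ == "__main__":
    print(is_overloaded([0.55, 0.62, 0.71]))   # False: mostly below the fuzzy band
    print(is_overloaded([0.82, 0.88, 0.93]))   # True: deep inside the band
```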

Journal ArticleDOI
TL;DR: In this paper, a hybrid intelligence-aided approach to affect-sensitive e-learning is proposed to improve learner's learning experience and help the learner become better engaged in the learning process.
Abstract: E-Learning has revolutionized the delivery of learning through the support of rapid advances in Internet technology. Compared with face-to-face traditional classroom education, e-learning lacks interpersonal and emotional interaction between students and teachers. In other words, although a vital factor in learning that influences a human's ability to solve problems, affect has been largely ignored in existing e-learning systems. In this study, we propose a hybrid intelligence-aided approach to affect-sensitive e-learning. A system has been developed that incorporates affect recognition and intervention to improve the learner's learning experience and help the learner become better engaged in the learning process. The system recognizes the learner's affective states using multimodal information via hybrid intelligent approaches, e.g., head pose, eye gaze tracking, facial expression recognition, physiological signal processing and learning progress tracking. The multimodal information gathered is fused based on the proposed affect learning model. The system provides online interventions and adapts the online learning material to the learner's current learning state based on pedagogical strategies. Experimental results show that interest and confusion are the most frequently occurring states when a learner interacts with a second language learning system, and those states are highly related to learning levels (easy versus difficult) and outcomes. Interventions are effective when a learner is disengaged or bored and have been shown to help learners become more engaged in learning.

Journal ArticleDOI
TL;DR: Autonomous Cloud Intrusion Response System (ACIRS) continuously monitors and analyzes system events and computes security and risk parameters to provide risk assessment and mitigation capabilities with a scalable and elastic architecture with no central coordinator.
Abstract: Cloud computing delivers on-demand resources over the Internet on a pay-for-use basis; as a result, intruders may exploit clouds for their advantage. This paper presents the Autonomous Cloud Intrusion Response System (ACIRS), a proper defense strategy for cloud systems. ACIRS continuously monitors and analyzes system events and computes security and risk parameters to provide risk assessment and mitigation capabilities with a scalable and elastic architecture with no central coordinator. It detects masquerade, host-based, and network-based attacks and selects the appropriate response to mitigate these attacks. ACIRS is superior to NICE (Network Intrusion Detection and Countermeasure Selection system) in reducing the risk by 38 %. This paper describes the components, architecture, and advantages of ACIRS.

Journal ArticleDOI
TL;DR: This paper attempts to explore a reusable GPU-based remote sensing image parallel processing model and to establish a set of parallel programming templates, which provides programmers with a more simple and effective way for programming parallelRemote sensing image processing algorithms.
Abstract: Remote sensing image processing is characterized by massive data processing, intensive computation, and complex processing algorithms. These characteristics make the rapid processing of remote sensing images very difficult and inefficient. The rapid development of general-purpose graphic process unit (GPGPU) computing technology has resulted in continuous improvement in GPU computing performance. Its strong floating point calculating capability, high intensive computation, small volume, and excellent performance-cost ratio provide an effective solution to the problems faced in remote sensing image processing. However, current usage of GPU in remote sensing image processing applications has been limited to specific parallel algorithms and their optimization of processes, rather than forming well-established models and methods. This has introduced serious problems to the development of remote sensing image processing algorithms on GPU architectures. For example, GPU parallel strategies and algorithms are highly coupled and non-reusable. The processing system is closely associated with the GPU hardware, so that programming remote sensing algorithms on GPU is anything but easy. In this paper, we attempt to explore a reusable GPU-based remote sensing image parallel processing model and to establish a set of parallel programming templates, which provides programmers with a more simple and effective way for programming parallel remote sensing image processing algorithms.

Journal ArticleDOI
TL;DR: A queuing based approach for task management and a heuristic algorithm for resource management are proposed that provide better performance as compared to the existing state-of-the-art approaches.
Abstract: In recent years, multimedia cloud computing is becoming a promising technology that can effectively process multimedia services and provide quality of service (QoS) provisioning for multimedia applications from anywhere, at any time and on any device at lower costs. However, two major challenges exist in this emerging computing paradigm: one is task management, which maps multimedia tasks to virtual machines, and the other is resource management, which maps virtual machines (VMs) to physical servers. In this study, we aim at providing an efficient solution that jointly addresses these challenges. In particular, a queuing based approach for task management and a heuristic algorithm for resource management are proposed. By adopting an allocation deadline in each VM request, both the task manager and the VM allocator receive better chances to optimize the cost while satisfying the constraints on the quality of multimedia service. Various simulations were conducted to validate the efficiency of the proposed task and resource management approaches. The results showed that the proposed solutions provided better performance as compared to the existing state-of-the-art approaches.
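
The allocation-deadline idea can be sketched as a simple heuristic: queued VM requests are served earliest-deadline-first and placed on the cheapest server that still has capacity, while requests whose deadline has already passed are rejected. The data structures and the single-dimension capacity/cost model are assumptions, not the paper's queuing model or heuristic.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    capacity: int            # free VM slots (simplified single-dimension capacity)
    cost_per_vm: float

@dataclass(order=True)
class VmRequest:
    deadline: float                      # latest acceptable allocation time
    vm_id: str = field(compare=False)

def allocate(requests: list[VmRequest], servers: list[Server], now: float) -> dict:
    """Earliest-deadline-first placement onto the cheapest feasible server."""
    placement = {}
    for req in sorted(requests):                 # earliest deadline first
        if req.deadline < now:
            placement[req.vm_id] = None          # deadline missed: reject
            continue
        candidates = [s for s in servers if s.capacity > 0]
        if not candidates:
            placement[req.vm_id] = None
            continue
        best = min(candidates, key=lambda s: s.cost_per_vm)
        best.capacity -= 1
        placement[req.vm_id] = best.name
    return placement

if __name__ == "__main__":
    servers = [Server("s1", 1, 0.5), Server("s2", 2, 0.9)]
    reqs = [VmRequest(10.0, "vm-a"), VmRequest(5.0, "vm-b"), VmRequest(1.0, "vm-c")]
    print(allocate(reqs, servers, now=2.0))
```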

Journal ArticleDOI
TL;DR: This special issue of Springer Computing deals with the intersection of big media data and its need for exploiting elastic cloud computing services for efficiently processing and analysing such big data to support emerging media-optimised applications.
Abstract: Welcome to the special issue of Springer Computing on Software Tools and Technologies for Delivering Smart Media-Optimized Applications in the Cloud. This special issue deals with the intersection of big media data and its need for exploiting elastic cloud computing services for efficiently processing and analysing such big data to support emerging media-optimised applications.

Journal ArticleDOI
TL;DR: This paper outlines the approach where formally analyzable models may be automatically extracted from BIM depending on the analysis required, and then checked against formally specified requirements, both regarding static and dynamic properties of the design, prior to the construction phase (at design time).
Abstract: We increasingly live in cyber-physical spaces: spaces that are both physical and digital, and where the two aspects are intertwined. Cyber-physical spaces may exhibit a range of behaviors, from smart control of heating, ventilation, and light to visionary multi-functional living spaces that can be spatially re-organized in a dynamic way. In contrast to traditional physical environments, cyber-physical spaces often exhibit dynamic behaviors: they can change over time and react to changes occurring in space. Current design of spaces, however, does not normally accommodate the cyber aspects of modern spatial environments and does not capture their dynamic behavior. Spatial design, although done with CAD tools and following certain international processes and standards, such as Building Information Modelling (BIM), largely produces syntactic descriptions of spaces which lack dynamic semantics. As a consequence, designs cannot be automatically (and formally) analyzed with respect to various requirements emerging from dynamic cyber-physical spaces; safety, security or reliability requirements being typical examples of this. This paper will show an avenue for research which can be characterized as rethinking the design of spatial environments, i.e., dynamic cyber-physical spaces, from a software engineering perspective. We outline our approach where formally analyzable models may be automatically extracted from BIM depending on the analysis required, and then checked against formally specified requirements, both regarding static and dynamic properties of the design, prior to the construction phase (at design time). To realize automated operational management, these models can also be used during operation to continuously check satisfaction of the requirements when changes occur, and possibly enforce their satisfaction through self-adaptive strategies (at run-time).

Journal ArticleDOI
TL;DR: The aim of this paper is to study energy efficiency issues within data centers from the Information System perspective using a model-based approach that integrates the application and infrastructure capabilities and includes an evolution mechanism able to evaluate past decisions feedback in order to adjust the model according to the current underlying environment.
Abstract: The problem of Information Technology energy consumption has gained much attention due to the ever-increasing use of IT both for business and for personal reasons. In particular, data centers now play a much more important role in modern society, where information is available all the time and everywhere. In this context, the aim of this paper is to study energy efficiency issues within data centers from the Information System perspective. The proposed approach integrates the application and infrastructure capabilities, in which the enactment of adaptation mechanisms is aligned with the business process. Based on both the energy and quality dimensions of service-based applications, a model-based approach supports the formulation of a new constrained optimization problem that takes into consideration over-constrained solutions, where the goal is to obtain a better trade-off between energy and quality requirements. These ideas are combined within a framework where time-based analysis allows the identification of potential system threats and drives the selection of adaptation actions improving overall energy and quality requirements, represented by indicator satisfaction. In addition, the framework includes an evolution mechanism that is able to evaluate feedback from past decisions in order to adjust the model according to the current underlying environment. Finally, the benefits of the approach are analyzed in an experimental setting.

Journal ArticleDOI
TL;DR: A semi-automated approach to synthesize an object-centric design from a business process model that specifies the flow of multiple stateful objects between activities, which can be refined into a complete specification of a data-centric BPM system.
Abstract: Data-centric business process models couple data and control flow to specify flexible business processes. However, it can be difficult to predict the actual behavior of a data-centric model, since the global process is typically distributed over several data elements and possibly specified in a declarative way. We therefore envision a data-centric process modeling approach in which the default behavior of the process is first specified in a classical, imperative process notation, which is then transformed to a declarative, data-centric process model that can be further refined into a complete model. To support this vision, we define a semi-automated approach to synthesize an object-centric design from a business process model that specifies the flow of multiple stateful objects between activities. The object-centric design specifies in an imperative way the life cycles of the objects and the object interactions. Next, we define a mapping from an object-centric design to a declarative Guard-Stage-Milestone schema, which can be refined into a complete specification of a data-centric BPM system. The synthesis approach has been implemented and tested using a graph transformation tool.

Journal ArticleDOI
TL;DR: A high-performance, load-balancing, and replicable system that provides data storage for private cloud users through a virtualization system, extending and enhancing the functionality of the Hadoop distributed system.
Abstract: In the past, people have focused on cluster computing and grid computing. Now, however, this focus has shifted to cloud computing. Irrespective of what techniques are used, there are always storage requirements. The challenge people face in this area is the huge amount of data to be stored, and its complexity. People are now using many cloud applications. As a result, service providers must serve increasingly more people, causing more and more connections involving substantially more data. These problems could have been solved in the past, but in the age of cloud computing, they have become more complex. This paper focuses on cloud computing infrastructure, and especially data services. The goal of this paper is to implement a high-performance, load-balancing, and replicable system that provides data storage for private cloud users through a virtualization system. This system extends and enhances the functionality of the Hadoop distributed system. The proposed approach also implements a resource monitor of machine status factors such as CPU, memory, and network usage to help optimize the virtualization system and data storage system. To prove and extend the usability of this design, a synchronization app running on Android was also developed based on our distributed data storage.
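
The resource monitor described above (machine status factors such as CPU, memory, and network usage) could be approximated by a small agent like the sketch below. It relies on the psutil library and simply prints the samples; it is an illustration, not the paper's implementation, and a real agent would push the metrics to the virtualization manager instead.

```python
import time

import psutil  # third-party library for querying system metrics

def sample_metrics() -> dict:
    """Collect the machine-status factors mentioned above: CPU, memory, network."""
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "mem_percent": psutil.virtual_memory().percent,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }

def monitor(period_s: int = 5, samples: int = 3) -> None:
    """Periodically sample and report machine status."""
    for _ in range(samples):
        print(sample_metrics())
        time.sleep(period_s)

if __name__ == "__main__":
    monitor()
```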

Journal ArticleDOI
TL;DR: The objective is to fully utilize PCM to reduce energy consumption while ensuring that the real-time performance of applications is guaranteed, and a two-phase approach is proposed to solve the hybrid main memory address mapping problem.
Abstract: In embedded systems, especially battery-driven mobile devices, energy is one of the most critical performance metrics. Due to its high density and low standby power, phase change memory (PCM), an emerging non-volatile memory device, is becoming a promising dynamic random access memory (DRAM) alternative. Recent studies have proposed the hybrid main memory architecture integrating both PCM and DRAM to fully take advantage of the properties of both memories. However, the low power performance of PCM in the hybrid main memory architecture has not been fully explored. Therefore, it becomes an interesting problem to utilize PCM and DRAM as hybrid main memory for energy optimization in embedded systems. In this paper, we present an energy optimization technique for the hybrid main memory architecture. The objective is to fully utilize PCM to reduce the energy consumption while ensuring that the real-time performance of applications is guaranteed. We propose a two-phase approach to solve the hybrid main memory address mapping problem. In the first phase, we calculate the energy and time cost for each address based on the task models. Then the applications can be modeled as data-flow graph nodes, and different access times will be associated with different energy consumption. In the second phase, for different memory types and the given timing constraint, we formulate the scheduling problem as an integer linear programming (ILP) model and obtain an optimal solution. The ILP model can map a proper memory type for each address such that the total energy consumption can be minimized while the timing constraint is satisfied. In addition, we propose a heuristic approach to efficiently obtain a near-optimal solution. We conduct experiments on an ARM-based simulator. The experimental results show that our method can effectively reduce the energy consumption with the least system cost compared with the previous work.
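
The second-phase ILP can be illustrated with a tiny model: choose DRAM or PCM per address so that total energy is minimized while total access latency respects a timing constraint. The per-access energy and latency figures, the deadline, and the PuLP formulation are illustrative assumptions, not the paper's model.

```python
import pulp

# Hypothetical per-access energy (nJ) and latency (ns) for each address on each memory type.
addresses = ["a0", "a1", "a2", "a3"]
energy = {"DRAM": {"a0": 4, "a1": 4, "a2": 4, "a3": 4},
          "PCM":  {"a0": 2, "a1": 2, "a2": 6, "a3": 2}}   # write-heavy a2 is costly on PCM
latency = {"DRAM": {"a0": 10, "a1": 10, "a2": 10, "a3": 10},
           "PCM":  {"a0": 25, "a1": 25, "a2": 40, "a3": 25}}
DEADLINE = 80  # assumed timing constraint (ns) over all accesses

prob = pulp.LpProblem("hybrid_memory_mapping", pulp.LpMinimize)
x = pulp.LpVariable.dicts("map", (["DRAM", "PCM"], addresses), cat="Binary")

# Objective: minimize total energy consumption.
prob += pulp.lpSum(energy[m][a] * x[m][a] for m in ("DRAM", "PCM") for a in addresses)

# Each address is mapped to exactly one memory type.
for a in addresses:
    prob += x["DRAM"][a] + x["PCM"][a] == 1

# Total access latency must respect the timing constraint.
prob += pulp.lpSum(latency[m][a] * x[m][a] for m in ("DRAM", "PCM") for a in addresses) <= DEADLINE

prob.solve(pulp.PULP_CBC_CMD(msg=False))
mapping = {a: ("DRAM" if x["DRAM"][a].value() == 1 else "PCM") for a in addresses}
print(mapping, "energy =", pulp.value(prob.objective))
```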

Journal ArticleDOI
TL;DR: An adaptive resource management algorithm is extended with a new resource merge strategy in order to prevent average resource size from shrinking and coordinate resources among the component services of a workflow so that unnecessary resource allocations and terminations can be avoided.
Abstract: A Cloud platform offers on-demand provisioning of virtualized resources and pay-per-use charge model to its hosted services to satisfy their fluctuating resource needs. Resource scaling in cloud is often carried out by specifying static rules or thresholds. As business processes and scientific jobs become more intricate and involve more components, traditional reactive or rule-based resource management methods are not able to meet the new requirements. In this paper, we extend our previous work on dynamically managing virtualized resources for service workflows in a cloud environment. Extensive experimental results of an adaptive resource management algorithm are reported. The algorithm makes resource management decisions based on predictive results and high level user specified thresholds. It is also able to coordinate resources among the component services of a workflow so that unnecessary resource allocations and terminations can be avoided. Based on observations from previous experiments, the algorithm is extended with a new resource merge strategy in order to prevent average resource size from shrinking. Simulation results from synthetic workload data demonstrated the effectiveness of the extension.

Journal ArticleDOI
TL;DR: This paper summarizes the findings of an initial work on the construction of a price index based on a hedonic pricing method, taking into account different factors of IaaS cloud computing services, including two of the most important players in the cloud market, Google and Microsoft Azure.
Abstract: Cloud computing, as an innovative business model, has experienced rapid diffusion across the international business world, offering many benefits to both the demand and the supply side of the ICT market. In particular, the public cloud approach receives more attention and the Infrastructure as a Service (IaaS) model is expected to be the fastest growing model of public cloud computing, as it is considered to be a very good solution for companies needing the control of fundamental computing resources, such as memory, computing power and storage capacity. Currently, the battle for a dominant market share increases the competition among cloud providers and leads to the development of new pricing schemes, in order to meet the market demand. However, the choice of the cheapest cloud hosting provider depends exclusively on the clients' needs, and this is why prices for cloud services are the result of a multidimensional function shaped by the service's characteristics. In that context, this paper summarizes the findings of an initial work on the construction of a price index based on a hedonic pricing method, taking into account different factors of IaaS cloud computing services, including two of the most important players in the cloud market, Google and Microsoft Azure. The aim of this study is to provide price indices both on a continent level and globally, in an effort to investigate differences in pricing policies in different marketplaces. Comparing the results leads to important conclusions related to pricing policies of IaaS cloud services.
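
A hedonic price index is typically built by regressing (log) price on service characteristics; the sketch below fits such a regression with ordinary least squares on made-up IaaS instance data. The characteristics, figures, and reference bundle are illustrative only and do not come from the paper or from any provider's price list.

```python
import numpy as np

# Made-up IaaS observations: [vCPUs, RAM (GB), storage (GB)] and hourly price (USD).
features = np.array([
    [1,  2,  20],
    [2,  4,  40],
    [4,  8,  80],
    [8, 16, 160],
    [2,  8,  50],
    [4, 16, 100],
], dtype=float)
prices = np.array([0.02, 0.04, 0.09, 0.19, 0.06, 0.13])

# Hedonic regression: log(price) = b0 + b1*vCPU + b2*RAM + b3*storage + error.
X = np.column_stack([np.ones(len(prices)), features])
coef, *_ = np.linalg.lstsq(X, np.log(prices), rcond=None)

# The fitted coefficients act as implicit (shadow) prices of each characteristic; a price
# index compares the predicted price of a fixed bundle across markets or periods.
bundle = np.array([1.0, 2, 4, 40])     # reference instance: 2 vCPUs, 4 GB RAM, 40 GB storage
print("coefficients:", coef)
print("predicted price of reference bundle:", np.exp(bundle @ coef))
```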

Journal ArticleDOI
TL;DR: The implementation results demonstrated that the proposed methodology significantly reduces the processing time of data centers and the response time of customer applications by minimizing VMs migration.
Abstract: Cloud computing is one of the most attractive cost-effective technologies for provisioning information technology (IT) resources to common IT consumers. These resources are provided as services through the internet in a pay-per-usage manner and are mainly classified into application, platform and infrastructure. The cloud provides its services through data centers that house high-configuration servers. Conserving data center energy benefits both cloud providers and consumers in terms of service time and cost. One of the fundamental services of the cloud is infrastructure as a service, which provides virtual machines (VMs) as a computing resource to consumers. The VMs are created on data center servers as machine instances, which can work as dedicated computer systems for consumers. As the cloud provides the feature of elasticity, consumers can change their resource demand during service. This characteristic makes VM migration unavoidable in cloud environments. The increased downtime of VMs during migration affects the efficiency of the cloud service. Minimizing VM migration reduces the processing time, which ultimately saves data center energy. The proposed methodology in this work utilizes a genetically weight-optimized artificial neural network to predict the near-future availability of data center servers. Based on the future availability of resources, the VM management activities are performed. The implementation results demonstrated that the proposed methodology significantly reduces the processing time of data centers and the response time of customer applications by minimizing VM migration.
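
A compact sketch of the "genetically weight-optimized neural network" idea follows: a plain genetic algorithm searches the weight vector of a tiny feed-forward network that predicts near-future availability from recent utilization samples. The network size, GA parameters, and toy data are assumptions made for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 3 recent utilization samples -> next-step availability (1 - mean util).
X = rng.uniform(0, 1, size=(64, 3))
y = 1.0 - X.mean(axis=1)

N_HIDDEN = 4
N_WEIGHTS = 3 * N_HIDDEN + N_HIDDEN          # input->hidden and hidden->output weights

def predict(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    w1 = weights[: 3 * N_HIDDEN].reshape(3, N_HIDDEN)
    w2 = weights[3 * N_HIDDEN:]
    return np.tanh(x @ w1) @ w2

def fitness(weights: np.ndarray) -> float:
    return -np.mean((predict(weights, X) - y) ** 2)   # negative MSE (higher is better)

# Generational GA: tournament selection, uniform crossover, Gaussian mutation, elitism.
pop = rng.normal(0, 1, size=(40, N_WEIGHTS))
for generation in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[scores.argmax()].copy()]           # keep the best individual
    while len(new_pop) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        parent_a = pop[i] if scores[i] > scores[j] else pop[j]
        i, j = rng.integers(0, len(pop), 2)
        parent_b = pop[i] if scores[i] > scores[j] else pop[j]
        mask = rng.random(N_WEIGHTS) < 0.5
        child = np.where(mask, parent_a, parent_b)
        child = child + rng.normal(0, 0.1, N_WEIGHTS) * (rng.random(N_WEIGHTS) < 0.2)
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("final MSE:", -fitness(best))
```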

Journal ArticleDOI
TL;DR: The objective of this paper is to demonstrate the fidelity of implementing WirelessHART in surface acoustic wave (SAW) sensor network, which is a Dense Reader Environment intended for industrial monitoring.
Abstract: The use of wireless technology for industrial automation and process control has become a favoured choice as it reduces the hassle of cable installation and maintenance cost. The WirelessHART protocol is the first open and interoperable industrial wireless sensor network standard, based on Time Division Multiple Access and channel hopping as its Medium Access Control protocol. Industrial wireless sensor networks must support reliable and real-time data collection in harsh environments. The objective of this paper is to demonstrate the fidelity of implementing WirelessHART in a surface acoustic wave (SAW) sensor network, which is a Dense Reader Environment intended for industrial monitoring. A comparison between existing industrial monitoring applications is presented, and their future perspectives are extensively discussed. Three possible network topologies for the WirelessHART network are simulated to propose the optimal network design for the SAW sensor network. In fact, the proposed system configurations are also tested with RF transmitters and receivers for a better understanding of WirelessHART and its application in a dense interrogator network.

Journal ArticleDOI
TL;DR: The approach defines techniques for composing data-intensive, scientific workflows in more complex simulations in a generic, domain-independent way and thus provides means for collaborative and integrated data management using the workflow/process-based paradigm.
Abstract: Current systems for enacting scientific experiments, and simulation workflows in particular, do not support multi-scale and multi-field problems if they are not coupled on the level of the mathematical model. To address this deficiency, we present an approach enabling the trial-and-error modeling and execution of multi-scale and/or multi-field simulations in a top-down and bottom-up manner which is based on the notion of choreographies. The approach defines techniques for composing data-intensive, scientific workflows in more complex simulations in a generic, domain-independent way and thus provides means for collaborative and integrated data management using the workflow/process-based paradigm. We contribute a life cycle definition of such simulations and present in detail concepts and techniques that support all life cycle phases. Furthermore, requirements on a respective software system and choreography language supporting multi-scale and/or multi-field simulations are identified, and an architecture and its realization are presented.

Journal ArticleDOI
TL;DR: This paper proposes a novel Cloud Computing and P2P hybrid architecture for multimedia information retrieval on VoD services that supports random seeking while providing scalability and efficiency, and separates cloud nodes and peers responsibilities to manage the video metadata and its segments, respectively.
Abstract: Recent research in Cloud Computing and Peer-to-Peer systems for Video-on-Demand (VoD) has focused on multimedia information retrieval, using cloud nodes as video streaming servers and peers as a way to distribute and share the video segments. A key challenge faced by these systems is providing an efficient way to retrieve the information segments descriptor, composed of its metadata and video segments, distributed among the cloud nodes and the Peer-to-Peer (P2P) network. In this paper, we propose a novel Cloud Computing and P2P hybrid architecture for multimedia information retrieval on VoD services that supports random seeking while providing scalability and efficiency. The architecture comprises Cloud and P2P layers. The Cloud layer is responsible for video segment metadata retrieval, using ontologies to improve the relevance of the retrieved information, and for distributing the metadata structures among cloud nodes. The P2P layer is responsible for finding peers that have the physical location of a segment. In this layer, we use trackers, which manage and collect the segments shared among other peers. We also use two Distributed Hash Tables, one to find these trackers and the other to store the information collected in case the tracker leaves the network and another peer needs to replace it. Unlike previous work, our architecture separates cloud nodes and peers responsibilities to manage the video metadata and its segments, respectively. Also, we show via simulations, the possibility of converting any peer to act as a tracker, while maintaining system scalability and performance, avoiding using centralized and powerful servers.

Journal ArticleDOI
TL;DR: A power management technique that maintains the quality of service (QoS) levels specified with service level agreements expressed as a threshold for a percentile of the response time and provides self-healing by identifying when servers fail and automatically provisioning new servers.
Abstract: The increasing use of server clusters has made their energy consumption an important issue. To address it, several power management techniques are being developed. In order to be useful, these techniques must address the performance and availability implications of reducing energy consumption. This paper presents a power management technique that maintains the quality of service (QoS) levels specified with service level agreements expressed as a threshold for a percentile of the response time. In addition, it provides self-healing by identifying when servers fail and automatically provisioning new servers. The technique is based on balancing the load so that it is concentrated in a small number of servers. For this, it only requires two utilization thresholds and models of performance and power consumption for the application executed in the server. It works in heterogeneous servers and provides overload protection. Several experiments carried out on a prototype show that the technique reduces energy consumption (up to 57.59 % compared to an always-on policy) while providing self-healing and maintaining the QoS.
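
The load-concentration idea (serve the demand on as few servers as possible, given a target utilization, and power the rest off) together with the self-healing step can be sketched as follows. The threshold, per-server capacity, and the single-dimension load model are assumptions for illustration, not the paper's performance and power models.

```python
import math

TARGET_UTIL = 0.7          # assumed upper utilization threshold for active servers
PER_SERVER_CAPACITY = 120  # assumed requests/s one server handles at 100 % utilization

def servers_needed(request_rate: float) -> int:
    """Number of servers needed so each active server stays below TARGET_UTIL."""
    return max(1, math.ceil(request_rate / (PER_SERVER_CAPACITY * TARGET_UTIL)))

def plan(request_rate: float, servers: list[str], healthy: dict[str, bool]) -> dict:
    """Concentrate the load on the fewest healthy servers; replace failed ones."""
    alive = [s for s in servers if healthy.get(s, False)]
    failed = [s for s in servers if not healthy.get(s, False)]
    n = servers_needed(request_rate)
    return {
        "keep_on": alive[:n],
        "power_off": alive[n:],
        "provision_replacements": max(0, n - len(alive)),  # self-healing
        "decommission_failed": failed,
    }

if __name__ == "__main__":
    print(plan(300, ["s1", "s2", "s3", "s4", "s5"],
               {"s1": True, "s2": True, "s3": False, "s4": True, "s5": True}))
```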

Journal ArticleDOI
TL;DR: A workflow scheduling algorithm is proposed to schedule large scientific workflows that are to be executed on IaaS clouds and the metaheuristic Catfish particle swarm optimization (C-PSO) technique is used to select the best schedule with the least makespan and execution cost.
Abstract: Cloud computing is a technology wherein a network of remote servers is used to process large amounts of data in real time. The servers and data sources may be located in geographically distant regions. Scheduling of workflows is one of the major challenging issues in cloud computing. Workflows are used to express a wide variety of applications including scientific computing and multi-tier web applications. The workflow scheduling problem is known to be NP-complete. No known traditional scheduling algorithm is able to provide an optimal solution in polynomial time for NP-complete problems, so researchers rely on heuristics and meta-heuristics to achieve the most efficient solution. In this paper, a workflow scheduling algorithm is proposed to schedule large scientific workflows that are to be executed on IaaS clouds. The workflow scheduling algorithm generates a schedule with the task-to-resource mapping. The metaheuristic Catfish particle swarm optimization (C-PSO) technique is used to select the best schedule with the least makespan and execution cost. The performance of C-PSO is then compared with traditional PSO. The algorithm is simulated on the WorkFlowSim simulator, an extension of the CloudSim simulator. The solution is tested for different types of scientific workflows like Montage, Epigenome, CyberShake and Inspiral. It is observed from the experimental results that C-PSO gives better performance than traditional PSO in terms of execution cost and makespan.
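
A compact sketch of particle swarm optimization over task-to-VM mappings follows, with a simple "catfish" step that re-seeds the worst particles when the swarm stagnates. The encoding (continuous positions rounded to VM indices), the combined makespan/cost objective, and all parameters are assumptions; the paper's C-PSO and its cost model are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

N_TASKS, N_VMS = 12, 4
task_length = rng.uniform(10, 100, N_TASKS)      # toy task sizes
vm_speed = rng.uniform(1, 4, N_VMS)              # toy VM speeds
vm_price = rng.uniform(0.1, 1.0, N_VMS)          # toy VM cost per time unit

def cost(position: np.ndarray) -> float:
    """Fitness of a mapping: makespan plus a weighted execution cost."""
    mapping = np.clip(position.round().astype(int), 0, N_VMS - 1)
    finish, money = np.zeros(N_VMS), 0.0
    for task, vm in enumerate(mapping):
        runtime = task_length[task] / vm_speed[vm]
        finish[vm] += runtime
        money += runtime * vm_price[vm]
    return finish.max() + 0.1 * money

N_PARTICLES, ITERS = 30, 200
pos = rng.uniform(0, N_VMS - 1, (N_PARTICLES, N_TASKS))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()
stall = 0

for it in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, N_VMS - 1)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    if pbest_cost.min() < cost(gbest):
        gbest, stall = pbest[pbest_cost.argmin()].copy(), 0
    else:
        stall += 1
    if stall >= 20:   # "catfish" step: replace the worst particles with fresh ones
        worst = np.argsort(pbest_cost)[-5:]
        pos[worst] = rng.uniform(0, N_VMS - 1, (5, N_TASKS))
        vel[worst] = 0
        stall = 0

print("best mapping:", np.clip(gbest.round().astype(int), 0, N_VMS - 1), "cost:", cost(gbest))
```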