The Journal of Supercomputing
Springer Science+Business Media
About: The Journal of Supercomputing is an academic journal published by Springer Science+Business Media. The journal publishes mainly in the areas of computer science and cloud computing. Its ISSN is 0920-8542. Over its lifetime, 6191 publications have appeared in the journal, receiving 74610 citations.
TL;DR: This survey gives an overview of wireless sensor networks and their application domains, including the challenges that must be addressed to push the technology further, and identifies several open research issues for future investigation.
Abstract: Wireless sensor networks (WSNs) have emerged as one of the most promising technologies for the future. This has been enabled by advances in technology and the availability of small, inexpensive, and smart sensors, resulting in cost-effective and easily deployable WSNs. However, researchers must address a variety of challenges to facilitate the widespread deployment of WSN technology in real-world domains. In this survey, we give an overview of wireless sensor networks and their application domains, including the challenges that should be addressed in order to push the technology further. Then we review recent technologies and testbeds for WSNs. Finally, we identify several open research issues that need to be investigated in the future. Our survey differs from existing surveys in that we focus on recent developments in wireless sensor network technologies. We review the leading research projects, standards and technologies, and platforms. Moreover, we highlight a recent phenomenon in WSN research, namely exploring synergy between sensor networks and other technologies, and explain how this can help sensor networks achieve their full potential. This paper intends to help new researchers entering the domain of WSNs by providing a comprehensive survey of recent developments.
TL;DR: A simulation environment for energy-aware cloud computing data centers is presented, and its effectiveness is demonstrated in applying power management schemes, such as voltage scaling, frequency scaling, and dynamic shutdown, to the computing and networking components.
Abstract: Cloud computing data centers are becoming increasingly popular for the provisioning of computing resources. The cost and operating expenses of data centers have skyrocketed with the increase in computing capacity. Several governmental, industrial, and academic surveys indicate that the energy utilized by computing and communication units within a data center contributes a considerable slice of the data center operational costs. In this paper, we present a simulation environment for energy-aware cloud computing data centers. Along with the workload distribution, the simulator is designed to capture details of the energy consumed by data center components (servers, switches, and links) as well as packet-level communication patterns in realistic setups. The simulation results obtained for two-tier, three-tier, and three-tier high-speed data center architectures demonstrate the effectiveness of the simulator in applying power management schemes, such as voltage scaling, frequency scaling, and dynamic shutdown, to the computing and networking components.
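The power management schemes named in the abstract can be illustrated with a minimal sketch of a server power model of the kind such simulators use. The idle/peak wattages and the cubic frequency scaling are illustrative assumptions, not figures from the paper:

```python
# Hedged sketch: simplified server power model with DVFS and dynamic shutdown.
# P_idle, P_peak, and the cubic frequency exponent are illustrative values.

def server_power(utilization, freq_scale=1.0, p_idle=100.0, p_peak=250.0):
    """Power draw (watts) of one server.

    utilization: CPU load in [0, 1]; freq_scale: DVFS frequency factor in (0, 1].
    Dynamic power scales roughly with f^3 (f * V^2, with V proportional to f);
    a fully idle server can be dynamically shut down, drawing zero power.
    """
    if utilization == 0.0:
        return 0.0  # dynamic shutdown of idle servers
    dynamic = (p_peak - p_idle) * utilization * freq_scale ** 3
    return p_idle + dynamic

print(server_power(1.0))                  # full load, full frequency: 250.0 W
print(server_power(1.0, freq_scale=0.5))  # DVFS cuts the dynamic part 8x: 118.75 W
print(server_power(0.0))                  # shut down: 0.0 W
```

The sketch shows why both voltage/frequency scaling and shutdown matter: scaling only attacks the dynamic component, while the idle floor `p_idle` is removed only by shutting the server down.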
TL;DR: The superblock enables the optimizer and scheduler to extract more ILP along the important execution paths by systematically removing constraints due to the unimportant paths, which is particularly useful for control-intensive programs.
Abstract: A compiler for VLIW and superscalar processors must expose sufficient instruction-level parallelism (ILP) to effectively utilize the parallel hardware. However, ILP within basic blocks is extremely limited for control-intensive programs. We have developed a set of techniques for exploiting ILP across basic block boundaries. These techniques are based on a novel structure called the superblock. The superblock enables the optimizer and scheduler to extract more ILP along the important execution paths by systematically removing constraints due to the unimportant paths. Superblock optimization and scheduling have been implemented in the IMPACT-I compiler. This implementation gives us a unique opportunity to fully understand the issues involved in incorporating these techniques into a real compiler. Superblock optimizations and scheduling are shown to be useful while taking into account a variety of architectural features.
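Superblock formation starts from profile-guided trace selection: grow a trace by repeatedly following the most frequently taken branch. The toy CFG, block names, and edge frequencies below are invented for illustration; a real compiler such as IMPACT-I would follow this with tail duplication to remove side entrances:

```python
# Hedged sketch of profile-guided trace selection, the first step of
# superblock formation. CFG and frequencies are illustrative, not from the paper.

def form_trace(cfg, edge_freq, start):
    """Grow a trace from `start` by repeatedly following the most frequent
    outgoing edge, stopping at already-visited blocks to avoid cycles."""
    trace = [start]
    visited = {start}
    cur = start
    while cfg.get(cur):
        succs = cfg[cur]
        best = max(succs, key=lambda s: edge_freq.get((cur, s), 0))
        if best in visited:
            break  # back edge: stop the trace rather than loop
        trace.append(best)
        visited.add(best)
        cur = best
    return trace

# Toy CFG: A branches to B (hot) and C (cold); both reach D.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
edge_freq = {("A", "B"): 90, ("A", "C"): 10, ("B", "D"): 90, ("C", "D"): 10}
print(form_trace(cfg, edge_freq, "A"))  # ['A', 'B', 'D']
```

Tail duplication would then copy block D so the cold C→D side entrance no longer constrains optimization and scheduling of the hot A-B-D superblock.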
TL;DR: Two energy-conscious task consolidation heuristics are presented that aim to maximize resource utilization while explicitly taking into account both active and idle energy consumption; experimental results demonstrate their promising energy-saving capability.
Abstract: The energy consumption of under-utilized resources, particularly in a cloud environment, accounts for a substantial amount of the actual energy use. Inherently, a resource allocation strategy that takes resource utilization into account leads to better energy efficiency; in clouds, this extends further with virtualization technologies, in that tasks can be easily consolidated. Task consolidation is an effective method to increase resource utilization and in turn reduce energy consumption. Recent studies have identified that server energy consumption scales linearly with (processor) resource utilization. This encouraging fact further highlights the significant contribution of task consolidation to the reduction in energy consumption. However, task consolidation can also free up resources that sit idle yet still draw power. There have been some notable efforts to reduce idle power draw, typically by putting computing resources into some form of sleep/power-saving mode. In this paper, we present two energy-conscious task consolidation heuristics, which aim to maximize resource utilization and explicitly take into account both active and idle energy consumption. Our heuristics assign each task to the resource on which the energy consumption for executing the task is explicitly or implicitly minimized without degrading the performance of that task. Our experimental results demonstrate the promising energy-saving capability of these heuristics.
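The greedy idea behind such heuristics can be sketched as follows: place each task on the resource whose marginal energy increase is smallest, subject to a utilization cap that protects task performance. The linear power model and all numbers are illustrative assumptions, not the paper's exact heuristics:

```python
# Hedged sketch of energy-conscious task consolidation: greedy assignment
# minimizing the marginal energy of each placement. Power model parameters
# are illustrative; this is not the paper's exact ECTC/MaxUtil formulation.

def energy_rate(u, p_min=20.0, p_max=100.0):
    """Power draw at utilization u, linear between idle and peak.
    An idle resource still draws p_min: the active+idle concern."""
    return p_min + (p_max - p_min) * u

def consolidate(tasks, n_resources):
    """tasks: list of (utilization, duration) pairs. Greedily place each task
    on the resource whose energy increase over the task's duration is smallest;
    a task that fits nowhere is left unassigned (None)."""
    loads = [0.0] * n_resources
    assignment = []
    for u, dur in tasks:
        best, best_cost = None, None
        for r in range(n_resources):
            if loads[r] + u > 1.0:
                continue  # never over-commit: no performance degradation
            cost = (energy_rate(loads[r] + u) - energy_rate(loads[r])) * dur
            if best_cost is None or cost < best_cost:
                best, best_cost = r, cost
        assignment.append(best)
        if best is not None:
            loads[best] += u
    return assignment, loads

# Three tasks consolidate onto resource 0 until it is full, then spill to 1.
print(consolidate([(0.5, 10), (0.4, 10), (0.3, 10)], 2))
# → ([0, 0, 1], [0.9, 0.3])
```

With a linear power model the marginal cost is the same on every resource, so ties break to the first feasible one and tasks consolidate naturally, leaving the remaining resources free to enter a sleep/power-saving mode.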
TL;DR: A QoS-constrained resource allocation problem is considered, in which service demanders intend to solve sophisticated parallel computing problems by requesting the use of resources across a cloud-based network, and the cost of each computational service depends on the amount of computation.
Abstract: As cloud-based services become more numerous and dynamic, resource provisioning becomes more and more challenging. A QoS-constrained resource allocation problem is considered in this paper, in which service demanders intend to solve sophisticated parallel computing problems by requesting the use of resources across a cloud-based network, and the cost of each computational service depends on the amount of computation. Game theory is used to solve the resource allocation problem. A practical approximated solution with the following two steps is proposed. First, each participant solves its optimization problem independently, without considering the multiplexing of resource assignments; a Binary Integer Programming method is proposed to solve this independent optimization. Second, an evolutionary mechanism is designed that changes the multiplexed strategies of the participants' initial optimal solutions while minimizing their efficiency losses. The algorithms in the evolutionary mechanism take both optimization and fairness into account. It is demonstrated that a Nash equilibrium always exists if the resource allocation game has feasible solutions.
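The two-step scheme can be illustrated on a toy version of the problem: each participant first independently picks its cheapest resource (a stand-in for the Binary Integer Programming step), and conflicts over a multiplexed resource are then resolved by moving the claimant whose efficiency loss is smallest. The cost matrix and the assumption of at least as many resources as participants are illustrative:

```python
# Hedged sketch of the two-step scheme: independent optimization, then an
# evolutionary conflict-resolution step minimizing efficiency loss.
# Costs are invented; this stands in for, not reproduces, the paper's BIP model.

def independent_choice(costs):
    """costs[p][r]: cost of resource r for participant p.
    Each participant ignores the others and picks its minimum-cost resource."""
    return [min(range(len(row)), key=row.__getitem__) for row in costs]

def resolve_conflicts(costs, choice):
    """While some resource is claimed by several participants, move the
    claimant whose switch to its best free resource costs least (smallest
    efficiency loss). Assumes at least as many resources as participants."""
    n_res = len(costs[0])
    while True:
        conflict = None
        for r in range(n_res):
            claimants = [p for p, c in enumerate(choice) if c == r]
            if len(claimants) > 1:
                conflict = (r, claimants)
                break
        if conflict is None:
            return choice
        r, claimants = conflict
        taken = set(choice)
        moves = {}
        for p in claimants:
            free = [c for c in range(n_res) if c not in taken]
            alt = min(free, key=lambda c: costs[p][c])
            moves[p] = (costs[p][alt] - costs[p][r], alt)  # (loss, new resource)
        p = min(claimants, key=lambda q: moves[q][0])
        choice[p] = moves[p][1]

costs = [[1, 5, 9], [2, 3, 9]]          # two participants, three resources
choice = independent_choice(costs)       # both want resource 0: [0, 0]
print(resolve_conflicts(costs, choice))  # participant 1 moves (loss 1 < 4): [0, 1]
```

Moving the smaller-loss claimant here yields total cost 1 + 3 = 4, versus 5 + 2 = 7 for the alternative, which is the fairness-plus-efficiency trade-off the evolutionary mechanism is designed around.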