
Showing papers on "Shared resource" published in 2015


Journal ArticleDOI
TL;DR: An efficient cloud workload management framework is presented in which cloud workloads are identified, analyzed and clustered through K-means on the basis of assigned weights and their QoS requirements.
Abstract: Cloud computing harmonizes and delivers the ability of resource sharing over different geographical sites. Cloud resource scheduling is a tedious task due to the problem of finding the best-matching resource-workload pair. The dynamic nature of resources can be managed efficiently with the help of cloud workload characterization. Until cloud workloads are treated as a central capability, resources cannot be utilized effectively. In the literature, very few efficient resource scheduling policies for energy-, cost- and time-constrained cloud workloads are reported. This paper presents an efficient cloud workload management framework in which cloud workloads are identified, analyzed and clustered through K-means on the basis of assigned weights and their QoS requirements. Scheduling is then performed based on different scheduling policies and their corresponding algorithms. The performance of the proposed algorithms has been evaluated against existing scheduling policies through the CloudSim toolkit. The experimental results show that the proposed framework gives better results in terms of energy consumption, execution cost and execution time of different cloud workloads as compared to existing algorithms.
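A minimal Python sketch of the clustering step described above, under made-up workload features, weights and cluster count (not the paper's implementation):

# Hypothetical sketch: cluster cloud workloads by weighted QoS features with K-means.
# Feature names, weights and k are illustrative assumptions, not the paper's values.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [cpu_demand, memory_demand, deadline_urgency, cost_sensitivity]
workloads = np.array([
    [0.9, 0.7, 0.2, 0.1],
    [0.2, 0.3, 0.9, 0.8],
    [0.5, 0.5, 0.5, 0.5],
    [0.8, 0.9, 0.1, 0.2],
])
qos_weights = np.array([0.4, 0.3, 0.2, 0.1])  # assumed relative importance of each QoS feature

weighted = workloads * qos_weights            # scale features by their weights
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(weighted)
for wl, c in zip(workloads, labels):
    print(f"workload {wl} -> cluster {c}")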

132 citations


Journal ArticleDOI
TL;DR: A novel rate adaptation algorithm, capable of increasing clients’ Quality of Experience (QoE) and achieving fairness in a multiclient setting, is proposed, which can improve fairness up to 80% compared to state-of-the-art HAS heuristics in a scenario with three networks.
Abstract: HTTP Adaptive Streaming (HAS) is quickly becoming the de facto standard for video streaming services. In HAS, each video is temporally segmented and stored in different quality levels. Rate adaptation heuristics, deployed at the video player, allow the most appropriate level to be dynamically requested, based on the current network conditions. It has been shown that today’s heuristics underperform when multiple clients consume video at the same time, due to fairness issues among clients. Concretely, this means that different clients negatively influence each other as they compete for shared network resources. In this article, we propose a novel rate adaptation algorithm called FINEAS (Fair In-Network Enhanced Adaptive Streaming), capable of increasing clients’ Quality of Experience (QoE) and achieving fairness in a multiclient setting. A key element of this approach is an in-network system of coordination proxies in charge of facilitating fair resource sharing among clients. The strength of this approach is threefold. First, fairness is achieved without explicit communication among clients and thus no significant overhead is introduced into the network. Second, the system of coordination proxies is transparent to the clients, that is, the clients do not need to be aware of its presence. Third, the HAS principle is maintained, as the in-network components only provide the clients with new information and suggestions, while the rate adaptation decision remains the sole responsibility of the clients themselves. We evaluate this novel approach through simulations, under highly variable bandwidth conditions and in several multiclient scenarios. We show how the proposed approach can improve fairness up to 80% compared to state-of-the-art HAS heuristics in a scenario with three networks, each containing 30 clients streaming video at the same time.
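A rough sketch of a client-side rate-adaptation step of the kind discussed above, with an optional fair-share hint from a coordination proxy; the quality ladder, safety margin and cap are illustrative assumptions, not the FINEAS heuristic itself:

# Minimal HAS rate-adaptation step (illustrative only): pick the highest quality level
# whose bitrate fits the measured throughput, optionally capped by a fair-share hint
# suggested by an in-network coordination proxy. Numbers are assumptions.
BITRATES_KBPS = [300, 750, 1500, 3000, 6000]   # assumed quality ladder

def select_quality(measured_kbps, proxy_cap_kbps=None, safety=0.8):
    budget = measured_kbps * safety
    if proxy_cap_kbps is not None:
        budget = min(budget, proxy_cap_kbps)   # suggestion only; the client still decides
    feasible = [b for b in BITRATES_KBPS if b <= budget]
    return feasible[-1] if feasible else BITRATES_KBPS[0]

print(select_quality(measured_kbps=4000))                       # -> 3000 kbps
print(select_quality(measured_kbps=4000, proxy_cap_kbps=1600))  # -> 1500 kbps (capped by the hint)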

114 citations


Journal ArticleDOI
TL;DR: This paper models the interference relationships among different D2D and cellular communication links as a novel interference graph with unique attributes and proposes a corresponding joint resource-allocation scheme that can effectively lead to a near-optimal solution at the base station with low computational complexity.
Abstract: Device-to-device (D2D) communications underlaying cellular networks have been recently considered as a promising means to enhance resource utilization of the cellular network and local user throughput among devices in proximity to each other. In this paper, we investigate the joint resource block assignment and transmit power allocation problem to optimize the network performance in such a scenario. Specifically, we model the interference relationships among different D2D and cellular communication links as a novel interference graph with unique attributes and propose a corresponding joint resource-allocation scheme that can effectively lead to a near-optimal solution at the base station, with low computational complexity. Simulation results confirm that, with markedly reduced complexity, our proposed scheme achieves a network throughput that approaches the one corresponding to the optimal resource-sharing scheme obtained via exhaustive search.
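An assumption-level sketch of graph-based assignment in the spirit of the description above (not the paper's scheme): each D2D pair reuses the resource block of the cellular user with the smallest interference edge weight; all weights are invented:

# Illustrative sketch: greedily assign each D2D pair the cellular user's resource block
# that adds the least interference-graph edge weight. Weights below are made-up numbers.
interference = {          # interference[(d2d_pair, cellular_user)] -> edge weight
    ("D1", "C1"): 0.9, ("D1", "C2"): 0.2,
    ("D2", "C1"): 0.1, ("D2", "C2"): 0.7,
}
d2d_pairs = ["D1", "D2"]
cellular_users = ["C1", "C2"]

assignment = {}
for d in d2d_pairs:
    best = min(cellular_users, key=lambda c: interference[(d, c)])
    assignment[d] = best      # reuse that cellular user's resource block

print(assignment)             # {'D1': 'C2', 'D2': 'C1'}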

114 citations


Journal ArticleDOI
TL;DR: It is proven that the cheating mechanism benefits a subset of D2D users without hurting the performance of the rest, and the effectiveness of both the stable matching and cheating algorithms in improving D2D users' throughput and the overall throughput of D2D communications is demonstrated.
Abstract: In device-to-device (D2D) communication, mobile users communicate directly without going through the base station. D2D communication has the advantage of improving spectrum efficiency. But the interference introduced by resource sharing of D2D has become a significant challenge. In this paper, we try to optimize the system throughput while simultaneously meeting the quality of service (QoS) requirements for both D2D users and cellular users (CUs). We apply matching theory to solve the resource allocation problem. We utilize two efficient stable matching algorithms to optimize the social welfare while ensuring network stability. More importantly, we introduce the idea of cheating in matching to further improve D2D users' throughput. It is proven that the cheating mechanism benefits a subset of D2D users without hurting the performance of the rest. Through the simulation results, we demonstrate the effectiveness of both the stable matching and cheating algorithms in improving D2D users' throughput and the overall throughput of D2D communications.
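A hedged sketch of deferred-acceptance (Gale-Shapley) stable matching between D2D pairs and cellular users' channels; the preference lists are arbitrary examples rather than the paper's utility-based preferences:

# Gale-Shapley stable matching sketch: D2D pairs propose to cellular users (CUs),
# CUs keep their most preferred proposer. Preferences are illustrative assumptions.
def stable_match(proposer_prefs, reviewer_prefs):
    free = list(proposer_prefs)                 # D2D pairs still unmatched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                                # CU -> currently accepted D2D pair
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:  # CU prefers the new D2D pair
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)
    return {d: c for c, d in engaged.items()}

d2d_prefs = {"D1": ["C1", "C2"], "D2": ["C1", "C2"]}
cu_prefs  = {"C1": ["D2", "D1"], "C2": ["D1", "D2"]}
print(stable_match(d2d_prefs, cu_prefs))        # D1 -> C2, D2 -> C1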

92 citations


Journal ArticleDOI
TL;DR: This paper analyzes and classifies resource discovery techniques into four main categories: unstructured, structured, super-peer and hybrid, reviews the major developments in these four categories and outlines new challenges.
Abstract: Resource discovery is an important part of any distributed and resource sharing system, like Peer-to-Peer (P2P) networks. In this paper we provide a comprehensive study and survey of the state-of-the-art resource discovery techniques which have been used in P2P so far. We analyze and classify the resource discovery techniques into four main categories: unstructured, structured, super-peer and hybrid. We review the major developments in these four categories and outline new challenges. This paper also provides a discussion of the differences between the considered techniques in terms of scalability, dynamicity, reliability, load balancing, response time and robustness, in order to provide insights on the identification of open issues and provide guidelines for future research.

92 citations


Book ChapterDOI
13 Apr 2015
TL;DR: This work exploits a shared resource optimization technique called memory deduplication to mount a powerful known-ciphertext-only cache side-channel attack on a popular OpenSSL implementation of AES, working in a more realistic scenario with much weaker assumptions.
Abstract: The cloud's unrivaled cost effectiveness and on-the-fly operational versatility are attractive to enterprise and personal users. However, the cloud inherits a dangerous behavior from virtualization systems that poses a serious security risk: resource sharing. This work exploits a shared resource optimization technique called memory deduplication to mount a powerful known-ciphertext-only cache side-channel attack on a popular OpenSSL implementation of AES. In contrast to other cross-VM cache attacks, our attack does not require synchronization with the target server and is fully asynchronous, working in a more realistic scenario with much weaker assumptions. Also, our attack succeeds in just 15 seconds working across cores in the cross-VM setting. Our results show that there is strong information leakage through the cache in virtualized systems and that memory deduplication should be approached with caution.

81 citations


Journal ArticleDOI
TL;DR: A novel approach for dealing with random link failures is introduced, in which probabilistic survivability guarantees are provided to limit capacity over-provisioning, and a simulated annealing heuristic is proposed to solve the problem for large-scale networks.
Abstract: This paper presents a scheme in which a dedicated backup network is designed to provide protection from random link failures. Upon a link failure in the primary network, traffic is rerouted through a preplanned path in the backup network. We introduce a novel approach for dealing with random link failures, in which probabilistic survivability guarantees are provided to limit capacity overprovisioning. We show that the optimal backup routing strategy in this respect depends on the reliability of the primary network. Specifically, as primary links become less likely to fail, the optimal backup networks employ more resource sharing among backup paths. We apply results from the field of robust optimization to formulate an ILP for the design and capacity provisioning of these backup networks. We then propose a simulated annealing heuristic to solve this problem for large-scale networks and present simulation results that verify our analysis and approach.
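A generic simulated-annealing skeleton of the kind such a heuristic could build on, shown under toy assumptions (a one-dimensional capacity variable with a feasibility penalty), not the authors' formulation:

# Generic simulated-annealing skeleton. "state" would encode backup routing choices and
# cost() would mix provisioned capacity with penalty terms; here both are toy stand-ins.
import math, random

def simulated_annealing(initial_state, cost, neighbor, t0=1.0, alpha=0.95, steps=1000):
    state, best = initial_state, initial_state
    t = t0
    for _ in range(steps):
        cand = neighbor(state)
        delta = cost(cand) - cost(state)
        if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-9)):
            state = cand                    # accept improving or occasionally worsening moves
            if cost(state) < cost(best):
                best = state
        t *= alpha                          # cool down
    return best

# Toy usage: minimize a capacity variable with a feasibility floor at 3 units.
cost = lambda x: x + (100 if x < 3 else 0)
neighbor = lambda x: max(0, x + random.choice([-1, 1]))
print(simulated_annealing(10, cost, neighbor))   # typically settles at the floor of 3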

66 citations


Journal ArticleDOI
TL;DR: This work develops a coalition formation process based on the switch rule, through which each cellular user makes an individual and distributed decision to form a Nash-stable partition, and presents a dynamic resource sharing algorithm, which iteratively operates the coalition formation and power control processes.
Abstract: In this work, we propose a dynamic distributed resource sharing scheme which jointly considers mode selection, resource allocation, and power control in a unified framework for general D2D communications. First, we model the joint issue of mode selection and resource allocation as a hedonic coalition formation game, while accounting for the tradeoff between the benefits in terms of available rate and the costs in terms of the mutual interference. Moreover, we develop a coalition formation process based on the switch rule, through which each cellular user makes an individual and distributed decision to form a Nash-stable partition. Second, we view the members of each coalition as a whole, and formulate a power control problem to share the aim of maximizing the sum-rate of cellular links in this coalition. To solve this NP-hard problem with online operation, we present a power control process, which employs the local piecewise-linear approach to take a locally and separately approximate optimal outcome. Finally, we present a dynamic resource sharing algorithm, which iteratively operates the coalition formation and power control processes.
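A hedged sketch of a switch-rule loop: each user repeatedly switches to the coalition that maximizes its own utility until no beneficial switch remains, yielding a Nash-stable partition; the utility function below is a stand-in, not the paper's rate/interference model:

# Switch-rule coalition formation sketch. utility(user, coalition, membership) is an
# assumed placeholder for the rate-minus-interference payoff used in the paper.
def switch_rule_partition(users, coalitions, utility):
    membership = {u: coalitions[0] for u in users}        # start everyone in one coalition
    changed = True
    while changed:
        changed = False
        for u in users:
            current = membership[u]
            best = max(coalitions, key=lambda c: utility(u, c, membership))
            if utility(u, best, membership) > utility(u, current, membership):
                membership[u] = best                       # perform the switch
                changed = True
    return membership

# Toy utility: users prefer the coalition matching their parity, minus a crowding cost.
users, coalitions = [0, 1, 2, 3], ["A", "B"]
def utility(u, c, membership):
    crowd = sum(1 for v, cv in membership.items() if cv == c and v != u)
    return (2 if (u % 2 == 0) == (c == "A") else 0) - 0.5 * crowd
print(switch_rule_partition(users, coalitions, utility))   # {0: 'A', 1: 'B', 2: 'A', 3: 'B'}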

66 citations


Patent
06 Mar 2015
Abstract: Disclosed are an apparatus and method that enables an owner/administrator to manage access to a shared resource based on identity that is established by use of biometric data. For example, access to a shared physical resource can be restricted via use of a biometric locking device. An access management platform can be used to authorize a new user to access the shared resource. Once authorized, the new user can unlock the biometric locking device based on, for example, fingerprint data of his finger. The access management platform can similarly be used to manage access to a virtual shared resource, such as an online account. A virtual locking device, such as a computer that acts as an intermediary between the user and the online account, can be used to restrict access to the online account. The access management platform can enable the user to access the online account based on biometric data.

61 citations


Journal ArticleDOI
TL;DR: This work proposes a social-aware D2D resource sharing scheme that exploits social network properties of community and centrality for the new system design paradigm, which significantly improves the system performance compared to the existing schemes.
Abstract: Device-to-device (D2D) communication is a vital component for the next generation cellular network to bring hop gains, improve spectral reuse, and enhance system capacity. These benefits depend on efficiently solving several technical problems, among which resource allocation that shares spectrum resources between cellular users and D2D pairs is critically challenging. We propose a social-aware D2D resource sharing scheme that exploits social network properties of community and centrality for the new system design paradigm. Extensive simulations with realistic network settings demonstrate the effectiveness of our proposed scheme, which significantly improves the system performance compared to the existing schemes.

52 citations


Proceedings ArticleDOI
09 Mar 2015
TL;DR: This paper proposes XChange, a novel CMP resource allocation mechanism that delivers scalable high throughput and fairness, and shows that XChange is significantly more scalable than the state-of-the-art centralized allocation scheme the authors compare against.
Abstract: Efficiently allocating shared on-chip resources across cores is critical to optimize execution in chip multiprocessors (CMPs). Techniques proposed in the literature often rely on global, centralized mechanisms that seek to maximize system throughput. Global optimization may hurt scalability: as more cores are integrated on a die, the search space grows exponentially, making it harder to achieve optimal or even acceptable operating points at run-time without incurring significant overheads. In this paper, we propose XChange, a novel CMP resource allocation mechanism that delivers scalable high throughput and fairness. Through XChange, the CMP functions as a market, where each shared resource is assigned a price which changes over time, and each core seeks to maximize its own utility, by bidding for these shared resources. Because each core works largely independently, the resource allocation becomes a scalable, mostly distributed decision-making process. In addition, by distributing the resources proportionally to the bids, the system avoids unfairness, treating each core in an unbiased manner. Our evaluation shows that, using detailed simulations of a 64-core CMP configuration running a variety of multiprogrammed workloads, the proposed XChange mechanism improves system throughput (weighted speedup) by about 21% on average, and fairness (harmonic speedup) by about 24% on average, compared with equal-share on-chip cache and power distribution. On both metrics, that is at least about twice as much improvement over equal-share as a state-of-the-art centralized allocation scheme. Furthermore, our results show that XChange is significantly more scalable than the state-of-the-art centralized allocation scheme we compare against.
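An assumption-level sketch of the market idea described above, not the XChange implementation: each resource is divided among cores in proportion to the bids placed on it, with bids and capacities invented for illustration:

# Proportional (bid-based) allocation of a shared resource among cores.
# Bids and capacities are illustrative assumptions.
def proportional_allocation(bids, capacity):
    """bids: {core: amount bid on this resource}; returns {core: share of capacity}."""
    total = sum(bids.values())
    if total == 0:
        return {core: capacity / len(bids) for core in bids}   # fall back to equal share
    return {core: capacity * b / total for core, b in bids.items()}

cache_bids = {"core0": 3.0, "core1": 1.0}     # bids for cache ways
power_bids = {"core0": 1.0, "core1": 1.0}     # bids for the power budget
print(proportional_allocation(cache_bids, capacity=16))  # core0 gets 12 ways, core1 gets 4
print(proportional_allocation(power_bids, capacity=60))  # 30 W each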

Proceedings ArticleDOI
13 Apr 2015
TL;DR: This paper proposes a general model for capturing security constraints between tasks in a real-time system, expands upon a mechanism to enforce these constraints, viz. cleaning up of shared resource state, and provides schedulability conditions based on fixed-priority scheduling with both preemptive and non-preemptive tasks.
Abstract: Traditionally, real-time systems and security have been considered as separate domains. Recent attacks on various systems with real-time properties have shown the need for a redesign of such systems to include security as a first-class principle. In this paper, we propose a general model for capturing security constraints between tasks in a real-time system. This model is then used in conjunction with real-time scheduling algorithms to prevent the leakage of information via storage channels on implicitly shared resources. We expand upon a mechanism to enforce these constraints, viz. cleaning up of shared resource state, and provide schedulability conditions based on fixed-priority scheduling with both preemptive and non-preemptive tasks. We perform extensive evaluations, both theoretical and experimental, the latter on a hardware-in-the-loop simulator of an unmanned aerial vehicle (UAV) that executes on a demonstration platform.

Journal ArticleDOI
TL;DR: A detailed analysis of the performance of the internal network of Amazon EC2 is presented, performed by adopting a non-cooperative experimental evaluation approach, to provide a quantitative assessment of the networking performance as a function of the several variables available, such as geographic region, resource price or size.

Patent
17 Feb 2015
TL;DR: In this paper, the authors present a system and methods for managing connected devices and associated network connections, where trust, privacy, safety, and security of information communicated between connected devices may be established in part through use of security associations and/or shared group tokens.
Abstract: This disclosure relates to systems and methods for managing connected devices and associated network connections. In certain embodiments, trust, privacy, safety, and/or security of information communicated between connected devices may be established in part through use of security associations and/or shared group tokens. In some embodiments, these security associations may be used to form an explicit private network associated with the user. A user may add and/or manage devices included in the explicit private network through management of various security associations associated with the network's constituent devices.

Journal ArticleDOI
Vivek Narasayya, Ishai Menache, Mohit Singh, Feng Li, Manoj Syamala, Surajit Chaudhuri
01 Feb 2015
TL;DR: An SLA framework is developed that defines and enforces accountability of the service provider to the tenant even when buffer pool memory is not statically reserved on behalf of the tenant, and a novel buffer pool page replacement algorithm (MT-LRU) is presented that builds upon theoretical concepts from weighted online caching.
Abstract: Relational database-as-a-service (DaaS) providers need to rely on multi-tenancy and resource sharing among tenants, since statically reserving resources for a tenant is not cost effective. A major consequence of resource sharing is that the performance of one tenant can be adversely affected by resource demands of other co-located tenants. One such resource that is essential for good performance of a tenant's workload is buffer pool memory. In this paper, we study the problem of how to effectively share buffer pool memory in multi-tenant relational DaaS. We first develop an SLA framework that defines and enforces accountability of the service provider to the tenant even when buffer pool memory is not statically reserved on behalf of the tenant. Next, we present a novel buffer pool page replacement algorithm (MT-LRU) that builds upon theoretical concepts from weighted online caching, and is designed for multi-tenant scenarios involving SLAs and overbooking. MT-LRU generalizes the LRU-K algorithm which is commonly used in relational database systems. We have prototyped our techniques inside a commercial DaaS engine and extensive experiments demonstrate the effectiveness of our solution.
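A hedged sketch of a tenant-weighted eviction policy loosely in the spirit of weighted online caching (not the actual MT-LRU algorithm); the weight-times-recency score and the SLA weights are assumptions:

# Tenant-weighted eviction sketch: each cached page carries its tenant's SLA-derived
# weight, and eviction removes the page with the lowest weight-times-recency score.
import itertools

class WeightedLRU:
    def __init__(self, capacity, tenant_weight):
        self.capacity = capacity
        self.tenant_weight = tenant_weight    # {tenant: weight}, assumed given by the SLA layer
        self.clock = itertools.count()
        self.pages = {}                       # page -> (tenant, last_use_tick)

    def access(self, page, tenant):
        if page not in self.pages and len(self.pages) >= self.capacity:
            victim = min(self.pages,
                         key=lambda p: self.tenant_weight[self.pages[p][0]] * self.pages[p][1])
            del self.pages[victim]            # evict the lowest-scoring page
        self.pages[page] = (tenant, next(self.clock) + 1)

pool = WeightedLRU(capacity=2, tenant_weight={"gold": 3.0, "bronze": 1.0})
pool.access("p_gold", "gold"); pool.access("p_bronze", "bronze"); pool.access("p_new", "bronze")
print(sorted(pool.pages))   # the bronze page is evicted even though the gold page is older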

Proceedings ArticleDOI
01 Dec 2015
TL;DR: A novel distributed algorithm is proposed in this paper for a network of consumers coupled by energy resource sharing constraints, which aims at minimizing the aggregated electricity costs.
Abstract: A novel distributed algorithm is proposed in this paper for a network of consumers coupled by energy resource sharing constraints, which aims at minimizing the aggregated electricity costs. Each consumer is equipped with an energy management system that schedules the shiftable loads accounting for user preferences, while an aggregator entity coordinates the consumers' demand and manages the interaction with the grid and the shared energy storage system (ESS) via a distributed strategy. The proposed distributed coordination algorithm requires the computation of Mixed Integer Linear Programs (MILPs) at each iteration. The proposed approach guarantees constraint satisfaction, cooperation among consumers, and fairness in the use of the shared resources among consumers. The strategy requires limited message exchange between each consumer and the aggregator, and no messaging among the consumers, which protects consumers' privacy. Performance of the proposed distributed algorithm in comparison with a centralized one is illustrated using numerical experiments.

Journal ArticleDOI
TL;DR: The results show that the introduced sharing approach controls inventory and resource utilization, but it can cause shifts and fluctuations in performance depending on the information that is exchanged.

Journal ArticleDOI
TL;DR: A Nash bargaining game approach is presented to process the resource trading activity among cloud service providers in cloud-based SDWNs and indicates that cooperation is able to generate more benefits than competition.
Abstract: Software-defined wireless networking (SDWN) is an emerging paradigm in the era of the Internet of Things. In cloud-based SDWNs, resource management is separated from the geo-distributed cloud, forming a virtual network topology in the control plane. Thus, a centralized software program is able to control and program the behavior of the entire network. In this article, we focus on resource management in cloud-based SDWNs, and discuss the competition and cooperation between cloud service providers. We present a Nash bargaining game approach to process the resource trading activity among cloud service providers in cloud-based SDWNs. Utility functions have been specifically considered to incorporate operation cost and resource utilization. Illustrative results indicate that cooperation is able to generate more benefits than competition. Moreover, resource sharing among cloud service providers has great significance in efficiently utilizing limited resources and improving quality of service.
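An illustrative two-provider Nash bargaining computation under assumed utility functions and disagreement payoffs (not the paper's model): the resource split maximizing the product of utility gains is found by simple grid search:

# Two-provider Nash bargaining sketch: split a shared resource pool so as to maximize
# the product of each provider's utility gain over its non-cooperation payoff.
import math

TOTAL = 10.0                                   # shared resource units to split
u1 = lambda x: math.log(1 + x)                 # provider 1 utility (assumed)
u2 = lambda x: 2 * math.log(1 + x)             # provider 2 utility (assumed)
d1, d2 = u1(3.0), u2(3.0)                      # disagreement point: 3 units each alone

best_split, best_product = None, -1.0
for step in range(1, 1000):
    x = TOTAL * step / 1000.0                  # provider 1 gets x, provider 2 gets TOTAL - x
    gain1, gain2 = u1(x) - d1, u2(TOTAL - x) - d2
    if gain1 > 0 and gain2 > 0 and gain1 * gain2 > best_product:
        best_split, best_product = (x, TOTAL - x), gain1 * gain2

print(best_split)   # bargaining outcome; both providers do better than acting alone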

Journal ArticleDOI
TL;DR: Four algorithms are proposed to address co- primary multi-operator radio resource sharing under heterogeneous traffic in both centralized and distributed scenarios and demonstrate the importance of coordination among co-primary operators for an optimal resource sharing.
Abstract: To tackle the challenge of providing higher data rates within limited spectral resources we consider the case of multiple operators sharing a common pool of radio resources. Four algorithms are proposed to address co-primary multi-operator radio resource sharing under heterogeneous traffic in both centralized and distributed scenarios. The performance of these algorithms is assessed through extensive system-level simulations for two indoor small cell layouts. It is assumed that the spectral allocations of the small cells are orthogonal to the macro network layer and thus, only the small cell traffic is modeled. The main performance metrics are user throughput and the relative amount of shared spectral resources. The numerical results demonstrate the importance of coordination among co-primary operators for an optimal resource sharing. Also, maximizing the spectrum sharing percentage generally improves the achievable throughput gains over non-sharing.

Proceedings ArticleDOI
08 Jun 2015
TL;DR: This work analyzes and transforms the V2X latency and reliability requirements into mathematical forms that are computable using only slowly varying channel information, and proposes a problem formulation fulfilling the requirements of V2x, where resource sharing can take place not only between vehicles and cellular users but also among different vehicles.
Abstract: Deploying direct device-to-device (D2D) links is considered an enabler for V2X applications, with intra-cell interference and stringent latency and reliability requirements as challenging issues. We investigate the radio resource management problem for D2D-based safety-critical V2X communications. Firstly, we analyze and transform the V2X latency and reliability requirements into mathematical forms that are computable using only slowly varying channel information. Secondly, we propose a problem formulation fulfilling the requirements of V2X, where resource sharing can take place not only between vehicles and cellular users but also among different vehicles. Moreover, a Resource Block Sharing and Power Allocation (RBSPA) algorithm is proposed to solve this problem. Finally, simulations are presented that indicate promising performance of the proposed RBSPA scheme.

Patent
01 Apr 2015
TL;DR: Methods and systems are presented by which a first node in a vehicle-to-vehicle ad hoc network can claim a communication channel based on its position relative to that of a second node, where relative positions are determined by each node's distance to a reference location.
Abstract: Methods and systems relating to de-centralized communication resource sharing and access for mobile nodes, such as vehicles, in a vehicle to vehicle ad hoc network are provided. A method includes receiving, at a first node, information indicating a position of a second node in the network. The first node may claim a communication channel in the network based on a position of the first node relative to the position of the second node. The relative positions of the nodes may be based on the distance of each node to a reference location. The nodes may be in a first zone in a virtual grid in the network, and the claimed communication channels may be channels of the first zone. Channels from other zones may also be claimed by nodes in the first zone as secondary channels.

Journal ArticleDOI
TL;DR: A new concept of resource for file replication, which considers both node storage and meeting frequency, is introduced, and a distributed file replication protocol is proposed to realize the proposed rule.
Abstract: File sharing applications in mobile ad hoc networks (MANETs) have attracted more and more attention in recent years. The efficiency of file querying suffers from the distinctive properties of such networks including node mobility and limited communication range and resource. An intuitive method to alleviate this problem is to create file replicas in the network. However, despite the efforts on file replication, no research has focused on the global optimal replica creation with minimum average querying delay. Specifically, current file replication protocols in mobile ad hoc networks have two shortcomings. First, they lack a rule to allocate limited resources to different files in order to minimize the average querying delay. Second, they simply consider storage as available resources for replicas, but neglect the fact that the file holders’ frequency of meeting other nodes also plays an important role in determining file availability. Actually, a node that has a higher meeting frequency with others provides higher availability to its files. This becomes even more evident in sparsely distributed MANETs, in which nodes meet disruptively. In this paper, we introduce a new concept of resource for file replication, which considers both node storage and meeting frequency. We theoretically study the influence of resource allocation on the average querying delay and derive a resource allocation rule to minimize the average querying delay. We further propose a distributed file replication protocol to realize the proposed rule. Extensive trace-driven experiments with synthesized traces and real traces show that our protocol can achieve shorter average querying delay at a lower cost than current replication protocols.
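A hedged illustration of the combined resource idea above: here a node's replication resource is taken as free storage times meeting frequency (an assumed combination, not the paper's derived allocation rule), and more popular files are greedily replicated on higher-resource nodes:

# Greedy replica placement sketch: rank nodes by storage * meeting frequency and
# replicate the most-queried files first. All numbers are illustrative assumptions.
nodes = {                      # node -> [free storage slots, meetings per hour]
    "n1": [2, 10.0],
    "n2": [3, 1.0],
    "n3": [1, 6.0],
}
query_rate = {"fileA": 50, "fileB": 20, "fileC": 5}      # illustrative popularity

def resource(node):
    slots, freq = nodes[node]
    return slots * freq       # combined storage-and-availability measure

for f in sorted(query_rate, key=query_rate.get, reverse=True):
    candidates = [n for n in nodes if nodes[n][0] > 0]
    if not candidates:
        break
    target = max(candidates, key=resource)
    nodes[target][0] -= 1                                # consume one storage slot
    print(f"replicate {f} on {target}")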

Journal ArticleDOI
TL;DR: A novel particle swarm optimization-based hyper-heuristic resource scheduling algorithm has been designed and used to schedule jobs effectively on available resources without violating any of the security norms.
Abstract: Grid computing, being immensely based on the concept of resource sharing, has always been closely associated with many challenges. The growth of resource provisioning-based scheduling in large-scale distributed environments like Grid computing brings in new requirement challenges that are not considered in traditional distributed computing environments. Resources being the backbone of the system, their efficient management plays quite an important role in its execution environment. Many constraints, such as the heterogeneity and dynamic nature of resources, need to be taken care of as steps toward managing Grid resources efficiently. The most important challenge in Grids is the job-resource mapping as per the users' requirements in the most secure way. The mapping of jobs to appropriate resources for execution of applications in Grid computing is found to be an NP-complete problem. A novel algorithm is required to schedule the jobs on the resources to provide reduced execution time, increased security and reliability. The main aim of this paper is to present an efficient strategy for secure scheduling of jobs on appropriate resources. A novel particle swarm optimization-based hyper-heuristic resource scheduling algorithm has been designed and used to schedule jobs effectively on available resources without violating any of the security norms. Performance of the proposed algorithm has also been evaluated through the GridSim toolkit. We have compared our resource scheduling algorithm with existing common heuristic-based scheduling algorithms experimentally. The results thus obtained show a better performance by our algorithm than the existing algorithms, in terms of lower cost and makespan of users' applications submitted to the Grid.

Journal ArticleDOI
TL;DR: A VM resource allocation scheme with minimized processing overhead for task execution is devised; in a competitive situation, combining LWF with both AAPSM and RAPSM outperforms other solutions.
Abstract: By leveraging virtual machine (VM) technology, we optimize cloud system performance based on refined resource allocation, in processing user requests with composite services. Our contribution is three-fold. (1) We devise a VM resource allocation scheme with a minimized processing overhead for task execution. (2) We comprehensively investigate the best-suited task scheduling policy with different design parameters. (3) We also explore the best-suited resource sharing scheme with adjusted divisible resource fractions on running tasks in terms of the Proportional-Share Model (PSM), which can be split into an absolute mode (called AAPSM) and a relative mode (RAPSM). We implement a prototype system over a cluster environment deployed with 56 real VM instances, and summarize valuable experience from our evaluation. When the system runs in short supply, lightest workload first (LWF) is mostly recommended because it can minimize the overall response extension ratio (RER) for both sequential-mode tasks and parallel-mode tasks. In a competitive situation with over-commitment of resources, the best policy is combining LWF with both AAPSM and RAPSM. It outperforms other solutions in the competitive situation, by 16+% w.r.t. the worst-case response time and by 7.4+% w.r.t. fairness.

Proceedings ArticleDOI
01 Sep 2015
TL;DR: It is shown that the impact of virtualization overhead depends on the workloads, and that virtualization overhead is an important factor to consider in cloud resource provisioning, and the high accuracy of the model in predicting PM resource consumptions in the cloud datacenter is shown.
Abstract: Virtualization is a key technology for cloud data centers to implement infrastructure as a service (IaaS) and to provide flexible and cost-effective resource sharing. It introduces an additional layer of abstraction that produces resource utilization overhead. Disregarding this overhead may cause serious reduction of the monitoring accuracy of the cloud providers and may cause degradation of the VM performance. However, there is no previous work that comprehensively investigates the virtualization overhead. In this paper, we comprehensively measure and study the relationship between the resource utilizations of virtual machines (VMs) and the resource utilizations of the device driver domain, hypervisor and the physical machine (PM) with diverse workloads and scenarios in the Xen virtualization environment. We examine data from the real-world virtualized deployment to characterize VM workloads and assess their impact on the resource utilizations in the system. We show that the impact of virtualization overhead depends on the workloads, and that virtualization overhead is an important factor to consider in cloud resource provisioning. Based on the measurements, we build a regression model to estimate the resource utilization overhead of the PM resulting from providing virtualized resource to the VMs and from managing multiple VMs. Finally, our trace-driven real-world experimental results show the high accuracy of our model in predicting PM resource consumptions in the cloud datacenter, and the importance of considering the virtualization overhead in cloud resource provisioning.
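An assumption-level sketch of the kind of regression model described above: predicting physical-machine CPU utilization from aggregate VM utilization plus a per-VM management term, fitted by ordinary least squares on synthetic numbers (not measurements from the paper):

# Fit PM_util ~ a * vm_util_sum + b * n_vms + c on synthetic data with ordinary least squares.
import numpy as np

# Columns: [sum of VM CPU utilization (%), number of co-located VMs]
X = np.array([[10, 1], [20, 2], [35, 3], [50, 4], [70, 5]], dtype=float)
y = np.array([13, 26, 44, 62, 85], dtype=float)   # observed PM CPU utilization (%), synthetic

A = np.hstack([X, np.ones((len(X), 1))])          # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
per_util, per_vm, intercept = coef

print(f"PM_util ~ {per_util:.2f} * vm_util_sum + {per_vm:.2f} * n_vms + {intercept:.2f}")
print("prediction for 40% total VM load on 3 VMs:", np.array([40.0, 3.0, 1.0]) @ coef)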

Journal ArticleDOI
TL;DR: A method to properly choose a cellular user that shares radio resource with D2D users in the uplink to mitigate the interference from the cellular user to the D2D receivers is proposed, and results show that by applying this method the reliability of D2D communication improves significantly without degrading the performance of the cellular connection.
Abstract: Recently, there has been growing interest in device-to-device (D2D) communications in a cellular network such as LTE-advanced. However, enabling D2D links in a cellular network presents a challenge in radio resource management due to the interference between cellular and D2D links. Some studies have considered cellular users as the primary and proposed methods to protect them from the additional interference from D2D links. However, considering that the D2D function is suitable for short-range and high-rate links, and local multimedia services, it is also important to guarantee these D2D links reliable. Thus, in this paper, we propose a method to properly choose a cellular user that shares radio resource with D2D users in the uplink to mitigate the interference from the cellular user to the D2D receivers. Numerical results show that by applying our method the reliability of D2D communication improves significantly without degrading the performance of the cellular connection. In addition, we derive a closed-form expression for the conditional outage probabilities of D2D links in the case when more than one D2D pair share the same resource with one cellular user, and discuss how the base station can choose a cellular user to optimize the performance of the D2D links.
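A hedged sketch of the selection idea above, not the paper's exact criterion: among uplink cellular users, choose the one whose estimated interference at the D2D receiver is smallest, using a simple distance-based path-loss proxy with made-up positions and parameters:

# Pick the cellular user (CU) whose uplink transmission interferes least with the D2D receiver,
# estimated here with a crude distance-based path-loss model. All values are assumptions.
cu_positions = {"C1": (10.0, 5.0), "C2": (120.0, 80.0), "C3": (60.0, 200.0)}
d2d_rx = (15.0, 10.0)
CU_TX_POWER = 0.2          # watts, assumed equal for all cellular users
PATHLOSS_EXP = 3.5

def interference_at_d2d(cu):
    x, y = cu_positions[cu]
    dist = ((x - d2d_rx[0]) ** 2 + (y - d2d_rx[1]) ** 2) ** 0.5
    return CU_TX_POWER * dist ** (-PATHLOSS_EXP)

chosen = min(cu_positions, key=interference_at_d2d)
print("share uplink resource with", chosen)     # the CU farthest from the D2D receiver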

Proceedings Article
08 Jul 2015
TL;DR: SAM as discussed by the authors is a sharing-aware Mapper that uses the aggregated coherence and bandwidth event counts to separate traffic caused by data sharing from that due to memory accesses.
Abstract: Modern multicore platforms suffer from inefficiencies due to contention and communication caused by sharing resources or accessing shared data. In this paper, we demonstrate that information from low-cost hardware performance counters commonly available on modern processors is sufficient to identify and separate the causes of communication traffic and performance degradation. We have developed SAM, a Sharing-Aware Mapper that uses the aggregated coherence and bandwidth event counts to separate traffic caused by data sharing from that due to memory accesses. When these counts exceed pre-determined thresholds, SAM effects task-to-core assignments that colocate tasks that share data and distribute tasks with high demand for cache capacity and memory bandwidth. Our new mapping policies automatically improve execution speed by up to 72% for individual parallel applications compared to the default Linux scheduler, while reducing performance disparities across applications in multiprogrammed workloads.
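A hedged sketch of a sharing-aware placement policy in the spirit of the description above (not SAM itself): task pairs whose coherence-event counts exceed a threshold are co-located on one socket, and remaining tasks are spread to distribute bandwidth demand; counters and thresholds are invented:

# Sharing-aware placement sketch driven by (invented) coherence-event counts.
COHERENCE_THRESHOLD = 1_000_000

coherence = {("t0", "t1"): 5_000_000, ("t2", "t3"): 200}   # sampled event counts per task pair
tasks = ["t0", "t1", "t2", "t3"]
sockets = {0: [], 1: []}

placed = set()
for (a, b), count in coherence.items():
    if count > COHERENCE_THRESHOLD:                 # heavy sharing: keep the pair on one socket
        target = min(sockets, key=lambda s: len(sockets[s]))
        sockets[target] += [a, b]
        placed.update([a, b])

for t in tasks:                                     # spread the rest to balance bandwidth demand
    if t not in placed:
        target = min(sockets, key=lambda s: len(sockets[s]))
        sockets[target].append(t)

print(sockets)   # {0: ['t0', 't1'], 1: ['t2', 't3']}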

Journal ArticleDOI
TL;DR: The COMBO European project is studying architectural options for structural convergence leveraging on the aforementioned technological triggers and in particular on the evolution of optical access and aggregation technologies, described and analyzed with respect to anticipated needs for integrated fixed and mobile networks.
Abstract: The current level of pooling and sharing between fixed and mobile infrastructures is not sufficient to allow the most efficient use of network resources, whether fixed, mobile, or Wi-Fi. In the perspective of 5G networks, efficient resource sharing of fixed and mobile infrastructures, called structural convergence, will be essential in the path toward an integrated fixed and mobile network. This structural convergence is being triggered by different architecture trends, e.g., baseband unit hostelling and mobile fronthaul technologies, heterogeneous radio access networks, and, most importantly, a unified optical access and aggregation network. The COMBO European project is studying architectural options for structural convergence leveraging on the aforementioned technological triggers and in particular on the evolution of optical access and aggregation technologies. These architectural options are described and analyzed with respect to anticipated needs for integrated fixed and mobile networks.

Journal ArticleDOI
TL;DR: A distributed approach aimed at improving the quality of service in dynamic grid federations is presented, in which a convenience measure combining trust and historical behaviors is exploited to design a fully decentralized, greedy procedure for controlling the grid formation process.
Abstract: In this paper, a distributed approach aimed at improving the quality of service in dynamic grid federations is presented. Virtual organizations (VOs) are grouped into large-scale federations in which the original goals and scheduling mechanisms are left unchanged, while grid nodes can be quickly instructed to join or leave any VO at any time. Moreover, an agent-oriented framework is designed to observe and characterize past behaviors of nodes in terms of resource sharing and consumption, as well as to determine the trust relationships occurring between each pair of nodes. By combining trust and historical behaviors into a unified convenience measure, software agents are able to evaluate (i) the advantages of a node's membership with VOs and (ii) whether a specific set of nodes is able to meet the actual requirements, in terms of resource sharing and consumption, of a specific VO. The convenience measure has been exploited to design a fully decentralized, greedy procedure aimed at controlling the grid formation process. Extensive simulations have shown that the coordinated and decentralized process of grid formation provides a powerful means to improve the overall quality of service of the grid federation.
