scispace - formally typeset
Topic

Service-level agreement

About: Service-level agreement is a research topic. Over the lifetime, 4358 publications have been published within this topic receiving 75333 citations. The topic is also known as: SLA.


Papers
Journal ArticleDOI
TL;DR: Simulation results indicate that this design increases application-level throughput of data applications such as large FTP transfers; achieves low packet delays and response times for Telnet and WWW traffic; and detects bandwidth theft attacks and service violations.
Abstract: Increased performance, fairness, and security remain important goals for service providers. In this work, we design an integrated distributed monitoring, traffic conditioning, and flow control system for higher performance and security of network domains. Edge routers monitor (using tomography techniques) a network domain to detect quality of service (QoS) violations--possibly caused by underprovisioning--as well as bandwidth theft attacks. To bound the monitoring overhead, a router only verifies service level agreement (SLA) parameters such as delay, loss, and throughput when anomalies are detected. The marking component of the edge router uses TCP flow characteristics to protect 'fragile' flows. Edge routers may also regulate unresponsive flows, and may propagate congestion information to upstream domains. Simulation results indicate that this design increases application-level throughput of data applications such as large FTP transfers; achieves low packet delays and response times for Telnet and WWW traffic; and detects bandwidth theft attacks and service violations.

17 citations
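The paper above bounds monitoring overhead by verifying SLA parameters only when an anomaly is detected. A minimal sketch of that anomaly-triggered check, with illustrative thresholds and a simple mean-based anomaly test (both assumptions, not the paper's tomography method):

```python
# Sketch: anomaly-triggered SLA verification at an edge router.
# The SLA bounds, baseline, and anomaly factor are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SLA:
    max_delay_ms: float
    max_loss_rate: float
    min_throughput_mbps: float

def anomaly_detected(delays_ms: list[float], baseline: float, factor: float = 2.0) -> bool:
    """Flag an anomaly when the recent mean delay exceeds the baseline by `factor`."""
    return sum(delays_ms) / len(delays_ms) > factor * baseline

def verify_sla(sla: SLA, delay_ms: float, loss: float, throughput: float) -> list[str]:
    """Full SLA check, run only when an anomaly is flagged, to bound overhead."""
    violations = []
    if delay_ms > sla.max_delay_ms:
        violations.append("delay")
    if loss > sla.max_loss_rate:
        violations.append("loss")
    if throughput < sla.min_throughput_mbps:
        violations.append("throughput")
    return violations

sla = SLA(max_delay_ms=50, max_loss_rate=0.01, min_throughput_mbps=10)
recent = [120.0, 130.0, 110.0]  # measured one-way delays (ms)
if anomaly_detected(recent, baseline=40.0):
    print(verify_sla(sla, delay_ms=120.0, loss=0.02, throughput=8.0))
```

The point of the gate is that the (cheap) anomaly test runs continuously while the (expensive) per-parameter SLA verification runs only on demand.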

Journal ArticleDOI
TL;DR: A distributed edge-to-edge model is proposed for monitoring service level agreement (SLA) violations and tracing abusive traffic to its origins; the model can trace user violations back to their source machines in real time.
Abstract: Bandwidth abuse is a critical Internet service violation. However, its origins are difficult to detect and trace given similarities between abusive and normal traffic. So far, there is no capable and scalable mechanism to deal with bandwidth abuse. This paper proposes a distributed edge-to-edge model for monitoring service level agreement (SLA) violations and tracing abusive traffic to its origins. The mechanism of policing misbehaving user traffic at a single random early detection (RED) gateway is used in the distributed monitoring of SLA violations, including violations carried out through several gateways. Each RED gateway reports misbehaving users who have been sent notifications of traffic policing to an SLA monitoring unit. Misbehaving users are considered suspicious users and their consumed bandwidth shares are aggregated at every gateway to be compared with SLA-specified ratios. Bandwidth is abused when SLA-specified ratios are exceeded. By reporting bandwidth abuse, illegitimate users can be isolated from legitimate ones and source hosts of abusive traffic may be traced. Approximate simulation results show that the proposed model can detect any SLA violation and identify abusive users. In addition, the proposed model can trace user violations back to their source machines in real time.

17 citations
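The core aggregation step described above — summing the bandwidth shares of suspicious (policed) users at a gateway and comparing them against SLA-specified ratios — can be sketched as follows; the user names, capacities, and the 10% ratio are illustrative assumptions:

```python
# Sketch: compare each suspicious user's bandwidth share at a gateway
# against an SLA-specified ratio; users above the ratio are flagged as abusers.
def detect_abuse(shares: dict[str, float], suspicious: set[str],
                 link_capacity: float, sla_ratio: float) -> list[str]:
    """Return suspicious users whose consumed share exceeds the SLA ratio."""
    abusers = []
    for user in suspicious:
        if shares.get(user, 0.0) / link_capacity > sla_ratio:
            abusers.append(user)
    return sorted(abusers)

shares = {"u1": 2.0, "u2": 15.0, "u3": 1.0}  # Mbps consumed at this gateway
print(detect_abuse(shares, {"u2", "u3"}, link_capacity=100.0, sla_ratio=0.10))
# u2 consumes 15% of capacity, above the assumed 10% SLA ratio
```

In the paper's model this check runs at every RED gateway, so a user spreading abusive traffic across several gateways is still caught once the per-gateway reports reach the SLA monitoring unit.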

Journal ArticleDOI
TL;DR: Some of the most commonly used scheduling algorithms for bag-of-tasks applications are enhanced by utilizing approximate computations, and the impact of different levels of variability in the computational demands of the applications on the performance of the examined heuristics is investigated.
Abstract: Software as a Service (SaaS) cloud computing has emerged as an attractive platform to tackle various problems of the traditional software distribution model, such as the requirement to acquire and maintain expensive hardware and software infrastructure. SaaS, however, involves many challenges, mainly due to the heterogeneity and multitenancy of the underlying host environment, as well as the nature of the applications executed on such platforms. Applications are usually bags-of-tasks, consisting of independent component tasks that can be executed in any order, featuring different degrees of variability in their computational demands. Furthermore, according to the service level agreement between the cloud provider and the end-users, the execution of such applications must typically complete within a deadline, providing results of acceptable quality. Consequently, one of the most important aspects of SaaS cloud computing is the effective scheduling of multiple parallel applications, avoiding any service level agreement violations. Towards this direction, our contribution in this paper is twofold: (1) We enhance some of the most commonly used scheduling algorithms for bag-of-tasks applications, by utilizing approximate computations, and (2) we investigate the impact of different levels of variability in the computational demands of the applications on the performance of the examined heuristics.

17 citations
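The idea of combining deadline-constrained scheduling with approximate computations can be sketched as below. The split of each task into a mandatory and an optional part is a common model for approximate computation, used here as an illustrative assumption rather than the paper's exact formulation:

```python
# Sketch: deadline-driven approximate computation for a bag-of-tasks.
# Each task = (mandatory_time, optional_time); the optional part is dropped
# when the remaining budget cannot cover full execution.
def schedule(tasks: list[tuple[float, float]], deadline: float) -> list[str]:
    """Greedy shortest-total-time-first order; returns per-task decisions."""
    t, decisions = 0.0, []
    for mandatory, optional in sorted(tasks, key=lambda x: x[0] + x[1]):
        if t + mandatory + optional <= deadline:
            t += mandatory + optional
            decisions.append("full")          # full-quality result
        elif t + mandatory <= deadline:
            t += mandatory
            decisions.append("approximate")   # degraded but acceptable quality
        else:
            decisions.append("rejected")      # would violate the SLA deadline
    return decisions

print(schedule([(2, 2), (3, 3), (1, 1)], deadline=9))
```

Degrading a task to its mandatory part trades result quality for deadline compliance, which matches the paper's goal of avoiding SLA violations while keeping results of acceptable quality.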

Journal ArticleDOI
TL;DR: A multi-objective optimization problem for job scheduling and VM placement is formulated with respect to parameters such as service level agreement (SLA), energy cost, carbon footprint rate (CFR), and availability of RES and is solved using an enhanced heuristic approach based on a greedy strategy.
Abstract: For a number of years, due to an exponential increase in the demand for an eco-friendly environment, there has been a rapid increase in the green city revolution across the globe. Subsequently, load shifting of major energy consumers from conventional power grids to renewable energy sources (RES) has become inevitable. Towards this end, cloud data centers (DCs) have emerged as significant consumers of energy that solely rely on power grids to fuel their day-to-day operations. Nevertheless, their energy consumption has increased significantly, which in turn has substantially raised the global carbon footprint rate. These challenges can be best addressed by the judicious utilization of RES, which have well-established advantages like reduced operational costs and carbon emissions. In view of the above facts, the ultimate goal of the proposed work is to design a comprehensive workload classification, job scheduling, and virtual machine placement architecture for cloud DCs powered by RES and power grids. For this, a multi-objective optimization scheme is proposed which operates in two phases. In phase I, a random forest-based wrapper scheme known as Boruta is used for relevant feature set selection for the incoming workload. This is followed by classification of the workload using a locality sensitive hashing-based support vector machines approach. In phase II, a multi-objective optimization problem for job scheduling and VM placement is formulated with respect to parameters such as service level agreement (SLA), energy cost, carbon footprint rate (CFR), and availability of RES. It is further solved using an enhanced heuristic approach based on a greedy strategy. Our experimental evaluations show an average improvement of approximately 31% in energy utilization, 28% in energy cost, and 36% in CFR, with a slight degradation in SLA assurance (about 2%) compared with the existing schemes.

17 citations
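A greedy strategy for the phase-II placement problem typically scores each candidate host by a weighted combination of the objectives and picks the best. The sketch below uses a simple weighted sum over normalized metrics; the metric names, weights, and scoring form are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch: greedy VM placement scoring hosts on energy cost, carbon rate,
# SLA risk (lower is better) and renewable share (higher is better).
def place_vm(hosts: list[dict], weights: dict[str, float]) -> str:
    def score(h: dict) -> float:
        return (weights["cost"] * h["energy_cost"]
                + weights["carbon"] * h["carbon_rate"]
                + weights["sla"] * h["sla_risk"]
                - weights["res"] * h["renewable_share"])
    return min(hosts, key=score)["name"]  # greedy: lowest penalty wins

hosts = [
    {"name": "dc1", "energy_cost": 0.9, "carbon_rate": 0.8, "sla_risk": 0.1, "renewable_share": 0.2},
    {"name": "dc2", "energy_cost": 0.5, "carbon_rate": 0.3, "sla_risk": 0.2, "renewable_share": 0.7},
]
weights = {"cost": 1.0, "carbon": 1.0, "sla": 2.0, "res": 1.0}
print(place_vm(hosts, weights))
```

Raising the SLA weight biases placement toward hosts with low SLA risk even when they are more carbon-intensive, which mirrors the trade-off the paper reports (better energy/CFR at a small SLA cost).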

Proceedings ArticleDOI
11 Jun 2006
TL;DR: A new concept, Quality of Resilience (QoR), presented in this paper is based on distinguishing, among the reliability-related Quality of Service (QoS) parameters, the short-term quality factors from the long-term quality factors.
Abstract: A new concept, Quality of Resilience (QoR), presented in this paper is based on distinguishing, among the reliability-related Quality of Service (QoS) parameters, the short-term quality factors from the long-term ones. The former parameters are called availability parameters and the latter QoR parameters. On the one hand, by dividing the service duration into time intervals, the service is considered available during an interval if the Service Level Agreement (SLA) between the user and the network operator is satisfied. On the other hand, the long-term characteristics of the service are derived from the service downtime distribution. With the downtime histograms, the asymptotic characteristics of the service can be represented at both the transport and service layers. Since the resilience mechanisms implemented in the network shape the transport-layer downtime histograms, this new characterization of QoS helps measure the impact of a given recovery scheme on next-generation services.

17 citations
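The paper's two views — short-term availability judged per interval against the SLA, and long-term QoR derived from the downtime distribution — can be sketched as follows; the interval metric (a delay bound) and bin width are illustrative assumptions:

```python
# Sketch: short-term availability per interval vs. long-term downtime histogram.
from collections import Counter

def interval_availability(delays_ms: list[float], sla_delay_ms: float) -> float:
    """Fraction of intervals in which the SLA (here: a delay bound) is satisfied."""
    ok = sum(1 for d in delays_ms if d <= sla_delay_ms)
    return ok / len(delays_ms)

def downtime_histogram(downtimes_s: list[float], bin_s: float) -> Counter:
    """Histogram of outage durations; its tail characterizes long-term QoR."""
    return Counter(int(d // bin_s) for d in downtimes_s)

print(interval_availability([10, 20, 80, 15], sla_delay_ms=50))  # 3 of 4 intervals meet the SLA
print(downtime_histogram([2.5, 12.0, 3.1, 47.0], bin_s=10.0))
```

Availability counts intervals, so many short outages and one long outage can yield the same availability figure; the histogram's tail is what separates them, which is the motivation for treating QoR parameters separately.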


Network Information
Related Topics (5)

Topic                     Papers    Citations   Relatedness
Server                    79.5K     1.4M        92%
Network packet            159.7K    2.2M        88%
Wireless network          122.5K    2.1M        88%
Wireless sensor network   142K      2.4M        88%
Scheduling (computing)    78.6K     1.3M        87%
Performance
Metrics
No. of papers in the topic in previous years

Year    Papers
2023    39
2022    106
2021    183
2020    233
2019    237
2018    255