Journal ArticleDOI

Multi-objective optimization of IT service availability and costs

01 Mar 2016 - Reliability Engineering & System Safety (Elsevier) - Vol. 147, pp. 142-155
TL;DR: A Petri net Monte Carlo simulation is developed that estimates the availability and costs of a specific design for an IT service redundancy allocation problem, and two meta-heuristics, namely a genetic algorithm and tabu search, are adapted.
About: This article is published in Reliability Engineering & System Safety. The article was published on 2016-03-01 and has received 30 citations to date. The article focuses on the topics: Service level objective & Service design.
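The paper itself estimates availability via a Petri net Monte Carlo simulation; as a hedged illustration of the underlying idea, the sketch below estimates the steady-state availability of a hypothetical k-out-of-n redundant design with exponentially distributed failure and repair times. All function names and parameters are illustrative, not taken from the paper.

```python
import random

def simulate_availability(n, k, mttf, mttr, horizon, runs=2000, seed=42):
    """Monte Carlo estimate of the availability of a k-out-of-n design.

    Each of the n replicas alternates between exponentially distributed
    up times (mean mttf) and repair times (mean mttr); the service is up
    whenever at least k replicas are up.  Illustrative only -- the paper
    uses a Petri net simulation with richer semantics (and cost tracking).
    """
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(runs):
        # Absolute time of each replica's next state change; all start up.
        t_next = [rng.expovariate(1.0 / mttf) for _ in range(n)]
        up = [True] * n
        t, service_up_time = 0.0, 0.0
        while t < horizon:
            i = min(range(n), key=lambda j: t_next[j])   # earliest event
            t_new = min(t_next[i], horizon)
            if sum(up) >= k:                 # service was up until this event
                service_up_time += t_new - t
            t = t_new
            if t >= horizon:
                break
            up[i] = not up[i]                # a failure or a repair completes
            mean = mttf if up[i] else mttr
            t_next[i] = t + rng.expovariate(1.0 / mean)
        up_total += service_up_time / horizon
    return up_total / runs
```

For example, `simulate_availability(n=3, k=2, mttf=1000.0, mttr=10.0, horizon=10000.0)` estimates the availability of a 2-out-of-3 design; attaching a cost per replica and per repair to the same trajectories is what turns this into the bi-objective (availability vs. cost) search space the paper's GA and tabu search explore.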
Citations
Journal ArticleDOI
TL;DR: An approach that combines reliability analyses and multi-criteria decision methods to optimize the maintenance activities of complex systems; applied to a real-world case study, it shows that the obtained results provide a significant driver for planning maintenance activities.

150 citations

23 Apr 2007
TL;DR: It is found that temperature and activity levels were much less correlated with drive failures than previously reported, and models based on SMART parameters alone are unlikely to be useful for predicting individual drive failures.
Abstract: It is estimated that over 90% of all new information produced in the world is being stored on magnetic media, most of it on hard disk drives. Despite their importance, there is relatively little published work on the failure patterns of disk drives, and the key factors that affect their lifetime. Most available data are either based on extrapolation from accelerated aging experiments or from relatively modest sized field studies. Moreover, larger population studies rarely have the infrastructure in place to collect health signals from components in operation, which is critical information for detailed failure analysis. We present data collected from detailed observations of a large disk drive population in a production Internet services deployment. The population observed is many times larger than that of previous studies. In addition to presenting failure statistics, we analyze the correlation between failures and several parameters generally believed to impact longevity. Our analysis identifies several parameters from the drive's self monitoring facility (SMART) that correlate highly with failures. Despite this high correlation, we conclude that models based on SMART parameters alone are unlikely to be useful for predicting individual drive failures. Surprisingly, we found that temperature and activity levels were much less correlated with drive failures than previously reported.

46 citations

Journal ArticleDOI
TL;DR: This paper uses stochastic models to calculate dependability-related metrics for different cloud infrastructures, and a Multiple-Criteria Decision-Making (MCDM) method to rank the best cloud infrastructures, taking customer service constraints such as reliability, downtime, and cost into consideration.
Abstract: Cloud computing is a paradigm that provides services through the Internet. The paradigm has been influenced by previously available technologies (for example cluster, peer-to-peer, and grid computing) and has now been adopted by almost all large organizations. Companies such as Google, Amazon, Microsoft and Facebook have made significant investments in cloud computing, and now provide services with high levels of dependability. The efficient and accurate assessment of cloud-based infrastructure is fundamental in guaranteeing both business continuity and uninterrupted public services, as much as is possible. This paper presents an approach for selecting cloud computing infrastructures, in terms of dependability and cost that best suits both company and customer needs. We use stochastic models to calculate dependability-related metrics for different cloud infrastructures. We then use a Multiple-Criteria Decision-Making (MCDM) method to rank the best cloud infrastructures, taking customer service constraints such as reliability, downtime, and cost into consideration. A case study demonstrates the practicability and usefulness of the proposed approach.
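The abstract does not name the specific MCDM method used to rank infrastructures; as a minimal stand-in, the sketch below ranks alternatives with simple weighted-sum scoring (SAW) over min-max normalized criteria. The criteria, weights, and infrastructure names are hypothetical.

```python
def rank_infrastructures(alternatives, weights, benefit):
    """Rank alternatives by a weighted-sum (SAW) score.

    alternatives: {name: [criterion values]}; weights: per-criterion
    weights summing to 1; benefit: per-criterion flag, True if larger is
    better (e.g. reliability), False if smaller is better (e.g. downtime,
    cost).  Min-max normalization maps each criterion onto [0, 1].
    Illustrative stand-in for the unspecified MCDM method in the paper.
    """
    names = list(alternatives)
    m = len(weights)
    cols = [[alternatives[n][j] for n in names] for j in range(m)]
    scores = {}
    for n in names:
        s = 0.0
        for j in range(m):
            lo, hi = min(cols[j]), max(cols[j])
            x = 0.5 if hi == lo else (alternatives[n][j] - lo) / (hi - lo)
            s += weights[j] * (x if benefit[j] else 1.0 - x)  # invert cost-type criteria
        scores[n] = s
    return sorted(names, key=scores.get, reverse=True)
```

Example usage with three hypothetical infrastructures scored on reliability, downtime (h/yr), and cost: `rank_infrastructures({"A": [0.999, 5.0, 100.0], "B": [0.99, 1.0, 80.0], "C": [0.95, 20.0, 50.0]}, weights=[0.5, 0.3, 0.2], benefit=[True, False, False])`.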

35 citations


Cites background from "Multi-objective optimization of IT ..."

  • ...Parameters such as reliability, capacity-oriented availability and cost are relevant factors in the negotiation of such services [6, 10]....


Journal ArticleDOI
TL;DR: A general model for multi-state deteriorating systems with condition-based preventive maintenance is introduced, and multi-objective optimization problems are formulated and solved in order to identify preventive maintenance policies that simultaneously optimize both dependability and performance measures.

27 citations


Additional excerpts

  • ...Thus, multi-objective optimization methods are required [28,2,13]....


Journal ArticleDOI
Chun Su1, Yang Liu1
TL;DR: A multi-objective maintenance optimisation (MOMO) model is proposed for electromechanical products, where both the soft failure and hard failure are considered, and minimal repair is performed accordingly.
Abstract: Maintenance optimisation is a multi-objective problem in nature, and it usually needs to achieve a trade-off among the conflicting objectives. In this study, a multi-objective maintenance optimisat...

24 citations


Cites methods from "Multi-objective optimization of IT ..."

  • ...Furthermore, Bosse, Splieth, and Turowski (2016) used a genetic algorithm (GA) and tabu search algorithm to optimise the system’s availability and cost....


References
Journal ArticleDOI
TL;DR: The objective is to describe the performance of design-science research in Information Systems via a concise conceptual framework and clear guidelines for understanding, executing, and evaluating the research.
Abstract: Two paradigms characterize much of the research in the Information Systems discipline: behavioral science and design science. The behavioral-science paradigm seeks to develop and verify theories that explain or predict human or organizational behavior. The design-science paradigm seeks to extend the boundaries of human and organizational capabilities by creating new and innovative artifacts. Both paradigms are foundational to the IS discipline, positioned as it is at the confluence of people, organizations, and technology. Our objective is to describe the performance of design-science research in Information Systems via a concise conceptual framework and clear guidelines for understanding, executing, and evaluating the research. In the design-science paradigm, knowledge and understanding of a problem domain and its solution are achieved in the building and application of the designed artifact. Three recent exemplars in the research literature are used to demonstrate the application of these guidelines. We conclude with an analysis of the challenges of performing high-quality design-science research in the context of the broader IS community.

10,264 citations

Journal ArticleDOI
TL;DR: The designed methodology effectively satisfies the three objectives of a design science research methodology and has the potential to aid the acceptance of DS research in the IS discipline.
Abstract: The paper motivates, presents, demonstrates in use, and evaluates a methodology for conducting design science (DS) research in information systems (IS). DS is of importance in a discipline oriented to the creation of successful artifacts. Several researchers have pioneered DS research in IS, yet over the past 15 years, little DS research has been done within the discipline. The lack of a methodology to serve as a commonly accepted framework for DS research and of a template for its presentation may have contributed to its slow adoption. The design science research methodology (DSRM) presented here incorporates principles, practices, and procedures required to carry out such research and meets three objectives: it is consistent with prior literature, it provides a nominal process model for doing DS research, and it provides a mental model for presenting and evaluating DS research in IS. The DS process includes six steps: problem identification and motivation, definition of the objectives for a solution, design and development, demonstration, evaluation, and communication. We demonstrate and evaluate the methodology by presenting four case studies in terms of the DSRM, including cases that present the design of a database to support health assessment methods, a software reuse measure, an Internet video telephony application, and an IS planning method. The designed methodology effectively satisfies the three objectives and has the potential to aid the acceptance of DS research in the IS discipline.

5,420 citations

Book ChapterDOI
18 Sep 2000
TL;DR: Simulation results on five difficult test problems show that the proposed NSGA-II, in most problems, is able to find a much better spread of solutions and better convergence near the true Pareto-optimal front compared to PAES and SPEA--two other elitist multi-objective EAs which pay special attention towards creating a diverse Pareto-optimal front.
Abstract: Multi-objective evolutionary algorithms which use non-dominated sorting and sharing have been mainly criticized for their (i) O(MN³) computational complexity (where M is the number of objectives and N is the population size), (ii) non-elitism approach, and (iii) the need for specifying a sharing parameter. In this paper, we suggest a non-dominated sorting based multi-objective evolutionary algorithm (we called it the Non-dominated Sorting GA-II or NSGA-II) which alleviates all the above three difficulties. Specifically, a fast non-dominated sorting approach with O(MN²) computational complexity is presented. Second, a selection operator is presented which creates a mating pool by combining the parent and child populations and selecting the best (with respect to fitness and spread) N solutions. Simulation results on five difficult test problems show that the proposed NSGA-II, in most problems, is able to find much better spread of solutions and better convergence near the true Pareto-optimal front compared to PAES and SPEA--two other elitist multi-objective EAs which pay special attention towards creating a diverse Pareto-optimal front. Because of NSGA-II's low computational requirements, elitist approach, and parameter-less sharing approach, NSGA-II should find increasing applications in the years to come.
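The fast non-dominated sorting step described in the abstract can be sketched as follows (minimization of all objectives assumed; a minimal rendering of the published procedure, omitting NSGA-II's crowding-distance and selection machinery):

```python
def fast_non_dominated_sort(points):
    """NSGA-II-style fast non-dominated sort over objective vectors.

    For each point p, record S[p] = the points p dominates and n[p] = how
    many points dominate p.  Points with n[p] == 0 form front 0; removing
    a front decrements the counts of its dominated points, revealing the
    next front.  O(MN^2) comparisons for N points and M objectives.
    """
    N = len(points)

    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    S = [[] for _ in range(N)]        # indices each point dominates
    n = [0] * N                       # domination counts
    fronts = [[]]
    for p in range(N):
        for q in range(N):
            if dominates(points[p], points[q]):
                S[p].append(q)
            elif dominates(points[q], points[p]):
                n[p] += 1
        if n[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                n[q] -= 1
                if n[q] == 0:         # only remaining dominators were in front i
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    fronts.pop()                      # drop the trailing empty front
    return fronts
```

For instance, `fast_non_dominated_sort([(1, 5), (2, 3), (3, 1), (4, 4), (5, 5)])` places the three mutually non-dominated points in front 0, with (4, 4) and (5, 5) peeled off in subsequent fronts.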

4,878 citations

Book ChapterDOI
01 Jan 2014
TL;DR: An overview of the International Organization for Standardization (ISO) can be found in this paper, where the authors describe the ISO standards most relevant in a clinical laboratory service setting, as well as the process for obtaining and maintaining ISO certification.
Abstract: This chapter provides an overview of the International Organization for Standardization (ISO). Operating since 1947, the International Organization for Standardization (ISO) is a nongovernmental association consisting of representatives from over 150 countries, one member per country. The increased credibility associated with ISO certification leads to many advantages that include decreased operating expenses stemming from scrap and rework, and enhanced management control through management review participation. The chapter describes the ISO standards most relevant in a clinical laboratory service setting. The quality standards in the ISO 9000 family focus on quality management and include quality-management system (QMS) requirements that are general for the manufacturing and service industries. The ISO 9001 standard requires extensive interpretation, while ISO 15189 is an international standard specifically developed for medical laboratories, although it may be of relevance to such disciplines as clinical physiology and medical imaging. The chapter describes the process for obtaining and maintaining ISO certification. ISO certification can be an attractive credential for a clinical laboratory. The College of American Pathologists (CAP) continues to play a role in the development of the ISO 15189 standard and, since 2008, has been a certifying body for this standard. The certification process is followed by ongoing maintenance of the QMS by the laboratory, as well as surveillance audits performed by the certifying body.

3,992 citations