scispace - formally typeset
Author

Solomon Tesfamariam

Bio: Solomon Tesfamariam is an academic researcher at the University of British Columbia. He has contributed to research on topics including seismic risk and shear walls, has an h-index of 37, and has co-authored 200 publications receiving 4,358 citations. His previous affiliations include the University of Bristol and the University of Ottawa.


Papers
More filters
Journal ArticleDOI
TL;DR: A comprehensive review of multi-criteria decision-making (MCDM) applications in infrastructure management is presented, identifying trends and new developments in MCDM methods.
Abstract: In infrastructure management, multi-criteria decision-making (MCDM) has emerged as a decision support tool to integrate technical information and stakeholder values. Different MCDM techniques and tools have been developed. This paper presents a comprehensive review of the MCDM literature in the field of infrastructure management. Approximately 300 published papers reporting MCDM applications in infrastructure management during 1980–2012 were identified. The reviewed papers are classified by the type of infrastructure (e.g. bridges and pipes) and the prevalent decision or intervention (e.g. repair and rehabilitation), as well as by the MCDM method used in the analysis. The paper provides a taxonomy of these articles and identifies trends and new developments in MCDM methods. The results suggest significant growth in MCDM applications in infrastructure management over the last decade.

309 citations

Journal ArticleDOI
TL;DR: Fuzzy logic is employed to derive fuzzy probabilities (likelihoods) of basic events in the fault tree and to estimate fuzzy probabilities of output event consequences; the results can help professionals decide whether and where to take preventive or corrective actions and support informed decision-making in the risk management process.
Abstract: Vast amounts of oil & gas (O&G) are consumed around the world every day, transported and distributed mainly through pipelines. In Canada alone, the total length of O&G pipelines is approximately 100,000 km, the third largest network in the world. The integrity of these pipelines is of primary interest to O&G companies, consultants, governmental agencies, consumers and other stakeholders due to the adverse consequences and heavy financial losses of system failure. Fault tree analysis (FTA) and event tree analysis (ETA) are two graphical techniques used to perform risk analysis, where FTA represents the causes (likelihood) and ETA the consequences of a failure event. The 'bow-tie' approach integrates a fault tree (on the left side) and an event tree (on the right side) to represent causes, threats (hazards) and consequences on a common platform. The traditional bow-tie approach cannot characterize the model uncertainty that arises from the assumption of independence among different risk events. In this paper, to deal with vagueness of the data, fuzzy logic is employed to derive fuzzy probabilities (likelihoods) of basic events in the fault tree and to estimate fuzzy probabilities of output event consequences. The study also explores how interdependencies among various factors might influence analysis results, and introduces a fuzzy utility value (FUV) to perform risk assessment for natural gas pipelines using triple bottom line (TBL) sustainability criteria, namely social, environmental and economic consequences. The study aims to help owners of transmission and distribution pipeline companies consider, in risk management and decision-making, the multi-dimensional consequences that may arise from pipeline failures. The results can help professionals decide whether and where to take preventive or corrective actions and support informed decision-making in the risk management process.
A simple example is used to demonstrate the proposed approach.
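The fuzzy gate operations the abstract describes can be illustrated with triangular fuzzy numbers (TFNs). The sketch below is a generic fuzzy fault-tree calculation, not the paper's actual model; the event names and probability values are hypothetical.

```python
# Fuzzy fault-tree gate operations on triangular fuzzy numbers (l, m, u).
# Event names and probabilities are illustrative only.

def tfn_and(events):
    """AND gate: component-wise product of TFNs (approximate TFN arithmetic)."""
    l = m = u = 1.0
    for (el, em, eu) in events:
        l, m, u = l * el, m * em, u * eu
    return (l, m, u)

def tfn_or(events):
    """OR gate: 1 - prod(1 - p), applied to each TFN component.
    The result is increasing in each input, so bounds map to bounds."""
    acc_l = acc_m = acc_u = 1.0
    for (el, em, eu) in events:
        acc_l *= (1.0 - el)
        acc_m *= (1.0 - em)
        acc_u *= (1.0 - eu)
    return (1.0 - acc_l, 1.0 - acc_m, 1.0 - acc_u)

# Fuzzy probabilities (lower, modal, upper) of two basic events
corrosion   = (0.01, 0.02, 0.04)
third_party = (0.005, 0.01, 0.02)

# Top event: pipeline leak occurs if either cause occurs
leak = tfn_or([corrosion, third_party])
```

The OR gate preserves the TFN ordering (lower ≤ modal ≤ upper), so the output is again a valid fuzzy probability.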

239 citations

Journal ArticleDOI
TL;DR: The traditional AHP is modified to fuzzy AHP using fuzzy arithmetic operations and the concept of risk attitude and associated confidence of a decision maker on the estimates of pairwise comparisons are discussed.
Abstract: Environmental risk management is an integral part of risk analysis. The selection among different mitigating or preventive alternatives often involves competing and conflicting criteria, which requires sophisticated multi-criteria decision-making (MCDM) methods. The analytic hierarchy process (AHP) is one of the most commonly used MCDM methods, integrating subjective and personal preferences into the analysis. AHP works on the premise that decision-making for complex problems can be handled by structuring the complex problem into a simple and comprehensible hierarchy. However, AHP involves human subjectivity, which introduces vagueness-type uncertainty and necessitates decision-making under uncertainty. In this paper, vagueness-type uncertainty is handled using fuzzy-based techniques: the traditional AHP is modified to fuzzy AHP using fuzzy arithmetic operations. The concepts of risk attitude and the associated confidence of a decision maker in the estimates of pairwise comparisons are also discussed. The methodology of the proposed technique is built on a hypothetical example, and its efficacy is demonstrated through an application dealing with the selection of drilling fluid/mud for offshore oil and gas operations.
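One common way to turn fuzzy pairwise comparisons into criterion weights is Buckley's geometric-mean method. The sketch below is a generic fuzzy-AHP illustration under that assumption, not the paper's specific formulation; the 2x2 judgement matrix and its values are hypothetical.

```python
import math

# Fuzzy-AHP sketch (geometric-mean method) with triangular fuzzy
# pairwise judgements (l, m, u). All judgement values are illustrative.

def fuzzy_geo_mean(row):
    """Component-wise geometric mean of a row of TFNs."""
    n = len(row)
    return tuple(math.prod(t[i] for t in row) ** (1.0 / n) for i in range(3))

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# Fuzzy pairwise comparison matrix for two criteria, A and B:
# A is judged "moderately more important" than B (fuzzy 3), with
# the reciprocal judgement in the mirrored cell.
matrix = [
    [(1, 1, 1), (2, 3, 4)],
    [(1/4, 1/3, 1/2), (1, 1, 1)],
]

geo = [fuzzy_geo_mean(row) for row in matrix]   # fuzzy synthetic extents
crisp = [defuzzify(g) for g in geo]             # collapse TFNs to crisp values
total = sum(crisp)
weights = [c / total for c in crisp]            # normalized criterion weights
```

With the judgements above, criterion A receives roughly three quarters of the weight, consistent with the crisp-AHP intuition for a "moderate preference" judgement.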

189 citations

Journal ArticleDOI
TL;DR: A Bayesian belief network model is presented to evaluate the risk of failure of metallic water mains using structural integrity, hydraulic capacity, water quality, and consequence factors, supporting decisions on maintenance, rehabilitation, or replacement (M/R/R).
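The core mechanics of a discrete Bayesian belief network can be shown with a minimal two-node example: a parent node feeding a conditional probability table, with marginalization for prediction and Bayes' rule for diagnosis. The structure and all probabilities below are hypothetical, not taken from the paper's model.

```python
# Minimal two-node Bayesian belief network: corrosion -> failure.
# All probabilities are illustrative, not from the paper.

# Prior: P(corrosion)
p_corrosion = {"high": 0.3, "low": 0.7}

# Conditional probability table: P(failure | corrosion)
p_failure_given = {
    "high": {"yes": 0.25, "no": 0.75},
    "low":  {"yes": 0.05, "no": 0.95},
}

# Predictive inference: P(failure = yes) = sum_c P(c) * P(yes | c)
p_failure = sum(p_corrosion[c] * p_failure_given[c]["yes"] for c in p_corrosion)

# Diagnostic inference via Bayes' rule: P(corrosion = high | failure = yes)
p_high_given_failure = (
    p_corrosion["high"] * p_failure_given["high"]["yes"]
) / p_failure
```

Observing a failure raises the belief that corrosion is high from the 0.3 prior to roughly 0.68, which is the kind of belief updating that makes BBNs useful for prioritizing M/R/R actions.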

181 citations

Journal ArticleDOI
TL;DR: The concept of intuitionistic fuzzy sets is applied to AHP (IF-AHP) to handle both vagueness- and ambiguity-related uncertainties in the environmental decision-making process; the method is demonstrated with an illustrative example of selecting the best drilling fluid (mud) for drilling operations under multiple environmental criteria.
Abstract: The analytic hierarchy process (AHP) is a utility-theory-based decision-making technique, which works on the premise that decision-making for complex problems can be handled by structuring them into simple and comprehensible hierarchies. However, AHP involves human subjective evaluation, which introduces vagueness that necessitates decision-making under uncertainty. The vagueness is commonly handled through fuzzy set theory, by assigning degrees of membership. But the environmental decision-making problem becomes more involved if there is uncertainty in assigning the membership function (or degree of belief) to fuzzy pairwise comparisons, which is referred to as ambiguity (non-specificity). In this paper, the concept of intuitionistic fuzzy sets is applied to AHP, called IF-AHP, to handle both vagueness- and ambiguity-related uncertainties in the environmental decision-making process. The proposed IF-AHP methodology is demonstrated with an illustrative example of selecting the best drilling fluid (mud) for drilling operations under multiple environmental criteria.
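The distinguishing feature of an intuitionistic fuzzy number is that membership and non-membership need not sum to one; the leftover mass is the hesitation (non-specificity) the abstract calls ambiguity. The sketch below illustrates that idea with a standard score function for ranking; the judgement values and alternative names are hypothetical, and this is not the paper's IF-AHP procedure.

```python
# Intuitionistic fuzzy number: membership mu, non-membership nu,
# hesitation pi = 1 - mu - nu. Values below are illustrative only.

class IFN:
    def __init__(self, mu, nu):
        assert 0.0 <= mu and 0.0 <= nu and mu + nu <= 1.0, "invalid IFN"
        self.mu, self.nu = mu, nu

    @property
    def pi(self):
        """Hesitation margin: belief committed to neither side."""
        return 1.0 - self.mu - self.nu

    def score(self):
        """A common score function, mu - nu, used to rank IFNs."""
        return self.mu - self.nu

# Hypothetical judgements on two drilling-mud alternatives
mud_a = IFN(0.6, 0.2)   # fairly supported, pi = 0.2 hesitation
mud_b = IFN(0.5, 0.4)   # weakly supported, pi = 0.1 hesitation

best = max((mud_a, mud_b), key=IFN.score)
```

An ordinary fuzzy set is the special case pi = 0, i.e. nu = 1 - mu; a nonzero pi is exactly the extra degree of freedom IF-AHP uses to model ambiguity.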

157 citations


Cited by
More filters
Journal Article
TL;DR: This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiment.
Abstract: THE DESIGN AND ANALYSIS OF EXPERIMENTS. By Oscar Kempthorne. New York, John Wiley and Sons, Inc., 1952. 631 pp. $8.50. This book by a teacher of statistics (as well as a consultant for "experimenters") is a comprehensive study of the philosophical background for the statistical design of experiments. It is necessary to have some facility with algebraic notation and manipulation to be able to use the volume intelligently. The problems are presented from the theoretical point of view, without such practical examples as would be helpful for those not acquainted with mathematics. The mathematical justification for the techniques is given. As a somewhat advanced treatment of the design and analysis of experiments, this volume will be interesting and helpful for many who approach statistics theoretically as well as practically. With emphasis on the "why," and with description given broadly, the author relates the subject matter to the general theory of statistics and to the general problem of experimental inference. MARGARET J. ROBERTSON

13,333 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules.
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
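The mail-filtering example above — learning which messages a user rejects from labelled examples — can be sketched with a tiny naive-Bayes word model. This is a generic illustration of the idea, not a system described in the article; the training messages are made up.

```python
from collections import Counter
import math

# Toy spam filter: learn per-word rejection statistics from labelled
# messages, then score new mail by Laplace-smoothed log-odds.
# Training data is illustrative only.

def train(messages):
    """messages: list of (text, rejected: bool) pairs.
    Returns per-class word counts and per-class totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, rejected in messages:
        for word in text.lower().split():
            counts[rejected][word] += 1
            totals[rejected] += 1
    return counts, totals

def log_odds_rejected(text, counts, totals):
    """Positive score: message looks like mail the user rejects."""
    vocab = set(counts[True]) | set(counts[False])
    score = 0.0
    for word in text.lower().split():
        p_rej = (counts[True][word] + 1) / (totals[True] + len(vocab))
        p_keep = (counts[False][word] + 1) / (totals[False] + len(vocab))
        score += math.log(p_rej / p_keep)
    return score

counts, totals = train([
    ("win a free prize now", True),
    ("meeting agenda attached", False),
])
```

Retraining on each new accept/reject decision is what "constantly modifying and tuning a set of learned prediction rules" looks like in the smallest possible form.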

13,246 citations

Christopher M. Bishop1
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations