
Showing papers in "International Journal of Systems Assurance Engineering and Management in 2013"


Journal ArticleDOI
TL;DR: An ISM model is prepared to identify key enablers of TPM implementation and their managerial implications; the enablers are ranked through a questionnaire-based survey, and the interpretive structural modelling approach is used to analyse their mutual interaction.
Abstract: Total Productive Maintenance (TPM) is increasingly implemented by many organizations to improve their equipment efficiency and to obtain a competitive advantage in the global market in terms of cost and quality. However, implementation of TPM is not an easy task. There are certain enablers that help in the implementation of TPM, and the utmost need is to analyse the behaviour of these enablers for their effective utilization. The main objective of this paper is to understand the mutual interaction of these enablers and to identify the ‘driving enablers’ (i.e. those which influence the other enablers) and the ‘dependent enablers’ (i.e. those which are influenced by others). In the present work, these enablers have been identified through the literature, their ranking is done by a questionnaire-based survey, and the interpretive structural modelling (ISM) approach has been utilized to analyse their mutual interaction. An ISM model has been prepared to identify the key enablers and their managerial implications in the implementation of TPM.

99 citations
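As a toy illustration of the ISM step described above, the sketch below computes the final reachability matrix, driving power and dependence of each enabler from a hypothetical binary self-interaction matrix; the enablers and relations are invented, not the paper's survey data.

```python
import numpy as np

# Hypothetical structural self-interaction data for 4 TPM enablers:
# entry (i, j) = True means enabler i influences enabler j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=bool)

# Initial reachability matrix: adjacency plus self-reachability.
R = A | np.eye(4, dtype=bool)

# Transitive closure (Warshall's algorithm) gives the final
# reachability matrix used for ISM level partitioning.
for k in range(4):
    R = R | (R[:, [k]] & R[[k], :])

driving_power = R.sum(axis=1)  # row sums: how many enablers each one reaches
dependence = R.sum(axis=0)     # column sums: how many enablers reach it
print(R.astype(int), driving_power, dependence, sep="\n")
```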


Journal ArticleDOI
TL;DR: This paper explains the fundamentals of fuzzy theory and describes the application of fuzzy importance measures in FFTA; the review reveals the effectiveness of FFTA in comparison with conventional FTA when accurate reliability-oriented information is inadequate.
Abstract: Fault tree analysis (FTA) is a widely used method for analyzing a system’s failure logic and calculating overall reliability. However, application of conventional FTA has some shortcomings, e.g. in handling uncertainties, allowing the use of linguistic variables, and integrating human error into the failure logic model. Hence, fuzzy set theory has been proposed to overcome the limitations of conventional FTA. Fuzzy logic provides a framework whereby basic notions such as similarity, uncertainty and preference can be modeled effectively. The aim of this paper is to review the combination of fuzzy theory with fault tree analysis and their applications since 1981, reflecting the current status of fuzzy fault tree analysis (FFTA) methodologies, their strengths, their weaknesses, and their applications. The paper explains the fundamentals of fuzzy theory and describes the application of fuzzy importance measures in FFTA. The concepts of failure possibility and uncertainty analysis using FFTA are discussed, and the paper concludes with a discussion of the application of FFTA in different fields. The review reveals the effectiveness of FFTA in comparison with conventional FTA when accurate reliability-oriented information is inadequate.

98 citations
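To make the FFTA gate operations concrete, here is a minimal sketch using triangular fuzzy numbers for basic-event possibilities; the componentwise product and complement rules are the standard triangular-fuzzy-number approximations, and the numbers are illustrative rather than taken from the review.

```python
# Triangular fuzzy possibilities (low, mode, high) for two basic events.
# Values are illustrative, not from the paper.
p1 = (0.01, 0.02, 0.04)
p2 = (0.03, 0.05, 0.08)

def fuzzy_and(*events):
    """AND gate: componentwise product (standard TFN approximation)."""
    out = [1.0, 1.0, 1.0]
    for a, b, c in events:
        out = [out[0] * a, out[1] * b, out[2] * c]
    return tuple(out)

def fuzzy_or(*events):
    """OR gate: 1 - prod(1 - p), applied componentwise; the rule is
    monotone in each argument, so (low, mode, high) order is preserved."""
    out = [1.0, 1.0, 1.0]
    for a, b, c in events:
        out = [out[0] * (1 - a), out[1] * (1 - b), out[2] * (1 - c)]
    return tuple(1 - x for x in out)

print("AND:", fuzzy_and(p1, p2))  # fuzzy possibility of both events
print("OR :", fuzzy_or(p1, p2))   # fuzzy possibility of either event
```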


Journal ArticleDOI
TL;DR: A questionnaire-based survey and the interpretive structural modelling approach have been used to model and analyse key barriers and derive managerial insights.
Abstract: In a highly competitive environment, to be successful and to achieve world-class manufacturing, organizations must possess both efficient maintenance and effective manufacturing strategies. A strategic approach to improving the performance of maintenance activities is to effectively adapt and implement strategic TPM initiatives in manufacturing organizations. Total productive maintenance (TPM) is not easy to adopt and implement, due to the presence of many barriers. The purpose of this paper is to identify and analyse these barriers. A questionnaire-based survey was conducted to rank them, and its results, together with the interpretive structural modelling approach, have been used to model and analyse key barriers and derive managerial insights.

77 citations


Journal ArticleDOI
TL;DR: In this research work, a survey of reliability approaches in various fields of engineering and physical sciences is carried out, outlining the major areas, i.e. past, current and future trends, of reliability methods and applications for the reader.
Abstract: In the modern scenario, reliability has become one of the most challenging and demanding areas of study. The theory and methods of reliability analysis have developed significantly during the last 40 years and have been documented in a number of publications, so a reliability engineer must be aware of the importance of each reliability measure of a system and its fields of application. In this research work, a survey of reliability approaches in various fields of engineering and physical sciences is carried out. In this survey, the author tries to outline the major areas, i.e. past, current and future trends, of reliability methods and applications for the reader.

70 citations


Journal ArticleDOI
TL;DR: The transitional state probabilities, asymptotic behavior and characteristics such as reliability, availability, MTTF and the cost effectiveness of the system have been evaluated with the help of the supplementary variable technique, Laplace transformations and the copula methodology.
Abstract: The paper deals with the availability analysis of a system consisting of two subsystems, namely subsystem-1 and subsystem-2. Subsystem-1 works under a k-out-of-n: G (good) configuration, while subsystem-2 has two identical units connected in a parallel configuration. A controller is attached to each subsystem for proper functioning of the system. All failure rates are constant, but repairs follow general and exponential distributions. The transitional state probabilities, asymptotic behavior and some characteristics such as reliability, availability, MTTF and the cost effectiveness of the system have been evaluated with the help of the supplementary variable technique, Laplace transformations and the copula methodology. Finally, some particular cases and numerical examples are presented to illustrate the model.

44 citations
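A minimal sketch of the availability structure described above, assuming steady-state unit and controller availabilities and independence; the paper's supplementary-variable and copula treatment is more general, and all numbers here are illustrative.

```python
from math import comb

def k_out_of_n_good(k, n, a):
    """Availability of a k-out-of-n:G subsystem of i.i.d. units,
    each available with probability a."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

def parallel(a, m=2):
    """Availability of m identical units in parallel."""
    return 1 - (1 - a)**m

# Illustrative numbers (not from the paper): unit availability 0.95,
# controller availability 0.99, subsystems and controllers in series.
a_unit, a_ctrl = 0.95, 0.99
A_sys = (k_out_of_n_good(2, 4, a_unit) * a_ctrl) * (parallel(a_unit) * a_ctrl)
print(f"Point availability of the series arrangement: {A_sys:.4f}")
```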


Journal ArticleDOI
TL;DR: This study deals with the classical and Bayesian analysis of hybrid censored lifetime data under the assumption that the lifetimes follow the Lindley distribution, and obtains the Bayes estimate along with its posterior standard error and highest posterior density credible intervals for the parameter.
Abstract: This study deals with the classical and Bayesian analysis of hybrid censored lifetime data under the assumption that the lifetimes follow the Lindley distribution. In the classical setup, the maximum likelihood estimate of the parameter is computed along with its standard error. Further, by assuming Jeffrey’s invariant and gamma priors for the unknown parameter, the Bayes estimate, its posterior standard error, and the highest posterior density credible intervals of the parameter are obtained. A Markov chain Monte Carlo technique, the Metropolis–Hastings algorithm, has been utilized to generate draws from the posterior density of the parameter. A real data set representing the waiting times of bank customers is analysed for illustration. A comparison study is conducted to judge the performance of the classical and Bayesian estimation procedures.

43 citations
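A minimal sketch of the Bayesian step, assuming a complete (uncensored) sample and a gamma prior; the paper's hybrid-censoring likelihood adds a censoring term, and the data here are synthetic stand-ins for the bank waiting times.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.gamma(2.0, 1.0, size=50)  # stand-in data for the waiting times

def log_post(theta, x, a=0.001, b=0.001):
    """Log posterior of the Lindley parameter under a Gamma(a, b) prior,
    complete-sample likelihood: f(x) = theta^2/(1+theta) (1+x) e^{-theta x}."""
    if theta <= 0:
        return -np.inf
    n = len(x)
    loglik = (2 * n * np.log(theta) - n * np.log(1 + theta)
              + np.sum(np.log(1 + x)) - theta * np.sum(x))
    logprior = (a - 1) * np.log(theta) - b * theta
    return loglik + logprior

# Random-walk Metropolis-Hastings on theta.
theta, chain = 1.0, []
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop, x) - log_post(theta, x):
        theta = prop
    chain.append(theta)

post = np.array(chain[5000:])  # discard burn-in
print(f"Bayes estimate: {post.mean():.3f}  posterior SE: {post.std():.3f}")
```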


Journal ArticleDOI
TL;DR: The main focus of this paper is on the reliability modelling of a computer system considering the concepts of redundancy, preventive maintenance and priority in repair activities; expressions for several reliability and economic measures are derived in steady state using a semi-Markov process and the regenerative point technique.
Abstract: The main focus of this paper is on the reliability modelling of a computer system considering the concepts of redundancy, preventive maintenance and priority in repair activities. Two identical units of a computer system are taken: one unit is initially operative and the other is kept as spare in cold standby. In each unit, h/w and s/w work together and may fail independently from the normal mode. There is a single server who visits the system immediately as and when needed. The server conducts preventive maintenance of the unit (computer system) after a maximum operation time. Repair of the h/w is done at its failure, while the s/w is upgraded from time to time as per requirements. If the server is unable to repair the h/w within a pre-specified time (called the maximum repair time), the h/w is replaced by a new unit, incurring some replacement time. Priority is given to h/w repair over s/w up-gradation if the s/w of one unit is under up-gradation and the h/w of the other operative unit fails. The failure times of h/w and s/w follow negative exponential distributions, while the distributions of preventive maintenance, h/w repair/replacement and s/w up-gradation times are taken as arbitrary with different probability density functions. The expressions for several reliability and economic measures are derived in steady state using a semi-Markov process and the regenerative point technique. A graphical study of the mean time to system failure (MTSF) and the profit function has also been made by giving particular values to various parameters and costs.

29 citations


Journal ArticleDOI
TL;DR: The use of the PSO algorithm for the SRGM parameter estimation problem is proposed; the results obtained are better than those from GA, and PSO may thus be used to estimate SRGM parameters.
Abstract: Software quality includes many attributes, among them the reliability of the software. Predicting software reliability in the early phases of development enables software practitioners to build robust and fault-tolerant systems. The purpose of this paper is to predict software reliability by estimating the parameters of Software Reliability Growth Models (SRGMs). SRGMs are mathematical models which generally reflect the properties of the fault-detection process during testing. Particle Swarm Optimization (PSO) has been applied to several optimization problems and has shown good performance. PSO is a popular machine learning algorithm under the category of swarm intelligence and is an evolutionary algorithm like the Genetic Algorithm (GA). In this paper we propose the use of the PSO algorithm for the SRGM parameter estimation problem and then compare the results with those of GA. The results are validated using data obtained from 16 projects. The results obtained from PSO have high predictive ability, reflected in low prediction errors, and are better than those obtained from GA. Hence, PSO may be used to estimate SRGM parameters.

27 citations
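A minimal sketch of PSO-based SRGM fitting, assuming the Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) and synthetic fault-count data; the paper does not fix a single SRGM, and the PSO constants here are conventional choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative cumulative fault counts per week (not from the 16 projects).
t = np.arange(1, 13, dtype=float)
y = np.array([12, 21, 29, 35, 40, 44, 47, 50, 52, 53, 54, 55], dtype=float)

def sse(params):
    """Sum of squared errors for the Goel-Okumoto SRGM m(t) = a(1 - e^{-bt})."""
    a, b = params
    return np.sum((y - a * (1 - np.exp(-b * t)))**2)

# Minimal global-best PSO over (a, b) with box constraints.
lo, hi = np.array([1.0, 1e-4]), np.array([200.0, 2.0])
X = rng.uniform(lo, hi, size=(30, 2))             # particle positions
V = np.zeros_like(X)                              # velocities
P, Pf = X.copy(), np.array([sse(x) for x in X])   # personal bests
g = P[Pf.argmin()].copy()                         # global best

for _ in range(300):
    r1, r2 = rng.uniform(size=(2, 30, 2))
    V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)
    X = np.clip(X + V, lo, hi)
    f = np.array([sse(x) for x in X])
    better = f < Pf
    P[better], Pf[better] = X[better], f[better]
    g = P[Pf.argmin()].copy()

print(f"a = {g[0]:.2f}, b = {g[1]:.4f}, SSE = {sse(g):.2f}")
```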


Journal ArticleDOI
TL;DR: The objective of this work is to develop a method for power system reliability assessment using the FTA approach; the methodology adopted in this investigation is to generate fault trees for each load point of the power system.
Abstract: Fault Tree Analysis (FTA) serves as a powerful tool for system risk analysis and reliability assessment. FTA is a top-down approach to failure analysis, starting with a potential undesirable event and then determining the basic events (BE). The undesired state of the system is represented by the top event (TE); the TE and BEs are connected through logic gates (AND gate, OR gate). The fault tree is a tool to identify and assess the combinations of undesired events in the control of system operation and its environment that can lead to the undesired state of the system, and it is recognized worldwide as an important tool for evaluating safety and reliability in system design, development and operation. In this work, an efficient methodology is utilized to carry out the reliability assessment of critical and/or complex systems, and the main features and application of this technique for a power system are discussed. Minimal cut sets are developed by means of the Boolean equation method. For the main substation, all common cause failures (CCF) are considered at an average temperature of 35 °C. The objective of this work is to develop a method for power system reliability assessment using the FTA approach. The methodology adopted in this investigation is to generate fault trees for each load point of the power system; these fault trees relate to the disruption of energy delivery from generators to the specific load points. Quantitative evaluation of the fault trees provides a basis for assessing the reliability of power delivery and enables identification of the most important elements in the power system. The power system reliability is assessed and its main contributors are identified, both qualitatively and quantitatively.

20 citations
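A minimal sketch of the quantitative step, computing the top-event probability from minimal cut sets via the rare-event bound and a second-order refinement; the basic events, probabilities and cut sets are hypothetical, not the paper's power-system data.

```python
from itertools import combinations
from math import prod

# Hypothetical basic-event failure probabilities for one load point.
p = {"G1": 1e-3, "G2": 1e-3, "T1": 5e-4, "L1": 2e-4}

# Minimal cut sets from the Boolean reduction of the fault tree
# (illustrative, not the paper's cut sets).
cut_sets = [{"L1"}, {"T1", "G1"}, {"G1", "G2"}]

def cs_prob(cs):
    """Probability of a cut set, assuming independent basic events."""
    return prod(p[e] for e in cs)

# First-order (rare-event) upper bound and a second-order refinement.
upper = sum(cs_prob(c) for c in cut_sets)
pairs = sum(prod(p[e] for e in c1 | c2) for c1, c2 in combinations(cut_sets, 2))
print(f"P(TE) <= {upper:.3e}; 2nd-order estimate {upper - pairs:.3e}")

# Fussell-Vesely importance: share of top-event risk each cut set carries.
for c in cut_sets:
    print(sorted(c), f"FV = {cs_prob(c) / upper:.2%}")
```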


Journal ArticleDOI
TL;DR: Two machine learning methods, support vector machine and K-Nearest Neighbours (KNN), were investigated in this paper to predict the coagulant dosage in water treatment plants (WTPs); the results show that different machine learning methods have competing predictive abilities.
Abstract: Two machine learning methods, support vector machine and K-Nearest Neighbours (KNN), were investigated in this paper to predict the coagulant dosage in water treatment plants (WTPs). Two types of support vector machine regression techniques, ε-SVR and ν-SVR, using two different kernel functions (the radial basis function (RBF) and the polynomial function), and KNN were investigated in order to predict coagulant dosage in a large, a medium, and two small-sized WTPs. The results show that the two support vector machine regression techniques have good predictive capabilities for the large and medium WTPs as compared to the small water systems. The performance of ε-SVR with the RBF kernel function was compared with that obtained from the KNN algorithm (as a baseline) for the four WTPs. The comparison shows that KNN performs similarly to ε-SVR for the large and medium-sized WTPs and performs better for the two small-sized WTPs. The results show that different machine learning methods have competing predictive abilities.

20 citations
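A minimal sketch of the compared regressors using scikit-learn's SVR (ε-SVR), NuSVR (ν-SVR) and KNeighborsRegressor; the water-quality features and dosing rule below are synthetic stand-ins for the WTP records.

```python
import numpy as np
from sklearn.svm import SVR, NuSVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic stand-in for WTP records: raw-water quality features
# (turbidity, pH, temperature, conductivity) -> coagulant dose.
X = rng.uniform([1, 6.0, 4, 100], [300, 8.5, 25, 600], size=(400, 4))
dose = 5 + 0.08 * X[:, 0] - 2.0 * (X[:, 1] - 7) + rng.normal(0, 1.5, 400)

models = {
    "eps-SVR (RBF)": SVR(kernel="rbf", C=10.0, epsilon=0.5),
    "nu-SVR (poly)": NuSVR(kernel="poly", degree=2, C=10.0, nu=0.5),
    "KNN (k=5)":     KNeighborsRegressor(n_neighbors=5),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)  # SVR and KNN need scaling
    r2 = cross_val_score(pipe, X, dose, cv=5, scoring="r2")
    print(f"{name:14s} mean CV R^2 = {r2.mean():.3f}")
```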


Journal ArticleDOI
TL;DR: This paper deals with the cost-benefit analysis of a single-server, two non-identical unit cold standby system with two modes for each unit: normal and total failure.
Abstract: This paper deals with the cost-benefit analysis of a single-server, two non-identical unit cold standby system with two modes for each unit: normal and total failure. One unit gets priority in operation as well as in repair over the other unit. The switching device used to put the standby unit into operation is always perfect and instantaneous. The failure and repair time distributions of each unit are taken as Weibull with a common shape parameter and different scale parameters. The system is observed at suitable regenerative epochs to obtain various measures of system effectiveness of interest to system designers and operations managers.
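A minimal numeric sketch of the Weibull assumption, computing unit MTTF and MTTR from a common shape parameter and different scale parameters and the resulting long-run availability ratio; parameters are illustrative, and the paper itself works with regenerative-point measures rather than this simple ratio.

```python
from math import gamma

beta = 1.5                                     # common shape parameter
eta_fail = {"unit1": 1000.0, "unit2": 800.0}   # failure scales (h), illustrative
eta_rep = {"unit1": 20.0, "unit2": 30.0}       # repair scales (h), illustrative

def weibull_mean(eta, beta):
    """Mean of a Weibull(shape=beta, scale=eta) distribution."""
    return eta * gamma(1 + 1 / beta)

for u in eta_fail:
    mttf = weibull_mean(eta_fail[u], beta)
    mttr = weibull_mean(eta_rep[u], beta)
    # Long-run (steady-state) availability of a maintained unit.
    print(f"{u}: MTTF={mttf:.0f} h, MTTR={mttr:.1f} h, "
          f"A={mttf / (mttf + mttr):.4f}")
```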

Journal ArticleDOI
TL;DR: Qualitative data from the PoF approach and quantitative data from statistical analysis are combined to form a modified physics of failure approach, which overcomes some of the challenges faced by the PoF approach as it involves detailed analysis of stress factors, data modeling and prediction.
Abstract: Traditional approaches like MIL-HDBK, Telcordia, PRISM, etc. have limitations in accurately predicting reliability due to advancements in technology, processes, materials, etc. As predicting reliability is a major concern in the field of electronics, the physics of failure (PoF) approach has gained considerable importance: it involves investigating the root cause, which further helps in reliability growth by redesigning the structure, changing parameters at the manufacturing level and modifying items at the circuit level. On the other hand, probability and statistics methods provide quantitative data with reliability indices from testing by experimentation and by simulation. In this paper, qualitative data from the PoF approach and quantitative data from statistical analysis are combined to form a modified physics of failure approach. This methodology overcomes some of the challenges faced by the PoF approach, as it involves detailed analysis of stress factors, data modeling and prediction. A decision support system is added to this approach to choose the best option from different failure data models, failure mechanisms, failure criteria and other factors.

Journal ArticleDOI
TL;DR: A novel practical method for the detection and classification of broken rotor bars, using motor current signature analysis associated with a neural network technique, is developed, based on the (fs − fr) mixed eccentricity harmonic.
Abstract: Early detection and diagnosis of incipient faults are desirable to ensure improved operational effectiveness of induction motors. A novel practical method for the detection and classification of broken rotor bars, using motor current signature analysis associated with a neural network technique, is developed. The motor slip is calculated via a new, simple and rigorous formula based on the (fs − fr) mixed eccentricity harmonic. The experimental study, carried out on hundreds of observations, shows that the mixed eccentricity harmonic (fs − fr) has the largest amplitude in its existence range, under different motor loads and conditions (healthy or defective). Since (fs − fr) is related to the slip and the mechanical rotational frequency, detection of the broken rotor bar harmonics (1 ± 2ks)fs becomes easy. The amplitudes of these harmonics and the slip value (the detection and discernment criterion) are used as the neural network inputs, and the neural network provides a reliable decision on the machine condition. The experimental results obtained from 1.1 and 3 kW motors prove the effectiveness of the proposed method.
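A minimal sketch of the slip recovery described above, assuming fr = fs(1 − s)/p with a hypothetical pole-pair count and harmonic location; with the slip known, the (1 ± 2ks)fs broken-bar sidebands follow directly.

```python
# Illustrative values (not measurements from the paper's 1.1 and 3 kW motors).
fs = 50.0      # supply frequency (Hz)
p = 2          # pole pairs (assumed)
f_ecc = 25.8   # located (fs - fr) mixed-eccentricity harmonic (Hz)

# Rotational frequency and slip recovered from the eccentricity harmonic:
# fr = fs - f_ecc and fr = fs (1 - s) / p  =>  s = 1 - p * fr / fs.
fr = fs - f_ecc
s = 1 - p * fr / fs
print(f"fr = {fr:.2f} Hz, slip = {s:.4f}")

# With the slip known, the broken-rotor-bar sidebands (1 +/- 2ks) fs
# can be targeted directly in the current spectrum.
for k in (1, 2):
    print(f"k={k}: {(1 - 2*k*s)*fs:.2f} Hz and {(1 + 2*k*s)*fs:.2f} Hz")
```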

Journal ArticleDOI
TL;DR: This is the first study to use a combined ANP and GTMA approach leading to a single numerical index of effectiveness for a manufacturing system, which will help managers benchmark the effectiveness of their manufacturing systems against their peers.
Abstract: Evaluating the effectiveness of a manufacturing system is increasingly recognized as a tool for gaining competitive success. Today, many new manufacturing technologies are entering the market, and to build managers’ confidence in adopting them, measurement of their effectiveness is a must. Developing a model for measuring the effectiveness of a manufacturing system is therefore significant from a strategic management point of view. Manufacturing effectiveness factors from the literature and an expert questionnaire were utilized prior to building the effectiveness measurement model. To prioritize these factors, we used a well-known multi-attribute decision making (MADM) technique, the Analytic Network Process (ANP). ANP, a generalized form of AHP, allows interdependencies and feedback within and between clusters of factors. A group of experts was consulted to establish interrelations and to provide weights for the pairwise comparisons; the outcome of the ANP is a weighted comparison of the factors. A Manufacturing System Effectiveness Index (MSEI) is then calculated using a robust MADM technique, the Graph Theoretic and Matrix Approach (GTMA). This index is a single numerical value and will help managers benchmark the effectiveness of their manufacturing systems against their peers. A case study in three organisations is performed to demonstrate and validate the use of GTMA for calculating the MSEI. To the authors’ knowledge, this is the first study to use a combined ANP and GTMA approach leading to a single numerical index of effectiveness for a manufacturing system.

Journal ArticleDOI
TL;DR: It is shown that the fault index amplitudes of the signals obtained with constant permeability are larger than those of the real case, indicating the influence of magnetic saturation on the analysis of the faulty induction motor.
Abstract: Static eccentricity produces low-frequency air gap flux components; however, they can be observed in the stator current spectrum only under mixed eccentricity and for high degrees of rotor shifting. Unlike motor current signature analysis, air-gap magnetic flux signature analysis allows detection of small degrees of purely static eccentricity. The simulation results are obtained using the time-stepping finite element method. In order to indicate the influence of magnetic saturation on the analysis of the faulty induction motor, two permeability models, constant and non-linear, are included in this paper. It is shown that the fault index amplitudes of the signals obtained with constant permeability are larger than those of the real case. In this paper the amplitudes of the characteristic frequency components $f_{ecc} = |f_s \pm k f_r|$ under low degrees of purely static eccentricity fault are detected using air-gap magnetic flux signature analysis. Moreover, new index signatures are detected around the third time harmonic in the air-gap magnetic flux density spectrum for the saturated motor; these components are expressed by $f_{ecc} = m f_s \pm f_r$.

Journal ArticleDOI
TL;DR: A framework to facilitate the successful implementation of performance-based railway infrastructure maintenance contracting is presented, and a performance monitoring system is proposed to assess the outcome and identify improvement potentials of the maintenance outsourcing strategy.
Abstract: The achievement of maintenance objectives that support the overall business objectives is the pursuit of any maintenance department. Using an in-house or an outsourced maintenance service provider is a decision which poses a challenge in the management of the maintenance function. Should the decision be for outsourcing, the next concern is the selection of the strategy most suitable for the business environment, structure and philosophy. In an effort to improve the maintenance function so as to deliver the set objectives, some infrastructure managers have adopted the approach of outsourcing the maintenance function, giving larger responsibilities to maintenance service providers called contractors. Such a change requires adequate attention to meet the pressing need of achieving the designed capacity of the existing railway infrastructure and to support a competitive and sustainable transport system. This paper discusses performance-based railway infrastructure maintenance contracting with its issues and challenges. The approach of this article is a review of the literature as well as a synthesis of practices. A framework to facilitate the successful implementation of performance-based railway infrastructure maintenance contracting is presented. A performance monitoring system is also proposed to assess the outcome and identify improvement potentials of the maintenance outsourcing strategy. A case study is given to demonstrate the monitoring of a typical maintenance activity that can be outsourced using this strategy.

Journal ArticleDOI
TL;DR: Important issues are identified for the O&M support of mechanical systems and implementation steps are proposed; mathematical models are developed to assess the support objectives, and substantial savings can be made in a system’s LCC when the support is optimized with respect to preventive maintenance.
Abstract: The performance of mechanical systems over their life cycle is the main concern of users, and original equipment manufacturers (OEMs) of such systems are expected to deliver customized products with documented support to meet this objective. Product support is of various kinds, one of which is operation and maintenance (O&M) support. The cost incurred for each support action adds to the system life cycle cost (LCC); therefore, a trade-off between O&M support, operational objectives and design characteristics is required to optimize the LCC. In this paper, important issues are identified for the O&M support of mechanical systems and implementation steps are proposed. Mathematical models are developed to assess the support objectives, i.e. the LCC and the operational availability of the system. The LCC takes into account the acquisition cost and the discounted sum of support activity costs, which consist of operating, inspection, corrective, preventive, overhaul and logistics costs. The proposed methodology has been applied to a real-life problem, in which the OEM provides O&M support for its compressors installed at a compressed natural gas workstation in the national capital region, India. The results show that substantial savings can be made in the system’s LCC when the support is optimized with respect to preventive maintenance, viz. age-based maintenance at the system level, and group maintenance based on the optimum replacement interval and the scale parameters of the components’ failure distributions. The sensitivity analysis validated the robustness of the solution methodology and the obtained results. This work will help OEMs, customers, academicians, researchers, industrialists and other concerned persons understand the importance, severity and benefits of applying and implementing O&M support.
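A minimal sketch of the LCC structure described above: acquisition cost plus the discounted sum of annual support-cost streams. All figures, the constant-cost assumption and the discount rate are illustrative, not the case-study values.

```python
# A minimal discounted life cycle cost sketch; all figures are illustrative.
acquisition = 150_000.0   # purchase cost
discount = 0.08           # annual discount rate (assumed)
years = 10

# Annual support-cost streams (operating, inspection, corrective,
# preventive, overhaul, logistics), assumed constant for simplicity.
annual = {"operating": 12_000, "inspection": 1_500, "corrective": 4_000,
          "preventive": 2_500, "overhaul": 3_000, "logistics": 1_000}

support = sum(annual.values())
# Present value of the recurring support cost over the horizon.
pv_support = sum(support / (1 + discount)**t for t in range(1, years + 1))
lcc = acquisition + pv_support
print(f"Discounted support cost: {pv_support:,.0f}; LCC: {lcc:,.0f}")
```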

Journal ArticleDOI
TL;DR: An attempt is made to improve the performance of LX-PM by hybridizing it with quadratic approximation; the efficiency and reliability of the designed hybrid algorithm are demonstrated on a set of 15 benchmark test problems.
Abstract: Finding the global optimal solution of a highly complex non-linear constrained optimization problem is a challenge for researchers. Nowadays, real-coded genetic algorithms (GAs) have become popular for solving such problems due to their diversity-preserving mechanism. Recent literature shows that, for solving constrained optimization problems, the real-coded GA LX-PM, which uses Laplace crossover and power mutation, is highly efficient. In this paper an attempt is made to improve the performance of LX-PM by hybridizing it with quadratic approximation. The efficiency and reliability of the designed hybrid algorithm are demonstrated on a set of 15 benchmark test problems.
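A minimal sketch of the quadratic approximation operator used in such hybridizations: the vertex of the parabola through three evaluated points, here in one dimension (the formula is the standard three-point quadratic interpolation, applied coordinate-wise in the vector case).

```python
def quadratic_approx(x1, f1, x2, f2, x3, f3):
    """Vertex of the quadratic through (x1,f1), (x2,f2), (x3,f3);
    x1 is typically the best of the three points. Falls back to x1
    if the denominator degenerates (collinear points)."""
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    if abs(den) < 1e-12:
        return x1
    return 0.5 * num / den

# One-dimensional check on f(x) = (x - 3)^2: the vertex is recovered exactly.
f = lambda x: (x - 3.0)**2
print(quadratic_approx(2.0, f(2.0), 4.0, f(4.0), 5.0, f(5.0)))  # -> 3.0
```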

Journal ArticleDOI
TL;DR: The predictive model based on ordinal regression proposed in this work indicates that the proposed complexity metric may act as an objective indicator of understandability, as the accuracy of the model is 86.3 %, which is quite high.
Abstract: Structural complexity metrics have been widely used to assess the quality of an artefact. Researchers have previously defined complexity metrics to assess the quality of multidimensional models for data warehouses. These metrics have been defined considering various elements like facts, dimensions and dimension hierarchies, but have not taken into account the relationships among these elements of the models. In our previous work, a comprehensive complexity metric for multidimensional models for data warehouses was proposed which considered not only the complexity due to the elements but also the structural complexity due to the relationships among them. However, the proposal lacked theoretical and empirical validation of the metric, so its practical utility could not be established. This paper validates the proposed metric theoretically as well as empirically. The theoretical validation using Briand’s framework shows that the proposed metric satisfies most of the properties required of a complexity measure. Empirical validation is carried out to observe the relationship between the complexity metric and understandability, a sub-characteristic of the maintainability of multidimensional models. The results show that the metric has a significant positive correlation with the understandability of multidimensional models. The predictive model based on ordinal regression proposed in this work indicates that the proposed complexity metric may act as an objective indicator of understandability, as the accuracy of the model is 86.3 %, which is quite high.

Journal ArticleDOI
TL;DR: The Bayesian procedure for the estimation of the parameters of the inverse Weibull distribution under a Type-II hybrid censoring scheme is discussed, and the highest posterior density credible intervals for the parameters are constructed.
Abstract: In this paper, we discuss the Bayesian procedure for the estimation of the parameters of the inverse Weibull distribution under a Type-II hybrid censoring scheme. The highest posterior density credible intervals for the parameters have also been constructed. The performance of the Bayes estimators of the model parameters has been compared with the maximum likelihood estimators through Markov chain Monte Carlo techniques. Finally, two real data sets are analysed for illustration.

Journal ArticleDOI
TL;DR: An approach called wrenching is proposed that achieves the best resident satisfaction at a reduced deployment cost; to reduce deployment cost and resolve the non-reusability of resources, the product line engineering methodology can be employed in selecting or configuring the appropriate smart home technology.
Abstract: Product line engineering (PLE) has proven to be the paradigm for developing diversified products and systems in shorter time, at lower cost, and with higher quality. Ultimately, the goal of PLE is to evolve a set of products that have both commonality and variations built into them, which allows a high degree of variability between the different products. The current trend in smart home deployment, based on commissions, guarantees complete satisfaction for each inhabitant, as systems are designed on a per-resident basis. To reduce the high deployment cost and to resolve the non-reusability of resources, the product line engineering methodology can be employed in selecting or configuring the appropriate smart home technology. Within smart homes and ambient assisted living, reusable features not only reduce cost, they also do not compromise the guarantee of complete satisfaction of each resident, although some senior inhabitants may impose severe constraints on the binding of a few common features. In our previous work (Sharma et al., 9th International Conference on Smart Homes and Health Telematics (ICOST), 2011), we proposed an approach called wrenching that achieves the best resident satisfaction at a reduced deployment cost. Today, seniors are a fast-growing population globally ( http://www.aoa.gov/agingstatsdotnet/Main_Site/Data/2008 ), and the increased demand for smart homes in the near future is undeniable. Thus, to provide assistance and independence to each senior resident of a smart home, reducing the time to market (Wohlin and Ahlgren, Softw Qual J 4:189–205, 1995) of appropriate assistive smart home technology becomes an essential consideration. Long-term analysis of variability binding can show that some variants are bound more often than others. Respecting the satisfaction guarantee, the highly demanded variants can be permanently migrated into the commonality, advocating reusability. The improved reusability of features (e.g., sensors) enhances not only economy of scale but also time to market. The migration of a feature from variability to commonality is known as realizing.

Journal ArticleDOI
TL;DR: Interference theory, the subject matter of this paper, has acquired an important place in the reliability study of systems; in it, the system’s strength and the stress acting on it are taken into consideration in evaluating its reliability.
Abstract: Interference theory, the subject matter of this paper, has acquired an important place in the reliability study of systems. In it, the system’s strength and the stress acting on it are taken into consideration in evaluating its reliability. Two aspects of reliability problems are considered here, viz. evaluation of system reliability from a mathematical model of the system, and inference about reliability. Under the former, we discuss some important interference or stress–strength (S–S) models along with the expression for reliability in each case. The models considered are: for single-component systems, when S–S are independent/correlated, when they have mixtures of distributions, when more than one stress acts on a component (i.e. a component may fail in different ways), and when the parameters of the distributions are random variables; and for multi-component systems, the chain model; cold, warm and cascade redundancy with perfect and imperfect switches; and when S–S are stochastic processes. In addition, we discuss some time-dependent S–S models, where time is taken into consideration along with S–S, and some maintenance (in particular, repair) problems. For inference in interference models we discuss parametric (classical and Bayesian) as well as non-parametric studies. Studies involving Monte Carlo simulation for the estimation of reliability and other characteristics are also presented, and we highlight some studies of reliability growth models.
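A minimal sketch of a single-component stress–strength evaluation, assuming independent normal stress and strength, together with the Monte Carlo check mentioned in the survey; the parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Independent normal strength X and stress Y (illustrative parameters).
mu_X, sd_X = 100.0, 10.0   # strength
mu_Y, sd_Y = 80.0, 12.0    # stress

# Closed form for independent normals:
# R = P(X > Y) = Phi((mu_X - mu_Y) / sqrt(sd_X^2 + sd_Y^2)).
R_exact = norm.cdf((mu_X - mu_Y) / np.hypot(sd_X, sd_Y))

# Monte Carlo check, as used for S-S models with no closed form.
X = rng.normal(mu_X, sd_X, 1_000_000)
Y = rng.normal(mu_Y, sd_Y, 1_000_000)
print(f"exact R = {R_exact:.4f}, simulated R = {(X > Y).mean():.4f}")
```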

Journal ArticleDOI
TL;DR: A data mining based model is used to identify the unproductive activities in a maintenance system comprising independent components (an urban bus network); it supports maintenance decision makers in setting goals to amend the maintenance systems under their supervision.
Abstract: Nowadays, the data collected by maintenance systems make up a large portion of the related databases. Analyzing these maintenance data provides firms, enterprises and organizations with a tremendous competitive edge in both the manufacturing and service sectors. As maintenance management is a costly and inevitable part of an organization, ensuring that maintenance activities are performed in an effective manner is of utmost importance. In other words, organizations can proceed with cost-reduction operations only if the unproductive maintenance activities and processes can be identified; subsequently, rectifying or removing these activities, or taking other means of modification, can help enterprises and organizations reduce their costs. Data mining is known to be an excellent tool that helps decision makers discover hidden knowledge and patterns when dealing with large amounts of data. Seeing a gap in the related literature, and in order to fill it, this study proposes a data mining based model to identify the unproductive maintenance activities in a maintenance system. By identifying specific inefficient maintenance activities, the model supports maintenance decision makers in setting goals to amend the maintenance systems under their supervision. Consequently, organizations can focus on rectifying these fruitless activities and thereby reduce the costs associated with performing them. Finally, the model was used to identify the unproductive activities in a maintenance system comprising independent components (an urban bus network).

Journal ArticleDOI
TL;DR: The profit analysis of a three-unit redundant system, in which two units work in parallel and one unit is kept as spare in cold standby, is carried out by adopting a semi-Markov process and the regenerative point technique.
Abstract: The purpose of this paper is to carry out the profit analysis of a three-unit redundant system in which two units work in parallel and one unit is kept as spare in cold standby. Each unit can fail completely directly from the normal mode. There is a single repairman (called the server) who visits the system immediately to carry out repair, inspection and replacement of units. A unit does not work as new after repair and is therefore called a degraded unit. The degraded unit, on its further failure, undergoes inspection to determine the feasibility of its repair; if repair is not feasible, it is replaced immediately by a new unit. The system is considered to be in an up-state if any two of the original and/or degraded units are operative. The times to failure, repair and inspection of the units are taken as arbitrary with different probability density functions. By adopting a semi-Markov process and the regenerative point technique, the results for some measures of system effectiveness are obtained in steady state.

Journal ArticleDOI
TL;DR: The authors make use of a higher class of Petri nets called stochastic reward nets (SRNs) to model and analyze the FMM system, combining the modeling power of Petri nets and the analytical tractability of Markov processes for carrying out the reliability analysis of the FMM.
Abstract: The machines and robots in a flexible manufacturing module (FMM) are more prone to failures than those in traditional manufacturing systems. Failures due to wear and tear of machine components can be eliminated by properly scheduling preventive maintenance operations, but failures due to random causes are unpredictable and cannot be eliminated; a corrective maintenance strategy is necessary to deal with them, and a lot of production time is lost to such failures. Keeping in mind the high initial investment required to acquire FMM systems, it is necessary to study the effect of system reliability on system performance. Markov models have been used extensively for such studies, but in many cases the system designer or reliability engineer faces a major difficulty in transforming the problem into a Markov chain due to the huge state space associated with the models. To this end, the authors of this paper make use of a higher class of Petri nets called stochastic reward nets (SRNs) to model and analyze the FMM system. The use of SRNs combines the modeling power of Petri nets with the analytical tractability of Markov processes for carrying out the reliability analysis of the FMM.

Journal ArticleDOI
TL;DR: The good dynamic performance of the proposed sensorless maximal wind energy capture and the effectiveness of the rotational speed estimation at different wind speeds are confirmed.
Abstract: This paper presents a sensorless maximum power point tracking control of a brushless doubly fed machine for a variable speed wind energy conversion system, based on an extended Kalman filter (EKF) for the estimation of rotor speed. The estimated rotational speed is used to regulate the generated aerodynamic power to the point of maximum energy capture. The control strategy for flexible power flow control is developed by applying a power-winding flux-oriented vector control algorithm, and the EKF is employed to identify the rotor speed, which is regarded as a parameter. The analysis and simulation results on the MATLAB/Simulink platform confirm the good dynamic performance of the proposed sensorless maximal wind energy capture and the effectiveness of the rotational speed estimation at different wind speeds.

Journal ArticleDOI
TL;DR: A mathematical model using graph theory and the matrix method is developed to evaluate the performance of a gas-based CCPP, to help improve performance, design, maintenance planning, and the selection of new power generation systems.
Abstract: The performance of a combined cycle power plant (CCPP) and the cost of electricity generation per unit are functions of its basic structure (i.e., layout and design), availability (maintenance aspects), efficiency (trained manpower and technically advanced equipment), cost of equipment and maintenance, pollutant emissions and other regulatory aspects. Understanding its structure will help in the improvement of performance, design, maintenance planning, and the selection of new power generation systems. A mathematical model using graph theory and the matrix method is developed to evaluate the performance of a gas-based CCPP. In the graph-theoretic model, a directed graph or digraph is used to represent abstract information about the system using directed edges, which is useful for visual analysis. The matrix model developed from the digraph is useful for computer processing. A detailed methodology for developing the system structure graph, the various system structure matrices, and their permanent functions is described for the combined cycle power plant. A top-down approach for the complete analysis of the CCPP is given.
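A minimal sketch of the permanent-function step, assuming a small hypothetical system structure matrix for the CCPP (diagonal entries: subsystem contributions; off-diagonal entries: digraph interdependencies); the O(n!) expansion is adequate at this size.

```python
from itertools import permutations
from math import prod

def permanent(M):
    """Permanent of a square matrix: the determinant expansion with all
    signs positive. The O(n!) form is adequate for small structure matrices."""
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# Hypothetical 4-subsystem structure matrix (illustrative values only):
# diagonal entries are subsystem performance contributions, off-diagonal
# entries are the directed interdependencies of the digraph.
S = [[5, 3, 0, 2],
     [0, 4, 3, 0],
     [2, 0, 6, 3],
     [0, 2, 0, 5]]
print("Permanent (single performance index):", permanent(S))
```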

Journal ArticleDOI
TL;DR: This paper explores the combination of network coding and node buffering for handling communication failure situations in the 2D mesh network, and results are presented for various cases of communication failures.
Abstract: Parallel network coding is a new communication paradigm that takes advantage of the broadcast characteristics of network coding for parallel architectures. Network coding has recently emerged as an innovative paradigm for optimization problems in large-scale communication between the nodes of these architectures. In previous work, a decentralized approach to optimizing parallel communication with the use of network coding was proposed. In the present paper, the chance of communication failure is evaluated and an efficient solution (PAC: LF–RM) for such situations is proposed. Since communication failures may occur and cannot be avoided, it is shown that buffering at alternate network nodes mitigates such failures and the resulting data loss. This paper explores the combination of network coding and node buffering for handling communication failure situations in the 2D mesh network, and results are presented for various cases of communication failures.

Journal ArticleDOI
TL;DR: This paper models the availability of a repairable system with multiple subsystems, in which the components follow a load sharing strategy, to find the optimal number of repairmen and redundant components in each subsystem for the optimization of availability subject to weight, cost and volume constraints.
Abstract: Redundancy is a well-established technique for enhancing the reliability and availability of a system. This paper models the availability of a repairable system with multiple subsystems in which the components follow a load sharing strategy. In redundancy allocation problems, only the redundancy level is usually considered, whereas in this paper the number of repair facilities (repairmen) is added to the decision variables. The goal is to find the optimal number of repairmen and redundant components in each subsystem so as to optimize availability subject to weight, cost and volume constraints. The main contribution of this study is a straightforward model for the availability optimization of a series system with multiple load sharing subsystems. Two types of decision variables (i.e., the redundancy level and the number of repairmen) enter the objective function directly. The proposed model is a nonlinear program; due to the complexity of the objective function, it is very difficult to code the model in typical applications such as GAMS or LINGO. A particle swarm optimization algorithm is therefore proposed to solve instances of the model. Finally, computational results for three assumed cases with various subsystems are presented to demonstrate the efficiency of the proposed algorithm.
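A minimal sketch of the availability evaluation inside such a model, using a birth-death (machine-repairman) subsystem with a limited number of repairmen; the constant rates and parameter values are assumptions for illustration, and load sharing would additionally raise the failure rate as units fail.

```python
import numpy as np

def subsystem_availability(n, k, lam, mu, R):
    """Steady-state availability of an n-unit subsystem needing k good
    units, with R repairmen (machine-repairman birth-death model).
    State i = number of failed units; failure rate (n - i) * lam,
    repair rate min(i, R) * mu."""
    rho = np.zeros(n + 1)
    rho[0] = 1.0
    for i in range(n):
        birth = (n - i) * lam
        death = min(i + 1, R) * mu
        rho[i + 1] = rho[i] * birth / death
    pi = rho / rho.sum()
    return pi[: n - k + 1].sum()  # available while failed units <= n - k

# Illustrative: 4 units, 2 required, one vs. two repairmen.
for R in (1, 2):
    A = subsystem_availability(n=4, k=2, lam=0.02, mu=0.5, R=R)
    print(f"R = {R}: availability = {A:.5f}")
```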

Journal ArticleDOI
TL;DR: Methods have been developed to find positive correlations between measurements and adjustments by analysing a set of historical product measurements and their subsequent adjustments; only cases containing strong positive correlations will be used by the system.
Abstract: Measurements from products are continuously collected to allow adjustments to the production line that ensure acceptable product quality. Case-based reasoning is a promising methodology for this type of quality assurance: it allows product measurements and the related adjustments to the production line to be stored as cases in a case-based reasoning system. The idea is to describe an event of adjustments based on deviations in geometric measurement points on a product and to connect these measurements to the correlated adjustments made to the production line. Experience will implicitly be stored in each case in the form of measurement points uniquely weighted according to their positive influence on adjustments. Methods have been developed to find these positive correlations between measurements and adjustments by analysing a set of historical product measurements and their subsequent adjustments. Each case saved in the case base is "quality assured" according to these methods, and only cases containing strong positive correlations will be used by the system. The correlations are used to supply each case with its own set of individual weights.
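A minimal sketch of the correlation-based weighting, assuming synthetic historical measurement deviations and adjustments and an illustrative threshold for what counts as a "strong" positive correlation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Historical records: deviations at 5 geometric measurement points and
# the production-line adjustment that followed each (synthetic data).
n = 200
M = rng.normal(0, 1, (n, 5))  # measurement-point deviations
adjust = 0.9 * M[:, 1] + 0.4 * M[:, 3] + rng.normal(0, 0.3, n)

# Pearson correlation of each measurement point with the adjustment.
corr = np.array([np.corrcoef(M[:, j], adjust)[0, 1] for j in range(5)])

# Keep only strong positive correlations as case weights ("quality
# assured" cases); the 0.3 threshold is an assumption for illustration.
w = np.where(corr > 0.3, corr, 0.0)
w = w / w.sum() if w.sum() > 0 else w
print("correlations:", np.round(corr, 2), "\nweights:", np.round(w, 2))
```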