
Showing papers in "Journal of Loss Prevention in The Process Industries in 2013"


Journal ArticleDOI
TL;DR: The results demonstrate that the proposed fuzzy logic model provides more accurate and reliable results, so it can serve as an intelligent risk assessment tool in a range of engineering problems.
Abstract: Limited or missing information, together with uncertainty in modeling and decision making, plays a key role in many engineering problems and often prevents designers and engineers from reaching definitive solutions. In this paper, an application of fuzzy logic for modeling the uncertainty involved in pipeline risk assessment is developed. To this end, the relative risk score (RRS) methodology, one of the most popular techniques in pipeline risk assessment, is integrated with fuzzy logic. The proposed model is implemented in the fuzzy logic toolbox of MATLAB® using the Mamdani algorithm based on experts' knowledge. A typical case study is presented and a comparison between the classical risk assessment approach and the proposed model is made. The results demonstrate that the proposed model provides more accurate and reliable results, so it can serve as an intelligent risk assessment tool in a range of engineering problems.
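For readers unfamiliar with Mamdani inference, the min–max composition and centroid defuzzification at the heart of such a model can be sketched in a few lines. The rule base, membership functions, and scores below are invented for illustration and are not the paper's RRS model:

```python
# Minimal Mamdani-style fuzzy inference sketch (illustrative only; the
# paper's actual RRS rule base and membership functions differ).

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess_risk(leak_score):
    """Map a hypothetical leak-index score (0-10) to a risk score (0-100)."""
    # Fuzzify the input against two linguistic terms.
    low = tri(leak_score, -1, 0, 6)     # "low leak potential"
    high = tri(leak_score, 4, 10, 11)   # "high leak potential"
    # Rules: low leak -> low risk; high leak -> high risk (min implication).
    # Aggregate with max, defuzzify by centroid over a discretized axis.
    num = den = 0.0
    for i in range(101):
        y = float(i)
        mu = max(min(low, tri(y, -1, 0, 60)), min(high, tri(y, 40, 100, 101)))
        num += y * mu
        den += mu
    return num / den if den else 0.0
```

A low leak score then defuzzifies to a low risk score and a high one to a high risk score, which is the qualitative behavior the RRS integration relies on.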

159 citations


Journal ArticleDOI
TL;DR: An application of dynamic Bayesian networks for quantitative risk assessment of human factors on offshore blowouts is presented and the results show that the human factor barrier failure probability only increases within the first two weeks and rapidly reaches a stable level when the repair is considered.
Abstract: An application of dynamic Bayesian networks for quantitative risk assessment of human factors in offshore blowouts is presented. Human error is described using human factor barrier failure (HFBF), which consists of three categories of factors: individual factor barrier failure (IFBF), organizational factor barrier failure (OFBF) and group factor barrier failure (GFBF). The structure of human factors is illustrated using a pseudo-fault tree, defined by incorporating intermediate options into the fault tree in order to eliminate the binary restriction. A methodology for translating the pseudo-fault tree into Bayesian networks, and into dynamic Bayesian networks that take repair into consideration, is proposed and the propagation is performed. The results show that the human factor barrier failure probability increases only within the first two weeks and rapidly reaches a stable level when repair is considered, whereas it increases continuously when repair is not considered. The mutual information results show that the importance ranking of the three categories of human factors with respect to HFBF is: GFBF, OFBF and IFBF. In addition, each individual human factor contributes differently to the HFBF; those that contribute most should be given more attention in order to improve human reliability and prevent potential accidents.
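The qualitative repair finding can be reproduced with a toy two-state Markov chain, a heavily simplified stand-in for the paper's dynamic Bayesian network. The daily failure and repair probabilities below are invented:

```python
# Toy discrete-time model (one barrier, made-up rates) illustrating the
# finding: with repair the failure probability stabilizes quickly,
# without repair it keeps growing toward 1.

def failure_prob(days, fail_rate=0.05, repair_rate=0.5):
    """P(barrier failed) after each day, for a two-state Markov chain with
    per-day failure and repair probabilities."""
    p = 0.0  # probability of being in the failed state; start working
    history = []
    for _ in range(days):
        # Either already failed and not yet repaired, or fails today.
        p = p * (1 - repair_rate) + (1 - p) * fail_rate
        history.append(p)
    return history

with_repair = failure_prob(60)                  # repair considered
no_repair = failure_prob(60, repair_rate=0.0)   # repair not considered
```

With repair, the probability settles at the steady state fail/(fail+repair); without repair it climbs monotonically, mirroring the two curves described in the abstract.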

128 citations


Journal ArticleDOI
TL;DR: A hybrid approach of fuzzy set theory and fault tree analysis is investigated to quantify the COTFE fault tree in a fuzzy environment and evaluate the COTFE occurrence probability.
Abstract: Crude oil tank fire and explosion (COTFE) is the most frequent type of accident in petroleum refineries, oil terminals, and storage facilities, and often results in human fatalities, environmental pollution, and economic loss. In this paper, various potential causes of COTFE are identified through qualitative fault tree analysis and a COTFE fault tree is constructed. Conventional quantitative fault tree analysis calculates the occurrence probability of COTFE from exact probability data for the basic events. However, it is often very difficult to obtain such precise data in advance owing to insufficient records, changing environments, or new components. Fuzzy set theory has proven effective for such uncertain problems. Hence, this article investigates a hybrid approach of fuzzy set theory and fault tree analysis to quantify the COTFE fault tree in a fuzzy environment and evaluate the COTFE occurrence probability. Further, importance analysis of the COTFE fault tree, including the Fussell–Vesely importance measure of basic events and the cut-set importance measure, is performed to help identify the weak links of the crude oil tank system that would provide the most cost-effective mitigation. A case study is also provided to validate the proposed method.
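A common way to propagate fuzzy probabilities through fault tree gates is to represent each basic event as a triangular fuzzy number and apply the gate formulas component-wise. The event names and numbers below are invented for illustration; the paper's COTFE tree and data are not reproduced here:

```python
# Fuzzy gate operations on triangular fuzzy probabilities (a, m, b) =
# (lower bound, modal value, upper bound). Hypothetical basic events.

def f_and(*events):
    """AND gate: component-wise product of triangular fuzzy numbers
    (a standard approximation for independent events)."""
    a = m = b = 1.0
    for (ea, em, eb) in events:
        a, m, b = a * ea, m * em, b * eb
    return (a, m, b)

def f_or(*events):
    """OR gate: 1 - prod(1 - p), applied component-wise."""
    a = m = b = 1.0
    for (ea, em, eb) in events:
        a, m, b = a * (1 - ea), m * (1 - em), b * (1 - eb)
    return (1 - a, 1 - m, 1 - b)

leak = (0.01, 0.02, 0.04)        # hypothetical basic events
ignition = (0.05, 0.10, 0.20)
static = (0.02, 0.03, 0.05)
# Top event: leak AND (any ignition source present).
top = f_and(f_or(ignition, static), leak)
```

The top event comes out as a fuzzy interval rather than a single point, which is exactly the uncertainty representation the abstract motivates.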

128 citations


Journal ArticleDOI
TL;DR: In this paper, coal dust and the suppressing agent were injected into the experimental tube by the dust dispersion units, and a self-sustained detonation wave characterized by the existence of a transverse wave was propagated in the coal dust/air mixtures.
Abstract: Methane/coal dust/air explosions under strong ignition conditions have been studied in a 199 mm inner diameter and 30.8 m long horizontal tube. A fuel gas/air manifold assembly was used to introduce methane and air into the experimental tube, and an array of 44 equally spaced dust dispersion units was used to disperse coal dust particles into the tube. The methane/coal dust/air mixture was ignited by a 7 m long epoxypropane mist cloud explosion. A deflagration-to-detonation transition (DDT) was observed, and a self-sustained detonation wave characterized by the existence of a transverse wave was propagated in the methane/coal dust/air mixtures. The suppressing effects on methane/coal dust/air mixture explosions of three solid particle suppressing agents have been studied. Coal dust and the suppressing agent were injected into the experimental tube by the dust dispersion units. The length of the suppression was 14 m. The suppression agents examined in this study comprised ABC powder, SiO2 powder, and rock dust powder (CaCO3). Methane/coal dust/air explosions can be efficiently suppressed by the suppression agents characterized by the rapid decrease in overpressure and propagating velocity of the explosion waves.

107 citations


Journal ArticleDOI
TL;DR: In this paper, a PCA-based generalized likelihood ratio (GLR) fault detection algorithm was developed to exploit the advantages of the GLR test in the absence of a process model.
Abstract: Safe process operation requires effective fault detection (FD) methods that can identify faults in various process parameters. In the absence of a process model, principal component analysis (PCA) has been successfully used as a data-based FD technique for highly correlated process variables. PCA detection indices include the T² and Q statistics, each of which has its advantages and disadvantages. When a process model is available, however, the generalized likelihood ratio (GLR) test, a statistical hypothesis testing method, has shown good fault detection abilities. In this work, a PCA-based GLR fault detection algorithm is developed to exploit the advantages of the GLR test in the absence of a process model: PCA provides the modeling framework for the developed fault detection algorithm. The PCA-based GLR algorithm provides optimal properties by maximizing the detection probability of faults for a given false alarm rate. Its performance is illustrated and compared to conventional fault detection methods through two simulated examples, one using synthetic data and the other using simulated continuously stirred tank reactor (CSTR) data. The results of these examples clearly show the effectiveness of the developed algorithm over conventional methods.
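The GLR step can be illustrated in isolation. For zero-mean Gaussian residuals with known variance, maximizing the likelihood over an unknown constant mean shift yields a closed-form statistic. The sketch below (invented residual values) shows only this step, not the PCA modeling or the optimal threshold selection developed in the paper:

```python
# GLR test sketch for a mean shift in (already PCA-projected) residuals.
# Assumes zero-mean Gaussian residuals with known sigma under H0; the
# values are illustrative.

def glr_statistic(residuals, sigma=1.0):
    """GLR statistic for H0: mean 0 vs H1: unknown constant mean.
    Maximizing the likelihood over the unknown mean gives
    n * mean^2 / (2 * sigma^2)."""
    n = len(residuals)
    mean = sum(residuals) / n
    return n * mean * mean / (2.0 * sigma * sigma)

normal = [0.1, -0.2, 0.05, -0.1, 0.15]   # fault-free residual window
faulty = [1.1, 0.9, 1.2, 1.0, 0.8]       # window with a mean shift
```

Comparing the statistic against a threshold chosen for a target false alarm rate is what gives the GLR test its stated optimality.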

103 citations


Journal ArticleDOI
TL;DR: In this paper, a mathematical model for self-ignition of coarse coal piles was developed and the process of spontaneous ignition of coal stockpiles in temporary coal storage yards was investigated numerically using COMSOL multiphysics software.
Abstract: Spontaneous combustion of coarse coal stockpiles in temporary coal storage yards was investigated numerically using COMSOL Multiphysics software. The main purposes of the numerical investigation were to identify the self-ignition characteristics of coarse coal stockpiles and to formulate a theoretical model predicting the self-ignition time and locations of coarse coal piles. A mathematical model for self-ignition of coarse coal piles was developed and the process of spontaneous ignition of coarse coal stockpiles was simulated. The kinetic data for the low-temperature oxidation reaction were obtained from laboratory-scale experiments with bituminous coals taken from Jindi Coal Mine of Shanxi Province in China. The influence of moisture was ignored because the studied coal had a low moisture content (mass concentration: 1.87%), and both the coal and the ambient environment were assumed to be saturated with moisture (or the ambient environment was assumed to be dry). The effects of five variables (wind velocity, oxygen concentration, height, porosity, and side slope) on spontaneous ignition in coarse coal piles were examined, and a theoretical prediction model was formulated from the variable analyses and a large number of simulations. Compared to fine-particle coal piles, several distinct self-ignition characteristics of coarse coal piles were identified. Wind-driven forced convection plays a predominant role in the self-heating of coarse coal piles. As wind velocity increases, the self-ignition location migrates from the lower part of the pile, close to the surface on the windward side, to the upper part near the surface on the leeward side. An increase in wind velocity can have either a positive or a negative effect on self-heating, depending on the critical wind velocity that balances heat retention against oxygen availability in the coarse coal pile.
The self-ignition behavior is remarkably sensitive to both oxygen concentration and height, and a coarse coal stockpile will not ignite spontaneously as long as either of two critical conditions is met: an oxygen concentration of 5% or a height of 3 m. The theoretical prediction model indicates, with engineering accuracy, when and where countermeasures should be taken to prevent self-ignition in the coal stockpile.

89 citations


Journal ArticleDOI
TL;DR: Explosibility of micron- and nano-titanium was determined and compared according to explosion severity and likelihood using standard dust explosion equipment: a Siwek 20-L explosion chamber, a MIKE 3 apparatus, and a BAM oven.
Abstract: Explosibility of micron- and nano-titanium was determined and compared according to explosion severity and likelihood using standard dust explosion equipment. ASTM methods were followed using a Siwek 20-L explosion chamber, MIKE 3 apparatus and BAM oven. The explosibility parameters investigated for both size ranges of titanium include explosion severity (maximum explosion pressure (Pmax) and size-normalized maximum rate of pressure rise (KSt)) and explosion likelihood (minimum explosible concentration (MEC), minimum ignition energy (MIE) and minimum ignition temperature (MIT)). Titanium particle sizes were −100 mesh (
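The size-normalized severity parameter mentioned above follows the cube-root law KSt = (dP/dt)max · V^(1/3), which is what lets 20-L chamber measurements be compared across vessel volumes. The example rate of pressure rise below is illustrative, not one of the titanium measurements:

```python
# Cube-root law for the size-normalized maximum rate of pressure rise.
# Example input value is hypothetical, not the paper's titanium data.

def k_st(dp_dt_max_bar_s, volume_m3=0.020):
    """K_St in bar.m/s from (dP/dt)max in bar/s and vessel volume in m^3
    (default: the 20-L Siwek sphere)."""
    return dp_dt_max_bar_s * volume_m3 ** (1.0 / 3.0)

k = k_st(550.0)   # hypothetical (dP/dt)max of 550 bar/s in a 20-L sphere
```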

87 citations


Journal ArticleDOI
TL;DR: DyPASI serves as a tool to support the emerging-risk management process, with the potential to contribute to an integrated approach aimed at breaking "vicious circles" and to trigger a gradual process of identification and assimilation of previously unrecognised atypical scenarios.
Abstract: The availability of a hazard identification methodology based on early warnings is a crucial factor in the identification of emerging risks. In the present study, a specific method named Dynamic Procedure for Atypical Scenarios Identification (DyPASI) was conceived as a development of bow-tie identification techniques. The main aim of the methodology is to provide a comprehensive hazard identification of the industrial process analysed, joined to a process of continuous improvement of the assessment results. DyPASI is a method for the continuous systematization of information from early signals of risk related to past events. The technique supports the identification and assessment of atypical potential accident scenarios related to the substances, equipment and site considered, capturing available early warnings or risk notions. DyPASI thus serves as a tool to support the emerging-risk management process, with the potential to contribute to an integrated approach aimed at breaking "vicious circles" and to trigger a gradual process of identification and assimilation of previously unrecognised atypical scenarios.

87 citations


Journal ArticleDOI
TL;DR: In this article, the authors reviewed and analyzed six categories of pitting knowledge to assess the current depth and breadth of understanding and to identify knowledge gaps in each category, and they found that the depth of knowledge on pitting corrosion rate modeling and pitting mechanism is limited and requires further detailed study.
Abstract: Corrosion under insulation (CUI) is an important issue in marine environments, and pitting corrosion is a significant contributor to it. The ability to understand and model pitting behavior is integral to designing and maintaining assets in marine environments to decrease costs and increase safety and productivity. This paper reviews and analyses six categories of pitting knowledge to assess the current depth and breadth of understanding and to identify knowledge gaps in each category. The categories investigated are: identification of pitting, experimental methods, mechanism of pitting, modeling of pitting corrosion rates, remaining-life assessment modeling, and risk-based inspection. This analysis finds that the depth of knowledge on pitting corrosion rate modeling and the pitting mechanism is limited and requires further detailed study. The outcome of such study will strengthen pitting corrosion rate modeling, the accuracy of fitness-for-service assessments, and risk-based inspection strategies.

84 citations


Journal ArticleDOI
TL;DR: Methane–air detonation experiments were performed to characterize high-pressure explosion processes that may occur in sealed areas of underground coal mines. The detonations propagated with an average velocity between 1512 and 1863 m/s.
Abstract: Methane–air detonation experiments were performed to characterize high-pressure explosion processes that may occur in sealed areas of underground coal mines. The detonation tube used for these studies is 73 m long, 105 cm in internal diameter, and closed at one end. The test gas was 97.5% methane with about 1.5% ethane, and the methane–air test mixtures varied between 4% and 19% methane by volume. Detonations were successfully initiated for mixtures containing between 5.3% and 15.5% methane, and propagated with an average velocity between 1512 and 1863 m/s. Average overpressures recorded behind the first shock pressure peak varied between 1.2 and 1.7 MPa. The measured detonation velocities and pressures are close to their corresponding theoretical Chapman–Jouguet (CJ) detonation velocity (DCJ) and detonation pressure (PCJ). Outside of these detonability limits, failed detonations produced decaying detached shocks and flames propagating with velocities of approximately 1/2 DCJ. Cell patterns on smoke foils during detonations were very irregular and showed secondary cell structures inside primary cells. The measured width of primary cells varied between 20 cm near stoichiometry and 105 cm (the tube diameter) near the limits. The largest detonation cell (105 cm wide and 170 cm long) was recorded for the mixture containing 15.3% methane.

83 citations


Journal ArticleDOI
TL;DR: In this paper, a fuzzy mathematical model for assessment of organizational resilience potential in SMEs of the process industry is presented, which forms the basis for a survey that may include a significant number of organizations from one region and future improvement based on benchmark and knowledge sharing.
Abstract: In order to establish adequate tools for the modern business environment, and in response to the need for new mechanisms to overcome crises and emerging disorder, the concept of organizational resilience has emerged. A high level of organizational resilience is one of an organization's target values during normal operation. In a period of crisis, resilience is needed even more; this is especially true in the process industry, where the failure of one process may cause significant problems in others. The contribution of this paper is a fuzzy mathematical model for assessing organizational resilience potential in SMEs of the process industry. The model is verified through an illustrative example in which the obtained data suggest measures that should enhance business strategy and improve organizational resilience factors. This study forms the basis for a survey that may include a significant number of organizations from one region, and for future improvement based on benchmarking and knowledge sharing.

Journal ArticleDOI
TL;DR: In this paper, a two-dimensional staggered array of square obstacles is modeled by solving the compressible multidimensional reactive Navier-Stokes equations, and the energy release rate for a stoichiometric H 2 -air mixture was modeled by a one-step Arrhenius kinetics.
Abstract: We study flame acceleration and DDT in a two-dimensional staggered array of square obstacles by solving the compressible multidimensional reactive Navier–Stokes equations. The energy release rate for a stoichiometric H 2 -air mixture is modeled by a one-step Arrhenius kinetics. The space between obstacles is filled with a stoichiometric H 2 -air mixture at 1 atm and 298 K. Initially, the flow is at rest, and a flame is ignited at the center of the array. Computations show effects of the obstacles as a series of events leading to DDT. During the initial flame acceleration, the speed of the flame depends on the direction of flame propagation since some directions are more obstructed than others. This affects the macroscopic shape of the expanding burned region, which forms concave boundaries in more obstructed directions. As the flame accelerates, shocks form ahead of the flame, reflect from obstacles, and interact with the flame. There are more shock–flame interactions in more obstructed directions, and this leads to a greater flame acceleration and stronger leading shocks. When the shocks become strong enough, their collisions with obstacles ignite the gas mixture, and detonations form. The simulation shows four independent DDT events within a 90-degree sector, all in more obstructed directions. Resulting detonations spread in all directions. Some parts of detonation fronts are quenched by diffractions around obstacles, but they are reignited by collisions of decoupled shocks, or overtaken by other detonations. Thus detonations continue to spread and quickly burn all the material between the obstacles.
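The one-step Arrhenius kinetics mentioned above reduces the chemistry to a single reaction rate driven by temperature and remaining reactant. The sketch below shows the form of that source term; the pre-exponential factor and activation energy are placeholders, not the calibrated H2-air values used in the simulations:

```python
# One-step Arrhenius energy-release model of the kind used in DDT
# simulations. Constants are illustrative placeholders.
import math

R = 8.314  # universal gas constant, J/(mol K)

def reaction_rate(Y, T, A=1.0e9, Ea=1.2e5):
    """Magnitude of dY/dt = -A * Y * exp(-Ea / (R*T)) for reactant mass
    fraction Y at temperature T (K)."""
    return A * Y * math.exp(-Ea / (R * T))

cold = reaction_rate(1.0, 298.0)    # unburned gas at ambient temperature
hot = reaction_rate(1.0, 1500.0)    # shock-heated gas
```

The enormous rate ratio between cold and shocked gas is what makes shock reflections off obstacles able to ignite the mixture and trigger the local DDT events described in the abstract.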

Journal ArticleDOI
TL;DR: Details of a Bayesian approach to quantitative risk analysis are described and some examples of both discrete random variables, such as the probability values in a LOPA, and continuous distributions, which can better reflect the uncertainty in data are provided.
Abstract: Quantitative risk analysis is in principle an ideal method to map one’s risks, but it has limitations due to the complexity of models, scarcity of data, remaining uncertainties, and above all because effort, cost, and time requirements are heavy. Also, software is not cheap, the calculations are not quite transparent, and the flexibility to look at various scenarios and at preventive and protective options is limited. So, the method is considered as a last resort for determination of risks. Simpler methods such as LOPA that focus on a particular scenario and assessment of protection for a defined initiating event are more popular. LOPA may however not cover the whole range of credible scenarios, and calamitous surprises may emerge. In the past few decades, Artificial Intelligence university groups, such as the Decision Systems Laboratory of the University of Pittsburgh, have developed Bayesian approaches to support decision making in situations where one has to weigh gains and costs versus risks. This paper will describe details of such an approach and will provide some examples of both discrete random variables, such as the probability values in a LOPA, and continuous distributions, which can better reflect the uncertainty in data.
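The LOPA arithmetic that the Bayesian treatment generalizes is simple to state: the mitigated scenario frequency is the initiating-event frequency times the product of the probabilities of failure on demand (PFDs) of the independent protection layers. The frequencies and PFDs below are invented; the paper's contribution is to replace such point values with probability distributions:

```python
# Minimal LOPA-style point calculation with hypothetical numbers.

def mitigated_frequency(initiating_freq, pfds):
    """Mitigated scenario frequency = initiating-event frequency
    multiplied by the PFD of each independent protection layer."""
    f = initiating_freq
    for pfd in pfds:
        f *= pfd
    return f

# e.g. initiating event 0.1/yr, with layers BPCS (PFD 0.1),
# alarm plus operator response (0.1), and relief valve (0.01).
f = mitigated_frequency(0.1, [0.1, 0.1, 0.01])
```

In the Bayesian version each of these point values becomes a random variable, so the output is a distribution over frequencies rather than the single number computed here.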

Journal ArticleDOI
TL;DR: In this article, the authors quantify the potential overpressures due to vapour cloud explosions (VCEs) using the Process Hazard Analysis DNV Norway based PHAST 6.51 Software.
Abstract: On 29 October 2009, at 19:30 IST, a devastating vapour cloud explosion occurred in a large fuel storage area at the Indian Oil Corporation (IOC) Depot in Jaipur, India, generating significant blast pressure. As a consequence of this explosion, the entire installation was destroyed, buildings in the immediate vicinity were heavily damaged, and windowpane breakages were found up to 2 km from the terminal. The IOC estimated that the total loss from the fire and explosion was approximately INR 2800 million. Ironically, as a storage site, the Jaipur terminal was not highly congested, and thus was not considered to have adequate potential for a vapour cloud explosion (VCE). Nevertheless, the prima facie evidences indicate that this was a case of VCE. Therefore, the main objective of this study is to quantify the potential overpressures due to vapour cloud explosions (VCEs) using the Process Hazard Analysis DNV Norway based PHAST 6.51 Software. The results are validated by the extent of the damage that had occurred. The estimation of the VCE shows that a maximum 1.0 bar overpressure was generated in the surrounding area. The initial assessment of the accident data roughly estimates the release mode, time, and amount of vaporized fuel. A more accurate estimate has been obtained by modelling the dispersion of vapour clouds in the surrounding atmosphere, which reveals trends and relationships for the occurrence of vapour cloud explosions.

Journal ArticleDOI
TL;DR: The proposed model integrates HFACS with Bayesian Networks to identify accident causes, ranks the derived prevention measures from a cost-effectiveness perspective through the Best-Fit method and the Evidential Reasoning approach, and provides accident investigators with a tool to generate cost-efficient safety intervention strategies.
Abstract: In this paper, an accident analysis model is proposed to develop cost-efficient safety measures for preventing accidents. The model comprises two parts. In the first part, a quantitative accident analysis model is built by integrating the Human Factors Analysis and Classification System (HFACS) with a Bayesian Network (BN), which can be utilized to derive the corresponding prevention measures. In the second part, the proposed prevention measures are ranked from a cost-effectiveness perspective through the Best-Fit method and the Evidential Reasoning (ER) approach. A case study of a vessel collision is analyzed as an illustration. The case study shows that the proposed model can be used to seek out accident causes and rank the derived safety measures from a cost-effectiveness perspective. The proposed model can provide accident investigators with a tool to generate cost-efficient safety intervention strategies.
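The diagnostic direction of such a Bayesian network, inferring the probability of a cause given that an accident occurred, reduces at its smallest scale to Bayes' rule on a two-node fragment. All numbers below are invented for illustration and are not the paper's conditional probability tables:

```python
# Two-node Bayesian-network fragment (hypothetical numbers) showing the
# diagnostic inference an HFACS-BN model performs.

def posterior(prior, p_acc_given_cause, p_acc_given_not):
    """P(cause | accident) via Bayes' rule."""
    num = prior * p_acc_given_cause
    den = num + (1 - prior) * p_acc_given_not
    return num / den

# Prior P(human-error condition) = 0.3; accident far more likely when
# the condition is present (0.2 vs 0.02).
p = posterior(prior=0.3, p_acc_given_cause=0.2, p_acc_given_not=0.02)
```

Observing the accident raises the cause probability sharply, which is how the full model "seeks out" the dominant HFACS-level causes before the measures are ranked.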

Journal ArticleDOI
TL;DR: In this article, the authors used the data mining classification and regression tree (CART) to examine the distribution and rules governing the factors of major occupational accidents in the petrochemical industry.
Abstract: Accidents in the petrochemical industry frequently result in serious social issues. Behind every occupational accident are safety management problems requiring investigation. This study collected 349 cases of major occupational accidents in the petrochemical industry in Taiwan between 2000 and 2010 for analysis. Using descriptive statistics, we elucidated the factor distribution of these major occupational accidents. The data-mining classification and regression tree (CART) was used to examine the distribution of, and rules governing, the accident factors. This study found that for equipment such as pipelines and control valves, devising high-quality safety and protective device, maintenance, and renewal plans and pipeline setup and design plans can effectively prevent accidents such as fires, explosions, and poisoning caused by material leakage, as well as employees being caught in or rolled up in machinery. Furthermore, implementing safety management measures, such as worker safety educational training, and enforcing inspection, operation, and risk assessment standards for personnel, has become an important factor in accident prevention. This study suggests the following measures: for abnormal conditions such as pipeline cracking, damage, or rusting, high temperatures caused by material leaking into the inner protective layer of pipelines should be prevented; where pipelines overlap, rusting caused by pipelines touching each other should be avoided; and maintenance and repair should be performed to ensure the safety of work environments. These measures can eliminate the risk of work injuries and the resulting social issues.
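The core of CART is choosing, at each node, the binary split that minimizes the weighted impurity of the two children. The toy data below (a made-up feature and labels, not the study's 349-case data set) illustrates only that mechanic:

```python
# Core of a CART split: pick the threshold minimizing weighted Gini
# impurity. Feature values and labels are invented for illustration.

def gini(labels):
    """Gini impurity 1 - sum(p_k^2) of a label list."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(xs, ys):
    """Return (threshold, weighted child impurity) for the best x <= t split."""
    best = (None, float('inf'))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

xs = [1, 2, 3, 10, 11, 12]   # e.g. a hypothetical "years since inspection"
ys = ['ok', 'ok', 'ok', 'leak', 'leak', 'leak']
threshold, impurity = best_split(xs, ys)
```

Recursing this split on each child and reading the resulting paths as rules is what produces the accident-factor rules the study reports.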

Journal ArticleDOI
TL;DR: In this paper, a new approach to risk management in mining projects is presented based on a novel concept called "hazard concentration" and on the multi-criteria analysis method known as the Analytic Hierarchy Process (AHP).
Abstract: The mining industry worldwide is currently experiencing an economic boom that is contributing to economic recovery and social progress in many countries. For this to continue, the mining industry must meet several challenges associated with the start-up of new projects. In a highly complex and uncertain environment, rigorous management of risks remains indispensable in order to repel threats to the success of mining. In this article, a new practical approach to risk management in mining projects is presented. This approach is based on a novel concept called “hazard concentration” and on the multi-criteria analysis method known as the Analytic Hierarchy Process (AHP). The aim of the study is to extend the use of this approach to goldmines throughout Quebec. The work is part of a larger research project of which the aim is to propose a method suitable for managing practically all risks inherent in mining projects. This study shows the importance of taking occupational health and safety (OHS) into account in all operational activities of the mine. All project risks identified by the team can be evaluated. An adaptable database cataloguing about 250 potential hazards in an underground goldmine was constructed. In spite of limitations, the results obtained in this study are potentially applicable throughout the Quebec mining sector.
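The AHP step turns pairwise importance judgments into criterion weights. The sketch below uses the common column-normalization approximation to the principal eigenvector; the 3x3 matrix and criteria names are invented, not the study's hazard hierarchy:

```python
# AHP priority weights from a pairwise-comparison matrix, using the
# column-normalization approximation. Matrix values are illustrative.

def ahp_weights(matrix):
    """Normalize each column, then average across rows to get weights."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# Hypothetical judgments over criteria (OHS risk, cost, schedule):
# "OHS risk is 3x as important as cost and 5x as important as schedule."
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]
w = ahp_weights(A)
```

In a full AHP application one would also compute the consistency ratio of the matrix before accepting the weights; that check is omitted here for brevity.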

Journal ArticleDOI
TL;DR: In this paper, a vented cylindrical vessel 162 mm in diameter and 4.5 m long was used to study the effect of the separation distance of two low-blockage (30%) obstacles.
Abstract: The separation distance (or pitch) between two successive obstacles or rows of obstacles is an important parameter in the acceleration of flame propagation and the increase in explosion severity. Whilst this is generally recognised, it has received little specific attention from investigators. In this work a vented cylindrical vessel, 162 mm in diameter and 4.5 m long, was used to study the effect of the separation distance of two low-blockage (30%) obstacles. The setup was demonstrated to produce overpressure through the fast flame speeds generated (i.e. by a mechanism similar to vapour cloud explosions). A worst-case separation distance of 1.75 m was found, which produced close to 3 bar overpressure and a flame speed of about 500 m/s. These values were of the order of twice the overpressure and flame speed obtained with the two obstacles separated by 2.75 m (83 characteristic obstacle length scales). The profile of effects with separation distance was shown to agree with the turbulence profile determined in cold flows by other researchers. However, the present results showed that the maximum effect in explosions is experienced further downstream than the position of maximum turbulence determined in the cold-flow studies. It is suggested that this may be due to convection of the turbulence profile by the propagating flame. The present results suggest that many previous studies of repeated obstacles may not have included the worst-case separation distance, and therefore existing explosion protection guidelines may not be derived from worst-case scenarios.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new barrier-based accident model for offshore drilling blowout, which is based on the three-level well control theory, and primary and secondary well control barriers and an extra well monitoring barrier are established between the reservoir and the blowout event.
Abstract: Blowout is one of the most serious accidents in the offshore oil and gas industry. Accident records show that most of the offshore blowouts have occurred in the drilling phase. Efficient measures to prevent, mitigate, and control offshore drilling blowouts are important for the entire offshore oil and gas industry. This article proposes a new barrier-based accident model for drilling blowouts. The model is based on the three-level well control theory, and primary and secondary well control barriers and an extra well monitoring barrier are established between the reservoir and the blowout event. The three barriers are illustrated in a graphical model that is similar to the well-known Swiss cheese model. Five additional barriers are established to mitigate and control the blowout accident, and event tree analysis is used to analyze the possible consequence chains. Based on statistical data and literature reviews, failures of each barrier are presented. These failures can be used as guidance for offshore drilling operators to become aware of the vulnerabilities of the safety barrier system, and to assess the risk related to these barriers. The Macondo accident is used as a case study to show how the new model can be used to understand the development of the events leading to the accident. The model can also be used as an aid to prevent future blowouts or to stop the escalation of events.
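The event-tree arithmetic behind such consequence chains is a running product over sequential barriers: each outcome branch corresponds to some barriers failing and the next one holding. The initiating frequency and barrier failure probabilities below are invented, not the paper's statistical data:

```python
# Minimal event-tree calculation over sequential safety barriers
# (hypothetical numbers).

def event_tree(init_freq, barrier_fail_probs):
    """Return outcome frequencies. Entry k means the first k barriers
    failed and barrier k+1 held; the last entry means all barriers
    failed (the blowout branch)."""
    outcomes = []
    f = init_freq
    for p in barrier_fail_probs:
        outcomes.append(f * (1 - p))  # this barrier holds: contained here
        f *= p                        # escalation past this barrier
    outcomes.append(f)                # every barrier failed
    return outcomes

# e.g. kick frequency 0.1 per well, then well monitoring, primary well
# control, and secondary well control barriers.
freqs = event_tree(0.1, [0.2, 0.1, 0.05])
```

The branch frequencies sum back to the initiating frequency, and the final (all-barriers-failed) branch gives the blowout frequency that the mitigation barriers then act on.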

Journal ArticleDOI
TL;DR: A new and effective method to locate multiple pipeline leaks is presented, showing that the improved PSO-SVM can be used as a powerful tool for studying pipeline leaks.
Abstract: An improved and integrated approach combining a support vector machine with particle swarm optimization (PSO-SVM) is first used to detect the leak locations of pipelines and overcome the problem of multiple leaks. The calibration and predictive ability of the improved PSO-SVM is investigated and compared with that of another common method, the back-propagation neural network (BPNN). Two conditions are evaluated: one with a single leak, involving a set of 20 samples, and another with two leaks, comprising 127 samples. Both internal and external validations are performed to validate the performance of the resulting models. The results show that, for both conditions, the values calculated by the improved PSO-SVM are in good agreement with those simulated by the transient model, and the performance of the improved PSO-SVM models is superior to that of the BPNN. This paper provides a new and effective method to locate multiple leaks, and also shows that the improved PSO-SVM can be used as a powerful tool for studying pipeline leaks.
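In such hybrids, PSO's role is to search the SVM hyperparameter space by minimizing a validation-error objective. The bare-bones swarm below minimizes a simple quadratic as a stand-in for that objective; all coefficients and the objective itself are illustrative, not the paper's improved PSO variant:

```python
# Bare-bones particle swarm optimizer, of the kind used to tune SVM
# hyperparameters. The objective here is a stand-in quadratic.
import random

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + cognitive + social velocity update.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective with known minimum at (2, -1); in PSO-SVM this would
# be the cross-validation error as a function of the SVM hyperparameters.
best, best_val = pso(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2, dim=2)
```

Swapping the quadratic for an SVM training-and-validation routine over, say, (C, gamma) recovers the overall PSO-SVM tuning scheme.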

Journal ArticleDOI
TL;DR: In this paper, a gaseous kerosene-air mixture with a liquid fuel temperature of 60°C and a fixed spark gap of 3.3 mm was tested for the possibility of ignition.
Abstract: Quantifying the risk of accidental ignition of flammable mixtures is extremely important in industry and aviation safety. The concept of a minimum ignition energy (MIE), obtained using a capacitive spark discharge ignition source, has traditionally formed the basis for determining the hazard posed by fuels. While extensive tabulations of historical MIE data exist, there has been little work done on ignition of realistic industrial and aviation fuels, such as gasoline or kerosene. In the current work, spark ignition tests are performed in a gaseous kerosene–air mixture with a liquid fuel temperature of 60°C and a fixed spark gap of 3.3 mm. The required ignition energy was examined, and a range of spark energies over which there is a probability of ignition was identified and compared with previous test results in Jet A (aviation kerosene). The kerosene results are also compared with ignition test results obtained in previous work for traditional hydrogen-based surrogate mixtures used in safety testing, as well as two hexane–air mixtures. Additionally, the statistical nature of spark ignition is discussed.
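For a capacitive discharge, the nominal spark energy is the energy stored on the capacitor, E = ½CV²; this is the standard basis for reporting MIE values, although the energy actually delivered to the gas is somewhat lower. A minimal calculation:

```python
def spark_energy_joules(capacitance_f, voltage_v):
    """Nominal stored energy of a capacitive spark: E = 0.5 * C * V**2."""
    return 0.5 * capacitance_f * voltage_v ** 2

# A 100 pF capacitor charged to 10 kV stores 5 mJ:
e = spark_energy_joules(100e-12, 10e3)
```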

Journal ArticleDOI
TL;DR: The Work and Accident Process (WAP) classification scheme is proposed to investigate how maintenance impacts the occurrence of major accidents and to classify the causes of maintenance-related major accidents.
Abstract: The potential for major accidents is inherent in most industries that handle or store hazardous substances, e.g. the hydrocarbon and chemical process industries. Several major accidents have been experienced over the past three decades. The Flixborough Disaster (1974), Seveso Disaster (1976), Alexander Kielland Disaster (1980), Bhopal Gas Tragedy (1984), Sandoz Chemical Spill (1986), Piper Alpha Disaster (1988), Phillips 66 Disaster (1989), Esso Longford Gas Explosion (1998), Texas City Refinery Explosion (2005), and most recently the Macondo Blowout (2010) are a few examples of accidents with devastating consequences. Causes are being exposed over time, but in recent years the influence of maintenance has tended to be given less attention. However, given that some major accidents are maintenance-related, we intend to concentrate on classifying them to give better insight into the underlying and contributing causes. A high degree of technological and organizational complexity is an attribute of these industries, and in order to control the risk, it is common to deploy multiple and independent safety barriers whose integrity cannot be maintained without an adequate level of maintenance. However, maintenance may have a negative effect on barrier performance if its execution is incorrect, insufficient, delayed, or excessive. Maintenance can also be the triggering event. The objectives of this article are: (1) to investigate how maintenance impacts the occurrence of major accidents, and (2) to develop classification schemes for causes of maintenance-related major accidents. The paper builds primarily on model-based and empirical approaches, the latter being applied to reports on accident investigation and analysis. Based on this, the Work and Accident Process (WAP) classification scheme is proposed.

Journal ArticleDOI
TL;DR: In this paper, the potential explosion consequences from any non-homogeneous gas cloud can be approximated by exploding a smaller gas cloud at stoichiometric concentration, which is the equivalent cloud method used in this paper.
Abstract: The reactivity of a flammable gas mixture depends strongly on the concentration. Explosions can only take place between the flammability limits LFL and UFL (5%–14% for methane), with by far the strongest explosions occurring near stoichiometry. When performing explosion studies to evaluate or minimize risk, optimize the design, or identify ways to mitigate, many different approaches exist. Worst-case approaches assuming stoichiometric gas clouds filling the entire facility are often much too conservative and may lead to very expensive solutions. More refined approaches studying release scenarios leading to flammable clouds can give a more precise description of the risk (probabilistic approach) or of worst-case consequences (realistic worst-case study). One main challenge with such approaches is that there can be thousands of potential release scenarios to study, e.g. variations of release location, direction, rate profile, wind direction and strength. For each resulting gas cloud there can further be thousands of explosion scenarios, as the transient non-homogeneous gas cloud can be ignited at a number of different locations and times. To reduce the number of explosion scenarios, in the early 1990s GexCon developed a concept called Equivalent Stoichiometric Clouds (ESC, initially called Erfac, later modified to Q5 and Q9) to linearize the expected hazards from arbitrary non-homogeneous, dispersed flammable gas clouds. The idea is that the potential explosion consequences from any non-homogeneous gas cloud can be approximated by exploding a smaller gas cloud at stoichiometric concentration. These concepts are in extensive use in explosion risk and consequence assessments. For probabilistic assessments, all transient dispersion scenarios modeled may for each time step be given an ignition probability and an equivalent cloud size. For realistic worst-case assessments, the dispersed gas clouds may be ignited at the time when the estimated equivalent gas cloud has its maximum.
Compared to alternative simplifications, e.g. applying faster and less accurate consequence models, the equivalent cloud method keeps much of the precision required in an explosion study. Despite the wide acceptance and use of these methods, they have also been criticized for not being conservative enough or for being inaccurate, and some groups prefer a much more conservative approach substituting any predicted flammable gas cloud volume with the most reactive concentration. It is well known that explosion consequences may vary strongly when changing the ignition location and other parameters, and one can therefore not expect an equivalent cloud method to accurately reproduce any ignited gas cloud scenario (which has never been the goal), but rather to provide a reasonable estimate of expected explosion consequences when used according to GexCon-developed guidance. The currently recommended Q9 method works well for a range of scenarios, while for certain more confined or high-reactivity scenarios a more conservative approach (Q8) is recommended. For some scenarios, e.g. aerosols, the current approach has weaknesses. This paper describes the different equivalent cloud methods, shows examples of use, evaluates performance, and discusses weaknesses and potential improvements.
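The equivalent-cloud idea can be illustrated schematically: each cell of the dispersed cloud contributes its volume weighted by a concentration-dependent reactivity, normalized by the reactivity at stoichiometry. The triangular weighting and cell data below are deliberately crude stand-ins invented for illustration; the actual Q5/Q9 definitions weight by laminar burning velocity and volume expansion ratio.

```python
def equivalent_cloud_volume(cells, reactivity, stoich_conc):
    """Schematic equivalent stoichiometric cloud volume.

    cells: list of (volume_m3, concentration_volpct) for each grid cell.
    Q = sum(V_i * R(c_i)) / R(c_stoich), i.e. the volume of a stoichiometric
    cloud with the same total weighted reactivity.
    """
    return sum(v * reactivity(c) for v, c in cells) / reactivity(stoich_conc)

# Toy reactivity: triangular weighting between LFL and UFL
# (methane: 5-14 vol%, stoichiometric at ~9.5 vol%).
def reactivity(c, lfl=5.0, ufl=14.0, stoich=9.5):
    if c <= lfl or c >= ufl:
        return 0.0  # outside the flammability limits: no contribution
    if c <= stoich:
        return (c - lfl) / (stoich - lfl)
    return (ufl - c) / (ufl - stoich)

# Four 10 m^3 cells at different concentrations (invented values):
cells = [(10.0, 3.0), (10.0, 7.0), (10.0, 9.5), (10.0, 12.0)]
q = equivalent_cloud_volume(cells, reactivity, 9.5)
```

Note how the lean cell (3 vol%) contributes nothing while the stoichiometric cell contributes its full volume, so the 40 m³ cloud collapses to a smaller equivalent stoichiometric volume.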

Journal ArticleDOI
TL;DR: In this article, the authors present a novel quantitative risk analysis process for urban natural gas pipeline networks using geographical information systems (GIS), which incorporates an assessment of failure rates of integrated pipeline networks, a quantitative analysis model of accident consequences, and assessments of individual and societal risks.
Abstract: This paper presents a novel quantitative risk analysis process for urban natural gas pipeline networks using geographical information systems (GIS). The process incorporates an assessment of failure rates of integrated pipeline networks, a quantitative analysis model of accident consequences, and assessments of individual and societal risks. Firstly, the failure rates of the pipeline network are calculated using empirical formulas influenced by parameters such as external interference, corrosion, construction defects, and ground movements. Secondly, the impacts of accidents due to gas leakage, diffusion, fires, and explosions are analyzed by calculating the area influenced by poisoning, burns, and deaths. Lastly, based on the previous analyses, individual risks and social risks are calculated. The application of GIS technology helps strengthen the quantitative risk analysis (QRA) model and allows construction of a QRA system for urban gas pipeline networks that can aid pipeline management staff in demarcating high risk areas requiring more frequent inspections.
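The individual-risk step can be sketched as a sum over accident scenarios: the pipeline failure rate times the conditional probability of the hazardous outcome times the probability of death at the receptor location. All numbers below are illustrative placeholders, not values from the paper.

```python
def individual_risk(scenarios):
    """Location-specific individual risk (fatalities/year).

    scenarios: list of (failure_freq_per_year, p_outcome, p_death) where
    p_outcome is the conditional probability of the hazardous event (e.g.
    ignition leading to a jet fire) and p_death is the probability of death
    at the receptor location given that event.
    """
    return sum(freq * p_outcome * p_death
               for freq, p_outcome, p_death in scenarios)

# Hypothetical scenarios for one receptor near a pipeline segment:
scenarios = [
    (1e-4, 0.1, 0.5),    # jet fire
    (1e-4, 0.05, 0.9),   # vapour cloud explosion
]
ir = individual_risk(scenarios)
```

In the GIS implementation this sum would be evaluated on a grid of receptor locations, producing the individual risk contours used to demarcate high-risk areas.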

Journal ArticleDOI
TL;DR: In this paper, the authors have conducted computational fluid dynamics (CFD) simulations for dense gas dispersion of liquefied natural gas (LNG) and found that almost 75% of the dispersed vapour was retained inside the impoundment zone.
Abstract: Computational fluid dynamics (CFD) simulations have been conducted for dense gas dispersion of liquefied natural gas (LNG). The simulations take into account the effects of gravity, time-dependent downwind and crosswind dispersion, and terrain. Experimental data from the Burro series field tests and results from an integral model (DEGADIS) were used to assess the validity of the simulation results, which were found to compare better with the experimental data than the commonly used integral model DEGADIS. The average relative error in maximum downwind gas concentration between CFD predictions and experimental data was 19.62%. The validated CFD model was then used to perform a risk assessment for the most-likely-spill scenario at LNG stations as described in NFPA 59A (2009), “Standard for the Production, Storage and Handling of Liquefied Natural Gas”. Simulations were conducted to calculate the gas dispersion behaviour in the presence of obstacles (dike walls). Interestingly, for a spill at a higher elevation, e.g. the tank top, the effect of impounding dikes on the affected area was minimal. However, the impoundment zone did affect the wind velocity field in general, and generated a swirl inside it, which then played an important role in confining the dispersion cloud inside the dike. For most cases, almost 75% of the dispersed vapour was retained inside the impoundment zone. The findings and analysis presented here provide an important tool for LNG plant layout design and site selection.
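The model-versus-experiment comparison quoted above is a mean relative error in peak downwind concentration. The conventional definition is shown below with invented numbers; the paper's exact averaging procedure is not stated in the abstract.

```python
def mean_relative_error(predicted, observed):
    """Mean of |predicted - observed| / observed over paired measurements."""
    return sum(abs(p - o) / o
               for p, o in zip(predicted, observed)) / len(observed)

# Hypothetical peak concentrations (vol fraction) at two downwind sensors:
err = mean_relative_error([0.045, 0.09], [0.05, 0.10])  # -> 0.1 (10%)
```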

Journal ArticleDOI
TL;DR: RAPID-N as discussed by the authors is an on-line, extensible risk assessment and mapping software framework that allows rapid local and regional natech risk assessment with minimal data input, such as on-site natural hazard parameters and plant unit characteristics.
Abstract: Natech accidents at industrial plants are an emerging risk with possibly serious consequences. For the mitigation of natech risk, authorities need to identify natech prone areas in a systematic manner. In order to facilitate probabilistic natech risk mapping, a unified methodology was developed that is based on the estimation of on-site natural hazard parameters, determination of damage probabilities of plant units, and assessment of probability and severity of possibly triggered natech events. The methodology was implemented as an on-line, extensible risk assessment and mapping software framework called RAPID-N, which allows rapid local and regional natech risk assessment and mapping with minimal data input. RAPID-N features an innovative data estimation framework to complete missing input data, such as on-site natural hazard parameters and plant unit characteristics. The framework is also used for damage assessment and natech consequence analysis, and allows easy modification of input parameters, dynamic generation of consequence models according to data availability, and extension of models by adding new equations or substituting existing ones with alternatives. Results are presented as summary reports and interactive risk maps, which can be used for land-use and emergency planning purposes by using scenario hazards, or for rapid natech consequence assessment following actual disasters. As proof of concept, the framework provides a custom implementation of the U.S. EPA's RMP Guidance for Offsite Consequence Analysis methodology to perform natech consequence analysis and includes comprehensive data for earthquakes. It is readily extendible to other natural hazards and more comprehensive risk assessment methods.

Journal ArticleDOI
TL;DR: The American National Standards Institute (ANSI)/American Petroleum Institute (API) Standard 780 Security Risk Assessment (SRA) Methodology was published in June 2013 as a U. S. standard for security risk assessments on petroleum and petrochemical facilities as mentioned in this paper.
Abstract: The American National Standards Institute (ANSI)/American Petroleum Institute (API) Standard 780 Security Risk Assessment (SRA) Methodology was published in June 2013 as a U.S. standard for security risk assessments on petroleum and petrochemical facilities. The standard represents a model standard for evaluating all security risks of petroleum and petrochemical infrastructure and operations, and assists industries in more thoroughly and consistently conducting SRAs. The 2013 standard is an update of the previous API/NPRA SRA Methodology (2004) and focuses on expanding functional utility without changing the basic methodology. The methodology can be applied to a wide range of assets even beyond the typical operating facilities of the industry. This includes refining and petrochemical manufacturing operations, pipelines, and transportation operations including truck, marine, and rail, as well as worker and executive security, housing compounds, and remote operational sites. The new standard describes the most efficient and thorough approach for assessing security risks widely applicable to the types of facilities operated by the industry and the security issues they face. It is voluntary, but has been adopted by the Kingdom of Saudi Arabia Ministry of Interior High Commission for Industrial Security as the mandatory security risk assessment methodology for industrial facilities. This paper examines the key elements of the ANSI/API SRA process and discusses how forward-thinking organizations may use risk-based performance metrics to systematically analyze facility security postures and identify appropriately scaled and fiscally responsible countermeasures based on current and projected threats. The AcuTech Consulting Group developed the methodology under contract to the API, and the author served as the project manager.

Journal ArticleDOI
TL;DR: In this article, a 2-region risk matrix is proposed and used to evaluate the acceptability of the inherent risk based on the severity and likelihood rating, and modification for improvement can be done using the inherent safety principles.
Abstract: At the preliminary design stage, process designers normally lack information on the risk level of a process plant. An inherently safer process plant could be designed if information on risk levels were available earlier, at the preliminary design stage. If the risk level can be determined, it becomes possible to eliminate or reduce the risk by applying the well-known inherent safety principles. This paper presents a technique to determine risk levels at the preliminary process design stage using a 2-region risk matrix concept. A model to calculate the severity and likelihood of a toxic release accident was developed in a Microsoft Excel spreadsheet. This model is integrated with the process design simulator iCON to allow for data transfer during the preliminary design stage. A 2-region risk matrix is proposed and used to evaluate the acceptability of the inherent risk based on the severity and likelihood ratings. If the inherent risk level is unacceptable, modifications for improvement can be made using the inherent safety principles. A case study has been carried out to illustrate the benefit of applying this newly developed technique, and it was successfully shown that an inherently safer plant could easily be designed with it.
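A 2-region matrix reduces risk evaluation to a single boundary between acceptable and unacceptable. A minimal sketch, assuming a multiplicative risk index and an illustrative threshold (the paper's actual rating scales and boundary are not given in the abstract):

```python
def risk_region(severity, likelihood, threshold=8):
    """Classify a (severity, likelihood) rating pair on a 2-region matrix.

    Risk index = severity rating * likelihood rating; a single boundary
    `threshold` (assumed value) splits the matrix into two regions.
    """
    return "acceptable" if severity * likelihood < threshold else "unacceptable"

low = risk_region(2, 3)    # low severity, moderate likelihood
high = risk_region(4, 4)   # high severity, high likelihood
```

An unacceptable result would then trigger inherent-safety modifications (e.g. substitution or intensification) and re-evaluation of the design.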

Journal ArticleDOI
TL;DR: In this paper, the authors used infrared photography to study the flame behavior and blast waves generated during unconfined hydrogen deflagrations, and found that the self-acceleration of the flame was caused by diffusional-thermal and hydrodynamic instabilities.
Abstract: Flame behavior and blast waves generated during unconfined hydrogen deflagrations were experimentally studied using infrared photography. Infrared photography enables the behavior of the expanding spherical flame to be measured and flame acceleration exponents to be evaluated. In the present experiments, hydrogen/air mixtures of various concentrations were filled into a 1 m³ plastic tent made of thin vinyl sheet and ignited by an electric spark. The onset of accelerative dynamics in the flame propagation was analyzed from the time histories of the flame radius and the stretched flame speed. The results demonstrated that the self-acceleration of the flame, caused by diffusional-thermal and hydrodynamic instabilities, influenced the blast waves generated by the unconfined hydrogen deflagrations. In particular, it was demonstrated that the overpressure rapidly increased with time. The burning velocity acceleration was greatly enhanced by spontaneous turbulization.
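The flame acceleration exponent mentioned above is typically obtained by fitting the flame radius history to a power law r(t) ∝ t^α, where α > 1 indicates self-acceleration. A minimal log-log least-squares fit on synthetic data (the exponent 1.3 and the radii below are invented for illustration):

```python
import math

def acceleration_exponent(times, radii):
    """Fit r = A * t**alpha by least squares in log-log space; return alpha."""
    xs = [math.log(t) for t in times]
    ys = [math.log(r) for r in radii]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic self-accelerating flame: r = 0.5 * t**1.3 (invented values)
times = [0.001 * i for i in range(1, 11)]   # seconds
radii = [0.5 * t ** 1.3 for t in times]     # metres
alpha = acceleration_exponent(times, radii)
```

Applied to measured radius histories, an exponent recovered this way quantifies the onset of the self-acceleration discussed in the abstract.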

Journal ArticleDOI
TL;DR: In this paper, the frequency of major gas explosion accidents during the periods 1980–2000 and 2001–2010 was reviewed and case studies were compared, showing that the proportion of accidents caused by deliberate violation was reduced by 31.13% in 2001–2010 compared with 1980–2000.
Abstract: Major accidents from gas explosions have a high rate of occurrence in Chinese coal mines. The frequency of major gas explosion accidents during the periods 1980–2000 and 2001–2010 was reviewed, and case studies were compared. The study of direct causes indicates that during the period 2001–2010 the proportion of accidents caused by deliberate violation was reduced by 31.13% compared with data from 1980 to 2000. However, the proportion of accidents caused by mismanagement rose by 32.38% during this period. Direct causes of high-occurrence-rate accidents include deliberate violations, such as illegal blasting and conducting maintenance with the power on, and mismanagement behaviors, such as chaotic electromechanical management and chaotic ventilation management. The study of environmental characteristics shows that the proportion of accidents occurring at the heading faces increased by 27.18%. The study of human factors indicates that deliberate violation behaviors showed a high utility–high cost factor, while mismanagement behaviors showed strong correlation with responsibility awareness and weak correlation with technological ability.