
Creating Fragility Functions for Performance-Based Earthquake Engineering

01 May 2007 - Earthquake Spectra (SAGE Publications, Sage UK: London, England) - Vol. 23, Iss: 2, pp 471-489
TL;DR: In this paper, a set of procedures for creating fragility functions from various kinds of data is introduced: actual EDP, where the EDP at which each specimen failed is known; bounding EDP, where some specimens failed and one knows the EDP to which each specimen was subjected; capable EDP, where specimen EDPs are known but no specimens failed; derived, where fragility functions are produced analytically; expert opinion; and updating, in which one improves an existing fragility function using new observations.
Abstract: The Applied Technology Council is adapting PEER's performance-based earthquake engineering methodology to professional practice. The methodology's damage-analysis stage uses fragility functions to calculate the probability of damage to facility components given the force, deformation, or other engineering demand parameter (EDP) to which each is subjected. This paper introduces a set of procedures for creating fragility functions from various kinds of data: (A) actual EDP at which each specimen failed; (B) bounding EDP, in which some specimens failed and one knows the EDP to which each specimen was subjected; (C) capable EDP, where specimen EDPs are known but no specimens failed; (D) derived, where fragility functions are produced analytically; (E) expert opinion; and (U) updating, in which one improves an existing fragility function using new observations. Methods C, E, and U are all introduced here for the first time. A companion document offers additional procedures and more examples.

Summary

BACKGROUND AND OBJECTIVES

  • This paper summarizes such a standard developed for ATC-58.
  • See Porter et al. (2006) for more detail, examples, commentary, and alternative approaches.
  • Method A is not applicable when one knows the maximum EDP to which each specimen was subjected, but not the value of EDP at which specimens actually failed.
  • The methods proposed here are no substitute for understanding the processes that lead to damage, but are intended to help practitioners and scholars create fragility functions from damage data.

DOCUMENTATION REQUIREMENTS

  • Four requirements are proposed for documenting fragility functions: 1. Description of specimens.
  • Indicate whether EDP is the value at which damage occurred (Method A data) or the maximum to which each specimen was subjected (Methods B, C, and U).
  • Define damage measures (DMs) quantitatively in terms of repairs required.

DAMAGE STATE PROBABILITY

  • F_dm(edp) denotes the fragility function for damage state dm, defined as the probability that the component reaches or exceeds damage state dm, given a particular EDP value (Equation 1), and idealized by a lognormal distribution (Equation 2): F_dm(edp) ≡ P[DM ≥ dm | EDP = edp] (1).
  • F_dm(edp) = Φ(ln(edp/x_m)/β) (2), where Φ denotes the standard normal cumulative distribution function (e.g., normsdist in Microsoft Excel), x_m denotes the median value of the distribution, and β denotes the logarithmic standard deviation.
  • The authors use the lognormal because it fits a variety of structural component failure data well (e.g., Beck et al. 2002, Aslani 2005, Pagni and Lowes 2006).
  • Both x_m and β are established for each component type and damage state using methods presented later.
  • The probability that the component is in damage state dm, given EDP = edp, is 1 − F_1(edp) for dm = 0; F_dm(edp) − F_{dm+1}(edp) for 1 ≤ dm < N; and F_dm(edp) for dm = N (Equation 3).
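The lognormal fragility function and the damage-state probabilities it implies can be sketched in a few lines of Python (`statistics.NormalDist` plays the role of normsdist in Excel); the parameter values used below are illustrative, not from the paper:

```python
from math import log
from statistics import NormalDist

def fragility(edp, xm, beta):
    """Equation 2: P[DM >= dm | EDP = edp] for a lognormal fragility
    function with median xm and logarithmic standard deviation beta."""
    return NormalDist().cdf(log(edp / xm) / beta)

def damage_state_probs(edp, params):
    """Equation 3: P[DM = dm | EDP = edp] for dm = 0..N, where params
    is a list of (xm, beta) pairs ordered by increasing damage state.
    The values passed in below are hypothetical."""
    F = [fragility(edp, xm, beta) for xm, beta in params]
    probs = [1.0 - F[0]]                                   # dm = 0
    probs += [F[k] - F[k + 1] for k in range(len(F) - 1)]  # 1 <= dm < N
    probs.append(F[-1])                                    # dm = N
    return probs
```

At the median EDP the failure probability is 0.5 by construction, and the damage-state probabilities sum to 1 whenever the fragility functions do not cross.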

METHOD A, ACTUAL EDP: ALL SPECIMENS FAILED AT OBSERVED EDP

  • These are the most informative data for creating fragility functions.
  • They are most common where DM can be associated with a point on the observed force-deformation behavior of a component, such as a yield point.
  • Alternatively, specimens are subjected to increasing levels of EDP.
  • The test is interrupted after each level of EDP is imposed, and the specimen examined for damage.

Let M = number of specimens tested to failure; i = index of specimens, i ∈ {1, 2, ..., M}; and r_i = EDP at which damage was observed to occur in specimen i. Equation 4 then gives x_m = exp[(1/M) Σ ln r_i] and β = sqrt[(1/(M − 1)) Σ (ln(r_i/x_m))²].

  • One tests the resulting fragility function using the Lilliefors goodness-of-fit test (presented below).
  • If it passes at the 5% significance level, the fragility function is acceptable.
  • Example 1. Aslani (2005) provides a table of peak transient drift ratios at which 43 specimens of pre-1976 reinforced concrete slab-column connections experienced cracking of no more than 0.3 mm width, repaired by applying a surface coating.
  • The data are repeated in Table 2 with original specimen numbers.
  • The lognormal distribution with these parameters passes the Lilliefors goodness-of-fit test at the 5% significance level.
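Under the hood, the Method A fit is just the geometric mean and log-space sample standard deviation of the failure EDPs; a minimal sketch, with made-up drift data rather than Aslani's Table 2:

```python
from math import exp, log, sqrt

def fit_method_a(r):
    """Method A: r holds the EDP at which each of M specimens failed.
    Returns the lognormal parameters (xm, beta): xm is the geometric
    mean of r; beta is the sample standard deviation of ln r about
    ln xm, with an M - 1 divisor."""
    M = len(r)
    xm = exp(sum(log(ri) for ri in r) / M)
    beta = sqrt(sum(log(ri / xm) ** 2 for ri in r) / (M - 1))
    return xm, beta

# Hypothetical peak transient drift ratios (%) at failure:
xm, beta = fit_method_a([0.2, 0.3, 0.4])
```

Identical observations give β = 0 and x_m equal to the common value, a quick sanity check on the estimators.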

METHOD B, BOUNDING EDP: SOME SPECIMENS FAILED, PEAK EDP KNOWN

  • Here, the data include the maximum EDP to which each of M specimens was subjected, and knowledge of whether the specimen exceeded the damage state of interest.
  • Data must not be biased by damage state, i.e., specimens must not be selected because they experienced damage.
  • The data are grouped into bins by ranges of EDP, where each bin has approximately the same number of specimens in it.
  • These serve as independent data points of failure probability and EDP.
  • The following approach converts Equation 2 to a linear regression problem by taking the inverse Gaussian cumulative distribution function of each side and fitting a line ŷ=sx+c to the data (e.g., see “probability paper” in Ang and Tang 1975).

Let N = number of EDP bins.

  • Porter et al. (2006) presents an alternative approach using a least-squares fit to the binary failure data, i.e., to the pairs of EDP and a binary (0,1) failure indicator.
  • The alternative approach avoids errors associated with bin-average EDPs.
  • Consider the damage statistics in Figure 2, which depicts motor control centers (MCCs) observed after various earthquakes in 45 facilities.
  • Crosshatched boxes represent MCCs that experienced noticeable earthquake effect such as shifting but that remained operable.
  • For each bin, the values of xj− x̄ and yj− ȳ are calculated as shown.
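A sketch of the probability-paper regression for Method B; the bin data in the test are synthetic, and with bins generated from an exact lognormal the fit recovers the parameters:

```python
from math import exp, log
from statistics import NormalDist

def fit_method_b(bins):
    """Method B: bins is a list of (bin-average EDP, observed failure
    fraction), each fraction strictly between 0 and 1. Transforming
    x = ln(EDP), y = Phi^-1(fraction) makes Equation 2 linear,
    y = s*x + c with s = 1/beta and c = -ln(xm)/beta; an ordinary
    least-squares line then yields xm and beta."""
    nd = NormalDist()
    xs = [log(edp) for edp, frac in bins]
    ys = [nd.inv_cdf(frac) for edp, frac in bins]
    n = len(bins)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    s = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    c = ybar - s * xbar
    return exp(-c / s), 1.0 / s  # xm, beta
```

The slope and intercept map back to the fragility parameters as β = 1/s and x_m = exp(−c/s).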

METHOD C, CAPABLE EDP: NO SPECIMENS FAILED, EDPS ARE KNOWN

  • Method C addresses the best case for this type of data, i.e., many specimens, none of which had apparent distress, and several of which were subjected to EDP near the maximum value.
  • The specimens in this bin without apparent distress are assigned 0% subjective failure probability, 10% for specimens with distress not suggestive of imminent failure, and 50% for specimens with distress suggestive of imminent failure.
  • ANCO Engineers, Inc. (1983) performed shake-table tests on ceiling systems with various lateral restraints.
  • Peak diaphragm acceleration (PDA) from nine of these tests is recorded in Table 5.
  • Failure required replacement of damaged grid and tiles.

METHOD D, DERIVED FRAGILITY FUNCTIONS

  • The capacity of some components can be calculated by modeling the component as a structural system, and determining the EDP (e.g., acceleration or shear deformation) that would cause the system to reach dm.
  • Other components may be amenable to fault tree analysis; e.g., see Vesely et al. (1981).
  • Let r denote the calculated capacity of the component to resist damage state dm, including consideration of any anchorage or bracing.

METHOD E, EXPERT OPINION

  • There are several methods for eliciting expert opinion, from ad hoc to structured processes involving multiple experts, self-judgment of expertise, and iteration to examine major discrepancies between experts.
  • The method (introduced for the first time here) employs Spetzler and von Holstein (1972) for probability encoding and Dalkey et al. (1970) for expert qualification, with some useful simplifications.
  • To use Method E, select experts with professional experience in the design or postearthquake observation of the component.
  • Representative images should be offered to the experts and recorded.
  • If an expert refuses to provide estimates or limits them to certain conditions, either narrow the component definition accordingly and iterate, or ignore that expert’s response and analyze the remaining ones.

Let N = number of experts providing judgment about a value, and i = index of experts, i ∈ {1, 2, ..., N}.

  • If the results of the survey produce β < 0.4, and this low value of β cannot be justified, use the judged x_l to anchor the fragility function, apply β = 0.4, and calculate the resulting value of x_m.
  • Kennedy and Short (1994) show that, by establishing the EDP at which the component has 10% failure probability, the overall reliability of the component is made insensitive to β; hence the value of directly encoding experts’ judgment of this EDP in particular.
  • Stone cladding on the exterior of retail buildings may fall in earthquakes.
  • Consider 2-in. x 6-in. x 1-3/16-in. stone veneer adhered to a concrete masonry unit substrate with thin-bed mortar (liquid latex mixed with Portland cement, 100% coverage).
  • Create a fragility function for the probability that any given stone would fall from the building (posing a life-safety threat) and require replacement, as a function of the peak transient drift ratio of the story on which the stone is applied.
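The anchoring device described above (fix the EDP at which experts judge a 10% failure probability, then impose the β = 0.4 floor) determines the median directly; a sketch, where the judged 10%-probability drift value is hypothetical:

```python
from math import exp, log
from statistics import NormalDist

def median_from_10pct(x10, beta=0.4):
    """Given the judged EDP x10 at which failure probability is 10%,
    back-calculate the median: ln(x10/xm)/beta = Phi^-1(0.10), so
    xm = x10 * exp(-Phi^-1(0.10) * beta)."""
    z10 = NormalDist().inv_cdf(0.10)  # about -1.2816
    return x10 * exp(-z10 * beta)

# Hypothetical judgment: 10% of stones fall at 0.5% drift ratio.
xm = median_from_10pct(0.5)
```

Plugging the anchor back into the fragility function returns 10% exactly, which is the point of the construction.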

METHOD U, UPDATING A FRAGILITY FUNCTION WITH NEW DATA

  • Here, the data are a pre-existing fragility function and M specimens with known damage state and maximum EDP.
  • It is not necessary that any of the specimens failed.
  • The method uses Bayes’ Theorem (e.g., Ang and Tang 1975) to revise xm and of an existing fragility function with new observations of M specimens whose EDP and damage state have been observed.
  • For those familiar with Bayesian updating, the prior probability distribution of x_m is taken as lognormal with median equal to the x_m value in the pre-existing fragility function, and logarithmic standard deviation taken as 0.707 times the β of the pre-existing fragility function, consistent with a compound lognormal fragility function and β_r = β_u = 0.707β.
  • Their joint distribution is approximated by five discrete points (x_mj, β_j), each with probability-like weight w_j (where j = 1, 2, ..., 5).
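The updating step can be illustrated with a generic discrete-Bayes sketch: place prior weights on a handful of candidate (x_m, β) pairs, then multiply each by the likelihood of the new observations. This is a simplification of the paper's five-point procedure, and the candidate values are invented:

```python
from math import log
from statistics import NormalDist

def update_weights(candidates, prior_w, observations):
    """candidates: list of (xm, beta) pairs; prior_w: their prior
    weights; observations: list of (edp, failed) for M new specimens,
    with failed True or False. Returns normalized posterior weights
    (Bayes' Theorem with a Bernoulli likelihood per specimen)."""
    nd = NormalDist()
    post = []
    for (xm, beta), w in zip(candidates, prior_w):
        like = 1.0
        for edp, failed in observations:
            p = nd.cdf(log(edp / xm) / beta)  # failure prob. at edp
            like *= p if failed else 1.0 - p
        post.append(w * like)
    total = sum(post)
    return [p / total for p in post]
```

A specimen observed to fail at a modest EDP shifts weight toward the lower-median candidate, as one would expect.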

ASSESSING FRAGILITY FUNCTION QUALITY

  • The previous section provided mathematical procedures for developing fragility functions.
  • Issues associated with the quality of those fragility functions are now addressed, particularly the treatment of competing EDPs, goodness-of-fit testing, dealing with fragility functions that cross, and how to assign an overall quality level to a fragility function.

CONSIDERING COMPETING EDPS

  • One may be uncertain which is the best EDP to use.
  • In such a case, create fragility functions for each alternative and choose the fragility function with the lowest β.
  • See Porter et al. (2006) for choosing between EDPs with differing COV.

GOODNESS OF FIT

  • A goodness-of-fit test checks that an assumed distribution adequately fits the data.
  • The Lilliefors test used here is a special case of the Kolmogorov-Smirnov (K-S) test, applicable when the parameters of the distribution are estimated from the same data as are being compared with the distribution, as is the case here.
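The test statistic itself is simple: the largest vertical gap between the sample CDF of the failure EDPs and the fitted lognormal. A sketch; the critical value should come from Lilliefors' table, with 0.886/√M a commonly quoted large-sample approximation at the 5% level (an assumption here, not a value taken from the paper):

```python
from math import log
from statistics import NormalDist

def lilliefors_d(r, xm, beta):
    """Kolmogorov-Smirnov-type statistic for Method A data r against
    the fitted lognormal (xm, beta): the maximum distance between the
    stepped sample CDF and the smooth fitted CDF, checked on both
    sides of each step."""
    nd = NormalDist()
    r = sorted(r)
    M = len(r)
    d = 0.0
    for i, ri in enumerate(r, start=1):
        F = nd.cdf(log(ri / xm) / beta)
        d = max(d, abs(F - i / M), abs(F - (i - 1) / M))
    return d
```

This is the distance illustrated by the smooth and stepped curves of Figure 1.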

FRAGILITY FUNCTIONS THAT CROSS

  • Some components have two or more fragility functions.
  • Any two lognormal fragility functions i and j with medians x_mj > x_mi and logarithmic standard deviations β_i ≠ β_j cross, with F_j(edp) ≥ F_i(edp) for edp ≥ exp[(β_j ln x_mi − β_i ln x_mj)/(β_j − β_i)] when β_i > β_j, and for edp ≤ exp[(β_j ln x_mi − β_i ln x_mj)/(β_j − β_i)] when β_i < β_j.
  • This produces a negative probability of being in damage state i under Equation 3b.
  • Figure 6a illustrates the point: F2 has a higher β than F1, and F3 has a lower β than F2.
  • Two methods are proposed to deal with the problem.
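The crossing point follows from setting the two standard-normal arguments of Equation 2 equal; a sketch with invented parameters, verifying that the two curves give equal probability there:

```python
from math import exp, log
from statistics import NormalDist

def frag(edp, xm, beta):
    """Lognormal fragility function (Equation 2)."""
    return NormalDist().cdf(log(edp / xm) / beta)

def crossing_edp(xmi, beta_i, xmj, beta_j):
    """EDP at which two lognormal fragility functions intersect;
    requires beta_i != beta_j (equal betas give parallel lines in
    probability-paper space, which never cross)."""
    return exp((beta_j * log(xmi) - beta_i * log(xmj))
               / (beta_j - beta_i))
```

Beyond this EDP the curve for the more severe damage state can overtake the less severe one, which is what produces the negative probability in Equation 3b.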

ASSIGNING A SINGLE QUALITY LEVEL TO A FRAGILITY FUNCTION

  • Fragility functions come from data with varying quantity and quality.
  • It is based solely on the authors’ judgment.
  • The analyst should report the quality of fragility functions used with any loss estimate.

CONCLUSIONS

  • Six methods for creating fragility functions were presented, including three new ones: one for dealing with cases where no failure has been observed, another for situations where one must rely on expert opinion, and a third for updating an existing fragility function with new damage observations.
  • The procedures are under consideration as a standard for ATC-58, a technology-transfer project by the Applied Technology Council to bring PEER’s performance-based earthquake engineering methodology to practice.
  • The procedures are intended for engineering professionals who will eventually use PBEE.
  • Little unfamiliar math is involved, and no calculus.
  • A larger document, Porter et al. (2006), presents these procedures with more commentary, some alternative approaches, and more sample problems.


EARTHQUAKE ENGINEERING PRACTICE
Creating Fragility Functions for Performance-Based Earthquake Engineering
Keith Porter, a) M.EERI, Robert Kennedy, b) M.EERI, and Robert Bachman, c) M.EERI
The Applied Technology Council is adapting PEER’s performance-based
earthquake engineering methodology to professional practice. The
methodology’s damage-analysis stage uses fragility functions to calculate the
probability of damage to facility components given the force, deformation, or
other engineering demand parameter (EDP) to which each is subjected. This
paper introduces a set of procedures for creating fragility functions from
various kinds of data: (A) actual EDP at which each specimen failed; (B)
bounding EDP, in which some specimens failed and one knows the EDP to
which each specimen was subjected; (C) capable EDP, where specimen EDPs
are known but no specimens failed; (D) derived, where fragility functions are
produced analytically; (E) expert opinion; and (U) updating, in which one
improves an existing fragility function using new observations. Methods C, E,
and U are all introduced here for the first time. A companion document offers
additional procedures and more examples. DOI: 10.1193/1.2720892
INTRODUCTION
BACKGROUND AND OBJECTIVES
A second-generation performance-based earthquake engineering (PBEE-2) procedure has been developed by the Pacific Earthquake Engineering Research (PEER) Center and others that estimates the probabilistic future seismic performance of buildings and bridges in terms of system-level decision variables (DVs), i.e., performance measures that are meaningful to the owner, such as repair cost, casualties, and loss of use (dollars, deaths, and downtime). Under contract to the Federal Emergency Management Agency, the Applied Technology Council has undertaken to transfer the PEER methodology to professional practice (ATC 2005). The methodology involves four stages: hazard analysis, structural analysis, damage analysis, and loss analysis. This paper addresses the damage analysis, whose input is the engineering demand parameters (EDP) calculated in the structural analysis, and whose output is the damage measure (DM) of each
a) California Institute of Technology, Pasadena, CA
b) RPK Structural Mechanics Consulting, Inc., Escondido, CA
c) Consulting Structural Engineer, Laguna Niguel, CA
Earthquake Spectra, Volume 23, No. 2, pages 471–489, May 2007; © 2007, Earthquake Engineering Research Institute

damageable structural and nonstructural component in the facility. The analysis uses fragility functions, which in this context give the probability of exceeding a damage state (a value of DM) as a function of EDP. One such fragility function is required for each component type and damage state. Many building-component fragility functions have been created in the past, but no comprehensive set of procedures exists on how to create them. This paper summarizes such a standard developed for ATC-58. See Porter et al. (2006) for more detail, examples, commentary, and alternative approaches.

Damage data come in many forms, but generally comprise knowledge of specimen damage and the EDP imposed. Table 1 lists methods for six situations. Each addresses different data and thus they are not interchangeable. For example, Method A is not applicable when one knows the maximum EDP to which each specimen was subjected, but not the value of EDP at which specimens actually failed. One cannot use Method C if some specimens failed.

The methods proposed here are no substitute for understanding the processes that lead to damage, but are intended to help practitioners and scholars create fragility functions from damage data. No calculus is required, and the only possibly unfamiliar expression is the Gaussian distribution, typically available in spreadsheet software.
DOCUMENTATION REQUIREMENTS
Four requirements are proposed for documenting fragility functions:
1. Description of specimens. What is the component type or taxonomic group the fragility function addresses? (See Porter 2005 for an ATC-58 component taxonomy.) Where and how many specimens were tested or observed, how are they counted, and what were their materials, material properties, configuration, and building code (if applicable)? Provide a bibliographic reference of any data source.
2. Excitation and EDP. Detail the loading protocol or characteristics of earthquake motion. Identify the EDP(s) examined that might be most closely related to failure probability and define how EDP is calculated or inferred from the loading protocol or observed excitation. Indicate whether EDP is the value at which damage occurred (Method A data) or the maximum to which each specimen was subjected (Methods B, C, and U).
Table 1. Analysis methods and data employed
Method name Data used
A. Actual failure EDP All specimens failed at observed values of EDP
B. Bounding EDP Some specimens failed; maximum EDP for each is known
C. Capable EDP No specimens failed; maximum EDP for each is known
D. Derived fragility Fragility functions produced analytically
E. Expert opinion Expert judgment is used
U. Updating Enhance existing fragility functions with new method-B data

3. Damage evidence and DM. What kinds of physical damage or force-deformation were observed? Define damage measures (DMs) quantitatively in terms of repairs required. Damage is assumed to have a repair cost, but note threats to life-safety or potential for loss of use. Explain how DM is inferred from damage or force-deformation evidence.
4. Observation summary, analysis method, and results. Present a tabular or graphical listing of specimens, EDP, and DM. Which method was used to derive the fragility function (Table 1)? Present resulting fragility function parameters x_m and β and results of tests to establish fragility function quality (discussed below). Provide sample calculations.
DAMAGE STATE PROBABILITY
F_dm(edp) denotes the fragility function for damage state dm, defined as the probability that the component reaches or exceeds damage state dm, given a particular EDP value (Equation 1), and idealized by a lognormal distribution (Equation 2):

F_dm(edp) ≡ P[DM ≥ dm | EDP = edp]   (1)

F_dm(edp) = Φ(ln(edp/x_m)/β)   (2)

where Φ denotes the standard normal (Gaussian) cumulative distribution function (e.g., normsdist in Microsoft Excel), x_m denotes the median value of the distribution, and β denotes the logarithmic standard deviation.
We use the lognormal because it fits a variety of structural component failure data well (e.g., Beck et al. 2002, Aslani 2005, Pagni and Lowes 2006), as well as nonstructural failure data (Reed et al. 1991 [Appendix J], Porter and Kiremidjian 2001, Badillo-Almaraz et al. 2006), and building collapse by IDA (e.g., Cornell et al. 2005). It has strong precedent in seismic risk analysis (e.g., Kennedy and Short 1994, Kircher et al. 1997). Finally, there is a strong theoretical reason to use the lognormal: it has zero probability density at and below zero EDP, is fully defined by measures of the first and second moments (ln x_m and β), and imposes the minimum information given these constraints, in the information-theory sense (Goodman 1985).
Both x_m and β are established for each component type and damage state using methods presented later. The probability that the component is in damage state dm, given EDP = edp, is given by

P[DM = dm | EDP = edp] = 1 − F_1(edp)                  dm = 0
                       = F_dm(edp) − F_{dm+1}(edp)     1 ≤ dm < N
                       = F_dm(edp)                     dm = N    (3)

where N denotes the number of possible damage states for the component, in addition to the undamaged state, and dm = 0 denotes the undamaged state. Where N ≥ 2 and β_i ≠ β_j for two damage states i < j, Equation 3 can produce a meaningless negative probability at some levels of EDP. This situation is addressed later.
CREATING FRAGILITY FUNCTIONS
This section provides mathematical procedures for developing fragility functions.
METHOD A, ACTUAL EDP: ALL SPECIMENS FAILED AT OBSERVED EDP
These are the most informative data for creating fragility functions. They are most
common where DM can be associated with a point on the observed force-deformation
behavior of a component, such as a yield point. Alternatively, specimens are subjected to
increasing levels of EDP. The test is interrupted after each level of EDP is imposed, and
the specimen examined for damage. Let

M = number of specimens tested to failure
i = index of specimens, i ∈ {1, 2, ..., M}
r_i = EDP at which damage was observed to occur in specimen i

From the basic definitions of x_m and β (e.g., Ang and Tang 1975),

x_m = exp[(1/M) Σ_{i=1}^{M} ln r_i],   β = sqrt[(1/(M − 1)) Σ_{i=1}^{M} (ln(r_i/x_m))²]   (4)

One tests the resulting fragility function using the Lilliefors goodness-of-fit test (presented below). If it passes at the 5% significance level, the fragility function is acceptable.
Example 1. Aslani (2005) provides a table of peak transient drift ratios at which 43 specimens of pre-1976 reinforced concrete slab-column connections experienced cracking of no more than 0.3 mm width, repaired by applying a surface coating. The data are repeated in Table 2 with original specimen numbers. Calculate the fragility function and test goodness of fit.

Solution. The data are sorted in order of increasing r, an index i is added, and the statistics Σ ln r_i and Σ (ln(r_i/x_m))² are calculated and summed. Using Equation 4, x_m = 0.38 and β = 0.39. The lognormal distribution with these parameters passes the Lilliefors goodness-of-fit test at the 5% significance level. The math is omitted here, but the test is illustrated in Figure 1.
METHOD B, BOUNDING EDP: SOME SPECIMENS FAILED, PEAK EDP KNOWN
Here, the data include the maximum EDP to which each of M specimens was subjected, and knowledge of whether the specimen exceeded the damage state of interest. Some specimens must be damaged. The method works best where M ≥ 25. Data must not be biased by damage state, i.e., specimens must not be selected because they experienced damage. The data are grouped into bins by ranges of EDP, where each bin has approximately the same number of specimens in it. For each bin, one calculates the fraction of specimens that failed and the bin-average EDP. These serve as independent data points of failure probability and EDP. The following approach converts Equation 2 to a linear regression problem by taking the inverse Gaussian cumulative distribution function of each side and fitting a line ŷ = sx + c to the data (e.g., see “probability paper” in Ang and Tang 1975). Let (cont.)
Table 2. Example 1 slab-column connection damage data; s = specimen, r = peak transient drift ratio, %

s   r      s   r      s   r      s   r      s   r      s   r
3   0.43   11  0.28   22  0.40   60  0.54   72  0.19   80  0.43
4   0.30   12  0.35   23  0.36   61  0.40   73  0.28   81  0.50
5   0.28   16  0.31   24  0.28   62  0.80   74  0.28   82  0.70
6   0.65   17  0.31   25  0.20   66  0.50   75  0.40   Σ ln r = −41.6
7   0.22   18  0.28   26  0.50   67  0.50   76  0.74   x_m = 0.38
8   0.32   19  0.22   27  0.25   68  0.50   77  0.54   Σ (ln(r/x_m))² = 6.40
9   0.43   20  0.22   28  0.50   69  0.50   78  0.43   β = 0.39
10  0.42   21  0.31   59  0.64   71  0.19   79  0.71
(Specimens without PTD data are omitted)
Figure 1. Example 1 fragility function (smooth curve) and sample cumulative distribution
(stepped curve).


References
More filters
Journal ArticleDOI
TL;DR: In this paper, the power of the Kolmogorov-smirnov test is investigated and a table for testing whether a set of observations is from a normal population when the mean and variance are not specified but must be estimated from the sample.
Abstract: The standard tables used for the Kolmogorov-Smirnov test are valid when testing whether a set of observations are from a completely-specified continuous distribution. If one or more parameters must be estimated from the sample then the tables are no longer valid. A table is given in this note for use with the Kolmogorov-Smirnov statistic for testing whether a set of observations is from a normal population when the mean and variance are not specified but must be estimated from the sample. The table is obtained from a Monte Carlo calculation. A brief Monte Carlo investigation is made of the power of the test.

3,923 citations

Book
04 Aug 1975
TL;DR: This research attacked the mode confusion problem by developing a modeling framework called “model schizophrenia” to estimate the posterior probability of various modeled errors.
Abstract: Keywords: Probabiliste ; Methode statistique Reference Record created on 2004-09-07, modified on 2016-08-08

2,679 citations


"Creating Fragility Functions for Pe..." refers methods in this paper

  • ...The following approach converts Equation 2 to a linear regression problem by taking the inverse Gaussian cumulative distribution function of each side and fitting a line ŷ=sx+c to the data (e.g., see “probability paper” in Ang and Tang 1975)....

    [...]

  • ...From the basic definitions of xm and β (e.g., Ang and Tang 1975), xm = exp((1/M) Σᵢ₌₁ᴹ ln rᵢ) and β = sqrt((1/(M−1)) Σᵢ₌₁ᴹ (ln(rᵢ/xm))²) (Equation 4). One tests the resulting fragility function using the Lilliefors goodness-of-fit test (presented below)....

    [...]
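The "probability paper" regression quoted above can be sketched as follows, using hypothetical Method-A failure drifts; the fitted slope and intercept yield β and xm.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical Method-A data: peak drift ratios at which six specimens failed
r = np.sort(np.array([0.009, 0.011, 0.012, 0.014, 0.017, 0.021]))
M = len(r)

# Empirical failure probabilities (median-style plotting positions)
p = (np.arange(1, M + 1) - 0.5) / M

# "Probability paper": y = inverse Gaussian CDF of p versus x = ln EDP
x = np.log(r)
y = norm.ppf(p)

# Fit y = s*x + c by least squares; then beta = 1/s and xm = exp(-c/s)
s, c = np.polyfit(x, y, 1)
beta = 1.0 / s
xm = np.exp(-c / s)
```

The recovered median lies within the range of the observed failure EDPs, and β reflects their logarithmic scatter.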

Book
17 Dec 1987
TL;DR: This handbook has been developed not only to serve as text for the System Safety and Reliability Course, but also to make available to others a set of otherwise undocumented material on fault tree construction and evaluation.
Abstract: Introduction: Since 1975, a short course entitled "System Safety and Reliability Analysis" has been presented to over 200 NRC personnel and contractors. The course has been taught jointly by David F. Haasl, Institute of System Sciences, Professor Norman H. Roberts, University of Washington, and members of the Probabilistic Analysis Staff, NRC, as part of a risk assessment training program sponsored by the Probabilistic Analysis Staff. This handbook has been developed not only to serve as text for the System Safety and Reliability Course, but also to make available to others a set of otherwise undocumented material on fault tree construction and evaluation. The publication of this handbook is in accordance with the recommendations of the Risk Assessment Review Group Report (NUREG/CR-0400) in which it was stated that the fault/event tree methodology both can and should be used more widely by the NRC. It is hoped that this document will help to codify and systematize the fault tree approach to systems analysis.

1,266 citations

Proceedings ArticleDOI
08 May 2002
TL;DR: In this article, a generalisation of the unscented transformation (UT) is described which allows sigma points to be scaled by an arbitrary constant; the scaling issues are illustrated by considering conversions from polar to Cartesian coordinates with large angular uncertainties.
Abstract: This paper describes a generalisation of the unscented transformation (UT) which allows sigma points to be scaled by an arbitrary constant. The UT is a method for predicting means and covariances in nonlinear systems. A set of samples are deterministically chosen which match the mean and covariance of a (not necessarily Gaussian-distributed) probability distribution. These samples can be scaled by an arbitrary constant. The method guarantees second-order accuracy in the mean and covariance, giving the same performance as a second-order truncated filter but without the need to calculate any Jacobians or Hessians. The impacts of scaling issues are illustrated by considering conversions from polar to Cartesian coordinates with large angular uncertainties.
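The transformation can be sketched as follows. This is a basic symmetric sigma-point set with a simple scaling parameter κ, not the full scaled form from the paper, applied to the polar-to-Cartesian example with the uncertainties chosen arbitrarily.

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Propagate (mean, cov) through a nonlinear f using 2n+1 sigma points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # matrix square root, scaled
    sigma = [mean] \
        + [mean + L[:, i] for i in range(n)] \
        + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])
    ym = w @ y                                  # predicted mean
    yc = sum(wi * np.outer(yi - ym, yi - ym) for wi, yi in zip(w, y))
    return ym, yc                               # predicted mean, covariance

# Polar (r, theta) -> Cartesian, with large angular uncertainty
mean = np.array([1.0, 0.0])
cov = np.diag([0.02**2, 0.35**2])
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m, c = unscented_transform(mean, cov, f, kappa=1.0)
```

The predicted x-mean falls below the raw range of 1.0, capturing the inward bias caused by the angular spread, with no Jacobians computed.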

1,122 citations

Journal ArticleDOI

768 citations


"Creating Fragility Functions for Pe..." refers methods in this paper

  • ...The following approach converts Equation 2 to a linear regression problem by taking the inverse Gaussian cumulative distribution function of each side and fitting a line ŷ=sx+c to the data (e.g., see “probability paper” in Ang and Tang 1975)....

    [...]

  • ...(e.g., Ang and Tang 1975) to revise xm and β of an existing fragility function with new observations of M specimens whose EDP and damage state have been observed. Some explanation may be useful to readers unfamiliar with Bayesian updating. It is recognized here that xm and β are themselves uncertain, and can be assigned probability distributions. The distributions are revised based on how likely it is that the observed damage would have occurred for various possible values of xm and β. For those familiar with Bayesian updating, the prior probability distribution of xm is taken as lognormal with median equal to the xm value in the pre-existing fragility function, and logarithmic standard deviation taken as 0.707 times the β of the pre-existing fragility function, consistent with a compound lognormal fragility function and βr = βu = 0.707β. The prior of β is taken as normal with expected value equal to the β of the pre-existing fragility function, and coefficient of variation (COV) of 0.21. This COV is selected because it provides for 98% probability that β is within the bounds of 0.5 and 1.5 times the prior β, which agrees with the observed range for β of 0.2 to 0.6. The distributions of xm and β are assumed to be independent. Their joint distribution is approximated by five discrete points (xmj, βj), each with probability-like weight wj (where j = 1, 2, . . . 5). Using a method described in Julier (2002), the values of xmj, βj, and wj are chosen so that the first five moments of the discrete joint distribution match those of the continuous joint distribution....

    [...]

  • ...From the basic definitions of xm and β (e.g., Ang and Tang 1975), xm = exp((1/M) Σᵢ₌₁ᴹ ln rᵢ) and β = sqrt((1/(M−1)) Σᵢ₌₁ᴹ (ln(rᵢ/xm))²) (Equation 4). One tests the resulting fragility function using the Lilliefors goodness-of-fit test (presented below)....

    [...]
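A simplified discrete Bayesian update along these lines can be sketched as follows. A small grid stands in for the five Julier sigma points, and all numbers (prior median, dispersion, observations) are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Prior fragility: median xm0, logarithmic std dev beta0 (hypothetical)
xm0, beta0 = 0.02, 0.4

# Discrete prior over (xm, beta): a simple grid approximation
# (the paper uses 5 Julier sigma points; a grid is used here for clarity)
xm_pts = xm0 * np.exp(0.707 * beta0 * np.array([-2, -1, 0, 1, 2]))
beta_pts = beta0 * (1 + 0.21 * np.array([-2, -1, 0, 1, 2]))
XM, B = np.meshgrid(xm_pts, beta_pts)
w = np.full(XM.shape, 1.0 / XM.size)   # equal prior weights for the sketch

# New observations: EDP of each specimen and whether it failed
edp = np.array([0.010, 0.018, 0.025, 0.030])
failed = np.array([0, 0, 1, 1], dtype=bool)

def likelihood(xm, beta):
    """Probability of the observed outcomes under a lognormal fragility."""
    p = norm.cdf(np.log(edp / xm) / beta)
    return np.prod(np.where(failed, p, 1.0 - p))

L = np.vectorize(likelihood)(XM, B)
post = w * L
post /= post.sum()                     # Bayes' Theorem, normalized

# Posterior point estimates of xm and beta
xm_post = (post * XM).sum()
beta_post = (post * B).sum()
```

The posterior weights shift toward (xm, β) pairs under which the observed failures and survivals are most probable, which is the essence of the Method-U update.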

Frequently Asked Questions (13)
Q1. What have the authors contributed in "Creating fragility functions for performance-based earthquake engineering" ?

This paper introduces a set of procedures for creating fragility functions from various kinds of data: (A) actual EDP at which each specimen failed; (B) bounding EDP, in which some specimens failed and one knows the EDP to which each specimen was subjected; (C) capable EDP, where specimen EDPs are known but no specimens failed; (D) derived, where fragility functions are produced analytically; (E) expert opinion; and (U) updating, in which one improves an existing fragility function using new observations.

Create a fragility function for the probability that any given stone would fall from the building (posing a life-safety threat) and require replacement, as a function of the peak transient drift ratio of the story on which the stone is applied. 

Kennedy and Short (1994) show that by establishing the EDP at which the component has 10% failure probability, the overall reliability of the component is insensitive to β; hence the value of directly encoding experts' judgment of this EDP in particular.
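This anchoring can be illustrated numerically: choosing xm so that the fragility function passes through 10% probability at a fixed EDP makes the probability at that anchor point independent of β (the anchor value below is hypothetical).

```python
import numpy as np
from scipy.stats import norm

x10 = 0.01     # EDP with 10% failure probability (hypothetical)
probs = []
for beta in (0.3, 0.4, 0.6):
    # Choose the median so that F(x10) = 0.10 regardless of beta
    xm = x10 * np.exp(-norm.ppf(0.10) * beta)
    probs.append(norm.cdf(np.log(x10 / xm) / beta))
```

Every entry of `probs` equals 0.10: the curves pivot about the anchor point as β varies, so the estimate encoded at that point is preserved.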

To create a fragility function from Method-C data, let rᵢ = EDP experienced by specimen i (i = 1, 2, . . . M); rmax = maxᵢ rᵢ; rd = minimum EDP experienced by any specimen with distress; ra = the smaller of rd and 0.7·rmax; MA = number of specimens without apparent distress and with rᵢ ≥ ra; MB = number of specimens at any rᵢ with distress not suggestive of imminent failure; MC = number of specimens at any rᵢ with distress suggestive of imminent failure; rm = rmax if MB + MC > 0, rm = 0.5·(rmax + ra) otherwise; S̄ = subjective failure probability at rm, S̄ = (0.5·MC + 0.1·MB)/(MA + MB + MC) (Equation 14). Use Table 4 to determine F̄dm(rm) and Equation 15 to determine β and xm: β = 0.4, z = Φ⁻¹(F̄dm(rm)), xm = rm·exp(−βz) (Equation 15). 
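Under the simplifying assumption that the subjective probability S̄ can stand in for the Table 4 value of F̄dm(rm) (the table is not reproduced in this excerpt), the Method-C procedure can be sketched with hypothetical specimen data:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical Method-C data: EDP per specimen and a distress category:
# 0 = no apparent distress, 1 = distress not suggesting imminent failure,
# 2 = distress suggesting imminent failure.  No specimen actually failed.
r = np.array([0.004, 0.006, 0.008, 0.010, 0.012, 0.015])
distress = np.array([0, 0, 1, 0, 1, 2])

rmax = r.max()
rd = r[distress > 0].min()            # smallest EDP with any distress
ra = min(rd, 0.7 * rmax)

MA = int(np.sum((distress == 0) & (r >= ra)))
MB = int(np.sum(distress == 1))
MC = int(np.sum(distress == 2))

rm = rmax if (MB + MC) > 0 else 0.5 * (rmax + ra)
S = (0.5 * MC + 0.1 * MB) / (MA + MB + MC)   # Equation 14

# S is used as a stand-in for the Table 4 value of Fdm(rm) here.
beta = 0.4
z = norm.ppf(S)
xm = rm * np.exp(-beta * z)           # Equation 15
```

Because no specimen failed, the subjective failure probability at rm is well below 0.5, so the estimated median capacity xm lands above the largest EDP observed.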

For those familiar with Bayesian updating, the prior probability distribution of xm is taken as lognormal with median equal to the xm value in the pre-existing fragility function, and logarithmic standard deviation taken as 0.707 times the β of the pre-existing fragility function, consistent with a compound lognormal fragility function and βr = βu = 0.707β. 

The specimens in this bin without apparent distress are assigned 0% subjective failure probability, 10% for specimens with distress not suggestive of imminent failure, and 50% for specimens with distress suggestive of imminent failure. 

No calculus is required, and the only possibly unfamiliar expression is the Gaussian distribution, typically available in spreadsheet software. 

β = 0.4, xm = 1.67·xl (Equation 18). Regarding Equation 18, it is common for experts to express overconfidence in an uncertain variable, such as the EDP at which damage will occur. 

This paper introduces a set of procedures for creating fragility functions from various kinds of data: (A) actual EDP at which each specimen failed; (B) bounding EDP, in which some specimens failed and one knows the EDP to which each specimen was subjected; (C) capable EDP, where specimen EDPs are known but no specimens failed; (D) derived, where fragility functions are produced analytically; (E) expert opinion; and (U) updating, in which one improves an existing fragility function using new observations. 

The probability that the component is in damage state dm, given EDP = edp, is given by P[DM = dm | EDP = edp] = Fdm(edp) − Fdm+1(edp) for 1 ≤ dm < N, and = Fdm(edp) for dm = N (Equation 3), where N denotes the number of possible damage states for the component, in addition to the undamaged state, and dm = 0 denotes the undamaged state. 
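Equation 3 can be sketched directly. The fragility medians and dispersions below are hypothetical, ordered from least to most severe damage state.

```python
from math import erf, log, sqrt

def frag(edp, xm, beta):
    """Lognormal fragility: P[DM >= dm | EDP = edp]."""
    return 0.5 * (1.0 + erf(log(edp / xm) / (beta * sqrt(2.0))))

def damage_state_probs(edp, params):
    """params: list of (xm, beta) for dm = 1..N, in increasing severity.

    Returns [P(dm=0), P(dm=1), ..., P(dm=N)] per Equation 3.
    """
    F = [frag(edp, xm, beta) for xm, beta in params]
    N = len(F)
    p = [F[i] - F[i + 1] for i in range(N - 1)] + [F[N - 1]]
    p0 = 1.0 - F[0]            # undamaged state, dm = 0
    return [p0] + p

# Example with three hypothetical damage states at a drift of 0.01
p = damage_state_probs(0.01, [(0.008, 0.4), (0.015, 0.4), (0.03, 0.4)])
```

The returned probabilities cover all states including undamaged, so they sum to one.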

The capacity of some components can be calculated by modeling the component as a structural system, and determining the EDP (e.g., acceleration or shear deformation) that would cause the system to reach dm. 

The method uses Bayes' Theorem (e.g., Ang and Tang 1975) to revise xm and β of an existing fragility function with new observations of M specimens whose EDP and damage state have been observed. 

To properly elicit expert opinion on uncertain quantities requires attention to clear definitions, biases, assumptions, and expert qualifications.