Showing papers in "Seismological Research Letters in 2010"


Journal ArticleDOI
TL;DR: ObsPy as discussed by the authors is a Python toolbox that simplifies seismological data processing by providing direct access to the actual time series, allowing the use of powerful numerical array-programming modules like NumPy (http://numpy.scipy.org), as well as filtering, instrument simulation, triggering, and plotting.
Abstract: The wide variety of computer platforms, file formats, and methods to access seismological data often requires considerable effort in preprocessing such data. Although preprocessing work-flows are mostly very similar, few software standards exist to accomplish this task. The objective of ObsPy is to provide a Python toolbox that simplifies the usage of Python programming for seismologists. It is conceptually similar to SEATREE (Milner and Thorsten 2009) or the exploration seismic software project MADAGASCAR (http://www.reproducibility.org). In ObsPy the following essential seismological processing routines are implemented and ready to use: reading and writing data in the formats SEED/MiniSEED and Dataless SEED (http://www.iris.edu/manuals/SEEDManual_V2.4.pdf), XML-SEED (Tsuboi et al. 2004), GSE2 (http://www.seismo.ethz.ch/autodrm/downloads/provisional_GSE2.1.pdf) and SAC (http://www.iris.edu/manuals/sac/manual.html), as well as filtering, instrument simulation, triggering, and plotting. There is also support to retrieve data from ArcLink (a distributed data request protocol for accessing archived waveform data, see Hanka and Kind 1994) or a SeisHub database (Barsch 2009). Just recently, modules were added to read SEISAN data files (Havskov and Ottemoller 1999) and to retrieve data with the IRIS/FISSURES data handling interface (DHI) protocol (Malone 1997). Python gives the user all the features of a full-fledged programming language including a large collection of scientific open-source modules. ObsPy extends Python by providing direct access to the actual time series, allowing the use of powerful numerical array-programming modules like NumPy (http://numpy.scipy.org) or SciPy (http://scipy.org). Results can be visualized using modules such as matplotlib (2D) (Hunter 2007) or MayaVi (3D) (http://code.enthought.com/projects/mayavi/). This is an advantage over the most commonly used seismological analysis packages SAC, SEISAN, SeismicHandler (Stammler 1993), or PITSA (Scherbaum and Johnson 1992), which do not provide methods for general numerical array manipulation. Because Python and its previously mentioned modules are open-source, there …
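The direct NumPy access described above is the toolbox's key design point and is easy to illustrate. A minimal sketch, assuming ObsPy and matplotlib are installed (the commented-out file name is hypothetical; `read()` without arguments loads ObsPy's bundled example stream):

```python
# A minimal ObsPy workflow sketch: read, filter, inspect, plot.
from obspy import read

st = read()                          # no argument: ObsPy's bundled example
# st = read("data.mseed")           # real use: autodetects MiniSEED, GSE2,
                                     # SAC, SEISAN, ... (hypothetical file)
st.filter("bandpass", freqmin=1.0, freqmax=10.0)  # in-place filtering

tr = st[0]                           # first trace in the stream
print(tr.stats.sampling_rate)        # metadata live in tr.stats
print(tr.data.max())                 # tr.data is a plain NumPy array

st.plot()                            # quick matplotlib preview
```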

923 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an update to the ground-motion prediction equations of Akkar and Bommer (2007) that corrects the shortcomings identified in those equations, which are primarily, but not exclusively, related to the model for the ground-motion variability.
Abstract: The true performance of ground-motion prediction equations is often not fully appreciated until they are used in practice for seismic hazard analyses and applied to a wide range of scenarios and exceedance levels. This has been the case for equations published recently for the prediction of peak ground velocity (PGV), peak ground acceleration (PGA), and response spectral ordinates in Europe, the Middle East, and the Mediterranean (Akkar and Bommer 2007a,b). This paper presents an update that corrects the shortcomings identified in those equations, which are primarily, but not exclusively, related to the model for the ground-motion variability. Strong-motion recording networks in Europe and the Middle East were first installed much later than in the United States and Japan but have grown considerably over the last four decades. The databanks of strong-motion data have grown in parallel with the accelerograph networks, and in addition to national collections there have been concerted efforts over more than two decades to develop and maintain a European database of associated metadata ( e.g. , Ambraseys et al. 2004). As the database of strong-motion records from Europe, the Mediterranean region, and the Middle East has expanded, there have been two distinct trends in terms of developing empirical ground-motion prediction equations (GMPEs): equations derived from a large dataset covering several countries, generally of moderate-to-high seismicity; and equations derived from local databanks for application within national borders. We refer to the former as pan-European models, noting that this is for expedience since the equations are really derived for southern Europe, the Maghreb (North Africa), and the active areas of the Middle East. The history of the development of both pan-European and national equations is discussed by Bommer et al. (2010), who also review studies that consider the arguments for and against the existence of consistent regional …

602 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine how the standard deviation (sigma) of ground-motion prediction equations exerts a very strong influence on the results of probabilistic seismic hazard analysis (PSHA), and discuss approaches to reduce sigma by relaxing the ergodic assumption.
Abstract: Modern ground-motion prediction models use datasets of recorded ground-motion parameters at multiple stations during different earthquakes and in various source regions to generate equations that are later used to predict site-specific ground motions. These models describe the distribution of ground motion in terms of a median and a logarithmic standard deviation ( e.g. , Strasser et al. 2009). This standard deviation, generally referred to as sigma (σ), exerts a very strong influence on the results of probabilistic seismic hazard analysis (PSHA) ( e.g. , Bommer and Abrahamson 2006). Although there are numerous examples of sigma being neglected in seismic hazard, it is now generally accepted that integration over the full distribution of ground motions is an indispensable element of PSHA (Bommer and Abrahamson 2006). Attempts to justify, on a statistical basis, a truncation of the ground-motion distribution at a specified number of standard deviations above the median have proven unfeasible with current strong-motion datasets (Strasser et al. 2008). The most promising approach to reduce the overall impact of sigma on the results of PSHA is to find legitimate approaches to reduce the value of the standard deviation associated with ground-motion prediction equations (GMPEs). The present state-of-the-practice of seismic hazard studies applies the standard deviations from ground-motion models developed using a broad range of earthquakes, sites, and regions to analyze the hazard at a single site from a single small source region. Such practice assumes that the variability in ground motion at a single site-source combination is the same as the variability in ground motion observed in a more global dataset and is referred to as the ergodic assumption (Anderson and Brune 1999). In recent years, the availability of well recorded ground motions at single sites from multiple occurrences of earthquakes in the same regions allowed researchers to estimate the ground-motion variability …
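The influence of sigma described above follows directly from the hazard integral, which integrates over the full lognormal ground-motion distribution. A schematic sketch of that integral; the GMPE median function and all source parameters are hypothetical, chosen only to show the sensitivity to sigma:

```python
# Toy PSHA exceedance-rate integral for one source; illustration only.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

def ln_median_pga(m, r_km=20.0):
    """Hypothetical GMPE median, ln(PGA in g), at distance r_km."""
    return -4.0 + 1.2 * m - 1.3 * np.log(r_km)

def annual_exceedance(a_g, sigma, mmin=5.0, mmax=7.5, rate=0.05, b=1.0):
    """Rate of PGA > a_g, truncated Gutenberg-Richter magnitude pdf."""
    m = np.linspace(mmin, mmax, 500)
    beta = b * np.log(10.0)
    pdf = beta * np.exp(-beta * (m - mmin)) / (1.0 - np.exp(-beta * (mmax - mmin)))
    eps = (np.log(a_g) - ln_median_pga(m)) / sigma   # epsilons above median
    return rate * trapezoid(pdf * norm.sf(eps), m)   # full distribution

for sigma in (0.5, 0.7):
    print(sigma, annual_exceedance(a_g=0.3, sigma=sigma))
# A modest change in sigma shifts the computed hazard noticeably,
# especially at low annual exceedance rates.
```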

463 citations


Journal ArticleDOI
TL;DR: A key element in any seismic hazard analysis is the selection of appropriate ground-motion prediction equations (GMPEs); as discussed by the authors, the seven selection criteria proposed by Cotton et al. (2006) are refined and extended in light of recent experience.
Abstract: A key element in any seismic hazard analysis is the selection of appropriate ground-motion prediction equations (GMPEs). In an earlier paper, focused on the selection and adjustment of ground-motion models for probabilistic seismic hazard analysis (PSHA) in moderately active regions--with limited data and few, if any, indigenous models--Cotton et al. (2006) proposed seven criteria as the basis for selecting GMPEs. Recent experience in applying these criteria, faced with several new GMPEs developed since the Cotton et al. (2006) paper was published and a significantly larger strong-motion database, has led to consideration of how the criteria could be refined and of other conditions that could be included to meet the original objectives of Cotton et al. (2006). In fact, about a dozen new GMPEs are published each year, and this number appears to be increasing. Additionally, Cotton et al. (2006) concluded that the criteria should not be excessively specific, tied to the state-of-the-art in ground-motion modeling at the time of writing and thus remaining static, but rather should be sufficiently flexible to be adaptable to the continuing growth of the global strong-motion database and the continued evolution of GMPEs.

257 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derived source scaling relations between rupture dimensions and moment magnitude for subduction-zone earthquakes, distinguishing between interface events, which occur at the contact of the subducting and overriding tectonic plates, and intraslab events, which occur within the subducting slab.
Abstract: This paper derives source scaling relations between rupture dimensions and moment magnitude for subduction-zone earthquakes, separating between interface events occurring at the contact of the subducting and overriding tectonic plates, and intraslab events, which occur within the subducting slab. These relations are then compared with existing scaling relations, which are predominantly based on data from crustal events. Relations between the dimensions of the rupture zone of earthquakes and the amount of energy released as measured by the seismic moment, M , or equivalently moment magnitude, Mw , (Hanks and Kanamori 1979), are of great practical use in engineering seismology. Early relations ( e.g. , Kanamori and Anderson 1975; Wyss 1979) were derived with the purpose of using rupture dimensions to constrain estimates of magnitude. Additionally, the relation between independently determined rupture dimensions and seismic moment also was used to draw inferences in terms of source scaling from comparisons between observed data and predictions of theoretical seismological models ( e.g. , Kanamori and Anderson 1975; Astiz et al. 1987). Nowadays, moment magnitude is routinely estimated from instrumental recordings, and the scaling relations described above are predominantly used to infer the probable dimensions of an earthquake of given magnitude. Applications include distance calculations using finite-fault distance metrics ( e.g. , Chiou and Youngs 2006), characterization of seismic sources in seismic hazard analysis, and theoretical studies involving forward-modeling of fault slip and resulting ground motions ( e.g. , Atkinson and Macias 2009; Somerville et al. 2008). However, the reciprocal relations giving moment magnitude as a function of rupture dimensions may still be useful for estimating the moment magnitude of either historical or hypothetical scenario events for which an estimate of the rupture dimensions is available, for instance on the basis of the dimensions of an observed seismic gap or from fault …
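The moment-magnitude relation of Hanks and Kanamori (1979) cited above is standard; the rupture-scaling coefficients in the sketch below are invented placeholders of the log-linear form such studies fit, not the values derived in this paper:

```python
# Mw from seismic moment (Hanks and Kanamori 1979) plus a generic,
# placeholder log-linear rupture-length scaling relation.
import math

def moment_magnitude(m0_newton_meters):
    """Mw from seismic moment M0 in N*m (Hanks and Kanamori 1979)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

def rupture_length_km(mw, a=-2.4, b=0.6):
    """Hypothetical scaling: log10 L = a + b*Mw (coefficients invented)."""
    return 10.0 ** (a + b * mw)

m0 = 1.0e21                          # N*m, roughly an Mw 8 event
mw = moment_magnitude(m0)
print(f"Mw = {mw:.2f}, L ~ {rupture_length_km(mw):.0f} km")
```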

218 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a quick review of Twitter and its capabilities and investigate the possibility of using tweets to detect seismic events and produce rapid maps of the felt area.
Abstract: Following the 12 May 2008 Wenchuan, China, earthquake, discussion circulated on the Internet describing how the U.S. Geological Survey's earthquake notification lagged behind firsthand accounts sent through Twitter, a popular Internet-based service for sending and receiving short text messages, referred to as “tweets.” A prominent technology blogger, Robert Scoble (http://scobleizer.com), is generally credited for being the first to aggregate and redistribute tweets from people in China who directly experienced and reported the shaking resulting from the Wenchuan earthquake. Subsequent earthquakes generated volumes of earthquake-related tweets, and numerous accounts are on the Web. For example, Ian O'Neill discusses Twitter activity following a magnitude 3.3 Los Angeles earthquake on 24 January 2009 in his blog http://astroengine.com. He showed a remarkable increase in the frequency of tweets containing the word “earthquake” after the event and discussed the possibility of a Twitter-based earthquake detector. Similarly, following the 30 March 2009, Morgan Hill, California, magnitude 4.3 earthquake, Michal Migurski (http://mike.teczno.com) noted an increase in earthquake-related tweets. Access to firsthand accounts of earthquake shaking within seconds of an earthquake is intriguing, but is there any reliable information that can be gleaned from the Twitter messages? To explore this question, we provide a quick review of Twitter and its capabilities and investigate the possibility of using the tweets to detect seismic events and produce rapid maps of the felt area. In this exploratory study, we examine the tweets that followed the 30 March 2009 Morgan Hill earthquake. Twitter is a service that allows anyone to send and receive 140-character text messages (tweets) via any Internet-enabled device. Tweets can be sent and received through a Web page, mobile device, or third-party Twitter applications. Tweets can be sent publicly or privately to a specified user. All users who opt to “follow” a Twitter user will receive that user's tweets …
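The "Twitter earthquake detector" idea discussed above amounts to flagging a sudden jump in the per-minute rate of keyword tweets relative to the background rate. A toy illustration; the thresholds and the synthetic tweet stream are invented:

```python
# Flag minutes whose keyword-tweet count far exceeds the recent background.
from collections import Counter

def detect_burst(tweet_minutes, baseline_window=60, factor=10, floor=5):
    """tweet_minutes: iterable of integer minute stamps of keyword tweets.
    Returns minutes whose count exceeds factor * background mean."""
    counts = Counter(tweet_minutes)
    if not counts:
        return []
    start, end = min(counts), max(counts)
    alerts = []
    for t in range(start + 1, end + 1):
        background = [counts.get(t - k, 0) for k in range(1, baseline_window + 1)]
        mean = sum(background) / len(background)
        if counts.get(t, 0) >= max(floor, factor * mean):
            alerts.append(t)
    return alerts

# ~4 background tweets over 3 hours, then a burst at minutes 200-201:
stream = [3, 50, 120, 180] + [200] * 40 + [201] * 25
print(detect_burst(stream))   # -> [200, 201]
```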

188 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed 3-D seismic velocity models for many parts of New Zealand through individual regional studies using local earthquake data, often with regional temporary seismic networks, and showed that incorporating appropriate 3D material properties and 3D structures can improve results in seismology and numerical modeling.
Abstract: Over the past 15 years, we have developed 3-D seismic velocity models for many parts of New Zealand through individual regional studies using local earthquake data, often with regional temporary seismic networks. The permanent seismic network and earthquakes of M > 5 during the period 2001–2009 inclusive are shown in Figure 1. Since most of New Zealand is seismically active, most of the country is covered by 3-D velocity studies. While the motivation for the regional studies has been to understand crustal structure and tectonic processes, there are many applications of 3-D velocity models. Earth properties are inherently 3-D. Taking account of appropriate 3-D material properties and 3-D structures can improve results in seismology and numerical modeling. For example, the deeper structure obtained in teleseismic tomography studies is more reliable when appropriate 3-D crustal structure is incorporated, as in Kohler and Eberhart-Phillips (2002). Similarly, the use of appropriate regional 3-D structure can substantially aid in the reliability of finer-scale 3-D tomographic studies, such as in volcanic zones where the inclusion of regional structure allows incorporation of distant raypaths that sample below the shallow seismicity (Sherburn et al. 2006). Numerical modeling studies have most often used 1-D or 2-D material properties with simple structures, yet with their ever-increasing computational capability, they can now incorporate more realistic 3-D properties. Ellis et al. (2006) used the 3-D seismic model of Eberhart-Phillips and Bannister (2002) to input elastic properties for the Southern Alps and evaluated stress transfer in the mid-crust. Upton et al. (2003) used 3-D results of Reyners et al. (1999) in modeling strain-partitioning in an oblique subduction zone. On a large scale, Jadamec and Billen (2010) showed that using realistic 3-D shapes and properties for the subducted slab in central Alaska is required to understand the complex 3-D flow with distinctive …

124 citations


Journal ArticleDOI
TL;DR: The results of a tsunami survey conducted by an International Tsunami Survey Team in the Samoa Islands on 4-10 October 2009 and in northern Tonga on 25-27 November 2009 are presented in this paper.
Abstract: On 29 September 2009, a strong earthquake took place south of the Samoa Islands in the south-central Pacific. It triggered a local tsunami, which caused considerable damage and 189 fatalities on the Samoa Islands and in the northern Tonga archipelago. We present here the results of a tsunami survey conducted by an International Tsunami Survey Team in the Samoa Islands on 4–10 October 2009 and in northern Tonga on 25–27 November 2009. ### The Earthquake of 29 September 2009: Geographical Background The earthquake occurred at 17:48:10 GMT (local time 06:48 on the 29th in Samoa; on the 30th in Tonga), with a source located at 15.51°S and 172.03°W and a focal depth estimated at 18 km by the U.S. Geological Survey (USGS). The epicenter is thus 200 km south of the Samoa Islands and 350 km NNE of the principal groups of Tonga (Figure 1). Note, however, the presence of a small island, Niuatoputapu, only 200 km WSW of the epicenter. The Samoa Islands comprise the territory of American Samoa, which includes the island of Tutuila (142 km²; capital: Pago Pago), and the islets of Ofu, Olosega, Ta'u, Rose, and Swains, and the independent country of Samoa (formerly Western Samoa), comprised of the islands of Upolu (1,125 km²; capital: Apia), Savai'i (1,708 km²), and a few islets including Manono. The island of Niuatoputapu, the nearby islet of Tafahi, and the more distant island of Niuafo'ou belong to the Kingdom of Tonga. ### Plate Tectonics Background The Samoa Islands are located 200 km north of the bend in the boundary of the Pacific plate marking the termination of the Kermadec-Tonga subduction zone. The convergent boundary expressing the subduction of the Pacific plate under the Australian one gives way to a strike-slip regime along a transform fault running north of the Fiji Islands, and linked across a spreading center in …

111 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that the goal of operational earthquake forecasting is to provide the public with authoritative information on the time dependence of regional seismic hazards.
Abstract: The goal of operational earthquake forecasting is to provide the public with authoritative information on the time dependence of regional seismic hazards.

110 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a retrospective analysis of the rapid source-parameter determination procedure developed at INGV (Scognamiglio et al. 2009) as applied to the L'Aquila seismic sequence.
Abstract: On 6 April 2009, a magnitude Mw = 6.1 earthquake struck the Abruzzi region in central Italy. Despite its moderate size, the earthquake caused more than 300 fatalities and partially destroyed the city of L'Aquila and many surrounding villages. The mainshock was preceded by an earthquake swarm that started at the end of 2008. The largest earthquakes of the swarm included an Mw = 4.0, which occurred on 2009/03/30 at 13:38:26 (UTC), and Mw = 3.9 and Mw = 3.5 events that occurred on 2009/04/05 at 20:48 and 22:39 (UTC), respectively. By the end of November 2009, more than 16,000 aftershocks with ML ≥ 0.5 had been recorded by the INGV seismic network (Figure 1). Current advances in data transmission and communication yield high-quality broadband velocity and strong-motion waveforms in near real time. These data are all crucial for rapid determination of earthquake source parameters ( e.g. , fault geometry, focal depth, and seismic moment). For the L'Aquila mainshock, the velocimeter data of the Italian National Seismic Network (INSN, code IV), MedNet (code MN, station PDG), the North-East Italy Broadband Network (code NI, stations ACOM and PALA), and the SudTirol Province (code SI, station KOSI) were available in real time. In the following days, the strong-motion data of the RAN network (Rete Accelerometrica Nazionale) and, in addition, the displacement data recorded by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) GPS network (Anzidei et al. 2009) also became available. In this study we present a retrospective analysis of the rapid source-parameter determination procedure developed at INGV (Scognamiglio et al. 2009) as applied to the L'Aquila seismic sequence. Our approach consists of two stages: the near real-time determination of the seismic moment tensor, which is already routinely performed for all ML ≥ 3.5 earthquakes …

93 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a method, based on coherence analysis, for computing low self-noise models of seismic sensors, which can be used to make absolute comparisons between different models of seismic sensors.
Abstract: Based on coherence analysis methods, we develop a method for computing low self-noise models of seismic sensors. We calculate self-noise models for 11 different production seismometers. This collection contains the majority of sensors currently in use at Global Seismographic Network stations. By developing these noise models with a standard estimation method, we are able to make absolute comparisons between different models of seismic sensors. This also provides a method of identifying quality variations between two or more of the same model sensor. Studying Earth's free oscillations requires a large amount of seismic data with a high signal-to-noise ratio at long periods (Laske 2004). Recent tomographic studies using ambient seismic noise (Shapiro et al. 2005) also require the self-noise of seismic instruments to be below that of the Earth's ambient background noise, because they use Earth noise as the seismic signal. It is also important when making temporary sensor deployments that the instrument's noise levels are below that of the signals being used in the study (Wilson et al. 2002). In order to verify that seismic instruments meet the above demands and other user requirements, it is important from a testing standpoint that one be able to measure the self-noise of seismic sensors and develop baselines for different models of seismic instruments. The different methods used to estimate self-noise of seismic sensors have made it difficult to do side-by-side comparisons of their performance (Hutt et al. 2009). This lack of a self-noise estimate standard makes it difficult to assess when a sensor's self-noise is above the manufacturers' specifications, indicating a possible problem with the sensor or noisy site conditions. In sensor development it is important to be able to compare a prototype sensor's self-noise to that of known self-noise levels of a reference sensor. On top of these complications some …
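The coherence idea can be illustrated with the simpler two-sensor variant: whatever part of one sensor's recorded power is incoherent with a co-located second sensor is attributed to self-noise. A synthetic sketch along those lines (the paper itself uses a related multi-sensor formulation; this version assumes the two instrument noises are mutually uncorrelated):

```python
# Two-sensor incoherent-noise estimate from PSDs and the cross-PSD.
import numpy as np
from scipy.signal import welch, csd

fs = 40.0                                     # Hz, synthetic example
t = np.arange(0, 3600, 1 / fs)
ground = np.cumsum(np.random.randn(t.size))   # common "signal" seen by both
x1 = ground + 0.3 * np.random.randn(t.size)   # sensor 1 = signal + noise
x2 = ground + 0.3 * np.random.randn(t.size)   # sensor 2 = signal + noise

f, p11 = welch(x1, fs, nperseg=4096)
_, p22 = welch(x2, fs, nperseg=4096)
_, p12 = csd(x1, x2, fs, nperseg=4096)

noise1 = p11 - np.abs(p12) ** 2 / p22         # incoherent power of sensor 1
# noise1 should sit near the injected white-noise PSD, 0.3**2 * 2 / fs
```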

Journal ArticleDOI
TL;DR: In this article, the authors present a simulation of an M 7.8 scenario earthquake on the southern San Andreas fault and demonstrate the use of high-frequency (HF) stochastic synthetics.
Abstract: Broadband synthetics obtained from scenario simulations of earthquakes with a frequency content between 0 and 10 Hz, referred to hereafter as “BBSs,” are playing an increasingly important role in seismic hazard analysis. An example is the Great Southern California ShakeOut, the largest disaster response exercise in U.S. history and an annual event since 2008 (Jones et al. 2008). The drill was the first to be based on BBSs, in this case for an M 7.8 scenario earthquake on the southern San Andreas fault. Another example of the important role of synthetic ground motions is the increasing awareness of the advantages of using site-specific ground-motion time series, rather than empirical intensity measures or scaled time series from different sources or locations, for more realistic non-linear dynamic analysis of buildings and performance-based earthquake engineering. BBSs appear to be one of the few viable alternatives to the very limited amount of strong-motion time series, particularly in the near-field from large earthquakes. Effectively meeting demands of this sort for realistic BBSs requires careful validation against recorded data. BBSs are currently achieved by combining deterministic low-frequency (LF) synthetics up to a maximum frequency ( fmax) of typically 1–2 Hz with high-frequency (HF) stochastic synthetics above this upper cutoff frequency (see, for example, Graves and Pitarka 2004; Liu et al. 2006; Mai et al. forthcoming). Visual inspection has been used for decades to claim success or failure of the ability of simulations to match observations (or synthetics derived from an alternative numerical method). However, at shorter periods such visual waveform fits are not practical, likely due to chaotic source and path variability. Instead, specific intensity measures tend to be more practical and relevant than actual waveform fits at higher frequencies. Candidates for metrics to measure the misfit for BBSs include commonly used ground-motion intensity …
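The LF/HF combination described above is commonly implemented with complementary filters around the crossover frequency. A schematic sketch; the filter type, order, and crossover value are illustrative choices, not those of any particular published code:

```python
# Combine deterministic LF and stochastic HF synthetics at ~1 Hz.
import numpy as np
from scipy.signal import butter, filtfilt

def combine_broadband(lf, hf, fs, f_cross=1.0, order=4):
    """Sum low-passed lf and high-passed hf, both sampled at fs Hz,
    using zero-phase Butterworth filters at the crossover frequency."""
    b_lo, a_lo = butter(order, f_cross, btype="lowpass", fs=fs)
    b_hi, a_hi = butter(order, f_cross, btype="highpass", fs=fs)
    return filtfilt(b_lo, a_lo, lf) + filtfilt(b_hi, a_hi, hf)

fs = 100.0
t = np.arange(0, 40, 1 / fs)
lf = np.sin(2 * np.pi * 0.2 * t)          # stand-in deterministic part
hf = 0.1 * np.random.randn(t.size)        # stand-in stochastic part
bbs = combine_broadband(lf, hf, fs)       # broadband synthetic
```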

Journal ArticleDOI
TL;DR: Wech et al. as mentioned in this paper explored new web-based resources for disseminating data in an interactive and intuitive way that is accessible and engaging to the public without sacrificing scientific utility.
Abstract: The purpose of this article is to explore new Web-based resources for disseminating data in an interactive and intuitive way that is accessible and engaging to the public without sacrificing scientific utility. This article highlights these new opportunities and describes their synthesis with recently developed automated tremor monitoring methodologies. The resulting product is a website that the reader may find useful to explore in tandem with this article, http://www.pnsn.org/tremor. In this article, the data dissemination example is specifically applied to Cascadia tremor, but the utilization of freely accessible Web applications to provide an interactive Web experience for the public and scientific community alike could easily be applied to many different aspects of seismology. Since their discovery nearly a decade ago, advances in both instrumentation and methodology in subduction zones around the world have brought the causal connection between seismically observed tectonic tremor (Obara 2002) and geodetically observed slow slip (Dragert et al. 2001) into sharper focus. In addition to the strong spatio-temporal correlation between the separately observed phenomena (Wech et al. 2009), mounting evidence indicates that deep, non-volcanic tremor represents slow shear (Ide et al. 2007; Wech and Creager 2007) occurring at the interface (Shelly et al. 2006, 2007) between the subducting oceanic and overriding continental plates, suggesting the two phenomena are different manifestations of the same process. Of course, there is still ongoing debate over the depth and mechanism of tectonic tremor (Kao et al. 2005; McCausland et al. 2005), but what is clear is that tremor serves as a proxy for slow slip. Considering geodesy's lower limits in spatial resolution and slow slip detection together with the abundance of low-level, ageodetic tremor, this connection makes tremor a key component for monitoring when, where, and how much slip is occurring. Because slow slip transfers stress …

Journal ArticleDOI
TL;DR: The Global Strain Rate Map (GSRM) as mentioned in this paper is a numerical velocity gradient tensor field model for the entire Earth's surface that describes the spatial variations of horizontal strain rate tensor components, rotation rates, and velocities.
Abstract: The Global Strain Rate Map (GSRM) of Kreemer et al. (2003) was the main result of Project II-8 of the International Lithosphere Program. The GSRM is a numerical velocity gradient tensor field model for the entire Earth's surface that describes the spatial variations of horizontal strain rate tensor components, rotation rates, and velocities. The model consists of 25 rigid spherical plates and ∼25,000 0.6° by 0.5° deformable grid areas within the diffuse plate boundary zones ( e.g. , western North America, central Asia, Alpine-Himalaya belt). The model provides an estimate of the horizontal strain rates in diffuse plate boundary zones as well as the motions of the spherical caps. This is one of the first successful models of its kind that includes the kinematics of plate boundary zones in the description of global plate kinematics. The vast majority of the data used to obtain the GSRM comes from horizontal velocity measurements obtained using Global Positioning System (GPS) measurements. The latest model version of May 2004 ( i.e. , GSRM version 1.2) includes 5,170 velocities for 4,214 sites worldwide (Holt et al. 2005). Most geodetic velocities are measured within plate boundary zones. The observed velocities are obtained from 86 different (mostly published) studies. The model includes additional constraints on the style (not magnitude) of the strain rate tensor inferred from moment tensors of shallow earthquakes. In addition, geologic strain rates in central Asia inferred from Quaternary faulting data are fit simultaneously with the geodetic velocities to improve the model there. See Kreemer et al. (2000, 2003) for more details. It was always a goal of the GSRM project to support long-term forecasts of seismicity based on tectonic deformation. Two recent developments make this especially timely. First, the Collaboratory for the Study of Earthquake Predictability (CSEP; Jordan et al. 2007) is accepting global …
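For reference, the quantities the GSRM tabulates derive from the horizontal velocity gradient field in the standard way (here v_i are velocity components and x_j horizontal coordinates; this is the textbook definition, not a formula quoted from the paper):

```latex
% Symmetric part: strain rate tensor; antisymmetric part: rotation rate.
\dot{\varepsilon}_{ij} = \tfrac{1}{2}\left(\frac{\partial v_i}{\partial x_j}
                       + \frac{\partial v_j}{\partial x_i}\right), \qquad
\dot{\omega} = \tfrac{1}{2}\left(\frac{\partial v_2}{\partial x_1}
             - \frac{\partial v_1}{\partial x_2}\right).
```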

Journal ArticleDOI
TL;DR: As discussed by the authors, a number of recent publications on strong motion data processing ignore the complexities involved when calculating residual displacements by double integrating strong motion accelerograms, which may be due to a lack of appreciation of the limits associated with an accelerogram recorded by a standard three-component strong motion instrument.
Abstract: A number of recent publications on strong motion data processing ignore the complexities involved when calculating residual displacements by double integrating strong motion accelerograms. It looks as if there may be a lack of appreciation as to the limits associated with an accelerogram recorded by a standard three-component strong motion instrument. This may be in part because papers on strong motion data processing were published in different journals and languages during the past 40 and more years. In this article I present a brief summary of previous papers and a summary of some of the associated problems, so that researchers may take into account some of the nuances involved in strong motion data processing. My aim is to show that to do justice to strong motion data processing, a more in-depth appreciation of some of the principles behind the instrumentation used and the data processing involved is essential. In the first place I present an overview of some of the theory of strong motion instruments. Most strong motion instruments are pendulums of the mass-on-rod type. As a first approximation, the response of a pendulum can be described as a single degree of freedom oscillator: \(\varphi''_{x} + 2\omega_{x}D_{x}\varphi'_{x} + \omega_{x}^{2}\varphi_{x} = -u''_{x}\) (1), where φx is the recorded response of the instrument in the x-direction, ωx and Dx are the natural circular frequency and fraction of critical damping of the oscillator, and \(u''_{x}\) is the translational ground motion acceleration along the x-axis. This equation is published in many books and usually serves as a basis for data processing. Since accelerometers usually have a relatively high natural frequency fx (ωx = 2πfx), Equation 1 is usually simplified to: \(\omega_{x}^{2}\varphi_{x} \approx -u''_{x}\) (2). Assuming that Equations 1 and 2 are a reflection of reality, in theory one can recover velocity and displacement by double integrating Equation …
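The fragility of double integration that the author warns about is easy to demonstrate numerically: a baseline offset far below instrument noise, integrated twice, grows quadratically in time. A self-contained sketch with invented values:

```python
# Double integration of a synthetic accelerogram with a tiny baseline error.
import numpy as np

fs = 200.0                                  # sampling rate, Hz
t = np.arange(0.0, 60.0, 1.0 / fs)
acc = np.zeros_like(t)
acc[(t >= 1.0) & (t < 1.5)] = 1.0           # +1 m/s^2 for 0.5 s ...
acc[(t >= 1.5) & (t < 2.0)] = -1.0          # ... then -1 m/s^2 for 0.5 s

def double_integrate(a, dt):
    vel = np.cumsum(a) * dt                 # rectangle-rule integration
    return np.cumsum(vel) * dt

disp = double_integrate(acc, 1.0 / fs)               # ends near 0.25 m
disp_biased = double_integrate(acc + 1e-3, 1.0 / fs) # add 0.001 m/s^2 offset
print(disp[-1], disp_biased[-1])
# The offset alone contributes 0.5 * 1e-3 * 60**2 ~ 1.8 m of spurious
# drift, swamping the true 0.25 m residual displacement.
```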

Journal ArticleDOI
TL;DR: In this paper, the authors use the forensic evidence registered by North Korea's 2006 nuclear test to determine the location of the 2009 test with high precision, and present their determination of the 2009 test location.
Abstract: On 25 May 2009, the Democratic People's Republic of Korea (North Korea) announced that it had conducted a second nuclear test, without providing information on the exact time, location, and yield. On that day, the United States Geological Survey (USGS) reported detecting a magnitude 4.7 seismic tremor in an aseismic region in North Korea (http://earthquake.usgs.gov/eqcenter/recenteqsww/Quakes/us2009hbaf.php; also archived copy at http://geophysics.geo.sunysb.edu/wen/NK/usgs_north_korea_2009_test.webarchive). The seismic waveform features recorded at the seismic stations around the globe for the event exhibit characteristics of an explosion. However, the exact location of the test remains elusive. Seismic monitoring of underground nuclear explosions relies on seismic observations recorded by seismometers around the globe. Because seismic observations are influenced by the seismic properties along the paths of the wave propagation from the source to the seismometers, the accuracy of determination of an event location and time depends on the degree of our knowledge of the seismic properties in the interior of the Earth. The challenge in accurately determining the location of North Korea's nuclear tests stems from the fact that, due to the lack of seismic stations and seismicity in the region, the seismic structure is not known in enough detail that its influence can be well calibrated. For example, the horizontal uncertainty of the 2009 event location reported by the USGS is about ±3.8 km (http://earthquake.usgs.gov/eqcenter/recenteqsww/Quakes/us2009hbaf.php). While our knowledge of the seismic structure in the region is unlikely to improve soon, in this study we demonstrate a strategy that uses the forensic evidence registered by North Korea's 2006 nuclear test to determine the location of the 2009 test with high precision, and we present our determination of the location of the 2009 test. ### Scientific Evidence Registered by the 2006 Test The possible location of North Korea's 2006 test is identified by satellite images (http://cryptome.org/eyeball/dprk-test/dprktest.htm; also an archived copy at http://geophysics.geo.sunysb.edu/wen/NK/eyeball.webarchive) (Table 1). High-quality …

Journal ArticleDOI
TL;DR: In this paper, the authors catalog the main earthquake events above magnitude MW 4.0 in northern Algeria and its surrounding region, specifically for the area from 32° to 38°N and from 3°W to 10°E.
Abstract: The primary goal of this work is to catalog all the main earthquake events above magnitude MW 4.0 in northern Algeria and its surrounding region, specifically for the area from 32° to 38°N and from 3°W to 10°E, as part of a project to reassess the seismic hazard in this zone. The catalog can be downloaded from the University of Jaen Web site at http://www.ujaen.es/investiga/rnm024/northern_algerian_catalog.dat. Until now, there have been only partial (albeit useful) catalogs compiled specifically for this zone (Rothe 1950; Grandjean 1954; Mokrane et al. 1994; Yelles Chauche et al. 2002; Yelles Chauche, Deramchi et al. 2003), as well as regional earthquake catalogs that included seismicity for this area ( e.g. , Mezcua and Martinez Solares 1983; Benouar 1994; El Mrabet 2005; Godey et al. 2006); however, none of these catalogs focused on seismic hazard studies. The main drawbacks of these catalogs are: no usage of a unified magnitude, coverage of only a short time interval for this type of study, inclusion of non-Poissonian events, and no consideration of known mainshocks in the historical period. In recent years, several efforts have been made to compile more or less complete, homogeneous, and accurate new catalogs in different regions of the world in order to define and characterize seismicity, forecast long-term seismicity, or perform seismic-hazard analysis ( e.g. , Kagan et al. 2006; Pelaez et al. 2007; Wang et al. 2009; Yadav et al. 2009). Along these same lines, we present the most complete and homogeneous unified catalog that we could compile, collecting earthquakes from several individual and international seismological agencies, as well as from research papers reporting seismicity data. To achieve this goal, we performed a declustering analysis and a magnitude unification process in order to provide a unified …
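The declustering step mentioned above is often implemented with magnitude-dependent time and distance windows around the larger shocks. A schematic sketch of that family of algorithms; the window functions here are invented placeholders, not the ones used for this catalog:

```python
# Windowing-type declustering: events inside the space-time window of a
# larger shock are flagged as dependent (aftershocks/foreshocks).
import math

def window_days(m):
    return 10 ** (0.5 * m - 1.0)      # hypothetical placeholder

def window_km(m):
    return 10 ** (0.3 * m - 0.5)      # hypothetical placeholder

def decluster(events):
    """events: list of (time_days, lat, lon, mag); returns mainshocks."""
    order = sorted(range(len(events)), key=lambda i: -events[i][3])
    removed = [False] * len(events)
    mainshocks = []
    for pos, i in enumerate(order):
        if removed[i]:
            continue
        t0, la0, lo0, m0 = events[i]
        mainshocks.append(events[i])
        for j in order[pos + 1:]:     # only smaller (or equal) events
            if removed[j]:
                continue
            t, la, lo, m = events[j]
            dist_km = 111.0 * math.hypot(
                la - la0, (lo - lo0) * math.cos(math.radians(la0)))
            if abs(t - t0) <= window_days(m0) and dist_km <= window_km(m0):
                removed[j] = True
    return mainshocks
```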

Journal ArticleDOI
TL;DR: The TopoScandiaDeep project as mentioned in this paper aims at developing a self-consistent geophysical model for the lithosphere-asthenosphere system under southern Norway and at understanding the mechanisms that led to mountain-building far away from the plate boundary.
Abstract: The 4-D evolution of topography is a complex scientific topic in geosciences that requires interdisciplinary efforts. In Europe, increased attention toward this research topic was originally coordinated by the TOPO-EUROPE initiative (Cloetingh et al. 2007), which has led to a continent-wide research program under the auspices of the European Science Foundation. One project within that framework is titled “TopoScandiaDeep—the Scandinavian Mountains: Deep Processes,” which aims at developing a self-consistent geophysical model for the lithosphere-asthenosphere system under southern Norway and at understanding the mechanisms that led to mountain-building far away from the plate boundary. A major component of the project is the analysis of recently acquired, passive seismological data from the MAGNUS (MAntle investiGations of Norwegian Uplift Structures) experiment. The temporary network covered the high topography of southern Norway, which is, in the absence of compressional tectonics, a typical example of a particular geoscientific problem that is still not entirely understood—the origin of high topography along passive continental margins ( e.g. , Japsen and Chalmers 2000). Along the Northeast-Atlantic continental margin, at the western rim of the Fennoscandian shield, this topography is expressed through the Scandes mountain range, the second largest mountain range in Europe (Figure 1). It extends over a total length of more than 2,000 km along the entire Norwegian coast and has a peak topography of up to 2,500 m. With a few exceptions, such as the alpine formation in Sunnmore (central Norway), the topography is rather smooth, to a large extent dominated by subplanar (paleic) surfaces at elevations above 500 m. Contrary to the central European Alps, whose orogeny is in principle well understood as a classic collision belt, the evolution of the Scandes and its high topography remains an unsolved and highly debated issue. In particular the vicinity of …

Journal ArticleDOI
TL;DR: In this article, the authors investigate the aftershocks of the Mw 6.0 earthquake that occurred in Storfjorden, off the coast of the island of Spitsbergen, on 21 February 2008 and initiated an extensive aftershock sequence.
Abstract: The Svalbard Archipelago is situated in the northwestern part of the Barents shelf, in close proximity to the passive continental margin. This intraplate region is characterized by some of the highest seismicity in the entire Barents Sea and adjoining continental shelf, surpassed only by the Knipovich ridge ( e.g. , Engen et al. 2003; International Seismological Centre 2001), which, as a spreading plate boundary, is the structure that dominates the regional stress field. Most of the seismic activity (Figure 1) is characterized by smaller events, which often occur in small concentrations sparsely distributed in time. However, earthquakes of moderate to stronger magnitudes do occur in the Svalbard area, such as the 4 July 2003 mb 5.7 event close to Hopen Island ( e.g. , Stange and Schweitzer 2004). A more recent example will be discussed here: the Mw 6.0 earthquake that occurred in Storfjorden, off the coast of the island of Spitsbergen, on 21 February 2008 and initiated an extensive aftershock sequence. The data presented in this contribution cover approximately seven months following the occurrence of the mainshock and involve more than 250 aftershocks included in the NORSAR Regional Reviewed Bulletin (http://www.norsardata.no/NDC/bulletins/regional/), which contains events with an automatic network magnitude ( MGBF ) larger than 2.0. The 2008 seismic activity in Storfjorden coincided temporally with an International Polar Year (IPY) project to study the continental margin in the region between the Mohns and Knipovich ridges and Bear Island, south of Spitsbergen (see http://www.norsar.no/c-24-International-Polar-Year.aspx). This resulted in the mainshock and a significant part of the aftershock sequence being recorded by several temporary seismic stations deployed in the region, in addition to the permanent installations in the European Arctic. Data used in this study were obtained from the NORSAR arrays SPITS and ARCES; the Norwegian National Seismic Network (NNSN) stations KBS, BJO1, and …

Journal ArticleDOI
TL;DR: In this paper, the authors combine the temperature and wind effects in the effective sound speed, written as \(C_{eff} = C + \eta \cdot \upsilon\), where C is the temperature-dependent speed of sound, υ is the wind velocity, and η is a unit vector in the direction of sound propagation.
Abstract: The speed of sound in an ideal gas, under adiabatic conditions, is a function of temperature and is given by: \(C=\sqrt{\gamma RT}\), where C is the speed of sound, γ, known as the adiabatic index, is the ratio of specific heats at constant pressure and constant volume ( Cp/Cv ), R is the gas constant, and T is the absolute temperature. In addition to temperature, which is the dominant factor, infrasound propagation is affected by the local wind velocity. Therefore we can combine the temperature and wind effects in the effective sound speed, written as \(C_{eff}=C+\eta\cdot\upsilon\), where Ceff is the effective sound speed, υ is the wind velocity, and η is a unit vector in the direction of sound propagation. The temperature (and therefore Ceff ) distribution in the atmosphere is controlled by solar radiation. As most of the heat transfer takes place on the ground surface, temperature tends to decrease with altitude. A perturbation of this trend occurs in the stratosphere due to the heating associated with absorption of ultraviolet radiation by the ozone layer (Figure 1), but above the ozone layer the temperature decreases again with altitude up to around 100 km. Above this height, in the thermosphere, the temperature increases with altitude due to direct ultraviolet radiation heating from the sun. Classic ray theory implies that infrasound energy, generated on the surface of the ground and propagating through the atmosphere, must reach a layer of effective sound speed greater than the velocity of sound at the source in order to return to receivers located on the surface of the Earth. Normally, this happens at heights around 110 km, in the thermosphere, and these rays are usually recorded at distances greater than 250 km from the source. The region up to 250 km from the source where no infrasound …
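A direct numeric check of the two formulas above. Note that to get C in m/s one uses the specific gas constant for dry air (the universal gas constant divided by the molar mass, about 287 J/(kg·K)); the wind term is reduced to its along-path component:

```python
# Speed of sound and effective sound speed for dry air.
import math

GAMMA = 1.4          # adiabatic index Cp/Cv for dry air
R_AIR = 287.05       # specific gas constant for dry air, J/(kg*K)

def sound_speed(T_kelvin):
    """C = sqrt(gamma * R * T), in m/s for the constants above."""
    return math.sqrt(GAMMA * R_AIR * T_kelvin)

def effective_sound_speed(T_kelvin, wind_along_path):
    """wind_along_path: wind component (m/s) in the propagation direction."""
    return sound_speed(T_kelvin) + wind_along_path

print(sound_speed(288.15))                  # ~340 m/s at 15 degrees C
print(effective_sound_speed(288.15, 25.0))  # downwind duct raises C_eff
```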

Journal ArticleDOI
TL;DR: In this article, the authors describe earthquake-induced liquefaction features found in Holocene and Late Wisconsin sediments exposed along three rivers in the Charlevoix seismic zone, dated using stratigraphic position and radiocarbon age constraints.
Abstract: Earthquake-induced liquefaction features have been found in Holocene and Late Wisconsin sediments exposed along three rivers in the Charlevoix seismic zone. On the basis of their stratigraphic position and radiocarbon age constraints, the liquefaction features are thought to have formed during three or more earthquake episodes centered in Charlevoix during the past 10,000 years, including at least two prehistoric episodes approximately 5,000 and 10,000 years ago. The spatial distribution of liquefaction features coupled with liquefaction potential analysis suggests that the Charlevoix earthquakes were of moment magnitude ( M ) ≥ 6.2. Liquefaction features have not been found in similar sediments exposed along eight rivers in the Quebec City–Trois Rivieres area, 70 to 150 km from Charlevoix in the St. Lawrence River Valley. The apparent absence of liquefaction features in the Quebec City–Trois Rivieres area suggests that few, if any, large earthquakes have occurred here during the same time period. The geologic record of earthquakes may be incomplete in both areas due to fluctuations in Holocene sea level. Nevertheless, the rate of large earthquakes has apparently been much higher in the Charlevoix seismic zone than in adjacent areas of the St. Lawrence for thousands of years. These findings suggest that seismicity is localized in Charlevoix and that the presence of Iapetan rift faults that underlie the St. Lawrence Valley of southeastern Canada may not, in itself, indicate earthquake potential. These results may have important implications for other Iapetan rift faults in the eastern United States, as well as seismic source zone characterization and hazard assessment throughout eastern North America.

Journal ArticleDOI
TL;DR: The Engineering Seismology Toolbox is a virtual laboratory that provides processed seismographic data, including time series and spectral information; an example application of the database is provided through a study of the ground motions from the 2005 MN (Nuttli magnitude) 5.4 Riviere du Loup, Quebec, earthquake.
Abstract: The Engineering Seismology Toolbox is a virtual laboratory that provides processed seismographic data, including time series and spectral information. A variety of programs and tools for signal processing and various applications are also provided. The products of the toolbox are updated continuously and are freely available to all researchers. A key resource provided by the Toolbox is a database of processed time series and Fourier and response spectra for moderate earthquakes recorded on seismographic stations across Canada. We describe herein the data available in this online database, and provide an example application of the database to a study of the ground motions from the 2005 MN (Nuttli magnitude) 5.4 Riviere du Loup, Quebec, earthquake.

Journal ArticleDOI
TL;DR: In this article, the authors explore a critical aspect of instrument performance, the self-noise level of the device and the amplitude range it can usefully record, where these levels fall, and the "operating range" between them, determines much of the instrument's viability and the applications for which it is appropriate.
Abstract: Understanding the performance of sensors and recorders is prerequisite to making appropriate use of them in seismology and earthquake engineering. This paper explores a critical aspect of instrument performance, the “self” noise level of the device and the amplitude range it can usefully record. Self noise limits the smallest signals, while instrument clipping level creates the upper limit (above which it either cannot produce signals or becomes unacceptably nonlinear). Where these levels fall, and the “operating range” between them, determines much of the instrument's viability and the applications for which it is appropriate. The representation of seismic-instrument self-noise levels and their effective operating ranges (cf., dynamic range) for seismological inertial sensors, recorders (data acquisition units, or DAUs), and integrated systems of sensors and recorders (data acquisition systems, or DASs) forces one to address an unnatural comparison between transient finite-bandwidth signals, such as earthquake records, and the instrument's self noise, an effectively stationary signal of infinite duration. In addition to being transient, earthquakes and other records of interest are characterized by a peak amplitude and generally a narrow, peaked spectral shape. Unfortunately, any power spectrum computed for such transient signals is ill defined, since the maximum of that spectrum depends strongly upon signal and record durations. In contrast, the noise floor of an instrument is approximately stationary and properly described by a power spectral density (PSD) or its root (rPSD). Put another way, earthquake records have units of amplitude ( e.g. , m/s²) while PSDs have units of amplitude-squared per hertz ( e.g. , (m/s²)²/Hz) and the rPSD has units of amplitude per root of hertz ( e.g. , (m/s²)/Hz^1/2). Thus, this incompatibility is a conflict between earthquake (amplitude) and PSD (spectral density) units that requires one to make various assumptions before they can be compared. …
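One standard way around the units conflict described above is to integrate the noise PSD over a frequency band of interest: the square root of that integral is an RMS amplitude, which can be compared directly with a transient signal's amplitude. A sketch with an invented noise floor:

```python
# Band-limited RMS amplitude from a one-sided noise PSD.
import numpy as np
from scipy.integrate import trapezoid

f = np.linspace(0.01, 50.0, 5000)            # frequency axis, Hz
psd = 1e-17 * (1.0 + (0.05 / f) ** 2)        # toy noise PSD, (m/s^2)^2/Hz

def band_rms(f, psd, f1, f2):
    """RMS acceleration (m/s^2) implied by the PSD over [f1, f2] Hz."""
    sel = (f >= f1) & (f <= f2)
    return np.sqrt(trapezoid(psd[sel], f[sel]))

print(band_rms(f, psd, 0.1, 10.0))   # directly comparable to a peak amplitude
```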

Journal ArticleDOI
TL;DR: The joint Japanese-German Underground Acoustic Emission Research in South Africa (JAGUARS) project measured seismic events with frequencies between 0.7 kHz and 25 kHz (Nakatani et al. 2008), resulting in significantly enhanced detection of microseismicity.
Abstract: The joint Japanese-German Underground Acoustic Emission Research in South Africa (JAGUARS) project measures seismic events with frequencies between 0.7 kHz and 25 kHz (Nakatani et al. 2008), resulting in significantly enhanced detection of microseismicity (Yabe et al. 2009). From June 2007 to June 2008 nearly 500,000 events were recorded, including more than 57,000 events with frequency content f > 25 kHz. The study of microseismicity in the Earth's crust with source dimensions on the centimeter scale allows scientists to close the gap between laboratory fracture studies and seismology ( e.g. , Mendecki 1997; Young and Hazzard 2000; Richardson and Jordan 2002). The analysis of seismic events over a broad range of magnitudes is a prerequisite to testing seismological theories such as the existence of a minimum earthquake size (Ide 1973; Dietrich 1979; Aki 1987) or the breakdown of scaling laws for source processes (Abercrombie 1995; Ide and Beroza 2001; Yamada et al. 2005). Observation of microseismicity is technically difficult as it requires monitoring of seismic waves at very high frequencies ( f > 1 kHz). In particular, a network of sensors with high sensitivity and short source-receiver distances is needed to observe high frequency events. Deep mines offer a unique possibility to study seismic source processes with high resolution at seismogenic depth. Seismicity in mines is strongly related to the ongoing mining process (Cook 1976; Gibowicz and Kijko 1994; Boettcher et al. 2009). Most of the seismicity is spatially and temporally clustered close to the newly created stope front (Cook 1963; McGarr 1971). Additional seismic events are observed on pre-existing zones of …

Journal ArticleDOI
TL;DR: As discussed by the authors, identifying the cause of past tsunamis is of paramount importance to mitigate the risk connected with future events, but recognizing a landslide-tsunami when the mass failure is entirely submarine is very challenging because of the paucity of detectable evidence.
Abstract: Identifying the cause of past tsunamis is of paramount importance to mitigate the risk connected with future events, but recognizing a landslide-tsunami when the mass failure is entirely submarine is very challenging because of the paucity of detectable evidence ( e.g. , Lynett et al. 1998; Tappin et al. 2001; Synolakis et al. 2002; Fritz et al. 2007). Efforts in this direction are, however, necessary because the arrival time of these tsunamis to the coast is commonly very short ( i.e. , they usually originate along the margin of the continental shelf) and the related runup and inundation may be locally very large (Bardet et al. 2003; Okal et al. 2003; Scheffers and Kelletat 2003; Okal and Synolakis 2004; Tappin, Watts, and Grilli 2008; Fritz et al. 2007). At present, the only effective way to mitigate the risk connected with landslide-tsunamis is to identify areas prone to these tsunamis. This task can be accomplished by marine surveys ( e.g. , Tappin et al. 2001; Chiocci et al. 2008; Minisini and Trincardi 2009) or through the analysis of historical tsunamis where these events are adequately documented ( e.g. , Okal et al. 2003, 2009; Marriner and Morhange 2007). Eastern Sicily and southern Calabria front the Messina Straits in southern Italy and have been affected by recent and historical destructive earthquakes and tsunamis (Boschi et al. 2000; Tinti et al. 2004; Neri et al. 2006; Pareschi et al. 2006; Galli et al. 2008; Gerardi et al. 2008; Pantosti et al. 2008; De Martini et al. 2010). For several of these events, careful observations and data were collected ( e.g. , Mercalli 1897, 1909; Omori 1909; Platania 1909; Baratta 1901, 1910), so …

Journal ArticleDOI
TL;DR: In this paper, the authors argue that a systematic study of source properties can help seismic hazard mitigation efforts and offer insights into the seismogenic patterns of the Greek region, and they describe the routine determination of moment tensors of even very small events ( Mw ∼ 3.5) from regional broadband data.
Abstract: The routine use of regional broadband data for the determination of moment tensors of even very small events ( Mw ∼ 3.5) has considerably enhanced our understanding of tectonic processes in many active regions worldwide ( e.g. , Kao and Jian 2001; Braunmiller et al. 2002; Kubo et al. 2002; Pondrelli et al. 2002; Clinton et al. 2006; Risteau 2008). This development can be attributed to a number of reasons, such as the ability to generate accurate synthetic seismograms for a given velocity model and an increase in the amount and quality of data at regional distances. An additional reason may also be the variety of available methods for regional moment tensor inversion that utilize different parts of the recorded waveform, like surface waves (Thio and Kanamori 1995), body and surface waves (Zhao and Helmberger 1993), or just the long-period part of the signal (Ritsema and Lay 1995). Greece has high seismicity, and large events in the past have caused significant damage and casualties (for an overview see Papazachos and Papazachou 1997). Thus, a systematic study of source properties can help seismic hazard mitigation efforts and offer insights into the seismogenic patterns of the Greek region. Most available source mechanisms of previous events in Greece were derived either from P -wave polarities (Papazachos et al. 1983, 1988; Papadopoulos et al. 1986) or from the inversion of teleseismic waveforms recorded by the Global Seismic Network (Lyon-Caen et al. 1988; Hatzfeld et al. 1996; Bernard et al. 1997; Kiratzi and Louvari 2003; Benetatos et al. 2004). However, the former approach depends heavily on station coverage and the reliability of the first-motion readings, while the latter one can be used only for moderate to large events ( Mw > 5.0). Since 1997 the National …
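Most regional moment tensor methods of the kind cited above reduce, after filtering, to a linear least-squares problem: observed seismograms are modeled as a linear combination of elementary Green's function seismograms weighted by the six moment tensor components. A schematic sketch with random placeholders standing in for real data and Green's functions (real codes compute G from a velocity model):

```python
# Linear moment tensor inversion: d = G m, solved by least squares.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 2000                             # concatenated filtered waveforms
G = rng.standard_normal((n_samples, 6))      # 6 elementary seismograms (fake)
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.1, 0.0])   # Mxx..Mxy (invented)
d = G @ m_true + 0.05 * rng.standard_normal(n_samples)  # "observed" data

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
variance_reduction = 1 - np.sum((d - G @ m_est) ** 2) / np.sum(d ** 2)
print(m_est.round(3), f"VR = {variance_reduction:.3f}")
```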

Journal ArticleDOI
TL;DR: In this paper, the authors describe the deployment of the Incorporated Research Institutions for Seismology-National Science Foundation-USGS Global Seismographic Network (GSN) in parallel with numerous international deployments of broadband seismometers in the Federation of Digital Seismic Networks (FDSN).
Abstract: Seismologists often jest that “the best way to stop earthquakes is to deploy seismic stations.” The laborious effort to install seismometers to record signals from earthquakes or to pursue targeted investigations of Earth's structure is sometimes confounded by the vagaries of earthquake occurrence, which is decidedly nonuniform in space and time. Of course, this often proves to be more of an anxiety than a reality, and most efforts, especially those employing multiyear installations, succeed in gathering valuable seismic data. One of the most gratifying examples of success is provided by the deployment of the Incorporated Research Institutions for Seismology–National Science Foundation–U.S. Geological Survey (IRIS-NSF-USGS) Global Seismographic Network (GSN) in parallel with numerous international deployments of broadband seismometers in the Federation of Digital Seismic Networks (FDSN). Between about 1982 and 2004, global installations of high-quality broadband digital seismometers proliferated, replacing the obsolete analog systems of the World-Wide Standardized Seismograph Network and upgrading sparse institutional observatories and first-generation digital networks. While deployments of important additional stations continue, by 2004 the GSN had achieved its basic design goals: openly available continuous broadband data from roughly 130 stations providing real-time global coverage and large dynamic range (Butler et al. 2004). Then the 26 December 2004 great Sumatra-Andaman earthquake ( Mw 9.2) occurred—the first event to exceed magnitude 9 since the 1964 Alaska earthquake. The GSN and FDSN provided unprecedented global seismic recordings for this event and enabled more detailed seismological investigation than was possible for any prior event of such size ( e.g. , Lay et al. 2005; Bilek et al. 2007). Those same globally distributed seismic stations have gathered data from a substantial number of great earthquakes; from 2001 to 2010, there have been 18 events with MS ≥ 8 and 13 events with Mw ≥ 8, whereas over the previous …

Journal ArticleDOI
TL;DR: In this paper, the authors used the ambient vibration H/V method, a proven tool for site-effect evaluation, to look for a correlation between earthquake damage and the frequency of ambient vibration H/V peaks.
Abstract: The ambient vibration H/V method (Nakamura 1989) is now widely used to determine the fundamental soil frequency and has proven to be a good tool for site effect evaluation ( e.g. , Bonnefoy-Claudet, Cornou et al. 2006). More particularly, it has been used for the zoning of a number of cities around the world, such as Basel, Switzerland (Fah et al. 1997); Quito, Ecuador (Gueguen et al. 2000); Barcelona, Spain (Alfaro et al. 2001); Caracas, Venezuela (Duval et al. 2001); Almeria, Spain (Navarro et al. 2001); and Thessaloniki, Greece (Panou et al. 2005). The method has also been used to search for a correlation between earthquake damage and the frequency of ambient vibration H/V peaks ( e.g. , Gueguen et al. 1998; Panou et al. 2005; Teves Costa et al. 2007; Cara et al. 2008), although other studies show that this correlation is not always clear or may not exist at all (Fallahi 2003; Gonzales et al. 2004; Chatelain, Guillier, and Parvez 2008; Theodulis et al. 2008). The 21 May 2003 Algerian ( Mw = 6.8) earthquake produced severe damage in the city of Boumerdes, among other towns (see Figure 1 for locations). This damage was not uniformly distributed throughout the city: the destruction level varied from 0 to 5%, except in two neighborhoods where it reached 30% (Hamane et al. 2007). These observations led us to try to link the destruction pattern to variations in local seismic wave amplification. One of the most widely used methods for studying this type of site effect is based on recording ambient vibrations (noise) in both the vertical and horizontal directions and computing their spectral ratio (a comprehensive review of ambient vibration studies may be found in Bonnefoy-Claudet, Bard, and Cotton 2006). The physical meanings of the ambient …
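The spectral-ratio computation described above is straightforward to express in code. Below is a minimal sketch of a Nakamura-style H/V curve, assuming three-component ambient noise records are already available as NumPy arrays; the window length, Hann taper, and geometric mean of the two horizontal spectra are common processing choices, not prescriptions from the paper.

```python
import numpy as np

def hv_ratio(z, n, e, fs, win_s=50.0):
    """Sketch of an ambient-vibration H/V curve (Nakamura-style).

    z, n, e : equal-length 1-D arrays; vertical and horizontal noise records
    fs      : sampling rate in Hz
    win_s   : window length in seconds (illustrative choice)
    """
    nwin = int(win_s * fs)
    nseg = len(z) // nwin
    freqs = np.fft.rfftfreq(nwin, d=1.0 / fs)
    taper = np.hanning(nwin)
    h_spec = np.zeros(len(freqs))
    v_spec = np.zeros(len(freqs))
    for k in range(nseg):
        sl = slice(k * nwin, (k + 1) * nwin)
        zs = np.abs(np.fft.rfft(z[sl] * taper))
        ns = np.abs(np.fft.rfft(n[sl] * taper))
        es = np.abs(np.fft.rfft(e[sl] * taper))
        # geometric mean of the two horizontal spectra, averaged over windows
        h_spec += np.sqrt(ns * es)
        v_spec += zs
    return freqs, (h_spec / nseg) / (v_spec / nseg)

# Illustrative use on synthetic noise (real data would come from field records):
fs = 100.0
rng = np.random.default_rng(1)
z, n, e = rng.standard_normal((3, int(30 * 60 * fs)))  # 30 minutes of "noise"
freqs, hv = hv_ratio(z, n, e, fs)
f0 = freqs[1:][np.argmax(hv[1:])]  # peak frequency ~ fundamental soil frequency
```

On white noise this peak is meaningless; on real ambient vibration records over soft sediments, the frequency of the H/V peak is interpreted as the fundamental soil frequency discussed in the abstract.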

Journal ArticleDOI
TL;DR: In this paper, the authors used instrument-corrected response spectra and Fourier amplitude data from 120 stations, at distances from 60 to 1,000 km, to examine the attenuation and source characteristics of this important event.
Abstract: The M 5.0 23 June 2010 Val-des-Bois, Quebec, earthquake produced a rich instrumental and felt ground-motion database. We use instrument-corrected response spectra and Fourier amplitude data from 120 stations, at distances from 60 to 1,000 km, to examine the attenuation and source characteristics of this important event. The Val-des-Bois earthquake produced relatively large response spectral amplitudes at distances less than 200 km, greater than predicted by most recent ground-motion prediction equations (GMPEs), including the Atkinson and Boore 2006 equations. By contrast, reported intensities at regional distances tended to be smaller than predicted by intensity GMPEs (Atkinson and Wald 2007), although they were high in the epicentral area. From recent moderate earthquakes in eastern North America (ENA) (the 2010 Val-des-Bois and 2005 Riviere du Loup events), we have learned that amplitudes at near distances are not well predicted by average attenuation shapes drawn to pass through regional observations. To infer the source spectrum or near-source motions, we suggest the use of seismic moment as a constraint on the level of the source spectrum. Using Q -corrected observations to deduce the source-spectral shape, and the known seismic moment to fix its absolute amplitude level, we obtain an apparent source spectrum for the Val-des-Bois earthquake. It is well described by a Brune model with a stress drop of 250 bars. Future work will focus on resolving near-source attenuation issues to provide better GMPEs for ENA for all magnitudes.
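The Brune model invoked in the abstract is fully specified once the seismic moment and stress drop are fixed. The sketch below reproduces that spectral shape using the standard Hanks-Kanamori moment-magnitude relation and the widely used corner-frequency expression fc = 4.9e6 · β · (Δσ/M0)^(1/3), with β in km/s, Δσ in bars, and M0 in dyne-cm; the shear-wave velocity of 3.7 km/s is an assumed value, not taken from the paper.

```python
import numpy as np

def brune_source_spectrum(mw, stress_drop_bars, beta_km_s=3.7):
    """Brune (1970) omega-squared source displacement spectrum.

    Moment from the Hanks-Kanamori relation (in dyne-cm); corner
    frequency from fc = 4.9e6 * beta * (dsigma / M0)**(1/3), the form
    commonly used in stochastic ground-motion modeling.
    beta_km_s = 3.7 is an assumed crustal shear-wave velocity.
    """
    m0 = 10.0 ** (1.5 * mw + 16.05)                      # dyne-cm
    fc = 4.9e6 * beta_km_s * (stress_drop_bars / m0) ** (1.0 / 3.0)
    freqs = np.logspace(-1, 2, 200)                      # 0.1 to 100 Hz
    disp = m0 / (1.0 + (freqs / fc) ** 2)                # displacement spectral shape
    return freqs, disp, fc

# Parameters from the abstract: M 5.0 event, 250-bar Brune stress drop
freqs, disp, fc = brune_source_spectrum(5.0, 250.0)
print(f"corner frequency ~ {fc:.2f} Hz")
```

With these values the corner frequency comes out near 1.6 Hz, which illustrates why a high-stress-drop moderate event is so rich in high-frequency motion at close distances.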

Journal ArticleDOI
TL;DR: In 2008, a strong earthquake with a magnitude of 6.4 occurred in Pakistan's southwestern province of Baluchistan; the worst-hit areas were the union councils of Kuch, Kawas, and Zandra in the districts of Ziarat, Pishin, and Harnai.
Abstract: On the morning of 29 October 2008, a strong earthquake of magnitude 6.4 struck Pakistan's southwestern province of Baluchistan. The worst-hit areas were the union councils of Kuch, Kawas, and Zandra in the districts of Ziarat, Pishin, and Harnai (see Figure 1). The epicenter was 70 km northeast of the provincial capital of Quetta and 25 km east of Ziarat. The earthquake affected more than 120,000 people; 164 people lost their lives and 173 were injured. The majority of the building stock in the area consists of mud/adobe houses with roofs of mud or corrugated sheet metal. More than 5,000 buildings were destroyed and another 4,000 damaged. The event came as no surprise: the affected region is known to be one of the most seismically active areas in the country and is placed in seismic zone 4, the most severe seismic zone in the Building Code of Pakistan 2007 (BCP 2007). In this western region of Pakistan, the most recent major destructive event had been the 1935 Quetta earthquake, which completely destroyed Quetta city with a death toll of 30,000 to 60,000. In Pakistan as a whole, the most recent major earthquake before this one was the 2005 Kashmir earthquake, which resulted in 86,000 deaths and the collapse of more than 600,000 buildings (Maqsood and Schwarz 2010); the total economic loss from that event was estimated at US$5 billion. Figure 1. ▴ Location of scenarios and affected areas. This article presents an overview of the seismicity and typical building types in the area affected by the 2008 Baluchistan earthquake, with emphasis on what can be learned from the 1935 Quetta earthquake and the eight earthquake-resistant building classes set forth in the Building Code for …