
Showing papers in "Bulletin of the Seismological Society of America in 1987"


Journal ArticleDOI
TL;DR: In this article, an initial path estimate is perturbed using a geometric interpretation of the ray equations, and the travel time along the path is minimized in a piecewise fashion; the perturbation is repeated iteratively until the travel time converges within a specified limit.
Abstract: A new approximate algorithm for two-point ray tracing is proposed and tested in a variety of laterally heterogeneous velocity models. An initial path estimate is perturbed using a geometric interpretation of the ray equations, and the travel time along the path is minimized in a piecewise fashion. This perturbation is iteratively performed until the travel time converges within a specified limit. Test results show that this algorithm successfully finds the correct travel time within typical observational error much faster than existing three-dimensional ray tracing programs. The method finds an accurate ray path in a fully three-dimensional form even where lateral variations in velocity are severe. Because our algorithm utilizes direct minimization of the travel time instead of solving the ray equations, a simple linear interpolation scheme can be employed to compute velocity as a function of position, providing an added computational advantage.
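The perturbation scheme described above can be sketched with a toy travel-time minimizer: perturb the interior nodes of a trial ray path, keep any move that lowers the travel time, and iterate until the travel time converges. The velocity model, node count, and fixed step size below are illustrative assumptions, not the paper's actual pseudo-bending update (a minimal Python sketch):

```python
import numpy as np

def velocity(p):
    # Hypothetical linear velocity model: v increases with depth z (km/s).
    x, z = p
    return 3.0 + 0.05 * z

def travel_time(path):
    # Trapezoidal slowness integration along straight segments.
    t = 0.0
    for a, b in zip(path[:-1], path[1:]):
        d = np.linalg.norm(b - a)
        t += d * 0.5 * (1.0 / velocity(a) + 1.0 / velocity(b))
    return t

def bend_ray(src, rec, n_nodes=9, n_iter=50, step=0.5, tol=1e-6):
    # Initial guess: straight line between source and receiver.
    path = np.linspace(src, rec, n_nodes)
    t_old = travel_time(path)
    for _ in range(n_iter):
        for i in range(1, n_nodes - 1):          # perturb interior nodes only
            for delta in ([0.0, step], [0.0, -step], [step, 0.0], [-step, 0.0]):
                trial = path.copy()
                trial[i] = path[i] + delta
                if travel_time(trial) < travel_time(path):
                    path = trial                  # keep any time-reducing move
        t_new = travel_time(path)
        if abs(t_old - t_new) < tol:             # travel-time convergence test
            break
        t_old = t_new
    return path, travel_time(path)

src = np.array([0.0, 0.0])
rec = np.array([100.0, 0.0])
path, t = bend_ray(src, rec)
```

In this linear-gradient model the bent ray dives below the straight-line path and arrives earlier, which is the behavior the travel-time minimization exploits.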

814 citations


Journal ArticleDOI
TL;DR: In this article, an automatic detection algorithm is developed which is capable of timing P-phase arrivals of both local and teleseismic earthquakes while rejecting noise bursts and transient events.
Abstract: An automatic detection algorithm has been developed which is capable of timing P-phase arrivals of both local and teleseismic earthquakes while rejecting noise bursts and transient events. For each signal trace, the envelope function is calculated and passed through a nonlinear amplifier. The resulting signal is then subjected to a statistical analysis to yield the arrival time, first motion, and a measure of reliability to be placed on the P-arrival pick. An incorporated dynamic threshold makes the algorithm very sensitive; thus, even weak signals are timed precisely. During an extended performance evaluation on a data set comprising 789 P phases of local events and 1,857 P phases of teleseismic events picked by an analyst, the automatic picker selected 66 per cent of the local phases and 90 per cent of the teleseismic phases. The accuracy of the automatic picks was “ideal” (i.e., could not be improved by the analyst) for 60 per cent of the local events and 63 per cent of the teleseismic events.
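A toy version of the envelope-plus-dynamic-threshold idea can be sketched as follows; the smoothing window, noise window, trigger ratio, and noise-update rule are illustrative assumptions, not the published picker's parameters:

```python
import numpy as np

def pick_p_arrival(trace, dt, smooth=0.2, noise_win=1.0, trigger=4.0):
    """Toy picker in the spirit of an envelope plus a dynamic threshold.

    All parameters (smoothing window, noise window, trigger ratio) are
    illustrative assumptions, not values from the published algorithm.
    """
    ns = max(1, int(smooth / dt))
    env = np.convolve(trace ** 2, np.ones(ns) / ns, mode="same")  # smoothed envelope
    nn = max(ns, int(noise_win / dt))
    noise = env[:nn].mean()                   # initial noise estimate
    for i in range(nn, len(env)):
        if env[i] > trigger * noise:          # dynamic threshold exceeded
            return i * dt                     # candidate P-arrival time (s)
        noise = 0.99 * noise + 0.01 * env[i]  # track slowly varying noise
    return None

rng = np.random.default_rng(0)
dt = 0.01
trace = 0.1 * rng.standard_normal(2000)
trace[1000:] += np.sin(2 * np.pi * 5 * np.arange(1000) * dt)  # onset at t = 10 s
t_pick = pick_p_arrival(trace, dt)
```

Because the noise level is updated continuously, the trigger adapts to slowly changing background noise while still firing on an emergent signal.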

390 citations


Journal ArticleDOI
TL;DR: In this paper, a distance correction curve for determining the local magnitude, M_L, was derived from data for 972 earthquakes recorded by the Southern California Seismographic Network.
Abstract: Measurements (9,941) of peak amplitudes on Wood-Anderson instruments (or simulated Wood-Anderson instruments) in the Southern California Seismographic Network for 972 earthquakes, primarily located in southern California, were studied with the aim of determining a new distance correction curve for use in determining the local magnitude, M_L. Events in the Mammoth Lakes area were found to give an unusual attenuation pattern and were excluded from the analysis, as were readings from any one earthquake at distances beyond the first occurrence of amplitudes less than 0.3 mm. The remaining 7,355 amplitudes from 814 earthquakes yielded the following equation for the M_L distance correction: −log A_0 = 1.110 log(r/100) + 0.00189(r − 100) + 3.0, where r is hypocentral distance in kilometers. A new set of station corrections was also determined from the analysis. The standard deviation of the M_L residuals obtained by using this curve and the station corrections was 0.21. The data used to derive the equation came from earthquakes with hypocentral distances ranging from about 10 to 700 km and focal depths down to 20 km (with most depths less than 10 km). The log A_0 values from this equation are similar to the standard values listed in Richter (1958) for distances between about 50 and 200 km, but depart from them at closer and greater distances. The effect at close distances is consistent with that found in several other studies, and is simply due to the difference between the observed ≈ 1/r geometrical spreading for body waves and the 1/r^2 spreading assumed by Gutenberg and Richter in the construction of the log A_0 table. M_L's computed from our curve and those reported in the Caltech catalog show a systematic dependence on magnitude: small earthquakes have larger magnitudes than in the catalog and large earthquakes have smaller magnitudes (by as much as 0.6 units).
To a large extent, these systematic differences are due to the nonuniform distribution of data in magnitude-distance space (small earthquakes are preferentially recorded at close distances relative to large earthquakes). For large earthquakes, however, the difference in the two magnitudes is not solely due to the new correction for attenuation; magnitudes computed using Richter's log A_0 curve are also low relative to the catalog values. The differences in that case may be due to subjective judgment on the part of those determining the catalog magnitudes, the use of data other than the Caltech Wood-Anderson seismographs, the use of different station corrections, or the use of teleseismic magnitude determinations. Whatever their cause, the departures at large magnitude may explain a 1.0:0.7 proportionality found by Luco (1982) between M_L's determined from real Wood-Anderson records and those from records synthesized from strong-motion instruments. If it were not for the biases in reported magnitudes, Luco's finding would imply a magnitude-dependent shape in the attenuation curves. We studied residuals in three magnitude classes (2.0 < M_L ≦ 3.5, 3.5 < M_L ≦ 5.5, and 5.5 < M_L ≦ 7.0) and found no support for such a magnitude dependence. Based on our results, we propose that local magnitude scales be defined such that M_L = 3 corresponds to 10 mm of motion on a Wood-Anderson instrument at 17 km hypocentral distance, rather than 1 mm of motion at 100 km. This is consistent with the original definition of magnitude in southern California and will allow more meaningful comparison of earthquakes in regions having very different attenuation of waves within the first 100 km.
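The distance correction quoted above can be applied directly. A minimal sketch, using the paper's −log A_0 equation and checking its proposed anchor (M_L = 3 for 10 mm at 17 km hypocentral distance); the function names and the zero station-correction default are assumptions for illustration:

```python
import math

def minus_log_A0(r_km):
    # Distance correction from the paper's regression (r = hypocentral distance, km).
    return 1.110 * math.log10(r_km / 100.0) + 0.00189 * (r_km - 100.0) + 3.0

def local_magnitude(amp_mm, r_km, station_corr=0.0):
    # M_L = log10(A) + (-log10(A_0)) + station correction.
    return math.log10(amp_mm) + minus_log_A0(r_km) + station_corr

# Check the proposed anchor: ~10 mm at 17 km should give M_L close to 3.
m_anchor = local_magnitude(10.0, 17.0)
```

By construction the curve also reproduces the classical anchor of 1 mm at 100 km, since −log A_0(100) = 3.0 exactly.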

366 citations


Journal ArticleDOI
TL;DR: In this paper, a normalizing function, T/Tave, is used to estimate the expected time and prediction time window for a future earthquake recurrence, and the analysis is refined using the median recurrence interval, T, rather than Tave.
Abstract: Earthquake recurrence intervals for characteristic events from a number of plate boundaries are analyzed using a normalizing function, T/Tave, where Tave is the observed average recurrence interval for a specific fault segment, and T is an individual recurrence interval. The lognormal distribution is found to provide a better fit to the T/Tave data than the more commonly used Gaussian and Weibull distributions, and it has an appealing physical interpretation. The observation of a small and stable coefficient of variation (the ratio of the standard deviation to the mean) of normalized data covering a wide range of recurrence intervals, seismic moments, and tectonic environments indicates that the standard deviation of the recurrence intervals for each fault segment is a fixed fraction of the corresponding average recurrence interval. Given that the distribution of T/Tave data is approximately lognormal, the analysis is refined using the median recurrence interval, T̂, rather than Tave. An approximately optimal algorithm is derived for making stable estimates of T̂, its standard deviation, and the reliability of each T/T̂ datum. By accounting for possible errors in the data, this algorithm treats both historical and geological data properly. All information is then combined to make an optimal estimate of the distribution of the T/T̂ data. This refined distribution is finally used to estimate the probability of a recurrence in a future forecast time interval and the reliability of the forecast. It is found that the forecast interval must be short compared with T̂ for the forecast to be statistically meaningful. In addition, the distribution of T/T̂ data can be used to estimate an expected time and prediction time window for a future earthquake recurrence.
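The conditional forecast probability described above can be sketched for a lognormal recurrence model: given that no event has occurred by elapsed time t, compute the probability of recurrence in the next Δt. The coefficient of variation and the recurrence numbers below are illustrative assumptions, not values from the paper:

```python
import math

def lognormal_cdf(x, mu, sigma):
    # CDF of a lognormal variable with parameters mu, sigma of ln(T).
    if x <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def conditional_recurrence_prob(t_elapsed, dt, t_ave, cov=0.21):
    """P(event in (t, t+dt) | no event by t) for a lognormal recurrence time.

    `cov` is an assumed coefficient of variation (illustrative); the paper
    argues the CoV is small and stable across fault segments.
    """
    sigma = math.sqrt(math.log(1.0 + cov ** 2))   # lognormal shape from CoV
    mu = math.log(t_ave) - 0.5 * sigma ** 2       # so the mean of T equals t_ave
    num = lognormal_cdf(t_elapsed + dt, mu, sigma) - lognormal_cdf(t_elapsed, mu, sigma)
    den = 1.0 - lognormal_cdf(t_elapsed, mu, sigma)
    return num / den if den > 0 else 1.0

# Hazard late in the cycle (140 of ~150 yr elapsed) vs. early (50 yr elapsed).
p_late = conditional_recurrence_prob(140.0, 30.0, 150.0)
p_early = conditional_recurrence_prob(50.0, 30.0, 150.0)
```

The rising conditional probability with elapsed time is what makes a short forecast window late in the cycle statistically meaningful.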

350 citations


Journal ArticleDOI
TL;DR: The stochastic model has been successfully applied to the prediction of ground motions in the Western United States and to short-period magnitudes from large to great earthquakes worldwide as mentioned in this paper.
Abstract: Empirical predictions of ground motions from large eastern North American earthquakes are hampered by a lack of data for such events. For this reason, most prediction techniques have been based, at least in part, on data from seismically active and well-instrumented western North America. Concentrating on the prediction of response spectra on hard-rock sites, we have used a relatively new, theoretical technique that does not require western data to make ground motion predictions for eastern North America. This method, often referred to as the stochastic model, has its origins in the work of Hanks and McGuire, who treat high-frequency motions as filtered random Gaussian noise, for which the filter parameters are determined by a seismological model of both the source and the wave propagation. The model has been successfully applied to the prediction of ground motions in the Western United States and to short-period magnitudes from large to great earthquakes worldwide. For our application, the essential parameters of the model are estimated by using existing data from small to moderate eastern North American earthquakes. A crucial part of the model is the relation between seismic moment and corner frequency. The relation proposed in 1983 by Nuttli for mid-plate earthquakes leads to predictions of ground motions that are lower than available data by a factor of about 4. On the other hand, a constant stress parameter of 100 bars gives model predictions in good accord with the data. To aid in applications, the ground motion predictions are given in the form of regression equations for earthquakes of magnitude 4.5 to 7.5, at distances within 100 km of the source. The explanatory variables are hypocentral distance and moment magnitude (M). Because predictions are often required in terms of m_(bLg) rather than M, we have used the theoretical model to establish a relation between the two magnitudes.
The predicted relation agrees with the sparse data available, although the large uncertainties in the observed magnitudes for the larger events, as well as the sensitivity of the theoretical magnitude to the attenuation model, make it difficult to discriminate between various source-scaling models.
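The filtered-Gaussian-noise idea behind the stochastic model can be sketched as follows. The Brune corner-frequency scaling f_c = 4.9×10^6 β (Δσ/M_0)^(1/3) (β in km/s, M_0 in dyne-cm, Δσ in bars) is a standard relation, but the omission of path attenuation, duration windowing, and site terms makes this an illustrative sketch only:

```python
import numpy as np

def stochastic_acceleration(m0, stress_bars=100.0, beta=3.5, dt=0.005, dur=10.0, seed=0):
    """Band-limited random-noise sketch in the spirit of the stochastic model.

    White Gaussian noise is shaped in the frequency domain by an
    omega-squared (Brune) acceleration source spectrum; path, duration,
    and site filters are deliberately omitted for brevity.
    """
    fc = 4.9e6 * beta * (stress_bars / m0) ** (1.0 / 3.0)   # corner frequency (Hz)
    n = int(dur / dt)
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(rng.standard_normal(n))              # white Gaussian noise
    f = np.fft.rfftfreq(n, dt)
    shape = f ** 2 / (1.0 + (f / fc) ** 2)                  # omega-squared acceleration shape
    return np.fft.irfft(spec * shape, n), fc

acc, fc = stochastic_acceleration(1e25)   # M0 = 1e25 dyne-cm (roughly Mw 6)
```

With the constant 100-bar stress parameter favored in the abstract, the corner frequency scales as M_0^(−1/3), which fixes the spectral shape for each magnitude.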

336 citations


Journal ArticleDOI
TL;DR: In this article, a new model of seismic coda is presented, based on the balance between the energy scattered from the direct wave and the energy in the seismic coda; the model is tested using synthetic seismograms produced in finite difference simulations of wave propagation through media with random spatial variations in seismic velocity.
Abstract: A new model of seismic coda is presented, based on the balance between the energy scattered from the direct wave and the energy in the seismic coda. This energy-flux model results in a simple formula for the amplitude and time decay of the seismic coda that explicitly differentiates between the scattering and intrinsic (anelastic) attenuation of the medium. This formula is valid for both weak and strong scattering and implicitly includes multiple scattering. The model is tested using synthetic seismograms produced in finite difference simulations of wave propagation through media with random spatial variations in seismic velocity. Some of the simulations also included intrinsic dissipation. The energy-flux model explains the coda decay and amplitude observed in the synthetics, for random media with a wide range of scattering Q. In contrast, the single-scattering model commonly used in the analysis of microearthquake coda does not account for the gradual coda decay observed in the simulations for media with moderate or strong scattering attenuation (scattering Q less than or equal to 150). The simulations demonstrate that large differences in scattering attenuation cause only small changes in the coda decay rate, as predicted by the energy-flux model. The coda decay rate is sensitive, however, to the intrinsic Q of the medium. The ratio of the coda amplitude to the energy in the direct arrival is a measure of the scattering attenuation. Thus, analysis of the decay rate and amplitude of the coda can, in principle, produce separate estimates for the scattering and intrinsic Q values of the crust. We analyze the coda from two earthquakes near Anza, California. Intrinsic Q values determined from these seismograms using the energy-flux model are comparable to coda Q values found from the single-scattering theory.
The results demonstrate that coda Q values are, at best, measures of the intrinsic attenuation of the lithosphere and are unrelated to the scattering Q.
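For comparison with the single-scattering analysis mentioned above, coda Q can be recovered from a coda envelope by linear regression on the single-scattering decay form A(t) ∝ t^(−1) exp(−π f t/Q_c); the synthetic numbers below are illustrative:

```python
import numpy as np

def fit_coda_q(t, amp, f):
    # ln(A * t) = const - (pi * f / Qc) * t, so the slope gives coda Q.
    y = np.log(amp * t)
    slope, _ = np.polyfit(t, y, 1)
    return -np.pi * f / slope

# Synthetic coda obeying the single-scattering decay with Qc = 200 at 6 Hz.
f_hz, qc_true = 6.0, 200.0
t = np.linspace(20.0, 60.0, 200)          # lapse time window (s)
amp = (1.0 / t) * np.exp(-np.pi * f_hz * t / qc_true)
qc = fit_coda_q(t, amp, f_hz)
```

On noise-free synthetic data the regression recovers Q_c exactly; the abstract's point is that this Q_c reflects intrinsic rather than scattering attenuation.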

332 citations


Journal ArticleDOI
Fabio Sabetta1, Antonio Pugliese1
TL;DR: In this paper, the authors used Italian strong motion data to study attenuation characteristics of horizontal peak ground acceleration and velocity and found that peak acceleration and velocities were lognormally distributed with a standard error representing a 49 and 64 per cent increase in the median estimate.
Abstract: Italian strong-motion data were used to study attenuation characteristics of horizontal peak ground acceleration and velocity. The data base consisted of 190 horizontal components of accelerograms recorded in Italy since 1976 from 17 earthquakes of magnitudes 4.6 to 6.8. The resulting equations are log A = −1.562 + 0.306M − log(R^2 + 5.8^2)^(1/2) + 0.169S and log V = −0.710 + 0.455M − log(R^2 + 3.6^2)^(1/2) + 0.133S, where A is peak horizontal acceleration in g, V is peak horizontal velocity in centimeters/second, M is magnitude, R is the closest distance to the surface projection of the fault rupture in kilometers, and S is a variable taking the values of 0 and 1 according to the local site geology. Peak acceleration and velocity were found to be lognormally distributed with a standard error representing, respectively, a 49 and 64 per cent increase in the median estimate. We considered a magnitude-dependent shape of the attenuation curves, but we found no basis for it in the data. Attenuation relationships developed using epicentral distance in place of fault distance gave similar results, with higher standard errors and higher predicted values for short distances and high magnitudes. Sensitivity studies, performed with the analysis of the residuals, showed that predictions based on our relations are stable with respect to reasonable variations of the model and the seismic areas providing the data. Comparisons with attenuation relationships proposed for western North America showed differences of the same order of magnitude as the statistical prediction uncertainty.
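The two regression equations above can be evaluated directly; the function names and the example magnitude and distance are illustrative:

```python
import math

def peak_acceleration_g(m, r_km, s=0):
    # log10(A[g]) = -1.562 + 0.306 M - log10(sqrt(R^2 + 5.8^2)) + 0.169 S
    return 10 ** (-1.562 + 0.306 * m - math.log10(math.hypot(r_km, 5.8)) + 0.169 * s)

def peak_velocity_cms(m, r_km, s=0):
    # log10(V[cm/s]) = -0.710 + 0.455 M - log10(sqrt(R^2 + 3.6^2)) + 0.133 S
    return 10 ** (-0.710 + 0.455 * m - math.log10(math.hypot(r_km, 3.6)) + 0.133 * s)

a = peak_acceleration_g(6.5, 10.0)   # peak acceleration (g) for M 6.5 at 10 km, rock site
v = peak_velocity_cms(6.5, 10.0)     # peak velocity (cm/s) for the same scenario
```

The 5.8 and 3.6 km terms act as near-source saturation depths, so predictions remain finite as R goes to zero.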

232 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that there is substantial amplification of motions at the ridges of Canal Beagle, a subdivision of Vina del Mar, by using frequency-dependent spectral ratios of aftershock data obtained from a temporarily established dense array.
Abstract: Site-response experiments were performed 5 months after the MS = 7.8 central Chile earthquake of 3 March 1985 to identify amplification due to topography and geology. Topographical amplification at Canal Beagle, a subdivision of Vina del Mar, was hypothesized immediately after the main event, when extensive damage was observed on the ridges of Canal Beagle. Using frequency-dependent spectral ratios of aftershock data obtained from a temporarily established dense array, it is shown that there is substantial amplification of motions at the ridges of Canal Beagle. The data set constitutes the first such set depicting topographical amplification at a heavily populated region and correlates well with the damage distribution observed during the main event. Dense arrays established in Vina del Mar also yielded extensive data which are quantified to show that, in the range of frequencies of engineering interest, there was substantial amplification at different sites of different geological formations. To substantiate this, spectral ratios developed from the strong-motion records of the main event are used to show the extensive degree of amplification at an alluvial site as compared to a rock site. Similarly, spectral ratios developed from aftershocks recorded at comparable stations qualitatively confirm that the frequency ranges for which the amplification of motions occurs are quite similar to those from strong-motion records. In the case of weak motions, the denser arrays established temporarily as described herein can be used to identify the frequency ranges for which amplification occurs, to quantify the degree of frequency-dependent amplification, and to aid in the microzonation of closely spaced localities.

216 citations


Journal ArticleDOI
TL;DR: Nonuniform occurrence of large surface-rupturing slip events on seismogenic faults and variations in slip rate probably characterize seismogenic faulting in the Great Basin province, Western United States.
Abstract: Nonuniformity of the occurrence of large slip events producing surface ruptures on seismogenic faults and variations in slip rate probably characterize seismogenic faulting in the Great Basin province, Western United States. Examples include: the grouping of faulting events along the Lost River fault, Idaho; changes in tilt rates of the East Range and Cortez Mountains, Nevada; extension of slip along a fault on the northwest flank of the Humboldt Range, Nevada; and migration or shifting of slip back and forth from one fault to another along subparallel range-front faults, in Dixie Valley, Nevada.

199 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyzed a catalog of earthquakes from the New Hebrides for the occurrence of temporal clusters that exhibit fractal behavior and found that significant deviations from random or Poisson behavior are found.
Abstract: The concept of fractals provides a means of testing whether clustering in time or space is a scale-invariant process. If the fraction x of the intervals of length T containing earthquakes is related to the time interval by x ∝ T^(1−D), then fractal clustering is occurring with fractal dimension D (0 ≤ D ≤ 1). We have analyzed a catalog of earthquakes from the New Hebrides for the occurrence of temporal clusters that exhibit fractal behavior. Our studies have considered four distinct regions. The number of earthquakes considered in each region varies from 44 to 1,330. In all cases, significant deviations from random or Poisson behavior are found. The fractal dimensions found vary from 0.126 to 0.255. Our method introduces a new means of quantifying the clustering of earthquakes.
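The interval-counting estimate of the fractal dimension can be sketched as follows: count the fraction x(T) of bins of width T that contain at least one event and fit the scaling of x with T. The Cantor-set test data and scale range are illustrative assumptions (a Cantor set should give D near log 2/log 3 ≈ 0.63):

```python
import numpy as np

def interval_dimension(times, scales):
    # Fraction x(T) of bins of width T containing >= 1 event, then fit
    # log x = (1 - D) log T + c, so D = 1 - slope.
    xs = []
    for T in scales:
        edges = np.arange(times.min(), times.max() + T, T)
        counts, _ = np.histogram(times, bins=edges)
        xs.append((counts > 0).mean())
    slope, _ = np.polyfit(np.log(scales), np.log(xs), 1)
    return 1.0 - slope

def cantor(level, lo=0.0, hi=1.0):
    # Midpoints of a middle-thirds Cantor construction: a clustered test set.
    if level == 0:
        return [(lo + hi) / 2]
    third = (hi - lo) / 3
    return cantor(level - 1, lo, lo + third) + cantor(level - 1, hi - third, hi)

times = np.array(cantor(10))            # 1024 strongly clustered "event times"
scales = np.logspace(-3, -1, 10)        # interval lengths in the scaling regime
d = interval_dimension(times, scales)
```

A Poisson catalog analyzed the same way trends toward D = 1 at scales where most bins are occupied, which is the random end-member the abstract contrasts against.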

192 citations


Journal ArticleDOI
TL;DR: In this paper, the authors have found that these catalogs include a complex mixture of real and man-made changes and that these changes can be recognized by examining the distribution of seismicity rate changes in the magnitude domain.
Abstract: Seismicity catalogs contain important information about processes which occur in seismically active regions of the earth. Many authors have examined these catalogs for patterns and variations in patterns which might reflect changes in these processes. We have found that these catalogs include a complex mixture of real and man-made changes. One must identify and account for the man-made changes before the real ones can be identified and understood. Many man-made changes in seismicity catalogs are manifested as changes in seismicity rates and can, therefore, be identified through careful examination of these rates. Obvious effects include increases or decreases in the detection and reporting of smaller events which accompany the installation or closure of seismic stations. These types of changes can be recognized by examining the distribution of seismicity rate changes in the magnitude domain. They are characterized by increases or decreases in the number of small events in the catalog at times when the number of larger events remains constant. Systematic changes in the magnitudes assigned to events can also be identified by examining seismicity rates because they cause apparent changes in rates of data sets with magnitude cutoffs. The sign of the apparent rate change depends on the sign of the magnitude change and the type of cutoff used. The effects of detection changes can be easily remedied by using a magnitude cutoff, which eliminates the smaller events from consideration. The techniques we have developed allow one to determine the optimum cutoff. Magnitude corrections are necessary for remedying the effects of systematic changes in magnitude estimates. These corrections can be determined through modeling of observed rate changes caused by these shifts. Man-made changes are present in all seismicity catalogs whether local, regional, or teleseismic. 
They must be accounted for if these catalogs are to provide meaningful information on real process changes in the earth.
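The magnitude-cutoff remedy described above can be sketched on a synthetic catalog in which small events are reported only after a hypothetical station installation; counting only events above the cutoff removes the man-made rate step. All catalog parameters below are illustrative:

```python
import numpy as np

def rate_history(times, mags, m_cut, bin_years=1.0):
    # Events per bin above the magnitude cutoff; detection changes below
    # m_cut no longer affect the counts.
    keep = mags >= m_cut
    edges = np.arange(times.min(), times.max() + bin_years, bin_years)
    counts, _ = np.histogram(times[keep], bins=edges)
    return edges[:-1], counts

# Synthetic catalog: a constant M >= 3 rate throughout, but M < 3 events
# are reported only after a (hypothetical) 1980 network upgrade.
rng = np.random.default_rng(1)
t_big = rng.uniform(1970.0, 1990.0, 4000)
m_big = 3.0 + rng.exponential(0.4, 4000)       # G-R-like magnitudes above 3
t_small = rng.uniform(1980.0, 1990.0, 3000)    # small events, post-1980 only
m_small = 2.0 + rng.uniform(0.0, 1.0, 3000)
times = np.concatenate([t_big, t_small])
mags = np.concatenate([m_big, m_small])

_, c_raw = rate_history(times, mags, 0.0)      # apparent rate step in 1980
_, c_cut = rate_history(times, mags, 3.0)      # roughly flat after the cutoff
```

The raw counts show a large apparent rate increase at the detection change, while the cutoff catalog stays roughly constant, mirroring the diagnostic in the abstract.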

Journal ArticleDOI
TL;DR: In this article, a wave propagation in the crust suggests that attenuation relations should be more complex, and this complexity may be present in strong ground motion data for eastern North American earthquakes, which show amplitudes in the distance range of 60 to 150 km that lie above the trends at smaller and greater distances.
Abstract: Strong ground motion attenuation relations are usually described by smoothly decreasing functions of distance. However, consideration of wave propagation in the crust suggests that attenuation relations should be more complex. Such complexity may be present in strong ground motion data for eastern North American earthquakes, which show amplitudes in the distance range of 60 to 150 km that lie above the trends at smaller and greater distances. Using a wavenumber integration method to compute Green's functions and close-in recordings of several earthquakes as empirical source functions, we have generated synthetic seismograms that are in good agreement with regional and strong-motion recordings of eastern North American earthquakes. From these synthetic seismograms, we have shown that the observed interval of relatively high amplitudes may be attributable to postcritically reflected S waves from the Moho. The presence and location of the interval of relatively high amplitudes is highly dependent on the crustal velocity structure and may therefore be expected to show regional variation. However, for any realistic structure model, there will be a transition in the attenuation relation from an interval at shorter distances (less than about 100 km) that is dominated by direct waves to an interval at greater distances that is dominated by postcritically reflected waves. The synthetic seismograms have response spectral velocities that match those of the recorded data, and their m_(bLg) values are in good agreement with observed values.

Journal ArticleDOI
TL;DR: In this paper, a Y-shaped zone of surface faults divided into a southern, a western, and a northern section is described; the southern section has the largest amount of net throw, most complex rupture patterns, and best evidence of sinistral slip.
Abstract: On the morning of 28 October 1983, the Ms 7.3 Borah Peak earthquake struck central Idaho and formed a Y-shaped zone of surface faults that is divided into a southern, a western, and a northern section. The total length of the surface faults is 36.4 ± 3.1 km, and the maximum net throw is 2.5 to 2.7 m. The near-surface net slip direction, determined from the rakes of striations in colluvium, averaged 0.17 m of sinistral slip for 1.00 m of dip slip. The 20.8-km-long southern section is the main zone of surface faulting and coincides with the Thousand Springs segment of the Lost River fault. It has the largest amount of net throw, most complex rupture patterns, and best evidence of sinistral slip. The surface faults include zones of ground breakage as much as 140 m wide, en echelon scarps with synthetic and antithetic displacements, and individual scarps that are nearly 5 m high. The 14.2-km-long western section diverges away from the Lost River fault near the northern end of the southern section. The net throw on this section is generally less than 0.5 m but locally is as much as 1.6 m. The new ruptures are poorly developed across the crest and north flank of the Willow Creek hills; they are mostly downhill-facing, arcuate scars, perhaps incipient landslides, that may overlie a deeper zone of tectonic movement. The northern section, at least 7.9 km long, is on the Warm Spring segment of the Lost River fault and has a maximum net throw of about 1 m. The pattern of surface faulting on this section is simple compared to the other sections. A 4.7-km-long gap in 1983 surface faults separates the northern and southern sections but contains an older scarp of late Pleistocene age. Geologic, seismologic, and geodetic data from the earthquake suggest that barriers confined the primary coseismic rupture to the Thousand Springs segment of the fault. The rupture propagated unilaterally to the northwest from a hypocenter near the southeastern end of the segment.
The southeastern boundary of the segment is marked by an abrupt bend in the range front, a 4-km-long gap in late Quaternary scarps, and transverse faults of Eocene age that intersect the Lost River fault. The northwestern boundary of the Thousand Springs segment is at the junction of the Willow Creek hills and the Lost River fault. Here, the southern and western sections of surface faults diverge and there is a gap in the 1983 scarps. During the first few weeks after the main shock, the large-magnitude and large stress-drop aftershocks clustered near this barrier. Later, aftershocks were mainly northwest of the barrier on the Warm Spring and Challis segments, and showed that strain adjustments eventually affected the entire northern part of the Lost River fault. Fault-scarp morphology and the bedrock geology suggest that the boundary between the Thousand Springs and Warm Spring segments has probably ruptured less frequently and had less net slip during much of the late Cenozoic than the interior of the adjacent segments. The 1983 faulting shows that although segment boundaries can stop or deflect primary ruptures, secondary surface faulting can occur on adjacent segments of the main fault. A late Pleistocene scarp in the 1983 gap suggests that infrequent earthquakes, perhaps larger than the 1983 event, might break through a segment boundary and thus release strain on two adjacent segments.

Journal ArticleDOI
TL;DR: In this paper, the effects of the free surface, near-surface velocity gradients, and low-impedance surface layers on the amplitudes of upcoming body waves are analyzed using a simple plane wave model, which is adequate to explain many surface versus borehole seismometer data sets.
Abstract: A simple plane wave model is adequate to explain many surface versus borehole seismometer data sets. Using such a model, we present a series of examples which demonstrate the effects of the free-surface, near-surface velocity gradients, and low impedance surface layers on the amplitudes of upcoming body waves. In some cases, these amplitudes are predictable from simple free-surface and impedance contrast expressions. However, in other cases these expressions are an unreliable guide to the complete response, and the full plane wave calculation must be performed. Large surface amplifications are possible, even without focusing due to lateral heterogeneities or nonlinear effects. Both surface and borehole seismometer site responses are almost always frequency-dependent. Ocean bottom versus borehole seismic data from the 1983 Ngendei Seismic Experiment in the southwest Pacific are consistent with both a simple plane wave model and a more complete synthetic seismogram calculation. The borehole seismic response to upcoming P waves is reduced at high frequencies because of interference between the upgoing P wave and downgoing P and SV waves reflected from the sediment-basement interface. However, because of much lower borehole noise levels, the borehole seismometer enjoys a P -wave signal-to-noise advantage of 3 to 7 dB over nearby ocean bottom instruments.

Journal ArticleDOI
J. A. Snoke1
TL;DR: In this paper, the dependence of the Brune stress drop on vc was replaced with a dependence on a parameter which can be determined more reliably and which appears linearly (instead of as the third power) in the expression for σB.
Abstract: The Brune stress drop, σ_B, is generally calculated from two spectral observables: Ω_0, the zero-frequency level of the far-field displacement amplitude spectrum, and v_c, the corner frequency of that spectrum. σ_B takes the form σ_B = hΩ_0v_c^3 (a), where h contains kinematic factors. In some recent studies, other estimates of the stress drop, such as the dynamic stress drop and the apparent stress, are found to be more stable and/or reliable than σ_B. We find that these apparent shortcomings of σ_B can be remedied if we replace the dependence of σ_B on v_c as a directly determined spectral observable with a dependence on a parameter which can be determined more reliably and which appears linearly (instead of as the third power) in the expression for σ_B. This observable is J, the integral of the square of the ground velocity, which, by Parseval's theorem, is the second moment of the power spectrum. For spectra from broadband seismographs, the determination of J is totally objective; it is an integral for which the integrand approaches zero at both limits. For Brune's model, v_c^3 = qJ/Ω_0^2 with q = 1/(2π^3), and equation (a) becomes σ_B = hqJ/Ω_0 (b). If the P-wave contribution to the radiated energy as well as the angular dependence of J are neglected, the right-hand side of equation (b) is a constant multiple of the apparent stress. Hence, studies which show that apparent stress determinations provide more robust estimates of the stress drop than σ_B determined through equation (a) support evaluating σ_B through equation (b).
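The ingredients of equation (b) can be checked numerically: build a Brune spectrum, compute J from the squared velocity spectrum via Parseval's theorem, and recover the corner frequency from v_c^3 = qJ/Ω_0^2 with q = 1/(2π^3). The spectral parameters below are illustrative:

```python
import numpy as np

# Brune-model far-field displacement spectrum with assumed O0 = 1, vc = 2 Hz.
O0, vc = 1.0, 2.0
f = np.linspace(0.0, 400.0, 400001)
disp = O0 / (1.0 + (f / vc) ** 2)            # displacement spectrum Omega(f)
vel = 2.0 * np.pi * f * disp                 # velocity spectrum |V(f)|

# J = integral of squared ground velocity; by Parseval's theorem this equals
# twice the one-sided integral of |V(f)|^2.  The integrand vanishes at both
# limits, as the abstract notes.
df = f[1] - f[0]
J = 2.0 * np.sum(vel ** 2) * df
vc_est = (J / (2.0 * np.pi ** 3 * O0 ** 2)) ** (1.0 / 3.0)   # vc^3 = qJ/O0^2
```

Because J enters σ_B linearly while v_c enters cubed, a given relative error in J perturbs σ_B three times less than the same relative error in v_c.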

Journal ArticleDOI
TL;DR: In this article, the displacement field is evaluated throughout the elastic medium so that the continuity of the displacement and the traction fields along the interfaces between the layers is satisfied in a least-squares sense.
Abstract: Diffraction of plane harmonic P, SV, and Rayleigh waves by dipping layers of arbitrary shape is investigated by using an indirect boundary integral equation approach. The layers are of finite length and perfectly bonded together. The material of the layers is assumed to be homogeneous, weakly anelastic, and isotropic. The displacement field is evaluated throughout the elastic medium so that the continuity of the displacement and the traction fields along the interfaces between the layers is satisfied in a least-squares sense. The numerical results presented show that the surface strong ground motion amplification effects depend upon a number of parameters present in the problem, such as the type, frequency, and angle of the incident wave, the impedance contrast between the layers, the component of the displacement field being observed, and the location of the observation point at the surface of the half-space. The results demonstrate that the presence of soft alluvial deposits in the form of dipping layers may cause locally very large amplification or reduction of the surface ground motion when compared with the corresponding free-field motion.

Journal ArticleDOI
TL;DR: In this article, the authors examined P, S, and surface waves derived from seismograms that were collected for the 1929 Grand Banks, Canada, earthquake and found that the total volume of sedimentary slumping was about 5.5 × 10^(11) m^3, which is approximately 5 times larger than a recent estimate of volume from in situ measurements.
Abstract: We have examined P, S, and surface waves derived from seismograms that we collected for the 1929 Grand Banks, Canada, earthquake. This event is noteworthy for the sediment slide and turbidity current that broke the trans-Atlantic cables and for its destructive tsunami. Both the surface-wave magnitude, M_S, and the body-wave magnitude, m_B, calculated from these seismograms are 7.2. Fault mechanisms previously suggested for this event include a NW-SE-striking strike-slip mechanism and an approximately E-W-striking thrust mechanism. In addition, because of the presence of an extensive area of slump and turbidity current, there exists the possibility that sediment slumping could also be a primary causative factor of this event. We tested these fault models and a horizontal single-force (oriented N5°W) model representing a sediment slide against our data. Among these models, only the single-force model is consistent with the P-, S-, and surface-wave data. Our data, however, do not preclude fault models which were not tested. From the spectral data of Love waves at a 50-sec period, we estimated the magnitude of the single force to be about 1.4 × 10^(20) dynes. From this value, we estimated the total volume of sedimentary slumping to be about 5.5 × 10^(11) m^3, which is approximately 5 times larger than a recent estimate of volume from in situ measurements. The difference in estimates of overall volume is likely due to a combination of the inherent difficulty of accurately estimating the displaced sediments from in situ measurements and the inadequacy of the seismic model, or perhaps because not only the slump but also a tectonic earthquake caused this event and contributed significantly to the waveforms studied.

Journal ArticleDOI
TL;DR: In this paper, a reversed two-station method was proposed to measure interstation surface-wave attenuation in the case of Lg waves propagating through the structurally complex Appalachian Province, contaminated by the high-frequency coda of Sn waves.
Abstract: Spectral amplitudes of regionally recorded Lg waves are studied in detail between 0.6 and 10 Hz, using vertical-component, velocity seismograms of the Eastern Canada Telemetered Network stations and a supplementary Seismic Research Observatory-type station at Glen Almond (GAC), Quebec. We find that the site responses vary among these stations by more than a factor of 3 within the frequency range of interest. Furthermore, they are found to be strongly frequency dependent. Consequently, it is essential that they be taken into consideration in studies of Lg wave attenuation and Lg source spectra of regionally recorded seismic events. We present a new method of measuring interstation surface wave attenuation, which is closely related to the conventional two-station method. While retaining all the desirable features of the conventional two-station method, the new technique, which we will call the “reversed two-station method,” allows simple, direct (one-parameter) determination of the Lg wave attenuation from sparse spectral data in a manner unaffected by station site effects and associated instrument error. The reversed two-station method is successfully tested over weakly attenuating, short (53 to 210 km) interstation paths in eastern Canada, a difficult experimental condition by normal standards. The Lg attenuation coefficient (0.6 to 10 Hz) in eastern Canada is found to be frequency dependent and of the form γ(f) = 0.0008 f^0.81 km⁻¹. At higher frequencies, the Lg attenuation appears to be essentially frequency independent. This latter finding is preliminarily interpreted as evidence that regionally recorded Lg waves in the Canadian Shield are, as in the case of Lg waves propagating through the structurally complex Appalachian Province, contaminated by the high-frequency coda of Sn waves.
The Lg contamination over the shield paths becomes severe starting at 14 Hz, twice the frequency above which the Lg signal propagating over the Appalachian Province becomes completely dominated by non-Lg arrivals.
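For illustration, the reported attenuation coefficient can be converted to an apparent quality factor via the standard relation Q(f) = πf/(Uγ(f)); the Lg group velocity U = 3.5 km/s used below is an assumed generic value, not one given in the abstract.

```python
import math

def gamma_lg(f):
    # Lg attenuation coefficient for eastern Canada from the abstract, km^-1
    return 0.0008 * f ** 0.81

def q_lg(f, u=3.5):
    # Apparent quality factor Q(f) = pi*f / (u * gamma(f));
    # u = assumed Lg group velocity in km/s
    return math.pi * f / (u * gamma_lg(f))
```

With the exponent 0.81 below 1, Q still grows with frequency (roughly as f^0.19) over the 0.6 to 10 Hz band.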

Journal ArticleDOI
TL;DR: In this article, the authors used the radial component of stacked source-equalized receiver functions in the time domain to estimate the vertical shear velocity structure at the site and found that a high-velocity layer at mid-crustal depths (18 to 26 km) in their southeast backazimuth results correlates with a zone of high-amplitude reflections found on COCORP profiles 60 km south of RSNY.
Abstract: The site structure beneath the five broadband stations of the Regional Seismic Test Network (RSTN) has been determined utilizing teleseismic P waveforms. Our teleseismic waveform modeling technique involves inverting the radial component of stacked source-equalized receiver functions in the time domain to estimate the vertical shear velocity structure at the site. The receiver functions at RSNY (Adirondacks, New York), RSON (Red Lake, Ontario), and RSNT (Yellowknife, Northwest Territories) are quite simple compared to previously published RSCP (Cumberland Plateau, Tennessee) data, largely due to the absence of sedimentary surface rocks at these sites. At RSNY, a high-velocity layer at mid-crustal depths (18 to 26 km) in our southeast backazimuth results correlates in depth with a zone of high-amplitude reflections found on COCORP profiles 60 km south of RSNY. A gradational crust-mantle boundary is observed at RSCP and RSNY. The RSON and RSNT sites are characterized by simple crusts with fairly abrupt crust-mantle boundaries. The crust beneath RSON appears to have a clear division between the upper and lower crust at about 18 km depth, whereas this boundary is not well developed at RSNT. Pronounced azimuthal variations in crustal structure at RSSD (Black Hills, South Dakota) prohibit the determination of velocity using a layered earth model. The crustal thicknesses at each of these sites are: RSCP, 40 to 50 km; RSSD, 47 to 50 km; RSNY, 45 to 50 km; RSNT, 38 km; and RSON, 42 km.

Journal ArticleDOI
TL;DR: In this article, a random-vibration model of the Hanks-McGuire type is used to predict peak ground motions at rock sites in eastern North America, where the assumed geometric decay and distance-dependent duration approximate the propagation of direct body waves (at short distances) and Lg waves at regional distances.
Abstract: A random-vibration model of the Hanks-McGuire type is used to predict peak ground motions at rock sites in eastern North America. The assumed geometric decay and distance-dependent duration approximate the propagation of direct body waves (at short distances) and Lg waves (at regional distances). The model predicts peak accelerations and velocities, as well as response spectra and magnitude, for a given seismic moment and corner frequency. To predict ground motions for a given Lg magnitude (m_Lg), the model is first used to calculate the seismic moment corresponding to that magnitude, assuming a source scaling law. Then, knowing the moment and corner frequency, the model is used to calculate peak ground motions. Available data from strong motion recordings and from the ECTN and LRSM networks, modified to estimate horizontal ground motions on rock where appropriate, are used to verify the model's assumptions and predictions. Modified Mercalli intensity data are also used. Ground motions predicted by the model with a stress drop of 50 to 200 bars agree with the ground motion data, but the m_Lg values computed by the model for given seismic moments do not agree with Nuttli's (1983) moment versus m_Lg data for large earthquakes. Resolution of the latter disagreement awaits the collection of instrumental data from large events in eastern North America.
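A minimal sketch of an ω-squared spectral model of this general type (source term times anelastic and geometric path terms) is shown below. Every numerical constant here (q0, eta, beta, rho, the radiation coefficient) is an illustrative assumption, not a value calibrated in the paper.

```python
import math

def fourier_accel_spectrum(f, m0, fc, r, q0=500.0, eta=0.65, beta=3.5, rho=2.8):
    """Omega-squared Fourier acceleration spectrum with anelastic and
    1/r geometric decay. Units: m0 in dyne-cm, f in Hz, r in km,
    beta in km/s, rho in g/cm^3 (a cgs mix; illustrative only)."""
    # Radiation-pattern / free-surface constant (assumed, not from the paper)
    c = 0.78 / (4.0 * math.pi * rho * (beta * 1.0e5) ** 3)
    source = c * m0 * (2.0 * math.pi * f) ** 2 / (1.0 + (f / fc) ** 2)
    q = q0 * f ** eta                                   # assumed Q(f)
    path = math.exp(-math.pi * f * r / (q * beta)) / r  # anelastic * geometric
    return source * path
```

Below fc the spectrum grows as f²; above fc it is flat except for attenuation. The random-vibration step then converts such a spectrum, together with a duration, into expected peak motions.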

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a beam steering-based detection and source location scheme for single-station seismic data to analyze seismic events, which combines the signals on the horizontal components in a manner analogous to beam steering.
Abstract: The seismic event detection and source location scheme presented in this paper uses single-station seismic data to analyze seismic events. To detect events, “the detection scheme” combines the signals on the horizontal components in a manner analogous to beam steering. Once an event is detected, “the location scheme” estimates the location of the source with respect to the receiver by estimating the polarization direction of the Pn phase. The range estimate is then obtained from the relative timing of different phases.
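A toy version of the two location steps can be sketched as follows; the covariance-eigenvector estimate of the P polarization and the Vp, Vs values are generic assumptions, not the specific algorithm or constants of this paper.

```python
import numpy as np

def backazimuth_deg(north, east):
    """Polarization direction of the P arrival on the horizontals: the
    principal eigenvector of the 2x2 covariance matrix. A 180-degree
    ambiguity remains; resolving it requires the vertical component."""
    c = np.cov(np.vstack([north, east]))
    _, vec = np.linalg.eigh(c)        # eigenvalues in ascending order
    n, e = vec[:, -1]                 # eigenvector of the largest eigenvalue
    return np.degrees(np.arctan2(e, n)) % 180.0

def epicentral_range_km(ts_minus_tp, vp=6.0, vs=3.5):
    # Range from relative phase timing: d = dt * vp*vs / (vp - vs);
    # vp, vs in km/s are assumed crustal values
    return ts_minus_tp * vp * vs / (vp - vs)
```

A synthetic linearly polarized signal at a known azimuth recovers that azimuth to numerical precision, which is the essential check before applying such an estimator to noisy data.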

Journal ArticleDOI
TL;DR: In this article, a three-level downhole array is being operated in a 1500-m-deep borehole within the seismically active Newport-Inglewood fault zone, Los Angeles basin.
Abstract: A three-level downhole array is being operated in a 1500-m-deep borehole within the seismically active Newport-Inglewood fault zone, Los Angeles basin. The array consists of three three-component 4.5 Hz seismometers deployed at the surface and at 420 and 1500 m depth. An M = 2.8 earthquake that occurred 0.9 km away from the array at a depth of 5.3 km on 31 July 1986 generated rays traveling almost vertically up the downhole array. The P- and S-wave pulse shapes show increasing pulse rise time with decreasing depth, and the initial pulse slope is less steep at the surface than at 1500 m. The average value of t_s/t_p between 1500 and 420 m depth is 1.7 and between 420 and 0 m is 3.4. A near-surface site response results in amplification of the P wave by a factor of four and of the S waves by a factor of nine. These data indicate a near-surface Q_α of 44 ± 13 for rays traveling almost vertically. In the case of S waves, most of the high-frequency content of the waveform beyond ∼ 10 Hz observed at 1500 m depth is lost through attenuation before the waveform reaches 420 m depth. The average Q_β is 25 ± 10 between 1500 and 420 m depth and 108 ± 36 between 420 and 0 m depth. The spectra of the downward-reflected S phases observed at 420 and 0 m may overestimate Q_β, because they are limited to a narrow band between 5 and 10 Hz and affected by the near-surface amplification. A Q_c of 160 ± 30 at 6 Hz was determined from the decay rate of the coda waves at all three depths. The corner frequency as determined from displacement spectra may be higher at 1500 m depth (f_c ∼ 10 Hz) than at 420 and 0 m depth (f_c ∼ 7 Hz). Similarly, f_(max) decreases significantly as the waveforms travel toward the earth's surface, indicating that f_(max) is affected by near-surface attenuation. Beyond f_c, the average falloff of the P-wave spectra is ∼ f^(−2) at 1500 m depth and ∼ f^(−3) at the surface.
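Interval Q values of the kind quoted in this abstract are commonly reduced from spectral ratios between depth levels. The function below is a generic version of that reduction, not the authors' exact processing.

```python
import numpy as np

def q_from_spectral_ratio(freqs, spec_deep, spec_shallow, travel_time):
    """Interval Q between two borehole levels from the spectral ratio.

    ln[A_shallow(f)/A_deep(f)] = const - pi*f*t/Q, so a straight-line fit
    over frequency yields Q from the slope. A frequency-independent site
    amplification only shifts the intercept (an assumption)."""
    y = np.log(np.asarray(spec_shallow) / np.asarray(spec_deep))
    slope, _ = np.polyfit(np.asarray(freqs), y, 1)
    return -np.pi * travel_time / slope
```

The abstract's caveat about the narrow 5 to 10 Hz band applies directly here: the slope of a line fit over a short frequency span is poorly constrained, which can bias the Q estimate.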

Journal ArticleDOI
TL;DR: In this paper, the authors present a simple method to compute the confidence intervals of parameter b for earthquake magnitudes grouped in classes of equal width, a circumstance that happens quite frequently in the usual applications in seismology.
Abstract: This paper presents a simple and very practical method to compute the confidence intervals of the parameter b for earthquake magnitudes grouped in classes of equal width, a circumstance that occurs quite frequently in seismological applications. The problem is given a complete solution that covers very satisfactorily all cases of concern to the seismologist. The innovative contribution of the paper is that, by exploiting the strict connection between the Gutenberg-Richter frequency-magnitude law and the discrete geometric distribution, an approach is followed that allows the evaluation of the confidence bands even for samples containing only a small number of earthquakes (small N).
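The link to the geometric distribution gives a closed-form maximum-likelihood estimate of b for grouped magnitudes. The sketch below pairs that estimate with a large-sample (delta-method) standard error; the paper's exact small-sample confidence bands are not reproduced here.

```python
import math

def b_value_grouped(mags, m_min, dm):
    """MLE of b for magnitudes grouped in classes of width dm.

    The class index k = (m - m_min)/dm follows a geometric distribution
    with parameter p = 1 - 10**(-b*dm), so b = log10(1 + 1/k_bar)/dm,
    where k_bar is the sample mean class index.
    """
    n = len(mags)
    k_bar = (sum(mags) / n - m_min) / dm           # mean class index
    p = 1.0 / (1.0 + k_bar)                        # geometric-parameter MLE
    b = math.log10(1.0 + 1.0 / k_bar) / dm
    # Delta-method standard error from var(p_hat) ~ p^2 (1 - p) / n;
    # valid only for large n (the small-n case is the paper's subject).
    sigma_b = p / (math.log(10.0) * dm * math.sqrt(n * (1.0 - p)))
    return b, sigma_b
```

In the limit dm → 0 this estimator reduces to the familiar continuous-magnitude Aki formula, which is one way to see why ignoring the grouping biases b for coarse class widths.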

Journal ArticleDOI
TL;DR: The 28 October 1983 Borah Peak, Idaho, earthquake (M_S = 7.3) occurred in an area of low historic seismicity within east-central Idaho along a segment of the Lost River fault active during the Holocene.
Abstract: The 28 October 1983 Borah Peak, Idaho, earthquake (M_S = 7.3) occurred in an area of low historic seismicity within east-central Idaho along a segment of the Lost River fault active during the Holocene. A dense network of portable short-period seismographs (up to 45 stations, station spacings of 2 to 10 km) was installed beginning several hours after the main shock and operated for 22 days. In addition to records from the portable instrumentation, data from permanent seismograph stations operating in Idaho, Utah, Montana, Oregon, Washington, and Wyoming provide a good regional data base. No foreshock activity above M_C (coda magnitude) 2.0 was detected for the 2-month period preceding the main shock. The epicenter of the main shock is ∼ 14 km south-southwest of the end of the surface faulting. This relationship suggests unilateral rupture propagating to the northwest. The distribution of 421 aftershocks of M_C > 2 defines an epicentral pattern, 75 km × 10 km, trending north-northwest parallel to the surface rupture but displaced laterally southwest by 5 to 10 km. Aftershocks extend to depths of approximately 16 km and in the central and southeastern portion of the aftershock pattern define a zone, dipping approximately 45° southwest, that intersects the surface near the fault scarp. The entire aftershock zone as observed during the first 3 weeks was active shortly after the main shock occurred. Fault plane solutions for 47 aftershocks show predominantly normal faulting with varying amounts of strike-slip motion. Tension axis orientations indicate a dominant extension direction of NNE-SSW during the aftershock sequence. There is considerable diversity among the aftershock focal mechanisms, even along the central and southeast portions of the fault where the hypocenters appear to outline the main fault break. We therefore interpret most of the aftershocks to represent complex fracturing on subsidiary structures adjacent to the main fault.

Journal ArticleDOI
TL;DR: In this article, the scaling relation for earthquakes in eastern North America and other continental interiors was constructed from measurements of seismic moment and source duration obtained by the waveform modeling of seismic body waves.
Abstract: Source scaling relations have been obtained for earthquakes in eastern North America and other continental interiors, and compared with a relation obtained for earthquakes in western North America. The scaling relation for eastern North American earthquakes was constructed from measurements of seismic moment and source duration obtained by the waveform modeling of seismic body waves. The events used include nine events of m_(bLg) magnitude 4.7 to 5.8 that occurred after 1960, and four earlier events with magnitudes between 5.5 and 6.6. The scaling relation for events in other continental interiors was used for comparative purposes and to provide constraints for large magnitudes. Detailed analysis of the uncertainties in the scaling relations has allowed the resolution of two important issues concerning the source scaling of earthquakes in eastern North America. First, the source characteristics of earthquakes in eastern North America and other continental interiors are consistent with constant stress drop scaling, and are inconsistent with nonconstant scaling models such as that of Nuttli (1983). Second, the stress drops of earthquakes in eastern North America and other continental interiors are not significantly different from those of earthquakes in western North America, and have median values of approximately 100 bars. The source parameters of earthquakes in eastern North America are consistent with a single constant stress drop scaling relation, whereas the source parameters of earthquakes in western North America are much more variable and show significant departures from an average scaling relation in which stress drop decreases slightly with seismic moment.
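As a worked illustration of constant stress-drop scaling, Eshelby's circular-crack relation links the two measured quantities (moment and duration). The proportionality constant k between source radius and duration below is a model-dependent placeholder, chosen purely for illustration.

```python
def stress_drop_bars(m0_dyne_cm, tau_s, beta_cm_s=3.5e5, k=0.5):
    """Stress drop from seismic moment and source duration (cgs units).

    Assumes a circular crack of radius r = k*beta*tau; k = 0.5 is an
    illustrative assumption, not a value from the paper. Eshelby:
    delta_sigma = (7/16) * M0 / r^3, returned in bars
    (1 bar = 1e6 dyne/cm^2).
    """
    r = k * beta_cm_s * tau_s
    return 7.0 * m0_dyne_cm / (16.0 * r ** 3) / 1.0e6
```

Constant stress drop implies M0 ∝ τ³, so doubling the duration at fixed moment lowers the stress-drop estimate by a factor of 8; this cubic sensitivity is why careful uncertainty analysis of the duration measurements matters for the scaling question.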

Journal ArticleDOI
TL;DR: In this article, the authors examined apparent changes over time in the resonant frequencies and in the relative contribution of soil-structure interaction effects during forced vibration tests, attributing the frequency changes mainly to the San Fernando earthquake of 1971.
Abstract: Apparent changes, over time, of the dynamic behavior of a nine-story reinforced concrete building are examined to determine their plausibility and possible sources. The apparent variations include changes of the resonant frequencies and of the relative contribution of the soil-structure interaction effects during forced vibration tests. Experiments by different researchers on the importance of the soil-structure interaction effects lead to conflicting interpretations of the contribution from the rocking response which differ by factors of 24 and 14 in the transverse and longitudinal directions. These differences are not realistic and may have resulted from errors in the data reduction for the initial experiments. The variations in resonant frequencies are considered to be mainly a result of stiffness degradation of the superstructure as a consequence of the San Fernando earthquake of 1971.

Journal ArticleDOI
TL;DR: In this paper, a frequency-dependent Lg-Q is determined which rises from 300 at 0.5 Hz to about 1400 at 10 Hz and flattens at higher frequencies.
Abstract: Using data from earthquakes in the 1982 Miramichi earthquake source zone, spectral excitation and attenuation of the Lg phase are studied. With data in the distance range of 135 to 994 km, interpretation is complicated by the presence of high-frequency Sn and Pn waves which interfere with the Lg wave. At the larger distances, the signal at frequencies above 7 Hz is completely dominated by the non-Lg arrivals. A frequency-dependent Lg-Q is determined which rises from 300 at 0.5 Hz to about 1400 at 10 Hz and flattens at higher frequencies. The Sn coda apparent Q rises above 3000 at frequencies higher than 10 Hz. Seismic moment and corner frequency estimates are made using Lg-Q-corrected spectra. The moment estimates compare well with those obtained from long-period surface waves and short-distance spectral estimates. The Lg corner frequency estimates are substantially lower than the short-distance estimates. The source of this discrepancy remains open to discussion, but the Lg moment and corner frequency estimates model the observed data well using a Brune (1970) source model and the derived attenuation relation.

Journal ArticleDOI
TL;DR: In this paper, time-dependent conditional probabilities for the recurrence of large and great interplate earthquakes along the Mexican subduction zone are presented for time intervals of 5, 10, and 20 yr duration (i.e., 1986-1991, 1986-1996, and 1986-2006).
Abstract: Time-dependent conditional probabilities for the recurrence of large and great interplate earthquakes along the Mexican subduction zone are presented for time intervals of 5, 10, and 20 yr duration (i.e., 1986-1991, 1986-1996, and 1986-2006). At present, the central Oaxaca (97.3° to 97.7°W), Ometepec-San Marcos (98.2° to 99.5°W), and central Guerrero (100° to 101°W) segments stand out as having the highest probability for the recurrence of large and great earthquakes in the near future. Segmentation of the margin is delineated by the rupture zones of the most recent earthquakes occurring in each area. For segments of the Mexican margin with one or more known recurrence intervals, probability estimates are based on the observed average recurrence time for each segment and the lognormal probability distribution function. Use of a generic distribution function (Nishenko and Buland, 1987) allows a more stable estimate of average recurrence times than are available from a few observations. A long-term prediction time window is also defined, based on the 90 per cent confidence interval for our estimates of the recurrence time. Use of a predetermined confidence interval conveys valuable additional information as to the precision and information content of the forecast. For those segments of the margin with only one prior event, and hence, no historically observed recurrence times, repeat times are estimated by extrapolating the observed recurrence time behavior in Oaxaca, subject to the assumptions that recurrence time scales only as a function of the ratio of seismic displacement and convergence rate and that all events are characterized as simple sources (i.e., involve the rupture of a single asperity).
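The conditional-probability calculation behind such forecasts can be sketched with a lognormal recurrence model. The intrinsic variability sigma = 0.21 below is the generic value associated with the cited Nishenko and Buland distribution (an assumption here), and the median repeat time is a placeholder input.

```python
import math

def lognormal_cdf(t, median, sigma):
    # CDF of a lognormal recurrence-time distribution; sigma is the
    # standard deviation of ln(T) and median = exp(mean of ln T).
    return 0.5 * (1.0 + math.erf(math.log(t / median) / (sigma * math.sqrt(2.0))))

def conditional_probability(t_elapsed, window, median, sigma=0.21):
    """P(rupture within the next `window` years | quiet for t_elapsed years):
    (F(t + window) - F(t)) / (1 - F(t))."""
    f1 = lognormal_cdf(t_elapsed, median, sigma)
    f2 = lognormal_cdf(t_elapsed + window, median, sigma)
    return (f2 - f1) / (1.0 - f1)
```

The hazard implied by this model rises as the elapsed time approaches the median repeat time, which is why segments late in their cycle (such as central Guerrero in the abstract) stand out in the 5, 10, and 20 yr windows.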

Journal ArticleDOI
TL;DR: In this paper, an examination of seismicity and late Quaternary faults in Montana and Idaho north of the Snake River Plain shows a geographic correspondence between high seismic activity and 24 faults that have experienced surface rupture during the late quaternary.
Abstract: Examination of seismicity and late Quaternary faults in Montana and Idaho north of the Snake River Plain shows a geographic correspondence between high seismicity and 24 faults that have experienced surface rupture during the late Quaternary. The Lewis and Clark Zone delineates the northern boundary of this tectonically active extensional region. Earthquakes greater than magnitude 5.5 and all identified late Quaternary faults are confined to the Montana-Idaho portion of the Basin and Range Province south of the Lewis and Clark Zone. Furthermore, all 12 Holocene faults are confined to a seismologic zone (Centennial Tectonic Belt) parallel to the northern flank of the Snake River Plain in extreme southwestern Montana and adjacent Idaho. Fault trends, strain data, and focal mechanisms suggest that both late Quaternary faulting and seismicity in the region are primarily products of two distinct stress regimes: (1) overall Basin and Range extension along a S45°W direction relative to the mid-continent; and (2) localized effects of the Yellowstone hot spot, which appears to have the least principal stress axis oriented horizontally at about S4°W. The three largest historic earthquakes north of the Snake River Plain are indicative of three different stress provinces—the 1983 Borah Peak earthquake occurred in the Montana-Idaho basin and range; the 1959 Hebgen Lake earthquake is associated with the Yellowstone hot spot area; and the 1925 Clarkston Valley event appears to be associated with the stress field of the United States mid-continent.

Journal ArticleDOI
TL;DR: In this article, a nonlinear least-squares inversion method was developed to investigate the dislocation distribution and the character of rupture propagation on an earthquake fault plane, which is parameterized using a model in which the fault is composed of subfaults which start rupturing with arbitrary magnitudes at arbitrary times.
Abstract: A new and very practical nonlinear least-squares inversion method has been developed to investigate the dislocation distribution and the character of rupture propagation on an earthquake fault plane. The problem is parameterized using a model in which the fault is composed of subfaults which start rupturing with arbitrary magnitudes at arbitrary times. The performance of the method is examined by numerical simulations, and the method proves useful for the analysis of source processes of earthquakes, including complicated cases such as multiple events. The source process of the Naganoken-Seibu earthquake of 1984 has been analyzed using strong motion seismograms. The dislocation distribution and the character of rupture propagation have been successfully obtained.
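The parameterization described (subfaults with individual rupture amplitudes and onset times) becomes linear once the onset times are fixed. A minimal, hypothetical sketch of that inner least-squares step is shown below; the nonlinear outer iteration over rupture times, which is the paper's actual contribution, is omitted.

```python
import numpy as np

def invert_subfault_slip(greens, data):
    """Least-squares slip weights for fixed subfault rupture times.

    greens : (n_samples, n_subfaults) synthetic seismograms for unit slip
             on each subfault, each already delayed by its trial onset time
    data   : observed seismogram with the same sampling
    """
    slip, *_ = np.linalg.lstsq(greens, data, rcond=None)
    return slip
```

In a full implementation, this solve sits inside a loop that perturbs the onset times and re-evaluates the misfit, which is what makes the overall inversion nonlinear.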