scispace - formally typeset
Author

Max Wyss

Bio: Max Wyss is an academic researcher from the University of Alaska Fairbanks. He has contributed to research on aftershocks and earthquake prediction, has an h-index of 51, and has co-authored 148 publications receiving 9,975 citations. His previous affiliations include the Cooperative Institute for Research in Environmental Sciences and the University of Colorado Boulder.


Papers
Journal ArticleDOI
TL;DR: In this paper, the minimum magnitude of complete reporting (Mc) was estimated from the departure from the linear frequency-magnitude relation of the 250 earthquakes closest to grid nodes spaced 10 km apart.
Abstract: We mapped the minimum magnitude of complete reporting, Mc, for Alaska, the western United States, and for the JUNEC earthquake catalog of Japan. Mc was estimated from the departure from the linear frequency-magnitude relation of the 250 earthquakes closest to grid nodes spaced 10 km apart. In all catalogs studied, Mc was strongly heterogeneous. In offshore areas Mc was typically one unit of magnitude higher than onshore. On land, too, Mc can vary by one order of magnitude over distances of less than 50 km. We recommend that seismicity studies that depend on complete sets of small earthquakes either be limited to areas with similar Mc, or raise the minimum magnitude for the analysis to the highest common value of Mc. We believe that data quality, as reflected by the Mc level, should be used to define the spatial extent of seismicity studies in which Mc plays a role. The method we use calculates the goodness of fit between a power law fitted to the data and the observed frequency-magnitude distribution as a function of a lower cutoff of the magnitude data. Mc is defined as the magnitude at which 90% of the data can be modeled by a power law fit. Mc in the 1990s is approximately 1.2 ± 0.4 in most parts of California, 1.8 ± 0.4 in most of Alaska (Aleutians and Panhandle excluded), and at a higher level in the JUNEC catalog for Japan. Various sources, such as explosions and earthquake families beneath volcanoes, can lead to distributions that cannot be fit well by power laws. For the Hokkaido region we demonstrate how neglecting the spatial variability of Mc can lead to erroneous assumptions about deviations from self-similarity of earthquake scaling.
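The goodness-of-fit procedure described in the abstract can be sketched in a few lines. The function below is our reading of the method, not the authors' code: for each trial magnitude cutoff it fits a Gutenberg-Richter power law by maximum likelihood and scores how much of the observed cumulative distribution the fit explains; Mc is the lowest cutoff reaching the 90% threshold. Magnitudes are assumed binned at width 0.1, and the minimum-sample and tolerance values are our choices.

```python
import numpy as np

def estimate_mc(mags, dm=0.1, fit_threshold=90.0):
    """Sketch of the goodness-of-fit Mc method: for each trial cutoff,
    fit a Gutenberg-Richter power law to the magnitudes above the cutoff
    and score how much of the observed cumulative distribution the fit
    explains.  Mc is the lowest cutoff reaching the threshold (90% in
    the paper).  Magnitudes are assumed binned at width dm."""
    mags = np.asarray(mags)
    for mc in np.arange(mags.min(), mags.max() - 1.0, dm):
        m = mags[mags >= mc - dm / 2]
        if len(m) < 50:                       # too few events to fit
            break
        # Maximum-likelihood b-value (Aki-Utsu, with binned correction)
        b = np.log10(np.e) / (m.mean() - (mc - dm / 2))
        a = np.log10(len(m)) + b * mc
        # Compare observed and synthetic cumulative counts
        bins = np.arange(mc, m.max(), dm)
        obs = np.array([np.sum(m >= x - 1e-6) for x in bins])
        syn = 10.0 ** (a - b * bins)
        fit = 100.0 * (1.0 - np.abs(obs - syn).sum() / obs.sum())
        if fit >= fit_threshold:
            return round(float(mc), 1)
    return None                               # no cutoff fits well enough
```

On a synthetic catalog that is complete above its minimum magnitude, the function returns that minimum, as it should.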

1,275 citations

Journal ArticleDOI
22 Sep 2005-Nature
TL;DR: It is shown that normal faulting events have the highest b values, thrust events the lowest, and strike-slip events intermediate values; given that thrust faults tend to be under higher stress than normal faults, this implies that the b value acts as a stress meter that depends inversely on differential stress.
Abstract: The earthquake size distribution follows, in most instances, a power law1,2, with the slope of this power law, the ‘b value’, commonly used to describe the relative occurrence of large and small events (a high b value indicates a larger proportion of small earthquakes, and vice versa). Statistically significant variations of b values have been measured in laboratory experiments, mines and various tectonic regimes such as subducting slabs, near magma chambers, along fault zones and in aftershock zones3. However, it has remained uncertain whether these differences are due to differing stress regimes, as it was questionable that samples in small volumes (such as in laboratory specimens, mines and the shallow Earth's crust) are representative of earthquakes in general. Given the lack of physical understanding of these differences, the observation that b values approach the constant 1 if large volumes are sampled4 was interpreted to indicate that b = 1 is a universal constant for earthquakes in general5. Here we show that the b value varies systematically for different styles of faulting. We find that normal faulting events have the highest b values, thrust events the lowest and strike-slip events intermediate values. Given that thrust faults tend to be under higher stress than normal faults we infer that the b value acts as a stress meter that depends inversely on differential stress.
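The b values compared in studies like this one are typically estimated by maximum likelihood. A minimal sketch of the standard Aki (1965) estimator with Utsu's half-bin correction for binned magnitudes follows; the paper's exact procedure may differ, and the catalogs below are synthetic illustrations, not the paper's data.

```python
import numpy as np

def b_value(mags, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965), with Utsu's half-bin
    correction for magnitudes reported in bins of width dm.
    Only events at or above the completeness magnitude mc are used."""
    m = np.asarray(mags)
    m = m[m >= mc - dm / 2]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2))
```

On synthetic catalogs the estimator recovers the generating b, so a high-b (normal-faulting-like) population and a low-b (thrust-like) population separate cleanly.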

804 citations

Journal ArticleDOI
TL;DR: In this paper, the cumulative frequency-magnitude relationship is replaced by a cumulative frequency-moment equation in which a B-value takes the place of the b-value.
Abstract: Summary In this paper the cumulative frequency-magnitude relationship is replaced by a cumulative frequency-moment equation in which the B-value takes the place of the b-value; the constants α and B can be observed or derived from the magnitude-moment and frequency-magnitude relationships. The average and maximal moments of a given set of earthquakes are found to be directly related to B and α, respectively. The total cumulative moment of an earthquake sequence can be expressed in closed form and is approximately equal to twice M0(max) for most observed sets of earthquakes. Using the definition of the moment we then derive a general area-frequency relationship, which shows that B gives the distribution of the product of average displacement D and fault area A. This may be reformulated in terms of the product of stress drop Δσ and fault area. If we make the assumption that the stress drop is a known function of the source dimension, which can be verified for a given sequence, we may express the frequency of earthquake occurrence as a function of a single source parameter, from which the mean rupture area of a set follows as a function of B, γ, and the smallest area in the set. Alternatively, we may eliminate the area, and from the result calculate the ratio of the average stress drops of two sets with different B. From the different b-values of Denver earthquakes during low and high injection pressure, the stress drop is computed to be 30 per cent higher at low pore pressure. This difference is in good agreement with the difference between the respective necessary failure stresses, which is 14 per cent. In addition, high apparent stresses and high stress drop Δσ were found to correlate with low b- and low B-values, as they do in microfracture experiments. The combination of high average apparent stress, high mean Δσ, large mean A, and low b- or low B-value can be explained by high regional shear stress.
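The link between b and B runs through the magnitude-moment relation. A minimal sketch of that conversion, under the common assumption of a linear relation log10(M0) = c·M + d (the value c = 1.5 is the modern convention; the paper derives B from whatever relation fits its data set):

```python
def moment_b_value(b, c=1.5):
    """B-value of the cumulative frequency-moment relation implied by a
    frequency-magnitude b-value, given log10(M0) = c*M + d.
    Since N(>M) ~ 10**(-b*M) and M = (log10(M0) - d) / c,
    N(>M0) ~ M0**(-b/c), so B = b/c.
    c = 1.5 is the commonly used slope; the paper fits its own."""
    return b / c
```

With c = 1.5 the canonical b = 1 corresponds to B = 2/3, which is why frequency-moment distributions have shallower slopes than frequency-magnitude ones.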

502 citations

Journal ArticleDOI
TL;DR: In this paper, the authors studied the source mechanism of earthquakes in the California-Nevada region using surface wave analyses, surface displacement observations in the source region, magnitude determinations, and accurate epicenter locations.
Abstract: The source mechanism of earthquakes in the California-Nevada region was studied using surface wave analyses, surface displacement observations in the source region, magnitude determinations, and accurate epicenter locations. Fourier analyses of surface waves from thirteen earthquakes in the Parkfield region have yielded the following relationship between seismic moment, M0, and Richter magnitude, ML: log M0 = 1.4 ML + 17.0, where 3 < ML < 6. The following relation between the surface wave envelope parameter AR and seismic moment was obtained: log M0 = log AR300 + 20.1. This relation was used to estimate the seismic moment of 259 additional earthquakes in the western United States. The combined data yield the following relationship between moment and local magnitude: log M0 = 1.7 ML + 15.1, where 3 < ML < 6. These data together with the Gutenberg-Richter energy-magnitude formula suggest that the average stress multiplied by the seismic efficiency is about 7 bars for small earthquakes at Parkfield and in the Imperial Valley, about 30 bars for small earthquakes near Wheeler Ridge on the White Wolf fault, and over 100 bars for small earthquakes in the Arizona-Nevada and Laguna Salada (Baja California) regions. Field observations of displacement associated with eight Parkfield shocks, along with estimates of fault area, indicate that fault dimensions similar to the values found earlier for the Imperial earthquake are the rule rather than the exception for small earthquakes along the San Andreas fault. Stress drops appear to be about 10% of the average stress multiplied by the seismic efficiency. The revised curve for moment versus magnitude further emphasizes that small earthquakes are not important in strain release and indicates that the zone of shear may be about 6 km in vertical extent for the Imperial Valley and even less for oceanic transform faults.
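The combined regression quoted above converts directly between local magnitude and seismic moment. A small helper (ours, not the authors'), keeping the paper's units of dyne·cm:

```python
def seismic_moment_dyne_cm(ml):
    """Seismic moment (dyne-cm) from local magnitude via the combined
    regression in the abstract: log10(M0) = 1.7*ML + 15.1,
    calibrated only for 3 < ML < 6."""
    if not 3.0 < ml < 6.0:
        raise ValueError("regression calibrated only for 3 < ML < 6")
    return 10.0 ** (1.7 * ml + 15.1)
```

One magnitude unit corresponds to a factor of 10^1.7 ≈ 50 in moment, which is the quantitative reason the abstract concludes that small earthquakes contribute little to strain release.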

498 citations

Journal ArticleDOI
TL;DR: In this paper, further evidence that the b value of the frequency-magnitude relationship of earthquakes is inversely related to stress is provided by showing that it decreases with depth in the Parkfield segment of the San Andreas fault and along the Calaveras fault.
Abstract: We hypothesize that highly stressed asperities may be defined by mapping anomalously low b values. Along the San Andreas fault near Parkfield the asperity under Middle Mountain, with its b=0.46, can be distinguished from all other parts of the fault surface. Likewise, along the Calaveras fault the northern asperity of the Morgan Hill 1984 (M6.2) rupture can be identified by its low b of 0.5 as a high stress patch along the fault. We add further evidence to the observations that the b value of the frequency-magnitude relationship of earthquakes is inversely proportional to stress by showing that it decreases with depth in the Parkfield segment of the San Andreas and along the Calaveras fault. In both of these areas, b values above and below 5 km depth are ∼1.2 and 0.8, respectively. We propose that probabilistic recurrence times Tr, based on the seismicity parameters a and b, should be calculated from their values within asperities only, instead of from the values of the entire rupture area of the maximum expected earthquake. The strong patches on faults control the time of rupture because they are capable of accumulating larger stresses than the rest of the fault zone, which slips along passively when an asperity breaks. Therefore no information on Tr is contained in the passive fault segments, only in the asperities. At Parkfield the probabilistic estimates of Tr derived from the data in the whole rupture and in the asperity only are 72 (−18/+24) and 23 (−12/+18) years, respectively, compared to the historically observed repeat time of 22 years. At Morgan Hill the Tr estimates are 122 (−46/+76) and 78 (−47/+110) years, respectively, compared to the observed repeat time of 72 years.
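The probabilistic recurrence time in the abstract follows from the Gutenberg-Richter parameters a and b. A minimal sketch of that relation (the numeric values in the usage below are illustrative, not the paper's Parkfield or Morgan Hill estimates):

```python
def recurrence_time(a, b, m):
    """Recurrence time (in the catalog's time unit) of events with
    magnitude >= m implied by Gutenberg-Richter: log10 N(>=m) = a - b*m,
    so Tr = 1/N = 10**(b*m - a).  The paper's argument is that a and b
    should be measured inside the asperity, not over the whole rupture."""
    return 10.0 ** (b * m - a)
```

For a fixed a, a low-b asperity yields a much shorter Tr for the same target magnitude than the higher b of the surrounding fault, which is why restricting the estimate to the asperity shortens Tr toward the observed repeat times.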

464 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a series of empirical relationships are developed among moment magnitude (M), surface rupture length, subsurface rupture length, downdip rupture width, rupture area, and maximum and average displacement per event.
Abstract: Source parameters for historical earthquakes worldwide are compiled to develop a series of empirical relationships among moment magnitude ( M ), surface rupture length, subsurface rupture length, downdip rupture width, rupture area, and maximum and average displacement per event. The resulting data base is a significant update of previous compilations and includes the additional source parameters of seismic moment, moment magnitude, subsurface rupture length, downdip rupture width, and average surface displacement. Each source parameter is classified as reliable or unreliable, based on our evaluation of the accuracy of individual values. Only the reliable source parameters are used in the final analyses. In comparing source parameters, we note the following trends: (1) Generally, the length of rupture at the surface is equal to 75% of the subsurface rupture length; however, the ratio of surface rupture length to subsurface rupture length increases with magnitude; (2) the average surface displacement per event is about one-half the maximum surface displacement per event; and (3) the average subsurface displacement on the fault plane is less than the maximum surface displacement but more than the average surface displacement. Thus, for most earthquakes in this data base, slip on the fault plane at seismogenic depths is manifested by similar displacements at the surface. Log-linear regressions between earthquake magnitude and surface rupture length, subsurface rupture length, and rupture area are especially well correlated, showing standard deviations of 0.25 to 0.35 magnitude units. Most relationships are not statistically different (at a 95% significance level) as a function of the style of faulting: thus, we consider the regressions for all slip types to be appropriate for most applications. 
Regressions between magnitude and displacement, magnitude and rupture width, and between displacement and rupture length are less well correlated and have larger standard deviation than regressions between magnitude and length or area. The large number of data points in most of these regressions and their statistical stability suggest that they are unlikely to change significantly in response to additional data. Separating the data according to extensional and compressional tectonic environments neither provides statistically different results nor improves the statistical significance of the regressions. Regressions for cases in which earthquake magnitude is either the independent or the dependent parameter can be used to estimate maximum earthquake magnitudes both for surface faults and for subsurface seismic sources such as blind faults, and to estimate the expected surface displacement along a fault for a given size earthquake.
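All of the regressions discussed take the log-linear form M = a + b·log10(X). A sketch of that form with the widely cited all-slip-type surface-rupture-length coefficients (M = 5.08 + 1.16·log10(L)); treat these defaults as illustrative and consult the paper's tables for the per-style coefficients and their standard deviations:

```python
import math

def magnitude_from_rupture_length(length_km, a=5.08, b=1.16):
    """Moment magnitude from surface rupture length (km) using the
    log-linear regression form of the paper.  The default coefficients
    are the commonly quoted all-slip-type values; substitute the
    published per-style coefficients for real applications."""
    return a + b * math.log10(length_km)
```

Because the relation is log-linear, a tenfold increase in rupture length adds a fixed b magnitude units, regardless of the starting length.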

6,160 citations

Journal ArticleDOI
TL;DR: In this paper, an earthquake model is derived by considering the effective stress available to accelerate the sides of the fault, and the model describes near and far-field displacement-time functions and spectra and includes the effect of fractional stress drop.
Abstract: An earthquake model is derived by considering the effective stress available to accelerate the sides of the fault. The model describes near- and far-field displacement-time functions and spectra and includes the effect of fractional stress drop. It successfully explains the near- and far-field spectra observed for earthquakes and indicates that effective stresses are of the order of 100 bars. For this stress, the estimated upper limit of near-fault particle velocity is 100 cm/sec, and the estimated upper limit for accelerations is approximately 2g at 10 Hz and proportionally lower for lower frequencies. The near-field displacement u is approximately given by u(t) = (σ/μ) βτ (1 − e^(−t/τ)), where σ is the effective stress, μ is the rigidity, β is the shear-wave velocity, and τ is of the order of the dimension of the fault divided by the shear-wave velocity. The corresponding spectrum is Ω(ω) = (σβ/μ) · 1 / [ω (ω^2 + τ^−2)^(1/2)]  (1). The rms average far-field spectrum is given by ⟨Ω(ω)⟩ = ⟨Rθϕ⟩ (σβ/μ) (r/R) F(e) / (ω^2 + α^2)  (2), where ⟨Rθϕ⟩ is the rms average of the radiation pattern; r is the radius of an equivalent circular dislocation surface; R is the distance; F(e) = {[2 − 2e][1 − cos(1.21 eω/α)] + e^2}^(1/2); e is the fraction of stress drop; and α = 2.21 β/r. The rms spectrum falls off as (ω/α)^−2 at very high frequencies. For values of ω/α between 1 and 10 the rms spectrum falls off as (ω/α)^−1 for e < ~0.1. At low frequencies the spectrum reduces to the spectrum for a double-couple point source of appropriate moment. Effective stress, stress drop, and source dimensions may be estimated by comparing observed seismic spectra with the theoretical spectra.
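The near-field spectrum of equation (1) is easy to probe numerically. A sketch using unit placeholder values for σ, μ, β, and τ (illustrative only, not physical estimates), confirming the ω^−2 fall-off above the corner and the shallower ω^−1 behavior below it:

```python
import math

def near_field_spectrum(omega, sigma=1.0, mu=1.0, beta=1.0, tau=1.0):
    """Near-field displacement spectrum of equation (1):
    Omega(omega) = (sigma*beta/mu) / (omega * sqrt(omega**2 + tau**-2)).
    Unit parameter values are placeholders, not physical estimates."""
    return (sigma * beta / mu) / (omega * math.sqrt(omega ** 2 + tau ** -2))
```

For ω ≫ 1/τ the denominator behaves as ω^2, so a tenfold frequency increase drops the spectrum about a hundredfold; for ω ≪ 1/τ it falls only as ω^−1.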

4,527 citations

Book
25 Jan 1991
TL;DR: The connection between faults and the seismicity they generate is governed by rate- and state-dependent friction laws, producing distinctive seismic styles of faulting and a gamut of earthquake phenomena including aftershocks, afterslip, earthquake triggering, and slow slip events.
Abstract: This essential reference for graduate students and researchers provides a unified treatment of earthquakes and faulting as two aspects of brittle tectonics at different timescales. The intimate connection between the two is manifested in their scaling laws and populations, which evolve from fracture growth and interactions between fractures. The connection between faults and the seismicity generated is governed by the rate and state dependent friction laws - producing distinctive seismic styles of faulting and a gamut of earthquake phenomena including aftershocks, afterslip, earthquake triggering, and slow slip events. The third edition of this classic treatise presents a wealth of new topics and new observations. These include slow earthquake phenomena; friction of phyllosilicates, and at high sliding velocities; fault structures; relative roles of strong and seismogenic versus weak and creeping faults; dynamic triggering of earthquakes; oceanic earthquakes; megathrust earthquakes in subduction zones; deep earthquakes; and new observations of earthquake precursory phenomena.
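The rate- and state-dependent friction laws mentioned here have a standard form due to Dieterich. A sketch with illustrative laboratory-scale parameter values (ours, not the book's):

```python
import math

def rate_state_friction(v, theta, mu0=0.6, a=0.010, b=0.015,
                        v0=1e-6, dc=1e-5):
    """Dieterich-form rate-and-state friction coefficient:
    mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),
    where V is slip velocity and theta the state variable.
    All parameter values here are illustrative placeholders."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)
```

At steady state θ = Dc/V, so the law reduces to μss = μ0 + (a − b)·ln(V/V0): with a < b, friction weakens as slip accelerates, the velocity-weakening condition for stick-slip (earthquakes) rather than stable creep.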

3,802 citations

Journal ArticleDOI
TL;DR: In this article, a review of polymer nanocomposites with single-wall or multi-wall carbon nanotubes is presented, and the current challenges to and opportunities for efficiently translating the extraordinary properties of carbon nanotubes to polymer matrices are summarized.
Abstract: We review the present state of polymer nanocomposites research in which the fillers are single-wall or multiwall carbon nanotubes. By way of background we provide a brief synopsis about carbon nanotube materials and their suspensions. We summarize and critique various nanotube/polymer composite fabrication methods including solution mixing, melt mixing, and in situ polymerization with a particular emphasis on evaluating the dispersion state of the nanotubes. We discuss mechanical, electrical, rheological, thermal, and flammability properties separately and how these physical properties depend on the size, aspect ratio, loading, dispersion state, and alignment of nanotubes within polymer nanocomposites. Finally, we summarize the current challenges to and opportunities for efficiently translating the extraordinary properties of carbon nanotubes to polymer matrices in hopes of facilitating progress in this emerging area.

3,239 citations

Journal ArticleDOI
01 Mar 2000
TL;DR: In this paper, the authors present a review of the techniques of interferometry, systems and limitations, and applications in a rapidly growing area of science and engineering, including cartography, geodesy, land cover characterization, and natural hazards.
Abstract: Synthetic aperture radar interferometry is an imaging technique for measuring the topography of a surface, its changes over time, and other changes in the detailed characteristic of the surface. By exploiting the phase of the coherent radar signal, interferometry has transformed radar remote sensing from a largely interpretive science to a quantitative tool, with applications in cartography, geodesy, land cover characterization, and natural hazards. This paper reviews the techniques of interferometry, systems and limitations, and applications in a rapidly growing area of science and engineering.
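The phase exploitation at the heart of the technique reduces, for repeat-pass deformation mapping, to a compact conversion from unwrapped phase to line-of-sight displacement. A sketch using the standard relation d = −λ·Δφ/(4π); the C-band default wavelength and the sign convention are our assumptions, as both vary by sensor and processor:

```python
import math

def los_displacement_m(dphi_rad, wavelength_m=0.056):
    """Line-of-sight displacement (m) from unwrapped repeat-pass
    interferometric phase: d = -wavelength * dphi / (4*pi).
    The ~5.6 cm default is a C-band wavelength; sign conventions
    differ between processors."""
    return -wavelength_m * dphi_rad / (4.0 * math.pi)
```

One full interferometric fringe (a 2π phase cycle) thus corresponds to half a wavelength of line-of-sight motion, about 2.8 cm at C-band.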

3,042 citations