Author

Leon Knopoff

Bio: Leon Knopoff is an academic researcher from the University of California, Los Angeles. The author has contributed to research in topics: Aftershock & Rayleigh wave. The author has an h-index of 46 and has co-authored 150 publications receiving 7,376 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, a laboratory and a numerical model have been constructed to explore the role of friction along a fault as a factor in the earthquake mechanism. The laboratory model demonstrates that small shocks are necessary to the loading of potential energy into the focal structure; a large part, but not all, of the stored potential energy is later released in a major shock at the end of a period of loading energy into the system.
Abstract: A laboratory and a numerical model have been constructed to explore the role of friction along a fault as a factor in the earthquake mechanism. The laboratory model demonstrates that small shocks are necessary to the loading of potential energy into the focal structure; a large part, but not all, of the stored potential energy is later released in a major shock, at the end of a period of loading energy into the system. By the introduction of viscosity into the numerical model, aftershocks take place following a major shock. Both models have features which describe the statistics of shocks in the main sequence, the statistics of aftershocks and the energy-magnitude scale, among others.
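The abstract describes the spring-block picture only in words; the sketch below shows one common numerical realization of a one-dimensional slider-block ("Burridge-Knopoff style") fault. It is a minimal illustration, not the paper's actual model: the block count, spring constants, driving rate, and friction thresholds are all assumed values chosen for demonstration.

```python
# Minimal sketch of a 1-D spring-block (Burridge-Knopoff style) fault model.
# All parameter values below are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

N = 100                               # blocks along the fault
k_c, k_p = 1.0, 0.1                   # neighbour and driver-plate spring constants
v_plate = 1e-3                        # slow tectonic loading per time step
f_static = rng.uniform(1.0, 2.0, N)   # heterogeneous static friction thresholds
f_dynamic = 0.5 * f_static            # lower friction once a block is sliding

x = np.zeros(N)                       # block positions
plate = 0.0                           # driver plate position

def net_force(x, plate):
    """Elastic force on each block from its neighbours and the driver plate."""
    left = np.roll(x, 1);  left[0] = x[0]       # free ends: no wraparound
    right = np.roll(x, -1); right[-1] = x[-1]
    return k_c * (left + right - 2.0 * x) + k_p * (plate - x)

events = []
for _ in range(200_000):
    plate += v_plate                  # slow loading of potential energy
    f = net_force(x, plate)
    slip_total = 0.0
    # Cascade: unstable blocks slip until every force is below threshold again.
    while (unstable := f > f_static).any():
        dx = (f[unstable] - f_dynamic[unstable]) / (2.0 * k_c + k_p)
        x[unstable] += dx
        slip_total += dx.sum()
        f = net_force(x, plate)
    if slip_total > 0.0:
        events.append(slip_total)     # crude "size" of the shock

print(f"{len(events)} shocks; largest size {max(events):.3f}")
```

Small slips during loading and occasional large cascades emerge from the same mechanics, echoing the abstract's point that small shocks accompany the loading of potential energy later released in a major shock.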

1,203 citations

Journal ArticleDOI
TL;DR: In this paper, it was found that at least five structural provinces must be established; these include shields, aseismic continental platforms, rifts, ocean basins, and mountains.

256 citations

Journal ArticleDOI
TL;DR: In this paper, a model of earthquake occurrence is proposed that is based on results of statistical studies of earthquake catalogs, in which each earthquake generates additional earthquakes with a probability that depends on time as t^−(1+θ).
Abstract: A model of earthquake occurrence is proposed that is based on results of statistical studies of earthquake catalogs. We assume that each earthquake generates additional earthquakes with a probability that depends on time as t^−(1+θ). This assumption, together with one regarding the independence of branching events on adjacent branches of the event 'tree,' is sufficient to permit the generation of complete catalogs of earthquakes that have the same time-magnitude statistical properties as real earthquake catalogs. If θ is about 0.5, the process generates sequences that have statistical properties similar to those for shallow earthquakes: many well-known relations are reproduced, including the magnitude-frequency law, Omori's law of the rate of aftershock and foreshock occurrence, the duration of a recorded seismic event versus its magnitude, the self-similarity or lack of scale of the rate of earthquake occurrence in different magnitude ranges, etc. A value of θ closer to 0.8 or 0.9 seems to simulate the statistical properties of intermediate and deep shocks. A formula for seismic risk prediction is proposed, and the implications of the model for risk evaluation are outlined. The possibilities of the determination of long-term risk from real or synthetic catalogs that have the property of self-similarity are dim.
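As a rough illustration of the branching construction described above, the sketch below grows a synthetic catalog in which each event spawns offspring whose delay times follow a power law ~ t^−(1+θ). The background rate, the Poisson offspring count, the Gutenberg-Richter magnitude sampler, and the cutoffs are illustrative assumptions, not the paper's calibrated values.

```python
# Sketch of a branching ("event-tree") seismicity model with power-law
# offspring delays ~ t^-(1+theta). Parameter values are assumptions.
import heapq
import numpy as np

rng = np.random.default_rng(1)

theta = 0.5        # ~0.5 mimics shallow seismicity, per the abstract
n_mean = 0.9       # mean offspring per event (subcritical branching)
t_min = 0.01       # lower cutoff of the power-law delay (arbitrary time units)
b = 1.0            # Gutenberg-Richter b-value
horizon = 1000.0   # catalog length in the same time units

def powerlaw_delay():
    # Inverse-transform sample of p(t) = theta * t_min**theta / t**(1+theta)
    return t_min * rng.uniform() ** (-1.0 / theta)

# Seed the tree with background events, then let every event branch.
queue = list(rng.uniform(0, horizon, 50))   # background event times
heapq.heapify(queue)
catalog = []
while queue:
    t = heapq.heappop(queue)
    if t > horizon:
        continue                            # offspring beyond the catalog end
    mag = 3.0 - np.log10(rng.uniform()) / b # G-R magnitude above M = 3
    catalog.append((t, mag))
    for _ in range(rng.poisson(n_mean)):
        heapq.heappush(queue, t + powerlaw_delay())

catalog.sort()
print(f"{len(catalog)} synthetic events; "
      f"largest M = {max(m for _, m in catalog):.2f}")
```

With n_mean below 1 the branching process is subcritical, so every sequence eventually dies out; clusters of short delays after large parents play the role of aftershock sequences.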

243 citations

Journal ArticleDOI
11 Oct 1974 - Science
TL;DR: The inversion of Rayleigh wave dispersion data for the Pacific Ocean shows that lithospheric thickness increases systematically with age.
Abstract: The inversion of Rayleigh wave dispersion data for the Pacific Ocean shows that lithospheric thickness increases systematically with age. The lid to the low-velocity channel is very thin or absent near the ridge crest; the low-velocity channel may be absent in the oldest parts of the ocean.

223 citations

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate that methods using all available data give better estimates of the parameters of seismicity than extreme value methods do, and also provide a procedure for handling inhomogeneous catalogs of large- and intermediate-magnitude earthquakes.
Abstract: Procedures of the theory of extremes give unacceptably large probable errors in determinations of return times, b values, and maximum magnitudes of large- and intermediate-magnitude earthquakes using limited runs of data. On all accounts, methods which utilize all available data give better estimates of the parameters of seismicity than extreme value methods do, and also provide a procedure for handling inhomogeneous catalogs.
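To make the contrast concrete, here is a small sketch comparing a whole-catalog estimate of the Gutenberg-Richter b-value against the much smaller sample an extreme-value approach retains. It uses Aki's classical maximum-likelihood formula as a stand-in for a whole-catalog method (not necessarily the estimator used in the paper), and the synthetic catalog and its parameters are assumptions.

```python
# Whole-catalog b-value estimation vs. the sample size of an extremes approach.
# Synthetic data; parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

b_true, m_c = 1.0, 4.0
# G-R magnitudes are exponential above the completeness magnitude m_c.
mags = m_c + rng.exponential(scale=1.0 / (b_true * np.log(10)), size=2000)

# Whole-catalog maximum-likelihood estimate (Aki, 1965):
#   b_hat = log10(e) / (mean(M) - m_c)
b_hat = np.log10(np.e) / (mags.mean() - m_c)
print(f"ML b-value from all {mags.size} events: {b_hat:.3f} (true {b_true})")

# An extreme-value procedure would keep only the largest event per "year":
# a far smaller effective sample, hence the larger probable errors.
yearly_maxima = mags.reshape(100, 20).max(axis=1)   # 100 years, 20 events/yr
print(f"Extreme-value approach retains only {yearly_maxima.size} maxima")
```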

165 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, a large data set consisting of about 1000 normal mode periods, 500 summary travel time observations, 100 normal mode Q values, and the mass and moment of inertia of the Earth has been inverted to obtain the radial distribution of elastic properties, Q values, and density in the Earth's interior.

9,266 citations

Journal ArticleDOI
TL;DR: In this paper, an earthquake model is derived by considering the effective stress available to accelerate the sides of the fault; the model describes near- and far-field displacement-time functions and spectra and includes the effect of fractional stress drop.
Abstract: An earthquake model is derived by considering the effective stress available to accelerate the sides of the fault. The model describes near- and far-field displacement-time functions and spectra and includes the effect of fractional stress drop. It successfully explains the near- and far-field spectra observed for earthquakes and indicates that effective stresses are of the order of 100 bars. For this stress, the estimated upper limit of near-fault particle velocity is 100 cm/sec, and the estimated upper limit for accelerations is approximately 2g at 10 Hz and proportionally lower for lower frequencies. The near-field displacement u is approximately given by

u(t) = (σ/μ) βτ (1 − e^(−t/τ)),

where σ is the effective stress, μ is the rigidity, β is the shear wave velocity, and τ is of the order of the dimension of the fault divided by the shear-wave velocity. The corresponding spectrum is

Ω(ω) = (σβ/μ) · 1/[ω (ω² + τ⁻²)^½].   (1)

The rms average far-field spectrum is given by

⟨Ω(ω)⟩ = ⟨R_θφ⟩ (σβ/μ) (r/R) F(e) / (ω² + α²),   (2)

where ⟨R_θφ⟩ is the rms average of the radiation pattern; r is the radius of an equivalent circular dislocation surface; R is the distance; F(e) = {[2 − 2e][1 − cos(1.21 eω/α)] + e²}^½; e is the fraction of stress drop; and α = 2.21 β/r. The rms spectrum falls off as (ω/α)⁻² at very high frequencies. For values of ω/α between 1 and 10 the rms spectrum falls off as (ω/α)⁻¹ for e < ~0.1. At low frequencies the spectrum reduces to the spectrum for a double-couple point source of appropriate moment. Effective stress, stress drop, and source dimensions may be estimated by comparing observed seismic spectra with the theoretical spectra.
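The closed-form expressions above are easy to evaluate numerically. The sketch below codes the near-field displacement and the two spectra directly from the abstract's formulas; all parameter values (stress, rigidity, source radius, observer distance, radiation-pattern average) are illustrative assumptions.

```python
# Numerical sketch of the source model quoted above; parameters are assumed.
import numpy as np

sigma = 100e5        # effective stress: 100 bars, in Pa
mu = 3.0e10          # rigidity (Pa)
beta = 3500.0        # shear-wave velocity (m/s)
r = 1000.0           # radius of the equivalent circular dislocation (m)
R = 10_000.0         # observer distance (m)
tau = r / beta       # ~ fault dimension / shear-wave velocity
alpha = 2.21 * beta / r
R_avg = 0.6          # assumed rms radiation-pattern average
e = 1.0              # fraction of stress drop (1 = full stress drop)

# Near-field displacement u(t) = (sigma/mu) * beta * tau * (1 - exp(-t/tau))
t = np.linspace(0.0, 10.0 * tau, 1000)
u_near = (sigma / mu) * beta * tau * (1.0 - np.exp(-t / tau))

w = np.logspace(-1, 3, 500)   # angular frequency (rad/s)
# Eq. (1): near-field spectrum
omega_near = (sigma * beta / mu) / (w * np.sqrt(w**2 + tau**-2))
# Eq. (2): rms far-field spectrum; with e = 1, F(e) reduces to 1
F = np.sqrt((2 - 2*e) * (1 - np.cos(1.21 * e * w / alpha)) + e**2)
omega_far = R_avg * (sigma * beta / mu) * (r / R) * F / (w**2 + alpha**2)

print(f"static near-field offset ~ {u_near[-1]:.3f} m, "
      f"corner ~ {alpha:.1f} rad/s")
# omega_near and omega_far are ready to plot on log-log axes; the far-field
# curve falls off as (w/alpha)^-2 well above the corner, as the text states.
```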

4,527 citations

Book
25 Jan 1991
TL;DR: The connection between faults and the seismicity they generate is governed by rate- and state-dependent friction laws, producing distinctive seismic styles of faulting and a gamut of earthquake phenomena including aftershocks, afterslip, earthquake triggering, and slow slip events.
Abstract: This essential reference for graduate students and researchers provides a unified treatment of earthquakes and faulting as two aspects of brittle tectonics at different timescales. The intimate connection between the two is manifested in their scaling laws and populations, which evolve from fracture growth and interactions between fractures. The connection between faults and the seismicity they generate is governed by rate- and state-dependent friction laws, producing distinctive seismic styles of faulting and a gamut of earthquake phenomena including aftershocks, afterslip, earthquake triggering, and slow slip events. The third edition of this classic treatise presents a wealth of new topics and new observations. These include slow earthquake phenomena; friction of phyllosilicates, and at high sliding velocities; fault structures; the relative roles of strong and seismogenic versus weak and creeping faults; dynamic triggering of earthquakes; oceanic earthquakes; megathrust earthquakes in subduction zones; deep earthquakes; and new observations of earthquake precursory phenomena.
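As a concrete illustration of the rate- and state-dependent friction laws the blurb mentions, the sketch below steps the widely used Dieterich form with the "aging" state-evolution law through a velocity-step test. The parameter values are generic laboratory-scale assumptions, not taken from the book.

```python
# Sketch of a rate- and state-dependent friction law (Dieterich form with
# "aging" state evolution). Parameter values are generic lab-scale assumptions.
import numpy as np

mu0 = 0.6                    # reference friction coefficient
a, b = 0.010, 0.015          # direct-effect and evolution-effect amplitudes
V0, Dc = 1e-6, 1e-5          # reference velocity (m/s), critical slip (m)

def friction(V, theta):
    """mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)"""
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def evolve(theta, V, dt):
    """Aging law, dtheta/dt = 1 - V*theta/Dc, stepped with forward Euler."""
    return theta + dt * (1.0 - V * theta / Dc)

# Velocity-step test: start at steady state, jump the velocity by 10x, and
# watch friction spike (direct effect) then relax as the state evolves.
theta = Dc / V0              # steady-state value of theta at V = V0
V, dt = 10.0 * V0, 0.25
for i in range(8):
    print(f"t = {i * dt:4.2f} s   mu = {friction(V, theta):.4f}")
    theta = evolve(theta, V, dt)
# Because b > a, the new steady state is weaker than the old one: this is the
# velocity-weakening regime that permits stick-slip (earthquake) behaviour.
```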

3,802 citations

Journal ArticleDOI
TL;DR: In this paper, a simple cooling model and the plate model are examined to account for the variation in depth and heat flow with increasing age of the ocean floor; the empirical analysis is based on data from the North Pacific and North Atlantic basins.
Abstract: Two models, a simple cooling model and the plate model, have been advanced to account for the variation in depth and heat flow with increasing age of the ocean floor. The simple cooling model predicts a linear relation between depth and t^½, and between heat flow and 1/t^½, where t is the age of the ocean floor. We show that the same t^½ dependence is implicit in the solutions for the plate model for sufficiently young ocean floor. For larger ages these relations break down, and depth and heat flow decay exponentially to constant values. The two forms of the solution are developed to provide a simple method of inverting the data to give the model parameters. The empirical depth versus age relation for the North Pacific and North Atlantic has been extended out to 160 m.y. B.P. The depth initially increases as t^½, but between 60 and 80 m.y. B.P. the variation of depth with age departs from this simple relation. For older ocean floor the depth decays exponentially with age toward a constant asymptotic value. Such characteristics would be produced by a thermal structure close to that of the plate model. Inverting the data gives a plate thickness of 125 ± 10 km, a bottom boundary temperature of 1350 ± 275 °C, and a thermal expansion coefficient of (3.2 ± 1.1) × 10⁻⁵ °C⁻¹. Between 0 and 70 m.y. B.P. the depth can be represented by the relation d(t) = 2500 + 350 t^½ m, with t in m.y. B.P., and for regions older than 20 m.y. B.P. by the relation d(t) = 6400 − 3200 exp(−t/62.8) m. The heat flow data were treated in a similar, but less extensive, manner. Although the data are compatible with the same model that accounts for the topography, their scatter prevents their use in the same quantitative fashion. Our analysis shows that the heat flow only responds to the bottom boundary at approximately twice the age at which the depth does. Within the scatter of the data, from 0 to 120 m.y. B.P., the heat flow can be represented by the relation q(t) = 11.3/t^½ μcal cm⁻² s⁻¹. The previously accepted view that the heat flow observations approach a constant asymptotic value in the old ocean basins needs to be tested more stringently. The above results imply that a mechanism is required to supply heat at the base of the plate.
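The empirical fits quoted in the abstract are simple enough to code directly. The sketch below evaluates them, with one caveat: the two depth branches overlap between roughly 20 and 70 m.y., and switching branches at exactly 70 m.y. is our own simplification, not a rule from the paper.

```python
# The abstract's empirical depth-age and heat-flow fits, coded directly.
# Units follow the text: t in m.y. B.P., depth in metres, heat flow in
# microcal cm^-2 s^-1. The 70 m.y. branch switch is our simplification.
import numpy as np

def depth(t):
    """Seafloor depth vs age: sqrt(t) when young, exponential when old."""
    t = np.asarray(t, dtype=float)
    young = 2500.0 + 350.0 * np.sqrt(t)           # quoted for 0-70 m.y. B.P.
    old = 6400.0 - 3200.0 * np.exp(-t / 62.8)     # quoted for > 20 m.y. B.P.
    return np.where(t < 70.0, young, old)

def heat_flow(t):
    """q(t) = 11.3 / sqrt(t), quoted as fitting 0-120 m.y. B.P."""
    return 11.3 / np.sqrt(np.asarray(t, dtype=float))

for age in (10, 50, 100, 150):
    print(f"{age:3d} m.y.: depth ~ {float(depth(age)):4.0f} m, "
          f"q ~ {float(heat_flow(age)):.2f} ucal/cm^2/s")
```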

2,667 citations

Journal ArticleDOI
Francis Birch
TL;DR: The velocity of compressional waves has been determined by measuring the travel time of pulses in rock specimens at pressures to 10 kilobars and room temperature; most of the samples, mainly igneous and metamorphic rocks, furnished three specimens oriented at right angles to one another.
Abstract: The velocity of compressional waves has been determined by measurement of travel time of pulses in specimens of rock at pressures to 10 kilobars and room temperature. Most of the samples, mainly igneous and metamorphic rocks, furnished three specimens oriented at right angles to one another. The present paper gives experimental details, modal analyses, and numerical tables of velocity as function of direction of propagation, initial density, and pressure. Discussion of various aspects of the measurements is reserved for part 2.

2,185 citations