Author

Ian Main

Bio: Ian Main is an academic researcher from the University of Edinburgh. The author has contributed to research in the topics of Acoustic emission and Fracture (geology). The author has an h-index of 49 and has co-authored 221 publications receiving 8,848 citations. Previous affiliations of Ian Main include the British Geological Survey and the Natural Environment Research Council.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors provide guidelines for the accurate and practical estimation of exponents and fractal dimensions of natural fracture systems, including length, displacement and aperture power law exponents.
Abstract: Scaling in fracture systems has become an active field of research in the last 25 years motivated by practical applications in hazardous waste disposal, hydrocarbon reservoir management, and earthquake hazard assessment. Relevant publications are therefore spread widely through the literature. Although it is recognized that some fracture systems are best described by scale-limited laws (lognormal, exponential), it is now recognized that power laws and fractal geometry provide widely applicable descriptive tools for fracture system characterization. A key argument for power law and fractal scaling is the absence of characteristic length scales in the fracture growth process. All power law and fractal characteristics in nature must have upper and lower bounds. This topic has been largely neglected, but recent studies emphasize the importance of layering on all scales in limiting the scaling characteristics of natural fracture systems. The determination of power law exponents and fractal dimensions from observations, although outwardly simple, is problematic, and uncritical use of analysis techniques has resulted in inaccurate and even meaningless exponents. We review these techniques and suggest guidelines for the accurate and objective estimation of exponents and fractal dimensions. Syntheses of length, displacement, aperture power law exponents, and fractal dimensions are found, after critical appraisal of published studies, to show a wide variation, frequently spanning the theoretically possible range. Extrapolations from one dimension to two and from two dimensions to three are found to be nontrivial, and simple laws must be used with caution. Directions for future research include improved techniques for gathering data sets over great scale ranges and more rigorous application of existing analysis methods. More data are needed on joints and veins to illuminate the differences between different fracture modes. The physical causes of power law scaling and variation in exponents and fractal dimensions are still poorly understood.
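
As a concrete illustration of the estimation pitfalls discussed above, the sketch below contrasts a maximum-likelihood estimate of a power-law exponent with a naive log-log histogram fit; the function names and the synthetic data are illustrative assumptions, not material from the paper.

```python
# Minimal sketch (not the authors' method): maximum-likelihood estimation of a
# power-law exponent from fracture lengths above a lower cutoff x_min, compared
# with a naive least-squares fit to a log-log histogram.
import numpy as np

def mle_exponent(lengths, x_min):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x / x_min))."""
    x = np.asarray(lengths, dtype=float)
    x = x[x >= x_min]
    return 1.0 + x.size / np.sum(np.log(x / x_min))

def histogram_exponent(lengths, bins=20):
    """Slope of a log-log histogram with log-spaced bins. Without normalising
    counts by bin width this recovers roughly alpha - 1, a classic source of
    biased exponents."""
    x = np.asarray(lengths, dtype=float)
    edges = np.logspace(np.log10(x.min()), np.log10(x.max()), bins)
    counts, _ = np.histogram(x, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])
    mask = counts > 0
    slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)
    return -slope

# Synthetic lengths drawn from a power law with exponent 2.0 and cutoff 0.1
rng = np.random.default_rng(0)
lengths = 0.1 * (1.0 - rng.random(5000)) ** (-1.0 / (2.0 - 1.0))
print(mle_exponent(lengths, x_min=0.1), histogram_exponent(lengths))
```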

1,153 citations

Journal ArticleDOI
TL;DR: A review of the results of some of the composite physical models that have been developed to simulate seismogenesis on different scales during (1) dynamic slip on a preexisting fault, (2) fault growth, and (3) fault nucleation is given in this paper.
Abstract: The scaling properties of earthquake populations show remarkable similarities to those observed at or near the critical point of other composite systems in statistical physics. This has led to the development of a variety of different physical models of seismogenesis as a critical phenomenon, involving locally nonlinear dynamics, with simplified rheologies exhibiting instability or avalanche-type behavior, in a material composed of a large number of discrete elements. In particular, it has been suggested that earthquakes are an example of a “self-organized critical phenomenon” analogous to a sandpile that spontaneously evolves to a critical angle of repose in response to the steady supply of new grains at the summit. In this stationary state of marginal stability the distribution of avalanche energies is a power law, equivalent to the Gutenberg-Richter frequency-magnitude law, and the behavior is relatively insensitive to the details of the dynamics. Here we review the results of some of the composite physical models that have been developed to simulate seismogenesis on different scales during (1) dynamic slip on a preexisting fault, (2) fault growth, and (3) fault nucleation. The individual physical models share some generic features, such as a dynamic energy flux applied by tectonic loading at a constant strain rate, strong local interactions, and fluctuations generated either dynamically or by fixed material heterogeneity, but they differ significantly in the details of the assumed dynamics and in the methods of numerical solution. However, all exhibit critical or near-critical behavior, with behavior quantitatively consistent with many of the observed fractal or multifractal scaling laws of brittle faulting and earthquakes, including the Gutenberg-Richter law. Some of the results are sensitive to the details of the dynamics and hence are not strict examples of self-organized criticality. Nevertheless, the results of these different physical models share some generic statistical properties similar to the “universal” behavior seen in a wide variety of critical phenomena, with significant implications for practical problems in probabilistic seismic hazard evaluation. In particular, the notion of self-organized criticality (or near-criticality) gives a scientific rationale for the a priori assumption of “stationarity” used as a first step in the prediction of the future level of hazard. The Gutenberg-Richter law (a power law in energy or seismic moment) is found to apply only within a finite scale range, both in model and natural seismicity. Accordingly, the frequency-magnitude distribution can be generalized to a gamma distribution in energy or seismic moment (a power law, with an exponential tail). This allows extrapolations of the frequency-magnitude distribution and the maximum credible magnitude to be constrained by observed seismic or tectonic moment release rates. The answers to other questions raised are less clear, for example, the effect of the a priori assumption of a Poisson process in a system with strong local interactions, and the impact of zoning a potentially multifractal distribution of epicentres with smooth polygons. The results of some models show premonitory patterns of seismicity which could in principle be used as mainshock precursors. However, there remains no consensus, on both theoretical and practical grounds, on the possibility or otherwise of reliable intermediate-term earthquake prediction.
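
Stated compactly, the tapered (gamma-type) frequency-moment law described above can be sketched as follows; the notation (N_0, M_0, beta, theta) is assumed for illustration rather than taken from the paper.

```latex
% Gamma-type generalization of the Gutenberg-Richter law: a power law in
% seismic moment M with an exponential tail.
\[
  N(\geq M) \;=\; N_{0}\left(\frac{M}{M_{0}}\right)^{-\beta}
              \exp\!\left(-\frac{M}{\theta}\right)
\]
% Here \beta is the power-law exponent (related to the Gutenberg-Richter
% b-value, roughly \beta \approx 2b/3) and \theta is a corner moment; fixing
% the total seismic or tectonic moment release rate constrains \theta and
% hence the maximum credible magnitude, as described in the abstract above.
```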

413 citations

Journal ArticleDOI
TL;DR: In this paper, a b-value analysis was carried out on data recorded during a laboratory test on a reinforced concrete beam designed as representative of a bridge beam, and the results showed a good agreement with the development of the fracture process of the concrete.
Abstract: Concrete bridges in the United Kingdom represent a major legacy that is starting to show signs of distress. Therefore, the need for monitoring them is an urgent task. The acoustic emission (AE) technique was proposed as a valid method for monitoring these bridges but more study is needed to develop methods of analyzing the data recorded during the monitoring. The writers would like to propose a b-value analysis as a possible way to process AE data obtained during a local monitoring. The b-value is defined as the log-linear slope of the frequency-magnitude distribution of acoustic emissions. This paper presents the results of a b-value analysis carried out on data recorded during a laboratory test on a reinforced concrete beam designed as representative of a bridge beam. During the experiment, the specimen was loaded cyclically and it was continuously monitored with an AE system. The data obtained were processed and a b-value analysis was carried out. The b-value was compared with the applied load, with a damage parameter, and with the cracks appearing on the beam. The damage parameter represents the cumulative damage in terms of total sum of acoustic emissions. The results showed a good agreement with the development of the fracture process of the concrete. From a study of the b-value calculated for a whole loading cycle and for each channel, some quantitative conclusions were also drawn. Further development work is needed to make the b-value technique suitable for practical use on a real bridge.
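
A minimal sketch of a b-value calculation of the kind described above (the log-linear slope of the AE frequency-magnitude distribution) is given below; the amplitude-to-magnitude conversion and the example numbers are assumptions for illustration, not the authors' processing chain.

```python
# Minimal sketch (not the authors' processing chain): estimating a b-value from
# AE data. Amplitudes in dB are converted to a magnitude scale as m = A_dB / 20,
# a common convention for AE data assumed here rather than taken from the paper.
import numpy as np

def b_value_lsq(magnitudes, bin_width=0.1):
    """Least-squares slope of log10 N(>= m) versus m."""
    m = np.sort(np.asarray(magnitudes, dtype=float))
    bins = np.arange(m.min(), m.max() + bin_width, bin_width)
    n_ge = np.array([np.sum(m >= b) for b in bins])
    mask = n_ge > 0
    slope, _ = np.polyfit(bins[mask], np.log10(n_ge[mask]), 1)
    return -slope

def b_value_mle(magnitudes, m_c):
    """Aki maximum-likelihood estimate above a completeness magnitude m_c."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Usage with hypothetical AE amplitudes (dB)
amplitudes_db = np.array([42.0, 55.0, 61.0, 48.0, 70.0, 52.0, 45.0, 58.0])
mags = amplitudes_db / 20.0
print(b_value_lsq(mags), b_value_mle(mags, m_c=2.1))
```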

408 citations

Journal ArticleDOI
TL;DR: In this article, the authors focus on the short-term prediction and forecasting of tectonic earthquakes and indicate guidelines for utilization of possible forerunners of large earthquakes to drive civil protection actions, including the use of probabilistic seismic hazard analysis in the wake of a large earthquake.
Abstract: Following the 2009 L'Aquila earthquake, the Dipartimento della Protezione Civile Italiana (DPC) appointed an International Commission on Earthquake Forecasting for Civil Protection (ICEF) to report on the current state of knowledge of short-term prediction and forecasting of tectonic earthquakes and indicate guidelines for utilization of possible forerunners of large earthquakes to drive civil protection actions, including the use of probabilistic seismic hazard analysis in the wake of a large earthquake. The ICEF reviewed research on earthquake prediction and forecasting, drawing from developments in seismically active regions worldwide. A prediction is defined as a deterministic statement that a future earthquake will or will not occur in a particular geographic region, time window, and magnitude range, whereas a forecast gives a probability (greater than zero but less than one) that such an event will occur. Earthquake predictability, the degree to which the future occurrence of earthquakes can be determined from the observable behavior of earthquake systems, is poorly understood. This lack of understanding is reflected in the inability to reliably predict large earthquakes in seismically active regions on short time scales. Most proposed prediction methods rely on the concept of a diagnostic precursor; i.e., some kind of signal observable before earthquakes that indicates with high probability the location, time, and magnitude of an impending event. Precursor methods reviewed here include changes in strain rates, seismic wave speeds, and electrical conductivity; variations of radon concentrations in groundwater, soil, and air; fluctuations in groundwater levels; electromagnetic variations near and above Earth's surface; thermal anomalies; anomalous animal behavior; and seismicity patterns. The search for diagnostic precursors has not yet produced a successful short-term prediction scheme. Therefore, this report focuses on operational earthquake forecasting as the principal means for gathering and disseminating authoritative information about time-dependent seismic hazards to help communities prepare for potentially destructive earthquakes. On short time scales of days and weeks, earthquake sequences show clustering in space and time, as indicated by the aftershocks triggered by large events. Statistical descriptions of clustering explain many features observed in seismicity catalogs, and they can be used to construct forecasts that indicate how earthquake probabilities change over the short term. Properly applied, short-term forecasts have operational utility; for example, in anticipating aftershocks that follow large earthquakes. Although the value of long-term forecasts for ensuring seismic safety is clear, the interpretation of short-term forecasts is problematic, because earthquake probabilities may vary over orders of magnitude but typically remain low in an absolute sense (< 1% per day). Translating such low-probability forecasts into effective decision-making is a difficult challenge. Reports on the current utilization of operational forecasting in earthquake risk management were compiled for six countries with high seismic risk: China, Greece, Italy, Japan, Russia, and the United States.
Long-term models are currently the most important forecasting tools for civil protection against earthquake damage, because they guide earthquake safety provisions of building codes, performance-based seismic design, and other risk-reducing engineering practices, such as retrofitting to correct design flaws in older buildings. Short-term forecasting of aftershocks is practiced by several countries among those surveyed, but operational earthquake forecasting has not been fully implemented (i.e., regularly updated and on a national scale) in any of them. Based on the experience accumulated in seismically active regions, the ICEF has provided to DPC a set of recommendations on the utilization of operational forecasting in Italy, which may also be useful in other countries. The public should be provided with open sources of information about the short-term probabilities of future earthquakes that are authoritative, scientific, consistent, and timely. Advisories should be based on operationally qualified, regularly updated seismicity forecasting systems that have been rigorously reviewed and updated by experts in the creation, delivery, and utility of earthquake information. The quality of all operational models should be evaluated for reliability and skill by retrospective testing, and they should be under continuous prospective testing against established long-term forecasts and alternative time-dependent models. Alert procedures should be standardized to facilitate decisions at different levels of government and among the public. Earthquake probability thresholds should be established to guide alert levels based on objective analysis of costs and benefits, as well as the less tangible aspects of value-of-information, such as gains in psychological preparedness and resilience. The principles of effective public communication established by social science research should be applied to the delivery of seismic hazard information.
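
A minimal sketch of the kind of short-term clustering forecast the report describes is shown below, combining the modified Omori aftershock decay with a Gutenberg-Richter magnitude distribution (Reasenberg-Jones style); the parameter values are generic illustrations, not numbers from the ICEF report.

```python
# Minimal sketch of a short-term aftershock forecast: modified Omori decay
# combined with a Gutenberg-Richter magnitude distribution. The default
# parameters (a, b, p, c) are illustrative placeholders.
import numpy as np

def expected_aftershocks(t1, t2, mainshock_mag, m_min,
                         a=-1.67, b=0.91, p=1.08, c=0.05):
    """Expected number of aftershocks with magnitude >= m_min in [t1, t2] days."""
    rate_scale = 10.0 ** (a + b * (mainshock_mag - m_min))
    if abs(p - 1.0) < 1e-9:
        time_integral = np.log((t2 + c) / (t1 + c))
    else:
        time_integral = ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return rate_scale * time_integral

def probability_of_event(t1, t2, mainshock_mag, m_min, **kw):
    """Poisson probability of at least one such aftershock in the window."""
    n = expected_aftershocks(t1, t2, mainshock_mag, m_min, **kw)
    return 1.0 - np.exp(-n)

# Probability of an M >= 5 aftershock in the first week after an M 6.3 mainshock
print(probability_of_event(0.0, 7.0, mainshock_mag=6.3, m_min=5.0))
```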

363 citations

Journal ArticleDOI
TL;DR: In this paper, the authors used the stress-stepping technique to investigate the influence of confining pressure at effective confining pressures of 10, 30, and 50 MPa (while maintaining a constant 20 MPa pore fluid pressure).
Abstract: The characterization of time-dependent brittle rock deformation is fundamental to understanding the long-term evolution and dynamics of the Earth's crust. The chemical influence of pore water promotes time-dependent deformation through stress corrosion cracking that allows rocks to deform at stresses far below their short-term failure strength. Here, we report results from a study of time-dependent brittle creep in water-saturated samples of Darley Dale sandstone (initial porosity, 13%) under triaxial stress conditions. Results from conventional creep experiments show that axial strain rate is heavily dependent on the applied differential stress. A reduction of only 10% in differential stress results in a decrease in strain rate of more than two orders of magnitude. However, natural sample variability means that multiple experiments must be performed to yield consistent results. Hence we also demonstrate that the use of stress-stepping creep experiments can successfully overcome this issue. We have used the stress-stepping technique to investigate the influence of confining pressure at effective confining pressures of 10, 30, and 50 MPa (while maintaining a constant 20 MPa pore fluid pressure). Our results demonstrate that the stress corrosion process appears to be significantly inhibited at higher effective pressures, with the creep strain rate reduced by multiple orders of magnitude. The influence of doubling the pore fluid pressure, however, while maintaining a constant effective confining pressure, is shown to influence the rate of stress corrosion within the range expected from sample variability. We discuss these results in the context of microstructural analysis, acoustic emission hypocenter locations, and fits to proposed macroscopic creep laws.
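
As a back-of-envelope reading of the stress sensitivity quoted above (an illustration, not the paper's fitted creep law), a power-law dependence of strain rate on differential stress would need a very large exponent to reproduce it:

```latex
% If \dot{\varepsilon} \propto \sigma^{n}, a 10% reduction in differential
% stress producing a drop of at least two orders of magnitude in strain rate
% requires
\[
  0.9^{\,n} \le 10^{-2}
  \quad\Longrightarrow\quad
  n \;\ge\; \frac{2\ln 10}{-\ln 0.9} \;\approx\; 44,
\]
% consistent with the very high apparent stress sensitivities typically
% associated with brittle creep controlled by stress corrosion.
```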

307 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: In this article, a review of rock friction and the frictional properties of earthquake faults is presented, together with a discussion of the friction state variable and its interpretation as a measure of average asperity contact time and porosity within granular fault gouge.
Abstract: This paper reviews rock friction and the frictional properties of earthquake faults. The basis for rate- and state-dependent friction laws is reviewed. The friction state variable is discussed, including its interpretation as a measure of average asperity contact time and porosity within granular fault gouge. Data are summarized showing that friction evolves even during truly stationary contact, and the connection between modern friction laws and the concept of “static” friction is discussed. Measurements of frictional healing, as evidenced by increasing static friction during quasistationary contact, are reviewed, as are their implications for fault healing. Shear localization in fault gouge is discussed, and the relationship between microstructures and friction is reviewed. These data indicate differences in the behavior of bare rock surfaces as compared to shear within granular fault gouge that can be attributed to dilation within fault gouge. Physical models for the characteristic friction distance are discussed and related to the problem of scaling this parameter to seismic faults. Earthquake afterslip, its relation to laboratory friction data, and the inverse correlation between afterslip and shallow coseismic slip are discussed in the context of a model for afterslip. Recent observations of the absence of afterslip are predicted by the model.
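
For orientation, the standard Dieterich-Ruina rate- and state-dependent formulation that this review covers can be written as follows (common textbook notation, stated here for reference rather than quoted from the paper):

```latex
% Rate- and state-dependent friction with the Dieterich (aging) state evolution law.
\[
  \mu = \mu_{0} + a \ln\!\frac{V}{V_{0}} + b \ln\!\frac{V_{0}\,\theta}{D_{c}},
  \qquad
  \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_{c}},
\]
% where V is slip velocity, \theta the state variable (interpretable as average
% asperity contact time), and D_{c} the characteristic slip distance whose
% scaling to seismic faults is discussed above. At steady state
% \theta_{ss} = D_{c}/V, so \mu_{ss} = \mu_{0} + (a-b)\ln(V/V_{0}), and
% a - b < 0 gives velocity weakening. During truly stationary contact
% (V \to 0), \theta grows linearly with hold time, reproducing the log-time
% frictional healing discussed in the abstract.
```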

1,714 citations

11 Jun 2010
Abstract: The validity of the cubic law for laminar flow of fluids through open fractures consisting of parallel planar plates has been established by others over a wide range of conditions with apertures ranging down to a minimum of 0.2 µm. The law may be given in simplified form by Q/Δh = C(2b)³, where Q is the flow rate, Δh is the difference in hydraulic head, C is a constant that depends on the flow geometry and fluid properties, and 2b is the fracture aperture. The validity of this law for flow in a closed fracture where the surfaces are in contact and the aperture is being decreased under stress has been investigated at room temperature by using homogeneous samples of granite, basalt, and marble. Tension fractures were artificially induced, and the laboratory setup used radial as well as straight flow geometries. Apertures ranged from 250 down to 4 µm, which was the minimum size that could be attained under a normal stress of 20 MPa. The cubic law was found to be valid whether the fracture surfaces were held open or were being closed under stress, and the results are not dependent on rock type. Permeability was uniquely defined by fracture aperture and was independent of the stress history used in these investigations. The effects of deviations from the ideal parallel plate concept only cause an apparent reduction in flow and may be incorporated into the cubic law by replacing C by C/ƒ. The factor ƒ varied from 1.04 to 1.65 in these investigations. The model of a fracture that is being closed under normal stress is visualized as being controlled by the strength of the asperities that are in contact. These contact areas are able to withstand significant stresses while maintaining space for fluids to continue to flow as the fracture aperture decreases. The controlling factor is the magnitude of the aperture, and since flow depends on (2b)³, a slight change in aperture evidently can easily dominate any other change in the geometry of the flow field. Thus one does not see any noticeable shift in the correlations of our experimental results in passing from a condition where the fracture surfaces were held open to one where the surfaces were being closed under stress.
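
A minimal numerical sketch of the cubic law quoted above follows; the parallel-plate expression used for the constant C (rho*g*w / (12*mu*L)) is a standard assumption for straight flow, not a value taken from the paper.

```python
# Minimal sketch of the cubic law Q/Δh = C(2b)^3 for straight flow between
# parallel plates; f is the deviation factor described in the abstract.
def cubic_law_flow(aperture_2b, head_drop, width, length,
                   rho=1000.0, g=9.81, mu=1.0e-3, f=1.0):
    """Volumetric flow rate (m^3/s) through a fracture of aperture 2b (m)."""
    C = rho * g * width / (12.0 * mu * length)   # assumed parallel-plate form
    return (C / f) * aperture_2b ** 3 * head_drop

# A 10 µm aperture, 1 m head drop over a 0.1 m long, 0.1 m wide sample
print(cubic_law_flow(aperture_2b=10e-6, head_drop=1.0, width=0.1, length=0.1))
```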

1,557 citations

01 Mar 1995
TL;DR: This thesis applies neural network feature selection techniques to multivariate time series data to improve prediction of a target time series; results indicate that the Stochastics and RSI indicators yield better predictions than the moving averages.
Abstract: This thesis applies neural network feature selection techniques to multivariate time series data to improve prediction of a target time series. Two approaches to feature selection are used. First, a subset enumeration method is used to determine which financial indicators are most useful for aiding in prediction of the S&P 500 futures daily price. The candidate indicators evaluated include RSI, Stochastics and several moving averages. Results indicate that the Stochastics and RSI indicators result in better prediction results than the moving averages. The second approach to feature selection is calculation of individual saliency metrics. A new decision boundary-based individual saliency metric, and a classifier independent saliency metric are developed and tested. Ruck's saliency metric, the decision boundary based saliency metric, and the classifier independent saliency metric are compared for a data set consisting of the RSI and Stochastics indicators as well as delayed closing price values. The decision based metric and the Ruck metric results are similar, but the classifier independent metric agrees with neither of the other metrics. The nine most salient features, determined by the decision boundary based metric, are used to train a neural network and the results are presented and compared to other published results.
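
A generic sketch of a derivative-style input-saliency measure in the spirit described above is given below; it uses a finite-difference approximation for any fitted model and is not the thesis implementation of Ruck's metric.

```python
# Generic sketch of an input-saliency measure: mean sensitivity of a trained
# model's output to each input feature, approximated by finite differences.
import numpy as np

def input_saliency(predict, X, eps=1e-3):
    """Mean |d output / d input_j| over samples, one value per feature."""
    X = np.asarray(X, dtype=float)
    saliency = np.zeros(X.shape[1])
    base = predict(X)
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] += eps
        saliency[j] = np.mean(np.abs(predict(X_pert) - base) / eps)
    return saliency

# Usage with any fitted model exposing predict(X) -> 1-D array, e.g. ranking
# RSI, Stochastics, and lagged-price features from most to least salient:
# ranking = np.argsort(input_saliency(model.predict, X_train))[::-1]
```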

1,545 citations

Journal ArticleDOI
TL;DR: In this paper, the authors introduced the concept of self-organized criticality to explain the behavior of the sandpile model, where particles are randomly dropped onto a square grid of boxes and when a box accumulates four particles they are redistributed to the four adjacent boxes or lost off the edge of the grid.
Abstract: The concept of self-organized criticality was introduced to explain the behaviour of the sandpile model. In this model, particles are randomly dropped onto a square grid of boxes. When a box accumulates four particles they are redistributed to the four adjacent boxes or lost off the edge of the grid. Redistributions can lead to further instabilities with the possibility of more particles being lost from the grid, contributing to the size of each ‘avalanche’. These model ‘avalanches’ satisfied a power-law frequency-area distribution with a slope near unity. Other cellular-automata models, including the slider-block and forest-fire models, are also said to exhibit self-organized critical behaviour. It has been argued that earthquakes, landslides, forest fires, and species extinctions are examples of self-organized criticality in nature. In addition, wars and stock market crashes have been associated with this behaviour. The forest-fire model is particularly interesting in terms of its relation to the critical-point behaviour of the site-percolation model. In the basic forest-fire model, trees are randomly planted on a grid of points. Periodically in time, sparks are randomly dropped on the grid. If a spark drops on a tree, that tree and adjacent trees burn in a model fire. The fires are the ‘avalanches’ and they are found to satisfy power-law frequency-area distributions with slopes near unity. This forest-fire model is closely related to the site-percolation model, which exhibits critical behaviour. In the forest-fire model there is an inverse cascade of trees from small clusters to large clusters; trees are lost primarily from model fires that destroy the largest clusters. This quasi steady-state cascade gives a power-law frequency-area distribution for both clusters of trees and smaller fires. The site-percolation model is equivalent to the forest-fire model without fires. In this case there is a transient cascade of trees from small to large clusters and a power-law distribution is found only at a critical density of trees.
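
A minimal sketch of the sandpile model as described above is given below, recording the size of each avalanche; the grid size and number of drops are arbitrary illustrative choices.

```python
# Minimal sketch of the sandpile ('avalanche') model: particles are dropped at
# random onto an L x L grid; any box reaching four particles topples, sending
# one particle to each of its four neighbours (particles falling off the edge
# are lost). The avalanche size is the number of topplings per dropped particle.
import numpy as np

def sandpile_avalanches(L=32, n_drops=20000, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.zeros((L, L), dtype=int)
    sizes = []
    for _ in range(n_drops):
        i, j = rng.integers(L), rng.integers(L)
        grid[i, j] += 1
        size = 0
        unstable = [(i, j)] if grid[i, j] >= 4 else []
        while unstable:
            x, y = unstable.pop()
            while grid[x, y] >= 4:
                grid[x, y] -= 4
                size += 1
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < L and 0 <= ny < L:   # off-grid particles are lost
                        grid[nx, ny] += 1
                        if grid[nx, ny] >= 4:
                            unstable.append((nx, ny))
        if size:
            sizes.append(size)
    return np.array(sizes)

# After a transient, the avalanche-size frequency distribution approaches a
# power law with slope near unity on a log-log plot.
sizes = sandpile_avalanches()
print(sizes.size, sizes.max())
```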

1,384 citations