Book

Manual of sedimentary petrography

About: This book was published in 1938 and is currently open access. It has received 744 citations to date.
Citations
Journal ArticleDOI
TL;DR: GRADISTAT as discussed by the authors is a computer program for the rapid analysis of grain size statistics from any of the standard measuring techniques, such as sieving and laser granulometry.
Abstract: Grain size analysis is an essential tool for classifying sedimentary environments. The calculation of statistics for many samples can, however, be a laborious process. A computer program called GRADISTAT has been written for the rapid analysis of grain size statistics from any of the standard measuring techniques, such as sieving and laser granulometry. Mean, mode, sorting, skewness and other statistics are calculated arithmetically and geometrically (in metric units) and logarithmically (in phi units) using moment and Folk and Ward graphical methods. Method comparison has allowed Folk and Ward descriptive terms to be assigned to moments statistics. Results indicate that Folk and Ward measures, expressed in metric units, appear to provide the most robust basis for routine comparisons of compositionally variable sediments. The program runs within the Microsoft Excel spreadsheet package and is extremely versatile, accepting standard and non-standard size data, and producing a range of graphical outputs including frequency and ternary plots. Copyright © 2001 John Wiley & Sons, Ltd.
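The moment statistics named in the abstract (mean, sorting, skewness, calculated logarithmically in phi units) can be sketched as below. This is a minimal illustration of the method of moments on binned phi-scale data under assumed inputs, not GRADISTAT's actual code; the function and variable names are hypothetical.

```python
import math

def moment_statistics(phi_midpoints, weight_percent):
    """Logarithmic (phi-unit) method-of-moments grain size statistics.

    phi_midpoints: class midpoints in phi units
    weight_percent: weight percentage retained in each size class
    Returns (mean, sorting, skewness, kurtosis).
    """
    total = sum(weight_percent)
    fractions = [w / total for w in weight_percent]
    mean = sum(f * m for f, m in zip(fractions, phi_midpoints))
    variance = sum(f * (m - mean) ** 2 for f, m in zip(fractions, phi_midpoints))
    sorting = math.sqrt(variance)  # standard deviation in phi units
    skewness = sum(f * (m - mean) ** 3
                   for f, m in zip(fractions, phi_midpoints)) / sorting ** 3
    # Raw fourth-moment kurtosis (3.0 for a normal distribution); the
    # Krumbein and Pettijohn (1938) convention of subtracting 3.0 to
    # centre the measure on zero is not applied in this sketch.
    kurtosis = sum(f * (m - mean) ** 4
                   for f, m in zip(fractions, phi_midpoints)) / sorting ** 4
    return mean, sorting, skewness, kurtosis
```

For a symmetric two-tailed distribution such as class midpoints 1, 2, 3 phi with weights 25, 50, 25 per cent, the sketch returns a mean of 2 phi and zero skewness.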

3,419 citations


Cites methods from "Manual of sedimentary petrography"

  • ...…but available with some Coulter sizing instruments), geometrically (based on a log-normal distribution with metric size values) and logarithmically (based on a log-normal distribution with phi size values), following the terminology and formulae suggested by Krumbein and Pettijohn (1938)....


  • ...The convention used by some authors (e.g. Krumbein and Pettijohn, 1938) to subtract 3·0 from the moments value to standardize the measure around zero is not followed here....


  • ...The mathematical ‘method of moments’ (Krumbein and Pettijohn, 1938; Friedman and Johnson, 1982) is the most accurate since it employs the entire sample population....


Journal ArticleDOI
TL;DR: In this article, the authors provide a formalism for the macroscopic study of kinetic and physical processes affecting crystallization, within which the explicit effect of chemical and physical processes on the CSD can be analytically tested.
Abstract: Crystal-size in crystalline rocks is a fundamental measure of growth rate and age. And if nucleation spawns crystals over a span of time, a broad range of crystal sizes is possible during crystallization. A population balance based on the number density of crystals of each size generally predicts a log-linear distribution with increasing size. The negative slope of such a distribution is a measure of the product of overall population growth rate and mean age and the zero size intercept is nucleation density. Crystal size distributions (CSDs) observed for many lavas are smooth and regular, if not actually linear, when so plotted and can be interpreted using the theory of CSDs developed in chemical engineering by Randolph and Larson (1971). Nucleation density, nucleation and growth rates, and orders of kinetic reactions can be estimated from such data, and physical processes affecting the CSD (e.g. crystal fractionation and accumulation, mixing of populations, annealing in metamorphic and plutonic rocks, and nuclei destruction) can be gauged through analytical modeling. CSD theory provides a formalism for the macroscopic study of kinetic and physical processes affecting crystallization, within which the explicit effect of chemical and physical processes on the CSD can be analytically tested. It is a means by which petrographic information can be quantitatively linked to the kinetics of crystallization, and on these grounds CSDs furnish essential information supplemental to laboratory kinetic studies. In this three-part series of papers, Part I provides the general CSD theory in a geological context, while applications to igneous and metamorphic rocks are given, respectively, in Parts II and III.
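The log-linear population balance described in the abstract, ln(n) = ln(n0) − L/(Gτ), can be recovered from measured number densities by a least-squares fit: the intercept gives the nucleation density n0 and the negative slope gives −1/(Gτ), the reciprocal of growth rate times mean age. The sketch below assumes idealized, noise-free input; the names are hypothetical.

```python
import math

def fit_csd(sizes, number_densities):
    """Fit ln(n) = ln(n0) - L/(G*tau) by ordinary least squares.

    sizes: crystal sizes L
    number_densities: population density n at each size
    Returns (n0, G_tau): nucleation density (zero-size intercept) and
    the characteristic length G*tau recovered from the slope -1/(G*tau).
    """
    ln_n = [math.log(n) for n in number_densities]
    pts = len(sizes)
    mean_x = sum(sizes) / pts
    mean_y = sum(ln_n) / pts
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, ln_n))
             / sum((x - mean_x) ** 2 for x in sizes))
    intercept = mean_y - slope * mean_x
    return math.exp(intercept), -1.0 / slope
```

On synthetic data generated with n0 = 10^6 and Gτ = 0.5, the fit recovers both parameters; with real CSD data the scatter about the log-linear trend is itself diagnostic of the physical processes listed above.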

726 citations

Journal ArticleDOI
TL;DR: A consideration of a large number of procedures for the collection and analysis of benthic samples, with particular emphasis on stream investigations and the importance of substrate particle size, reveals that only certain techniques are suitable.
Abstract: A consideration of a large number of procedures for the collection and analysis of benthic samples, with particular emphasis on stream investigations and the importance of substrate particle size as a common denominator in benthic ecology, reveals that only certain techniques are suitable. Although either systematic or stratified random samplings are appropriate for faunal surveys, the careful selection of sample sites in singlespecies studies can provide maximum information per unit sampling effort. In order to adequately describe the micro-distribution of benthic organisms, investigations must be conducted on a year-round basis. Only bottom samplers, such as the core-type, which retain the entire sediment sample for analysis are desirable. Measurements of current velocity should be made close to the substrate-water interface. The removal of the fauna by elutriation and hand sorting allows for further physical and chemical analyses. Physical analysis of stream sediments can be accomplished through the decantation of silt and clay followed by dry sieving of the coarser material. In addition, a new photographic technique for substrate analysis, described in detail, can provide information on the surface sediments. Indications of the organic content of sediments can be obtained by the dry combustion carbon train method or, when clay content is low, from loss of weight on ignition values. However, new techniques are called for, especially those directed toward the food habits of particular species. The Wentworth classification, modified to include a gravel category, should be followed, and the size classes converted to the phi scale in graphic presentations of sediment data. 
Since Shelford (1914) became interested in the ecology of benthic macro-invertebrates, Needham (1928) conducted surveys of New York streams, and Shelford and Eddy (1929) formulated some fundamental approaches for the study of stream communities, a great many benthic investigations have appeared in the literature. The facts accumulated to date show that the orientation of aquatic invertebrates to various environmental parameters results in nonuniform distributions in which given animal groups are associated with measurable ranges of environmental conditions. Benthic ecologists have spent considerable effort measuring the most obvious parameters in the aquatic environment to determine their effects on the distribution of various groups of animals. Substrate, current velocity, and food materials have been shown to be of primary importance, although the way in which these interrelated parameters determine distribution remains to be completely delineated. Undoubtedly, some parameters are more critical than others, but it may be that all physical
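The conversion of Wentworth size classes to the phi scale recommended in the abstract follows φ = −log2(d), with d the grain diameter in millimetres. A minimal sketch:

```python
import math

def to_phi(diameter_mm):
    """Grain diameter in millimetres to the phi scale: phi = -log2(d)."""
    return -math.log2(diameter_mm)

def from_phi(phi):
    """Phi value back to grain diameter in millimetres."""
    return 2.0 ** -phi
```

Thus 1 mm sand is 0 phi, 0.25 mm fine sand is 2 phi, and 2 mm (the Wentworth sand/gravel boundary) is −1 phi; coarser sediment plots at progressively more negative phi values, which is why the scale is convenient for the graphic presentation of sediment data.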

650 citations

Book ChapterDOI
11 Sep 2018

513 citations

Journal ArticleDOI
01 Nov 2004
TL;DR: The Bruun Rule has no power for predicting shoreline behaviour under rising sea level and should be abandoned, the authors argue: despite a comparative lack of understanding of the links between sea-level rise and shoreline response, many appraisals employ the "Bruun Rule", yet several field studies disprove it.
Abstract: In the face of a global rise in sea level, understanding the response of the shoreline to changes in sea level is a critical scientific goal to inform policy makers and managers. A body of scientific information exists that illustrates both the complexity of the linkages between sea-level rise and shoreline response, and the comparative lack of understanding of these linkages. In spite of the lack of understanding, many appraisals have been undertaken that employ a concept known as the “Bruun Rule”. This is a simple two-dimensional model of shoreline response to rising sea level. The model has seen near global application since its original formulation in 1954. The concept provided an advance in understanding of the coastal system at the time of its first publication. It has, however, been superseded by numerous subsequent findings and is now invalid. Several assumptions behind the Bruun Rule are known to be false and nowhere has the Bruun Rule been adequately proven; on the contrary several studies disprove it in the field. No universally applicable model of shoreline retreat under sea-level rise has yet been developed. Despite this, the Bruun Rule is in widespread contemporary use at a global scale both as a management tool and as a scientific concept. The persistence of this concept beyond its original assumption base is attributed to the following factors: 1. Appeal of a simple, easy to use analytical model that is in widespread use. 2. Difficulty of determining the relative validity of ‘proofs’ and ‘disproofs’. 3. Ease of application. 4. Positive advocacy by some scientists. 5. Application by other scientists without critical appraisal. 6. The simple numerical expression of the model. 7. Lack of easy alternatives. The Bruun Rule has no power for predicting shoreline behaviour under rising sea level and should be abandoned. It is a concept whose time has passed. 
The belief by policy makers that it offers a prediction of future shoreline position may well have stifled much-needed research into the coastal response to sea-level rise.
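For reference, the simple two-dimensional model being criticized is usually written R = S·L/(B + h): retreat R is sea-level rise S scaled by the active profile width L divided by the sum of berm height B and closure depth h. The sketch below shows that commonly stated form with hypothetical names; per the abstract's argument, it is reproduced for illustration, not as a valid predictor of shoreline response.

```python
def bruun_retreat(sea_level_rise, profile_width, berm_height, closure_depth):
    """Shoreline retreat R = S * L / (B + h) under the (disputed) Bruun Rule.

    All arguments in consistent length units: sea-level rise S, active
    profile width L, berm height B, and closure depth h.
    """
    return sea_level_rise * profile_width / (berm_height + closure_depth)
```

For example, 0.5 m of rise over a 1000 m active profile with a 2 m berm and 8 m closure depth yields 50 m of predicted retreat; the abstract's point is that the assumptions behind this arithmetic do not hold in the field.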

427 citations