
Showing papers on "Sampling (statistics)" published in 1994


Journal ArticleDOI
TL;DR: In this article, the authors recommend first attempting to respecify the model so that the sampling weights are solely a function of the independent variables included in the model; if the model cannot be respecified, estimating it using the sampling weights may be appropriate.
Abstract: Most major population surveys used by social scientists are based on complex sampling designs where sampling units have different probabilities of being selected. Although sampling weights must generally be used to derive unbiased estimates of univariate population characteristics, the decision about their use in regression analysis is more complicated. Where sampling weights are solely a function of independent variables included in the model, unweighted OLS estimates are preferred because they are unbiased, consistent, and have smaller standard errors than weighted OLS estimates. Where sampling weights are a function of the dependent variable (and thus of the error term), we recommend first attempting to respecify the model so that they are solely a function of the independent variables. If this can be accomplished, then unweighted OLS is again preferred. If the model cannot be respecified, then estimation of the model using sampling weights may be appropriate. In this case, however, the formula used by...
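
To make the decision rule concrete, here is a small simulation sketch (not from the article; the data, the weight function, and all values are invented) for the case where the sampling weights depend only on an included covariate, so unweighted OLS is the preferred estimator.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    x = rng.normal(size=n)
    y = 1.0 + 2.0 * x + rng.normal(size=n)      # true intercept 1, slope 2
    w = 1.0 + np.exp(x)                         # hypothetical sampling weights, a function of x only

    X = np.column_stack([np.ones(n), x])

    # unweighted OLS
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

    # weighted OLS via the square-root-weight transformation
    sw = np.sqrt(w)
    beta_wls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)

    print("unweighted OLS:", beta_ols)          # both are close to (1, 2) here,
    print("weighted OLS:  ", beta_wls)          # since the weights depend only on x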

1,372 citations


Book
17 Feb 1994
TL;DR: Quantitative social research relies heavily on data that originate either from surveys with sampling designs that depart from simple random sampling, or from observational studies with no formal sampling design, so the standard assumptions of independence are inappropriate.
Abstract: Quantitative social research relies heavily on data that originate either from surveys with sampling designs that depart from simple random sampling, or from observational studies with no formal sampling design. Simple random sampling is often not feasible, or its use would yield data with less information about certain features of interest, and it is often economically prohibitive. For example, in studies of school effectiveness it may be difficult to secure the cooperation of a school or a classroom. Therefore it would be rather wasteful to collect data from a small number of students in such a classroom. Data from a larger proportion, or from all the students, could be collected at a small additional expense, thus reducing the number of classrooms required for a sample to contain sufficient information for the intended purposes. Similarly, in household surveys, having contacted a selected individual, it would make sense to collect data from the rest of the members of the household at the same time. When this is done, we usually end up with data for which the standard assumptions of independence (such as in ordinary regression) are inappropriate.
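
A minimal simulation sketch (not from the book; the classroom counts and variance components are invented) shows why the independence assumption fails for students sampled by classroom: the variance of the survey mean is far larger than the usual independence-based formula suggests.

    import numpy as np

    rng = np.random.default_rng(1)
    n_classrooms, students_per_class = 30, 20
    sd_between, sd_within = 1.0, 1.0            # classroom-level and student-level spread

    means = []
    for _ in range(2000):                       # repeat the whole survey many times
        class_effects = rng.normal(0, sd_between, n_classrooms)
        scores = class_effects[:, None] + rng.normal(
            0, sd_within, (n_classrooms, students_per_class))
        means.append(scores.mean())

    n = n_classrooms * students_per_class
    naive_var = (sd_between**2 + sd_within**2) / n   # what independence would predict
    print("empirical variance of the mean:", np.var(means))
    print("variance assuming independence:", naive_var)   # far smaller: design effect >> 1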

898 citations


Journal ArticleDOI
TL;DR: A subclass of dynamic linear models with unknown hyperparameters called d-inverse-gamma models is defined and it is proved that the regularity conditions for convergence hold.
Abstract: We define a subclass of dynamic linear models with unknown hyperparameters, called d-inverse-gamma models. We then approximate the marginal p.d.f.s of the hyperparameter and the state vector using the data augmentation algorithm of Tanner and Wong. We prove that the regularity conditions for convergence hold. A sampling-based scheme for practical implementation is discussed. Finally, we illustrate how to obtain an iterative importance sampling estimate of the model likelihood.

884 citations


Journal ArticleDOI
TL;DR: This work describes general purpose software currently being developed for implementing Gibbs sampling: BUGS (Bayesian inference using Gibbs sampling), which is written in Modula-2 and runs under both DOS and UNIX.
Abstract: Gibbs sampling has enormous potential for analysing complex data sets. However, routine use of Gibbs sampling has been hampered by the lack of general purpose software for its implementation. Until now all applications have involved writing one-off computer code in low or intermediate level languages such as C or Fortran. We describe some general purpose software that we are currently developing for implementing Gibbs sampling: BUGS (Bayesian inference using Gibbs sampling). The BUGS system comprises three components: first, a natural language for specifying complex models; second, an 'expert system' for deciding appropriate methods for obtaining samples required by the Gibbs sampler; third, a sampling module containing numerical routines to perform the sampling. S objects are used for data input and output. BUGS is written in Modula-2 and runs under both DOS and UNIX.
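
The sketch below is not the BUGS software itself; it is a minimal hand-coded Gibbs sampler for a normal model with unknown mean and precision, the kind of full-conditional sampling that BUGS automates from a model specification. The priors and data are invented.

    import numpy as np

    rng = np.random.default_rng(2)
    y = rng.normal(5.0, 2.0, size=100)          # synthetic data
    n, ybar = len(y), y.mean()

    m0, p0, a0, b0 = 0.0, 0.001, 0.1, 0.1       # assumed prior hyperparameters
    mu, tau = 0.0, 1.0
    draws = []
    for _ in range(5000):
        # full conditional of mu given tau is normal
        prec = p0 + n * tau
        mu = rng.normal((p0 * m0 + n * tau * ybar) / prec, 1.0 / np.sqrt(prec))
        # full conditional of tau given mu is gamma
        shape = a0 + n / 2.0
        rate = b0 + 0.5 * np.sum((y - mu) ** 2)
        tau = rng.gamma(shape, 1.0 / rate)      # numpy parameterizes gamma by scale
        draws.append((mu, tau))

    mus, taus = np.array(draws[1000:]).T        # discard burn-in
    print("posterior mean of mu:", mus.mean(), " of sigma:", (1 / np.sqrt(taus)).mean())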

691 citations


Book ChapterDOI
10 Jul 1994
TL;DR: On four datasets, it is shown that only three or four prototypes sufficed to give predictive accuracy equal or superior to a basic nearest neighbor algorithm whose run-time storage costs were approximately 10 to 200 times greater.
Abstract: With the goal of reducing computational costs without sacrificing accuracy, we describe two algorithms to find sets of prototypes for nearest neighbor classification. Here, the term “prototypes” refers to the reference instances used in a nearest neighbor computation — the instances with respect to which similarity is assessed in order to assign a class to a new data item. Both algorithms rely on stochastic techniques to search the space of sets of prototypes and are simple to implement. The first is a Monte Carlo sampling algorithm; the second applies random mutation hill climbing. On four datasets we show that only three or four prototypes sufficed to give predictive accuracy equal or superior to a basic nearest neighbor algorithm whose run-time storage costs were approximately 10 to 200 times greater. We briefly investigate how random mutation hill climbing may be applied to select features and prototypes simultaneously. Finally, we explain the performance of the sampling algorithm on these datasets in terms of a statistical measure of the extent of clustering displayed by the target classes.
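
A rough sketch of the Monte Carlo variant on synthetic two-class data (the dataset and all sizes are placeholders; the random mutation hill climbing and feature-selection variants are not shown): draw many random prototype sets of a fixed small size, score each by 1-NN accuracy on the training data, and keep the best.

    import numpy as np

    rng = np.random.default_rng(3)
    # two Gaussian classes as a stand-in dataset, split into training and test parts
    X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(2.5, 1, (200, 2))])
    y = np.array([0] * 200 + [1] * 200)
    perm = rng.permutation(len(X))
    tr, te = perm[:300], perm[300:]

    def one_nn_accuracy(proto_X, proto_y, Xq, yq):
        d = np.linalg.norm(Xq[:, None, :] - proto_X[None, :, :], axis=2)
        return np.mean(proto_y[d.argmin(axis=1)] == yq)

    best_acc, best_idx = -1.0, None
    for _ in range(200):                                # Monte Carlo trials
        idx = rng.choice(tr, size=4, replace=False)     # candidate set of 4 prototypes
        acc = one_nn_accuracy(X[idx], y[idx], X[tr], y[tr])
        if acc > best_acc:
            best_acc, best_idx = acc, idx

    print("test accuracy, 4 prototypes :", one_nn_accuracy(X[best_idx], y[best_idx], X[te], y[te]))
    print("test accuracy, 300 instances:", one_nn_accuracy(X[tr], y[tr], X[te], y[te]))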

510 citations


Journal ArticleDOI
TL;DR: The authors first describe the general class of approximation spaces generated by translation of a function ψ(x), and provide a full characterization of their basis functions, and present a general sampling theorem for computing the approximation of signals in these subspaces based on a simple consistency principle.
Abstract: The authors first describe the general class of approximation spaces generated by translation of a function ψ(x), and provide a full characterization of their basis functions. They then present a general sampling theorem for computing the approximation of signals in these subspaces based on a simple consistency principle. The theory puts no restrictions on the system input which can be an arbitrary finite energy signal; bandlimitedness is not required. In contrast to previous approaches, this formulation allows for an independent specification of the sampling (analysis) and approximation (synthesis) spaces. In particular, when both spaces are identical, the theorem provides a simple procedure for obtaining the least squares approximation of a signal. They discuss the properties of this new sampling procedure and present some examples of applications involving bandlimited and polynomial spline signal representations. They also define a spectral coherence function that measures the "similarity" between the sampling and approximation spaces, and derive a relative performance bound for the comparison with the least squares solution.

414 citations


OtherDOI
01 Jan 1994
TL;DR: The U.S. Geological Survey's National Water-Quality Assessment program includes extensive data-collection efforts to assess the quality of the Nation's streams, as discussed by the authors; these studies require analyses of stream samples for major ions, nutrients, sediments, and organic contaminants.
Abstract: The U.S. Geological Survey's National Water-Quality Assessment program includes extensive data-collection efforts to assess the quality of the Nation's streams. These studies require analyses of stream samples for major ions, nutrients, sediments, and organic contaminants. For the information to be comparable among studies in different parts of the Nation, consistent procedures specifically designed to produce uncontaminated samples for trace analysis in the laboratory are critical. This field guide describes the standard procedures for collecting and processing samples for major ions, nutrients, organic contaminants, sediment, and field analyses of conductivity, pH, alkalinity, and dissolved oxygen. Samples are collected and processed using modified and newly designed equipment made of Teflon to avoid contamination, including nonmetallic samplers (D-77 and DH-81) and a Teflon sample splitter. Field solid-phase extraction procedures developed to process samples for organic constituent analyses produce an extracted sample with stabilized compounds for more accurate results. Improvements to standard operational procedures include the use of processing chambers and capsule filtering systems. A modified collecting and processing procedure for organic carbon is designed to avoid contamination from equipment cleaned with methanol. Quality assurance is maintained by strict collecting and processing procedures, replicate sampling, equipment blank samples, and a rigid cleaning procedure using detergent, hydrochloric acid, and methanol.

382 citations


BookDOI
01 Jan 1994
TL;DR: This book discusses Sampling Arthropod Populations, Sampling techniques and Initial Program Development, and Implementation of Sampling Programs: Training Specialists in Sampling Procedures.
Abstract: Introduction to Sampling Arthropod Populations (L.P. Pedigo). Sampling Principles: Arthropod Sampling in Agricultural Landscapes: Ecological Considerations (D.A. Landis). Statistics for Describing Populations (P.M. Davis). Bias and Variability in Statistical Estimates (D.E. Legg and R.D. Moon). Sampling Techniques and Initial Program Development: Techniques for Sampling Arthropods in IPM (S.H. Hutchins). Developing a Primary Sampling Program (G.D. Buntin). Improving Sampling Program Efficiency: Initiating Sampling Programs (L.G. Higley and R.K.D. Peterson). Sequential Sampling for Classifying Pest Status (M.R. Binns). Sequential Estimation and Classification Procedures for Binomial Counts (V.P. Jones). Sequential Sampling to Determine Population Density (W.D. Hutchison). Sampling to Predict or Monitor Biological Control (J.P. Nyrop and W. van der Werf). Time-Sequential Sampling for Taking Tactical Action (L.P. Pedigo). Sampling Programs: Sampling Methods for Insect Management in Alfalfa (R. Berberet and W.D. Hutchison). Sampling Pest and Beneficial Arthropods of Apple (E.H. Beers, L.A. Hull, and V.P. Jones). Sampling Arthropod Pests in Citrus (J.D. Hare). Sampling Arthropod Pests in Corn (J.J. Tollefson and D.D. Calvin). Estimating Abundance Impact, and Interactions among Arthropods in Cotton Agroecosystems (L.T. Wilson). Sampling Arthropods in Livestock Management Systems (T.J. Lysyk and R.D. Moon). Sampling Programs for Soybean Arthropods (M.R. Zeiss and T.H. Klubertanz). Sampling Arthropod Pests in Vegetables (J.T. Trumble). Sampling Arthropod Pests of Wheat and Rice (N.E. Elliott, G.L. Hein, and B.M. Shepard). Implementation of Sampling Programs: Training Specialists in Sampling Procedures (G.W. Cuperus and R. Berberet). Designing and Delivering In-The-Field Scouting Programs (E.J. Bechinski).

379 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose an adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities, and demonstrate the proposed AIS methodology using a turbine blade reliability analysis problem.
Abstract: This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
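
The sketch below shows only the non-adaptive core of the idea on a toy limit state with an assumed sampling shift; the paper's adaptive updating of the sampling domain and its sensitivity coefficients are not reproduced.

    import numpy as np
    from math import erfc

    rng = np.random.default_rng(4)

    def g(x):                                   # limit-state function: "failure" when g < 0
        return 6.0 - x[:, 0] - x[:, 1]

    N = 20000
    x = rng.normal(size=(N, 2))                 # crude Monte Carlo rarely hits the failure region
    p_crude = np.mean(g(x) < 0)

    mu = np.array([3.0, 3.0])                   # assumed shift toward the failure region
    z = rng.normal(size=(N, 2)) + mu
    log_w = -0.5 * (z ** 2).sum(axis=1) + 0.5 * ((z - mu) ** 2).sum(axis=1)
    p_is = np.mean((g(z) < 0) * np.exp(log_w))  # importance-sampling estimate

    p_exact = 0.5 * erfc(3.0)                   # exact value, since x1 + x2 ~ N(0, 2)
    print("crude MC:", p_crude, " importance sampling:", p_is, " exact:", p_exact)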

358 citations


Journal ArticleDOI
TL;DR: A chart of models of visual word recognition is presented in this article that facilitates formal comparisons between models of different formats; sets of criteria for the evaluation of models are discussed, as well as strategies for model construction.
Abstract: A chart of models of visual word recognition is presented that facilitates formal comparisons between models of different formats. In the light of the theoretical contributions to this special section, sets of criteria for the evaluation of models are discussed, as well as strategies for model construction.

312 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that under suitable conditions on a generating function λ ∈ L2, the approximation problem onto the space V = {v : v(x) = (c * λ)(x), c ∈ l2} produces a sampling procedure similar to the classical one.
Abstract: We view Shannon's sampling procedure as a problem of approximation in the space S = {s : s(x) = (c * sinc)(x), c ∈ l2}. We show that under suitable conditions on a generating function λ ∈ L2, the approximation problem onto the space V = {v : v(x) = (c * λ)(x), c ∈ l2} produces a sampling procedure similar to the classical one. It consists of an optimal prefiltering, a pure jitter-stable sampling, and a postfiltering for the reconstruction. We describe equivalent signal representations using generic, dual, cardinal, and orthogonal basis functions and give the expression of the corresponding filters. We then consider sequences λ_n, where λ_n denotes the n-fold convolution of λ. They provide a sequence of increasingly regular sampling schemes as the value of n increases. We show that the cardinal and orthogonal pre- and postfilters associated with these sequences asymptotically converge to the ideal lowpass filter of Shannon. The theory is illustrated using several examples.
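
As a toy illustration of the prefilter/sample/postfilter structure, the following sketch uses the simplest generating function, a unit box (degree-0 spline), for which the optimal prefilter reduces to averaging over each sampling interval; the test signal and grid sizes are arbitrary, and the paper's general construction is not reproduced.

    import numpy as np

    over, K = 64, 32                            # fine grid points per interval, number of intervals
    t = np.arange(K * over) / over
    f = np.sin(2 * np.pi * t / 7.0) + 0.3 * np.cos(2 * np.pi * t / 2.3)

    blocks = f.reshape(K, over)
    c_ls = blocks.mean(axis=1)                  # optimal prefilter (local average) + sampling
    c_pt = blocks[:, 0]                         # naive point sampling, no prefilter

    rec_ls = np.repeat(c_ls, over)              # postfilter: synthesis with the box function
    rec_pt = np.repeat(c_pt, over)

    print("RMS error with prefilter:", np.sqrt(np.mean((f - rec_ls) ** 2)))
    print("RMS error, point sampled:", np.sqrt(np.mean((f - rec_pt) ** 2)))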

Journal ArticleDOI
TL;DR: In this paper, the authors propose a likelihood method for estimating substitution rates from DNA sequence data; a recursion satisfied by the sampling probabilities is used to construct a Markov chain with absorbing states in such a way that the required sampling distribution is the mean of a functional of the process up to the absorption time.

Journal ArticleDOI
TL;DR: A new computer-aided molecular design approach using genetic algorithms is described; it performs a guided stochastic search in which improved solutions are achieved by sampling areas of the parameter space that have a higher probability of containing good solutions.

Journal ArticleDOI
TL;DR: It is proposed that an important first step for functional studies is to examine accurate, time‐dynamic maps of the brain's electrical fields at the head surface, given an adequate spatial sampling of the surface potentials.
Abstract: Psychological studies with reaction time methodology show that there is meaningful variability in the performance of cognitive operations when responses are measured in milliseconds. Temporal precision is also required to reveal the rapid neurophysiological events in cortical networks. Sampling the brain's electrical activity at the scalp surface characterizes regional brain function with millisecond temporal resolution. The problem with electroencephalographic (EEG) data is localizing the areas of the cortex that generate the observed scalp fields. Although the eventual goal will be to specify the neural generators of the EEG, we propose that an important first step for functional studies is to examine accurate, time-dynamic maps of the brain's electrical fields at the head surface. Given an adequate spatial sampling of the surface potentials, accurate electrical studies require measures that are independent of the location of the reference sensor. The 2D Laplacian of the potential field may be used to define the local features of the scalp current flow. Because the electrical fields are dynamic, brain mapping with electrical data requires animations rather than static images. © 1994 Wiley-Liss, Inc.
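
A minimal sketch of the reference-independence point, assuming potentials on a regular grid; real electrode montages are irregular, and spline-based Laplacian estimators are normally used rather than this five-point stencil.

    import numpy as np

    rng = np.random.default_rng(5)
    v = rng.normal(size=(16, 16))               # toy potential map on a 16 x 16 grid

    def laplacian2d(v):
        lap = np.zeros_like(v)
        lap[1:-1, 1:-1] = (v[:-2, 1:-1] + v[2:, 1:-1] + v[1:-1, :-2] + v[1:-1, 2:]
                           - 4.0 * v[1:-1, 1:-1])
        return lap

    # re-referencing adds the same constant to every channel; the Laplacian is unchanged
    print(np.allclose(laplacian2d(v), laplacian2d(v + 3.7)))    # True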

Proceedings ArticleDOI
23 May 1994
TL;DR: It is shown that the sparse graph, or skeleton, that arises when the authors randomly sample a graph's edges will accurately approximate the value of all cuts in the original graph with high probability, which makes sampling effective for problems involving cuts in graphs.
Abstract: We use random sampling as a tool for solving undirected graph problems. We show that the sparse graph, or skeleton, that arises when we randomly sample a graph's edges will accurately approximate the value of all cuts in the original graph with high probability. This makes sampling effective for problems involving cuts in graphs. We present fast randomized (Monte Carlo and Las Vegas) algorithms for approximating and exactly finding minimum cuts and maximum flows in unweighted, undirected graphs. Our cut-approximation algorithms extend unchanged to weighted graphs while our weighted-graph flow algorithms are somewhat slower. Our approach gives a general paradigm with potential applications to any packing problem. It has since been used in a near-linear time algorithm for finding minimum cuts, as well as faster cut and flow algorithms. Our sampling theorems also yield faster algorithms for several other cut-based problems, including approximating the best balanced cut of a graph, finding a k-connected orientation of a 2k-connected graph, and finding integral multicommodity flows in graphs with a great deal of excess capacity. Our methods also improve the efficiency of some parallel cut and flow algorithms. Our methods also apply to the network design problem, where we wish to build a network satisfying certain connectivity requirements between vertices. We can purchase edges of various costs and wish to satisfy the requirements at minimum total cost. Since our sampling theorems apply even when the sampling probabilities are different for different edges, we can apply randomized rounding to solve network design problems. This gives approximation algorithms that guarantee much better approximations than previous algorithms whenever the minimum connectivity requirement is large. As a particular example, we improve the best approximation bound for the minimum k-connected subgraph problem from 1.85 to 1 + O(√(log n/k)).
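
A toy numerical check of the basic sampling idea (the random graph, the cut, and the sampling probability are all invented, and the paper's probability bounds and algorithms are not reproduced): the value of a fixed cut in the sampled skeleton, rescaled by 1/p, tracks the cut value in the original graph.

    import numpy as np

    rng = np.random.default_rng(6)
    n, p_edge, p_sample = 200, 0.2, 0.25
    adj = np.triu(rng.random((n, n)) < p_edge, 1)       # undirected edges, upper triangle
    edges = np.argwhere(adj)

    side = rng.random(n) < 0.5                          # an arbitrary bipartition (S, V \ S)
    crossing = side[edges[:, 0]] != side[edges[:, 1]]
    cut_full = crossing.sum()

    keep = rng.random(len(edges)) < p_sample            # build the skeleton
    cut_skeleton = (crossing & keep).sum() / p_sample   # rescale by 1/p

    print("cut in original graph:", cut_full, "  estimate from skeleton:", cut_skeleton)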


Journal ArticleDOI
TL;DR: It is proved that the failure biasing heuristic in its original form may not give bounded relative error for a large class of systems and that a modification of the heuristic works for the general case.
Abstract: In this paper we investigate importance sampling techniques for the simulation of Markovian systems with highly reliable components. The need for simulation arises because the state space of such s...

Journal ArticleDOI
TL;DR: In this paper, the dependence of short-period radar rainfall accumulations on periodic sampling of the rainfield is investigated, and it is shown that errors due to sampling can be greater than all the other errors combined if accumulations are improperly computed.

Proceedings ArticleDOI
28 Nov 1994
TL;DR: The BER degradation caused by a given frequency offset between receiver and transmitter clock, as compared with the case of ideal sampling, is calculated.
Abstract: In fully digital receivers, carrier and timing information is derived from samples of the (anti-aliasing-filtered) received continuous-time signal. In case of synchronized sampling, this information is used to align the sampling clock of the receiver with the remote transmit clock. In nonsynchronized sampling systems, the sampling at the receiver is performed by means of a fixed free-running clock, and additional post-processing is necessary to perform timing correction in the digital domain. We investigate the effect of non-synchronized sampling on the BER performance of OFDM systems. We calculate the BER degradation caused by a given frequency offset between receiver and transmitter clock, as compared with the case of ideal sampling. The obtained results are compared with the performance of synchronized sampling systems.

Journal ArticleDOI
TL;DR: A sampling strategy developed for AIDS prevention research in Montgomery County (Dayton), Ohio, that combines targeted sampling with the systematic derivation of proportional sampling quotas is described in this paper; the sampling strategy also has a range of other potential applications.
Abstract: Recruiting large samples from "hidden populations" of injection drug and crack-cocaine users who are not in drug treatment has been a troubling methodological problem that has increased in significance in the face of the AIDS pandemic. This article describes a sampling strategy developed for AIDS prevention research in Montgomery County (Dayton) Ohio that combines targeted sampling with the systematic derivation of proportional sampling quotas. The ethnographic mapping of drug use indicators by outreach workers plays a key role in the sampling design. The triangulation of independent drug use indicators enables us to estimate the relative density of drug users throughout the county and generate proportional sampling quotas for three sampling zones. The sampling strategy has a range of potential applications.
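
The quota derivation itself is simple arithmetic, sketched below with hypothetical indicator counts for three zones rather than the Dayton data.

    # hypothetical indicator counts per sampling zone (not the Dayton data)
    indicator_counts = {"zone A": 420, "zone B": 260, "zone C": 120}
    total_sample = 500

    total = sum(indicator_counts.values())
    quotas = {z: round(total_sample * c / total) for z, c in indicator_counts.items()}
    print(quotas)   # rounded quotas; adjusted by hand if they do not sum exactly to 500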

Proceedings ArticleDOI
09 Oct 1994
TL;DR: A new "farthest point strategy" (FPS) for progressive image acquisition is presented; it retains uniformity as the sampling density increases, providing an efficient means for sparse image sampling and display.
Abstract: A new method of "farthest point strategy" (FPS) for progressive image acquisition is presented. Its main advantage is in retaining its uniformity with the increased density, providing an efficient means for sparse image sampling and display. In contrast to previously presented stochastic approaches, the FPS guarantees the uniformity in a deterministic min-max sense. Within this uniformity criterion, the sampling points are irregularly spaced, exhibiting superior antialiasing properties. A straightforward modification of the FPS yields an image-dependent adaptive sampling scheme. An efficient, O(N log(N)), algorithm for both versions is introduced, and several applications of the FPS are discussed.
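
A direct implementation of the basic farthest-point rule (naive O(N*k); the paper's O(N log N) algorithm and the image-adaptive variant are not shown, and the grid size and sample count are arbitrary):

    import numpy as np

    h, w, k = 64, 64, 50                        # image size and number of samples (arbitrary)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)

    chosen = [0]                                # start from an arbitrary point
    d = np.linalg.norm(pts - pts[0], axis=1)    # distance of every point to the chosen set
    for _ in range(k - 1):
        nxt = int(d.argmax())                   # farthest point from the current sample set
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(pts - pts[nxt], axis=1))

    print(pts[chosen][:5])                      # first few selected sample locations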

Journal ArticleDOI
TL;DR: Two capture-recapture models that imply that the logit of the probability of capture is an additive function of an animal catchability parameter and a parameter reflecting the sampling effort are considered.
Abstract: We consider two capture-recapture models that imply that the logit of the probability of capture is an additive function of an animal catchability parameter and a parameter reflecting the sampling effort. The models are special cases of the Rasch model, and satisfy the property of quasi-symmetry. One model is log-linear and the other is a latent class model. For the log-linear model, point and interval estimates of the population size are easily obtained using standard software, such as GLIM.
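
For the two-sample case with a plain main-effects Poisson log-linear model, the population estimate is easy to reproduce with standard software; the counts below are invented, and the paper's Rasch-type heterogeneity and latent class structure are not included in this sketch.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # observed capture histories (1,1), (1,0), (0,1) with invented counts
    obs = pd.DataFrame({"s1": [1, 1, 0], "s2": [1, 0, 1], "count": [30, 45, 60]})

    fit = smf.glm("count ~ s1 + s2", data=obs, family=sm.families.Poisson()).fit()
    n00 = fit.predict(pd.DataFrame({"s1": [0], "s2": [0]}))[0]      # unobserved cell
    print("estimated population size:", obs["count"].sum() + n00)  # 135 + 45*60/30 = 225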

Book ChapterDOI
01 Jan 1994
TL;DR: Wide use of digital controllers has placed onto the research agenda the generalization of sliding mode control methodology to discrete-time control systems.
Abstract: Wide use of digital controllers has placed onto the research agenda the generalization of sliding mode control methodology to discrete-time control systems. In the first studies, control algorithms intended for continuous-time systems were applied to discrete-time problems, resulting in chattering, since the switching frequency cannot exceed the sampling frequency. Methods for reducing chattering were then developed in many publications.

Journal ArticleDOI
TL;DR: The paper compares the trace-sampling techniques of set sampling and time sampling using the multi-billion reference traces of A.A. Borg et al. (1990) and applies both techniques to multi-megabyte caches, where sampling is most valuable, to find that set sampling meets the 10% sampling goal, while time sampling does not.
Abstract: The paper compares the trace-sampling techniques of set sampling and time sampling. Using the multi-billion reference traces of A. Borg et al. (1990), we apply both techniques to multi-megabyte caches, where sampling is most valuable. We evaluate whether either technique meets a 10% sampling goal: a method meets this goal if, at least 90% of the time, it estimates the trace's true misses per instruction with ≤10% relative error using ≤10% of the trace. Results for these traces and caches show that set sampling meets the 10% sampling goal, while time sampling does not. We also find that cold-start bias in time samples is most effectively reduced by the technique of D.A. Wood et al. (1991). Nevertheless, overcoming cold-start bias requires tens of millions of consecutive references.
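
A toy version of set sampling (synthetic address trace, a small direct-mapped cache, and miss ratio rather than misses per instruction; nothing here reflects the Borg et al. traces or the paper's cache configurations): simulate only a 10% subset of the cache sets and use the miss ratio of the references that map to them.

    import numpy as np

    rng = np.random.default_rng(7)
    n_sets, block = 1024, 64
    trace = rng.zipf(1.3, size=200_000) * block         # skewed synthetic address stream

    def miss_ratio(trace, sets_to_sim):
        tags, refs, misses = {}, 0, 0                   # direct-mapped: one tag per set
        for addr in trace:
            s = (addr // block) % n_sets
            if s not in sets_to_sim:
                continue
            refs += 1
            tag = addr // (block * n_sets)
            if tags.get(s) != tag:                      # miss: fetch the block
                misses += 1
                tags[s] = tag
        return misses / refs

    sampled_sets = set(rng.choice(n_sets, size=n_sets // 10, replace=False).tolist())
    print("full simulation :", miss_ratio(trace, set(range(n_sets))))
    print("10% of the sets :", miss_ratio(trace, sampled_sets))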

Journal ArticleDOI
TL;DR: The authors show that the iterative algorithm converges if the density of the sampling set exceeds a certain minimum value, which naturally increases with the bandwidth of the data, and they identify the sampling sets for which the convergence rate of the recovery algorithm is maximum or minimum.
Abstract: The authors analyze the performance of an iterative algorithm, similar to the discrete Papoulis-Gerchberg algorithm, which can be used to recover missing samples in finite-length records of band-limited data. No assumptions are made regarding the distribution of the missing samples, in contrast with the often studied extrapolation problem, in which the known samples are grouped together. Indeed, it is possible to regard the observed signal as a sampled version of the original one, and to interpret the reconstruction result studied as a sampling result. The authors show that the iterative algorithm converges if the density of the sampling set exceeds a certain minimum value, which naturally increases with the bandwidth of the data. They give upper and lower bounds for the error as a function of the number of iterations, together with the signals for which the bounds are attained. Also, they analyze the effect of a relaxation constant present in the algorithm on the spectral radius of the iteration matrix. From this analysis they infer the optimum value of the relaxation constant. They also point out, among all sampling sets with the same density, those for which the convergence rate of the recovery algorithm is maximum or minimum. For low-pass signals it turns out that the best convergence rates result when the distances among the missing samples are a multiple of a certain integer. The worst convergence rates generally occur when the missing samples are contiguous.
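
A compact Papoulis-Gerchberg-style iteration on a synthetic low-pass signal (the relaxation constant, bandwidth, and missing-sample density are arbitrary, and the paper's error bounds and optimal-relaxation analysis are not reproduced):

    import numpy as np

    rng = np.random.default_rng(8)
    N, band = 256, 20                           # keep frequencies |f| <= band

    def lowpass(x):                             # projection onto the band-limited signals
        X = np.fft.fft(x)
        X[band + 1:N - band] = 0.0
        return np.fft.ifft(X).real

    x_true = lowpass(rng.normal(size=N))        # band-limited test signal
    known = rng.random(N) > 0.3                 # roughly 30% of the samples are missing
    y = np.where(known, x_true, 0.0)

    mu = 1.0                                    # relaxation constant
    x = y.copy()
    for _ in range(300):
        x = lowpass(x)                          # enforce the band limitation
        x[known] += mu * (y[known] - x[known])  # re-impose the known samples
    print("max reconstruction error:", np.abs(x - x_true).max())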

01 Jan 1994
TL;DR: This document has been subjected to the Agency's peer and administrative review, and it has been approved for publication as an EPA document.
Abstract: It has been subjected to the Agency's peer and administrative review, and it has been approved for publication as an EPA document. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.

13 Jul 1994
TL;DR: In this article, guidelines for the development and evaluation of sampling and analytical methods for airborne contaminants are presented, which include selection of compounds for method development, development of the analytical method, evaluation of the method, and preparation of a technical report.
Abstract: Guidelines were presented for the development and evaluation of sampling and analytical methods for airborne contaminants. Steps included selection of compounds for method development and evaluation, development of the sampling and analytical method, evaluation of the method, preparation of a written version of the method, and preparation of a technical report on the development and evaluation. During method development the analyte was identified, a literature search was performed to determine measurement and evaluation concentration ranges, physical and chemical properties, information about the use of the analyte and method availability. During method evaluation, several steps were performed including the recovery of the analyte from the medium, stability of analyte on the sampling medium, stability of the sampling medium itself, sample generation, capacity of the sampler, sampling and analysis evaluation, pressure drop across the sampler, sample stability, precision, bias, and accuracy. In the documentation stage the final report was developed and the format for presenting the analytical method defined.

Journal ArticleDOI
TL;DR: The results of the study indicate that the two-step method provides pressure data more representative of the midgait method, and different values for pressure and force will be obtained, depending on the method of pressure data collection selected by the clinician.
Abstract: The purpose of this study was to determine if pressure data, collected after taking one step or two steps, were similar to values obtained by using the traditional midgait method. Ten healthy subjects, with a mean age of 27 years, walked across a sensor platform sampling at 70 Hz. Each subject was randomly assigned to take one step, two steps, or multiple steps (midgait method) across the sensor platform. The results of the study indicate that the two-step method, in comparison with the one-step method, provides pressure data more representative of the midgait method, and different values for pressure and force will be obtained, depending on the method of pressure data collection selected by the clinician.

Journal ArticleDOI
01 Jan 1994
TL;DR: In this article, the authors used the species-area curve to link the biological with the geographical. Larger areas of land would seem to contain more species as a result of both the effect of sampling (i.e., more samples are taken to represent larger areas) and ecological processes (i.e., island biogeography theory and hypotheses relating to habitat diversity, successional development, species-energy, target-area, incidence function, small island habitat and disturbance). Unfortunately, the species-area curve is usually interpreted as though it was due entirely to ecological processes when it could be due largely to sampling.
Abstract: The species-area curve has been used to link the biological with the geographical. Larger areas of land would seem to contain more species as a result of both the effect of sampling (i.e., more samples are taken to represent larger areas) and ecological processes (i.e., island biogeography theory and hypotheses relating to habitat diversity, successional development, species-energy, target-area, incidence function, small island habitat and disturbance). Unfortunately, the species-area curve is usually interpreted as though it was due entirely to ecological processes when it could be due largely to sampling. Modelled and real data (for forests in Ghana) demonstrated that while the effect of both ecological processes alone and sampling alone increased species number with area, only ecological processes could be expected to increase the number of species per unit area. These results suggest that before a species-area curve could be used as an indicator of ecological processes, the effect of sampling on the species-area curve must first be removed.
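
The sampling-only effect is easy to reproduce by simulation (the community and its abundance distribution below are invented): drawing more individuals from one fixed community raises the species count even though no ecological process is involved.

    import numpy as np

    rng = np.random.default_rng(9)
    n_species = 300
    abundances = rng.lognormal(mean=0.0, sigma=1.5, size=n_species)
    p = abundances / abundances.sum()           # relative abundance of each species

    for n_individuals in (50, 200, 800, 3200):  # stand-in for increasing area
        sample = rng.choice(n_species, size=n_individuals, p=p)
        print(n_individuals, "individuals ->", len(np.unique(sample)), "species")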

BookDOI
04 Jan 1994
TL;DR: In this book, the authors present examples of sampling for air, water, soils and sediments, and plants and animals, together with a literature survey.
Abstract: Historical Aspects. General Aspects. Examples for Sampling: Air, Water, Soils and Sediments, Plants and Animals. Literature Survey.