Journal ArticleDOI

Large scale wildlife monitoring studies: statistical methods for design and analysis

TL;DR: This article presents basic concepts drawn from actual avian, amphibian, and fish monitoring studies and argues that the estimation of detection probability should be built into the monitoring design through a double-sampling approach.
Abstract: Techniques for estimation of absolute abundance of wildlife populations have received a lot of attention in recent years. The statistical research has been focused on intensive small-scale studies. Recently, however, wildlife biologists have desired to study populations of animals at very large scales for monitoring purposes. Population indices are widely used in these extensive monitoring programs because they are inexpensive compared to estimates of absolute abundance. A crucial underlying assumption is that the population index (C) is directly proportional to the population density (D). The proportionality constant, β, is simply the probability of ‘detection’ for animals in the survey. As spatial and temporal comparisons of indices are crucial, it is necessary to also assume that the probability of detection is constant over space and time. Biologists intuitively recognize this when they design rigid protocols for the studies where the indices are collected. Unfortunately, however, in many field studies the assumption is clearly invalid. We believe that the estimation of detection probability should be built into the monitoring design through a double sampling approach. A large sample of points provides an abundance index, and a smaller sub-sample of the same points is used to estimate detection probability. There is an important need for statistical research on the design and analysis of these complex studies. Some basic concepts based on actual avian, amphibian, and fish monitoring studies are presented in this article. Copyright © 2002 John Wiley & Sons, Ltd.
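The double-sampling logic lends itself to a short numerical illustration. The Python sketch below uses entirely hypothetical numbers (index counts at a large sample of points, plus an intensive sub-sample where the true number present is assumed known) to show how the index C and an estimated detection probability combine into a corrected abundance estimate; it illustrates the idea, not the authors' field protocol.

```python
import numpy as np

# Minimal sketch of the double-sampling idea (hypothetical data, not the
# paper's protocol). A large sample of points yields an index count C;
# a sub-sample of the same points yields an estimate of the detection
# probability beta, so abundance is estimated as C / beta_hat.

rng = np.random.default_rng(42)

n_points = 500                                   # points surveyed for the index
counts = rng.poisson(lam=3.0, size=n_points)     # index counts C_i per point

# Intensive sub-sample: at 50 points, suppose an intensive method (e.g.
# double observers or capture-recapture) yields the true number present,
# so detection probability is estimated as detected / present.
n_sub = 50
present = rng.poisson(lam=6.0, size=n_sub)       # "true" counts at sub-sample
detected = rng.binomial(present, p=0.5)          # what the index method saw

beta_hat = detected.sum() / present.sum()        # estimated detection prob.
index = counts.mean()                            # mean index count per point

# Corrected abundance per point: index divided by detection probability.
print(f"beta_hat = {beta_hat:.2f}, corrected count/point = {index / beta_hat:.2f}")
```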
Citations
More filters
Journal ArticleDOI
01 Aug 2002-Ecology
TL;DR: In this paper, the authors propose a model and likelihood-based method for estimating site occupancy rates when detection probabilities are less than one, illustrated with data on American toads (Bufo americanus) and spring peepers (Pseudacris crucifer).
Abstract: Nondetection of a species at a site does not imply that the species is absent unless the probability of detection is 1. We propose a model and likelihood-based method for estimating site occupancy rates when detection probabilities are <1. Simulation results suggest that the method provides reasonable estimates provided detection probabilities are moderate (>0.3). We estimated site occupancy rates for two anuran species at 32 wetland sites in Maryland, USA, from data collected during 2000 as part of an amphibian monitoring program, Frogwatch USA. Site occupancy rates were estimated as 0.49 for American toads (Bufo americanus), a 44% increase over the proportion of sites at which they were actually observed, and as 0.85 for spring peepers (Pseudacris crucifer), slightly above the observed proportion of 0.83.
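The basic version of this model is compact enough to sketch directly. The following Python code implements the zero-inflated binomial likelihood with constant occupancy psi and detection probability p (no covariates, no missing visits) and fits it to simulated detection histories; the data and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the basic site-occupancy likelihood: constant psi and p.
# Rows of `detections` are sites, columns are K repeat surveys (1 = detected).

def neg_log_lik(params, detections, K):
    psi, p = 1 / (1 + np.exp(-params))            # logit scale -> (0, 1)
    d = detections.sum(axis=1)                    # detections per site
    lik = np.where(
        d > 0,
        psi * p**d * (1 - p)**(K - d),            # occupied, detected d times
        psi * (1 - p)**K + (1 - psi),             # never detected: occupied
    )                                             # but missed, or truly absent
    return -np.log(lik).sum()

rng = np.random.default_rng(1)
K, n_sites = 5, 100
z = rng.binomial(1, 0.6, n_sites)                 # true occupancy states
y = rng.binomial(1, 0.4, (n_sites, K)) * z[:, None]  # detection histories

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y, K))
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(f"psi_hat = {psi_hat:.2f}, p_hat = {p_hat:.2f}")
```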

3,918 citations

Journal ArticleDOI
TL;DR: This paper comments on a number of general issues related to designing occupancy studies, including the need for clear objectives explicitly linked to science or management, selection of sampling units, timing of repeat surveys, and allocation of survey effort; it finds that an optimal removal design will generally be the most efficient.
Abstract: Summary
1. The fraction of sampling units in a landscape where a target species is present (occupancy) is an extensively used concept in ecology. Yet in many applications the species will not always be detected in a sampling unit even when present, resulting in biased estimates of occupancy. Given that sampling units are surveyed repeatedly within a relatively short timeframe, a number of similar methods have now been developed to provide unbiased occupancy estimates. However, practical guidance on the efficient design of occupancy studies has been lacking.
2. In this paper we comment on a number of general issues related to designing occupancy studies, including the need for clear objectives that are explicitly linked to science or management, selection of sampling units, timing of repeat surveys and allocation of survey effort. Advice on the number of repeat surveys per sampling unit is considered in terms of the variance of the occupancy estimator, for three possible study designs.
3. We recommend that sampling units should be surveyed a minimum of three times when detection probability is high (> 0.5 per survey), unless a removal design is used.
4. We found that an optimal removal design will generally be the most efficient, but we suggest it may be less robust to assumption violations than a standard design.
5. Our results suggest that for a rare species it is more efficient to survey more sampling units less intensively, while for a common species fewer sampling units should be surveyed more intensively.
6. Synthesis and applications. Reliable inferences can only result from quality data. To make the best use of logistical resources, study objectives must be clearly defined; sampling units must be selected, and repeated surveys timed appropriately; and a sufficient number of repeated surveys must be conducted. Failure to do so may compromise the integrity of the study. The guidance given here on study design issues is particularly applicable to studies of species occurrence and distribution, habitat selection and modelling, metapopulation studies and monitoring programmes.
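The visits-versus-sites trade-off in points 2 and 5 can be made concrete numerically. The sketch below holds total effort s × K fixed and compares designs using the asymptotic variance approximation for the standard design that this paper reports; the exact expression is taken on trust here and all parameter values are illustrative.

```python
import numpy as np

# Design trade-off sketch: for a fixed budget of s * K surveys, how does
# the variance of the occupancy estimator change with repeat visits K?
# Variance approximation assumed from MacKenzie & Royle (2005):
#   Var(psi_hat) = (psi/s) * [(1-psi) + (1-p*)/(p* - K*p*(1-p)^(K-1))],
# where p* = 1 - (1-p)^K is the chance of at least one detection.

def var_psi_hat(psi, p, K, s):
    p_star = 1 - (1 - p)**K
    return (psi / s) * ((1 - psi)
                        + (1 - p_star) / (p_star - K * p * (1 - p)**(K - 1)))

total_surveys = 600                              # fixed total effort s * K
for K in range(2, 8):
    s = total_surveys // K                       # sites affordable at K visits
    v = var_psi_hat(psi=0.3, p=0.4, K=K, s=s)
    print(f"K = {K}: s = {s:3d} sites, SE(psi_hat) ~ {np.sqrt(v):.3f}")
```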

1,177 citations

Journal ArticleDOI
TL;DR: A method is developed that eliminates the requirement for individual recognition of animals by modelling the underlying process of contact between animals and cameras, reducing the labour cost of estimating wildlife density and potentially making estimation possible where it was not before.
Abstract: Summary
1. Density estimation is of fundamental importance in wildlife management. The use of camera traps to estimate animal density has so far been restricted to capture-recapture analysis of species with individually identifiable markings. This study developed a method that eliminates the requirement for individual recognition of animals by modelling the underlying process of contact between animals and cameras.
2. The model provides a factor that linearly scales trapping rate with density, depending on two key biological variables (average animal group size and day range) and two characteristics of the camera sensor (the distance and angle within which it detects animals).
3. We tested the approach in an enclosed animal park with known abundances of four species, obtaining accurate estimates in three out of four cases. Inaccuracy in the fourth species was because of biased placement of cameras with respect to the distribution of this species.
4. Synthesis and applications. Subject to unbiased camera placement and accurate measurement of model parameters, this method opens the possibility of reduced labour costs for estimating wildlife density and may make estimation possible where it has not been previously. We provide guidelines on the trapping effort required to obtain reasonably precise estimates.
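The scaling in point 2 comes from an "ideal gas" encounter model. Assuming the published form of that scaling, D = (y/t) × π / (v r (2 + θ)) before any group-size adjustment, a density estimate falls out directly, as sketched below with hypothetical numbers.

```python
import math

# Sketch of the gas-model scaling behind the camera-trap method above:
# trapping rate y/t is scaled to density by animal day range v and the
# camera's detection radius r and arc theta (form assumed from the paper):
#   D = (y / t) * pi / (v * r * (2 + theta))

def rem_density(y, t, v, r, theta):
    """Density from y photographs in t camera-days, day range v (km/day),
    detection radius r (km), and detection arc theta (radians)."""
    return (y / t) * math.pi / (v * r * (2 + theta))

# Hypothetical survey: 120 photographs over 1000 camera-days, day range
# 4 km/day, sensor radius 10 m, detection arc 0.175 rad (~10 degrees).
d = rem_density(y=120, t=1000, v=4.0, r=0.010, theta=0.175)
print(f"estimated density ~ {d:.1f} animals per km^2")
```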

658 citations

Journal ArticleDOI
TL;DR: A simple mechanistic model for predicting tiger density as a function of prey density is developed and provides evidence of a functional relationship between abundances of large carnivores and their prey under a wide range of ecological conditions.
Abstract: The goal of ecology is to understand interactions that determine the distribution and abundance of organisms. In principle, ecologists should be able to identify a small number of limiting resources for a species of interest, estimate densities of these resources at different locations across the landscape, and then use these estimates to predict the density of the focal species at these locations. In practice, however, development of functional relationships between abundances of species and their resources has proven extremely difficult, and examples of such predictive ability are very rare. Ecological studies of prey requirements of tigers Panthera tigris led us to develop a simple mechanistic model for predicting tiger density as a function of prey density. We tested our model using data from a landscape-scale long-term (1995–2003) field study that estimated tiger and prey densities in 11 ecologically diverse sites across India. We used field techniques and analytical methods that specifically addressed sampling and detectability, two issues that frequently present problems in macroecological studies of animal populations. Estimated densities of ungulate prey ranged between 5.3 and 63.8 animals per km2. Estimated tiger densities (3.2–16.8 tigers per 100 km2) were reasonably consistent with model predictions. The results provide evidence of a functional relationship between abundances of large carnivores and their prey under a wide range of ecological conditions. In addition to generating important insights into carnivore ecology and conservation, the study provides a potentially useful model for the rigorous conduct of macroecological science.

580 citations

References
Book
19 Jun 2013
TL;DR: The second edition of this book is unique in that it focuses on methods for making formal statistical inference from all the models in an a priori set (Multi-Model Inference).
Abstract: Contents: Introduction; Information and Likelihood Theory: A Basis for Model Selection and Inference; Basic Use of the Information-Theoretic Approach; Formal Inference From More Than One Model: Multi-Model Inference (MMI); Monte Carlo Insights and Extended Examples; Statistical Theory and Numerical Results; Summary.
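The core multi-model machinery the book develops is compact enough to sketch: AIC for each candidate model, delta-AIC relative to the best model, and Akaike weights over the a priori model set. The Python snippet below uses hypothetical log-likelihoods and parameter counts.

```python
import numpy as np

# Akaike weights over an a priori model set:
#   AIC_i = -2 log L_i + 2 k_i,  delta_i = AIC_i - min AIC,
#   w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2).

def akaike_weights(log_liks, n_params):
    aic = -2 * np.asarray(log_liks) + 2 * np.asarray(n_params)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return aic, delta, w / w.sum()

# Hypothetical fits of three candidate models:
aic, delta, w = akaike_weights(log_liks=[-120.3, -118.9, -118.5],
                               n_params=[2, 4, 6])
for i, (a, d, wi) in enumerate(zip(aic, delta, w), start=1):
    print(f"model {i}: AIC = {a:.1f}, delta = {d:.1f}, weight = {wi:.2f}")
```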

36,993 citations


"Large scale wildlife monitoring stu..." refers background in this paper

  • ...The recent book by Burnham and Anderson (1998) on model selection is very important....


Journal ArticleDOI
TL;DR: MARK, as discussed by the authors, provides parameter estimates from marked animals when they are re-encountered at a later time as dead recoveries, or live recaptures or re-sightings.
Abstract: MARK provides parameter estimates from marked animals when they are re-encountered at a later time as dead recoveries, or live recaptures or re-sightings. The time intervals between re-encounters do not have to be equal. More than one attribute group of animals can be modelled. The basic input to MARK is the encounter history for each animal. MARK can also estimate the size of closed populations. Parameters can be constrained to be the same across re-encounter occasions, or by age, or group, using the parameter index matrix. A set of common models for initial screening of data are provided. Time effects, group effects, time x group effects and a null model of none of the above, are provided for each parameter. Besides the logit function to link the design matrix to the parameters of the model, other link functions include the log-log, complementary log-log, sine, log, and identity. The estimates of model parameters are computed via numerical maximum likelihood techniques. The number of parameters that are...
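The link functions listed above all map a linear predictor eta = X @ beta from the design matrix onto the probability scale. The sketch below implements the inverse links; MARK's exact sine-link convention, real = (sin(eta) + 1)/2, is an assumption here rather than something stated in the abstract.

```python
import numpy as np

# Inverse link functions mapping a linear predictor eta to (0, 1).
def logit_inv(eta):   return 1 / (1 + np.exp(-eta))
def loglog_inv(eta):  return np.exp(-np.exp(-eta))       # log-log
def cloglog_inv(eta): return 1 - np.exp(-np.exp(eta))    # complementary log-log
def sine_inv(eta):    return (np.sin(eta) + 1) / 2       # assumed convention;
                                                         # eta in [-pi/2, pi/2]
eta = np.array([-1.0, 0.0, 1.0])
for name, f in [("logit", logit_inv), ("log-log", loglog_inv),
                ("cloglog", cloglog_inv), ("sine", sine_inv)]:
    print(name, np.round(f(eta), 3))
```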

7,128 citations

Book
01 Nov 1998
TL;DR: Contents include information theory and log-likelihood models as a basis for model selection and inference; practical use of the information-theoretic approach; model selection uncertainty, with examples; Monte Carlo insights and extended examples; and statistical theory.
Abstract: Contents: Information theory and log-likelihood models: a basis for model selection and inference; Practical use of the information-theoretic approach; Model selection uncertainty, with examples; Monte Carlo insights and extended examples; Statistical theory.

4,340 citations

Journal ArticleDOI
TL;DR: A recent survey of capture-recapture models can be found in this article, with an emphasis on flexibility in modeling, model selection, and the analysis of multiple data sets.
Abstract: The understanding of the dynamics of animal populations and of related ecological and evolutionary issues frequently depends on a direct analysis of life history parameters. For instance, examination of trade-offs between reproduction and survival usually relies on individually marked animals, for which the exact time of death is most often unknown, because marked individuals cannot be followed closely through time. Thus, the quantitative analysis of survival studies and experiments must be based on capture-recapture (or resighting) models which consider, besides the parameters of primary interest, recapture or resighting rates that are nuisance parameters. Capture-recapture models oriented to estimation of survival rates are the result of a recent change in emphasis from earlier approaches in which population size was the most important parameter, survival rates having been first introduced as nuisance parameters. This emphasis on survival rates in capture-recapture models developed rapidly in the 1980s and used as a basic structure the Cormack-Jolly-Seber survival model applied to a homogeneous group of animals, with various kinds of constraints on the model parameters. These approaches are conditional on first captures; hence they do not attempt to model the initial capture of unmarked animals as functions of population abundance in addition to survival and capture probabilities. This paper synthesizes, using a common framework, these recent developments together with new ones, with an emphasis on flexibility in modeling, model selection, and the analysis of multiple data sets. The effects on survival and capture rates of time, age, and categorical variables characterizing the individuals (e.g., sex) can be considered, as well as interactions between such effects. This "analysis of variance" philosophy emphasizes the structure of the survival and capture process rather than the technical characteristics of any particular model. The flexible array of models encompassed in this synthesis uses a common notation. As a result of the great level of flexibility and relevance achieved, the focus is changed from fitting a particular model to model building and model selection. The following procedure is recommended: (1) start from a global model compatible with the biology of the species studied and with the design of the study, and assess its fit; (2) select a more parsimonious model using Akaike's Information Criterion to limit the number of formal tests; (3) test for the most important biological questions by comparing this model with neighboring ones using likelihood ratio tests; and (4) obtain maximum likelihood estimates of model parameters with estimates of precision. Computer software is critical, as few of the models now available have parameter estimators that are in closed form. A comprehensive table of existing computer software is provided. We used RELEASE for data summary and goodness-of-fit tests and SURGE for iterative model fitting and the computation of likelihood ratio tests. Five increasingly complex examples are given to illustrate the theory. The first, using two data sets on the European Dipper (Cinclus cinclus), tests for sex-specific parameters,
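In its simplest constant-parameter form (phi(.) p(.)), the Cormack-Jolly-Seber structure described here reduces to a product of cell probabilities per encounter history, conditional on first capture. The Python sketch below evaluates that log-likelihood on toy histories; it is an illustration of the model structure, not the paper's software (RELEASE/SURGE).

```python
import numpy as np

# CJS log-likelihood, conditional on first capture, with constant survival
# phi and recapture probability p. chi[t] is the probability of never being
# seen after occasion t given the animal is alive at t:
#   chi[T-1] = 1,  chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1].

def cjs_log_lik(phi, p, histories):
    """histories: 0/1 array of encounter histories, shape (n, T)."""
    T = histories.shape[1]
    chi = np.ones(T)
    for t in range(T - 2, -1, -1):
        chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1]
    ll = 0.0
    for h in histories:
        seen = np.flatnonzero(h)
        f, l = seen[0], seen[-1]
        for t in range(f + 1, l + 1):            # between first & last capture
            ll += np.log(phi)                    # survived occasion t-1 -> t
            ll += np.log(p if h[t] else 1 - p)   # seen or missed at t
        ll += np.log(chi[l])                     # never seen after occasion l
    return ll

# Toy histories over 5 occasions (each row starts at its first capture):
histories = np.array([[1,0,1,1,0], [1,1,0,0,0], [1,0,0,1,1], [1,1,1,0,0]])
print(f"log-likelihood at phi=0.7, p=0.5: {cjs_log_lik(0.7, 0.5, histories):.3f}")
```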

4,038 citations

01 Jan 2010
TL;DR: This paper synthesizes, using a common framework, recent developments of capture-recapture models oriented to estimation of survival rates together with new ones, with an emphasis on flexibility in modeling, model selection, and the analysis of multiple data sets.

4,011 citations


Additional excerpts

  • ...An important reference is Lebreton et al. (1992)....
