Papers
TL;DR: Modelling approaches are explored that aim to minimize extrapolation errors and assess predictions against prior biological knowledge to promote methods appropriate to range-shifting species.
Abstract:
1. Species are shifting their ranges at an unprecedented rate through human transportation and environmental change. Correlative species distribution models (SDMs) are frequently applied for predicting potential future distributions of range-shifting species, despite these models’ assumptions that species are at equilibrium with the environments used to train (fit) the models, and that the training data are representative of conditions to which the models are predicted. Here we explore modelling approaches that aim to minimize extrapolation errors and assess predictions against prior biological knowledge. Our aim was to promote methods appropriate to range-shifting species.
2. We use an invasive species, the cane toad in Australia, as an example, predicting potential distributions under both current and climate change scenarios. We use four SDM methods, and trial weighting schemes and choice of background samples appropriate for species in a state of spread. We also test two methods for including information from a mechanistic model. Throughout, we explore graphical techniques for understanding model behaviour and reliability, including the extent of extrapolation.
3. Predictions varied with modelling method and data treatment, particularly with regard to the use and treatment of absence data. Models that performed similarly under current climatic conditions deviated widely when transferred to a novel climatic scenario.
4. The results highlight problems with using SDMs for extrapolation, and demonstrate the need for methods and tools to understand models and predictions. We have made progress in this direction and have implemented exploratory techniques as new options in the free modelling software, MaxEnt. Our results also show that deliberately controlling the fit of models and integrating information from mechanistic models can enhance the reliability of correlative predictions of species in non-equilibrium and novel settings.
5. Implications. The biodiversity of many regions in the world is experiencing novel threats created by species invasions and climate change. Predictions of future species distributions are required for management, but there are acknowledged problems with many current methods, and relatively few advances in techniques for understanding or overcoming these. The methods presented in this manuscript and made accessible in MaxEnt provide a forward step.
2,013 citations
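The extrapolation problem this abstract raises can be illustrated with a deliberately minimal check: flag prediction sites whose environmental values fall outside the range seen in the training data. This is only a sketch under our own simplifying assumptions (the exploratory tools the authors describe are more graded than a binary in/out test); the function name and arrays are illustrative, not from the paper.

```python
import numpy as np

def extrapolation_flags(train_env, predict_env):
    """Flag prediction sites where any environmental variable falls
    outside the range observed in the training data.
    Both arrays have shape (n_sites, n_variables)."""
    lo = train_env.min(axis=0)   # per-variable training minimum
    hi = train_env.max(axis=0)   # per-variable training maximum
    return ((predict_env < lo) | (predict_env > hi)).any(axis=1)
```

Sites flagged `True` are ones where a correlative model is extrapolating beyond its training envelope, so its predictions there deserve extra scrutiny.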
01 Jan 2003
TL;DR: This chapter overviews some of the recent work on boosting including analyses of AdaBoost's training error and generalization error; boosting's connection to game theory and linear programming; the relationship between boosting and logistic regression; extensions of AdaBoost for multiclass classification problems; methods of incorporating human knowledge into boosting; and experimental and applied work using boosting.
Abstract: Boosting is a general method for improving the accuracy of any given learning algorithm. Focusing primarily on the AdaBoost algorithm, this chapter overviews some of the recent work on boosting including analyses of AdaBoost’s training error and generalization error; boosting’s connection to game theory and linear programming; the relationship between boosting and logistic regression; extensions of AdaBoost for multiclass classification problems; methods of incorporating human knowledge into boosting; and experimental and applied work using boosting.
1,979 citations
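The training-error analysis summarized above rests on AdaBoost's reweighting loop: each round, fit a weak learner to the weighted data, weight it by its edge over random guessing, and upweight the examples it got wrong. A textbook sketch with exhaustive decision stumps on a single feature (our own toy setup, not the chapter's code):

```python
import numpy as np

def adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost with threshold stumps on a 1-D feature.
    X: shape (n,); y: labels in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)                     # example weights
    stumps = []                                 # (threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        # exhaustively search thresholds and polarities for lowest weighted error
        for thr in X:
            for pol in (+1, -1):
                pred = np.where(X >= thr, pol, -pol)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, pol, pred)
        err, thr, pol, pred = best
        err = max(err, 1e-10)                   # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)   # weak learner's vote
        w *= np.exp(-alpha * y * pred)          # upweight mistakes
        w /= w.sum()
        stumps.append((thr, pol, alpha))
    return stumps

def predict(stumps, X):
    score = sum(a * np.where(X >= t, p, -p) for t, p, a in stumps)
    return np.sign(score)
```

On an interval-shaped labeling that no single stump can fit, a few boosting rounds drive the training error to zero, which is exactly the behavior the chapter's training-error bound explains.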
01 Jan 1999
TL;DR: It is shown that using multiple transmit antennas and space-time block coding provides remarkable performance at the expense of almost no extra processing.
Abstract: We document the performance of space-time block codes, which provide a new paradigm for transmission over Rayleigh fading channels using multiple transmit antennas. Data is encoded using a space-time block code, and the encoded data is split into n streams which are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. Maximum likelihood decoding is achieved in a simple way through decoupling of the signals transmitted from different antennas rather than joint detection. This uses the orthogonal structure of the space-time block code and gives a maximum likelihood decoding algorithm which is based only on linear processing at the receiver. We review the encoding and decoding algorithms for various codes and provide simulation results demonstrating their performance. It is shown that using multiple transmit antennas and space-time block coding provides remarkable performance at the expense of almost no extra processing.
1,958 citations
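The "linear processing at the receiver" the abstract describes can be illustrated with the simplest space-time block code, the two-antenna Alamouti scheme: transmit (s1, s2) in the first slot and (-s2*, s1*) in the second, then decouple the symbols with conjugate combining. A noiseless sketch (function names ours, channel values illustrative):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Rows = time slots, columns = transmit antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear ML combining for one receive antenna with flat
    channel gains h1, h2 held constant over both slots.
    Yields (|h1|^2 + |h2|^2) * s_i for each symbol (plus noise)."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat
```

Because the code matrix is orthogonal, the cross-terms cancel and each symbol estimate depends only on its own transmitted symbol, which is why maximum-likelihood detection reduces to per-symbol linear processing.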
04 Jul 2004
TL;DR: This work proposes the use of maximum-entropy techniques for this problem, specifically, sequential-update algorithms that can handle a very large number of features, and investigates the interpretability of models constructed using maxent.
</TL;DR: line fixed>
Abstract: We study the problem of modeling species geographic distributions, a critical problem in conservation biology. We propose the use of maximum-entropy techniques for this problem, specifically, sequential-update algorithms that can handle a very large number of features. We describe experiments comparing maxent with a standard distribution-modeling tool, called GARP, on a dataset containing observation data for North American breeding birds. We also study how well maxent performs as a function of the number of training examples and training time, analyze the use of regularization to avoid overfitting when the number of examples is small, and explore the interpretability of models constructed using maxent.
1,956 citations
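The maxent idea here is to fit a Gibbs distribution over grid cells, p(x) ∝ exp(λ·f(x)), so that the model's expected feature values match the empirical means at the presence sites. A gradient-ascent sketch over discrete cells, with a simple L2 penalty standing in for the paper's regularization (all names and the toy setup are ours, not the paper's sequential-update algorithm):

```python
import numpy as np

def fit_maxent(F, presence_idx, l2=0.1, lr=0.5, steps=500):
    """Toy maxent over discrete cells.
    F: (n_cells, n_features) environmental features;
    presence_idx: indices of cells with species observations.
    Returns (lam, p) with p(x) proportional to exp(F @ lam)."""
    target = F[presence_idx].mean(axis=0)   # empirical feature means
    lam = np.zeros(F.shape[1])
    for _ in range(steps):
        logits = F @ lam
        p = np.exp(logits - logits.max())
        p /= p.sum()                        # model distribution over cells
        # ascend the regularized log-likelihood: match feature expectations
        grad = target - F.T @ p - l2 * lam
        lam += lr * grad
    return lam, p
```

With presences concentrated where the feature is high, the fitted weight comes out positive and the model puts more probability on high-feature cells; the regularizer plays the overfitting-control role the abstract highlights for small sample sizes.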
TL;DR: A general method for combining the classifiers generated on the binary problems is proposed, and a general empirical multiclass loss bound is proved given the empirical loss of the individual binary learning algorithms.
Abstract: We present a unifying framework for studying the solution of multiclass categorization problems by reducing them to multiple binary problems that are then solved using a margin-based binary learning algorithm. The proposed framework unifies some of the most popular approaches in which each class is compared against all others, or in which all pairs of classes are compared to each other, or in which output codes with error-correcting properties are used. We propose a general method for combining the classifiers generated on the binary problems, and we prove a general empirical multiclass loss bound given the empirical loss of the individual binary learning algorithms. The scheme and the corresponding bounds apply to many popular classification learning algorithms including support-vector machines, AdaBoost, regression, logistic regression and decision-tree algorithms. We also give a multiclass generalization error analysis for general output codes with AdaBoost as the binary learner. Experimental results with SVM and AdaBoost show that our scheme provides a viable alternative to the most commonly used multiclass algorithms.
1,949 citations
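The reduction the abstract describes — train one margin-based binary learner per column of a code matrix, then pick the class whose codeword best matches the real-valued outputs — is easy to sketch for the one-vs-all code with a small logistic learner. This is one illustrative instance of the framework, not the paper's general scheme (no error-correcting codes, no loss bound), and the names are ours:

```python
import numpy as np

def train_logreg(X, y, lr=0.2, steps=2000):
    """Plain gradient descent on logistic loss; y in {-1, +1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        margins = y * (Xb @ w)
        grad = -(Xb * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)
        w -= lr * grad
    return w

def one_vs_all(X, labels, classes):
    """One binary problem per class: class k vs. the rest."""
    return [train_logreg(X, np.where(labels == k, 1.0, -1.0)) for k in classes]

def decode(models, X, classes):
    """Combine binary outputs by taking the largest real-valued margin."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    scores = np.stack([Xb @ w for w in models], axis=1)
    return np.array(classes)[scores.argmax(axis=1)]
```

Decoding on margins rather than hard labels is the key point the paper's analysis exploits: the binary learners' confidences, not just their signs, drive the multiclass decision.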
Authors
Showing all 1881 results
Name | H-index | Papers | Citations |
---|---|---|---|
Yoshua Bengio | 202 | 1033 | 420313 |
Scott Shenker | 150 | 454 | 118017 |
Paul Shala Henry | 137 | 318 | 35971 |
Peter Stone | 130 | 1229 | 79713 |
Yann LeCun | 121 | 369 | 171211 |
Louis E. Brus | 113 | 347 | 63052 |
Jennifer Rexford | 102 | 394 | 45277 |
Andreas F. Molisch | 96 | 777 | 47530 |
Vern Paxson | 93 | 267 | 48382 |
Lorrie Faith Cranor | 92 | 326 | 28728 |
Ward Whitt | 89 | 424 | 29938 |
Lawrence R. Rabiner | 88 | 378 | 70445 |
Thomas E. Graedel | 86 | 348 | 27860 |
William W. Cohen | 85 | 384 | 31495 |
Michael K. Reiter | 84 | 380 | 30267 |