Institution

University of Oklahoma

Education · Norman, Oklahoma, United States
About: University of Oklahoma is an education organization based in Norman, Oklahoma, United States. It is known for research contributions in the topics of Population & Radar. The organization has 25,269 authors who have published 52,609 publications receiving 1,821,706 citations. The organization is also known as OU and Oklahoma University.


Papers
Journal Article
TL;DR: This work proposes several strategies for utilizing external data (such as might be obtained using GIS) to aid in the completion of species lists, and demonstrates the potential of these approaches using simulation and case studies from Oklahoma.
Abstract: A substantial body of literature has accumulated on the topic of the estimation of species richness by extrapolation. However, most of these methods rely on an objective sampling of nature. This condition is difficult to meet and seldom achieved for large regions. Furthermore, scientists conducting biological surveys often already have preliminary but subjectively gathered species lists, and would like to assess the completeness of such lists, and/or to find a way to perfect them. We propose several strategies for utilizing external data (such as might be obtained using GIS) to aid in the completion of species lists. These include: (i) using existing species lists to develop predictive models; (ii) using the uniqueness of the environment as a guide to find underrepresented species; (iii) using spectral heterogeneity to locate environmentally heterogeneous regions; (iv) combining surveys with statistical model-building in an iterative manner. We demonstrate the potential of these approaches using simulation and case studies from Oklahoma. Copyright © 2002 John Wiley & Sons, Ltd.

342 citations
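The strategies above extend classical extrapolation-based richness estimation. For context only, here is a minimal sketch of one such classical estimator, a bias-corrected Chao2 incidence estimator, which is a common way to gauge how complete a species list is; the function name and survey data are hypothetical, and this is not the authors' GIS-based method.

```python
from collections import Counter

def chao2_richness(incidence):
    """Chao2 (bias-corrected) lower-bound estimate of species richness.

    incidence: mapping of species name -> number of sampling units in
    which the species was recorded (presence/absence data).
    """
    s_obs = sum(1 for n in incidence.values() if n > 0)   # observed species
    q1 = sum(1 for n in incidence.values() if n == 1)     # uniques
    q2 = sum(1 for n in incidence.values() if n == 2)     # duplicates
    # Bias-corrected form avoids division by zero when q2 == 0.
    return s_obs + q1 * (q1 - 1) / (2 * (q2 + 1))

# Hypothetical survey: species -> number of plots where it was seen
records = Counter({"A": 5, "B": 1, "C": 2, "D": 1, "E": 3, "F": 1})
estimate = chao2_richness(records)
completeness = len(records) / estimate
print(f"Observed {len(records)}, estimated {estimate:.1f}, "
      f"list ~{completeness:.0%} complete")
```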

Journal Article
TL;DR: In this article, the authors review seven of the most reliable indicators used for deriving distances to galaxies as far away as 100 Mpc: globular-cluster luminosity functions (GCLF), novae, Type Ia supernovae (SN Ia), H I-line-width relations, planetary-nebula luminosity functions (PNLF), surface-brightness fluctuations (SBF), and fundamental-plane relationships for elliptical galaxies (Dn - sigma).
Abstract: We review seven of the most reliable indicators used for deriving distances to galaxies as far away as 100 Mpc: globular-cluster luminosity functions (GCLF), novae, Type Ia supernovae (SN Ia), H I-line-width relations, planetary-nebula luminosity functions (PNLF), surface-brightness fluctuations (SBF), and fundamental-plane relationships for elliptical galaxies (Dn - sigma). In addition, we examine the use of Cepheid variables since these serve to set zero points for most of the methods. We pay particular attention to the uncertainties inherent in these methods, both internal and external. We then test these uncertainties by comparing distances derived with each technique to distances derived from surface-brightness fluctuations. We find that there are small systematic offsets between the PNLF, GCLF, and SBF methods, with the PNLF and GCLF distances being on average 6% and 13% larger than those of the SBF method. The dispersion between the PNLF and SBF distances is 8%; the GCLF-SBF dispersion is 16%, the SN Ia-SBF dispersion is 28%, the Dn-sigma-SBF dispersion is 26%, and the Tully-Fisher-SBF dispersion is 32%. The latter value drops to 14%, however, when one considers only well-mixed groups, suggesting that the spiral galaxies measured with Tully-Fisher are not always spatially coincident with the groups' elliptical galaxies. In the mean, all the methods agree extremely well. We also present a summary of distances to the Virgo Cluster. Weighted and unweighted averages of the seven methods yield Virgo distances of 16 ± 1.7 and 17.6 ± 2.2 Mpc, respectively. The overlap among all the indicators is well within the expected accuracies of the methods. Using the weighted or unweighted Virgo distances to bootstrap to the Coma Cluster, we find the Hubble constant to be either 80 ± 11 or 73 ± 11 km s-1 Mpc-1, respectively.

342 citations
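To illustrate the averaging and bootstrap step described above, here is a minimal sketch of an inverse-variance weighted mean over per-method distance estimates, followed by a Hubble-constant estimate from an assumed Coma recession velocity and Coma/Virgo distance ratio; every number below is a placeholder, not a value tabulated in the paper.

```python
import numpy as np

# Hypothetical per-method Virgo distances (Mpc) and 1-sigma errors;
# the paper's actual values differ, these only illustrate the averaging.
d = np.array([15.0, 16.1, 15.4, 17.0, 16.8, 15.8, 17.5])
sigma = np.array([1.2, 1.5, 1.0, 2.5, 2.0, 1.8, 2.6])

w = 1.0 / sigma**2
d_weighted = np.sum(w * d) / np.sum(w)       # inverse-variance weighted mean
sigma_weighted = np.sqrt(1.0 / np.sum(w))
d_unweighted = d.mean()

# "Bootstrapping" to Coma: scale by an assumed Coma/Virgo relative distance,
# then H0 = recession velocity / distance (velocity in km/s).
coma_over_virgo = 5.8        # assumed relative distance ratio
v_coma = 7200.0              # assumed recession velocity, km/s
H0 = v_coma / (d_weighted * coma_over_virgo)
print(f"Virgo: {d_weighted:.1f} +/- {sigma_weighted:.1f} Mpc (weighted), "
      f"{d_unweighted:.1f} Mpc (unweighted); H0 ~ {H0:.0f} km/s/Mpc")
```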

Journal Article
TL;DR: The purpose of this tutorial is to describe procedures for estimating sample size for a variety of different experimental designs that are common in strength and conditioning research, using the G*Power software package.
Abstract: The statistical power, or sensitivity of an experiment, is defined as the probability of rejecting a false null hypothesis. Only 3 factors can affect statistical power: (a) the significance level (α), (b) the magnitude or size of the treatment effect (effect size), and (c) the sample size (n). Of these 3 factors, only the sample size can be manipulated by the investigator because the significance level is usually selected before the study, and the effect size is determined by the effectiveness of the treatment. Thus, selection of an appropriate sample size is one of the most important components of research design but is often misunderstood by beginning researchers. The purpose of this tutorial is to describe procedures for estimating sample size for a variety of different experimental designs that are common in strength and conditioning research. Emphasis is placed on selecting an appropriate effect size because this step fully determines sample size when power and the significance level are fixed. There are many different software packages that can be used for sample size estimation. However, I chose to describe the procedures for the G*Power software package (version 3.1.4) because this software is freely downloadable and capable of estimating sample size for many of the different statistical tests used in strength and conditioning research. Furthermore, G*Power provides a number of different auxiliary features that can be useful for researchers when designing studies. It is my hope that the procedures described in this article will be beneficial for researchers in the field of strength and conditioning.

341 citations
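Because G*Power is a GUI application, a programmatic analogue of the kind of a priori sample-size calculation the tutorial walks through can be sketched with the statsmodels power module; the chosen effect size, alpha, and power below are illustrative assumptions, not values from the article.

```python
# Sketch of an a priori power analysis: solve for the per-group sample size
# of an independent-samples t-test. statsmodels stands in here because
# G*Power itself is a GUI application rather than a library.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # assumed medium effect (Cohen's d)
                                   alpha=0.05,        # significance level
                                   power=0.80,        # desired power
                                   ratio=1.0,         # equal group sizes
                                   alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.0f}")  # ~64
```

For these inputs the result rounds up to 64 participants per group, which is the same answer an a priori calculation in G*Power reports for a two-tailed independent-samples t-test.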

Book
04 Sep 2006
TL;DR: In this paper, the authors present a broad introduction to the history, development and philosophy of data assimilation, illustrated by examples, both linear and nonlinear, and a set of exercises with instructive hints.
Abstract: Dynamic data assimilation is the assessment, combination and synthesis of observational data, scientific laws and mathematical models to determine the state of a complex physical system, for instance as a preliminary step in making predictions about the system's behaviour. The topic has assumed increasing importance in fields such as numerical weather prediction where conscientious efforts are being made to extend the term of reliable weather forecasts beyond the few days that are presently feasible. This book is designed to be a basic one-stop reference for graduate students and researchers. It is based on graduate courses taught over a decade to mathematicians, scientists, and engineers, and its modular structure accommodates the various audience requirements. Thus Part I is a broad introduction to the history, development and philosophy of data assimilation, illustrated by examples; Part II considers the classical, static approaches, both linear and nonlinear; and Part III describes computational techniques. Parts IV to VII are concerned with how statistical and dynamic ideas can be incorporated into the classical framework. Key themes covered here include estimation theory, stochastic and dynamic models, and sequential filtering. The final part addresses the predictability of dynamical systems. Chapters end with a section that provides pointers to the literature, and a set of exercises with instructive hints.

341 citations
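As a taste of the sequential filtering developed in the later parts of the book, here is a minimal scalar Kalman filter sketch, assuming a random-walk model and synthetic observations; it is a generic illustration, not an example taken from the text.

```python
import numpy as np

# Minimal scalar Kalman filter: a random-walk state observed with noise.
# The book treats the full multivariate, nonlinear, and variational machinery.
rng = np.random.default_rng(0)
q, r = 0.1, 0.5          # model-error and observation-error variances (assumed)
truth = np.cumsum(rng.normal(0, np.sqrt(q), 50))    # hidden state trajectory
obs = truth + rng.normal(0, np.sqrt(r), 50)         # noisy observations

x, p = 0.0, 1.0          # initial state estimate and its variance
for y in obs:
    # Forecast step: persistence model, variance grows by the model error.
    x_f, p_f = x, p + q
    # Analysis step: blend forecast and observation according to their variances.
    k = p_f / (p_f + r)                 # Kalman gain
    x = x_f + k * (y - x_f)
    p = (1 - k) * p_f
print(f"final analysis {x:.2f}, truth {truth[-1]:.2f}, analysis variance {p:.3f}")
```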

Journal Article
TL;DR: In this paper, the authors summarized the recent progress and discussed some of the challenges for future advancement in the use of high-resolution numerical weather prediction (NWP) for nowcasting.
Abstract: Traditionally, the nowcasting of precipitation was conducted to a large extent by means of extrapolation of observations, especially of radar reflectivity. In recent years, the blending of traditional extrapolation-based techniques with high-resolution numerical weather prediction (NWP) has been gaining popularity in the nowcasting community. The increased need for NWP products in nowcasting applications poses great challenges to the NWP community, because the nowcasting application of high-resolution NWP places higher requirements on the quality and content of the initial conditions than longer-range NWP does. Considerable progress has been made in the use of NWP for nowcasting thanks to the increase in computational resources, the advancement of high-resolution data assimilation techniques, and the improvement of convective-permitting numerical modeling. This paper summarizes the recent progress and discusses some of the challenges for future advancement.

341 citations
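A common way to blend extrapolation with NWP, as discussed above, is to weight the two forecasts as a function of lead time, trusting extrapolation early and NWP later. The sketch below assumes an exponential weighting and hypothetical precipitation fields; it is a generic illustration, not the specific scheme reviewed in the paper.

```python
import numpy as np

def blended_nowcast(extrap, nwp, lead_minutes, tau=120.0):
    """Blend an extrapolation-based precipitation field with an NWP forecast.

    extrap, nwp : 2-D precipitation fields on the same grid for this lead time
    tau         : e-folding time (minutes) of the extrapolation weight; the
                  value and the exponential form are assumptions for this
                  sketch, not parameters from the paper.
    """
    w = np.exp(-lead_minutes / tau)      # weight on the extrapolation forecast
    return w * extrap + (1.0 - w) * nwp

# Hypothetical 100 x 100 fields of precipitation rate
extrap = np.random.gamma(2.0, 1.0, (100, 100))
nwp = np.random.gamma(2.0, 1.2, (100, 100))
forecast_3h = blended_nowcast(extrap, nwp, lead_minutes=180)
```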


Authors

Showing all 25490 results

Name | H-index | Papers | Citations
Ronald C. Kessler | 274 | 1,332 | 328,983
Michael A. Strauss | 185 | 1,688 | 208,506
Derek R. Lovley | 168 | 582 | 95,315
Ashok Kumar | 151 | 5,654 | 164,086
Peter J. Schwartz | 147 | 647 | 107,695
Peter Buchholz | 143 | 1,181 | 92,101
Robert Hirosky | 139 | 1,697 | 106,626
Elizabeth Barrett-Connor | 138 | 793 | 73,241
Brad Abbott | 137 | 1,566 | 98,604
Lihong V. Wang | 136 | 1,118 | 72,482
Itsuo Nakano | 135 | 1,539 | 97,905
Phillip Gutierrez | 133 | 1,391 | 96,205
P. Skubic | 133 | 1,573 | 97,343
Elizaveta Shabalina | 133 | 1,421 | 92,273
Richard Brenner | 133 | 1,108 | 87,426
Network Information
Related Institutions (5)

University of Texas at Austin: 206.2K papers, 9M citations, 95% related
Pennsylvania State University: 196.8K papers, 8.3M citations, 94% related
University of Washington: 305.5K papers, 17.7M citations, 93% related
University of Southern California: 169.9K papers, 7.8M citations, 92% related
University of Minnesota: 257.9K papers, 11.9M citations, 92% related

Performance Metrics
No. of papers from the Institution in previous years

Year | Papers
2023 | 92
2022 | 348
2021 | 2,425
2020 | 2,481
2019 | 2,433
2018 | 2,396