Journal ISSN: 0169-7161

Handbook of Statistics 

Elsevier BV
About: Handbook of Statistics is an academic journal published by Elsevier BV. The journal publishes mainly in the areas of Estimator and Population. It has the ISSN identifier 0169-7161. Over its lifetime, it has published 909 papers, which have received 25,085 citations.


Papers
Book Chapter
TL;DR: The Black-Scholes model predicts a flat term structure of volatilities, whereas in reality the term structure of at-the-money implied volatilities is typically upward sloping when short-term volatilities are low and downward sloping when they are high.
Abstract: Publisher Summary The class of stochastic volatility (SV) models has its roots in both mathematical finance and financial econometrics. In fact, several variations of SV models originated from research looking at very different issues. Volatility plays a central role in the pricing of derivative securities. The Black-Scholes model for the pricing of a European option is by far the most widely used formula, even when its underlying assumptions are known to be violated. The Black-Scholes model predicts a flat term structure of volatilities. In reality, the term structure of at-the-money implied volatilities is typically upward sloping when short-term volatilities are low and the reverse when they are high. The Black-Scholes model is taken as a reference point from which several notions of volatility are presented. Several stylized facts regarding volatility and option prices are also presented. Both sections set the scene for a formal framework defining stochastic volatility. The chapter then introduces the statistical models of stochastic volatility.
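
For reference, one canonical specification of the kind such chapters formalize is the discrete-time log-normal SV model below; this is a standard textbook form sketched for illustration, not a formula quoted from the chapter.

% Log-normal stochastic volatility model (illustrative textbook form):
% the return y_t is scaled by a latent volatility whose log follows an AR(1).
\begin{align*}
  y_t &= \sigma_t \, \varepsilon_t, & \varepsilon_t &\sim \text{i.i.d. } \mathcal{N}(0, 1), \\
  \log \sigma_t^2 &= \omega + \phi \log \sigma_{t-1}^2 + \eta_t, & \eta_t &\sim \text{i.i.d. } \mathcal{N}(0, \sigma_\eta^2),
\end{align*}

with |\phi| < 1 for stationarity. The randomness of \sigma_t is precisely what the constant-volatility Black-Scholes setting rules out, which is why that model cannot reproduce the sloping term structure of implied volatilities described above.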

1,466 citations

Book Chapter
TL;DR: In this paper, the authors describe the commonly used multidimensional item response theory (MIRT) models and the important methods needed for their practical application, including ways to determine the number of dimensions required to adequately model data, procedures for estimating model parameters, ways to define the space for a MIRT model, and procedures for transforming calibrations from different samples to put them in the same space.
Abstract: Multidimensional Item Response Theory is the first book to give thorough coverage to this emerging area of psychometrics. The book describes the commonly used multidimensional item response theory (MIRT) models and the important methods needed for their practical application. These methods include ways to determine the number of dimensions required to adequately model data, procedures for estimating model parameters, ways to define the space for a MIRT model, and procedures for transforming calibrations from different samples to put them in the same space. A full chapter is devoted to methods for multidimensional computerized adaptive testing. The text is appropriate for an advanced course in psychometric theory or as a reference work for those interested in applying MIRT methodology. A working knowledge of unidimensional item response theory and matrix algebra is assumed. Knowledge of factor analysis is also helpful.
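
As a concrete illustration, a common compensatory form of a MIRT model, the multidimensional two-parameter logistic model, specifies the probability that examinee j answers item i correctly as follows (a standard formulation, not quoted from the book):

\[
  P(X_{ij} = 1 \mid \boldsymbol{\theta}_j) = \frac{\exp\left(\mathbf{a}_i^{\top} \boldsymbol{\theta}_j + d_i\right)}{1 + \exp\left(\mathbf{a}_i^{\top} \boldsymbol{\theta}_j + d_i\right)},
\]

where \boldsymbol{\theta}_j is the examinee's vector of latent traits, \mathbf{a}_i the item's vector of discrimination parameters, and d_i an intercept. The dimension of \boldsymbol{\theta}_j is exactly what the dimensionality-assessment methods mentioned above aim to determine.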

868 citations

Book Chapter
TL;DR: It is shown that as the number of samples increases, not only does the designer have more confidence in the performance of the classifier, but also more measurements can be incorporated in the design of the classifier without fear of peaking in its performance.
Abstract: Publisher Summary This chapter discusses the role that the relationship between the number of measurements and the number of training patterns plays at various stages in the design of a pattern recognition system. The designer of a pattern recognition system should make every possible effort to obtain as many samples as possible. As the number of samples increases, not only does the designer have more confidence in the performance of the classifier, but also more measurements can be incorporated in the design of the classifier without the fear of peaking in its performance. However, there are many pattern classification problems where either the number of samples is limited or obtaining a large number of samples is extremely expensive. If the designer chooses to take the optimal Bayesian approach, the average performance of the classifier improves monotonically as the number of measurements is increased. Most practical pattern recognition systems employ a non-Bayesian decision rule because the use of the optimal Bayesian approach requires knowledge of the prior densities and, besides, their complexity precludes the development of real-time recognition systems. The peaking behavior of practical classifiers is caused principally by their nonoptimal use of measurements.
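
The peaking behavior discussed here (often called the Hughes phenomenon) can be illustrated with a small simulation. The sketch below is not taken from the chapter: it assumes a two-class Gaussian problem in which each added measurement carries less and less information, and it builds a plug-in linear rule from a fixed, small training set; averaged over repetitions, the test error typically falls at first and then rises as more measurements are added.

import numpy as np

rng = np.random.default_rng(0)

def run_trial(n_train=10, n_test=1000, max_dim=40):
    # Class means differ by 2/i in the i-th coordinate, so each added feature
    # carries less discriminative information (an assumed setup for illustration).
    delta = 2.0 / np.arange(1, max_dim + 1)
    errors = []
    for d in range(1, max_dim + 1):
        mu = delta[:d]
        # Training data: n_train samples per class with unit-variance Gaussian noise.
        x0 = rng.normal(0.0, 1.0, (n_train, d))
        x1 = mu + rng.normal(0.0, 1.0, (n_train, d))
        # Plug-in linear rule built from the estimated class means
        # (the common covariance is taken as known and equal to the identity).
        m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
        w = m1 - m0
        b = -0.5 * (m0 + m1) @ w
        # Independent test data from the true class distributions.
        t0 = rng.normal(0.0, 1.0, (n_test, d))
        t1 = mu + rng.normal(0.0, 1.0, (n_test, d))
        err = 0.5 * ((t0 @ w + b > 0).mean() + (t1 @ w + b <= 0).mean())
        errors.append(err)
    return np.array(errors)

# Average the error curve over repetitions; it typically dips and then climbs,
# i.e. the classifier's performance peaks at an intermediate number of measurements.
curve = np.mean([run_trial() for _ in range(100)], axis=0)
print("measurements at minimum average test error:", int(np.argmin(curve)) + 1)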

597 citations

Book Chapter
TL;DR: This chapter aims to introduce the prior modeling, estimation, and evaluation of mixture distributions in a Bayesian paradigm, and shows that mixture distributions provide a flexible, parametric framework for statistical modeling and analysis.
Abstract: Publisher Summary Mixture distributions comprise a finite or infinite number of components, possibly of different distributional types, that can describe different features of data. The Bayesian paradigm allows for probability statements to be made directly about the unknown parameters, for prior or expert opinion to be included in the analysis, and for hierarchical descriptions of both local-scale and global features of the model. This chapter aims to introduce the prior modeling, estimation, and evaluation of mixture distributions in a Bayesian paradigm. The chapter shows that mixture distributions provide a flexible, parametric framework for statistical modeling and analysis. The focus is on the methods rather than on advanced examples, in the hope that an understanding of the practical aspects of such modeling can be carried into many disciplines. The chapter also points out the fundamental difficulty of doing inference with such objects, along with a discussion of prior modeling, which is more restrictive than usual, and of the construction of estimators, which is also more involved than the standard posterior mean solution. Finally, the chapter gives some pointers to related models and problems, such as mixtures of regressions and hidden Markov models, as well as Dirichlet priors.
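
For concreteness, a finite mixture with K components and a standard Bayesian prior on the weights can be written as follows (a generic formulation sketched for illustration, not a quotation from the chapter):

\[
  f(x \mid \boldsymbol{\pi}, \boldsymbol{\theta}) = \sum_{k=1}^{K} \pi_k \, f_k(x \mid \theta_k),
  \qquad \boldsymbol{\pi} = (\pi_1, \ldots, \pi_K) \sim \mathrm{Dirichlet}(\alpha_1, \ldots, \alpha_K),
\]

with priors placed on the component parameters \theta_k as well. The invariance of this likelihood under permutations of the component labels (label switching) is one source of the inferential difficulty the chapter refers to, and it is also why the standard posterior mean is not directly usable as an estimator here.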

466 citations

Book Chapter
TL;DR: In this paper, the authors focus on inequalities, small ball probabilities, and applications of Gaussian processes, and find that the small ball probability is a key step in studying the lower limits of Gaussian processes.
Abstract: Publisher Summary This chapter focuses on inequalities, small ball probabilities, and applications of Gaussian processes. It is well known that the large deviation result plays a fundamental role in studying the upper limits of Gaussian processes, such as the Strassen-type law of the iterated logarithm. By contrast, small ball estimates are notoriously difficult, and there are only a few Gaussian measures for which the small ball probability can be determined completely. The small ball probability is a key step in studying the lower limits of Gaussian processes. It has been found that the small ball estimate has close connections with various approximation quantities of compact sets and operators, and has a variety of applications in studies of Hausdorff dimensions, the rate of convergence in Strassen's law of the iterated logarithm, and empirical processes.
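
A classical example, stated here for reference rather than taken from the chapter, is standard Brownian motion, one of the few Gaussian processes for which the small ball rate is known exactly:

\[
  -\log \mathbb{P}\left( \sup_{0 \le t \le 1} |W_t| \le \varepsilon \right) \sim \frac{\pi^2}{8 \varepsilon^2} \quad \text{as } \varepsilon \to 0,
\]

so the probability that the path stays within a band of half-width \varepsilon decays like \exp\left(-\pi^2 / (8 \varepsilon^2)\right). Estimates of this type are what drive the lower limits and the Strassen-type rates of convergence mentioned above.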

442 citations

Performance Metrics

No. of papers from the Journal in previous years:

Year    Papers
2023    2
2022    8
2021    1
2020    4
2019    3
2017    14