Other affiliations: University of South Florida; University of Maryland, College Park; Temple University
Bio: Samuel Kotz is an academic researcher from George Washington University. His research focuses on multivariate statistics and probability distributions. He has an h-index of 60, has co-authored 439 publications, and has received 35,303 citations. Previous affiliations of Samuel Kotz include the University of South Florida and the University of Maryland, College Park.
01 Jan 1994
Abstract: Continuous Distributions (General); Normal Distributions; Lognormal Distributions; Inverse Gaussian (Wald) Distributions; Cauchy Distribution; Gamma Distributions; Chi-Square Distributions, Including Chi and Rayleigh; Exponential Distributions; Pareto Distributions; Weibull Distributions; Abbreviations; Indexes.
01 Nov 1989
TL;DR: In this work, the authors develop the theory of spherically and elliptically symmetric multivariate distributions, covering marginal and conditional distributions, moments and densities, the relationship between φ and f, mixtures of normal distributions, robust statistics and regression models, and log-elliptical, additive logistic elliptical, and complex elliptically symmetric distributions.
Abstract: Part 1, Preliminaries: construction of symmetric multivariate distributions; notation of algebraic entities and characteristics of random quantities; the "d" operator; groups and invariance; Dirichlet distribution; problems. Part 2, Spherically and elliptically symmetric distributions: introduction and definition; marginal distributions, moments and density; the relationship between φ and f; conditional distributions; properties of elliptically symmetric distributions; mixtures of normal distributions; robust statistics and regression models; log-elliptical and additive logistic elliptical distributions; multivariate log-elliptical distribution; complex elliptically symmetric distributions. Part 3, Some subclasses of elliptical distributions: multiuniform distribution (characteristic function, moments, marginal and conditional distributions); uniform distribution in the unit sphere; symmetric Kotz-type distributions (definition, distribution of R², moments, multivariate normal distributions, the c.f. of Kotz-type distributions); symmetric multivariate Pearson type VII distributions (definition, marginal densities, conditional distributions, moments); some examples; extended Tn family; relationships between Ln and Tn families of distributions; order statistics; mixtures of exponential distributions; independence, robustness and characterizations; problems. Part 6, Multivariate Liouville distributions: definitions and properties; examples; marginal distributions; conditional distribution; characterizations; scale-invariant statistics; survival functions; inequalities and applications.
01 Jan 1992
TL;DR: In this book, the authors survey the major families of discrete distributions, including hypergeometric, mixture, and stopped-sum distributions (see Section 2.1).
Abstract: Preface. 1. Preliminary Information. 2. Families of Discrete Distributions. 3. Binomial Distributions. 4. Poisson Distributions. 5. Negative Binomial Distributions. 6. Hypergeometric Distributions. 7. Logarithmic and Lagrangian Distributions. 8. Mixture Distributions. 9. Stopped-Sum Distributions. 10. Matching, Occupancy, Runs, and q-Series Distributions. 11. Parametric Regression Models and Miscellanea. Bibliography. Abbreviations. Index.
TL;DR: The hierarchical model of Lonnstedt and Speed (2002) is developed into a practical approach for general microarray experiments with arbitrary numbers of treatments and RNA samples and the moderated t-statistic is shown to follow a t-distribution with augmented degrees of freedom.
Abstract: The problem of identifying differentially expressed genes in designed microarray experiments is considered. Lonnstedt and Speed (2002) derived an expression for the posterior odds of differential expression in a replicated two-color experiment using a simple hierarchical parametric model. The purpose of this paper is to develop the hierarchical model of Lonnstedt and Speed (2002) into a practical approach for general microarray experiments with arbitrary numbers of treatments and RNA samples. The model is reset in the context of general linear models with arbitrary coefficients and contrasts of interest. The approach applies equally well to both single-channel and two-color microarray experiments. Consistent, closed-form estimators are derived for the hyperparameters in the model. The estimators proposed have robust behavior even for small numbers of arrays and allow for incomplete data arising from spot filtering or spot quality weights. The posterior odds statistic is reformulated in terms of a moderated t-statistic in which posterior residual standard deviations are used in place of ordinary standard deviations. The empirical Bayes approach is equivalent to shrinkage of the estimated sample variances towards a pooled estimate, resulting in far more stable inference when the number of arrays is small. The use of moderated t-statistics has the advantage over the posterior odds that the number of hyperparameters that need to be estimated is reduced; in particular, knowledge of the non-null prior for the fold changes is not required. The moderated t-statistic is shown to follow a t-distribution with augmented degrees of freedom. The moderated t inferential approach extends to accommodate tests of composite null hypotheses through the use of moderated F-statistics. The performance of the methods is demonstrated in a simulation study. Results are presented for two publicly available data sets.
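The variance shrinkage behind the moderated t-statistic can be sketched as follows. This is a minimal illustration, not the limma package's API: the prior degrees of freedom `d0` and prior variance `s0_sq` are taken as given hyperparameters (in the paper they are estimated from the data by moment matching), and the function name is illustrative.

```python
import numpy as np

def moderated_t(beta_hat, s2, df, v, d0, s0_sq):
    """Empirical Bayes moderated t-statistic for one gene (or an array of genes).

    beta_hat : estimated contrast (e.g. log fold change)
    s2, df   : gene-wise sample variance and its residual degrees of freedom
    v        : unscaled variance of beta_hat from the linear model
    d0, s0_sq: prior degrees of freedom and prior variance (assumed known here)
    """
    # Shrink the sample variance toward the prior: a df-weighted average.
    s2_post = (d0 * s0_sq + df * s2) / (d0 + df)
    # Moderated t uses the posterior standard deviation in the denominator.
    t_mod = beta_hat / np.sqrt(s2_post * v)
    # Under the model, t_mod follows a t-distribution with augmented df.
    return t_mod, df + d0
```

With a large `d0` the statistic approaches an ordinary t computed against the prior variance; with `d0 = 0` it reduces to the classical t-statistic.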
TL;DR: This work proposes a principled statistical framework for discerning and quantifying power-law behavior in empirical data by combining maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios.
Abstract: Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution—the part of the distribution representing large but rare events—and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data, while in others the power law is ruled out.
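The core of the framework, the continuous maximum-likelihood estimator for the exponent together with the Kolmogorov-Smirnov distance used for goodness of fit, can be sketched as below. This is a simplified illustration that takes x_min as given; the paper additionally selects x_min by minimizing the KS distance over candidate values.

```python
import numpy as np

def fit_power_law(x, xmin):
    """MLE for a continuous power law p(x) ~ x^(-alpha), x >= xmin,
    plus the KS distance between empirical and fitted tail CDFs."""
    x = np.sort(np.asarray(x, dtype=float))
    tail = x[x >= xmin]
    n = len(tail)
    # Continuous MLE: alpha_hat = 1 + n / sum(ln(x_i / xmin))
    alpha = 1.0 + n / np.sum(np.log(tail / xmin))
    # Fitted CDF above xmin and two-sided KS statistic
    fit = 1.0 - (tail / xmin) ** (1.0 - alpha)
    ks = max(np.max(np.abs(np.arange(n) / n - fit)),
             np.max(np.abs(np.arange(1, n + 1) / n - fit)))
    return alpha, ks
```

Synthetic data for checking the fit can be drawn by inverse-transform sampling, x = xmin * (1 - u)^(-1/(alpha - 1)) with u uniform on (0, 1).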
TL;DR: In this article, a unified approach to fitting two-stage random-effects models, based on a combination of empirical Bayes and maximum likelihood estimation of model parameters and using the EM algorithm, is discussed.
Abstract: Models for the analysis of longitudinal data must recognize the relationship between serial observations on the same unit. Multivariate models with general covariance structure are often difficult to apply to highly unbalanced data, whereas two-stage random-effects models can be used easily. In two-stage models, the probability distributions for the response vectors of different individuals belong to a single family, but some random-effects parameters vary across individuals, with a distribution specified at the second stage. A general family of models is discussed, which includes both growth models and repeated-measures models as special cases. A unified approach to fitting these models, based on a combination of empirical Bayes and maximum likelihood estimation of model parameters and using the EM algorithm, is discussed. Two examples are taken from a current epidemiological study of the health effects of air pollution.
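For intuition, the EM iteration can be sketched on the simplest special case: a balanced random-intercept model y_ij = mu + b_i + e_ij with b_i ~ N(0, tau²) and e_ij ~ N(0, sigma²). This is an illustrative reduction of the general two-stage framework, not the authors' full algorithm, and the function name is made up for the example.

```python
import numpy as np

def em_random_intercept(y, n_iter=200):
    """EM for a balanced random-intercept model.

    y : (groups, n_per_group) array of responses.
    Returns ML-style estimates of (mu, tau2, sig2)."""
    m, n = y.shape
    ybar = y.mean(axis=1)
    mu, tau2, sig2 = y.mean(), y.var() / 2, y.var() / 2
    for _ in range(n_iter):
        # E-step: posterior of each random intercept given current parameters
        shrink = n * tau2 / (sig2 + n * tau2)
        post_mean = shrink * (ybar - mu)
        post_var = tau2 * sig2 / (sig2 + n * tau2)
        # M-step: maximize the expected complete-data log-likelihood
        mu = (y - post_mean[:, None]).mean()
        tau2 = np.mean(post_mean ** 2) + post_var
        sig2 = np.mean((y - mu - post_mean[:, None]) ** 2) + post_var
    return mu, tau2, sig2
```

The E-step is the empirical Bayes piece (each group's intercept is shrunk toward zero by a factor depending on the variance components); the M-step re-estimates the parameters from the posterior moments, exactly the alternation the paper describes for the general model.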
TL;DR: In this paper, a closed-form solution for the price of a European call option on an asset with stochastic volatility is derived based on characteristi c functions and can be applied to other problems.
Abstract: I use a new technique to derive a closed-form solution for the price of a European call option on an asset with stochastic volatility. The model allows arbitrary correlation between volatility and spot-asset returns. I introduce stochastic interest rates and show how to apply the model to bond options and foreign currency options. Simulations show that correlation between volatility and the spot asset's price is important for explaining return skewness and strike-price biases in the Black-Scholes (1973) model. The solution technique is based on characteristic functions and can be applied to other problems.
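The "closed-form" solution is semi-analytic: the call price S0·P1 − K·e^(−rT)·P2 involves two probabilities, each recovered from the model's characteristic function by a one-dimensional Fourier inversion. A minimal sketch, using Heston's original 1993 formulation and a simple rectangle-rule quadrature; the function name, integration grid, and parameter values are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def heston_call(S0, K, T, r, v0, kappa, theta, sigma, rho,
                phi_max=200.0, n=4000):
    """European call under the Heston stochastic-volatility model.
    v0: initial variance; kappa, theta: mean reversion speed and level;
    sigma: vol-of-vol; rho: spot/vol correlation."""
    phi = np.linspace(1e-4, phi_max, n)  # quadrature grid (avoid phi = 0)
    x, a = np.log(S0), kappa * theta
    P = []
    # (u_j, b_j) pairs for the two inversion integrals P1 and P2
    for u, b in [(0.5, kappa - rho * sigma), (-0.5, kappa)]:
        beta = b - rho * sigma * 1j * phi
        d = np.sqrt(beta ** 2 - sigma ** 2 * (2 * u * 1j * phi - phi ** 2))
        g = (beta + d) / (beta - d)
        C = (r * 1j * phi * T + a / sigma ** 2
             * ((beta + d) * T - 2 * np.log((1 - g * np.exp(d * T)) / (1 - g))))
        D = (beta + d) / sigma ** 2 * (1 - np.exp(d * T)) / (1 - g * np.exp(d * T))
        f = np.exp(C + D * v0 + 1j * phi * x)  # characteristic function
        integrand = np.real(np.exp(-1j * phi * np.log(K)) * f / (1j * phi))
        P.append(0.5 + integrand.sum() * (phi[1] - phi[0]) / np.pi)
    return S0 * P[0] - K * np.exp(-r * T) * P[1]
```

Production implementations replace the naive quadrature with adaptive integration and use a reparameterized characteristic function ("the little Heston trap") to avoid complex-logarithm branch-cut discontinuities at long maturities.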
01 Jan 2003
TL;DR: In this book, the authors describe the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation, and compare simulation-assisted estimation procedures, including maximum simulated likelihood, method of simulated moments, and method of simulated scores.
Abstract: This book describes the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation. Researchers use these statistical methods to examine the choices that consumers, households, firms, and other agents make. Each of the major models is covered: logit, generalized extreme value, or GEV (including nested and cross-nested logits), probit, and mixed logit, plus a variety of specifications that build on these basics. Simulation-assisted estimation procedures are investigated and compared, including maximum simulated likelihood, method of simulated moments, and method of simulated scores. Procedures for drawing from densities are described, including variance-reduction techniques such as antithetic variates and Halton draws. Recent advances in Bayesian procedures are explored, including the use of the Metropolis-Hastings algorithm and its variant Gibbs sampling. No other book incorporates all these fields, which have arisen in the past 20 years. The procedures are applicable in many fields, including energy, transportation, environmental studies, health, labor, and marketing.
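The Halton draws mentioned above replace pseudo-random uniforms with a deterministic low-discrepancy sequence, so simulated averages in estimators like maximum simulated likelihood converge faster for a given number of draws. A minimal sketch (the helper name is illustrative); in practice each random coefficient gets a different prime base, and the uniforms are pushed through an inverse CDF to obtain draws from the mixing distribution.

```python
import numpy as np

def halton(n, base):
    """First n points of the Halton sequence in the given (prime) base.
    Each integer i+1 is written in the base and its digits are mirrored
    around the radix point, filling the unit interval evenly."""
    seq = np.zeros(n)
    for i in range(n):
        f, k = 1.0, i + 1
        while k > 0:
            f /= base
            seq[i] += f * (k % base)
            k //= base
    return seq
```

Because the points fill [0, 1) far more evenly than pseudo-random numbers, sample moments of smooth functions of the draws are close to their integrals even for modest n, which is exactly the variance-reduction property the book exploits.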