Author

Hari Mukerjee

Other affiliations: City University of New York
Bio: Hari Mukerjee is an academic researcher from Wichita State University. The author has contributed to research in the topics Estimator and Stochastic ordering. The author has an h-index of 9 and has co-authored 31 publications receiving 957 citations. Previous affiliations of Hari Mukerjee include City University of New York.

Papers
Journal ArticleDOI
TL;DR: Theoretical results are derived for comparing coherent systems of a given order whose components have independent and identically distributed lifetimes; sufficient conditions are provided for the lifetime of one system to exceed that of another in the senses of stochastic ordering, hazard rate ordering, and likelihood ratio ordering.
Abstract: Various methods and criteria for comparing coherent systems are discussed. Theoretical results are derived for comparing systems of a given order when components are assumed to have independent and identically distributed lifetimes. All comparisons rely on the representation of a system's lifetime distribution as a function of the system's “signature,” that is, as a function of the vector p= (p1, … , pn), where pi is the probability that the system fails upon the occurrence of the ith component failure. Sufficient conditions are provided for the lifetime of one system to be larger than that of another system in three different senses: stochastic ordering, hazard rate ordering, and likelihood ratio ordering. Further, a new preservation theorem for hazard rate ordering is established. In the final section, the notion of system signature is used to examine a recently published conjecture regarding componentwise and systemwise redundancy. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 507–523, 1999
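For small systems, the signature can be computed directly by enumeration. The following is a minimal sketch (not taken from the paper) that enumerates the n! equally likely failure orders of i.i.d. components; the 2-out-of-3 structure function used here is an assumed example.

```python
# Illustrative sketch: computing the signature of a small coherent system
# by enumerating the equally likely component-failure orders.
from itertools import permutations

def two_out_of_three(working):
    # Assumed example: system works while at least two components work.
    return sum(working) >= 2

def signature(structure, n):
    """s_i = P(system fails upon the i-th component failure) under i.i.d.
    lifetimes, computed by enumerating all n! failure orders."""
    counts = [0] * n
    for order in permutations(range(n)):
        working = [True] * n
        for i, comp in enumerate(order, start=1):
            working[comp] = False
            if not structure(working):
                counts[i - 1] += 1
                break
    total = sum(counts)
    return [c / total for c in counts]

print(signature(two_out_of_three, 3))  # -> [0.0, 1.0, 0.0]
```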

268 citations

Journal ArticleDOI
TL;DR: A hybrid procedure is proposed that combines the guaranteed monotonicity of monotone regression with the superior analytic and asymptotic properties of nonparametric (smooth) regression estimators.
Abstract: In monotone regression procedures one utilizes only the monotonicity of the regression function. In nonparametric regression one utilizes only the assumed smoothness. The analytic and asymptotic properties of the estimator are superior in the latter case; however, monotonicity is not guaranteed. We study a hybrid procedure that produces monotone estimators with properties similar to those of nonparametric regression estimators.
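For intuition, one simple way to obtain a monotone fit with smoothing-type behaviour is to kernel-smooth the data and then project the fitted values onto the set of monotone functions. The sketch below only illustrates that idea; the monotone_smooth helper, Gaussian kernel, and bandwidth are assumptions, not the procedure studied in the paper.

```python
# Minimal sketch: kernel smoothing followed by an isotonic (monotone) projection.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def monotone_smooth(x, y, bandwidth=0.3):
    # Nadaraya-Watson kernel smoother with a Gaussian kernel
    weights = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    smooth = (weights @ y) / weights.sum(axis=1)
    # Project the smoothed values onto nondecreasing functions
    return IsotonicRegression(increasing=True).fit_transform(x, smooth)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sqrt(x) + rng.normal(0, 0.1, 100)   # monotone signal plus noise
fit = monotone_smooth(x, y)
```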

204 citations

Journal ArticleDOI
TL;DR: The k-sample problem of estimating distribution functions under the stochastic ordering F1 ≤ F2 ≤ ⋯ ≤ Fk is considered; simple order-restricted estimators are proposed that are strongly uniformly consistent and have simple asymptotic distributions.
Abstract: If X1 and X2 are random variables with distribution functions F1 and F2, then X1 is said to be stochastically larger than X2 if F1 ≤ F2. Statistical inference under stochastic ordering in the two-sample case has a long and rich history. In this article we consider the k-sample case; that is, we have k populations with distribution functions F1, F2, … , Fk, k ≥ 2, and we assume that F1 ≤ F2 ≤ ⋯ ≤ Fk. For k = 2, the nonparametric maximum likelihood estimators of F1 and F2 under this order restriction have been known for a long time; their asymptotic distributions have been derived only recently. These results have very complicated forms and are hard to deal with when making statistical inferences. We provide simple estimators when k ≥ 2. These are strongly uniformly consistent, and their asymptotic distributions have simple forms. If F̂i and F̃i are the empirical and our restricted estimators of Fi, then we show that, asymptotically, P(|F̃i(x) − Fi(x)| ≤ u) ≥ P(|F̂i(x) − Fi(x)| ≤ u) for all x and all u > 0, with strict inequality in some cases. This clearly shows...
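As an illustration of order-restricted estimation (an assumption for illustration, not the estimators proposed in the article), one can force the ordering F1 ≤ F2 ≤ ⋯ ≤ Fk pointwise by isotonizing the k empirical distribution function values at each grid point, weighted by sample size:

```python
# Illustrative sketch: pointwise isotonic regression of k empirical CDFs.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def restricted_cdfs(samples, grid):
    """samples: list of k 1-D arrays; returns a (k, len(grid)) array whose
    rows satisfy F1* <= F2* <= ... <= Fk* at every grid point."""
    k = len(samples)
    ecdf = np.array([[np.mean(s <= x) for x in grid] for s in samples])
    weights = np.array([len(s) for s in samples], dtype=float)
    iso = IsotonicRegression(increasing=True)
    return np.column_stack([
        iso.fit_transform(np.arange(k), ecdf[:, j], sample_weight=weights)
        for j in range(len(grid))
    ])
```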

64 citations

Journal ArticleDOI
TL;DR: The nonparametric maximum likelihood estimator (NPMLE) of S under uniform stochastic ordering when T is known is known to be inconsistent; this article shows that the restricted NPMLEs of several survival functions linearly ordered by uniform stochastic ordering are, in general, also inconsistent.
Abstract: If S and T are survival functions for two life distributions, then S is said to be uniformly stochastically smaller than T, denoted by S ≪ T, if θ(x) ≡ S(x)/T(x) is nonincreasing in x on {x: T(x) > 0}. This ordering is transitive. Uniform stochastic ordering (USO) has found important applications in nonparametric accelerated life testing, among other areas. It has been shown that the nonparametric maximum likelihood estimator (NPMLE) of S under USO when T is known is inconsistent. Dykstra, Kochar, and Robertson derived the restricted NPMLEs of several unknown survival functions linearly ordered by USO. This article shows that these too are inconsistent in general. Rojo and Samaniego gave excellent ad hoc estimators of S (or of T) when the other is known. Based on their idea for the one-sample problem, they also gave two ad hoc estimators (one of them only implied) of S and T when both are unknown. These are consistent, but they lack some desirable properties. This article introduces a one-parameter family...
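To make the USO constraint concrete, the sketch below (an illustrative projection, not necessarily the estimator analyzed in the article) restricts an empirical survival function so that its ratio to a known T is nonincreasing, using a running minimum; the exponential reference T is a made-up example.

```python
# Illustrative sketch: force S(x)/T(x) to be nonincreasing on a grid.
import numpy as np

def uso_restricted_survival(sample, T, grid):
    """sample: data from S; T: callable known survival function; grid: sorted points."""
    S_hat = np.array([np.mean(sample > x) for x in grid])     # empirical survival
    T_vals = np.array([T(x) for x in grid])
    safe_T = np.where(T_vals > 0, T_vals, 1.0)
    ratio = np.where(T_vals > 0, S_hat / safe_T, 0.0)
    ratio_iso = np.minimum.accumulate(ratio)                  # make the ratio nonincreasing
    return np.minimum(T_vals * ratio_iso, 1.0)

# Example: exponential(rate 1) data restricted under a hypothetical exponential(rate 1/2) T.
rng = np.random.default_rng(0)
x = rng.exponential(1.0, size=200)
grid = np.linspace(0.0, 5.0, 101)
S_star = uso_restricted_survival(x, lambda t: np.exp(-t / 2.0), grid)
```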

30 citations


Cited by
Journal ArticleDOI
TL;DR: A classic monograph on the weak convergence of probability measures on metric spaces.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex: Wiley, 1968. xii, 253 pp. 9 1/4 in. 117s.

5,689 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Weak-convergence methods in metric spaces are developed, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak-convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4 which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable." Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an

3,554 citations

Journal ArticleDOI
TL;DR: A large number of different definitions are used for sample quantiles in statistical computer packages; often within the same package one definition is used to compute a quantile explicitly, while other definitions may be used when producing a boxplot, a probability plot, or a QQ plot.
Abstract: There are a large number of different definitions used for sample quantiles in statistical computer packages. Often within the same package one definition will be used to compute a quantile explicitly, while other definitions may be used when producing a boxplot, a probability plot, or a QQ plot. We compare the most commonly implemented sample quantile definitions by writing them in a common notation and investigating their motivation and some of their properties. We argue that there is a need to adopt a standard definition for sample quantiles so that the same answers are produced by different packages and within each package. We conclude by recommending that the median-unbiased estimator be used because it has most of the desirable properties of a quantile estimator and can be defined independently of the underlying distribution.
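For example, NumPy (version 1.22 or later) exposes several of these definitions through the method argument of numpy.quantile, including a median_unbiased option corresponding to the median-unbiased estimator recommended in the article; a quick comparison on a small sample:

```python
# Different sample-quantile definitions can give different answers on the same data.
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
for method in ("linear", "lower", "nearest", "median_unbiased", "normal_unbiased"):
    print(method, np.quantile(data, 0.9, method=method))
```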

911 citations

Journal ArticleDOI
TL;DR: A multivariate extreme value threshold model for joint tail estimation is proposed that overcomes the problems encountered with existing techniques when the variables are near independence; tests are developed for independence of extremes of the marginal variables, both when the thresholds are fixed and when they increase with the sample size.
Abstract: We propose a multivariate extreme value threshold model for joint tail estimation which overcomes the problems encountered with existing techniques when the variables are near independence. We examine inference under the model and develop tests for independence of extremes of the marginal variables, both when the thresholds are fixed, and when they increase with the sample size. Motivated by results obtained from this model, we give a new and widely applicable characterisation of dependence in the joint tail which includes existing models as special cases. A new parameter which governs the form of dependence is of fundamental importance to this characterisation. By estimating this parameter, we develop a diagnostic test which assesses the applicability of bivariate extreme value joint tail models. The methods are demonstrated through simulation and by analysing two previously published data sets.
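A rough sketch of estimating a tail-dependence parameter of this general kind; the marginal transformation, Hill-type estimator, and choice of k below are assumptions for illustration, not the article's procedure.

```python
# Rough sketch: transform margins to a heavy-tailed scale via empirical CDFs,
# then apply a Hill-type estimator to the structure variable min(Z1, Z2).
# Values near 1 suggest asymptotic dependence; values well below 1 suggest
# near-independence of extremes.
import numpy as np

def empirical_frechet(x):
    ranks = np.argsort(np.argsort(x)) + 1.0
    u = ranks / (len(x) + 1.0)                 # empirical CDF values in (0, 1)
    return -1.0 / np.log(u)                    # unit-Frechet scale

def tail_dependence_eta(x, y, k=50):
    """Hill estimate of the tail index of min(Z1, Z2) from the k largest points."""
    t = np.minimum(empirical_frechet(x), empirical_frechet(y))
    t_sorted = np.sort(t)[::-1]
    return np.mean(np.log(t_sorted[:k]) - np.log(t_sorted[k]))

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = 0.5 * x + np.sqrt(0.75) * rng.normal(size=2000)   # correlated but asymptotically independent
print(tail_dependence_eta(x, y))
```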

802 citations

Book
01 Jan 2002
TL;DR: A comprehensive treatment of measuring market risk, covering the mean-variance framework, value at risk (VaR), and coherent risk measures, together with non-parametric, parametric, and simulation-based estimation methods.
Abstract: Preface to the Second Edition. Acknowledgements.
1 The Rise of Value at Risk: 1.1 The emergence of financial risk management; 1.2 Market risk management; 1.3 Risk management before VaR; 1.4 Value at risk. Appendix 1: Types of Market Risk.
2 Measures of Financial Risk: 2.1 The mean-variance framework for measuring financial risk; 2.2 Value at risk; 2.3 Coherent risk measures; 2.4 Conclusions. Appendix 1: Probability Functions. Appendix 2: Regulatory Uses of VaR.
3 Estimating Market Risk Measures: An Introduction and Overview: 3.1 Data; 3.2 Estimating historical simulation VaR; 3.3 Estimating parametric VaR; 3.4 Estimating coherent risk measures; 3.5 Estimating the standard errors of risk measure estimators; 3.6 Overview. Appendix 1: Preliminary Data Analysis. Appendix 2: Numerical Integration Methods.
4 Non-parametric Approaches: 4.1 Compiling historical simulation data; 4.2 Estimation of historical simulation VaR and ES; 4.3 Estimating confidence intervals for historical simulation VaR and ES; 4.4 Weighted historical simulation; 4.5 Advantages and disadvantages of non-parametric methods; 4.6 Conclusions. Appendix 1: Estimating Risk Measures with Order Statistics. Appendix 2: The Bootstrap. Appendix 3: Non-parametric Density Estimation. Appendix 4: Principal Components Analysis and Factor Analysis.
5 Forecasting Volatilities, Covariances and Correlations: 5.1 Forecasting volatilities; 5.2 Forecasting covariances and correlations; 5.3 Forecasting covariance matrices. Appendix 1: Modelling Dependence: Correlations and Copulas.
6 Parametric Approaches (I): 6.1 Conditional vs unconditional distributions; 6.2 Normal VaR and ES; 6.3 The t-distribution; 6.4 The lognormal distribution; 6.5 Miscellaneous parametric approaches; 6.6 The multivariate normal variance-covariance approach; 6.7 Non-normal variance-covariance approaches; 6.8 Handling multivariate return distributions with copulas; 6.9 Conclusions. Appendix 1: Forecasting Longer-term Risk Measures.
7 Parametric Approaches (II): Extreme Value: 7.1 Generalised extreme-value theory; 7.2 The peaks-over-threshold approach: the generalised Pareto distribution; 7.3 Refinements to EV approaches; 7.4 Conclusions.
8 Monte Carlo Simulation Methods: 8.1 Uses of Monte Carlo simulation; 8.2 Monte Carlo simulation with a single risk factor; 8.3 Monte Carlo simulation with multiple risk factors; 8.4 Variance-reduction methods; 8.5 Advantages and disadvantages of Monte Carlo simulation; 8.6 Conclusions.
9 Applications of Stochastic Risk Measurement Methods: 9.1 Selecting stochastic processes; 9.2 Dealing with multivariate stochastic processes; 9.3 Dynamic risks; 9.4 Fixed-income risks; 9.5 Credit-related risks; 9.6 Insurance risks; 9.7 Measuring pensions risks; 9.8 Conclusions.
10 Estimating Options Risk Measures: 10.1 Analytical and algorithmic solutions for options VaR; 10.2 Simulation approaches; 10.3 Delta-gamma and related approaches; 10.4 Conclusions.
11 Incremental and Component Risks: 11.1 Incremental VaR; 11.2 Component VaR; 11.3 Decomposition of coherent risk measures.
12 Mapping Positions to Risk Factors: 12.1 Selecting core instruments; 12.2 Mapping positions and VaR estimation.
13 Stress Testing: 13.1 Benefits and difficulties of stress testing; 13.2 Scenario analysis; 13.3 Mechanical stress testing; 13.4 Conclusions.
14 Estimating Liquidity Risks: 14.1 Liquidity and liquidity risks; 14.2 Estimating liquidity-adjusted VaR; 14.3 Estimating liquidity at risk (LaR); 14.4 Estimating liquidity in crises.
15 Backtesting Market Risk Models: 15.1 Preliminary data issues; 15.2 Backtests based on frequency tests; 15.3 Backtests based on tests of distribution equality; 15.4 Comparing alternative models; 15.5 Backtesting with alternative positions and data; 15.6 Assessing the precision of backtest results; 15.7 Summary and conclusions. Appendix 1: Testing Whether Two Distributions are Different.
16 Model Risk: 16.1 Models and model risk; 16.2 Sources of model risk; 16.3 Quantifying model risk; 16.4 Managing model risk; 16.5 Conclusions.
Bibliography. Author Index. Subject Index.
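As a small illustration of one of the book's basic building blocks, here is a hedged sketch of historical-simulation VaR and expected shortfall; the sign convention and quantile rule are one common choice, not necessarily the book's exact definitions.

```python
# Minimal sketch: historical-simulation VaR and ES at confidence level alpha
# from a vector of profit-and-loss observations.
import numpy as np

def historical_var_es(pnl, alpha=0.99):
    losses = -np.asarray(pnl)                      # losses are negative P&L
    var = np.quantile(losses, alpha)               # VaR: alpha-quantile of the loss distribution
    es = losses[losses >= var].mean()              # ES: average loss at or beyond VaR
    return var, es

rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1.0, size=10_000)            # hypothetical daily P&L
var99, es99 = historical_var_es(pnl, alpha=0.99)
```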

519 citations