Author

Vladimir F. Strelchonok

Bio: Vladimir F. Strelchonok is an academic researcher from the Baltic International Academy. The author has contributed to research in the topics Parametric statistics and Order statistic. The author has an h-index of 3 and has co-authored 12 publications receiving 28 citations.

Papers
Journal ArticleDOI
TL;DR: This paper presents a new approach to constructing lower and upper tolerance limits on order statistics in future samples, which requires a quantile of the F distribution and is conceptually simple and easy to use.
Abstract: It is often desirable to have statistical tolerance limits available for the distributions used to describe time-to-failure data in reliability problems. For example, one might wish to know if at least a certain proportion, say β, of a manufactured product will operate for at least T hours. This question cannot usually be answered exactly, but it may be possible to determine a lower tolerance limit L(X), based on a random sample X, such that one can say with a certain confidence γ that at least 100β% of the product will operate longer than L(X). Then reliability statements can be made based on L(X), or decisions can be reached by comparing L(X) to T. Tolerance limits of the type mentioned above are considered in this paper, which presents a new approach to constructing lower and upper tolerance limits on order statistics in future samples. Attention is restricted to invariant families of distributions under parametric uncertainty. The approach used here emphasizes pivotal quantities relevant for obtaining tolerance factors and is applicable whenever the statistical problem is invariant under a group of transformations that acts transitively on the parameter space. It does not require the construction of any tables and is applicable whether the past data are complete or Type II censored. The proposed approach requires a quantile of the F distribution and is conceptually simple and easy to use. For illustration, the Pareto distribution is considered. The discussion is restricted to one-sided tolerance limits. A practical example is given.
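
The "quantile of the F distribution" step mentioned in the abstract can be made concrete with a classical special case. The sketch below is not the paper's Pareto-specific result; it is the textbook exact lower limit on the smallest order statistic of a future sample under an assumed exponential model with Type II censored past data, and all numbers in it are hypothetical.

```python
# Illustrative only: classical exact lower limit on the smallest order
# statistic Y_(1) of a FUTURE sample of size m, assuming exponential
# lifetimes and a past Type II censored sample with r observed failures
# and total time on test S. Then r*m*Y_(1)/S ~ F(2, 2r), so a lower limit
# exceeded by Y_(1) with probability gamma is  S * F_{1-gamma}(2, 2r) / (r*m).
from scipy.stats import f

def lower_limit_future_min_exponential(S, r, m, gamma=0.95):
    """Lower limit (level gamma) on the smallest lifetime in a future sample
    of size m, under the exponential model (hypothetical illustration)."""
    return S * f.ppf(1.0 - gamma, 2, 2 * r) / (r * m)

# Hypothetical numbers: r = 10 observed failures, total time on test
# S = 5000 hours, future batch of m = 20 units, gamma = 0.95.
print(lower_limit_future_min_exponential(S=5000.0, r=10, m=20, gamma=0.95))
```

The Pareto model treated in the paper is closely related to this exponential form through a logarithmic transformation, but the exact Pareto factor is the paper's own result and is not reproduced here.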

9 citations

Journal ArticleDOI
24 Jun 2016
TL;DR: In this article, a new approach to constructing lower and upper tolerance limits on order statistics in future samples is proposed; it is applicable whenever the statistical problem is invariant under a group of transformations that acts transitively on the parameter space.
Abstract: Although the concept of statistical tolerance limits has been well recognized for a long time, surprisingly, their applications seem to remain limited. Analytic formulas for the tolerance limits are available in only simple cases, for example, for the upper or lower tolerance limit for a univariate normal population. Thus it becomes necessary to use new or innovative approaches which will allow one to construct tolerance limits on future order statistics for many populations. In this paper, a new approach to constructing lower and upper tolerance limits on order statistics in future samples is proposed. Attention is restricted to invariant families of distributions under parametric uncertainty. The approach used here emphasizes pivotal quantities relevant for obtaining tolerance factors and is applicable whenever the statistical problem is invariant under a group of transformations that acts transitively on the parameter space. It does not require the construction of any tables and is applicable whether the past data are complete or Type II censored. The proposed approach requires a quantile of the F distribution and is conceptually simple and easy to use. For illustration, the normal distribution is considered. The discussion is restricted to one-sided tolerance limits. A practical example of finding a warranty assessment of image quality is given.
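
For orientation, the sketch below computes the classical exact one-sided tolerance limit for a complete normal sample using a noncentral t quantile. This is the standard textbook construction, not the paper's F-quantile approach for order statistics in future samples; the sample data and the coverage/confidence values are hypothetical.

```python
# Classical one-sided (beta, gamma) lower tolerance limit for a normal sample:
# the exact factor is  k = t_{gamma; n-1, delta} / sqrt(n)  with noncentrality
# delta = z_beta * sqrt(n), and the limit is  xbar - k * s.
import numpy as np
from scipy.stats import norm, nct

def normal_lower_tolerance_limit(x, beta=0.90, gamma=0.95):
    """With confidence gamma, at least a proportion beta of the population
    exceeds the returned limit (assuming a normal model)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    delta = norm.ppf(beta) * np.sqrt(n)                 # noncentrality parameter
    k = nct.ppf(gamma, df=n - 1, nc=delta) / np.sqrt(n) # tolerance factor
    return x.mean() - k * x.std(ddof=1)

# Hypothetical data: 25 measurements from a Normal(100, 5) process.
rng = np.random.default_rng(1)
sample = rng.normal(loc=100.0, scale=5.0, size=25)
print(normal_lower_tolerance_limit(sample, beta=0.90, gamma=0.95))
```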

8 citations

Proceedings ArticleDOI
09 Jun 2006
TL;DR: In this article, the problem of determining an optimal booking policy for multiple fare classes in a pool of identical seats for multi-leg flights is considered; the dynamic policy uses the most recent demand and capacity information and allows one to allocate seats dynamically, with anticipation, over time.
Abstract: In this paper, the problem of determining optimal booking policy for multiple fare classes in a pool of identical seats for multi‐leg flights is considered. For large commercial airlines, efficiently setting and updating seat allocation targets for each passenger category on each multi‐leg flight is an extremely difficult problem. This paper presents static and dynamic policies of allocation of airline seats for multi‐leg flights with multiple fare classes, which allow one to maximize an expected contribution to profit. The dynamic policy uses the most recent demand and capacity information and allows one to allocate seats dynamically with anticipation over time. A numerical example is given.
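
As background for the kind of static policy the abstract describes, the sketch below implements the classical single-leg, two-fare-class protection rule (Littlewood's rule). The paper's static and dynamic policies for multi-leg flights with multiple fare classes are more general; the fares and the normal demand model used here are hypothetical.

```python
# Littlewood's rule (two fare classes, one leg): protect y seats for the high
# fare, where  f_low = f_high * P(D_high > y),  i.e.  y = F_high^{-1}(1 - f_low/f_high).
from scipy.stats import norm

def littlewood_protection_level(f_high, f_low, demand_mean, demand_sd):
    """Seats to protect for the high fare, assuming normally distributed
    high-fare demand (all numbers below are hypothetical)."""
    y = norm.ppf(1.0 - f_low / f_high, loc=demand_mean, scale=demand_sd)
    return max(0, round(y))

# Hypothetical fares and demand: high fare 400, low fare 150,
# high-fare demand ~ Normal(60, 15).
protect = littlewood_protection_level(400.0, 150.0, 60.0, 15.0)
print(f"protect {protect} seats; low-fare booking limit = capacity - {protect}")
```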

5 citations

Journal ArticleDOI
TL;DR: A model for predicting fatigue crack growth with an ANN is presented; it does not require material and environmental parameters and only needs the measured relation between the crack length a and the number of loading cycles N in service.
Abstract: Failure analysis and prevention are important to all of the engineering disciplines, especially the aerospace industry. Aircraft accidents are remembered by the public because of the unusually high loss of life and the broad extent of damage. In this paper, an artificial neural network (ANN) technique for the data processing of on-line fatigue crack growth monitoring is proposed after analyzing the general techniques for handling fatigue crack growth data. A model for predicting fatigue crack growth with an ANN is presented; it does not require material and environmental parameters, and only needs the measured relation between a (crack length) and N (number of loading cycles) in service. The feasibility of this model was verified by several examples. It compensates for the shortcomings of current data-processing techniques for on-line monitoring and therefore has practical value for engineering applications.
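
A minimal sketch of the general idea (fitting a neural network to measured N-versus-a data and using it to extrapolate crack growth) is given below. It uses scikit-learn's MLPRegressor on synthetic, Paris-law-like data; the network layout and the data are assumptions for illustration, not the model or measurements from the paper.

```python
# Sketch only: fit a small neural network to (N = load cycles, a = crack length)
# pairs and predict crack length at an unmeasured cycle count.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N = np.linspace(0.0, 2.0e5, 60)                      # load cycles
a = 2.0 + 8.0 * (N / 2.0e5) ** 2.5                   # crack length, mm (synthetic)
a_noisy = a + rng.normal(0.0, 0.05, size=N.shape)    # measurement noise

# Scale cycles to [0, 1] so the small network trains reliably.
X = (N / N.max()).reshape(-1, 1)
model = MLPRegressor(hidden_layer_sizes=(20, 20), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(X, a_noisy)

# Predicted crack length at N = 0.9 * N_max.
print(model.predict(np.array([[0.9]])))
```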

4 citations

Proceedings ArticleDOI
03 Dec 2010
TL;DR: In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution or a two-parameter Weibull distribution with the assumption that the shape parameter is known.
Abstract: A life test sampling plan is a technique that combines sampling, inspection, and decision making to determine the acceptance or rejection of a batch of products through experiments that examine the continuous usage time of the products. In life testing studies, the lifetime is usually assumed to be distributed as either a one-parameter exponential distribution or a two-parameter Weibull distribution with the assumption that the shape parameter is known. Such oversimplified assumptions can facilitate the follow-up analyses, but may overlook the fact that the lifetime distribution can significantly affect the estimation of the failure rate of a product. Moreover, sampling costs, inspection costs, warranty costs, and rejection costs are all essential and ought to be considered in choosing an appropriate sampling plan. The choice of an appropriate life test sampling plan is a crucial decision problem because a good plan not only can help producers save testing time and reduce testing cost; but it also can...

1 citation
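
To make the exponential-lifetime assumption in the abstract above concrete, the sketch below computes the operating characteristic (probability of acceptance) of a simple time-truncated life-test plan: put n units on test for t0 hours and accept the batch if at most c fail. The plan parameters are hypothetical, and the sketch deliberately ignores the sampling, inspection, warranty, and rejection costs that the paper argues should drive the choice of plan.

```python
# OC curve of a hypothetical time-truncated life-test acceptance plan under
# exponential lifetimes: n units on test for t0 hours, accept if <= c failures.
import numpy as np
from scipy.stats import binom

def prob_acceptance(n, c, t0, mean_life):
    """Probability of accepting the batch when the true mean life is mean_life."""
    p_fail = 1.0 - np.exp(-t0 / mean_life)   # P(a unit fails before t0)
    return binom.cdf(c, n, p_fail)

# Hypothetical plan: n = 20 units, t0 = 500 h, accept if at most c = 2 fail.
for mean_life in (1000.0, 2000.0, 5000.0, 10000.0):
    print(mean_life, round(prob_acceptance(20, 2, 500.0, mean_life), 3))
```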


Cited by
Posted Content
01 Jan 2010
TL;DR: The authors present a model and method for isolating managerial intuition and find that a combination of model and manager always outperforms either of these decision inputs in isolation, with an average R² increase of 0.09 (16%) above the best single decision input in cross-validated model analyses.
Abstract: We focus on ways of combining simple database models with managerial intuition. We present a model and method for isolating managerial intuition. For five different business forecasting situations, our results indicate that a combination of model and manager always outperforms either of these decision inputs in isolation, an average R² increase of 0.09 (16%) above the best single decision input in cross-validated model analyses. We assess the validity of an equal weighting heuristic, 50% model + 50% manager, and then discuss why our results might differ from previous research on expert judgment.
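
The equal-weighting heuristic discussed in the abstract is easy to demonstrate. The sketch below combines a synthetic "model" forecast and a synthetic "manager" forecast as 0.5·model + 0.5·manager and compares R² values; the data are simulated and purely illustrative, not the study's five forecasting situations.

```python
# Illustrative only: 50% model + 50% manager combination versus either input alone.
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
truth = rng.normal(100.0, 20.0, size=200)
model_pred = truth + rng.normal(0.0, 12.0, size=200)     # database model: unbiased, noisy
manager_pred = truth + rng.normal(0.0, 15.0, size=200)   # manager: noisy in a different way
combined = 0.5 * model_pred + 0.5 * manager_pred         # equal-weighting heuristic

for name, pred in [("model", model_pred), ("manager", manager_pred), ("50/50", combined)]:
    print(name, round(r2_score(truth, pred), 3))
```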

400 citations

Journal ArticleDOI
J. H. Sheesley
TL;DR: Methods for Statistical Analysis of Reliability and Life Data, Journal of Quality Technology, Vol. 9, No. 1 (1977), pp. 44-45.
Abstract: (1977). Methods for Statistical Analysis of Reliability and Life Data. Journal of Quality Technology: Vol. 9, No. 1, pp. 44-45.

76 citations

Journal ArticleDOI
TL;DR: A comprehensive review of fatigue modeling methods using neural networks is presented in this article, covering five applications: fatigue life prediction, fatigue crack, fatigue damage diagnosis, fatigue strength, and fatigue load.
Abstract: Neural network (NN) models have significantly impacted fatigue-related engineering communities and are expected to increase rapidly due to the recent advancements in machine learning and artificial intelligence. A comprehensive review of fatigue modeling methods using NNs is lacking and will help to recognize past achievements and suggest future research directions. Thus, this paper presents a survey of 251 publications between 1990 and July 2021. The NN modeling in fatigue is classified into five applications: fatigue life prediction, fatigue crack, fatigue damage diagnosis, fatigue strength, and fatigue load. A wide range of NN architectures are employed in the literature and are summarized in this review. An overview of important considerations and current limitations for the application of NNs in fatigue is provided. Statistical analysis for the past and the current trend is provided with representative examples. Existing gaps and future research directions are also presented based on the reviewed articles.

40 citations

Journal ArticleDOI
TL;DR: This paper presents a new technique, based on a probability transformation and pivotal quantity averaging, for constructing exact statistical tolerance limits on outcomes (for example, on order statistics) in future samples from the two-parameter Weibull distribution.
Abstract: The logical purpose for a statistical tolerance limit is to predict future outcomes for some (say, production) process. The coverage value γ is the percentage of the future process outcomes to be captured by the prediction, and the confidence level (1 – α) is the proportion of the time we hope to capture that percentage γ. Tolerance limits of the type mentioned above are considered in this paper, which presents a new technique for constructing exact statistical (lower and upper) tolerance limits on outcomes (for example, on order statistics) in future samples. Attention is restricted to the two-parameter Weibull distribution under parametric uncertainty. The technique used here emphasizes pivotal quantities relevant for obtaining tolerance factors and is applicable whenever the statistical problem is invariant under a group of transformations that acts transitively on the parameter space. It does not require the construction of any tables and is applicable whether the experimental data are complete or Type II censored. The exact tolerance limits on order statistics associated with sampling from underlying distributions can be found easily and quickly, making tables, simulation, Monte Carlo estimated percentiles, special computer programs, and approximation unnecessary. The proposed technique is based on a probability transformation and pivotal quantity averaging. It is conceptually simple and easy to use. The discussion is restricted to one-sided tolerance limits. Finally, we give numerical examples, where the proposed analytical methodology is illustrated in terms of the two-parameter Weibull distribution. Applications to other log-location-scale distributions could follow directly.
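
For contrast with the unknown-parameter case the paper solves, the sketch below shows the trivial known-parameter version for the two-parameter Weibull distribution: a lower limit capturing a proportion γ of future outcomes is simply the (1 − γ) quantile. The shape and scale values are hypothetical; the confidence level 1 − α only enters once the parameters are treated as unknown, which is where the paper's pivotal-quantity technique applies.

```python
# Known-parameter illustration only: with Weibull shape k and scale lam known,
# a lower limit that captures a proportion gamma of future outcomes is the
# (1 - gamma) quantile,  lam * (-ln gamma)**(1/k).
from scipy.stats import weibull_min

k, lam = 1.8, 1500.0      # hypothetical shape and scale (e.g., hours)
gamma = 0.90              # desired coverage of future outcomes

lower = weibull_min.ppf(1.0 - gamma, c=k, scale=lam)
print(lower)              # equals lam * (-log(gamma)) ** (1/k)
```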

16 citations

Journal ArticleDOI
TL;DR: This paper provides a new technique for constructing unbiased statistical prediction limits on order statistics of future samples using the results of a previous sample from the same underlying inverse Gaussian distribution.
Abstract: This paper provides a new technique for constructing unbiased statistical prediction limits on order statistics of future samples using the results of a previous sample from the same underlying inverse Gaussian distribution. Statistical prediction limits for the inverse Gaussian distribution are obtained from a classical frequentist viewpoint. The results have direct application in reliability theory, where the time until the first failure in a group of several items in service provides a measure of assurance regarding the operation of the items. The statistical prediction limits are required as specifications on future life for components, as warranty limits for the future performance of a specified number of systems with standby units, and in various other applications. Prediction limits are an important statistical tool in the area of quality control. The lower prediction limits are often used as warranty criteria by manufacturers. The technique used here does not require the construction of any tables. It requires a quantile of the beta distribution and is conceptually simple and easy to use. The discussion is restricted to one-sided tolerance limits. For illustration, a numerical example is given.
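
The beta-quantile step mentioned in the abstract can be illustrated for the known-parameter case: the distribution-function value of the smallest of m future observations follows a Beta(1, m) law, so a limit that the future minimum exceeds with probability γ is F⁻¹(q) with q = Beta_{1−γ}(1, m). The sketch below uses hypothetical inverse Gaussian parameters; the paper's contribution is the exact limit when those parameters are unknown.

```python
# Known-parameter illustration only: lower limit exceeded by the smallest of
# m future inverse Gaussian observations with probability gamma, via a quantile
# of the Beta(1, m) distribution.
from scipy.stats import beta, invgauss

gamma = 0.95              # desired probability that the future minimum exceeds the limit
m = 10                    # size of the future sample
mu, scale = 2.0, 1000.0   # hypothetical inverse Gaussian parameters (scipy's parameterization)

q = beta.ppf(1.0 - gamma, 1, m)        # equals 1 - gamma**(1/m)
limit = invgauss.ppf(q, mu, scale=scale)
print(limit)
```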

15 citations