
Showing papers on "Markov chain" published in 1988


Book
14 Nov 1988
TL;DR: In this book, a general heuristic for probability approximations is developed and applied to Markov chain hitting times, extremes of stationary and locally Brownian processes, combinatorial extrema, stochastic geometry, multi-dimensional diffusions, and random fields.
Abstract: A The Heuristic.- B Markov Chain Hitting Times.- C Extremes of Stationary Processes.- D Extremes of Locally Brownian Processes.- E Simple Combinatorics.- F Combinatorics for Processes.- G Exponential Combinatorial Extrema.- H Stochastic Geometry.- I Multi-Dimensional Diffusions.- J Random Fields.- K Brownian Motion: Local Distributions.- L Miscellaneous Examples.- M The Eigenvalue Method.- Postscript.

576 citations


Book
01 Jan 1988
TL;DR: The General Theory of Markov Processes, by M. Sharpe, San Diego, 1988, 420 pp.
Abstract: 8. General Theory of Markov Processes. By M. Sharpe. ISBN 0-12-639060-6. Academic Press, San Diego, 1988. 420 pp. $49.50.

493 citations


Journal ArticleDOI
TL;DR: In this article, the authors derive efficient computational procedures for, and numerically investigate, the following fluid model of interest in manufacturing and communications: m producing machines supply a buffer, n consuming machines feed off it, and each machine independently alternates between exponentially distributed random periods in the 'in service' and 'failed' states.
Abstract: This paper analyzes, derives efficient computational procedures for, and numerically investigates the following fluid model which is of interest in manufacturing and communications: m producing machines supply a buffer, n consuming machines feed off it. Each machine independently alternates between exponentially distributed random periods in the 'in service' and 'failed' states. Producers/consumers have their own failure/repair rates and working capacities. When the buffer is either full or empty some of the machines in service are not utilized to capacity; otherwise they are fully utilized. Our main result is for the state distribution of the Markovian system in equilibrium, which is the solution of a system of differential equations. The spectral expansion for its solution is obtained. Two important decompositions are obtained: the eigenvectors have the Kronecker-product form in lower-dimensional vectors, and the characteristic polynomial is factored, with each factor an explicitly given polynomial of degree at most 4. All eigenvalues are real. For each of various cases of the model, a system of linear equations is derived from the boundary conditions; their solution completes the spectral expansion. The operation count of the entire procedure is O(m^3 n^3); independence from buffer size exemplifies an important attraction of fluid models. Computations have revealed several interesting features, such as the benefit of small machines and the inelasticity of production rate to inventory. We also give results on the eigenvalues of a more general fluid model, reversible Markov drift processes.

461 citations
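A minimal Python sketch of the spectral machinery described above (not the paper's full procedure: boundary conditions, the degree-4 factorization, and zero-drift states are glossed over, and all rates are made up). The modulating chain is assembled as a Kronecker sum over independent machine groups, and the exponents and eigenvectors of the expansion F(x) = Σ_j a_j e^{z_j x} φ_j solve the generalized eigenproblem z φ D = φ Q:

```python
import numpy as np
from scipy.linalg import eig

def machines_generator(count, fail, repair):
    """Birth-death generator for the number of 'up' machines among
    `count` i.i.d. two-state (up/down) exponential machines."""
    Q = np.zeros((count + 1, count + 1))
    for k in range(count + 1):
        if k < count:                 # a down machine gets repaired
            Q[k, k + 1] = (count - k) * repair
        if k > 0:                     # an up machine fails
            Q[k, k - 1] = k * fail
        Q[k, k] = -Q[k].sum()
    return Q

# hypothetical parameters: m producers, n consumers (all rates made up)
m, n = 3, 2
Qp = machines_generator(m, fail=1.0, repair=2.0)
Qc = machines_generator(n, fail=0.5, repair=1.5)

# modulating chain of the whole system: Kronecker sum (by independence)
Q = np.kron(Qp, np.eye(n + 1)) + np.kron(np.eye(m + 1), Qc)

# net drift in each joint state: production minus consumption capacity
prod_rate, cons_rate = 1.0, 1.4
drift = np.array([prod_rate * i - cons_rate * j
                  for i in range(m + 1) for j in range(n + 1)])
D = np.diag(drift)

# spectral expansion: z phi D = phi Q, i.e. Q^T psi = z D psi with phi = psi^T;
# zero-drift states give infinite generalized eigenvalues and are handled by
# the boundary conditions in the full procedure, so we report the finite ones
z, psi = eig(Q.T, D)
z = z[np.isfinite(z)]
print(np.sort(z.real))   # finite eigenvalues; real, as the paper proves
```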


Journal ArticleDOI
TL;DR: A quasi-likelihood (QL) approach to regression analysis with time series data is discussed; as with QL for independent observations, large-sample properties of the regression coefficients depend only on correct specification of the first conditional moment.
Abstract: This paper discusses a quasi-likelihood (QL) approach to regression analysis with time series data. We consider a class of Markov models, referred to by Cox (1981, Scandinavian Journal of Statistics 8, 93-115) as "observation-driven" models in which the conditional means and variances given the past are explicit functions of past outcomes. The class includes autoregressive and Markov chain models for continuous and categorical observations as well as models for counts (e.g., Poisson) and continuous outcomes with constant coefficient of variation (e.g., gamma). We focus on Poisson and gamma data for illustration. Analogous to QL for independent observations, large-sample properties of the regression coefficients depend only on correct specification of the first conditional moment.

425 citations
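As an illustration of the QL idea for observation-driven models, here is a minimal Python sketch (the model form, parameter values, and simulated data are our own, not the paper's). With the Poisson variance function V(μ) = μ and a log link, the quasi-score reduces to Σ_t z_t (y_t − μ_t), which we solve with a root finder:

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(0)

# simulate a hypothetical observation-driven Poisson series:
# mu_t = exp(b0 + b1 * x_t + g * log(1 + y_{t-1}))
T = 500
x = rng.normal(size=T)
y = np.zeros(T, dtype=int)
b0, b1, g = 0.3, 0.5, 0.4
for t in range(1, T):
    mu = np.exp(b0 + b1 * x[t] + g * np.log1p(y[t - 1]))
    y[t] = rng.poisson(mu)

def quasi_score(theta):
    """QL estimating equation sum_t (dmu/dtheta) (y_t - mu_t) / V(mu_t);
    with V(mu) = mu and dmu/dtheta = mu * z_t, this is sum_t z_t (y_t - mu_t)."""
    b0, b1, g = theta
    mu = np.exp(b0 + b1 * x[1:] + g * np.log1p(y[:-1]))
    Z = np.column_stack([np.ones(T - 1), x[1:], np.log1p(y[:-1])])
    return Z.T @ (y[1:] - mu)

sol = root(quasi_score, x0=np.zeros(3))
print(sol.x)   # should land near the true (0.3, 0.5, 0.4)
```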



Journal ArticleDOI
TL;DR: In this article, a general version of Cheeger's inequality is proved for discrete-time Markov chains and continuous-time Markovian jump processes, both reversible and nonreversible, with general state space.
Abstract: We prove a general version of Cheeger's inequality for discrete-time Markov chains and continuous-time Markovian jump processes, both reversible and nonreversible, with general state space. We also prove a version of Cheeger's inequality for Markov chains and processes with killing. As an application, we prove L^2 exponential convergence to equilibrium for random walk with inward drift on a class of countable rooted graphs.

347 citations
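For reference, the standard reversible, discrete-time form of the inequality (the paper proves more general versions, including nonreversible, continuous-time, and killed processes) bounds the spectral gap by the conductance Φ:

```latex
\Phi \;=\; \min_{S \,:\, 0 < \pi(S) \le 1/2}
  \frac{\sum_{x \in S,\; y \notin S} \pi(x) P(x,y)}{\pi(S)},
\qquad
\frac{\Phi^2}{2} \;\le\; 1 - \lambda_2 \;\le\; 2\,\Phi .
```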


Journal ArticleDOI
TL;DR: The numerical evaluation of Markov model transient behavior is considered, with a focus on the general problem of finding the state probability vector of a large, continuous-time, discrete-state Markov chain.

324 citations
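One standard method for this transient problem is uniformization (randomization): p(t) = Σ_k e^{-qt}(qt)^k/k! · p0 P^k with P = I + Q/q. The sketch below assumes that method and made-up rates; it is an illustration of the problem the paper studies, not necessarily the specific procedure it evaluates:

```python
import numpy as np
from scipy.stats import poisson

def transient_probs(Q, p0, t, tol=1e-10):
    """Transient state probabilities of a CTMC via uniformization:
    p(t) = sum_k Poisson(qt; k) * p0 @ P^k, with P = I + Q/q."""
    q = max(-Q.diagonal()) * 1.05                 # uniformization rate q >= max |Q_ii|
    P = np.eye(Q.shape[0]) + Q / q
    K = poisson.ppf(1 - tol, q * t).astype(int)   # truncation point
    w = poisson.pmf(np.arange(K + 1), q * t)      # Poisson weights
    v, acc = p0.copy(), w[0] * p0
    for k in range(1, K + 1):
        v = v @ P
        acc += w[k] * v
    return acc

# two-state repairable component: failure rate 1, repair rate 10 (made up)
Q = np.array([[-1.0, 1.0], [10.0, -10.0]])
print(transient_probs(Q, np.array([1.0, 0.0]), t=0.5))
```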


Journal ArticleDOI
TL;DR: Numerical results indicate that distributions of cumulative performance measures over finite intervals reveal behavior of multiprocessor systems not indicated by either steady-state or expected values alone.
Abstract: The behavior of the multiprocessor system is described as a continuous-time Markov chain, and a reward rate (performance measure) is associated with each state. The distribution of performability is evaluated for analytical models of a multiprocessor system using a polynomial-time algorithm that obtains the distribution of performability for repairable, as well as nonrepairable, systems with heterogeneous components, with a substantial speedup over earlier work. Numerical results indicate that distributions of cumulative performance measures over finite intervals reveal behavior of multiprocessor systems not indicated by either steady-state or expected values alone.

281 citations


Proceedings ArticleDOI
A. Poritz1
11 Apr 1988
TL;DR: The main tool in hidden Markov modeling is the Baum-Welch algorithm for maximum likelihood estimation of the model parameters, which is discussed both from an intuitive point of view as an exercise in the art of counting and from a formalpoint of view via the information-theoretic Q-function.
Abstract: Hidden Markov modeling is a probabilistic technique for the study of time series. Hidden Markov theory permits modeling with any of the classical probability distributions. The costs of implementation are linear in the length of data. Models can be nested to reflect hierarchical sources of knowledge. These and other desirable features have made hidden Markov methods increasingly attractive for problems in language, speech and signal processing. The basic ideas are introduced by elementary examples in the spirit of the Polya urn models. The main tool in hidden Markov modeling is the Baum-Welch (or forward-backward) algorithm for maximum likelihood estimation of the model parameters. This iterative algorithm is discussed both from an intuitive point of view as an exercise in the art of counting and from a formal point of view via the information-theoretic Q-function. Selected examples drawn from the literature illustrate how the Baum-Welch technique places a rich variety of computational models at the disposal of the researcher.

276 citations
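As a concrete companion to the "art of counting" view, here is a minimal scaled forward-backward sketch for a discrete-output HMM, with one re-estimation step for the transition matrix. The notation is textbook-generic rather than the paper's, and the toy numbers are made up:

```python
import numpy as np

def forward_backward(A, B, pi, obs):
    """One E-step of Baum-Welch (scaled forward-backward).
    A: (S,S) transitions, B: (S,V) emissions, pi: (S,) initial, obs: ints."""
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S)); beta = np.zeros((T, S)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t+1]] * beta[t+1])) / c[t+1]
    gamma = alpha * beta                        # posterior state probabilities
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((S, S))                       # expected transition "counts"
    for t in range(T - 1):
        x = alpha[t][:, None] * A * (B[:, obs[t+1]] * beta[t+1])[None, :] / c[t+1]
        xi += x / x.sum()
    A_new = xi / xi.sum(axis=1, keepdims=True)  # M-step for transitions
    return gamma, A_new

# toy two-state, two-symbol example (made-up numbers)
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.3], [0.1, 0.9]])
pi = np.array([0.5, 0.5])
gamma, A_new = forward_backward(A, B, pi, [0, 0, 1, 1, 1, 0])
print(A_new)
```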


Journal ArticleDOI
TL;DR: Some properties of multivariate Gaussian Markov random fields (GMRFs) on multi-dimensional lattices are given, estimation procedures are discussed, and a numerical example from the area of image processing is presented.

271 citations
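For something concrete, here is a hedged sketch of sampling a first-order GMRF on a square lattice from its sparse precision matrix; the model and parameter values are generic illustrations, not the paper's multivariate formulation or its estimators:

```python
import numpy as np

# precision matrix Q = kappa*I + tau*(graph Laplacian) of the 4-neighbour
# lattice; kappa > 0 makes it positive definite (parameters made up)
n, kappa, tau = 32, 0.1, 1.0
N = n * n
Qm = np.zeros((N, N))
for i in range(n):
    for j in range(n):
        s = i * n + j
        Qm[s, s] = kappa + tau * ((i > 0) + (i < n-1) + (j > 0) + (j < n-1))
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n:
                Qm[s, a*n + b] = -tau

L = np.linalg.cholesky(Qm)        # Q = L L^T
z = np.random.default_rng(1).normal(size=N)
x = np.linalg.solve(L.T, z)       # then cov(x) = Q^{-1}, i.e. x ~ GMRF
field = x.reshape(n, n)
print(field.std())
```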


Journal ArticleDOI
TL;DR: In this paper, a stable recursive scheme is derived to compute the steady-state probability vector for matrix analogues of Markov chains of the M/G/1 type; the scheme is substantially superior to the Gauss-Seidel iterative scheme.
Abstract: For the matrix analogues of Markov chains of the M/G/1 type, we derive a stable recursive scheme to compute the steady state probability vector. This scheme, which is the natural generalization of a clever device attributed to P.J. Burke in the M/G/1 case, is substantially superior to the Gauss-Seidel iterative scheme.
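In the scalar M/G/1 case the recursion reduces to Burke's device, which uses only nonnegative tail sums and so involves no cancellation; the paper's contribution is the matrix generalization. A sketch under made-up M/D/1 parameters:

```python
import numpy as np
from scipy.stats import poisson

# Scalar special case of the stable recursion.  With a_k = P(k arrivals
# during one service) and tail sums abar_k = sum_{j >= k} a_j, the
# embedded-chain stationary vector satisfies
#   pi_i = (pi_0*abar_i + sum_{k=1}^{i-1} pi_k*abar_{i+1-k}) / (1 - abar_1).
# Every term is nonnegative, so no subtractive cancellation occurs.

rho = 0.7                          # M/D/1: Poisson arrivals, unit service (made up)
K = 60                             # truncation level for the illustration
a = poisson.pmf(np.arange(K + 1), rho)
abar = a[::-1].cumsum()[::-1]      # abar[k] ~ sum_{j >= k} a_j

pi = np.zeros(K)
pi[0] = 1.0 - rho                  # M/G/1: P(empty at a departure) = 1 - rho
for i in range(1, K):
    s = pi[0] * abar[i] + sum(pi[k] * abar[i + 1 - k] for k in range(1, i))
    pi[i] = s / (1.0 - abar[1])
print(pi[:5], pi.sum())            # queue-length distribution at departures
```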

Journal Article
TL;DR: A discrete-time Markov process on a locally compact metric space, obtained by random iteration of the Lipschitz maps w_1, w_2, ..., w_n, is considered.
Abstract: We consider a discrete-time Markov process on a locally compact metric space obtained by random iteration of the Lipschitz maps w_1, w_2, ..., w_n.
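Such chains can be simulated directly by the "chaos game": repeatedly apply a randomly chosen map, and the orbit approximates the chain's stationary (attractor) distribution. A sketch using the standard Sierpinski-triangle contractions (our choice of example, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)
maps = [                                        # three affine contractions:
    lambda p: 0.5 * p,                          # the Sierpinski-triangle IFS
    lambda p: 0.5 * p + np.array([0.5, 0.0]),
    lambda p: 0.5 * p + np.array([0.25, 0.5]),
]
probs = [1/3, 1/3, 1/3]

p = np.array([0.1, 0.1])
orbit = []
for t in range(10000):
    w = maps[rng.choice(len(maps), p=probs)]    # pick a map at random
    p = w(p)                                    # one step of the Markov chain
    if t > 100:                                 # discard burn-in
        orbit.append(p.copy())
orbit = np.array(orbit)
print(orbit.mean(axis=0))   # empirical moments of the stationary distribution
```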

Proceedings ArticleDOI
11 Apr 1988
TL;DR: An automatic technique for constructing Markov word models is described and results are included of experiments with speaker-dependent and speaker-independent models on several isolated-word recognition tasks.
Abstract: The Speech Recognition Group at IBM Research has developed a real-time, isolated-word speech recognizer called Tangora, which accepts natural English sentences drawn from a vocabulary of 20000 words. Despite its large vocabulary, the Tangora recognizer requires only about 20 minutes of speech from each new user for training purposes. The accuracy of the system and its ease of training are largely attributable to the use of hidden Markov models in its acoustic match component. An automatic technique for constructing Markov word models is described and results are included of experiments with speaker-dependent and speaker-independent models on several isolated-word recognition tasks.

Proceedings ArticleDOI
01 Jan 1988
TL;DR: The permanent function arises naturally in a number of fields, including algebra, combinatorial enumeration and the physical sciences, and has been an object of study by mathematicians for many years (see [14] for background).
Abstract: The permanent of an n x n matrix A with 0-1 entries a_ij is defined by $\mathrm{per}(A) = \sum_{\sigma} \prod_{i=0}^{n-1} a_{i\sigma(i)}$, where the sum is over all permutations σ of [n] = {0, …, n − 1}. Evaluating per(A) is equivalent to counting perfect matchings (1-factors) in the bipartite graph G = (V1, V2, E), where V1 = V2 = [n] and (i, j) ∈ E iff a_ij = 1. The permanent function arises naturally in a number of fields, including algebra, combinatorial enumeration and the physical sciences, and has been an object of study by mathematicians for many years (see [14] for background). Despite considerable effort, and in contrast with the syntactically very similar determinant, no efficient procedure for computing this function is known.

Convincing evidence for the inherent intractability of the permanent was provided in the late 1970s by Valiant [19], who demonstrated that it is complete for the class #P of enumeration problems and thus as hard as counting any NP structures. Interest has therefore recently turned to finding computationally feasible approximation algorithms (see, e.g., [11], [17]). The notion of approximation we shall use in this paper is as follows: let ƒ be a function from input strings to natural numbers. A fully-polynomial randomised approximation scheme (fpras) for ƒ is a probabilistic algorithm which, when presented with a string x and a real number ε > 0, runs in time polynomial in |x| and 1/ε and outputs a number which with high probability estimates ƒ(x) to within a factor of (1 + ε).

A promising approach to finding a fpras for the permanent was recently proposed by Broder [7], and involves reducing the problem of counting perfect matchings in a graph to that of generating them randomly from an almost uniform distribution. The latter problem is then amenable to the following dynamic stochastic technique: construct a Markov chain whose states correspond to perfect and 'near-perfect' matchings, and which converges to a stationary distribution which is uniform over the states. Transitions in the chain correspond to simple local perturbations of the structures. Then, provided convergence is fast enough, we can generate matchings by simulating the chain for a small number of steps and outputting the structure corresponding to the final state.

When applying this technique, one is faced with the task of proving that a given Markov chain is rapidly mixing, i.e., that after a short period of evolution the distribution of the final state is essentially independent of the initial state. 'Short' here means bounded by a polynomial in the input size; since the state space itself may be exponentially large, the chain must typically be close to stationarity after visiting only a small fraction of the space.

Recent work on the rate of convergence of Markov chains has focussed on stochastic concepts such as coupling [1] and stopping times [3]. While these methods are intuitively appealing and yield tight bounds for simple chains, the analysis involved becomes extremely complicated for more interesting processes which lack a high degree of symmetry. Using a complex coupling argument, Broder [7] claims that the perfect matchings chain above is rapidly mixing provided the bipartite graph is dense, i.e., has minimum vertex degree at least n/2. This immediately yields a fpras for the dense permanent. However, the coupling proof is hard to penetrate; more seriously, as has been observed by Mihail [13], it contains a fundamental error which is not easily correctable.

In this paper, we propose an alternative technique for analysing the rate of convergence of Markov chains based on a structural property of the underlying weighted graph. Under fairly general conditions, a finite ergodic Markov chain is rapidly mixing iff the conductance of its underlying graph is not too small. This characterisation is related to recent work by Alon [4] and Alon and Milman [5] on eigenvalues and expander graphs. While similar characterisations of rapid mixing have been noted before (see, e.g., [2]), independent estimates of the conductance have proved elusive for non-trivial chains. Using a novel method of analysis, we are able to derive a lower bound on the conductance of Broder's perfect matchings chain under the same density assumption, thus verifying that it is indeed rapidly mixing. The existence of a fpras for the dense permanent is therefore established.

Reductions from approximate counting to almost uniform generation similar to that mentioned above for perfect matchings also hold for the large class of combinatorial structures which are self-reducible [10]. Consequently, the Markov chain approach is potentially a powerful general tool for obtaining approximation algorithms for hard combinatorial enumeration problems. Moreover, our proof technique for rapid mixing also seems to generalise to other interesting chains. We substantiate this claim by considering an example from the field of statistical physics, namely the monomer-dimer problem (see, e.g., [8]). Here a physical system is modelled by a set of combinatorial structures, or configurations, each of which has an associated weight. Most interesting properties of the model can be computed from the partition function, which is just the sum of the weights of the configurations. By means of a reduction to the associated generation problem, in which configurations are selected with probabilities proportional to their weights, we are able to show the existence of a fpras for the monomer-dimer partition function under quite general conditions. Significantly, in such applications the generation problem is often of interest in its own right.

Our final result concerns notions of approximate counting and their robustness. We show that, for all self-reducible NP structures, randomised approximate counting to within a factor of (1 + n^b), where n is the input size, is possible in polynomial time either for all b ∈ R or for no b ∈ R. We are therefore justified in calling such a counting problem approximable iff there exists a polynomial time randomised procedure which with high probability estimates the number of structures within ratio (1 + n^b) for some arbitrary b ∈ R. The connection with the earlier part of the paper is our use of a Markov chain simulation to reduce almost uniform generation to approximate counting within any factor of the above form: once again, the proof that the chain is rapidly mixing follows from the conductance characterisation.
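The conductance characterisation can be checked by brute force on a tiny chain. Below is a hedged Python sketch (not the paper's matchings chain, whose state space is exponential): it computes the conductance of a lazy random walk on a 6-vertex path by enumerating all cuts and verifies the Cheeger-type sandwich Φ²/2 ≤ 1 − λ₂ ≤ 2Φ numerically:

```python
import itertools
import numpy as np

def conductance(P, pi):
    """Brute-force conductance of a small reversible chain: minimize
    flow(S, S^c) / pi(S) over subsets with pi(S) <= 1/2."""
    n = len(pi)
    Qf = pi[:, None] * P                     # edge flows pi(x) P(x, y)
    best = np.inf
    for r in range(1, n):
        for S in itertools.combinations(range(n), r):
            S = list(S)
            if pi[S].sum() <= 0.5 + 1e-12:
                Sc = [i for i in range(n) if i not in S]
                best = min(best, Qf[np.ix_(S, Sc)].sum() / pi[S].sum())
    return best

# lazy random walk on a 6-vertex path (a simple bottlenecked chain);
# transition matrix is symmetric, so the uniform distribution is stationary
n = 6
P = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            P[i, j] = 0.25
    P[i, i] = 1 - P[i].sum()
pi = np.ones(n) / n

Phi = conductance(P, pi)
gap = 1 - np.sort(np.linalg.eigvals(P).real)[-2]
print(Phi, gap, Phi**2 / 2 <= gap <= 2 * Phi)   # sandwich should print True
```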

Journal ArticleDOI
TL;DR: The proposed algorithm speeds generation of truncated Poisson variates and the computation of expected terminal reward in continuous-time, uniformizable Markov chains and can be used to evaluate formulas involving Poisson probabilities.
Abstract: We propose an algorithm to compute the set of individual (nonnegligible) Poisson probabilities, rigorously bound truncation error, and guarantee no overflow or underflow. Work and space requirements are modest, both proportional to the square root of the Poisson parameter. Our algorithm appears numerically stable. We know no other algorithm with all these (good) features. Our algorithm speeds generation of truncated Poisson variates and the computation of expected terminal reward in continuous-time, uniformizable Markov chains. More generally, our algorithm can be used to evaluate formulas involving Poisson probabilities.
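The heart of such schemes can be sketched as follows: recurse outward from the mode in unnormalized weights and normalize at the end, so e^{-λ} is never formed and nothing underflows. This simplified Python sketch omits the paper's rigorous truncation-error bounds and overflow guards, and the window constant is made up:

```python
import numpy as np

def poisson_window(lam, half_width_mult=10.0):
    """Nonnegligible Poisson(lam) probabilities via outward recursion from
    the mode in unnormalized weights, then a single normalization."""
    m = int(lam)                                  # mode
    half = int(half_width_mult * np.sqrt(lam) + 10)  # window ~ O(sqrt(lam))
    lo, hi = max(0, m - half), m + half
    w = np.zeros(hi - lo + 1)
    w[m - lo] = 1.0                               # weight 1 at the mode
    for k in range(m + 1, hi + 1):                # upward: w_k = w_{k-1} lam/k
        w[k - lo] = w[k - 1 - lo] * lam / k
    for k in range(m - 1, lo - 1, -1):            # downward: w_k = w_{k+1}(k+1)/lam
        w[k - lo] = w[k + 1 - lo] * (k + 1) / lam
    w /= w.sum()                                  # normalize; avoids exp(-lam)
    return lo, w

lo, w = poisson_window(1e6)                       # huge mean, no under/overflow
print(lo, w.max(), w.sum())
```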

PatentDOI
TL;DR: In this paper, a Markov model speech recognition system is presented, in which each vocabulary word is represented as a baseform constructed of a sequence of Markov models, and each word likelihood based on acoustic characteristics is determined by matching a string of labels generated by the acoustic processor against the probabilities stored for each word baseform.
Abstract: In a Markov model speech recognition system, an acoustic processor generates one label after another selected from an alphabet of labels. Each vocabulary word is represented as a baseform constructed of a sequence of Markov models. Each Markov model is stored in a computer memory as (a) a plurality of states; (b) a plurality of arcs, each extending from a state to a state with a respective stored probability; and (c) stored label output probabilities, each indicating the likelihood of a given label being produced at a certain arc. Word likelihood based on acoustic characteristics is determined by matching a string of labels generated by the acoustic processor against the probabilities stored for each word baseform. The present invention involves the specifying of label parameters and the constructing of word baseforms interdependently in a Markov model speech recognition system to improve system performance.

Journal ArticleDOI
TL;DR: In this paper, the authors show that the reinforced random walk can vary from transient to recurrent, depending on the value of an adjustable parameter measuring the strength of the feedback, which is calculated at the phase transition.
Abstract: A random walk on an infinite tree is given a particular kind of positive feedback so edges already traversed are more likely to be traversed in the future. Using exchangeability theory, the process is shown to be equivalent to a random walk in a random environment (RWRE), that is to say, a mixture of Markov chains. Criteria are given to determine whether a RWRE is transient or recurrent. These criteria apply to show that the reinforced random walk can vary from transient to recurrent, depending on the value of an adjustable parameter measuring the strength of the feedback. The value of the parameter at the phase transition is calculated.
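A quick simulation conveys the flavor of the phase transition (illustration only: the paper's criteria are analytic, the tree, reinforcement rule, and parameter values below are our own choices, and a finite run cannot locate the critical point):

```python
import numpy as np
from collections import defaultdict

def simulate(delta, steps, seed=0):
    """Linearly edge-reinforced random walk on the infinite binary tree:
    each edge starts at weight 1 and gains `delta` per crossing.
    Edges are keyed by their deeper endpoint; nodes are tuples of 0/1."""
    rng = np.random.default_rng(seed)
    weight = defaultdict(lambda: 1.0)
    node, maxdepth = (), 0
    for _ in range(steps):
        nbrs = [node + (0,), node + (1,)]
        if node:                          # non-root nodes also have a parent
            nbrs.append(node[:-1])
        w = np.array([weight[n if len(n) > len(node) else node] for n in nbrs])
        nxt = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
        edge = nxt if len(nxt) > len(node) else node
        weight[edge] += delta             # reinforce the crossed edge
        node = nxt
        maxdepth = max(maxdepth, len(node))
    return maxdepth

# weak reinforcement lets the walk escape down the tree;
# strong reinforcement tends to keep it near the root
print(simulate(delta=0.1, steps=20000), simulate(delta=5.0, steps=20000))
```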

01 Jan 1988
TL;DR: In this article, a simple reparametrization of the left truncation model as a three-state Markov process is proposed; the derivation of a nonparametric estimator of a distribution function under random truncation is then a special case of results on the statistical theory of counting processes.
Abstract: Random left truncation is modelled by the conditional distribution of the random variable $X$ of interest, given that it is larger than the truncating random variable $Y$; usually $X$ and $Y$ are assumed independent. The present paper is based on a simple reparametrization of the left truncation model as a three-state Markov process. The derivation of a nonparametric estimator of a distribution function under random truncation is then a special case of results on the statistical theory of counting processes by Aalen and Johansen. This framework also clarifies the status of the estimator as a nonparametric maximum likelihood estimator, and consistency, asymptotic normality and efficiency may be derived directly as special cases of Aalen and Johansen's general theorems and later work. Although we do not carry these through here, we note that the present framework also allows several generalizations: censoring may be incorporated; the independence hypothesis underlying the truncation models may be tested; ties (occurring when the distributions of $F$ and $G$ have discrete components) may be handled.
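A sketch of the resulting product-limit estimator (often credited to Lynden-Bell) on simulated truncated data; the simulation design is ours, and the shifted support keeps the risk sets well away from the estimator's known instability at the left edge:

```python
import numpy as np

rng = np.random.default_rng(3)
X = 0.5 + rng.exponential(1.0, 4000)     # variable of interest (made up)
Y = rng.uniform(0.0, 2.0, 4000)          # truncating variable (made up)
keep = X >= Y                            # a pair is observed only when X >= Y
x, y = X[keep], Y[keep]

# product-limit estimator under left truncation:
#   1 - F(t) = prod_{s <= t} (1 - d(s)/R(s)),
# with risk set R(s) = #{i : y_i <= s <= x_i} and d(s) "deaths" at s
ts = np.unique(x)
surv = np.ones_like(ts)
running = 1.0
for j, s in enumerate(ts):
    d = (x == s).sum()
    R = ((y <= s) & (s <= x)).sum()
    running *= 1.0 - d / R
    surv[j] = running

t0 = np.searchsorted(ts, 1.5)
print(1.0 - surv[t0])   # estimate of F(1.5); true value is 1 - e^{-1} ~ 0.632
```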

Journal Article
TL;DR: In this article, a bridge performance prediction model using the Markov chain was developed, which can be used to predict the percentages of bridges with different condition ratings as well as to develop performance curves of bridges.
Abstract: As part of a study to develop a comprehensive bridge management system for the Indiana Department of Highways (IDOH), a bridge performance prediction model using the Markov chain was developed. The model can be used to predict the percentages of bridges with different condition ratings as well as to develop performance curves of bridges. The Markov chain, a probability-based method, was used in the model to reflect the stochastic nature of bridge conditions. The study exhibited the power of the Markov chain approach in prediction or estimation of future bridge conditions. The procedure, although simple, was found to provide a high level of accuracy in predicting bridge conditions.
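The prediction step itself is elementary: with an annual condition-rating transition matrix P, the rating distribution of a cohort after k years is p0 P^k. A sketch with made-up ratings and transition probabilities (not IDOH's calibrated values):

```python
import numpy as np

# condition ratings 9 (new) down to 4; each year a bridge keeps its rating
# or drops one level (all probabilities made up for illustration)
P = np.array([
    [0.80, 0.20, 0.00, 0.00, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00, 0.00, 0.00],
    [0.00, 0.00, 0.88, 0.12, 0.00, 0.00],
    [0.00, 0.00, 0.00, 0.90, 0.10, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.92, 0.08],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],   # rating 4 treated as absorbing
])
p0 = np.array([1.0, 0, 0, 0, 0, 0])          # cohort of new bridges
ratings = np.arange(9, 3, -1)

p = p0.copy()
for year in range(1, 31):
    p = p @ P                                # one-year Markov update
    if year % 10 == 0:                       # percentages and expected rating
        print(year, np.round(p, 3), round((p * ratings).sum(), 2))
```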

Journal ArticleDOI
TL;DR: In this paper, a construction for a general class of measure-valued Markov branching processes is given, where the underlying spatial motion process is an arbitrary Borel right Markov process and state-dependent offspring laws are allowed.
Abstract: A construction is given for a general class of measure-valued Markov branching processes. The underlying spatial motion process is an arbitrary Borel right Markov process, and state-dependent offspring laws are allowed. It is shown that such processes are Hunt processes in the Ray weak* topology, and have continuous paths if and only if the total mass process is continuous. The entrance spaces of such processes are described explicitly.

Journal ArticleDOI
TL;DR: In this paper, extremal behaviour of stationary Markov chains is studied, a criterion for convergence of extremes of general stationary sequences is derived, and the results are applied to waiting times in the GI/G/1 queue and to autoregressive processes.
Abstract: Recent work by Athreya and Ney and by Nummelin on the limit theory for Markov chains shows that the close connection with regeneration theory holds also for chains on a general state space. Here this is used to study extremal behaviour of stationary (or asymptotically stationary) Markov chains. Many of the results center on the ‘clustering’ of extremes of adjacent values of the chains. In addition one criterion for convergence of extremes of general stationary sequences is derived. The results are applied to waiting times in the GI/G/1 queue and to autoregressive processes.

Journal ArticleDOI
TL;DR: A survey of nonstandard Markov decision process criteria (i.e., those which do not seek simply to optimize expected returns per unit time or expected discounted return) can be found in this article.
Abstract: This paper is a survey of papers which make use of nonstandard Markov decision process criteria (i.e., those which do not seek simply to optimize expected returns per unit time or expected discounted return). It covers infinite-horizon nondiscounted formulations, infinite-horizon discounted formulations, and finite-horizon formulations. For problem formulations in terms solely of the probabilities of being in each state and taking each action, policy equivalence results are given which allow policies to be restricted to the class of Markov policies or to the randomizations of deterministic Markov policies. For problems which cannot be stated in such terms, in terms of the primitive state set I, formulations involving a redefinition of the states are examined.

Journal ArticleDOI
TL;DR: In this article, an iterative imputation procedure based on the idea of Markov chain is proposed, where the incomplete values are filled in through sampling from their predictive distribution, which is a theoretically sound method to fill in incomplete values.
Abstract: Broadly speaking, imputation means filling in incomplete values. A theoretically sound method is to impute the incomplete values through sampling from their predictive distribution. In this paper, an iterative imputation procedure, based on the idea of Markov chain, is proposed. Examples are presented to illustrate its applications.
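A minimal sketch of such an iterative scheme for a bivariate regression with values missing at random. All data and model choices are illustrative; note that a fully proper implementation would also draw the model parameters from their posterior rather than plugging in estimates:

```python
import numpy as np

rng = np.random.default_rng(7)

# bivariate data with ~30% of the second coordinate missing at random
n = 1000
x = rng.normal(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)
miss = rng.random(n) < 0.3
y_obs = np.where(miss, np.nan, y)

# iterative imputation in the spirit of the paper: repeatedly (a) refit the
# predictive model on completed data, (b) redraw the missing values from the
# fitted predictive distribution -- a Markov chain over the imputations
y_imp = np.where(miss, np.nanmean(y_obs), y_obs)   # crude starting fill
Xd = np.column_stack([np.ones(n), x])
for it in range(50):
    beta, *_ = np.linalg.lstsq(Xd, y_imp, rcond=None)       # (a) refit y | x
    sigma = np.sqrt(np.mean((y_imp - Xd @ beta) ** 2))
    y_imp[miss] = (Xd @ beta)[miss] + sigma * rng.normal(size=miss.sum())  # (b)

print(beta, sigma)   # should be near (1.0, 2.0) and 1.0
```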

Journal ArticleDOI
TL;DR: A new algorithm is developed which constructs q-Markov covariance equivalent realizations (q-Markov COVERs) of discrete systems, leading to a parameterization of the class of q-Markov COVERs that can be constructed from the given Markov and covariance parameters.

Book
25 Aug 1988
TL;DR: This book develops point-process models for reliability and safety assessment, including a Poisson model of equipment wearout and the application of point processes to a theory of safety assessment.
Abstract: 1 Introduction.- 1.1 Arrivals in time.- 1.2 Reliability.- 1.3 Safety assessment.- 1.4 Random stress and strength.- Notes on the literature.- Problems.- 2 Point processes.- 2.1 The probabilistic context.- 2.2 Two methods of representation.- 2.3 Parameters of point processes.- 2.4 Transformation to a process with constant arrival rate.- 2.5 Time between arrivals.- Notes on the literature.- Problems.- 3 Homogeneous Poisson processes.- 3.1 Definition.- 3.2 Characterization.- 3.3 Time between arrivals for the hP process.- 3.4 Relations to the uniform distribution.- 3.5 A process with simultaneous arrivals.- Notes on the literature.- Problems.- 4 Application of point processes to a theory of safety assessment.- 4.1 The Reactor Safety Study.- 4.2 The annual probability of a reactor accident.- 4.3 A stochastic consequence model.- 4.4 A concept of rare events.- 4.5 Common mode failures.- 4.6 Conclusion.- Notes on the literature.- Problems.- 5 Renewal processes.- 5.1 Probabilistic theory.- 5.2 The renewal process cannot model equipment wearout.- Notes on the literature.- Problems.- 6 Poisson processes.- 6.1 The Poisson model.- 6.2 Characterization of regular Poisson processes.- 6.3 Time between arrivals for Poisson processes.- 6.4 Further observations on software error detection.- Notes on the literature.- Problems.- 7 Superimposed processes.- Notes on the literature.- Problems.- 8 Markov point processes.- 8.1 Theory.- 8.2 The Poisson process.- 8.3 Facilitation and hindrance.- Notes on the literature.- Problems.- 9 Applications of Markov point processes.- 9.1 Egg-laying dispersal of the bean weevil.- 9.2 Application of facilitation - hindrance to the spatial distribution of benthic invertebrates.- 9.3 The Luria-Delbruck model.- 9.4 Chance placement of balls in cells.- 9.5 A model for multiple vehicle automobile accidents.- 9.6 Engels' model.- Notes on the literature.- Problems.- 10 The order statistics process.- 10.1 The sampling of lifetimes.- 10.2 Derivation from the Poisson process.- 10.3 A Poisson model of equipment wearout.- Notes on the literature.- Problems.- 11 Competing risk theory.- 11.1 Markov chain model.- 11.2 Classical competing risks.- 11.3 Competing risk presentation of reactor safety studies.- 11.4 Delayed fatalities.- 11.5 Proportional hazard rates.- Notes on the literature.- Problems.- Further reading.- Appendix 1 Probability background.- A1.1 Probability distributions.- A1.2 Expectation.- A1.3 Transformation of variables.- A1.4 The distribution of order statistics.- A1.5 Conditional probability.- A1.6 Operational methods in probability.- A1.7 Convergence concepts and results in the theory of probability.- Notes on the literature.- Appendix 2 Technical topics.- A2.1 Existence of point process parameters.- A2.2 No simultaneous arrivals.- Solutions to a few of the problems.- References.- Author index.

Journal ArticleDOI
TL;DR: A procedure is described for assessing the state of a system in which the presence of a particular feature is tested on each trial, with each test outcome modifying the class of plausible states.

Journal ArticleDOI
TL;DR: For the coupled processor model with exponential service times, an approach to calculate the stationary distribution of the queue length is presented, in which the stationary probabilities are expressed as power series in the parameter $\rho$, the traffic intensity of the system.
Abstract: For the coupled processor model with exponential service times, an approach is presented to calculate the stationary distribution of the queue length. In this approach the stationary probabilities are expressed as power series in the parameter $\rho$, the traffic intensity of the system. The method is not restricted to state spaces (of the underlying continuous-time Markov chain) of dimension two, but applies equally well to higher-dimensional state spaces.

Journal ArticleDOI
Y.-C. Ho1, Shu Li1
TL;DR: In this article, an extension of the concept of perturbation analysis (PA) is introduced that can be applied to cases where sample path discontinuity with respect to system parameters has prevented the application of simple PA rules.
Abstract: An extension of the concept of perturbation analysis (PA) is introduced that can be applied to cases where sample path discontinuity with respect to system parameters has prevented the application of simple PA rules. Both Markov and generalized semi-Markov processes are considered. The robustness of the extended PA is examined and it is found to be sufficiently robust.

Journal ArticleDOI
01 May 1988
TL;DR: The extension of Markov models to include parametric sensitivity analysis is discussed; such analysis can guide system optimization, identify parts of a system model sensitive to error, and find system reliability and performability bottlenecks.
Abstract: Traditional evaluation techniques for multiprocessor systems use Markov chains and Markov reward models to compute measures such as mean time to failure, reliability, performance, and performability. In this paper, we discuss the extension of Markov models to include parametric sensitivity analysis. Using such analysis, we can guide system optimization, identify parts of a system model sensitive to error, and find system reliability and performability bottlenecks. As an example we consider three models of a 16-processor, 16-memory system. A network provides communication between the processors and the memories. Two crossbar-network models and the Omega network are considered. For these models, we examine the sensitivity of the mean time to failure, unreliability, and performability to changes in component failure rates. We use the sensitivities to identify bottlenecks in the three system models.
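The core computation can be sketched generically: differentiating the stationary conditions πQ = 0 and Σπ = 1 with respect to a parameter θ gives (dπ)Q = −π(dQ/dθ) with Σdπ = 0. A small made-up availability example (not the paper's 16-processor models):

```python
import numpy as np

def stationary_and_sensitivity(Q, dQ):
    """Stationary vector pi of generator Q and its sensitivity dpi/dtheta
    for a generator derivative dQ = dQ/dtheta, from
        pi Q = 0, sum(pi) = 1  =>  dpi Q + pi dQ = 0, sum(dpi) = 0."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])          # consistent overdetermined systems
    pi = np.linalg.lstsq(A, np.r_[np.zeros(n), 1.0], rcond=None)[0]
    dpi = np.linalg.lstsq(A, np.r_[-(pi @ dQ), 0.0], rcond=None)[0]
    return pi, dpi

# 2-component system, states (both up, one up, both down); made-up rates
lam, mu = 0.01, 1.0                           # failure and repair rates
Q = np.array([[-2*lam, 2*lam, 0.0],
              [mu, -(mu + lam), lam],
              [0.0, mu, -mu]])
dQ = np.array([[-2.0, 2.0, 0.0],              # elementwise dQ/dlam
               [0.0, -1.0, 1.0],
               [0.0, 0.0, 0.0]])
pi, dpi = stationary_and_sensitivity(Q, dQ)
avail, davail = pi[:2].sum(), dpi[:2].sum()   # "available" = at least one up
print(avail, davail)   # availability and its sensitivity to the failure rate
```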

Journal ArticleDOI
TL;DR: A more detailed analysis showed that the Markov model was not significantly better than the fractal model for the corneal endothelium channels, and the inability to discriminate the models definitively in this case was shown to be due in part to the small size of the data set.