
Showing papers in "Statistics and Computing in 2001"


Journal ArticleDOI
TL;DR: In this article, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases.
Abstract: Simulated annealing—moving from a tractable distribution to a distribution of interest via a sequence of intermediate distributions—has traditionally been used as an inexact method of handling isolated modes in Markov chain samplers. Here, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler. The Markov chain aspect allows this method to perform acceptably even for high-dimensional problems, where finding good importance sampling distributions would otherwise be very difficult, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases. This annealed importance sampling procedure resembles the second half of the previously-studied tempered transitions, and can be seen as a generalization of a recently-proposed variant of sequential importance sampling. It is also related to thermodynamic integration methods for estimating ratios of normalizing constants. Annealed importance sampling is most attractive when isolated modes are present, or when estimates of normalizing constants are required, but it may also be more generally useful, since its independent sampling allows one to bypass some of the problems of assessing convergence and autocorrelation in Markov chain samplers.
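
The core of the procedure is simple to sketch. The following minimal Python example (not taken from the paper) anneals from a broad Gaussian to a bimodal target along a geometric bridge, using one Metropolis update per temperature; the target, schedule, and step size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalised bimodal target and a tractable starting distribution p0 = N(0, 5^2).
def log_f_target(x):
    return np.logaddexp(-0.5 * ((x + 4.0) / 0.7) ** 2,
                        -0.5 * ((x - 4.0) / 0.7) ** 2)

def log_f0(x):
    return -0.5 * (x / 5.0) ** 2

betas = np.linspace(0.0, 1.0, 101)          # annealing schedule beta_0 = 0, ..., beta_J = 1

def log_f_beta(x, beta):                    # geometric bridge f_beta = f0^(1-beta) * f_target^beta
    return (1.0 - beta) * log_f0(x) + beta * log_f_target(x)

def ais_run():
    x = 5.0 * rng.standard_normal()         # exact draw from p0
    log_w = 0.0
    for j in range(1, len(betas)):
        # weight increment: f_j(x_{j-1}) / f_{j-1}(x_{j-1})
        log_w += log_f_beta(x, betas[j]) - log_f_beta(x, betas[j - 1])
        # one Metropolis step leaving f_j invariant
        prop = x + 0.8 * rng.standard_normal()
        if np.log(rng.uniform()) < log_f_beta(prop, betas[j]) - log_f_beta(x, betas[j]):
            x = prop
    return x, log_w

samples, log_ws = zip(*(ais_run() for _ in range(2000)))
w = np.exp(np.array(log_ws) - np.max(log_ws))
# Self-normalised estimate of E[h(X)] under the target, here h(x) = x (true value 0).
print("E[X] estimate:", np.sum(w * np.array(samples)) / np.sum(w))
```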

1,083 citations


Journal ArticleDOI
TL;DR: A propagation scheme for Bayesian networks with conditional Gaussian distributions is described that does not have the numerical weaknesses of the scheme derived in Lauritzen (1992).
Abstract: This article describes a propagation scheme for Bayesian networks with conditional Gaussian distributions that does not have the numerical weaknesses of the scheme derived in Lauritzen (Journal of the American Statistical Association 87: 1098–1108, 1992). The propagation architecture is that of Lauritzen and Spiegelhalter (Journal of the Royal Statistical Society, Series B 50: 157–224, 1988). In addition to the means and variances provided by the previous algorithm, the new propagation scheme yields full local marginal distributions. The new scheme also handles linear deterministic relationships between continuous variables in the network specification. The computations involved in the new propagation scheme are simpler than those in the previous scheme and the method has been implemented in the most recent version of the HUGIN software.

285 citations


Journal ArticleDOI
TL;DR: A Bayesian approach to nonparametric regression initially proposed by Smith and Kohn is discussed, with new sampling schemes introduced to carry out the variable selection methodology.
Abstract: This paper discusses a Bayesian approach to nonparametric regression initially proposed by Smith and Kohn (1996. Journal of Econometrics 75: 317–344). In this approach the regression function is represented as a linear combination of basis terms. The basis terms can be univariate or multivariate functions and can include polynomials, natural splines and radial basis functions. A Bayesian hierarchical model is used such that the coefficient of each basis term can be zero with positive prior probability. The presence of basis terms in the model is determined by latent indicator variables. The posterior mean is estimated by Markov chain Monte Carlo simulation because it is computationally intractable to compute the posterior mean analytically unless a small number of basis terms is used. The present article updates the work of Smith and Kohn (1996. Journal of Econometrics 75: 317–344) to take account of work by us and others over the last three years. A careful discussion is given to all aspects of the model specification, function estimation and the use of sampling schemes. In particular, new sampling schemes are introduced to carry out the variable selection methodology.
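
As a rough illustration of the latent-indicator idea (a much-simplified sketch, not the paper's samplers), the following Python code runs a Gibbs sampler over inclusion indicators for a small basis dictionary, assuming a known noise variance and an independent Gaussian prior on the included coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data and a small dictionary of basis terms (polynomials + Gaussian bumps).
n = 60
x = np.sort(rng.uniform(-2, 2, n))
y = np.sin(2 * x) + 0.3 * rng.standard_normal(n)

knots = np.linspace(-2, 2, 8)
B = np.column_stack([np.ones(n), x, x**2, x**3] +
                    [np.exp(-0.5 * ((x - k) / 0.5) ** 2) for k in knots])
p = B.shape[1]

sigma2, c, prior_incl = 0.3**2, 100.0, 0.5   # assumed known / fixed for this sketch

def log_marginal(gamma):
    """log p(y | gamma) with beta_gamma ~ N(0, c*sigma2*I): y ~ N(0, sigma2*(I + c*Bg Bg^T))."""
    Bg = B[:, gamma]
    C = sigma2 * (np.eye(n) + c * Bg @ Bg.T)
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y))

gamma = np.zeros(p, dtype=bool)
gamma[0] = True
draws = []
for sweep in range(300):
    for j in range(p):                        # Gibbs update of each latent indicator
        lp = np.empty(2)
        for val in (False, True):
            gamma[j] = val
            lp[int(val)] = log_marginal(gamma) + np.log(prior_incl if val else 1 - prior_incl)
        gamma[j] = rng.uniform() < 1.0 / (1.0 + np.exp(lp[0] - lp[1]))
    if sweep >= 50 and gamma.any():
        Bg = B[:, gamma]
        beta = np.linalg.solve(Bg.T @ Bg + np.eye(gamma.sum()) / c, Bg.T @ y)
        draws.append(Bg @ beta)               # conditional posterior mean of the fitted curve

fit = np.mean(draws, axis=0)                  # Monte Carlo estimate of the posterior mean curve
print("mean squared error of fit:", np.mean((fit - np.sin(2 * x)) ** 2))
```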

195 citations


Journal ArticleDOI
TL;DR: Robust automatic selection techniques for the smoothing parameter of a smoothing spline are introduced based on a robust predictive error criterion and can be viewed as robust versions of Cp and cross-validation.
Abstract: Robust automatic selection techniques for the smoothing parameter of a smoothing spline are introduced. They are based on a robust predictive error criterion and can be viewed as robust versions of Cp and cross-validation. They lead to smoothing splines which are stable and reliable in terms of mean squared error over a large spectrum of model distributions.
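
The flavour of such a criterion can be sketched as follows; this is only a leave-one-out search over the smoothing factor of scipy's UnivariateSpline with a Huber-type loss in place of squared error, not the paper's robust Cp or cross-validation criteria, and the grid and loss constant are illustrative.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)

# Data with a few gross outliers.
x = np.linspace(0, 1, 80)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(80)
y[::17] += 3.0

def huber(r, k=1.345):
    a = np.abs(r)
    return np.where(a <= k, 0.5 * r**2, k * a - 0.5 * k**2)

def robust_loo_score(s):
    """Leave-one-out prediction error under a Huber loss for smoothing factor s."""
    losses = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        spl = UnivariateSpline(x[mask], y[mask], s=s)
        losses.append(huber(y[i] - spl(x[i])))
    return np.mean(losses)

s_grid = np.linspace(1.0, 40.0, 20)
scores = [robust_loo_score(s) for s in s_grid]
s_best = s_grid[int(np.argmin(scores))]
print("selected smoothing factor:", s_best)

fit = UnivariateSpline(x, y, s=s_best)(x)     # final fit at the robustly chosen smoothing level
```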

85 citations


Journal ArticleDOI
TL;DR: Simulations and an application to a data set on East–West German migration illustrate similarities and dissimilarities of the estimators and test statistics of the generalized partial linear model.
Abstract: A particular semiparametric model of interest is the generalized partial linear model (GPLM) which extends the generalized linear model (GLM) by a nonparametric component. The paper reviews different estimation procedures based on kernel methods as well as test procedures on the correct specification of this model (vs. a parametric generalized linear model). Simulations and an application to a data set on East–West German migration illustrate similarities and dissimilarities of the estimators and test statistics.
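
For the identity-link special case (a partially linear model), a Speckman-type kernel estimator takes only a few lines; the sketch below is a simplified illustration of the kernel-based estimation reviewed in the paper, not the full GPLM profile-likelihood procedure, and the bandwidth is an arbitrary illustrative value.

```python
import numpy as np

rng = np.random.default_rng(3)

# Partially linear model: y = X beta + m(t) + eps (identity-link special case of a GPLM).
n = 300
t = rng.uniform(0, 1, n)
X = rng.standard_normal((n, 2))
beta_true = np.array([1.0, -0.5])
y = X @ beta_true + np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(n)

def nw_smoother(t, h):
    """Nadaraya-Watson smoother matrix with a Gaussian kernel and bandwidth h."""
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    return K / K.sum(axis=1, keepdims=True)

S = nw_smoother(t, h=0.05)
X_tilde = X - S @ X                      # partial out the nonparametric direction
y_tilde = y - S @ y
beta_hat = np.linalg.solve(X_tilde.T @ X_tilde, X_tilde.T @ y_tilde)
m_hat = S @ (y - X @ beta_hat)           # kernel estimate of the smooth component

print("beta_hat:", beta_hat)
```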

56 citations


Journal ArticleDOI
TL;DR: A method is presented for estimating likelihood ratios for stochastic compartment models when only times of removals from a population are observed, and some general properties of the likelihoods typically arising in this scenario, and their implications for inference, are illustrated and discussed.
Abstract: This paper presents a method for estimating likelihood ratios for stochastic compartment models when only times of removals from a population are observed. The technique operates by embedding the models in a composite model parameterised by an integer k which identifies a switching time when dynamics change from one model to the other. Likelihood ratios can then be estimated from the posterior density of k using Markov chain methods. The techniques are illustrated by a simulation study involving an immigration-death model and validated using analytic results derived for this case. They are also applied to compare the fit of stochastic epidemic models to historical data on a smallpox epidemic. In addition to estimating likelihood ratios, the method can be used for direct estimation of likelihoods by selecting one of the models in the comparison to have a known likelihood for the observations. Some general properties of the likelihoods typically arising in this scenario, and their implications for inference, are illustrated and discussed.

32 citations


Journal ArticleDOI
TL;DR: This paper proposes an adaptive and simultaneous estimation procedure for all additive components in additive regression models and proposes a regularization algorithm which guarantees an adaptive solution to the multivariate estimation problem.
Abstract: It is well-known that multivariate curve estimation suffers from the “curse of dimensionality.” However, reasonable estimators are possible, even in several dimensions, under appropriate restrictions on the complexity of the curve. In the present paper we explore how much appropriate wavelet estimators can exploit a typical restriction on the curve such as additivity. We first propose an adaptive and simultaneous estimation procedure for all additive components in additive regression models and discuss rate of convergence results and data-dependent truncation rules for wavelet series estimators. To speed up computation we then introduce a wavelet version of the functional ANOVA algorithm for additive regression models and propose a regularization algorithm which guarantees an adaptive solution to the multivariate estimation problem. Some simulations indicate that wavelet methods complement nicely the existing methodology for nonparametric multivariate curve estimation.

26 citations


Journal ArticleDOI
TL;DR: This paper develops an extension of the Riemann sum techniques of Philippe in the setup of MCMC algorithms, showing that these techniques apply equally well to the output of these algorithms, with similar speeds of convergence which improve upon the regular estimator.
Abstract: This paper develops an extension of the Riemann sum techniques of Philippe (J. Statist. Comput. Simul. 59: 295–314) in the setup of MCMC algorithms. It shows that these techniques apply equally well to the output of these algorithms, with similar speeds of convergence which improve upon the regular estimator. The restriction on the dimension associated with Riemann sums can furthermore be overcome by Rao–Blackwellization methods. This approach can also be used as a control variate technique in convergence assessment of MCMC algorithms, either by comparing the values of alternative versions of Riemann sums, which estimate the same quantity, or by using genuine control variates, that is, functions with known expectations, which are available in full generality for constants and scores.
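
A one-dimensional sketch of the Riemann-sum idea applied to Metropolis output is given below; the self-normalised form is used so the target density need only be known up to a constant. The Gamma target and random-walk proposal are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Unnormalised Gamma(3, 1) target and a random-walk Metropolis sampler.
def f_unnorm(x):
    return np.where(x > 0, x**2 * np.exp(-x), 0.0)

x, chain = 3.0, []
for _ in range(20000):
    prop = x + rng.standard_normal()
    if rng.uniform() < f_unnorm(prop) / f_unnorm(x):
        x = prop
    chain.append(x)
chain = np.array(chain[2000:])               # discard burn-in

# Riemann-sum estimator of E[h(X)] built on the ordered chain values:
#   sum_i (x_(i+1) - x_(i)) h(x_(i)) f(x_(i)), self-normalised by the same sum with h = 1.
xs = np.sort(chain)
gaps = np.diff(xs)
h = xs[:-1]                                  # h(x) = x, so the true value is E[X] = 3
fvals = f_unnorm(xs[:-1])
riemann = np.sum(gaps * h * fvals) / np.sum(gaps * fvals)

print("Riemann-sum estimate of E[X]:", riemann)
print("ordinary ergodic average    :", chain.mean())
```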

25 citations


Journal ArticleDOI
TL;DR: The ideas discussed in this paper incorporate aspects of both analytic model approximations and Monte Carlo arguments to gain some efficiency in the generation and use of ensembles.
Abstract: Ensemble forecasting involves the use of several integrations of a numerical model. Even if this model is assumed to be known, ensembles are needed due to uncertainty in initial conditions. The ideas discussed in this paper incorporate aspects of both analytic model approximations and Monte Carlo arguments to gain some efficiency in the generation and use of ensembles. Efficiency is gained through the use of importance sampling Monte Carlo. Once ensemble members are generated, suggestions for their use, involving both approximation and statistical notions such as kernel density estimation and mixture modeling, are discussed. Fully deterministic procedures derived from the Monte Carlo analysis are also described. Examples using the three-dimensional Lorenz system are described.
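
A minimal importance-sampling ensemble for the Lorenz-63 system might look as follows; the Gaussian initial-condition uncertainty, the overdispersed proposal, and the lead time are illustrative assumptions, and the kernel-density and mixture post-processing discussed in the paper is not shown.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)

def lorenz63(t, u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Initial-condition uncertainty: Gaussian p0; overdispersed importance-sampling proposal q.
mu0 = np.array([1.0, 1.0, 20.0])
P0 = 0.1 * np.eye(3)
p0 = multivariate_normal(mu0, P0)
q = multivariate_normal(mu0, 4.0 * P0)        # broader, to cover the tails of p0

m = 200                                       # ensemble size
ics = q.rvs(size=m, random_state=rng)
log_w = p0.logpdf(ics) - q.logpdf(ics)        # importance weights p0/q
w = np.exp(log_w - log_w.max())
w /= w.sum()

T = 1.0                                       # forecast lead time
states = np.array([solve_ivp(lorenz63, (0.0, T), ic, rtol=1e-8, atol=1e-8).y[:, -1]
                   for ic in ics])

forecast_mean = w @ states                    # importance-weighted ensemble forecast
print("weighted ensemble forecast mean:", forecast_mean)
```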

21 citations


Journal ArticleDOI
TL;DR: This paper considers the data from 6 separate dominant-lethal assay experiments and discriminates between the competing models which could be used to describe them, adopting a Bayesian approach and illustrating how a variety of different models may be considered, using Markov chain Monte Carlo simulation techniques and comparing the results with the corresponding maximum likelihood analyses.
Abstract: When the results of biological experiments are tested for a possible difference between treatment and control groups, the inference is only valid if based upon a model that fits the experimental results satisfactorily. In dominant-lethal testing, foetal death has previously been assumed to follow a variety of models, including a Poisson, Binomial, Beta-binomial and various mixture models. However, discriminating between models has always been a particularly difficult problem. In this paper, we consider the data from 6 separate dominant-lethal assay experiments and discriminate between the competing models which could be used to describe them. We adopt a Bayesian approach and illustrate how a variety of different models may be considered, using Markov chain Monte Carlo (MCMC) simulation techniques and comparing the results with the corresponding maximum likelihood analyses. We present an auxiliary variable method for determining the probability that any particular data cell is assigned to a given component in a mixture and we illustrate the value of this approach. Finally, we show how the Bayesian approach provides a natural and unique perspective on the model selection problem via reversible jump MCMC and illustrate how probabilities associated with each of the different models may be calculated for each data set. In terms of estimation we show how, by averaging over the different models, we obtain reliable and robust inference for any statistic of interest.

21 citations


Journal ArticleDOI
TL;DR: It is shown that small deviations from the model can wipe out the nominal improvements of the accuracy obtained at the model by second-order approximations of the distribution of classical statistics.
Abstract: We discuss the effects of model misspecifications on higher-order asymptotic approximations of the distribution of estimators and test statistics. In particular we show that small deviations from the model can wipe out the nominal improvements of the accuracy obtained at the model by second-order approximations of the distribution of classical statistics. Although there is no guarantee that the first-order robustness properties of robust estimators and tests will carry over to second-order in a neighbourhood of the model, the behaviour of robust procedures in terms of second-order accuracy is generally more stable and reliable than that of their classical counterparts. Finally, we discuss some related work on robust adjustments of the profile likelihood and outline the role of computer algebra in this type of research.

Journal ArticleDOI
TL;DR: This paper shows how procedures for computing moments and cumulants may themselves be computed from a few elementary identities, which permit the calculation of results which would otherwise involve complementary set partitions, k-statistics, and pattern functions.
Abstract: This paper shows how procedures for computing moments and cumulants may themselves be computed from a few elementary identities. Many parameters, such as variance, may be expressed or approximated as linear combinations of products of expectations. The estimates of such parameters may be expressed as the same linear combinations of products of averages. The moments and cumulants of such estimates may be computed in a straightforward way if the terms of the estimates, moments and cumulants are represented as lists and the expectation operation defined as a transformation of lists. Vector space considerations lead to a unique representation of terms and hence to a simplification of results. Basic identities relating variables and their expectations induce transformations of lists, which transformations may be computed from the identities. In this way procedures for complex calculations are computed from basic identities. The procedures permit the calculation of results which would otherwise involve complementary set partitions, k-statistics, and pattern functions. The examples include the calculation of unbiased estimates of cumulants, of cumulants of these, and of moments of bootstrap estimates.

Journal ArticleDOI
TL;DR: It is shown that one may apply statistics from nonlinear dynamical systems theory, in particular those derived from the correlation integral, as test statistics for the hypothesis that an observed time series is consistent with each of these three linear classes of dynamical system.
Abstract: The technique of surrogate data analysis may be employed to test the hypothesis that an observed data set was generated by one of several specific classes of dynamical system. Current algorithms for surrogate data analysis enable one, in a generic way, to test for membership of the following three classes of dynamical system: (0) independent and identically distributed noise, (1) linearly filtered noise, and (2) a monotonic nonlinear transformation of linearly filtered noise. We show that one may apply statistics from nonlinear dynamical systems theory, in particular those derived from the correlation integral, as test statistics for the hypothesis that an observed time series is consistent with each of these three linear classes of dynamical system. Using statistics based on the correlation integral we show that it is also possible to test much broader (and not necessarily linear) hypotheses. We illustrate these methods with radial basis models and an algorithm to estimate the correlation dimension. By exploiting some special properties of this correlation dimension estimation algorithm we are able to test very specific hypotheses. Using these techniques we demonstrate that the respiratory control of human infants exhibits a quasi-periodic orbit (the obvious inspiratory/expiratory cycle) together with cyclic amplitude modulation. This cyclic amplitude modulation manifests as a stable focus in the first return map (equivalently, the sequence of successive peaks).
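
The generic template of such a test is easy to sketch: generate phase-randomised (Fourier) surrogates for the linearly-filtered-noise class and compare a correlation-sum statistic between data and surrogates. The sketch below uses a logistic-map series and a fixed radius as illustrative choices; it does not reproduce the radial basis models or correlation-dimension estimator of the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(6)

def phase_randomized_surrogate(x):
    """Surrogate with the same power spectrum as x but randomised Fourier phases (class 1)."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, X.size)
    phases[0] = 0.0
    if x.size % 2 == 0:
        phases[-1] = 0.0                      # keep the Nyquist component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

def correlation_sum(x, r, dim=3, delay=1):
    """C(r): fraction of pairs of delay-embedded points closer than r."""
    m = x.size - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + m] for i in range(dim)])
    return np.mean(pdist(emb) < r)

# Data with genuine nonlinear structure (logistic map) should stand out from its surrogates.
x = np.empty(1000); x[0] = 0.4
for i in range(1, x.size):
    x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])

r = 0.1 * x.std()
t_obs = correlation_sum(x, r)
t_surr = np.array([correlation_sum(phase_randomized_surrogate(x), r) for _ in range(99)])
p_value = (1 + np.sum(t_surr >= t_obs)) / (1 + t_surr.size)
print("observed C(r):", t_obs, " rank-based p-value:", p_value)
```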

Journal ArticleDOI
TL;DR: In general, it is found that for the binomial distribution and the first-order moving-average time series model the mean likelihood estimator outperforms the maximum likelihood estimator and the Bayes estimator with a Jeffreys noninformative prior.
Abstract: The use of Mathematica in deriving mean likelihood estimators is discussed. Comparisons are made between the mean likelihood estimator, the maximum likelihood estimator, and the Bayes estimator based on a Jeffreys noninformative prior. These estimators are compared using the mean-square error criterion and Pitman measure of closeness. In some cases it is possible, using Mathematica, to derive exact results for these criteria. Using Mathematica, simulation comparisons among the criteria can be made for any model for which we can readily obtain estimators. In the binomial and exponential distribution cases, these criteria are evaluated exactly. In the first-order moving-average model, analytical comparisons are possible only for n = 2. In general, we find that for the binomial distribution and the first-order moving-average time series model the mean likelihood estimator outperforms the maximum likelihood estimator and the Bayes estimator with a Jeffreys noninformative prior. Mathematica was used for symbolic and numeric computations as well as for the graphical display of results. A Mathematica notebook which provides the Mathematica code used in this article is available: http://www.stats.uwo.ca/mcleod/epubs/mele. Our article concludes with our opinions and criticisms of the relative merits of some of the popular computing environments for statistics researchers.
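
The paper works in Mathematica; the binomial comparison can nonetheless be sketched numerically in Python, using the closed forms (x+1)/(n+2) for the mean likelihood estimator and (x+1/2)/(n+1) for the Jeffreys-prior posterior mean. The grid and sample sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

n = 10
thetas = np.linspace(0.05, 0.95, 19)
reps = 20000

mse = {"MLE": [], "mean likelihood": [], "Jeffreys Bayes": []}
for theta in thetas:
    x = rng.binomial(n, theta, size=reps)
    est = {
        "MLE": x / n,
        "mean likelihood": (x + 1) / (n + 2),      # integral(theta*L) / integral(L) for the binomial
        "Jeffreys Bayes": (x + 0.5) / (n + 1),     # posterior mean under Beta(1/2, 1/2)
    }
    for name, e in est.items():
        mse[name].append(np.mean((e - theta) ** 2))

for name, vals in mse.items():
    print(f"{name:16s} average MSE over theta grid: {np.mean(vals):.5f}")
```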

Journal ArticleDOI
TL;DR: The effectiveness of symbolic computation is illustrated to evaluate the saddlepoint approximation for the likelihood ratio, the exponential score, and the Wald-Wolfowitz test statistics.
Abstract: A conditional saddlepoint approximation was provided by Gatto and Jammalamadaka (1999) for computing the distribution function of many test statistics based on dependent quantities like multinomial frequencies, spacing frequencies, etc. The considerable complexity of the formulas involved can be bypassed by symbolic computation. This article illustrates the effectiveness of symbolic computation to evaluate the saddlepoint approximation for the likelihood ratio, the exponential score, and the Wald-Wolfowitz test statistics. The case of composite hypotheses is also discussed.
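
The paper's conditional saddlepoint formulas are not reproduced here; as a small illustration of letting a symbolic system carry out the saddlepoint algebra, the sketch below approximates the density of a sum of n iid Exp(1) variables, where the exact Gamma density is available for comparison.

```python
import numpy as np
import sympy as sp
from scipy.stats import gamma

n = 5
t = sp.symbols('t')
x = sp.symbols('x', positive=True)

K = -n * sp.log(1 - t)                                    # CGF of the sum of n iid Exp(1) variables
t_hat = sp.solve(sp.Eq(sp.diff(K, t), x), t)[0]           # saddlepoint equation K'(t) = x
K2 = sp.diff(K, t, 2)
sp_density = sp.exp(K - t * x) / sp.sqrt(2 * sp.pi * K2)  # saddlepoint density formula
sp_density = sp.simplify(sp_density.subs(t, t_hat))

f_approx = sp.lambdify(x, sp_density, modules='numpy')
xs = np.linspace(1.0, 15.0, 5)
print("saddlepoint:", np.round(f_approx(xs), 5))
print("exact Gamma:", np.round(gamma.pdf(xs, a=n), 5))
```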

Journal ArticleDOI
TL;DR: Gröbner bases, elimination theory and factorization may be used to perform calculations in elementary discrete probability and more complex areas such as Bayesian networks (influence diagrams).
Abstract: Gröbner bases, elimination theory and factorization may be used to perform calculations in elementary discrete probability and more complex areas such as Bayesian networks (influence diagrams). The paper covers the application of computational algebraic geometry to probability theory. The application to the Boolean algebra of events is straightforward (and essentially known). The extension into the probability superstructure is via the polynomial interpolation of densities and log densities and this is used naturally in the Bayesian application.
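
A toy version of the elimination idea can be run in sympy rather than a dedicated computer-algebra system: encode independence of two binary events polynomially and use a lexicographic Gröbner basis to eliminate the latent marginals, recovering the constraints on the joint cell probabilities. The example and variable names are illustrative.

```python
import sympy as sp

a, b, p11, p10, p01, p00 = sp.symbols('a b p11 p10 p01 p00')

# Joint cell probabilities of two independent binary events with marginals a and b.
relations = [
    p11 - a * b,
    p10 - a * (1 - b),
    p01 - (1 - a) * b,
    p00 - (1 - a) * (1 - b),
]

# Lexicographic order with a, b first eliminates the marginals from part of the basis.
G = sp.groebner(relations, a, b, p11, p10, p01, p00, order='lex')
constraints = [g for g in G.exprs if not ({a, b} & g.free_symbols)]
# Prints generators equivalent to: p11 + p10 + p01 + p00 = 1 and p11*p00 = p10*p01.
print(constraints)
```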

Journal ArticleDOI
TL;DR: It is shown that each of the three search problems considered can be reduced to an equivalent search problem on hypergraphs, which can be solved in polynomial time.
Abstract: Extended log-linear models (ELMs) are the natural generalization of log-linear models when the positivity assumption is relaxed. The hypergraph language, which is currently used to specify the syntax of ELMs, both provides an insight into key notions of the theory of ELMs such as collapsibility and decomposability, and allows one to work out efficient algorithms to solve some problems of inference. This is the case for the three search problems addressed in this paper and referred to as the approximation problem, the selective-reduction problem and the synthesis problem. The approximation problem consists in finding the smallest decomposable ELM that contains a given ELM and is such that the given ELM is collapsible onto each of its generators. The selective-reduction problem consists in deleting the maximum number of generators of a given ELM in such a way that the resulting ELM is a submodel and none of certain variables of interest is missing. The synthesis problem consists in finding a minimal ELM containing the intersection of ELMs specified by given independence relations. We show that each of the three search problems above can be reduced to an equivalent search problem on hypergraphs, which can be solved in polynomial time.

Journal ArticleDOI
TL;DR: Maximum likelihood estimation incorporating the error structure in the generation of the training data is used to fit a deformable template model which has proved potentially useful for locating and labelling cells in microscope slides (Rue and Hurn 1999).
Abstract: In recent years, a number of statistical models have been proposed for the purposes of high-level image analysis tasks such as object recognition. However, in general, these models remain hard to use in practice, partly as a result of their complexity, partly through lack of software. In this paper we concentrate on a particular deformable template model which has proved potentially useful for locating and labelling cells in microscope slides (Rue and Hurn 1999). This model requires the specification of a number of rather non-intuitive parameters which control the shape variability of the deformed templates. Our goal is to arrange the estimation of these parameters in such a way that the microscope user's expertise is exploited to provide the necessary training data graphically by identifying a number of cells displayed on a computer screen, but that no additional statistical input is required. In this paper we use maximum likelihood estimation incorporating the error structure in the generation of our training data.

Journal ArticleDOI
TL;DR: An operator is described which calculates the joint cumulants of such statistics and acts as a filter for a general-purpose operator described in Andrews and Stafford (J. R. Statist. Soc. B 55: 613–627); it is used to deepen the understanding of the behaviour of the resampling-based variance estimate.
Abstract: With time series data, there is often the issue of finding accurate approximations for the variance of such quantities as the sample autocovariance function or spectral estimate. Smith and Field (J. Time Ser. Anal. 14: 381–395, 1993) proposed a variance estimate motivated by resampling in the frequency domain. In this paper we present some results on the cumulants of this and other frequency domain estimates obtained via symbolic computation. The statistics of interest are linear combinations of products of discrete Fourier transforms. We describe an operator which calculates the joint cumulants of such statistics, and use the operator to deepen our understanding of the behaviour of the resampling-based variance estimate. The operator acts as a filter for a general-purpose operator described in Andrews and Stafford (J. R. Statist. Soc. B 55: 613–627).

Journal ArticleDOI
TL;DR: A method for resampling time series generated by a chaotic dynamical system is proposed, based on an algorithm for building trajectories which lie on the same attractor of the true DGP and share the dynamical and geometrical properties of the original data.
Abstract: In the field of chaotic time series analysis, there is a lack of a distributional theory for the main quantities used to characterize the underlying data generating process (DGP). In this paper a method for resampling time series generated by a chaotic dynamical system is proposed. The basic idea is to develop an algorithm for building trajectories which lie on the same attractor of the true DGP, that is, with the same dynamical and geometrical properties as the original data. We performed numerical experiments on short noise-free and high-noise series, confirming that we are able to correctly reproduce the distribution of the largest finite-time Lyapunov exponent and of the correlation dimension.

Journal ArticleDOI
TL;DR: The purpose of this paper is to provide a short tutorial for this method of estimation under a likelihood–based model, reviewing results from Stein (1956) and Severini and Wong (1992).
Abstract: Let X, T, Y be random vectors such that the distribution of Y conditional on covariates partitioned into the vectors X = x and T = t is given by f(y | x, p), where p = (θ, η(t)). Here θ is a parameter vector and η(t) is a smooth, real-valued function of t. The joint distribution of X and T is assumed to be independent of θ and η. This semiparametric model is called conditionally parametric because the conditional distribution f(y | x, p) of Y given X = x, T = t is parameterized by a finite-dimensional parameter p = (θ, η(t)). Severini and Wong (1992. Annals of Statistics 20: 1768–1802) show how to estimate θ and η(·) using generalized profile likelihoods, and they also provide a review of the literature on generalized profile likelihoods. Under specified regularity conditions, they derive an asymptotically efficient estimator of θ and a uniformly consistent estimator of η(·). The purpose of this paper is to provide a short tutorial for this method of estimation under a likelihood-based model, reviewing results from Stein (1956. Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, University of California Press, Berkeley, pp. 187–196), Severini (1987. Ph.D. Thesis, The University of Chicago, Department of Statistics, Chicago, Illinois), and Severini and Wong (op. cit.).

Journal ArticleDOI
TL;DR: It is shown how the elegant algebraic structure underlying the expressive and effective formalism of Itô calculus can be implemented directly in AXIOM using the package's programmable facilities for “strong typing” of computational objects.
Abstract: Symbolic Itô calculus refers both to the implementation of Itô calculus in a computer algebra package and to its application. This article reports on progress in the implementation of Itô calculus in the powerful and innovative computer algebra package AXIOM, in the context of a decade of previous implementations and applications. It is shown how the elegant algebraic structure underlying the expressive and effective formalism of Itô calculus can be implemented directly in AXIOM using the package's programmable facilities for “strong typing” of computational objects. An application is given of the use of the implementation to provide calculations for a new proof, based on stochastic differentials, of the Mardia-Dryden distribution from statistical shape theory.
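
The AXIOM implementation itself is not reproduced here; as a small taste of what symbolic Itô calculus does, the sketch below applies Itô's formula symbolically in sympy to functions of a geometric Brownian motion dX = μX dt + σX dW (an illustrative example, not the paper's shape-theory application).

```python
import sympy as sp

x, mu, sigma = sp.symbols('x mu sigma', positive=True)

# Geometric Brownian motion: dX = mu*X dt + sigma*X dW.
drift, diffusion = mu * x, sigma * x

# Ito's formula for time-independent f(X): df = (mu_X f' + 0.5 sigma_X^2 f'') dt + sigma_X f' dW.
def ito(f):
    fx, fxx = sp.diff(f, x), sp.diff(f, x, 2)
    dt_term = sp.simplify(drift * fx + sp.Rational(1, 2) * diffusion**2 * fxx)
    dW_term = sp.simplify(diffusion * fx)
    return dt_term, dW_term

print(ito(sp.log(x)))   # gives (mu - sigma**2/2, sigma): d(log X) = (mu - sigma^2/2) dt + sigma dW
print(ito(x**2))        # gives (2*mu*x**2 + sigma**2*x**2, 2*sigma*x**2)
```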

Journal ArticleDOI
TL;DR: It is shown how the order of a general time series process can be traced from conditional density estimation, and a weighted version of the log likelihood function is found to have desirable robust properties in detecting the order of the process.
Abstract: The study focuses on the selection of the order of a general time series process via the conditional density of the latter, a characteristic of which is that it remains constant for every order beyond the true one. Using simulated time series from various nonlinear models we illustrate how this feature can be traced from conditional density estimation. We study whether two statistics derived from the likelihood function can serve as univariate statistics to determine the order of the process. It is found that a weighted version of the log likelihood function has desirable robust properties in detecting the order of the process.

Journal ArticleDOI
TL;DR: This paper reviews some of the commonly used methods for performing boosting and shows how they can be fit into a Bayesian setup at each iteration of the algorithm.
Abstract: Boosting is a new, powerful method for classification. It is an iterative procedure which successively classifies a weighted version of the sample, and then reweights this sample dependent on how successful the classification was. In this paper we review some of the commonly used methods for performing boosting and show how they can be fit into a Bayesian setup at each iteration of the algorithm. We demonstrate how this formulation gives rise to a new splitting criterion when using a domain-partitioning classification method such as a decision tree. Further we can improve the predictive performance of simple decision trees, known as stumps, by using a posterior weighted average of them to classify at each step of the algorithm, rather than just a single stump. The main advantage of this approach is to reduce the number of boosting iterations required to produce a good classifier with only a minimal increase in the computational complexity of the algorithm.
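
For orientation, the sketch below is plain AdaBoost with decision stumps, i.e. the standard boosting loop the paper starts from; the Bayesian reweighting and posterior-averaged stumps proposed in the paper are not implemented here, and the toy data are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(8)

# Toy two-class problem with labels in {-1, +1}.
n = 400
X = rng.standard_normal((n, 2))
y = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 > 1.2, 1, -1)

n_rounds = 50
w = np.full(n, 1.0 / n)                      # observation weights
stumps, alphas = [], []

for _ in range(n_rounds):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w * (pred != y)) / np.sum(w)
    if err >= 0.5 or err == 0.0:
        break
    alpha = 0.5 * np.log((1 - err) / err)    # confidence given to this round's stump
    w *= np.exp(-alpha * y * pred)           # upweight misclassified points
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

def predict(Xnew):
    score = sum(a * s.predict(Xnew) for a, s in zip(alphas, stumps))
    return np.sign(score)

print("training accuracy:", np.mean(predict(X) == y))
```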

Journal ArticleDOI
TL;DR: A set of REDUCE procedures that make a number of existing higher-order asymptotic results available for both theoretical and practical research and apply to regression-scale models and multiparameter exponential families.
Abstract: This paper presents a set of REDUCE procedures that make a number of existing higher-order asymptotic results available for both theoretical and practical research. Attention has been restricted to the context of exact and approximate inference for a parameter of interest conditionally either on an ancillary statistic or on a statistic partially sufficient for the nuisance parameter. In particular, the procedures apply to regression-scale models and multiparameter exponential families. Most of them support algebraic computation as well as numerical calculation for a given data set. Examples illustrate the code.

Journal ArticleDOI
TL;DR: This paper calculates explicit bounds on convergence rates in terms of quantities calculable directly from the chain transition operators, useful in cases like those considered by Kolassa (1999).
Abstract: Kolassa and Tanner (J. Am. Stat. Assoc. (1994) 89, 697–702) present the Gibbs-Skovgaard algorithm for approximate conditional inference. Kolassa (Ann. Statist. (1999), 27, 129–142) gives conditions under which their Markov chain is known to converge. This paper calculates explicit bounds on convergence rates in terms of quantities calculable directly from the chain transition operators. These results are useful in cases like those considered by Kolassa (1999).

Journal ArticleDOI
TL;DR: A permutation test for the white noise hypothesis is described, offering power against a general class of smooth alternatives, and an example demonstrates its use in a particular problem in which a test for randomness was sought without any specific alternative.
Abstract: A permutation test for the white noise hypothesis is described, offering power against a general class of smooth alternatives. Simulation results show that it performs well, as compared with similar tests available in the literature, in terms of power. An example demonstrates its use in a particular problem in which a test for randomness was sought without any specific alternative.
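
The shape of such a test is easy to sketch; the statistic below (the sum of squared low-order sample autocorrelations) is a generic stand-in, not the smooth-alternative statistic developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

def stat(x, max_lag=5):
    """Sum of squared sample autocorrelations at lags 1..max_lag."""
    x = x - x.mean()
    denom = np.sum(x * x)
    acf = np.array([np.sum(x[:-k] * x[k:]) / denom for k in range(1, max_lag + 1)])
    return np.sum(acf ** 2)

def permutation_test(x, n_perm=999):
    t_obs = stat(x)
    t_perm = np.array([stat(rng.permutation(x)) for _ in range(n_perm)])
    return (1 + np.sum(t_perm >= t_obs)) / (1 + n_perm)

# Under the null (iid noise) p-values are roughly uniform; a serially dependent AR(1) series is rejected.
white = rng.standard_normal(200)
ar1 = np.empty(200); ar1[0] = 0.0
for i in range(1, 200):
    ar1[i] = 0.6 * ar1[i - 1] + rng.standard_normal()

print("p-value, white noise:", permutation_test(white))
print("p-value, AR(1)      :", permutation_test(ar1))
```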


Journal ArticleDOI
TL;DR: Tests are developed that help to decide which of the covariate contributions indeed change over time; the remaining covariates may be modelled with constant hazard coefficients, thus reducing the number of curves that have to be estimated nonparametrically.
Abstract: In the semiparametric additive hazard regression model of McKeague and Sasieni (Biometrika 81: 501–514), the hazard contributions of some covariates are allowed to change over time, without parametric restrictions (Aalen model), while the contributions of other covariates are assumed to be constant. In this paper, we develop tests that help to decide which of the covariate contributions indeed change over time. The remaining covariates may be modelled with constant hazard coefficients, thus reducing the number of curves that have to be estimated nonparametrically. Several bootstrap tests are proposed. The behavior of the tests is investigated in a simulation study. In a practical example, the tests consistently identify covariates with constant and with changing hazard contributions.

Journal ArticleDOI
TL;DR: The stochastic reversal of map models is shown to lead to a new class of invariant distribution, and some connections between congruential recursions and independence in discretized chaotic processes are illustrated.
Abstract: This paper will informally explore the reversal of some stochastic autoregressive processes, which leads to deterministically chaotic processes. Correspondingly, the stochastic reversal of map models is shown to lead to a new class of invariant distribution. Finally, some connections between congruential recursions and independence in discretized chaotic processes are illustrated.