
Showing papers in "Scandinavian Journal of Statistics in 2006"


Journal ArticleDOI
TL;DR: In this paper, a non-parametric approach for checking whether the dependence structure of a random sample of censored bivariate data is appropriately modelled by a given family of Archimedean copulas is described.
Abstract: Wang & Wells [J. Amer. Statist. Assoc. 95 (2000) 62] describe a non-parametric approach for checking whether the dependence structure of a random sample of censored bivariate data is appropriately modelled by a given family of Archimedean copulas. Their procedure is based on a truncated version of the Kendall process introduced by Genest & Rivest [J. Amer. Statist. Assoc. 88 (1993) 1034] and later studied by Barbe et al. [J. Multivariate Anal. 58 (1996) 197]. Although Wang & Wells (2000) determine the asymptotic behaviour of their truncated process, their model selection method is based exclusively on the observed value of its L2-norm. This paper shows how to compute asymptotic p-values for various goodness-of-fit test statistics based on a non-truncated version of Kendall's process. Conditions for weak convergence are met in the most common copula models, whether Archimedean or not. The empirical behaviour of the proposed goodness-of-fit tests is studied by simulation, and power comparisons are made with a test proposed by Shih [Biometrika 85 (1998) 189] for the gamma frailty family.

425 citations
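As a rough illustration of the central object here (not the authors' censored-data procedure), the empirical Kendall distribution of a bivariate sample can be compared with the theoretical Kendall distribution K(t) = t − φ(t)/φ′(t) of a candidate Archimedean family. A minimal sketch for the Clayton copula, with all simulation details hypothetical:

```python
import numpy as np

def empirical_kendall(x, y):
    """Empirical Kendall distribution: the cdf of the pseudo-observations
    V_i = #{j != i : x_j < x_i and y_j < y_i} / (n - 1)."""
    n = len(x)
    pseudo = np.array([np.sum((x < x[i]) & (y < y[i])) / (n - 1.0)
                       for i in range(n)])
    return lambda t: np.mean(pseudo <= t)

def clayton_kendall(t, theta):
    """Theoretical K(t) = t - phi(t)/phi'(t) for the Clayton generator
    phi(t) = (t**(-theta) - 1)/theta, giving K(t) = t + t*(1 - t**theta)/theta."""
    return t + t * (1.0 - t**theta) / theta

# Simulate a Clayton(theta = 2) sample by conditional inversion and compare.
rng = np.random.default_rng(0)
theta = 2.0
u = rng.uniform(size=2000)
w = rng.uniform(size=2000)
v = ((w**(-theta / (1.0 + theta)) - 1.0) * u**(-theta) + 1.0)**(-1.0 / theta)
K_n = empirical_kendall(u, v)
print(abs(K_n(0.5) - clayton_kendall(0.5, theta)))  # small when the family fits
```

A goodness-of-fit statistic such as the L2-norm of K_n − K over a grid then summarizes the discrepancy; the paper's contribution is the asymptotic distribution theory that turns such statistics into p-values.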


Journal ArticleDOI
TL;DR: In this article, the authors unify these proposals under a new general formulation, clarifying at the same time their relationships, and sketch an extension of the argument to the skew-elliptical family.
Abstract: . The distribution theory literature connected to the multivariate skew-normal distribution has grown rapidly in recent years, and a number of extensions and alternative formulations have been put forward. Presently there are various coexisting proposals, similar but not identical, and with rather unclear connections. The aim of this paper is to unify these proposals under a new general formulation, clarifying at the same time their relationships. The final part sketches an extension of the argument to the skew-elliptical family.

348 citations


Journal ArticleDOI
TL;DR: This paper embeds tail dependence into the concept of tail copulae which describes the dependence structure in the tail of multivariate distributions but works more generally.
Abstract: Dependencies between extreme events (extremal dependencies) are attracting increasing attention in modern risk management. In practice, the concept of tail dependence represents the current standard to describe the amount of extremal dependence. In theory, multivariate extreme-value theory turns out to be the natural choice to model such dependencies. The present paper embeds tail dependence into the concept of tail copulae, which describes the dependence structure in the tail of multivariate distributions but works more generally. Various non-parametric estimators for tail copulae and tail dependence are discussed, and weak convergence, asymptotic normality, and strong consistency of these estimators are shown by means of a functional delta method. Further, weak convergence of a general upper-order rank statistic for extreme events is investigated and the relationship to tail dependence is provided. A simulation study compares the introduced estimators, and two financial data sets are analysed by our methods.

340 citations
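The tail-dependence coefficient admits a simple rank-based estimator of the type the paper analyses; a minimal sketch (the truncation level k and all data are illustrative choices):

```python
import numpy as np

def upper_tail_dependence(x, y, k):
    """Rank-based estimator of the upper tail-dependence coefficient:
    among the k largest observations of x, the proportion whose y-value
    is also among the k largest (comonotone data give 1, independence ~ k/n)."""
    n = len(x)
    rx = np.argsort(np.argsort(x))  # ranks 0..n-1
    ry = np.argsort(np.argsort(y))
    return np.sum((rx >= n - k) & (ry >= n - k)) / k

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
print(upper_tail_dependence(x, x, 100))                      # comonotone: 1.0
print(upper_tail_dependence(x, rng.normal(size=5000), 100))  # independence: near 0
```

The paper's estimators for the full tail copula are function-valued versions of this idea, with the functional delta method delivering their weak convergence.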


Journal ArticleDOI
TL;DR: A Bayesian design criterion is proposed which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model parameter values are unknown.
Abstract: This paper describes the use of model-based geostatistics for choosing the optimal set of sampling locations, collectively called the design, for a geostatistical analysis. Two types of design situations are considered. These are retrospective design, which concerns the addition of sampling locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing optimal positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model parameter values are unknown. The results show that in this situation a wide range of inter-point distances should be included in the design, and the widely used regular design is therefore not the optimal choice.

216 citations


Journal ArticleDOI
TL;DR: In this paper, the identifiability of finite mixtures of elliptical distributions under conditions on the characteristic generators or density generators was studied, including the multivariate t-distribution, symmetric stable laws, exponential power and Kotz distributions.
Abstract: . We present general results on the identifiability of finite mixtures of elliptical distributions under conditions on the characteristic generators or density generators. Examples include the multivariate t-distribution, symmetric stable laws, exponential power and Kotz distributions. In each case, the shape parameter is allowed to vary in the mixture, in addition to the location vector and the scatter matrix. Furthermore, we discuss the identifiability of finite mixtures of elliptical densities with generators that correspond to scale mixtures of normal distributions.

79 citations


Journal ArticleDOI
TL;DR: In this article, the Dirichlet process is characterized as the only conjugate member of the whole class of normalized random measures with independent increments, and a new technique for deriving moments of these random probability measures is proposed.
Abstract: . Recently the class of normalized random measures with independent increments, which contains the Dirichlet process as a particular case, has been introduced. Here a new technique for deriving moments of these random probability measures is proposed. It is shown that, a priori, most of the appealing properties featured by the Dirichlet process are preserved. When passing to posterior computations, we obtain a characterization of the Dirichlet process as the only conjugate member of the whole class of normalized random measures with independent increments.

74 citations


Journal ArticleDOI
TL;DR: In this article, a two-component mixture model where one component distribution is known while the mixing proportion and the other component distribution are unknown is considered and an estimation method for the unknown parameters which is shown to be strongly consistent under mild conditions is proposed.
Abstract: . We consider a two-component mixture model where one component distribution is known while the mixing proportion and the other component distribution are unknown. These kinds of models were first introduced in biology to study the differences in expression between genes. The various estimation methods proposed till now have all assumed that the unknown distribution belongs to a parametric family. In this paper, we show how this assumption can be relaxed. First, we note that generally the above model is not identifiable, but we show that under moment and symmetry conditions some ‘almost everywhere’ identifiability results can be obtained. Where such identifiability conditions are fulfilled we propose an estimation method for the unknown parameters which is shown to be strongly consistent under mild conditions. We discuss applications of our method to microarray data analysis and to the training data problem. We compare our method to the parametric approach using simulated data and, finally, we apply our method to real data from microarray experiments.

65 citations


Journal ArticleDOI
Nicolai Meinshausen1
TL;DR: In this paper, a confidence envelope for false discovery control when testing multiple hypotheses of association simultaneously is proposed, which allows for an exploratory approach when choosing suitable rejection regions while still retaining strong control over the proportion of false discoveries.
Abstract: . We propose a confidence envelope for false discovery control when testing multiple hypotheses of association simultaneously. The method is valid under arbitrary and unknown dependence between the test statistics and allows for an exploratory approach when choosing suitable rejection regions while still retaining strong control over the proportion of false discoveries.

65 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the estimation of unknown parameters in the drift and diffusion coefficients of a one-dimensional ergodic diffusion X when the observation is a discrete sampling of the integral of X at times iΔ, i = 1,…,n.
Abstract: We consider the estimation of unknown parameters in the drift and diffusion coefficients of a one-dimensional ergodic diffusion X when the observation is a discrete sampling of the integral of X at times iΔ, i = 1,…,n. Assuming that the sampling interval tends to 0 while the total observation time tends to infinity, we first prove limit theorems for functionals associated with our observations. We apply these results to obtain a contrast function. The associated minimum contrast estimators are shown to be consistent and asymptotically Gaussian with different rates for drift and diffusion coefficient parameters.

60 citations


Journal ArticleDOI
TL;DR: Pareto sampling was introduced by Rosen in the late 1990s as a simple method to get a fixed-size πps sample, though with inclusion probabilities only approximately as desired.
Abstract: Pareto sampling was introduced by Rosen in the late 1990s. It is a simple method to get a fixed size πps sample though with inclusion probabilities only approximately as desired. Sampford sampling, ...

49 citations
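Rosén's Pareto πps scheme is easy to state: draw U_i ~ Uniform(0,1), form the ranking variables Q_i = [U_i/(1−U_i)] / [p_i/(1−p_i)] from the target inclusion probabilities p_i (which sum to the sample size n), and keep the n units with the smallest Q_i. A sketch with made-up size measures:

```python
import numpy as np

def pareto_sample(p, n, rng):
    """Pareto pi-ps sampling: fixed sample size n, with inclusion
    probabilities only approximately equal to the targets p (sum(p) == n)."""
    u = rng.uniform(size=len(p))
    q = (u / (1.0 - u)) / (p / (1.0 - p))
    return np.argsort(q)[:n]

# Five units with target inclusion probabilities proportional to size, n = 2.
rng = np.random.default_rng(2)
size = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
p = 2.0 * size / size.sum()
counts = np.zeros(5)
for _ in range(20000):
    counts[pareto_sample(p, 2, rng)] += 1.0
print(counts / 20000.0)  # close to p, but only approximately so
```

The comparison point in the abstract, Sampford sampling, attains the target inclusion probabilities exactly; Pareto sampling trades that exactness for simplicity.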


Journal ArticleDOI
TL;DR: In this article, rank-based monotone estimating functions for the regression parameters of these marginal models based on right-censored observations are developed, and the resultant estimators are consistent and asymptotically normal.
Abstract: Multivariate failure time data arise when each study subject can potentially experience several types of failures or recurrences of a certain phenomenon, or when failure times are sampled in clusters. We formulate the marginal distributions of such multivariate data with semiparametric accelerated failure time models (i.e. linear regression models for log-transformed failure times with arbitrary error distributions) while leaving the dependence structures for related failure times completely unspecified. We develop rank-based monotone estimating functions for the regression parameters of these marginal models based on right-censored observations. The estimating equations can be easily solved via linear programming. The resultant estimators are consistent and asymptotically normal. The limiting covariance matrices can be readily estimated by a novel resampling approach, which does not involve non-parametric density estimation or evaluation of numerical derivatives. The proposed estimators represent consistent roots of the potentially non-monotone estimating equations based on weighted log-rank statistics. Simulation studies show that the new inference procedures perform well in small samples. Illustrations with real medical data are provided.

Journal ArticleDOI
TL;DR: In this article, a model-based approach is taken, where the covariates in the superpopulation model are subject to measurement errors, and the asymptotic optimality of EB estimators is proved.
Abstract: . This paper considers simultaneous estimation of means from several strata. A model-based approach is taken, where the covariates in the superpopulation model are subject to measurement errors. Empirical Bayes (EB) and Hierarchical Bayes estimators of the strata means are developed and asymptotic optimality of EB estimators is proved. Their performances are examined and compared with that of the sample mean in a simulation study as well as in data analysis.

Journal ArticleDOI
TL;DR: In this paper, maximum likelihood estimates in Gaussian AMP chain graph models are obtained by combining generalized least squares and iterative proportional fitting into an iterative algorithm, and useful convergence results are given for iterative partial maximization algorithms.
Abstract: The Andersson–Madigan–Perlman (AMP) Markov property is a recently proposed alternative Markov property for chain graphs. In the case of continuous variables with a joint multivariate Gaussian distribution, it is the AMP rather than the earlier introduced Lauritzen–Wermuth–Frydenberg Markov property that is coherent with data-generation by natural block-recursive regressions. In this paper, we show that maximum likelihood estimates in Gaussian AMP chain graph models can be obtained by combining generalized least squares and iterative proportional fitting into an iterative algorithm. In an appendix, we give useful convergence results for iterative partial maximization algorithms that apply in particular to the described algorithm.

Journal ArticleDOI
TL;DR: In this article, an optimal Bayesian decision procedure for testing hypothesis in normal linear models based on intrinsic model posterior probabilities is considered, and the evaluation of the procedure can be carried out analytically through the frequentist analysis of the posterior probability of the null.
Abstract: An optimal Bayesian decision procedure for testing hypotheses in normal linear models based on intrinsic model posterior probabilities is considered. It is proven that these posterior probabilities are simple functions of the classical F-statistic; thus the evaluation of the procedure can be carried out analytically through the frequentist analysis of the posterior probability of the null. An asymptotic analysis proves that, under mild conditions on the design matrix, the procedure is consistent. For any testing hypothesis it is also seen that there is a one-to-one mapping – which we call the calibration curve – between the posterior probability of the null hypothesis and the classical p-value. This curve adds substantial knowledge about the possible discrepancies between the Bayesian and the p-value measures of evidence for testing hypotheses. It permits a better understanding of the serious difficulties that are encountered in linear models when interpreting the p-values. A specific illustration of the variable selection problem is given.

Journal ArticleDOI
TL;DR: In this article, the problem of estimating the association between two related survival variables when they follow a copula model and only bivariate interval-censored failure time data are available is studied.
Abstract: . Multivariate failure time data frequently occur in medical studies and the dependence or association among survival variables is often of interest (Biometrics, 51, 1995, 1384; Stat. Med., 18, 1999, 3101; Biometrika, 87, 2000, 879; J. Roy. Statist. Soc. Ser. B, 65, 2003, 257). We study the problem of estimating the association between two related survival variables when they follow a copula model and only bivariate interval-censored failure time data are available. For the problem, a two-stage estimation procedure is proposed and the asymptotic properties of the proposed estimator are established. Simulation studies are conducted to assess the finite sample properties of the presented estimate and the results suggest that the method works well for practical situations. An example from an acquired immunodeficiency syndrome clinical trial is discussed.
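In the uncensored case, the association stage of such a two-stage procedure often reduces to inverting Kendall's tau; for the Clayton copula, for example, τ = θ/(θ + 2), so θ̂ = 2τ̂/(1 − τ̂). A sketch of that idea (ignoring the interval censoring that the paper actually handles):

```python
import numpy as np

def kendall_tau(x, y):
    """Sample Kendall's tau from pairwise concordance (no ties assumed)."""
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sum(np.sign(x[i] - x[i + 1:]) * np.sign(y[i] - y[i + 1:]))
    return 2.0 * s / (n * (n - 1))

def clayton_theta(tau):
    """Invert the Clayton relation tau = theta / (theta + 2)."""
    return 2.0 * tau / (1.0 - tau)

# Clayton(theta = 2) sample via conditional inversion; true tau = 0.5.
rng = np.random.default_rng(3)
theta = 2.0
u = rng.uniform(size=1500)
w = rng.uniform(size=1500)
v = ((w**(-theta / (1.0 + theta)) - 1.0) * u**(-theta) + 1.0)**(-1.0 / theta)
print(clayton_theta(kendall_tau(u, v)))  # roughly 2
```

With interval-censored failure times, τ̂ itself is no longer directly computable, which is where the paper's two-stage estimator and its asymptotic theory come in.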

Journal ArticleDOI
TL;DR: In this article, the authors analyzed the behavior of a so-called cumulant M-estimator, in case the Lévy density is characterized by a Euclidean (finite-dimensional) parameter.
Abstract: Consider a stationary sequence of random variables with infinitely divisible marginal law, characterized by its Lévy density. We analyze the behavior of a so-called cumulant M-estimator, in case this Lévy density is characterized by a Euclidean (finite-dimensional) parameter. Under mild conditions, we prove consistency and asymptotic normality of the estimator. The estimator is considered in the situation where the data are increments of a subordinator as well as the situation where the data consist of a discretely sampled Ornstein–Uhlenbeck process induced by the subordinator. We illustrate our results for the Gamma process and the Inverse-Gaussian OU process. For these processes we also explain how the estimator can be computed numerically. Key words and phrases: cumulant, empirical characteristic function, Lévy process, self-decomposable distribution, stationary process.

Journal ArticleDOI
TL;DR: In this paper, a constrained empirical likelihood confidence region for a parameter in the semi-linear errors-in-variables model is proposed, which combines the score function corresponding to the squared orthogonal distance with a constraint on the parameter, and overcomes the non-uniqueness of the solution of the limiting mean estimation equations.
Abstract: This paper proposes a constrained empirical likelihood confidence region for a parameter in the semi-linear errors-in-variables model. The confidence region is constructed by combining the score function corresponding to the squared orthogonal distance with a constraint on the parameter, and it overcomes the difficulty that the solution of the limiting mean estimation equations is not unique. It is shown that the empirical log-likelihood ratio at the true parameter converges to the standard chi-square distribution. Simulations show that the proposed confidence region has coverage probability closer to the nominal level, and is narrower, than regions based on the normal approximation of the generalized least squares estimator in most cases. A real data example is given.

Journal ArticleDOI
TL;DR: In this article, the authors present a specification test for the parametric form of the variance function in diffusion processes, where the test is based on the estimation of certain integrals of the volatility function.
Abstract: Properties of a specification test for the parametric form of the variance function in diffusion processes dXt = b(t,Xt)dt + σ(t,Xt)dWt are discussed. The test is based on the estimation of certain integrals of the volatility function. If the volatility function does not depend on the variable x it is known that the corresponding statistics have an asymptotic normal distribution. However, most models of mathematical finance use a volatility function which depends on the state x. ...

Journal ArticleDOI
TL;DR: In this article, the goodness of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations, provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution.
Abstract: . The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution function obtained from the conditional sample of the random effects. The approach is illustrated by simulation studies and data examples.

Journal ArticleDOI
TL;DR: In this article, the authors considered the Extended Growth Curve model and showed that the residuals are symmetrically distributed around zero and are uncorrelated with each other, and the covariance between residuals and the estimated model as well as the dispersion matrices for residuals were also given.
Abstract: The Extended Growth Curve model is considered. It turns out that the estimated mean of the model is the projection of the observations on the space generated by the design matrices, which is the sum of two tensor product spaces. The orthogonal complement of this space is decomposed into four orthogonal spaces, and residuals are defined by projecting the observation matrix on the resulting components. The residuals are interpreted, and some remarks are given as to why we should not use ordinary residuals, what kind of information our residuals give and how this information might be used to validate model assumptions and detect outliers and influential observations. It is shown that the residuals are symmetrically distributed around zero and are uncorrelated with each other. The covariance between the residuals and the estimated model as well as the dispersion matrices for the residuals are also given. The Extended Growth Curve model has many applications and may arise in many different situations. One of its principal applications, under the condition of equally observed 'time points', is in the analysis of growth curves, which is applied extensively in biostatistics, medical research and epidemiology. Verbyla & Venables (1988) considered models of which the Extended Growth Curve model is a special case. They gave several examples to illustrate how the model may arise. They also gave some indications of the applications of the model. Therefore, if using the Extended Growth Curve model in practice, there is a need to develop some diagnostic tools for validating the model. To our knowledge there have not yet been any studies regarding residuals in this model. We hope that this paper will lay the ground for further ...

Journal ArticleDOI
TL;DR: In this paper, the authors consider a stochastic volatility model where the volatility (Vt) is a positive stationary Markov process, and propose a non-parametric estimator for f obtained by a penalized projection method.
Abstract: In this paper, we consider a stochastic volatility model (Yt, Vt), where the volatility (Vt) is a positive stationary Markov process. We assume that (ln Vt) admits a stationary density f that we want to estimate. Only the price process Yt is observed at n discrete times with regular sampling interval Δ. We propose a non-parametric estimator for f obtained by a penalized projection method. Under mixing assumptions on (Vt), we derive bounds for the quadratic risk of the estimator. Assuming that Δ = Δn tends to 0 while the number of observations and the length of the observation time tend to infinity, we discuss the rate of convergence of the risk. Examples of models included in this framework are given.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a method of deconvolution in a periodic setting which combines two important ideas, the fast wavelet and Fourier transform-based estimation procedure of Johnstone et al.
Abstract: The paper proposes a method of deconvolution in a periodic setting which combines two important ideas, the fast wavelet and Fourier transform-based estimation procedure of Johnstone et al. [J. Roy. Statist. Soc. Ser. B 66 (2004) 547] and the multichannel system technique proposed by Casey and Walnut [SIAM Rev. 36 (1994) 537]. An unknown function is estimated by a wavelet series where the empirical wavelet coefficients are filtered in an adapting non-linear fashion. It is shown theoretically that the estimator achieves the optimal convergence rate in a wide range of Besov spaces. The procedure makes it possible to reduce the ill-posedness of the problem, especially in the case of non-smooth blurring functions such as boxcar functions: it is proved that the addition of extra channels improves the convergence rate of the estimator. The theoretical study is supplemented by an extensive set of small-sample simulation experiments demonstrating the high-quality performance of the proposed method.

Journal ArticleDOI
Yi Li1, Louise Ryan1
TL;DR: In this paper, the authors proposed a method for fitting proportional hazards models with error-prone covariates, where regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates.
Abstract: . We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement, and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available on the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial where a genetic marker, c-myc expression level, is subject to measurement error.

Journal ArticleDOI
TL;DR: New point and block predictors are derived that are optimal in mean squared error sense within certain families of predictors that contain the corresponding lognormal kriging point andBlock predictors, as well as a block predictor originally motivated under the assumption of 'preservation of lognormality', and hence improve upon them.
Abstract: This work discusses the problems of point and block prediction in log-Gaussian random fields with unknown mean. New point and block predictors are derived that are optimal in the mean squared error sense within certain families of predictors that contain the corresponding lognormal kriging point and block predictors, as well as a block predictor originally motivated under the assumption of 'preservation of lognormality', and hence improve upon them. A comparison between the optimal, lognormal kriging and best linear unbiased predictors is provided, as well as between the two new block predictors. Somewhat surprisingly, it is shown that the corresponding optimal and lognormal kriging predictors are almost identical under most scenarios. It is also shown that one of the new block predictors is uniformly better than the other.

Journal ArticleDOI
TL;DR: In this paper, semiparametric regression methods for analyzing multiple-category recurrent event data and considering the setting where event times are always known, but the information used to categorize events may be missing.
Abstract: Censored recurrent event data frequently arise in biomedical studies. Often, the events are not homogeneous, and may be categorized. We propose semiparametric regression methods for analysing multiple-category recurrent event data and consider the setting where event times are always known, but the information used to categorize events may be missing. Application of existing methods after censoring events of unknown category (i.e. 'complete-case' methods) produces consistent estimators only when event types are missing completely at random, an assumption which will frequently fail in practice. We propose methods, based on weighted estimating equations, which are applicable when event category missingness is missing at random. Parameter estimators are shown to be consistent and asymptotically normal. Finite sample properties are examined through simulations and the proposed methods are applied to an end-stage renal disease data set obtained from a national organ failure registry.
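The core idea behind the weighting, stripped of the recurrent-event machinery, is ordinary inverse-probability weighting under missingness at random. A toy sketch with entirely hypothetical numbers, contrasting the biased complete-case estimate of a category proportion with its IPW correction:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
z = rng.integers(0, 2, size=n)               # binary covariate
cat1 = rng.uniform(size=n) < 0.3 + 0.2 * z   # event is 'type 1'; P(type 1) = 0.4
p_obs = 0.9 - 0.5 * z                        # category observed with prob. depending on z (MAR)
observed = rng.uniform(size=n) < p_obs

# Complete-case analysis over-represents z = 0 and is biased for P(type 1).
cc = cat1[observed].mean()
# Weighting each observed event by 1 / P(observed | z) restores consistency.
ipw = np.sum(observed * cat1 / p_obs) / np.sum(observed / p_obs)
print(cc, ipw)  # cc is biased low; ipw is close to 0.4
```

The paper's estimating equations apply the same weighting within rank-based score functions for recurrent events, and establish consistency and asymptotic normality of the resulting estimators.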

Journal ArticleDOI
TL;DR: This paper considers the problem of testing whether a specific covariate has different impacts on the regression curve in these two samples, and proposes a subsampling procedure with automatic choice of subsample size that avoids the curse of dimensionality.
Abstract: . Imagine we have two different samples and are interested in doing semi- or non-parametric regression analysis in each of them, possibly on the same model. In this paper, we consider the problem of testing whether a specific covariate has different impacts on the regression curve in these two samples. We compare the regression curves of different samples but are interested in specific differences instead of testing for equality of the whole regression function. Our procedure does allow for random designs, different sample sizes, different variance functions, different sets of regressors with different impact functions, etc. As we use the marginal integration approach, this method can be applied to any strong, weak or latent separable model as well as to additive interaction models to compare the lower dimensional separable components between the different samples. Thus, in the case of having separable models, our procedure includes the possibility of comparing the whole regression curves, thereby avoiding the curse of dimensionality. It is shown that bootstrap fails in theory and practice. Therefore, we propose a subsampling procedure with automatic choice of subsample size. We present a complete asymptotic theory and an extensive simulation study.

Journal ArticleDOI
TL;DR: In this paper, a large-sample approximation of the distribution of the convex-hull estimator is given in the general case where p ≥ 1, and ways of using it to correct the bias of the convex-hull and DEA estimators and to construct confidence intervals for the true function are discussed.
Abstract: . Given n independent and identically distributed observations in a set G = {(x, y) ∈ [0, 1]p × ℝ : 0 ≤ y ≤ g(x)} with an unknown function g, called a boundary or frontier, it is desired to estimate g from the observations. The problem has several important applications including classification and cluster analysis, and is closely related to edge estimation in image reconstruction. The convex-hull estimator of a boundary or frontier is also very popular in econometrics, where it is a cornerstone of a method known as ‘data envelope analysis’. In this paper, we give a large sample approximation of the distribution of the convex-hull estimator in the general case where p ≥ 1. We discuss ways of using the large sample approximation to correct the bias of the convex-hull and the DEA estimators and to construct confidence intervals for the true function.
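The convex-hull/DEA estimator itself requires solving a small linear program per evaluation point; a simpler member of the same envelope family, the free disposal hull (FDH) estimator ĝ(x) = max{y_i : x_i ≤ x}, illustrates both the construction and the downward bias that large-sample approximations of this kind are used to correct (all data simulated):

```python
import numpy as np

def fdh_frontier(x_obs, y_obs, x_eval):
    """Free disposal hull estimate of a frontier: at each evaluation point,
    the largest observed y among points with x_i <= x. This step-function
    envelope lies below the true frontier, hence is biased downward."""
    return np.array([y_obs[x_obs <= xe].max() if np.any(x_obs <= xe) else 0.0
                     for xe in np.atleast_1d(x_eval)])

# Points uniformly distributed below the frontier g(x) = sqrt(x) on [0, 1].
rng = np.random.default_rng(4)
x_obs = rng.uniform(size=3000)
y_obs = rng.uniform(size=3000) * np.sqrt(x_obs)
print(fdh_frontier(x_obs, y_obs, [0.25, 1.0]))  # a little below [0.5, 1.0]
```

The convex-hull (DEA) estimator additionally imposes concavity on the envelope, which is where the paper's distribution theory and bias corrections apply.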

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of testing the basic simple hypothesis that the observed sequence of points corresponds to a stationary Poisson process with known intensity, against stationary self-exciting point processes, and construct locally asymptotically uniformly most powerful tests.
Abstract: We consider the problem of hypothesis testing with the basic simple hypothesis that the observed sequence of points corresponds to a stationary Poisson process with known intensity. The alternatives are stationary self-exciting point processes. We consider one-sided parametric and one-sided non-parametric composite alternatives and construct locally asymptotically uniformly most powerful tests. The results of numerical simulations of the tests are presented.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a method of using simulations from the Markov chain to construct a statistical estimate of X from which it is straightforward to sample, and show that this estimate is "strongly consistent" in the sense that the total variation distance between the estimate and X converges to 0 almost surely as the number of simulations grows.
Abstract: Let ir denote an intractable probability distribution that we would like to explore. Suppose that we have a positive recurrent, irreducible Markov chain that satisfies a minorization condition and has a as its invariant measure. We provide a method of using simulations from the Markov chain to construct a statistical estimate of X from which it is straightforward to sample. We show that this estimate is 'strongly consistent' in the sense that the total variation distance between the estimate and X converges to 0 almost surely as the number of simulations grows. Moreover, we use some recently developed asymptotic results to provide guidance as to how much simulation is necessary. Draws from the estimate can be used to approximate features of X or as intelligent starting values for the original Markov chain. We illustrate our methods with two examples.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a model for regression and classification with functional predictors that combines functional sliced inverse regression [Ferre & Yao (Statistics, 37, 2003, 475)], for which a regularized version is given, with the accuracy of a neural network.
Abstract: Functional data analysis is a growing research field as more and more practical applications involve functional data. In this paper, we focus on the problem of regression and classification with functional predictors: the suggested model combines an efficient dimension reduction procedure [functional sliced inverse regression, first introduced by Ferre & Yao (Statistics, 37, 2003, 475)], for which we give a regularized version, with the accuracy of a neural network. Some consistency results are given and the method is successfully applied to real-life data.