
Showing papers in "Statistical Methods and Applications in 2012"


Journal ArticleDOI
TL;DR: The methods of maximum likelihood and parametric bootstrap and a Bayesian procedure are proposed for estimating the model parameters, and explicit expressions are derived for the moments of order statistics of the Kumaraswamy Gumbel distribution.
Abstract: The Gumbel distribution is perhaps the most widely applied statistical distribution for problems in engineering. We propose a generalization—referred to as the Kumaraswamy Gumbel distribution—and provide a comprehensive treatment of its structural properties. We obtain the analytical shapes of the density and hazard rate functions. We calculate explicit expressions for the moments and generating function. The variation of the skewness and kurtosis measures is examined and the asymptotic distribution of the extreme values is investigated. Explicit expressions are also derived for the moments of order statistics. The methods of maximum likelihood and parametric bootstrap and a Bayesian procedure are proposed for estimating the model parameters. We obtain the expected information matrix. An application of the new model to a real dataset illustrates the potential of the proposed model. Two bivariate generalizations of the model are proposed.
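
The generalization follows the Kumaraswamy-G construction: starting from a baseline cdf G(x), the new cdf is F(x) = 1 - (1 - G(x)^a)^b. As a rough illustration, here is a minimal Python sketch of the resulting density and of simulation by inversion, assuming the standard Kumaraswamy-G form with a Gumbel baseline (parameter names are ours, not the paper's):

    import numpy as np

    def gumbel_cdf(x, mu=0.0, sigma=1.0):
        # Gumbel (maximum) cdf
        return np.exp(-np.exp(-(x - mu) / sigma))

    def gumbel_pdf(x, mu=0.0, sigma=1.0):
        z = (x - mu) / sigma
        return np.exp(-z - np.exp(-z)) / sigma

    def kw_gumbel_pdf(x, a, b, mu=0.0, sigma=1.0):
        # Kumaraswamy-G density: a*b*g(x)*G(x)^(a-1)*(1 - G(x)^a)^(b-1)
        G = gumbel_cdf(x, mu, sigma)
        return a * b * gumbel_pdf(x, mu, sigma) * G**(a - 1) * (1 - G**a)**(b - 1)

    def kw_gumbel_rvs(n, a, b, mu=0.0, sigma=1.0, rng=None):
        # Inversion: F(x) = u  <=>  G(x) = (1 - (1-u)^(1/b))^(1/a)
        rng = np.random.default_rng() if rng is None else rng
        u = rng.uniform(size=n)
        g = (1 - (1 - u) ** (1.0 / b)) ** (1.0 / a)
        return mu - sigma * np.log(-np.log(g))

    x = kw_gumbel_rvs(100_000, a=2.0, b=3.0)
    print(x.mean(), x.std())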

102 citations


Journal ArticleDOI
TL;DR: A class of ordinal data models (called cub and proven effective for fitting and interpretation) is generalized to take the possible presence of a shelter choice in rating surveys into account.
Abstract: In rating surveys, people are requested to evaluate objects, items, services, and so on, by choosing among a list of ordered categories. In some circumstances, it may happen that a subset of respondents selects a specific option just to simplify a more demanding choice. In this context, we generalize a class of ordinal data models (called cub and proven effective for fitting and interpretation) to take the possible presence of a shelter choice into account. After the discussion of interpretative and inferential issues, the usefulness of the approach is checked against real case studies and by means of a simulation experiment. Some final remarks end the paper.
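
For orientation, a cub model mixes a shifted binomial component (feeling) with a discrete uniform component (uncertainty) over m ordered categories; the shelter extension adds a point mass at the shelter category. A minimal sketch of such a probability mass function, assuming the usual cub parametrization (the shelter weight delta and its placement in the mixture are our notational choices):

    import numpy as np
    from scipy.stats import binom

    def cub_shelter_pmf(m, pi, xi, delta, shelter):
        r = np.arange(1, m + 1)
        feeling = binom.pmf(r - 1, m - 1, 1 - xi)   # shifted binomial component
        uncertainty = np.full(m, 1.0 / m)           # discrete uniform component
        point = (r == shelter).astype(float)        # degenerate shelter component
        return (1 - delta) * (pi * feeling + (1 - pi) * uncertainty) + delta * point

    p = cub_shelter_pmf(m=7, pi=0.7, xi=0.3, delta=0.1, shelter=4)
    print(p, p.sum())   # a proper pmf: the probabilities sum to 1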

86 citations


Journal ArticleDOI
TL;DR: It is found that the estimated effects are increasing with the amount of financial aid for both small-sized and medium- or large-sized firms, whereas the marginal effects of additional incentives are decreasing with the amount of financial aid for small-sized firms and have an inverse J-shape for medium- or large-sized firms.
Abstract: Regional and national development policies play an important role in supporting local enterprises in Italy. The amount of financial aid may be a key feature for firms’ employment policies. We study the impact on employment of the amount of financial aid attributed to enterprises located in Piedmont, a region in northern Italy, analysing small-sized firms and medium- or large-sized firms separately. We apply generalized propensity score methods under the unconfoundedness assumption that adjusting for differences in a set of observed pre-treatment variables removes all biases in comparisons by different amounts of financial aid. We find that the estimated effects are increasing with the amount of financial aid for both small-sized and medium- or large-sized firms, whereas the marginal effects of additional incentives are decreasing with the amount of financial aid for small-sized firms, and have an inverse J-shape for medium- or large-sized firms.
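
A rough sketch of the generalized propensity score workflow in the spirit of Hirano and Imbens, not necessarily the authors' exact implementation: model the treatment level given covariates, evaluate the GPS, regress the outcome on treatment and GPS, then average the fitted dose-response function over the sample (all data below are simulated for illustration):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=(n, 2))                                   # pre-treatment covariates
    t = 1.0 + x @ np.array([0.5, -0.3]) + rng.normal(size=n)      # continuous treatment (aid)
    y = 2.0 + 0.8 * t - 0.1 * t**2 + x @ np.array([1.0, 0.5]) + rng.normal(size=n)

    # Step 1: normal linear model for the treatment given covariates
    X1 = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X1, t, rcond=None)[0]
    sigma = np.sqrt(np.mean((t - X1 @ beta) ** 2))
    gps = norm.pdf(t, loc=X1 @ beta, scale=sigma)                 # estimated GPS at the observed dose

    # Step 2: outcome regression on treatment and GPS
    X2 = np.column_stack([np.ones(n), t, t**2, gps, gps**2, t * gps])
    alpha = np.linalg.lstsq(X2, y, rcond=None)[0]

    # Step 3: average dose-response at a grid of treatment levels
    for t0 in (0.0, 1.0, 2.0):
        g0 = norm.pdf(t0, loc=X1 @ beta, scale=sigma)
        mu = np.column_stack([np.ones(n), np.full(n, t0), np.full(n, t0**2),
                              g0, g0**2, t0 * g0]) @ alpha
        print(t0, mu.mean())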

43 citations


Journal ArticleDOI
TL;DR: An extension of the classical normal censored model is developed by considering independent disturbances with identical Student-t distribution and an efficient EM-type algorithm for the estimation of the model parameters is developed.
Abstract: In statistical analysis, particularly in econometrics, it is usual to consider regression models where the dependent variable is censored (limited). In particular, a censoring scheme to the left of zero is considered here. In this article, an extension of the classical normal censored model is developed by considering independent disturbances with identical Student-t distribution. In the context of maximum likelihood estimation, an expression for the expected information matrix is provided, and an efficient EM-type algorithm for the estimation of the model parameters is developed. In order to determine which variables affect the income of housewives, the methods are applied to a real data set. A brief review of the normal censored regression model, or Tobit model, is also presented.
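
To make the construction concrete, here is a minimal sketch of the log-likelihood of a regression left-censored at zero with Student-t errors, maximized directly for brevity (the paper develops an EM-type algorithm instead; variable names and the fixed degrees of freedom are our choices):

    import numpy as np
    from scipy.stats import t as student_t
    from scipy.optimize import minimize

    def t_tobit_negloglik(theta, y, X, nu=4.0):
        # theta = (beta, log sigma); observed y = max(y*, 0)
        beta, sigma = theta[:-1], np.exp(theta[-1])
        xb = X @ beta
        cens = y <= 0
        ll_obs = student_t.logpdf((y[~cens] - xb[~cens]) / sigma, df=nu) - np.log(sigma)
        ll_cen = student_t.logcdf((0.0 - xb[cens]) / sigma, df=nu)
        return -(ll_obs.sum() + ll_cen.sum())

    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(500), rng.normal(size=500)])
    ystar = X @ np.array([0.5, 1.0]) + 0.8 * rng.standard_t(4, size=500)
    y = np.maximum(ystar, 0.0)
    res = minimize(t_tobit_negloglik, x0=np.zeros(3), args=(y, X))
    print(res.x[:2], np.exp(res.x[2]))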

42 citations


Journal ArticleDOI
TL;DR: It is pointed out that neglecting the existing dependence in fact overestimates the actual household fragility, and it is shown that the proposed method improves the estimation of household financial fragility.
Abstract: The paper is inspired by the stress–strength models in the reliability literature, in which, given the strength (Y) and the stress (X) of a component, its reliability is measured by P(X < Y). In our setting, a household is financially fragile when its consumption (X) exceeds its income (Y), so P(X > Y) is the measure of interest and X and Y are clearly not independent. Modeling income and consumption as non-identically Dagum distributed variables and their dependence by a Frank copula, we show that the proposed method improves the estimation of household financial fragility. Using data from the 2008 wave of the Bank of Italy’s Survey on Household Income and Wealth, we point out that neglecting the existing dependence in fact overestimates the actual household fragility.
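
As a sketch of how P(X > Y) could be evaluated with these ingredients (Dagum margins, Frank copula), one can sample from the copula by conditional inversion and transform with the Dagum quantile function; the parameter values below are arbitrary, and this is an illustration of the quantity, not the authors' estimation procedure:

    import numpy as np
    from scipy.optimize import brentq

    def frank_cond_cdf(v, u, theta):
        # C_{2|1}(v | u) for the Frank copula, theta != 0
        g = lambda z: np.expm1(-theta * z)
        return np.exp(-theta * u) * g(v) / (g(1.0) + g(u) * g(v))

    def dagum_quantile(u, a, b, p):
        # Dagum cdf: F(x) = (1 + (x/b)^(-a))^(-p)
        return b * (u ** (-1.0 / p) - 1.0) ** (-1.0 / a)

    rng = np.random.default_rng(2)
    n, theta = 10_000, 3.0                       # theta > 0: positive dependence
    u, w = rng.uniform(size=n), rng.uniform(size=n)
    v = np.array([brentq(lambda vv: frank_cond_cdf(vv, ui, theta) - wi, 1e-12, 1 - 1e-12)
                  for ui, wi in zip(u, w)])

    x = dagum_quantile(u, a=3.0, b=1.0, p=1.0)   # consumption
    y = dagum_quantile(v, a=3.5, b=1.2, p=1.0)   # income
    print("P(X > Y) approx:", np.mean(x > y))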

31 citations


Journal ArticleDOI
Jing Wang1
TL;DR: Results show that the Bayesian QR estimator provides a fuller examination of the shape of the conditional distribution of the response variable; the approach is developed for parametric nonlinear mixed effects models and may not generalize to models without a given model form.
Abstract: We propose quantile regression (QR) in the Bayesian framework for a class of nonlinear mixed effects models with a known, parametric model form for longitudinal data. Estimation of the regression quantiles is based on a likelihood-based approach using the asymmetric Laplace density. Posterior computations are carried out via Gibbs sampling and the adaptive rejection Metropolis algorithm. To assess the performance of the Bayesian QR estimator, we compare it with the mean regression estimator using real and simulated data. Results show that the Bayesian QR estimator provides a fuller examination of the shape of the conditional distribution of the response variable. Our approach is proposed for parametric nonlinear mixed effects models and therefore may not generalize to models without a given model form.
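
The key device is that maximizing an asymmetric Laplace likelihood is equivalent to minimizing the check loss of quantile regression. A minimal sketch of that equivalence for a single location parameter (illustrative code, not the paper's sampler):

    import numpy as np

    def check_loss(u, tau):
        # rho_tau(u) = u * (tau - 1{u < 0})
        return u * (tau - (u < 0))

    def ald_loglik(y, mu, tau, sigma=1.0):
        # Asymmetric Laplace: log f = log(tau*(1-tau)/sigma) - rho_tau((y-mu)/sigma)
        return np.log(tau * (1 - tau) / sigma) - check_loss((y - mu) / sigma, tau)

    rng = np.random.default_rng(3)
    y = rng.normal(size=10_000)
    grid = np.linspace(-3, 3, 2001)
    mu_hat = grid[np.argmax([ald_loglik(y, m, tau=0.9).sum() for m in grid])]
    print(mu_hat, np.quantile(y, 0.9))   # the ALD-likelihood maximizer is the 0.9 quantile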

29 citations


Journal ArticleDOI
Steven B. Caudill1
TL;DR: A partially adaptive estimator for the censored regression model based on an error structure described by a mixture of two normal distributions is introduced and applied to data on wife’s hours worked from Mroz (1987).
Abstract: The goal of this paper is to introduce a partially adaptive estimator for the censored regression model based on an error structure described by a mixture of two normal distributions. The model we introduce is easily estimated by maximum likelihood using an EM algorithm adapted from the work of Bartolucci and Scaccia (Comput Stat Data Anal 48:821–834, 2005). A Monte Carlo study is conducted to compare the small sample properties of this estimator to the performance of some common alternative estimators of censored regression models including the usual tobit model, the CLAD estimator of Powell (J Econom 25:303–325, 1984), and the STLS estimator of Powell (Econometrica 54:1435–1460, 1986). In terms of RMSE, our partially adaptive estimator performed well. The partially adaptive estimator is applied to data on wife’s hours worked from Mroz (1987). In this application we find support for the partially adaptive estimator over the usual tobit model.
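
A sketch of the likelihood that such a partially adaptive estimator targets: a tobit model whose error is a two-component normal mixture. For brevity the log-likelihood is maximized directly here rather than by the EM algorithm used in the paper (the parametrization and starting values are our choices):

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    def mixture_tobit_negloglik(theta, y, X):
        # theta: beta, logit(lambda), mu2, log s1, log s2; component 1 is centred at 0
        k = X.shape[1]
        beta = theta[:k]
        lam = 1 / (1 + np.exp(-theta[k]))
        mu2, s1, s2 = theta[k + 1], np.exp(theta[k + 2]), np.exp(theta[k + 3])
        xb = X @ beta
        cens = y <= 0
        def comp(mu, s):   # density for uncensored obs, cdf mass for censored obs
            return (norm.pdf(y[~cens], loc=xb[~cens] + mu, scale=s),
                    norm.cdf(0.0, loc=xb[cens] + mu, scale=s))
        d1, c1 = comp(0.0, s1)
        d2, c2 = comp(mu2, s2)
        return -(np.log(lam * d1 + (1 - lam) * d2).sum()
                 + np.log(lam * c1 + (1 - lam) * c2).sum())

    rng = np.random.default_rng(4)
    X = np.column_stack([np.ones(800), rng.normal(size=800)])
    eps = np.where(rng.uniform(size=800) < 0.7,
                   rng.normal(0.0, 0.5, 800), rng.normal(1.5, 2.0, 800))
    y = np.maximum(X @ np.array([0.3, 1.0]) + eps, 0.0)
    res = minimize(mixture_tobit_negloglik, np.zeros(X.shape[1] + 4),
                   args=(y, X), method="Nelder-Mead")
    print(res.x)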

27 citations


Journal ArticleDOI
TL;DR: The linear correlation coefficient is obtained, showing that, although the family does not allow strong correlation, the coefficient can be greater than its value in the Farlie–Gumbel–Morgenstern family.
Abstract: In this paper we first develop a Sarmanov–Lee bivariate family of distributions with the beta and gamma as marginal distributions. We obtain the linear correlation coefficient, showing that, although the family does not allow strong correlation, the coefficient can be greater than its value in the Farlie–Gumbel–Morgenstern family. We also determine other measures for this family: the coefficient of median concordance and the relative entropy, which are analyzed by comparison with the case of independence. Second, we consider the problem of premium calculation in a Poisson–Lindley and exponential collective risk model, where the Sarmanov–Lee family is used as a structure function. We determine the collective and Bayes premiums, whose values are analyzed when independence and dependence between the risk profiles are considered, finding that notable variations in premium values arise even when low levels of correlation are considered.

26 citations


Journal ArticleDOI
TL;DR: A hierarchical Bayesian factor model for multivariate spatially correlated data is proposed; it allows the inclusion of prior opinions about adjacent regions having highly correlated observable and latent variables.
Abstract: A hierarchical Bayesian factor model for multivariate spatially correlated data is proposed. Multiple cancer incidence data in Scotland are jointly analyzed, looking for common components able to detect etiological factors of diseases hidden behind the data. The proposed method searches for factor scores that incorporate the dependence among observations induced by the geographical structure. The great flexibility of the Bayesian approach allows the inclusion of prior opinions about adjacent regions having highly correlated observable and latent variables. The proposed model extends a model proposed by Rowe (2003a) and starts from the introduction of a separable covariance matrix for the observations. A Gibbs sampling algorithm is implemented to sample from the posterior distributions.
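
In a Bayesian factor model, a core Gibbs step samples the factor scores from their conditional normal posterior. A generic, non-spatial version of that step is sketched below for orientation; the paper's model additionally ties the scores together spatially through a separable covariance structure:

    import numpy as np

    def sample_factor_scores(Y, Lam, Psi, rng):
        # Model: y_i = Lam f_i + e_i,  e_i ~ N(0, Psi),  f_i ~ N(0, I).
        # Conditional posterior: f_i | y_i ~ N(V Lam' Psi^{-1} y_i, V),
        # with V = (I + Lam' Psi^{-1} Lam)^{-1}.
        k = Lam.shape[1]
        Pinv = np.linalg.inv(Psi)
        V = np.linalg.inv(np.eye(k) + Lam.T @ Pinv @ Lam)
        mean = Y @ Pinv @ Lam @ V          # row i is the posterior mean of f_i
        L = np.linalg.cholesky(V)
        return mean + rng.normal(size=(Y.shape[0], k)) @ L.T

    rng = np.random.default_rng(5)
    Lam = np.array([[1.0], [0.8], [0.6]])
    Psi = np.diag([0.5, 0.4, 0.3])
    F = rng.normal(size=(200, 1))
    Y = F @ Lam.T + rng.normal(size=(200, 3)) * np.sqrt(np.diag(Psi))
    print(sample_factor_scores(Y, Lam, Psi, rng).std())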

18 citations


Journal ArticleDOI
TL;DR: It can be shown that no multi-stage design of this family of methods can, at the same level of privacy protection, theoretically be more efficient than its one-stage basic version.
Abstract: If nonresponse and/or untruthful answering mechanisms occur, analyzing only the available cases may substantially weaken the validity of sample results. The paper starts with a reference to strategies of empirical social researchers related to respondent cooperation in surveys, embedding the statistical techniques of randomized response in this framework. Further, multi-stage randomized response techniques are incorporated into the standardized randomized response technique for estimating proportions. In addition to already existing questioning designs of this family of methods, this generalization also includes several (in particular, two-stage) techniques that have not been published before. The statistical properties of this generalized design are discussed for all probability sampling designs. Further, the efficiency of the model is presented as a function of privacy protection. Hence, it can be shown that no multi-stage design of this family can, at the same level of privacy protection, theoretically be more efficient than its one-stage basic version.
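
The one-stage benchmark of this family is Warner's classical randomized response design: a randomizing device makes each respondent answer either the sensitive question (with probability p) or its complement (with probability 1 - p), and the population proportion is recovered by unbiasing the observed "yes" rate. A small simulation with the standard Warner estimator and its variance:

    import numpy as np

    rng = np.random.default_rng(6)
    n, p, pi_true = 5000, 0.7, 0.2               # design probability p must differ from 0.5

    member = rng.uniform(size=n) < pi_true       # true sensitive status (unobserved in practice)
    direct = rng.uniform(size=n) < p             # which question the device selects
    answer_yes = np.where(direct, member, ~member)

    lam_hat = answer_yes.mean()                  # observed "yes" proportion
    pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)   # Warner's unbiased estimator

    # Binomial variance plus the privacy-protection penalty
    var_hat = pi_hat * (1 - pi_hat) / n + p * (1 - p) / (n * (2 * p - 1) ** 2)
    print(pi_hat, np.sqrt(var_hat))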

14 citations


Journal ArticleDOI
TL;DR: Estimates of the joint distribution of voters between the European Parliament election and the other two elections provide evidence of substantially different kinds of voting behaviour which, given the specific context, are interpreted in the light of the recent literature on the subject.
Abstract: When elections are close in time, voters may stick to their preferred party or choose a different option for several reasons; reliable estimates of the amount of transitions across the available options make it possible to answer a number of relevant questions about electoral behaviour. We describe a modified version of the model due to Brown and Payne (J Am Stat Assoc 81:453–460, 1986) and argue that it is based on simple, yet realistic, assumptions with a direct interpretation in terms of individual behaviour, and that it compares well with other models proposed more recently. We apply the model to an Italian borough where, during June 2009, two elections were held simultaneously and a runoff took place two weeks later. Estimates of the joint distribution of voters between the European Parliament election and the other two elections provide evidence of substantially different kinds of voting behaviour which, given the specific context, we interpret in the light of the recent literature on the subject.

Journal ArticleDOI
Ali Akbar Jafari1
TL;DR: This article considers constructing a confidence interval and testing hypotheses about the ratio of two independent generalized variances, and the ratio of two dependent generalized variances, in two multivariate normal populations.
Abstract: Statistical inferences about the dispersion of a multivariate population are determined by the generalized variance. In this article, we consider constructing a confidence interval and testing hypotheses about the ratio of two independent generalized variances, and the ratio of two dependent generalized variances, in two multivariate normal populations. In the case of independence, we first propose a computational approach and then obtain an approximate approach. In the case of dependence, we give an approach using the concepts of generalized confidence interval and generalized p-value. In each case, simulation studies are performed to compare the methods, and we find satisfactory results. Practical examples are given for each approach.
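
One way to realize a computational approach uses a classical fact: for a normal sample of size n, det((n-1)S)/det(Sigma) is distributed as a product of independent chi-square variates with n-1, ..., n-p degrees of freedom. A sketch of a simulation-based interval for the ratio of two independent generalized variances built on that fact (our construction, shown only to fix ideas):

    import numpy as np

    def gv_ratio_ci(S1, n1, S2, n2, level=0.95, B=100_000, rng=None):
        # det((n-1)S) / det(Sigma) ~ prod_{j=1..p} chi2_{n-j}   (normal samples)
        rng = np.random.default_rng() if rng is None else rng
        p = S1.shape[0]
        def chi2_prod(n):
            dfs = np.arange(n - 1, n - 1 - p, -1)        # n-1, ..., n-p
            return np.prod(rng.chisquare(dfs, size=(B, p)), axis=1)
        ratio = np.linalg.det(S1) / np.linalg.det(S2)
        pivot = ratio * ((n1 - 1) / (n2 - 1)) ** p * chi2_prod(n2) / chi2_prod(n1)
        return np.quantile(pivot, [(1 - level) / 2, (1 + level) / 2])

    rng = np.random.default_rng(7)
    X1 = rng.multivariate_normal(np.zeros(3), np.eye(3), size=50)
    X2 = rng.multivariate_normal(np.zeros(3), 2 * np.eye(3), size=60)
    print(gv_ratio_ci(np.cov(X1.T), 50, np.cov(X2.T), 60))   # true ratio is 1/8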

Journal ArticleDOI
TL;DR: Threshold nonlinear nonstationary models based on several regimes both in time and in levels are proposed; they fit all series satisfactorily, allow a closer description of the evolution of temperature changes, and help to discover the essential differences in the behavior of the different stations.
Abstract: The annual temperatures recorded for the last two centuries in fifteen European stations around the Alps are analyzed. They show a global warming whose growth rate is not, however, constant in time. An analysis based on linear ARIMA models does not provide accurate results. Thus, we propose threshold nonlinear nonstationary models based on several regimes both in time and in levels. Such models fit all series satisfactorily, allow a closer description of the evolution of temperature changes, and help to discover the essential differences in the behavior of the different stations.
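
A minimal illustration of threshold autoregression in this spirit: a two-regime SETAR(1) fitted by least squares over a grid of candidate thresholds. The paper's models are considerably richer, with regimes in both time and levels, so this is only a toy version of the idea:

    import numpy as np

    def fit_setar(y, trim=0.15):
        # y_t = a1 + b1*y_{t-1} if y_{t-1} <= r, else a2 + b2*y_{t-1}
        z, yy = y[:-1], y[1:]
        best_sse, best_r = np.inf, None
        for r in np.quantile(z, np.linspace(trim, 1 - trim, 50)):
            sse = 0.0
            for mask in (z <= r, z > r):
                X = np.column_stack([np.ones(mask.sum()), z[mask]])
                beta = np.linalg.lstsq(X, yy[mask], rcond=None)[0]
                sse += np.sum((yy[mask] - X @ beta) ** 2)
            if sse < best_sse:
                best_sse, best_r = sse, r
        return best_r

    rng = np.random.default_rng(8)
    y = np.zeros(500)
    for t in range(1, 500):   # simulate a two-regime AR(1) with threshold 0
        y[t] = (0.8 * y[t-1] if y[t-1] <= 0 else -0.4 + 0.2 * y[t-1]) + rng.normal(scale=0.5)
    print("estimated threshold:", fit_setar(y))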

Journal ArticleDOI
TL;DR: This paper derives elementary M- and optimally robust asymptotic linear (AL)-estimates for the parameters of an Ornstein–Uhlenbeck process and discusses the estimator construction, i.e. the problem of constructing an estimator from the family of optimal ICs.
Abstract: In this paper, we derive elementary M- and optimally robust asymptotic linear (AL)-estimates for the parameters of an Ornstein–Uhlenbeck process. Simulation and estimation of the process are already well-studied, see Iacus (Simulation and inference for stochastic differential equations. Springer, New York, 2008). However, in order to protect against outliers and deviations from the ideal law, the formulation of suitable neighborhood models and a corresponding robustification of the estimators are necessary. As a measure of robustness, we consider the maximum asymptotic mean square error (maxasyMSE), which is determined by the influence curve (IC) of AL estimates. The IC represents the standardized influence of an individual observation on the estimator given the past. In a first step, we extend the method of M-estimation from Huber (Robust statistics. Wiley, New York, 1981). In a second step, we apply the general theory based on local asymptotic normality, AL estimates, and shrinking neighborhoods due to Kohl et al. (Stat Methods Appl 19:333–354, 2010), Rieder (Robust asymptotic statistics. Springer, New York, 1994), Rieder (2003), and Staab (1984). This leads to optimally robust ICs whose graph exhibits surprising behavior. In the end, we discuss the estimator construction, i.e. the problem of constructing an estimator from the family of optimal ICs. To this end, we carry out in our context the one-step construction dating back to LeCam (Asymptotic methods in statistical decision theory. Springer, New York, 1969) and compare it by means of simulations with the MLE and the M-estimator.
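
For intuition: sampled at step Delta, an Ornstein-Uhlenbeck process is an AR(1) with coefficient exp(-lambda*Delta), so the simplest robustification replaces least squares with a Huber-type M-estimate of that coefficient. The sketch below illustrates this basic M-step only, not the optimally robust influence-curve construction of the paper:

    import numpy as np

    def huber_psi(u, k=1.345):
        return np.clip(u, -k, k)

    def m_estimate_ar1(x, k=1.345, iters=50):
        # Solve sum psi((x_t - a*x_{t-1}) / s) * x_{t-1} = 0 by reweighted LS
        y, z = x[1:], x[:-1]
        a = np.sum(y * z) / np.sum(z * z)                    # least-squares start
        for _ in range(iters):
            r = y - a * z
            s = 1.4826 * np.median(np.abs(r - np.median(r))) # robust scale (MAD)
            u = r / s
            w = np.where(u != 0, huber_psi(u, k) / np.where(u != 0, u, 1.0), 1.0)
            a = np.sum(w * y * z) / np.sum(w * z * z)
        return a

    rng = np.random.default_rng(9)
    lam, dt, n = 1.0, 0.1, 2000
    x = np.zeros(n)
    for t in range(1, n):   # exact OU transition with unit diffusion
        x[t] = np.exp(-lam*dt) * x[t-1] + np.sqrt((1 - np.exp(-2*lam*dt)) / (2*lam)) * rng.normal()
    x[rng.integers(0, n, 20)] += 5.0                         # inject outliers
    print("lambda_hat:", -np.log(m_estimate_ar1(x)) / dt)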

Journal ArticleDOI
TL;DR: It is proved that individual factors (the compositional effect), even though they represent the most important correlates of health, do not completely explain intra-regional heterogeneity, confirming the existence of an autonomous contextual effect.
Abstract: The aim of this study is to explore whether the context matters in explaining socioeconomic inequality in the self-rated health of Italian elderly people. Our hypothesis is that health status perception is associated with the existing huge imbalances among Italian areas. A multilevel approach is applied to account for the natural hierarchical structure, with individuals nested in geographical regions. Multilevel logistic regression models are fitted, including both individual and contextual variables, using data from the 2005 Italian Health Survey. We prove that individual factors (the compositional effect), even though they represent the most important correlates of health, do not completely explain intra-regional heterogeneity, confirming the existence of an autonomous contextual effect. These territorial differences are present among both Regions and large areas, two geographical aggregations relevant in the domain of health. Moreover, for some Regions, accounting for contextual factors explains variations in perceived health, leading to a reversal of the initial situation: these Regions perform better than expected in the field of health. For other Regions, the contextual elements introduced do not capture the milieu heterogeneity. In this regard, we expect, and solicit, a greater effort toward data availability, qualitative and quantitative, that might help in explaining residual territorial heterogeneity in health perception, a fundamental starting point for targeting specific policy interventions.

Journal ArticleDOI
TL;DR: A closer look at the best-performing non-standard workers shows that even for them an early contractual stabilization may not always be expected, and that there are differential effects on wages associated with non-standard patterns.
Abstract: We focus on the work histories of new entrants in 1998 in the Italian labour market. For workers in the private sector, we define a standard and three non-standard history patterns. We profile the workers through a mixed-effect multinomial logit model and show that certain features may be associated with the probability of belonging to one or the other category. Furthermore, we show that there are differential effects on wages associated with non-standard patterns. A closer look at the best-performing non-standard workers shows that even for them an early contractual stabilization may not always be expected.

Journal ArticleDOI
TL;DR: A new algorithm for grouping sparse data to create pseudo replicates and using them to test for lack of fit is developed, and analysis of a dataset consisting of the ages of menarche of Warsaw girls is used to compare the new and existing lack-of-fit tests.
Abstract: The usefulness of logistic regression depends to a great extent on the correct specification of the relation between a binary response and characteristics of the unit on which the response is recorded. Currently used methods for testing for misspecification (lack of fit) of a proposed logistic regression model do not perform well when a data set contains almost as many distinct covariate vectors as experimental units, a condition referred to as sparsity. A new algorithm for grouping sparse data to create pseudo replicates and using them to test for lack of fit is developed. A simulation study illustrates settings in which the new test is superior to existing ones. Analysis of a dataset consisting of the ages of menarche of Warsaw girls is also used to compare the new and existing lack-of-fit tests.
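
The grouping idea can be illustrated with the classical Hosmer-Lemeshow construction, which bins observations by fitted probability and compares observed with expected counts; the paper's algorithm groups sparse covariate vectors differently, but the resulting chi-square statistic has the same flavor:

    import numpy as np
    from scipy.stats import chi2

    def hosmer_lemeshow(y, p_hat, g=10):
        # Group by deciles of the fitted probabilities
        edges = np.quantile(p_hat, np.linspace(0, 1, g + 1))
        groups = np.clip(np.searchsorted(edges, p_hat, side="right") - 1, 0, g - 1)
        stat = 0.0
        for j in range(g):
            m = groups == j
            o, e, n = y[m].sum(), p_hat[m].sum(), m.sum()
            stat += (o - e) ** 2 / (e * (1 - e / n))
        return stat, chi2.sf(stat, g - 2)

    rng = np.random.default_rng(10)
    x = rng.normal(size=3000)
    p = 1 / (1 + np.exp(-(0.5 + x)))              # fitted = true model here
    y = (rng.uniform(size=3000) < p).astype(float)
    print(hosmer_lemeshow(y, p))                  # should not reject, on average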

Journal ArticleDOI
TL;DR: This work focuses on relationships between stationary point processes using spectral analysis techniques, and it is proved that the asymptotic distribution of the square root of the estimated PRA is Normal with a constant variance.
Abstract: In this work we focus on relationships between stationary point processes using spectral analysis techniques. The evaluation of these relationships is accomplished with the help of the product ratio of association (PRA), which is based on the cumulant densities of the point processes. The estimation procedure is obtained by smoothing the periodogram statistic, a function of the frequency domain. It is proved that the asymptotic distribution of the square root of the estimated PRA is Normal with a constant variance. Statistical tests for hypotheses concerning the independence of two point processes and the characterization of a Poisson process are proposed. Furthermore, approximate 95% pointwise confidence intervals can be obtained for the estimated PRA. These results can be applied to stochastic systems whose input and output are stationary point processes. An illustrative example from the framework of neurophysiology is presented.

Journal ArticleDOI
TL;DR: In this paper, assuming that returns follow a stationary and ergodic stochastic process, the asymptotic distribution of the natural estimator of the Sharpe ratio is explicitly given, and this distribution is used to define an approximate confidence interval for the Sharpe ratio.
Abstract: In this paper, assuming that returns follow a stationary and ergodic stochastic process, the asymptotic distribution of the natural estimator of the Sharpe ratio is explicitly given. This distribution is used to define an approximate confidence interval for the Sharpe ratio. Particular attention is devoted to the case of the GARCH(1,1) process. In this latter case, a simulation study is performed in order to evaluate the minimum sample size for reaching a good coverage accuracy of the asymptotic confidence intervals.
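
A sketch of the kind of interval involved: with SR = mu/sigma computed from the first two sample moments, the delta method gives the asymptotic variance from the joint CLT for (r_t, r_t^2); under serial dependence (e.g. GARCH returns) the long-run covariance can be estimated with a Newey-West kernel. The bandwidth choice and notation below are ours:

    import numpy as np
    from scipy.stats import norm

    def sharpe_ci(r, level=0.95, lags=10):
        n = len(r)
        mu, m2 = r.mean(), (r ** 2).mean()
        sd = np.sqrt(m2 - mu ** 2)
        sr = mu / sd
        # Newey-West long-run covariance of (r_t - mu, r_t^2 - m2)
        Z = np.column_stack([r - mu, r ** 2 - m2])
        S = Z.T @ Z / n
        for l in range(1, lags + 1):
            G = Z[l:].T @ Z[:-l] / n
            S += (1 - l / (lags + 1)) * (G + G.T)
        # Delta method for g(mu, m2) = mu / sqrt(m2 - mu^2)
        grad = np.array([1 / sd + mu ** 2 / sd ** 3, -mu / (2 * sd ** 3)])
        se = np.sqrt(grad @ S @ grad / n)
        z = norm.ppf((1 + level) / 2)
        return sr - z * se, sr + z * se

    rng = np.random.default_rng(11)
    r = 0.01 + 0.05 * rng.standard_normal(2000)
    print(sharpe_ci(r))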

Journal ArticleDOI
TL;DR: Results suggest that when the evaluations are strongly affected by the students’ covariates, an assessment based on the value of an unadjusted indicator can lead to biased and unreliable conclusions about the differences in performance.
Abstract: Taking into account the students’ evaluation of the quality of degree programs, this paper presents a proposal for building an adjusted performance indicator based on Latent Class Regression Analysis. The method enables us (i) to summarize in a single indicator the multiple facets evaluated by students through a survey questionnaire and (ii) to control the variability in the evaluations that is mainly attributable to the characteristics (often referred to as the Potential Confounding Factors) of the evaluators (students) rather than to real differences in the performances of the degree programs under evaluation. A simulation study is implemented in order to test the method and assess its potential when the composition of the degree programs with regard to students’ characteristics differs considerably between programs. Results suggest that when the evaluations are strongly affected by the students’ covariates, an assessment based on the value of an unadjusted indicator can lead to biased and unreliable conclusions about the differences in performance. An application to real data is also provided.

Journal ArticleDOI
TL;DR: The main aim of this work is to prove consistency and asymptotic normality of martingale estimators of the parameters of stochastic models for the spread of epidemics developing in closed populations.
Abstract: This article is a contribution to the asymptotic inference on the parameters of a quite general class of stochastic models for the spread of epidemics developing in closed populations. Various epidemic models are contained within our framework, for instance, a stochastic version of the Kermack and McKendrick model and the SIS epidemic model. Each model belonging to this class, which consists of a family of discrete-time stochastic processes, contains certain parameters to be estimated by means of martingale estimators. Some particular cases defined by means of Markov chains are included in our setting. The main aim of this work is to prove consistency and asymptotic normality of these estimators. Some hypothesis tests based on the main results are also shown.

Journal ArticleDOI
TL;DR: This paper analyzes the daily concentrations of three pollutants highly relevant in such an industrial area, namely SO2, NO2 and PM10, with the aim of reconstructing daily pollutant concentration surfaces for the town area, and proposes a full Bayesian separable space-time hierarchical model for each pollutant concentration series.
Abstract: An analysis of air quality data is provided for the municipal area of Taranto (Italy), characterized by high environmental risks as decreed by the Italian government in the 1990s. In the context of an agreement between the Dipartimento di Scienze Statistiche (Università degli Studi di Bari) and the local regional environmental protection agency, air quality data were provided concerning six monitoring stations and covering the years 2005 to 2007. In this paper we analyze the daily concentrations of three pollutants highly relevant in such an industrial area, namely SO2, NO2 and PM10, with the aim of reconstructing daily pollutant concentration surfaces for the town area. Taking into account the large amount of sparse missing data and the non-normality affecting the pollutants’ concentrations, we propose a full Bayesian separable space-time hierarchical model for each pollutant concentration series. The proposed model embeds missing-data imputation and prediction of pollutant concentrations. We critically discuss the results, highlighting advantages and disadvantages of the proposed methodology.
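
Separability here means the space-time covariance factorizes into a purely spatial and a purely temporal component, which keeps the joint covariance tractable via a Kronecker product. A minimal illustration with generic exponential and AR(1)-type covariances (not the paper's exact specification):

    import numpy as np

    # Spatial covariance over monitoring sites (exponential decay in distance)
    sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
    D = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
    C_space = np.exp(-D / 1.5)

    # Temporal covariance over days (AR(1)-type decay)
    T = 5
    C_time = 0.6 ** np.abs(np.subtract.outer(np.arange(T), np.arange(T)))

    # Separable joint covariance of the stacked space-time vector
    C = np.kron(C_time, C_space)
    print(C.shape)   # (15, 15) for 3 sites x 5 days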

Journal ArticleDOI
TL;DR: Two empirical applications are presented in order to compare the estimated parameters of the quarterly models for German and US gross domestic products with those of the corresponding models aggregated to annual frequency.
Abstract: This paper focuses on temporal aggregation of the cyclical component model as introduced by Harvey (1989). More specifically, it provides the properties of the aggregate process for any generic period of aggregation. As a consequence, the exact link between aggregate and disaggregate parameters can be easily derived. The cyclical model is important due to its relevance in the analysis of business cycle. Given this, two empirical applications are presented in order to compare the estimated parameters of the quarterly models for German and US gross domestic products with those of the corresponding models aggregated to annual frequency.

Journal ArticleDOI
TL;DR: This discussion focuses on threshold nonstationary–nonlinear time series modelling and raises various issues to do with identifiability and model complexity.
Abstract: This discussion focuses on threshold nonstationary–nonlinear time series modelling; it raises various issues to do with identifiability and model complexity. It also gives some background history concerning smooth threshold/transition autoregressive models and hidden Markov switching models.

Journal ArticleDOI
TL;DR: Some of the statistical results of the Battaglia and Protopapas paper are discussed in relation to findings and questions arising from the underlying physics of the climate system, including some climate impacts.
Abstract: This discussion does not go into specific statistical details. It expresses the impressions of a climatologist when reading a paper about statistical methodologies applied to climate time series. Thus the main goal of the discussion is a typical interdisciplinary one. Some of the statistical results of the Battaglia and Protopapas paper are discussed in relation to findings and questions arising from the underlying physics of the climate system, including some climate impacts.

Journal ArticleDOI
TL;DR: In this contribution some of the statistical results presented in the paper are commented on, and a different approach to the problem, based on a temporal aggregation analysis, is proposed; it can help to highlight some features in the data.
Abstract: We discuss the paper by Battaglia and Protopapas concerning the analysis of the global warming phenomenon in the Alpine region. The Authors consider a nonlinear model which takes into account regimes in time and levels. In this contribution some of the statistical results presented in the paper are commented on, and a different approach to the problem is proposed. It is based on a temporal aggregation analysis and can help to highlight some features in the data.

Journal ArticleDOI
TL;DR: The present paper is essentially a statistical analysis based on a particular model, by no means exhaustive, while a deeper study would require a more specific climate knowledge.
Abstract: Let us first of all thank the Editors for organizing such a wide, deep and interesting discussion, and the authoritative scientists who accepted to give their contribution. We are particularly proud that Professor Tong participated in the discussion, since he is the father of the threshold principle, and the first proposal of a nonlinear nonstationary threshold model is also due to him. A first general suggestion is to widen the proposed analysis, taking into account at least monthly data for analyzing seasonality, as so convincingly argued by Professor Bohm, and spatial relationships, possibly integrating a dynamic cluster analysis. This is undoubtedly a crucial point, and calls for further study. Moreover, the present paper is essentially a statistical analysis based on a particular model, by no means exhaustive, while a deeper study would require more specific climate knowledge. We are particularly honored that two authoritative climatologists accepted to read our paper and to contribute to this discussion, and hope this may be an occasion for future fruitful collaboration. A second general question, as raised by Professor Piccolo, is more delicate and concerns the concept of the data generating process, and the extent to which a model should try to reproduce it rather than privileging fitting. This is a controversial issue: we reported Rissanen's position, Professor Tong cites Box's famous dictum that all models are wrong but some are useful, and Professor Piccolo's position seems partially different. Such an important issue, dating back at least to the measurement without theory debate (Koopmans 1947), cannot be appropriately discussed here; we just confirm that our

Journal ArticleDOI
TL;DR: An elegant mathematical generalization of autoregressive models (the nine types) is given and state-of-the-art model fitting techniques (genetic algorithm combined with fitness function and least squares) are explained.
Abstract: The paper by Battaglia and Protopapas (Stat Method Appl 2012) is stimulating. It gives an elegant mathematical generalization of autoregressive models (the nine types). It explains state-of-the-art model fitting techniques (genetic algorithm combined with fitness function and least squares). It is written in a fluent and authoritative manner. Important for having a wider impact: it is accessible to non-statisticians. Finally, it has interesting results on the temperature evolution over the instrumental period (roughly the past 200 years). These merits make this paper an important contribution to applied statistics as well as climatology. As a climate researcher, coming from Physics and having had an affiliation with a statistical institute only as postdoc, I re-analyse here three data series with the aim of providing motivation for model selection and interpreting the results from the climatological perspective.

Journal ArticleDOI
TL;DR: The scientific contribution of Battaglia and Protopapas’ paper concerning the debate on global warming supported by an extensive analysis of temperature time series in the Alpine region is discussed.
Abstract: We discuss the scientific contribution of Battaglia and Protopapas’ paper concerning the debate on global warming, supported by an extensive analysis of temperature time series in the Alpine region. In their work, the Authors use several exploratory and modelling tools for assessing and discriminating the presence of different patterns in the data. We add some general and specific considerations, mainly devoted to the modelling stage of their analysis.

Journal ArticleDOI
TL;DR: This paper exploits the continuous wavelet transform’s capabilities in derivative calculation to construct a two-step estimator of the scaling exponent of the n-fBm process, and discusses a weighted least squares regression-based estimator for this class of stochastic processes.
Abstract: In this paper, we investigate the use of wavelet techniques in the study of the nth order fractional Brownian motion (n-fBm). First, we exploit the continuous wavelet transform’s capabilities in derivative calculation to construct a two-step estimator of the scaling exponent of the n-fBm process. We show, via simulation, that the proposed method improves the estimation performance for n-fBm signals contaminated by large-scale noise. Second, we analyze the statistical properties of the n-fBm process in the time-scale plane. We demonstrate that, for a convenient choice of the wavelet basis, the discrete wavelet detail coefficients of the n-fBm process are stationary at each resolution level, whereas their variance exhibits a power-law behavior. Using the latter property, we discuss a weighted least squares regression-based estimator for this class of stochastic processes. Experiments carried out on simulated and real-world datasets prove the relevance of the proposed method.
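
To illustrate the second idea (a log-variance regression on wavelet detail coefficients): for fractional Brownian motion with Hurst exponent H, the variance of the level-j Haar detail coefficients grows approximately like 2^(j(2H+1)), so a weighted least squares fit of log2 variance on level recovers the exponent. The sketch below checks this on ordinary Brownian motion (H = 0.5, i.e. the n = 1 case with no extra integration); it is a rough illustration, not the paper's n-fBm estimator:

    import numpy as np

    def haar_details(x):
        # Haar pyramid: return (level, detail coefficients) pairs
        out, j = [], 1
        while len(x) >= 4:
            n = len(x) // 2 * 2
            d = (x[1:n:2] - x[0:n:2]) / np.sqrt(2)   # details at level j
            x = (x[1:n:2] + x[0:n:2]) / np.sqrt(2)   # approximation for level j+1
            out.append((j, d))
            j += 1
        return out

    def hurst_from_details(x):
        # Var(d_j) ~ 2^(j(2H+1)): regress log2 variance on level,
        # weighting each level by its number of coefficients
        lv = haar_details(x)
        j = np.array([l for l, d in lv], float)
        logv = np.array([np.log2(np.var(d)) for l, d in lv])
        w = np.sqrt([len(d) for l, d in lv])
        slope = np.polyfit(j, logv, 1, w=w)[0]
        return (slope - 1) / 2

    rng = np.random.default_rng(12)
    bm = np.cumsum(rng.normal(size=2**16))           # Brownian motion: H = 0.5
    print(hurst_from_details(bm))                    # approximately 0.5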