
Showing papers on "Parametric statistics published in 2015"


Journal ArticleDOI
TL;DR: Model reduction aims to reduce the computational burden by generating reduced models that are faster and cheaper to simulate, yet accurately represent the original large-scale system behavior. While model reduction of linear, nonparametric dynamical systems has reached a considerable level of maturity, as reflected by several survey papers and books, parametric model reduction has emerged only more recently and is surveyed in this paper.
Abstract: Numerical simulation of large-scale dynamical systems plays a fundamental role in studying a wide range of complex physical phenomena; however, the inherent large-scale nature of the models often leads to unmanageable demands on computational resources. Model reduction aims to reduce this computational burden by generating reduced models that are faster and cheaper to simulate, yet accurately represent the original large-scale system behavior. Model reduction of linear, nonparametric dynamical systems has reached a considerable level of maturity, as reflected by several survey papers and books. However, parametric model reduction has emerged only more recently as an important and vibrant research area, with several recent advances making a survey paper timely. Thus, this paper aims to provide a resource that draws together recent contributions in different communities to survey the state of the art in parametric model reduction methods. Parametric model reduction targets the broad class of problems for wh...

1,230 citations
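As a concrete illustration of the projection-based reduction this survey covers, the sketch below builds a reduced model of a generic linear system dx/dt = Ax + Bu by Galerkin projection onto a POD basis computed from simulation snapshots. This is a minimal, generic example with made-up dimensions and data, not the paper's own method.

```python
# Minimal sketch: projection-based model reduction (POD + Galerkin) for a
# linear system  dx/dt = A x + B u,  y = C x.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 1                                    # full state dimension, inputs
A = -np.diag(rng.uniform(0.1, 10.0, n))          # stable full-order dynamics
B = rng.standard_normal((n, m))
C = rng.standard_normal((1, n))

# Collect snapshots of the full state under a step input (forward Euler).
dt, steps = 1e-3, 2000
x = np.zeros(n)
snapshots = []
for _ in range(steps):
    x = x + dt * (A @ x + B[:, 0] * 1.0)
    snapshots.append(x.copy())
X = np.array(snapshots).T                        # n x steps snapshot matrix

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 10                                           # reduced dimension
V = U[:, :r]

# Galerkin projection: reduced operators act on r-dimensional coordinates.
Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V
print("reduced system dimensions:", Ar.shape, Br.shape, Cr.shape)
```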


Journal ArticleDOI
TL;DR: Adaptive branch-site random effects likelihood (aBSREL), whose key innovation is variable parametric complexity chosen with an information theoretic criterion, delivers statistical performance matching or exceeding best-in-class existing approaches, while running an order of magnitude faster.
Abstract: Over the past two decades, comparative sequence analysis using codon-substitution models has been honed into a powerful and popular approach for detecting signatures of natural selection from molecular data. A substantial body of work has focused on developing a class of “branch-site” models which permit selective pressures on sequences, quantified by the ω ratio, to vary among both codon sites and individual branches in the phylogeny. We develop and present a method in this class, adaptive branch-site random effects likelihood (aBSREL), whose key innovation is variable parametric complexity chosen with an information theoretic criterion. By applying models of different complexity to different branches in the phylogeny, aBSREL delivers statistical performance matching or exceeding best-in-class existing approaches, while running an order of magnitude faster. Based on simulated data analysis, we offer guidelines for what extent and strength of diversifying positive selection can be detected reliably and suggest that there is a natural limit on the optimal parametric complexity for “branch-site” models. An aBSREL analysis of 8,893 Euteleostomes gene alignments demonstrates that over 80% of branches in typical gene phylogenies can be adequately modeled with a single ω ratio model, that is, current models are unnecessarily complicated. However, there are a relatively small number of key branches, whose identities are derived from the data using a model selection procedure, for which it is essential to accurately model evolutionary complexity.

501 citations


Journal ArticleDOI
TL;DR: The ECCO v4 non-linear inverse modeling framework and its baseline solution for the evolving ocean state over the period 1992-2011 are publicly available and subjected to regular, automated regression tests as mentioned in this paper.
Abstract: . This paper presents the ECCO v4 non-linear inverse modeling framework and its baseline solution for the evolving ocean state over the period 1992–2011. Both components are publicly available and subjected to regular, automated regression tests. The modeling framework includes sets of global conformal grids, a global model setup, implementations of data constraints and control parameters, an interface to algorithmic differentiation, as well as a grid-independent, fully capable Matlab toolbox. The baseline ECCO v4 solution is a dynamically consistent ocean state estimate without unidentified sources of heat and buoyancy, which any interested user will be able to reproduce accurately. The solution is an acceptable fit to most data and has been found to be physically plausible in many respects, as documented here and in related publications. Users are being provided with capabilities to assess model–data misfits for themselves. The synergy between modeling and data synthesis is asserted through the joint presentation of the modeling framework and the state estimate. In particular, the inverse estimate of parameterized physics was instrumental in improving the fit to the observed hydrography, and becomes an integral part of the ocean model setup available for general use. More generally, a first assessment of the relative importance of external, parametric and structural model errors is presented. Parametric and external model uncertainties appear to be of comparable importance and dominate over structural model uncertainty. The results generally underline the importance of including turbulent transport parameters in the inverse problem.

388 citations


Journal ArticleDOI
TL;DR: The Standardized Drought Analysis Toolbox (SDAT) as mentioned in this paper is based on a nonparametric framework that can be applied to different climatic variables including precipitation, soil moisture and relative humidity, without having to assume representative parametric distributions.

274 citations
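As a rough illustration of the nonparametric standardization idea described above, the sketch below maps a climatic series to a standardized index via rank-based empirical probabilities and the inverse standard-normal CDF. The Gringorten plotting position used here is an assumption for the sketch, not a detail taken from the paper, and the data are synthetic.

```python
# Sketch of a nonparametric standardized index: empirical (rank-based)
# probabilities mapped to standard-normal quantiles.  Illustrative only; the
# Gringorten plotting position is an assumption, not taken from the paper.
import numpy as np
from scipy.stats import norm, rankdata

def standardized_index(series):
    """Map a climatic variable (e.g. monthly precipitation) to a z-score-like
    index without assuming any parametric distribution."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    ranks = rankdata(series)                      # 1 = driest, n = wettest
    p = (ranks - 0.44) / (n + 0.12)               # Gringorten plotting position
    return norm.ppf(p)                            # standard-normal quantiles

precip = np.random.default_rng(1).gamma(shape=2.0, scale=30.0, size=360)
sdi = standardized_index(precip)
print(sdi[:5])
```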


Journal ArticleDOI
TL;DR: The Goddard profiling algorithm has evolved from a pseudoparametric algorithm used in the current TRMM operational product to a fully parametric approach used operationally in the GPM era (GPROF 2014), which uses a Bayesian inversion for all surface types.
Abstract: The Goddard profiling algorithm has evolved from a pseudoparametric algorithm used in the current TRMM operational product (GPROF 2010) to a fully parametric approach used operationally in the GPM era (GPROF 2014). The fully parametric approach uses a Bayesian inversion for all surface types. The algorithm thus abandons rainfall screening procedures and instead uses the full brightness temperature vector to obtain the most likely precipitation state. This paper offers a complete description of the GPROF 2010 and GPROF 2014 algorithms and assesses the sensitivity of the algorithm to assumptions related to channel uncertainty as well as ancillary data. Uncertainties in precipitation are generally less than 1%–2% for realistic assumptions in channel uncertainties. Consistency among different radiometers is extremely good over oceans. Consistency over land is also good if the diurnal cycle is accounted for by sampling GMI product only at the time of day that different sensors operate. While accounting...

271 citations
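The sketch below illustrates the generic Bayesian-inversion idea mentioned in the abstract: the retrieved precipitation is a weighted average over an a priori database, with weights measuring how well each entry's simulated brightness temperatures match the observation. The database, channel count and error covariance here are synthetic stand-ins, not GPROF's actual inputs.

```python
# Sketch of a Bayesian retrieval over an a priori database: the estimate is a
# weighted average of database precipitation states, weighted by the fit of
# each entry's simulated brightness temperatures to the observation.
# Synthetic numbers; channel count and error covariance are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_db, n_chan = 5000, 9                            # database size, channels
tb_db = rng.normal(250.0, 15.0, (n_db, n_chan))   # simulated brightness temps (K)
rain_db = rng.gamma(0.8, 2.0, n_db)               # associated rain rates (mm/h)
S = np.diag(np.full(n_chan, 2.0 ** 2))            # assumed channel error covariance

def retrieve(tb_obs):
    d = tb_db - tb_obs
    w = np.exp(-0.5 * np.einsum("ij,jk,ik->i", d, np.linalg.inv(S), d))
    w /= w.sum()
    return float(w @ rain_db)                     # posterior-mean rain rate

print(retrieve(tb_db[0] + rng.normal(0, 2.0, n_chan)))
```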


Journal ArticleDOI
TL;DR: A new R package nparcomp is introduced which provides an easy and user-friendly access to rank-based methods for the analysis of unbalanced one-way layouts and provides procedures performing multiple comparisons and computing simultaneous confidence intervals for the estimated effects which can be easily visualized.
Abstract: One-way layouts, i.e., a single factor with several levels and multiple observations at each level, frequently arise in various fields. Usually not only a global hypothesis is of interest but also multiple comparisons between the different treatment levels. In most practical situations, the distribution of observed data is unknown and there may exist a number of atypical measurements and outliers. Hence, use of parametric and semiparametric procedures that impose restrictive distributional assumptions on observed samples becomes questionable. This, in turn, emphasizes the demand on statistical procedures that enable us to accurately and reliably analyze one-way layouts with minimal conditions on available data. Nonparametric methods offer such a possibility and thus become of particular practical importance. In this article, we introduce a new R package nparcomp which provides an easy and user-friendly access to rank-based methods for the analysis of unbalanced one-way layouts. It provides procedures performing multiple comparisons and computing simultaneous confidence intervals for the estimated effects which can be easily visualized. The special case of two samples, the nonparametric Behrens-Fisher problem, is included. We illustrate the implemented procedures by examples from biology and medicine.

241 citations
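nparcomp itself is an R package; as a rough Python analogue of rank-based all-pairs comparisons in an unbalanced one-way layout, the sketch below runs pairwise Mann-Whitney tests with a Holm adjustment. It does not reproduce the package's simultaneous confidence intervals for nonparametric relative effects; the data are invented.

```python
# Rough Python analogue of rank-based multiple comparisons in an unbalanced
# one-way layout: all-pairs Mann-Whitney tests with Holm adjustment.  This is
# NOT the nparcomp methodology (which gives simultaneous confidence intervals
# for relative effects); it only illustrates the rank-based idea.
from itertools import combinations
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
groups = {                                   # unbalanced one-way layout
    "A": rng.normal(0.0, 1.0, 12),
    "B": rng.normal(0.5, 1.5, 20),
    "C": rng.lognormal(0.2, 0.7, 9),
}

pairs = list(combinations(groups, 2))
pvals = [mannwhitneyu(groups[a], groups[b]).pvalue for a, b in pairs]

# Holm step-down adjustment of the pairwise p-values.
order = np.argsort(pvals)
m = len(pvals)
adjusted = np.empty(m)
running_max = 0.0
for rank, idx in enumerate(order):
    running_max = max(running_max, (m - rank) * pvals[idx])
    adjusted[idx] = min(1.0, running_max)

for (a, b), p in zip(pairs, adjusted):
    print(f"{a} vs {b}: Holm-adjusted p = {p:.3f}")
```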


Journal ArticleDOI
TL;DR: A systematic comparison of retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data concludes that the family of kernel-based MLRAs (e.g. GPR) is the most promising processing approach.
Abstract: Given the forthcoming availability of Sentinel-2 (S2) images, this paper provides a systematic comparison of retrieval accuracy and processing speed of a multitude of parametric, non-parametric and physically-based retrieval methods using simulated S2 data. An experimental field dataset (SPARC), collected at the agricultural site of Barrax (Spain), was used to evaluate different retrieval methods on their ability to estimate leaf area index (LAI). With regard to parametric methods, all possible band combinations for several two-band and three-band index formulations and a linear regression fitting function have been evaluated. From a set of over ten thousand indices evaluated, the best performing one was an optimized three-band combination, (ρ560 − ρ1610 − ρ2190) / (ρ560 + ρ1610 + ρ2190), with a 10-fold cross-validation R² of 0.82 (RMSE: 0.62). This family of methods excels for its fast processing speed, e.g., 0.05 s to calibrate and validate the regression function, and 3.8 s to map a simulated S2 image. With regard to non-parametric methods, 11 machine learning regression algorithms (MLRAs) have been evaluated. This methodological family has the advantage of making use of the full optical spectrum as well as flexible, nonlinear fitting. Particularly kernel-based MLRAs lead to excellent results, with variational heteroscedastic (VH) Gaussian Processes regression (GPR) as the best performing method, with a cross-validated R² of 0.90 (RMSE: 0.44). Additionally, the model is trained and validated relatively fast (1.70 s) and the processed image (taking 73.88 s) includes associated uncertainty estimates. More challenging is the inversion of a PROSAIL-based radiative transfer model (RTM). After the generation of a look-up table (LUT), a multitude of cost functions and regularization options were evaluated. The best performing cost function is Pearson's χ-square. It led to an R² of 0.74 (RMSE: 0.80) against the validation dataset. While its validation went fast (0.33 s), image processing took considerably more time (01:01:47) due to per-pixel LUT solving with a cost function. Summarizing, when it comes to accurate and sufficiently fast processing of imagery to generate vegetation attributes, this paper concludes that the family of kernel-based MLRAs (e.g. GPR) is the most promising processing approach.

240 citations
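The sketch below illustrates the two retrieval families compared above: the reported three-band index with a linear fit, and Gaussian process regression on the full band stack. The reflectances and LAI values are synthetic, and scikit-learn's standard GPR is used here rather than the variational heteroscedastic GPR from the paper.

```python
# Sketch of two retrieval routes: (i) the reported three-band index with a
# linear fit, and (ii) Gaussian process regression on the band reflectances.
# Synthetic data; standard scikit-learn GPR, not the paper's VH-GPR.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
n = 120
bands = {"560": rng.uniform(0.02, 0.15, n),
         "1610": rng.uniform(0.05, 0.35, n),
         "2190": rng.uniform(0.03, 0.30, n)}
lai = 5.0 * bands["560"] / (bands["1610"] + bands["2190"]) + rng.normal(0, 0.2, n)

# Parametric route: (rho560 - rho1610 - rho2190) / (rho560 + rho1610 + rho2190)
index = ((bands["560"] - bands["1610"] - bands["2190"])
         / (bands["560"] + bands["1610"] + bands["2190"]))
slope, intercept = np.polyfit(index, lai, 1)
print("linear-fit LAI for first sample:", slope * index[0] + intercept)

# Non-parametric route: GPR on the stacked band reflectances.
X = np.column_stack(list(bands.values()))
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, lai)
mean, std = gpr.predict(X[:1], return_std=True)
print("GPR LAI estimate:", mean[0], "+/-", std[0])
```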


Journal ArticleDOI
TL;DR: The results suggest that analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories.

225 citations


Journal ArticleDOI
TL;DR: New adaptive finite-time continuous distributed control algorithms are proposed for multi-agent systems, and it is shown that the states of the mechanical systems can reach a consensus within finite time under an undirected graph.

220 citations


Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo ensemble approach was adopted to characterize parametric uncertainty, because initial experiments indicate the existence of significant nonlinear interactions, and the resulting ensemble exhibits a wider uncertainty range before 1900, as well as an uncertainty maximum around World War II.
Abstract: Described herein is the parametric and structural uncertainty quantification for the monthly Extended Reconstructed Sea Surface Temperature (ERSST) version 4 (v4). A Monte Carlo ensemble approach was adopted to characterize parametric uncertainty, because initial experiments indicate the existence of significant nonlinear interactions. Globally, the resulting ensemble exhibits a wider uncertainty range before 1900, as well as an uncertainty maximum around World War II. Changes at smaller spatial scales in many regions, or for important features such as Nino-3.4 variability, are found to be dominated by particular parameter choices. Substantial differences in parametric uncertainty estimates are found between ERSST.v4 and the independently derived Hadley Centre SST version 3 (HadSST3) product. The largest uncertainties are over the mid and high latitudes in ERSST.v4 but in the tropics in HadSST3. Overall, in comparison with HadSST3, ERSST.v4 has larger parametric uncertainties at smaller spatial and shorter time scales and smaller parametric uncertainties at longer time scales, which likely reflects the different sources of uncertainty quantified in the respective parametric analyses. ERSST.v4 exhibits a stronger globally averaged warming trend than HadSST3 during the period of 1910–2012, but with a smaller parametric uncertainty. These global-mean trend estimates and their uncertainties marginally overlap. Several additional SST datasets are used to infer the structural uncertainty inherent in SST estimates. For the global mean, the structural uncertainty, estimated as the spread between available SST products, is more often than not larger than the parametric uncertainty in ERSST.v4. Neither parametric nor structural uncertainties call into question that on the global-mean level and centennial time scale, SSTs have warmed notably.

218 citations
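The sketch below shows the generic Monte Carlo parameter-ensemble idea: sample plausible values of internal parameters, rerun the analysis for each draw, and report the ensemble spread. The `reconstruct_anomaly` function and the parameter ranges are hypothetical stand-ins, not the operational ERSST code.

```python
# Generic sketch of Monte Carlo parametric uncertainty quantification: sample
# internal parameters, rerun the analysis per draw, report the ensemble spread.
# `reconstruct_anomaly` is a hypothetical stand-in for a real reconstruction.
import numpy as np

rng = np.random.default_rng(5)

def reconstruct_anomaly(obs, bias_adjustment, smoothing_scale):
    """Hypothetical reconstruction: bias-correct then smooth the observations."""
    corrected = obs + bias_adjustment
    kernel = np.ones(smoothing_scale) / smoothing_scale
    return np.convolve(corrected, kernel, mode="same")

obs = rng.normal(0.0, 0.3, 120) + np.linspace(-0.2, 0.6, 120)   # synthetic series

ensemble = []
for _ in range(1000):                           # one member per parameter draw
    member = reconstruct_anomaly(
        obs,
        bias_adjustment=rng.normal(0.0, 0.05),  # assumed parameter ranges
        smoothing_scale=rng.integers(3, 12),
    )
    ensemble.append(member)
ensemble = np.array(ensemble)

spread = ensemble.std(axis=0)                   # parametric uncertainty per step
print("mean ensemble spread:", spread.mean())
```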


Journal ArticleDOI
TL;DR: In this article, a regression approach is proposed to obtain asymptotically efficient estimation of each entry of a precision matrix under a sparseness condition relative to the sample size.
Abstract: The Gaussian graphical model, a popular paradigm for studying relationship among variables in a wide range of applications, has attracted great attention in recent years. This paper considers a fundamental question: When is it possible to estimate low-dimensional parameters at parametric square-root rate in a large Gaussian graphical model? A novel regression approach is proposed to obtain asymptotically efficient estimation of each entry of a precision matrix under a sparseness condition relative to the sample size. When the precision matrix is not sufficiently sparse, or equivalently the sample size is not sufficiently large, a lower bound is established to show that it is no longer possible to achieve the parametric rate in the estimation of each entry. This lower bound result, which provides an answer to the delicate sample size question, is established with a novel construction of a subset of sparse precision matrices in an application of Le Cam’s lemma. Moreover, the proposed estimator is proven to have optimal convergence rate when the parametric rate cannot be achieved, under a minimal sample requirement. The proposed estimator is applied to test the presence of an edge in the Gaussian graphical model or to recover the support of the entire model, to obtain adaptive rate-optimal estimation of the entire precision matrix as measured by the matrix $\ell_{q}$ operator norm and to make inference in latent variables in the graphical model. All of this is achieved under a sparsity condition on the precision matrix and a side condition on the range of its spectrum. This significantly relaxes the commonly imposed uniform signal strength condition on the precision matrix, irrepresentability condition on the Hessian tensor operator of the covariance matrix or the $\ell_{1}$ constraint on the precision matrix. Numerical results confirm our theoretical findings. The ROC curve of the proposed algorithm, Asymptotic Normal Thresholding (ANT), for support recovery significantly outperforms that of the popular GLasso algorithm.
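The sketch below illustrates the regression idea behind entrywise precision-matrix estimation: regress the pair (X_i, X_j) on the remaining variables, form the 2x2 residual covariance, and invert it to estimate the corresponding precision entries. Plain Lasso is used here in place of the scaled lasso from the paper, and no bias correction or inference step is included.

```python
# Sketch of the pairwise regression route to precision-matrix entries:
# regress (X_i, X_j) on the other variables, then invert the 2x2 residual
# covariance.  Plain Lasso is used instead of the paper's scaled lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
p, n = 30, 400
Omega = np.eye(p) + 0.3 * np.diag(np.ones(p - 1), 1) + 0.3 * np.diag(np.ones(p - 1), -1)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega), size=n)

def precision_entries(X, i, j, alpha=0.05):
    idx = [i, j]
    rest = [k for k in range(X.shape[1]) if k not in idx]
    residuals = np.empty((X.shape[0], 2))
    for col, target in enumerate(idx):
        model = Lasso(alpha=alpha).fit(X[:, rest], X[:, target])
        residuals[:, col] = X[:, target] - model.predict(X[:, rest])
    cov = residuals.T @ residuals / X.shape[0]
    return np.linalg.inv(cov)                 # estimates of Omega[[i,j],[i,j]]

print("true  :", Omega[0, 1])
print("estim.:", precision_entries(X, 0, 1)[0, 1])
```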

Journal ArticleDOI
TL;DR: Results reveal that, at least on a theoretical level, the solution map can be well approximated by discretizations of moderate complexity, thereby showing how the curse of dimensionality is broken.
Abstract: Parametrized families of PDEs arise in various contexts such as inverse problems, control and optimization, risk assessment, and uncertainty quantification. In most of these applications, the number of parameters is large or perhaps even infinite. Thus, the development of numerical methods for these parametric problems is faced with the possible curse of dimensionality. This article is directed at (i) identifying and understanding which properties of parametric equations allow one to avoid this curse and (ii) developing and analyzing effective numerical methods which fully exploit these properties and, in turn, are immune to the growth in dimensionality. The first part of this article studies the smoothness and approximability of the solution map, that is, the map $a\mapsto u(a)$ where $a$ is the parameter value and $u(a)$ is the corresponding solution to the PDE. It is shown that for many relevant parametric PDEs, the parametric smoothness of this map is typically holomorphic and also highly anisotropic in that the relevant parameters are of widely varying importance in describing the solution. These two properties are then exploited to establish convergence rates of $n$-term approximations to the solution map for which each term is separable in the parametric and physical variables. These results reveal that, at least on a theoretical level, the solution map can be well approximated by discretizations of moderate complexity, thereby showing how the curse of dimensionality is broken. This theoretical analysis is carried out through concepts of approximation theory such as best $n$-term approximation, sparsity, and $n$-widths. These notions determine a priori the best possible performance of numerical methods and thus serve as a benchmark for concrete algorithms. The second part of this article turns to the development of numerical algorithms based on the theoretically established sparse separable approximations. The numerical methods studied fall into two general categories. The first uses polynomial expansions in terms of the parameters to approximate the solution map. The second one searches for suitable low dimensional spaces for simultaneously approximating all members of the parametric family. The numerical implementation of these approaches is carried out through adaptive and greedy algorithms. An a priori analysis of the performance of these algorithms establishes how well they meet the theoretical benchmarks.
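Schematically, the separable $n$-term approximations discussed above take the following form; this is a generic rendering in the abstract's notation with an algebraic rate written as $n^{-s}$, not a quotation of the paper's precise statements.

```latex
% n-term separable approximation of the solution map a -> u(a): each term
% factors into a parametric coefficient c_k(a) and a physical-space function v_k.
u(a) \;\approx\; u_n(a) \;=\; \sum_{k=1}^{n} c_k(a)\, v_k ,
\qquad v_k \in V,
\qquad \sup_{a}\, \bigl\| u(a) - u_n(a) \bigr\|_{V} \;\le\; C\, n^{-s}.
```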

Journal ArticleDOI
TL;DR: In this article, an analytic extension of the solution map on a tensor product of ellipses in the complex domain is proposed to estimate the Legendre coefficients of u. The analytic extension is based on the holomorphic version of the implicit function theorem in Banach spaces and can be applied to a large variety of parametric PDEs.

Posted Content
TL;DR: The effects of bias correction on confidence interval coverage are studied in the context of kernel density and local polynomial regression estimation, and it is shown that bias correction can be preferred to undersmoothing for minimizing coverage error and increasing robustness to tuning parameter choice.
Abstract: Nonparametric methods play a central role in modern empirical work. While they provide inference procedures that are more robust to parametric misspecification bias, they may be quite sensitive to tuning parameter choices. We study the effects of bias correction on confidence interval coverage in the context of kernel density and local polynomial regression estimation, and prove that bias correction can be preferred to undersmoothing for minimizing coverage error and increasing robustness to tuning parameter choice. This is achieved using a novel, yet simple, Studentization, which leads to a new way of constructing kernel-based bias-corrected confidence intervals. In addition, for practical cases, we derive coverage error optimal bandwidths and discuss easy-to-implement bandwidth selectors. For interior points, we show that the MSE-optimal bandwidth for the original point estimator (before bias correction) delivers the fastest coverage error decay rate after bias correction when second-order (equivalent) kernels are employed, but is otherwise suboptimal because it is too "large". Finally, for odd-degree local polynomial regression, we show that, as with point estimation, coverage error adapts to boundary points automatically when appropriate Studentization is used; however, the MSE-optimal bandwidth for the original point estimator is suboptimal. All the results are established using valid Edgeworth expansions and illustrated with simulated data. Our findings have important consequences for empirical work as they indicate that bias-corrected confidence intervals, coupled with appropriate standard errors, have smaller coverage error and are less sensitive to tuning parameter choices in practically relevant cases where additional smoothness is available.
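The sketch below shows the basic bias-correction step for a kernel density estimate at a point: estimate the leading O(h²) bias with a pilot bandwidth and subtract it. The confidence interval uses the conventional standard error; the paper's contribution is a robust Studentization that also accounts for the variability of the estimated bias, which this sketch does not implement, and the bandwidth values are arbitrary.

```python
# Sketch of bias-corrected kernel density estimation at a point x0.  The CI
# below uses the conventional standard error; the paper's robust Studentization
# (which also accounts for the variance of the bias estimate) is NOT implemented.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
X = rng.normal(0.0, 1.0, 500)
x0, n = 0.5, len(X)
h, b = 0.3, 0.5                       # main and pilot bandwidths (assumed values)

u = (x0 - X) / h
f_hat = norm.pdf(u).mean() / h        # Gaussian-kernel density estimate

v = (x0 - X) / b
f_dd = ((v**2 - 1.0) * norm.pdf(v)).mean() / b**3   # estimate of f''(x0)

f_bc = f_hat - 0.5 * h**2 * f_dd      # bias-corrected point estimate

se = np.sqrt(f_hat / (2.0 * np.sqrt(np.pi)) / (n * h))   # conventional std. error
print("bias-corrected estimate:", f_bc)
print("naive 95% CI:", (f_bc - 1.96 * se, f_bc + 1.96 * se))
```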

Journal ArticleDOI
TL;DR: The performances of two well-known soft computing predictive techniques, artificial neural network and genetic programming (GP), are evaluated based on several criteria, including over-fitting potential, and the results indicate that model acceptance criteria should include engineering analysis from parametric studies.

Journal ArticleDOI
TL;DR: This work proposes an efficient extension of t-SNE to a parametric framework, kernel t-SNE, which preserves the flexibility of basic t-SNE but enables explicit out-of-sample extensions, and demonstrates that this technique yields satisfactory results also for large data sets.
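The sketch below illustrates the general recipe behind a kernel-based parametric extension of t-SNE: embed a training set with ordinary t-SNE, then learn a normalized-kernel least-squares mapping so new points can be projected without rerunning t-SNE. The details (kernel, bandwidth, fitting) are assumptions for the sketch and do not reproduce the paper's exact formulation.

```python
# Sketch of a kernel-based parametric out-of-sample mapping trained on a t-SNE
# embedding:  y(x) = sum_j alpha_j * k(x, x_j) / sum_l k(x, x_l).
# Illustrative choices only; not the paper's exact kernel t-SNE.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import rbf_kernel

X, _ = load_digits(return_X_y=True)
X_train, X_new = X[:800], X[800:]

Y_train = TSNE(n_components=2, random_state=0).fit_transform(X_train)

gamma = 1.0 / X_train.shape[1]
K = rbf_kernel(X_train, X_train, gamma=gamma)
K /= K.sum(axis=1, keepdims=True)                     # row-normalised kernel
alpha, *_ = np.linalg.lstsq(K, Y_train, rcond=None)   # mapping coefficients

K_new = rbf_kernel(X_new, X_train, gamma=gamma)
K_new /= K_new.sum(axis=1, keepdims=True)
Y_new = K_new @ alpha                                 # out-of-sample embedding
print(Y_new.shape)
```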

Journal Article
TL;DR: Likert, Likert-type, and ordinal-scale responses are very popular psychometric item scoring schemes for attempting to quantify people's opinions, interests, or perceived efficacy of an intervention and are used extensively in Physical Education and Exercise Science research.
Abstract: Likert, Likert-type, and ordinal-scale responses are very popular psychometric item scoring schemes for attempting to quantify people's opinions, interests, or perceived efficacy of an intervention and are used extensively in Physical Education and Exercise Science research. However, these numbered measures are generally considered ordinal and violate some statistical assumptions needed to evaluate them as normally distributed, parametric data. This is an issue because parametric statistics are generally perceived as being more statistically powerful than non-parametric statistics. To avoid possible misinterpretation, care must be taken in analyzing these types of data. The use of visual analog scales may be equally efficacious and provide somewhat better data for analysis with parametric statistics.
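To illustrate the point made above, the snippet below analyses the same two groups of 5-point Likert responses with a parametric test (t-test) and a rank-based non-parametric alternative (Mann-Whitney U). The data are invented for the example.

```python
# Parametric vs non-parametric analysis of the same ordinal (Likert) data.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

group_a = np.array([4, 5, 3, 4, 5, 4, 2, 5, 4, 3, 5, 4])
group_b = np.array([3, 2, 4, 3, 2, 3, 1, 3, 4, 2, 3, 3])

t_stat, t_p = ttest_ind(group_a, group_b)      # assumes interval-scale, normal data
u_stat, u_p = mannwhitneyu(group_a, group_b)   # ordinal-scale, distribution-free

print(f"t-test:        p = {t_p:.4f}")
print(f"Mann-Whitney:  p = {u_p:.4f}")
```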

Journal ArticleDOI
TL;DR: The recent GRavitational lEnsing Accuracy Testing (GREAT3) challenge as discussed by the authors was the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images.
Abstract: We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by approximately 1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

Journal ArticleDOI
14 Jul 2015
TL;DR: In this article, a parametric multi-level sensitivity method is employed to understand the impact of the DEM input particle properties on the bulk responses for a given simple system: discharge of particles from a flat bottom cylindrical container onto a plate.
Abstract: Selection or calibration of particle property input parameters is one of the key problematic aspects for the implementation of the discrete element method (DEM). In the current study, a parametric multi-level sensitivity method is employed to understand the impact of the DEM input particle properties on the bulk responses for a given simple system: discharge of particles from a flat bottom cylindrical container onto a plate. In this case study, particle properties, such as Young's modulus, friction parameters and coefficient of restitution were systematically changed in order to assess their effect on material repose angles and particle flow rate (FR). It was shown that inter-particle static friction plays a primary role in determining both final angle of repose and FR, followed by the role of inter-particle rolling friction coefficient. The particle restitution coefficient and Young's modulus were found to have insignificant impacts and were strongly cross-correlated. The proposed approach provides a systematic method that can be used to show the importance of specific DEM input parameters for a given system and then potentially facilitates their selection or calibration. It is concluded that shortening the process for input parameters selection and calibration can help in the implementation of DEM.
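As a simplified illustration of a parametric sweep over DEM input properties, the sketch below varies one parameter at a time around a baseline and records the change in a bulk response. The `simulate_discharge` surrogate and its coefficients are entirely hypothetical stand-ins for an actual DEM run, and a one-at-a-time sweep is a simplification of the paper's multi-level sensitivity method.

```python
# Sketch of a one-at-a-time parametric sweep over DEM input properties.
# `simulate_discharge` is a hypothetical surrogate for a real DEM simulation of
# the container-discharge test; only the sweep logic is meaningful here.
import numpy as np

def simulate_discharge(static_friction, rolling_friction, restitution, youngs_modulus):
    """Hypothetical surrogate returning (angle of repose [deg], flow rate [kg/s])."""
    angle = 20.0 + 25.0 * static_friction + 8.0 * rolling_friction + 0.5 * restitution
    flow_rate = 2.0 / (1.0 + 3.0 * static_friction + rolling_friction)
    return angle, flow_rate                         # youngs_modulus ignored by this toy

baseline = dict(static_friction=0.5, rolling_friction=0.1,
                restitution=0.6, youngs_modulus=5e7)

for name in baseline:                               # vary one parameter at a time
    values = np.linspace(0.5, 1.5, 5) * baseline[name]
    responses = [simulate_discharge(**{**baseline, name: v}) for v in values]
    angles = [r[0] for r in responses]
    print(f"{name:16s} angle-of-repose range: {max(angles) - min(angles):.2f} deg")
```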

Journal ArticleDOI
TL;DR: This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions.
Abstract: Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element is fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become noneffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. The GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.
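The sketch below captures the core idea of a GP adaptive element: learn the model uncertainty Δ(x) from observed (state, residual) pairs with a Gaussian process, so no fixed RBF centers are needed. It uses scikit-learn's batch GP regression on synthetic data, not the online inference or the MRAC stability machinery from the paper.

```python
# Sketch of a GP adaptive element: fit a Gaussian process to (state, residual)
# samples of the model uncertainty Delta(x) and query it with a confidence bound.
# Batch scikit-learn GPR on synthetic data; not the paper's online GP-MRAC.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)

def true_uncertainty(x):                    # unknown model error, for simulation only
    return 0.8 * np.sin(3.0 * x[:, 0]) * x[:, 1]

X = rng.uniform(-2.0, 2.0, (200, 2))        # visited states
y = true_uncertainty(X) + rng.normal(0.0, 0.05, 200)   # measured residuals

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X, y)

x_query = np.array([[0.5, -1.0]])
mean, std = gp.predict(x_query, return_std=True)
print(f"estimated uncertainty {mean[0]:+.3f} (+/- {std[0]:.3f}) to cancel in the control law")
```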

Journal ArticleDOI
TL;DR: This work develops an efficient, data-driven technique for estimating the parameters of these models from observed equilibria, and supports both parametric and nonparametric estimation by leveraging ideas from statistical learning (kernel methods and regularization operators).
Abstract: Equilibrium modeling is common in a variety of fields such as game theory and transportation science. The inputs for these models, however, are often difficult to estimate, while their outputs, i.e., the equilibria they are meant to describe, are often directly observable. By combining ideas from inverse optimization with the theory of variational inequalities, we develop an efficient, data-driven technique for estimating the parameters of these models from observed equilibria. We use this technique to estimate the utility functions of players in a game from their observed actions and to estimate the congestion function on a road network from traffic count data. A distinguishing feature of our approach is that it supports both parametric and nonparametric estimation by leveraging ideas from statistical learning (kernel methods and regularization operators). In computational experiments involving Nash and Wardrop equilibria in a nonparametric setting, we find that a) we effectively estimate the unknown demand or congestion function, respectively, and b) our proposed regularization technique substantially improves the out-of-sample performance of our estimators.

Journal ArticleDOI
TL;DR: In this paper, the non-parametric approach of Variance Based SEM (VB-SEM) is compared with the parametric approach of Covariance Based SEM (CB-SEM), using a data set that does not meet the fitness and normality requirements of parametric tests.
Abstract: Lately, there has been some attention in social science research to Variance Based SEM (VB-SEM) as opposed to Covariance Based SEM (CB-SEM), regarding fitness indexes, sample size requirements, and the normality assumption. Not many researchers are aware that VB-SEM is developed on a non-parametric approach, in contrast to the parametric approach of CB-SEM. In fact, the fitness of a model should not be taken lightly, since it reflects the behavior of the data in relation to the proposed model for the study. Furthermore, the adequacy of the sample size and the normality of the data are among the main assumptions of a parametric test itself. This study intended to clarify the ambiguities among the social science community by employing a data set which does not meet the fitness requirements and normality assumptions to execute both CB-SEM and VB-SEM. The findings reveal that the result of CB-SEM with bootstrapping is almost similar to that of VB-SEM (bootstrapping as usual). Therefore, failure to meet the fitness and normality requirements should not be the reason for employing non-parametric SEM.

Journal ArticleDOI
TL;DR: An overview of the developments on the estimation of parameters of extreme events and on the testing of extreme value conditions under a semi‐parametric framework is presented.
Abstract: Statistical methods for modelling univariate extremes of a random sample have been successfully used in the most diverse fields, such as biometrics, finance, insurance and risk theory. Statistics of univariate extremes (SUE), the subject dealt with in this review paper, has recently seen substantial development, partially because rare events can have catastrophic consequences for human activities, through their impact on the natural and constructed environments. In recent decades, there has been a shift from the area of parametric SUE, based on probabilistic asymptotic results in extreme value theory, towards semi-parametric approaches. After a brief reference to Gumbel's block methodology and more recent improvements in the parametric framework, we present an overview of the developments on the estimation of parameters of extreme events and on the testing of extreme value conditions under a semi-parametric framework. We further discuss a few challenging topics in the area of SUE. Keywords: univariate extremes; parametric and semi-parametric approaches; extreme value index and tail parameters; testing issues.
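A classical semi-parametric estimator from this literature is the Hill estimator of a positive extreme value index, computed from the k largest order statistics. The sketch below applies it to a synthetic heavy-tailed sample; the choice of k is arbitrary and is not a recommendation from the paper.

```python
# Sketch of the Hill estimator of a (positive) extreme value index, computed
# from the k largest order statistics of a heavy-tailed sample.
import numpy as np

rng = np.random.default_rng(9)
x = rng.pareto(a=2.0, size=5000) + 1.0       # standard Pareto sample, true index 1/a = 0.5

def hill_estimator(sample, k):
    order = np.sort(sample)[::-1]            # descending order statistics
    top = order[:k]                          # k largest observations
    return np.mean(np.log(top) - np.log(order[k]))

print("Hill estimate of the extreme value index:", hill_estimator(x, k=200))
```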


Journal ArticleDOI
TL;DR: In this paper, a tuning approach guided by the eigenvalue parametric sensitivities calculated from a linearized model of the converter and its control system is proposed in the form of an iterative procedure enforcing the stability of the system and ensuring that the system eigenvalues are moved away from critical locations.
Abstract: Control structures containing cascaded loops are used in several applications for the stand-alone and parallel operation of three-phase power electronic converters. Potential interactions between these cascaded loops and the complex functional dependence between the controller parameters and the system dynamics prevent the effective application of classical tuning methods in the case of converters operating with a low switching frequency. A tuning approach guided by the eigenvalue parametric sensitivities calculated from a linearized model of the converter and its control system is proposed in this paper. The method is implemented in the form of an iterative procedure enforcing the stability of the system and ensuring that the system eigenvalues are moved away from critical locations. Numerical simulations in the time domain are presented to verify the improvement in the dynamic performance of the system when tuned with the presented algorithm compared with a conventional rule-based tuning method.
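The sketch below computes the quantity the tuning procedure above relies on: the sensitivity of the closed-loop eigenvalues to a controller parameter, approximated here by finite differences on a linearized state matrix. The toy second-order PI loop is a placeholder, not the converter model, and analytic sensitivity formulas would normally replace the finite differences.

```python
# Generic sketch of eigenvalue parametric sensitivity: perturb one controller
# parameter, rebuild the linearised state matrix, and difference the eigenvalues.
# The toy PI loop around a first-order plant is a placeholder for the converter.
import numpy as np

def closed_loop_matrix(kp, ki):
    """Toy linearised closed loop: PI state feedback around dx/dt = -x + u."""
    return np.array([[-1.0 - kp, -ki],
                     [1.0,        0.0]])

def eigenvalue_sensitivity(build, params, name, eps=1e-6):
    lam0 = np.sort_complex(np.linalg.eigvals(build(**params)))
    bumped = {**params, name: params[name] + eps}
    lam1 = np.sort_complex(np.linalg.eigvals(build(**bumped)))
    return (lam1 - lam0) / eps               # approximate d(lambda)/d(parameter)

params = dict(kp=2.0, ki=1.5)
for name in params:
    print(name, eigenvalue_sensitivity(closed_loop_matrix, params, name))
```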

Journal ArticleDOI
TL;DR: In this paper, a simple analytical approach for predicting all possible damage modes of Uni-Directional (UD) hybrid composites and their stress-strain response in tensile loading is proposed.
Abstract: A new simple analytical approach for predicting all possible damage modes of Uni-Directional (UD) hybrid composites and their stress–strain response in tensile loading is proposed. To do so, the required stress level for each damage mode (fragmentation, delamination and final failure) is assessed separately. The damage process of the UD hybrid can then be predicted based on the order of the required stress for each damage mode. Using the developed analytical method, a new series of standard-thickness glass/thin-ply carbon hybrid composites was tested and a very good pseudo-ductile tensile response with 1.0% pseudo-ductile strain and no load drop until final failure was achieved. The yield stress value for the best tested layup was more than 1130 MPa. The proposed analytical method is simple, very fast to run and it gives accurate results that can be used for designing thin-ply UD hybrid laminates with the desired tensile response and for conducting further parametric studies.

Proceedings ArticleDOI
07 Jun 2015
TL;DR: A substantial improvement in the representational power of the model is shown, while maintaining the efficiency of a linear shape basis, and it is shown that hand shape variation can be represented using only a small number of basis components.
Abstract: We describe how to learn a compact and efficient model of the surface deformation of human hands. The model is built from a set of noisy depth images of a diverse set of subjects performing different poses with their hands. We represent the observed surface using Loop subdivision of a control mesh that is deformed by our learned parametric shape and pose model. The model simultaneously accounts for variation in subject-specific shape and subject-agnostic pose. Specifically, hand shape is parameterized as a linear combination of a mean mesh in a neutral pose with a small number of offset vectors. This mesh is then articulated using standard linear blend skinning (LBS) to generate the control mesh of a subdivision surface. We define an energy that encourages each depth pixel to be explained by our model, and the use of a smooth subdivision surface allows us to optimize for all parameters jointly from a rough initialization. The efficacy of our method is demonstrated using both synthetic and real data, where it is shown that hand shape variation can be represented using only a small number of basis components. We compare with other approaches including PCA and show a substantial improvement in the representational power of our model, while maintaining the efficiency of a linear shape basis.
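The standard linear blend skinning (LBS) step mentioned above deforms each control-mesh vertex by a weighted blend of bone transforms, v' = Σ_j w_j (R_j v + t_j). The sketch below applies it to toy vertices, weights and bones; it is not the paper's learned shape or pose model.

```python
# Standard linear blend skinning: each vertex is deformed by a weighted blend
# of bone transforms,  v' = sum_j w_j * (R_j v + t_j).  Toy data only.
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

vertices = np.array([[0.0, 0.0, 0.0],
                     [0.5, 0.1, 0.0],
                     [1.0, 0.0, 0.0]])            # rest-pose control mesh (toy)
weights = np.array([[1.0, 0.0],
                    [0.5, 0.5],
                    [0.0, 1.0]])                  # per-vertex bone weights, rows sum to 1

bones = [(rot_z(0.0), np.zeros(3)),                        # bone 0: identity
         (rot_z(np.pi / 6), np.array([0.0, 0.2, 0.0]))]    # bone 1: rotate + translate

def skin(vertices, weights, bones):
    out = np.zeros_like(vertices)
    for j, (R, t) in enumerate(bones):
        out += weights[:, j:j + 1] * (vertices @ R.T + t)
    return out

print(skin(vertices, weights, bones))
```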

Journal ArticleDOI
TL;DR: An interval observer design methodology for linear parameter varying (LPV) systems with parametric uncertainty with information on upper and lower bounds of the uncertain parameters is developed and an envelope covering all possible state trajectories is presented.

Journal ArticleDOI
TL;DR: In this article, an adaptive Neuro-Fuzzy Inference System with fuzzy c-means clustering (FCM-ANFIS) was employed to design the thermal prediction model.

Book ChapterDOI
18 Jul 2015
TL;DR: PROPhESY, a tool for analyzing parametric Markov chains (MCs), can compute a rational function (i.e., a fraction of two polynomials in the model parameters) for reachability and expected reward objectives and supports the novel feature of conditional probabilities.
Abstract: We present PROPhESY, a tool for analyzing parametric Markov chains (MCs). It can compute a rational function (i.e., a fraction of two polynomials in the model parameters) for reachability and expected reward objectives. Our tool outperforms state-of-the-art tools and supports the novel feature of conditional probabilities. PROPhESY supports incremental automatic parameter synthesis (using SMT techniques) to determine “safe” and “unsafe” regions of the parameter space. All values in these regions give rise to instantiated MCs satisfying or violating the (conditional) probability or expected reward objective. PROPhESY features a web front-end supporting visualization and user-guided parameter synthesis. Experimental results show that PROPhESY scales to MCs with millions of states and several parameters.
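The tiny worked example below shows the kind of object such tools compute: for a parametric Markov chain, the probability of reaching a target state solves a linear system and comes out as a rational function of the parameters. It is solved symbolically with sympy on a hand-made three-state chain, not with PROPhESY itself.

```python
# Tiny worked example: reachability in a parametric Markov chain is a rational
# function of the parameters.  Solved symbolically with sympy, not with the tool.
import sympy as sp

p, q = sp.symbols("p q", positive=True)
x0, x1 = sp.symbols("x0 x1")          # reachability probabilities from s0 and s1

# Chain: s0 --p--> s1, s0 --(1-p)--> sink;  s1 --q--> target, s1 --(1-q)--> s0
eqs = [sp.Eq(x0, p * x1),
       sp.Eq(x1, q + (1 - q) * x0)]
sol = sp.solve(eqs, [x0, x1], dict=True)[0]

reach = sp.simplify(sol[x0])          # equals p*q / (1 - p*(1 - q))
print("Pr(reach target from s0) =", reach)
print("instantiated at p=0.9, q=0.5:",
      reach.subs({p: sp.Rational(9, 10), q: sp.Rational(1, 2)}))
```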