
Showing papers on "Parametric model published in 2021"


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a novel three-parameter model, the Exponentiated Transformation of Gumbel Type-II (ETGT-II), for modeling two data sets of death cases due to COVID-19.
Abstract: The aim of this study is to analyze the number of deaths due to COVID-19 for Europe and China. For this purpose, we proposed a novel three-parameter model, the Exponentiated Transformation of Gumbel Type-II (ETGT-II), for modeling the two data sets of death cases due to COVID-19. Specific statistical attributes are derived and analyzed, including moments and associated measures, moment generating functions, uncertainty measures, complete/incomplete moments, the survival function, the quantile function and the hazard function. Additionally, model parameters are estimated using the maximum likelihood method and the Bayesian paradigm. To examine the efficiency of the ETGT-II model, a simulation analysis is performed. Finally, the data sets of COVID-19 death cases for Europe and China are used to show the adaptability of the suggested model. The results reveal that it may fit better than other well-known models.
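The abstract does not reproduce the ETGT-II density, so the sketch below fits a hypothetical exponentiated Gumbel Type-II variant, G(x) = 1 - [1 - exp(-a x^(-b))]^alpha, by maximum likelihood; the functional form, parameter values, and synthetic data are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def neg_log_lik(theta, x):
    a, b, alpha = np.exp(theta)          # log-parametrization keeps a, b, alpha > 0
    F = np.exp(-a * x ** (-b))           # baseline Gumbel Type-II CDF
    log_f = np.log(a) + np.log(b) - (b + 1) * np.log(x) - a * x ** (-b)
    return -np.sum(np.log(alpha) + (alpha - 1) * np.log1p(-F) + log_f)

# synthetic stand-in data, sampled by inverting the assumed CDF G
a, b, alpha = 2.0, 1.5, 0.7
u = rng.uniform(size=500)
c = 1.0 - (1.0 - u) ** (1.0 / alpha)     # baseline CDF value
x = (a / (-np.log(c))) ** (1.0 / b)

fit = minimize(neg_log_lik, x0=np.zeros(3), args=(x,), method="Nelder-Mead")
print("MLE (a, b, alpha):", np.exp(fit.x))
```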

50 citations


Journal ArticleDOI
TL;DR: The ensemble results demonstrated stabilization of the forecasting errors, indicating the ability of the proposed hybrid approach to decompose wind speed time series into uncorrelated components, reducing the errors over forecasting horizons from one up to twelve steps ahead.

49 citations


Proceedings Article
01 Jan 2021
TL;DR: The authors propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop that leverages a feature pyramid and rectifies the predicted parameters explicitly, based on the mesh-image alignment status in a deep regressor.
Abstract: Regression-based methods have recently shown promising results in reconstructing human meshes from monocular images. By directly mapping raw pixels to model parameters, these methods can produce parametric models in a feed-forward manner via neural networks. However, minor deviation in parameters may lead to noticeable misalignment between the estimated meshes and image evidences. To address this issue, we propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop to leverage a feature pyramid and rectify the predicted parameters explicitly based on the mesh-image alignment status in our deep regressor. In PyMAF, given the currently predicted parameters, mesh-aligned evidences will be extracted from finer-resolution features accordingly and fed back for parameter rectification. To reduce noise and enhance the reliability of these evidences, an auxiliary pixel-wise supervision is imposed on the feature encoder, which provides mesh-image correspondence guidance for our network to preserve the most related information in spatial features. The efficacy of our approach is validated on several benchmarks, including Human3.6M, 3DPW, LSP, and COCO, where experimental results show that our approach consistently improves the mesh-image alignment of the reconstruction. The project page with code and video results can be found at this https URL.
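A minimal PyTorch sketch of the feedback idea: at each pyramid level, sample image features at the 2D projections of the current mesh and regress a parameter correction. The dimensions, the generic `project` function, and the toy regressor are illustrative assumptions, not the authors' implementation (which is available at the project page).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeshAlignedFeedback(nn.Module):
    """Toy PyMAF-style loop: extract mesh-aligned evidence, rectify parameters."""
    def __init__(self, feat_dim=32, n_verts=431, n_params=85):
        super().__init__()
        self.regressor = nn.Sequential(
            nn.Linear(n_verts * feat_dim + n_params, 256), nn.ReLU(),
            nn.Linear(256, n_params))

    def forward(self, pyramid, params, project):
        # pyramid: list of (B, C, H, W) feature maps, coarse to fine
        for feats in pyramid:
            verts_2d = project(params)                     # (B, V, 2) in [-1, 1]
            sampled = F.grid_sample(feats, verts_2d.unsqueeze(2),
                                    align_corners=False)   # (B, C, V, 1)
            evidence = sampled.squeeze(-1).flatten(1)      # (B, V*C)
            params = params + self.regressor(
                torch.cat([evidence, params], dim=1))      # rectify parameters
        return params

# toy usage with a fake projection that ignores the parameters' meaning
B, V, P = 2, 431, 85
pyramid = [torch.randn(B, 32, s, s) for s in (8, 16, 32)]
project = lambda p: torch.tanh(p[:, :2]).reshape(B, 1, 2).expand(B, V, 2)
out = MeshAlignedFeedback()(pyramid, torch.zeros(B, P), project)
print(out.shape)  # torch.Size([2, 85])
```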

47 citations


Journal ArticleDOI
TL;DR: This study provides an efficient updating strategy for the dynamic model updating of complex assembled structures with experimental test data, which promises to improve the precision and feasibility of simulation-based design optimization and performance evaluation of complex structures.

43 citations


Journal ArticleDOI
TL;DR: The efforts of this study provide an efficient dynamic model updating strategy (PM-MUS) for aeroengine casings by parametric modeling and experimental test data regarding uncorrelated modes.

41 citations


Journal ArticleDOI
TL;DR: In this paper, a parametric Scan-to-FEM approach suitable for architectural heritage is presented, which uses the Generative Programming paradigm implementing a modelling framework into a visual programming environment.
Abstract: Historic masonry buildings are characterised by uniqueness, which is intrinsically present in their building techniques, morphological features, architectural decorations, artworks, etc. From the modelling point of view, the degree of detail reached on transforming discrete digital representations of historic buildings, e.g., point clouds, into 3D objects and elements strongly depends on the final purpose of the project. For instance, structural engineers involved in the conservation process of built heritage aim to represent the structural system rigorously, neglecting architectural decorations and other details. Following this principle, the software industry is focusing on the definition of a parametric modelling approach, which allows performing the transition from half-raw survey data (point clouds) to geometrical entities in nearly no time. In this paper, a novel parametric Scan-to-FEM approach suitable for architectural heritage is presented. The proposed strategy uses the Generative Programming paradigm implementing a modelling framework into a visual programming environment. Such an approach starts from the 3D survey of the case-study structure and culminates with the definition of a detailed finite element model that can be exploited to predict future scenarios. This approach is appropriate for architectural heritage characterised by symmetries, repetition of modules and architectural orders, making the Scan-to-FEM transition fast and efficient. A Portuguese monument is adopted as a pilot case to validate the proposed procedure. In order to obtain a proper digital twin of this structure, the generated parametric model is imported into an FE environment and then calibrated via an inverse dynamic problem, using as reference metrics the modal properties identified from field acceleration data recorded before and after a retrofitting intervention. After assessing the effectiveness of the strengthening measures, the digital twin ability of reproducing past and future damage scenarios of the church is validated through nonlinear static analyses.

32 citations


Journal ArticleDOI
TL;DR: In this article, an accurate imaging and motion estimation method based on multiple-input multiple-output (MIMO) radar is presented; a preprocessing strategy based on space-time adaptive processing (STAP) theory suppresses clutter signals effectively by constructing a Doppler spectrum model.
Abstract: Image deterioration occurs in radar imaging of ship targets as a result of the complex time-varying motions of the ship, the noise in channels, and the clutter on the sea surface. The problem is hard to solve effectively due to the coherent accumulation sampling time and the high-dimensional parametric model. Hence, an accurate imaging and motion estimation method based on multiple-input multiple-output (MIMO) radar is presented. First, a multidimensional signal model is built to characterize target features accurately. To reduce the interference from sea clutter, a preprocessing strategy based on space-time adaptive processing (STAP) theory is applied; clutter signals can be suppressed effectively by constructing a Doppler spectrum model. Then, for accurate imaging and motion estimation, a combined trace norm minimization problem is deduced based on the relaxation of tensor rank, where the noise in sea environments is also considered. Meanwhile, a generalized tensor total variation constraint is developed to ensure stable estimation and smooth imaging results when separating the noise term. Accordingly, an effective decomposition criterion is formulated based on the alternating direction method of multipliers (ADMM), and motion parameters can be precisely calculated based on the least squares (LS) method. Finally, theoretical analysis and simulation results demonstrate the accuracy of the proposed method.
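Trace-norm minimization of this kind hinges on the proximal operator of the nuclear norm, which ADMM-style solvers apply as singular-value soft-thresholding. A minimal sketch, with a synthetic low-rank matrix standing in for the radar data and a threshold chosen above the noise level:

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold singular values.
    This is the core subproblem inside ADMM trace-norm minimization."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
low_rank = rng.normal(size=(40, 5)) @ rng.normal(size=(5, 40))  # rank-5 signal
noisy = low_rank + 0.5 * rng.normal(size=(40, 40))              # add noise
denoised = svt(noisy, tau=7.0)                                  # tau above noise level
print(np.linalg.matrix_rank(denoised, tol=1e-6))                # ~5
```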

27 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a new no-reference video quality assessment model that uses highly-localized space-time slices called Space-Time Chips (ST Chips) to implicitly capture motion.
Abstract: We propose a new model for no-reference video quality assessment (VQA). Our approach uses a new idea of highly-localized space-time (ST) slices called Space-Time Chips (ST Chips). ST Chips are localized cuts of video data along directions that implicitly capture motion. We use perceptually-motivated bandpass and normalization models to first process the video data, and then select oriented ST Chips based on how closely they fit parametric models of natural video statistics. We show that the parameters that describe these statistics can be used to reliably predict the quality of videos, without the need for a reference video. The proposed method implicitly models ST video naturalness, and deviations from naturalness. We train and test our model on several large VQA databases, and show that our model achieves state-of-the-art performance at reduced cost, without requiring motion computation.
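A typical natural-scene-statistics step of this kind fits a generalized Gaussian to bandpass coefficients by moment matching. The sketch below shows that step in isolation, on synthetic coefficients rather than ST Chips; it is a generic illustration, not the paper's feature pipeline.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_ggd_shape(coeffs):
    """Moment-matching estimate of the generalized Gaussian shape parameter,
    the standard NSS step applied to bandpass-filtered coefficients."""
    rho = np.mean(np.abs(coeffs)) ** 2 / np.mean(coeffs ** 2)
    r = lambda b: gamma(2 / b) ** 2 / (gamma(1 / b) * gamma(3 / b)) - rho
    return brentq(r, 0.05, 10.0)   # shape = 2 recovers the Gaussian case

rng = np.random.default_rng(2)
print(fit_ggd_shape(rng.normal(size=100_000)))   # ~2.0 for Gaussian input
print(fit_ggd_shape(rng.laplace(size=100_000)))  # ~1.0 for Laplacian input
```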

26 citations


Journal ArticleDOI
TL;DR: The objective of the present study is to develop a parametric file that is able to generate all types of bridges from a single parametric model in a design software application and in a structural analysis software application.

26 citations


Journal ArticleDOI
TL;DR: This work proposes a methodology based on Deep Generative Models to create complex models of galaxy morphologies that may meet the image simulation needs of upcoming surveys, and introduces GalSim-Hub, a community-driven repository of generative models, and a framework for incorporating generative models within the GalSim image simulation software.
Abstract: Image simulations are essential tools for preparing and validating the analysis of current and future wide-field optical surveys. However, the galaxy models used as the basis for these simulations are typically limited to simple parametric light profiles, or use a fairly limited amount of available space-based data. In this work, we propose a methodology based on Deep Generative Models to create complex models of galaxy morphologies that may meet the image simulation needs of upcoming surveys. We address the technical challenges associated with learning this morphology model from noisy and PSF-convolved images by building a hybrid Deep Learning/physical Bayesian hierarchical model for observed images, explicitly accounting for the Point Spread Function and noise properties. The generative model is further made conditional on physical galaxy parameters, to allow for sampling new light profiles from specific galaxy populations. We demonstrate our ability to train and sample from such a model on galaxy postage stamps from the HST/ACS COSMOS survey, and validate the quality of the model using a range of second- and higher-order morphology statistics. Using this set of statistics, we demonstrate significantly more realistic morphologies using these deep generative models compared to conventional parametric models. To help make these generative models practical tools for the community, we introduce GalSim-Hub, a community-driven repository of generative models, and a framework for incorporating generative models within the GalSim image simulation software.
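For reference, here is the kind of "simple parametric light profile" the deep generative models are meant to move beyond: a hedged GalSim sketch rendering a Sersic galaxy convolved with a Gaussian PSF. The profile parameters, pixel scale, and noise level are arbitrary assumptions; GalSim-Hub itself is not exercised here.

```python
import galsim  # pip install galsim

# a simple parametric light profile: Sersic galaxy convolved with a Gaussian PSF
gal = galsim.Sersic(n=1.5, half_light_radius=0.8, flux=1e4)
psf = galsim.Gaussian(fwhm=0.12)
obs = galsim.Convolve([gal, psf])

image = obs.drawImage(nx=64, ny=64, scale=0.03)   # pixel scale in arcsec
image.addNoise(galsim.GaussianNoise(sigma=10.0))  # simple detector noise
print(image.array.shape, image.array.sum())
```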

25 citations


Journal ArticleDOI
TL;DR: In this article, a novel data-driven approach is proposed to predict hourly global irradiation profiles from the cheaper and more readily available records of daily global irradiation, based on a prior categorization of hourly observations using the K-means clustering algorithm, followed by non-parametric function approximation using a multi-layer perceptron artificial neural network.
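A minimal scikit-learn sketch of the two-stage idea, with synthetic data standing in for measured irradiation records; the cluster count, network size, and bell-shaped daily profile are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# toy stand-in: daily totals -> 24 hourly values (real data would be measured)
daily = rng.uniform(1, 8, size=(300, 1))                   # kWh/m^2/day
hours = np.sin(np.linspace(0, np.pi, 24))                  # bell-shaped day
hourly = daily * hours / hours.sum() + 0.01 * rng.normal(size=(300, 24))

# step 1: categorize days by their hourly shape with K-means
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(hourly)

# step 2: one MLP per cluster maps the daily total to the 24 hourly values
models = {k: MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0).fit(daily[labels == k],
                                              hourly[labels == k])
          for k in range(4)}
print(models[0].predict([[5.0]]).shape)  # (1, 24)
```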

Journal ArticleDOI
TL;DR: This study defines and discusses the properties of stable and tempered stable random variables, and conducts an empirical analysis to explore the performance of different models representing the distributions of log-returns for the S&P500 and DAX indexes.
Abstract: In this study, we investigate the performance of different parametric models with stable and tempered stable distributions for capturing the tail behaviour of log-returns (financial asset returns). First, we define and discuss the properties of stable and tempered stable random variables. We then show how to estimate their parameters and simulate them based on their characteristic functions. Finally, as an illustration, we conduct an empirical analysis to explore the performance of different models representing the distributions of log-returns for the S&P500 and DAX indexes.
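A small SciPy sketch of the simulation side, assuming illustrative parameter values: sample alpha-stable "log-returns" and compare their extreme-tail frequency with that of a fitted normal. The empirical S&P500/DAX analysis is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# simulate heavy-tailed "log-returns" from an alpha-stable law
alpha, beta = 1.7, 0.0                     # tail index < 2 => infinite variance
r = stats.levy_stable.rvs(alpha, beta, loc=0.0, scale=0.01,
                          size=10_000, random_state=rng)

# compare tail frequency beyond 5 'sigma-like' units against a normal fit
mu, sd = r.mean(), r.std()
p_stable = np.mean(np.abs(r - mu) > 5 * sd)
p_normal = 2 * stats.norm(mu, sd).sf(mu + 5 * sd)
print(f"stable sample: P(|r - mu| > 5 sd) ~ {p_stable:.2e}")
print(f"normal fit:    P(|r - mu| > 5 sd) ~ {p_normal:.2e}")
```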

Journal ArticleDOI
TL;DR: In this article, the performance of single- and double-robust estimators was compared to parametric regression for bias and confidence interval coverage under a simple confounding scenario and a complex confounding scenario.
Abstract: Unlike parametric regression, machine learning (ML) methods do not generally require precise knowledge of the true data-generating mechanisms. As such, numerous authors have advocated for ML methods to estimate causal effects. Unfortunately, ML algorithms can perform worse than parametric regression. We demonstrate the performance of ML-based single- and double-robust estimators. We use 100 Monte Carlo samples with sample sizes of 200, 1200, and 5000 to investigate bias and confidence interval coverage under several scenarios. In a simple confounding scenario, confounders were related to the treatment and the outcome via parametric models. In a complex confounding scenario, the simple confounders were transformed to induce complicated nonlinear relationships. In the simple scenario, when ML algorithms were used, double-robust estimators were superior to single-robust estimators. In the complex scenario, single-robust estimators with ML algorithms were at least as biased as estimators using misspecified parametric models. Double-robust estimators were less biased, but coverage was well below nominal. The combination of sample splitting, inclusion of confounder interactions, reliance on a richly specified ML algorithm, and use of doubly robust estimators was the only explored approach that yielded negligible bias and nominal coverage. Our results suggest that ML-based singly robust methods should be avoided.
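A cross-fitted AIPW (doubly robust) estimator in miniature, assuming random forests as the ML learners and a toy confounded data-generating process with a known effect of 2; this is a generic sketch of the estimator class studied, not the authors' simulation code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def aipw_ate(X, a, y, folds=2, seed=0):
    """Cross-fitted augmented inverse-probability-weighted (doubly robust)
    estimate of the average treatment effect E[Y(1) - Y(0)]."""
    psi = np.zeros(len(y))
    for train, test in KFold(folds, shuffle=True, random_state=seed).split(X):
        ps = RandomForestClassifier(random_state=seed).fit(X[train], a[train])
        e = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)
        m1 = RandomForestRegressor(random_state=seed).fit(
            X[train][a[train] == 1], y[train][a[train] == 1])
        m0 = RandomForestRegressor(random_state=seed).fit(
            X[train][a[train] == 0], y[train][a[train] == 0])
        mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
        t = a[test]
        psi[test] = (mu1 - mu0
                     + t * (y[test] - mu1) / e
                     - (1 - t) * (y[test] - mu0) / (1 - e))
    return psi.mean()

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 4))
a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))        # confounded treatment
y = 2.0 * a + X[:, 0] + rng.normal(size=2000)          # true effect = 2
print(aipw_ate(X, a, y))                                # ~2
```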

Journal ArticleDOI
TL;DR: A new time series clustering procedure allowing for heteroskedasticity, non-normality and model non-linearity is developed via the autocorrelation-based fuzzy C-means (A-FCM) algorithm.

Journal ArticleDOI
TL;DR: In this article, the authors used the zero-order parametric Brown model for indoor fire forecasting and found that the performance of the model is improved by selecting the smoothing parameter from the out-of-limit set rather than the classical set of parameters.
Abstract: Possibilities for parameterizing the zero-order Brown model for indoor air forecasting based on the current measure of air state gain recurrence are considered. The key to the zero-order parametric Brown forecasting model is the selection of the smoothing parameter, which characterizes forecast adaptability to the current air state gain recurrence measure. It is shown that for effective short-term indoor fire forecasting, the Brown model parameter must be selected from the out-of-limit set bounded by 1 and 2. The out-of-limit set for the Brown model parameter is an area of effective fire forecasting based on the measure of current indoor air state gain recurrence. Errors of the fire forecast based on the parameterized zero-order Brown model, for both the classical and out-of-limit sets of the model parameter, are investigated using the example of ignition of various materials in a laboratory chamber. As quantitative indicators of forecast quality, the absolute and mean forecast errors, exponentially smoothed with a parameter of 0.4, are investigated. It was found that for alcohol, the smoothed absolute and mean forecast errors for the classical smoothing parameter in the no-ignition interval do not exceed 20%. At the same time, for the out-of-limit case, the indicated forecast errors are, on average, an order of magnitude smaller. Similar ratios of forecast errors hold for paper, wood and textile ignition. However, for the transition zone corresponding to the time of material ignition, a sharp decrease in the current measure of chamber air state gain recurrence is observed. It was found that for this zone, the smoothed absolute forecast error for alcohol is about 2% if the model parameter is selected from the classical set. If the model parameter is selected from the out-of-limit set, the forecast error is about 0.2%. The results generally demonstrate significant advantages of using the zero-order parametric Brown model with out-of-limit model parameters for indoor fire forecasting.
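A minimal sketch of the zero-order Brown (simple exponential smoothing) one-step forecast, contrasting a classical smoothing parameter with an out-of-limit one on a synthetic drifting series. The series and parameter values are illustrative assumptions; which parameter wins depends on the series, and the paper's recurrence-measure setting is not reproduced.

```python
import numpy as np

def brown_forecast(y, alpha):
    """One-step-ahead forecasts from zero-order Brown exponential smoothing.
    Classically alpha lies in (0, 1); the paper argues for the out-of-limit
    range (1, 2) when tracking the air state gain recurrence measure."""
    s = y[0]
    preds = [s]                              # trivial forecast for y[0]
    for obs in y[:-1]:
        s = alpha * obs + (1 - alpha) * s    # becomes the next-step forecast
        preds.append(s)
    return np.array(preds)

rng = np.random.default_rng(6)
y = np.cumsum(rng.normal(0.2, 1.0, size=200))   # drifting "air state" series
for alpha in (0.4, 1.4):                        # classical vs out-of-limit
    err = np.abs(y - brown_forecast(y, alpha)).mean()
    print(f"alpha={alpha}: mean abs one-step error = {err:.3f}")
```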

Journal ArticleDOI
TL;DR: It is shown that typical trial follow-up can be unsuitable for extrapolation, resulting in unreliable estimation of multiple-parameter models, and that selecting survival models based only on goodness-of-fit statistics is inappropriate given the high level of uncertainty in a cost-effectiveness analysis.
Abstract: Extrapolations of parametric survival models fitted to censored data are routinely used in the assessment of health technologies to estimate mean survival, particularly in diseases that potentially reduce the life expectancy of patients. Akaike’s information criterion (AIC) and Bayesian information criterion (BIC) are commonly used in health technology assessment alongside an assessment of plausibility to determine which statistical model best fits the data and should be used for prediction of long-term treatment effects. We compare fit and estimates of restricted mean survival time (RMST) from 8 parametric models and contrast models preferred in terms of AIC, BIC, and log-likelihood, without considering model plausibility. We assess the methods’ suitability for selecting a parametric model through simulation of data replicating the follow-up of intervention arms for various time-to-event outcomes from 4 clinical trials. Follow-up was replicated through the consideration of recruitment duration and minimum and maximum follow-up times. Ten thousand simulations of each scenario were performed. We demonstrate that the different methods can result in disagreement over the best model and that it is inappropriate to base model selection solely on goodness-of-fit statistics without consideration of hazard behavior and plausibility of extrapolations. We show that typical trial follow-up can be unsuitable for extrapolation, resulting in unreliable estimation of multiple parameter models, and infer that selecting survival models based only on goodness-of-fit statistics is unsuitable due to the high level of uncertainty in a cost-effectiveness analysis. This article demonstrates the potential problems of overreliance on goodness-of-fit statistics when selecting a model for extrapolation. When follow-up is more mature, BIC appears superior to the other selection methods, selecting models with the most accurate and least biased estimates of RMST.
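A small SciPy sketch of the restricted-mean-survival-time calculation for one parametric candidate: fit a Weibull to toy uncensored times and integrate its survival function up to a horizon. Real health-technology-assessment work must handle censoring and compare several candidate models; the data and horizons here are assumptions.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(7)

# toy uncensored 'trial' times in months (real trial data are censored)
times = stats.weibull_min.rvs(1.3, scale=24.0, size=300, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(times, floc=0)

# restricted mean survival time: integral of S(t) from 0 to the horizon tau
for tau in (36.0, 120.0):   # within vs beyond typical follow-up
    rmst, _ = quad(lambda t: stats.weibull_min.sf(t, shape, loc, scale), 0, tau)
    print(f"RMST up to {tau:5.1f} months: {rmst:.1f}")
```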

Journal ArticleDOI
01 Jun 2021-Extremes
TL;DR: In this article, the authors describe various approaches to quantifying notions of flexibility and then propose new parametric classes of distributions that satisfy these notions and are computable without requiring numerical integration.
Abstract: When making inferences about extreme quantiles, using simple parametric models for the entire distribution can be problematic in that a model that accurately describes the bulk of the distribution may lead to substantially biased estimates of extreme quantiles if the model is misspecified. One way to address this problem is to use flexible parametric families of distributions. For the setting where extremes in both the upper and lower tails are of interest, this paper describes various approaches to quantifying notions of flexibility and then proposes new parametric classes of distributions that satisfy these notions and are computable without requiring numerical integration. A semiparametric extension of these distributions is proposed when the parametric classes are not sufficiently flexible. Some of the new models are applied to daily temperature in July from an ensemble of 50 climate model runs that can be treated as independent realizations of the climate system over the period studied. The large ensemble makes it possible to compare estimates of extreme quantiles based on a single model run to estimates based on the full ensemble. For these data, at the four largest US cities, Chicago, Houston, Los Angeles and New York City, the parametric models generally dominate estimates based on fitting generalized Pareto distributions to some fraction of the most extreme observations, sometimes by a substantial margin. Thus, in at least this setting, parametric models not only provide a way to estimate the whole distribution, they also result in better estimates of extreme quantiles than traditional extreme value approaches.
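For contrast with the paper's flexible parametric families, here is the traditional peaks-over-threshold route they are compared against: fit a generalized Pareto distribution to threshold exceedances and read off an extreme quantile. The "temperature" data are synthetic stand-ins, not the climate-model ensemble.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

# stand-in 'daily July temperatures' for 50 runs x 31 days
temps = stats.norm.rvs(30, 3, size=50 * 31, random_state=rng)

# fit a GPD to exceedances over a high threshold
u = np.quantile(temps, 0.95)
exc = temps[temps > u] - u
shape, loc, scale = stats.genpareto.fit(exc, floc=0)

# estimate the 0.999 quantile from the tail model:
# P(X > x) = P(X > u) * SF_gpd(x - u)
p_exc = np.mean(temps > u)
q = u + stats.genpareto.ppf(1 - 0.001 / p_exc, shape, loc, scale)
print(f"threshold {u:.2f}, GPD 99.9% quantile estimate {q:.2f}")
```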

Journal ArticleDOI
TL;DR: In this paper, particle swarm optimization is applied to optimize the profile of an insulator for the conventional electric power grid, based on electrical potential data obtained using the finite element method, an approach defined in this paper as the optimized finite element method.
Abstract: Reliability in the supply of electricity depends on the insulation of the electrical power system. Insulators are components that have the function of insulating and supporting the electrical grid. The design of electrical distribution network insulators can have a great influence on the distribution of the electric field over their surface. When a high-intensity electric field is applied at a specific location, there may be a greater chance of a fault developing. An optimized insulator profile design can ensure that the network performs better. In this paper, particle swarm optimization is applied to optimize the profile of an insulator for the conventional electric power grid, based on electrical potential data obtained using the finite element method, defined in this paper as the optimized finite element method. The component parameters are optimized to obtain a parametric model. The parametric model is evaluated and compared with the usually employed profiles to define an optimized design. The results show that the proposed method is a promising alternative for the design of electrical energy distribution insulators.
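A bare-bones particle swarm loop, with a toy quadratic standing in for the finite-element field evaluation; the swarm constants and the two "profile" parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(8)

def field_stress(profile):
    """Toy stand-in for the FEM evaluation: peak 'electric field' of a
    2-parameter insulator profile (a real study would call an FEM solver)."""
    shed_radius, shed_spacing = profile
    return (shed_radius - 3.0) ** 2 + (shed_spacing - 1.5) ** 2

n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
x = rng.uniform(0, 5, (n, dim)); v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), np.array([field_stress(p) for p in x])
gbest = pbest[pbest_f.argmin()]

for _ in range(100):
    v = (w * v + c1 * rng.random((n, dim)) * (pbest - x)
               + c2 * rng.random((n, dim)) * (gbest - x))
    x = x + v
    f = np.array([field_stress(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("optimized profile:", gbest.round(3))  # ~ [3.0, 1.5]
```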


Journal ArticleDOI
TL;DR: The study finds that wall insulation, infiltration, and lighting load are the most significant parameters affecting the region's energy performance, and proposes a reduction strategy that leads to a valid yet relatively fast analysis process.
Abstract: The construction of zero energy buildings has been an impactful response to global warming and energy crises. Despite numerous approaches to model net-zero-energy buildings, it is still a challenge for building designers to evaluate the possibility of achieving this goal during the architectural design phases. This article, therefore, aims to offer a systematic framework for a feasibility study of zero energy buildings suitable for the design process. By applying a well-structured method of identifying residential prototypes in the city of Shiraz in Iran, the paper develops a parametric model, considering both geometric prototypes and non-geometric parameters, which generates a large number of options for the analysis of buildings' energy consumption and photovoltaics' electricity generation. Although multivariable parametric analysis leads to broad solution space, the large size of option space is a barrier to implement effectively and regularly during the design process. Thus, using a statistical design of experiment, we apply a reduction strategy that leads to a valid yet relatively fast analysis process. Finally, to identify the parameters with the highest impact on energy use intensity, we post-process the results and perform a sensitivity analysis. With the current patterns and configurations of the residential buildings in the region under investigation, the results show that achieving net-zero-energy building is highly feasible by improving the buildings' envelope and construction and the interior lighting power density. The study suggests that the wall insulation, infiltration, and lighting load are the most significant parameters affecting the region's energy performance.
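A hedged sketch of the reduction idea: a space-filling Latin hypercube design over a few parameters, a toy function standing in for the energy simulation, and a crude correlation-based sensitivity screen. Parameter names, the design size, and the toy model are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(11)

# toy energy model over 3 normalized design parameters
# (a real study would call a building energy simulator here)
names = ["wall_insulation", "infiltration", "lighting_load"]
def energy_use(z):          # kWh/m^2/yr, illustrative only
    return (80 - 20 * z[:, 0] + 15 * z[:, 1] + 25 * z[:, 2]
            + rng.normal(0, 1, len(z)))

# space-filling design keeps the option space small but representative
sample = qmc.LatinHypercube(d=3, seed=0).random(n=64)
eui = energy_use(sample)

# quick sensitivity screen: rank parameters by |correlation| with energy use
for i, name in enumerate(names):
    r = np.corrcoef(sample[:, i], eui)[0, 1]
    print(f"{name:16s} r = {r:+.2f}")
```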

Journal ArticleDOI
TL;DR: In this article, two discretization schemes, corresponding to equidistant times or equidistant marginal survival probabilities, and two ways of interpolating the discrete-time predictions, were introduced.
Abstract: Due to rapid developments in machine learning, and in particular neural networks, a number of new methods for time-to-event predictions have been developed in the last few years. As neural networks are parametric models, it is more straightforward to integrate parametric survival models in the neural network framework than the popular semi-parametric Cox model. In particular, discrete-time survival models, which are fully parametric, are interesting candidates to extend with neural networks. The likelihood for discrete-time survival data may be parameterized by the probability mass function (PMF) or by the discrete hazard rate, and both of these formulations have been used to develop neural network-based methods for time-to-event predictions. In this paper, we review and compare these approaches. More importantly, we show how the discrete-time methods may be adopted as approximations for continuous-time data. To this end, we introduce two discretization schemes, corresponding to equidistant times or equidistant marginal survival probabilities, and two ways of interpolating the discrete-time predictions, corresponding to piecewise constant density functions or piecewise constant hazard rates. Through simulations and study of real-world data, the methods based on the hazard rate parametrization are found to perform slightly better than the methods that use the PMF parametrization. Inspired by these investigations, we also propose a continuous-time method by assuming that the continuous-time hazard rate is piecewise constant. The method, named PC-Hazard, is found to be highly competitive with the aforementioned methods in addition to other methods for survival prediction found in the literature.
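The hazard parametrization referenced above leads to a simple likelihood: an individual contributes log(1 - h) for each interval survived, plus log(h) of the event interval if the event occurred. A minimal NumPy sketch; the interval convention (censoring counted as surviving its interval) and the toy data are illustrative assumptions.

```python
import numpy as np

def discrete_hazard_nll(h, k, d):
    """Negative log-likelihood of discrete-time survival data under the
    hazard parametrization. h: (n, m) hazards per interval; k: (n,) index of
    the event/censoring interval; d: (n,) 1 if event, 0 if censored."""
    ll = 0.0
    for i in range(len(k)):
        ll += np.sum(np.log1p(-h[i, :k[i]]))          # survived intervals
        ll += d[i] * np.log(h[i, k[i]])               # event in interval k
        ll += (1 - d[i]) * np.log1p(-h[i, k[i]])      # censored in interval k
    return -ll / len(k)

rng = np.random.default_rng(9)
h = np.full((4, 10), 0.2)                 # constant hazard, 10 intervals
k = np.array([0, 3, 5, 9]); d = np.array([1, 1, 0, 0])
print(discrete_hazard_nll(h, k, d))
```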

Journal ArticleDOI
TL;DR: The authors propose a neural network-based method to regress the 3D face shape and orientation from an input 2D caricature image, which works well for various caricatures.
Abstract: Caricature is an artistic abstraction of the human face by distorting or exaggerating certain facial features, while still retains a likeness with the given face. Due to the large diversity of geometric and texture variations, automatic landmark detection and 3D face reconstruction for caricature is a challenging problem and has rarely been studied before. In this paper, we propose the first automatic method for this task by a novel 3D approach. To this end, we first build a dataset with various styles of 2D caricatures and their corresponding 3D shapes, and then build a parametric model on vertex based deformation space for 3D caricature face. Based on the constructed dataset and the nonlinear parametric model, we propose a neural network based method to regress the 3D face shape and orientation from the input 2D caricature image. Ablation studies and comparison with state-of-the-art methods demonstrate the effectiveness of our algorithm design. Extensive experimental results demonstrate that our method works well for various caricatures. Our constructed dataset, source code and trained model are available at https://github.com/Juyong/CaricatureFace .

Book ChapterDOI
18 Jul 2021
TL;DR: IMITATOR is a parametric model checker for real-time systems; it takes as input an extension of parametric timed automata (PTAs), a powerful formalism for formally verifying critical real-time systems.
Abstract: Real-time systems are notoriously hard to verify due to nondeterminism, concurrency and timing constraints. When timing constants are uncertain (in the early design phase, or due to slight variations of the timing bounds), timed model checking techniques may not be satisfactory. In contrast, parametric timed model checking synthesizes timing values ensuring correctness. IMITATOR takes as input an extension of parametric timed automata (PTAs), a powerful formalism to formally verify critical real-time systems. IMITATOR extends PTAs with multi-rate clocks, global rational-valued variables and a set of additional useful features. We describe here the new features and algorithms offered by IMITATOR 3, which moved along the years from a simple prototype dedicated to robustness analysis to a standalone parametric model checker for timed systems.

Journal ArticleDOI
TL;DR: A modeling framework applicable to the 3D joint distribution of circular-linear-linear (C-L-L) data is proposed, consisting of a parametric model based on copulas and a nonparametric kernel density estimation model.

Journal ArticleDOI
TL;DR: Zhang et al. proposed Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with a free-form deep implicit function.
Abstract: Modeling 3D humans accurately and robustly from a single image is very challenging, and the key for such an ill-posed problem is the 3D representation of the human models. To overcome the limitations of regular 3D representations, we propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function. In our PaMIR-based reconstruction framework, a novel deep neural network is proposed to regularize the free-form deep implicit function using the semantic features of the parametric model, which improves the generalization ability under the scenarios of challenging poses and various clothing topologies. Moreover, a novel depth-ambiguity-aware training loss is further integrated to resolve depth ambiguities and enable successful surface detail reconstruction with imperfect body reference. Finally, we propose a body reference optimization method to improve the parametric model estimation accuracy and to enhance the consistency between the parametric model and the implicit function. With the PaMIR representation, our framework can be easily extended to multi-image input scenarios without the need of multi-camera calibration and pose synchronization. Experimental results demonstrate that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.

Journal ArticleDOI
TL;DR: The experimental validation of a linear time-invariant (LTI) energy-maximizing control strategy for wave energy converters (WECs), applied to a 1/20 scale Wavestar WEC is addressed, validating the LiTe-Con controller in a realistic real-time scenario.
Abstract: This study addresses the experimental validation of a linear time-invariant (LTI) energy-maximizing control strategy for wave energy converters (WECs), applied to a 1/20 scale Wavestar WEC. To fulfill this objective, system identification routines are utilized to compute a mathematical (parametric) model of the input-output dynamics of the device, suitable for control design and implementation. With this parametric model, the so-called LiTe-Con energy-maximizing strategy, recently published in the literature, is designed, synthesized, and tested under irregular wave excitation in the wave basin at Aalborg University. Given that the LiTe-Con requires instantaneous knowledge of the wave excitation effects, estimates are provided by means of an unknown-input Kalman filter, designed in close synergy with the so-called internal model principle. For the experimental assessment, both controller and estimator are directly implemented in a real-time architecture. The performance of the LiTe-Con is evaluated in terms of energy absorption, showing results consistent with those obtained in numerical simulation, hence validating the LiTe-Con controller in a realistic real-time scenario.
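A generic unknown-input Kalman filter sketch: a toy 1-DOF oscillator stands in for the Wavestar dynamics, and the excitation force is estimated by augmenting the state with a random-walk component. All numerical values and the model structure are assumptions for illustration, not the experimental setup.

```python
import numpy as np

rng = np.random.default_rng(12)

# toy 1-DOF WEC: state [position, velocity, excitation force]; treating the
# unknown excitation as a random-walk state lets a standard Kalman filter
# estimate it from motion measurements (an unknown-input formulation)
dt, m, k, c = 0.01, 1.0, 50.0, 2.0
A = np.array([[1, dt, 0],
              [-k / m * dt, 1 - c / m * dt, dt / m],
              [0, 0, 1]])
H = np.array([[1.0, 0.0, 0.0]])          # measure position only
Q = np.diag([1e-8, 1e-8, 1e-1]); R = np.array([[1e-4]])

x_hat = np.zeros(3); P = np.eye(3)
x_true = np.zeros(2); est, truth = [], []
for t in np.arange(0, 10, dt):
    f = 5 * np.sin(1.2 * t)                                  # true excitation
    acc = (f - k * x_true[0] - c * x_true[1]) / m
    x_true = x_true + dt * np.array([x_true[1], acc])
    z = x_true[0] + rng.normal(0, 1e-2)                      # noisy measurement
    x_hat = A @ x_hat; P = A @ P @ A.T + Q                   # predict
    S = H @ P @ H.T + R; K = P @ H.T @ np.linalg.inv(S)      # gain
    x_hat = x_hat + (K @ (z - H @ x_hat)).ravel()            # update
    P = (np.eye(3) - K @ H) @ P
    est.append(x_hat[2]); truth.append(f)

print("RMS excitation-estimate error:",
      np.sqrt(np.mean((np.array(est) - np.array(truth)) ** 2)))
```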

Proceedings ArticleDOI
22 May 2021
TL;DR: In this article, a fast parametric model checking (fPMC) approach is proposed to extend the applicability of PMC to a broader class of systems than previously possible, by partitioning the Markov models that PMC operates with into fragments whose reachability properties are analysed independently, and obtaining PMC reachability formulae by combining the results of these fragment analyses.
Abstract: Parametric model checking (PMC) computes algebraic formulae that express key non-functional properties of a system (reliability, performance, etc.) as rational functions of the system and environment parameters. In software engineering, PMC formulae can be used during design, e.g., to analyse the sensitivity of different system architectures to parametric variability, or to find optimal system configurations. They can also be used at runtime, e.g., to check if non-functional requirements are still satisfied after environmental changes, or to select new configurations after such changes. However, current PMC techniques do not scale well to systems with complex behaviour and more than a few parameters. Our paper introduces a fast PMC (fPMC) approach that overcomes this limitation, extending the applicability of PMC to a broader class of systems than previously possible. To this end, fPMC partitions the Markov models that PMC operates with into fragments whose reachability properties are analysed independently, and obtains PMC reachability formulae by combining the results of these fragment analyses. To demonstrate the effectiveness of fPMC, we show how our fPMC tool can analyse three systems (taken from the research literature, and belonging to different application domains) with which current PMC techniques and tools struggle.
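What PMC computes can be seen on a toy example: symbolic reachability for a two-parameter Markov chain, solved here with SymPy. This reproduces only the kind of rational-function output PMC yields, not fPMC's fragmentation; the chain and parameter values are assumptions.

```python
import sympy as sp

p, q = sp.symbols("p q", positive=True)
x0, x1 = sp.symbols("x0 x1")

# tiny parametric DTMC: s0 --p--> s1, s0 --(1-p)--> fail,
#                       s1 --q--> goal, s1 --(1-q)--> s0
sol = sp.solve([sp.Eq(x0, p * x1),
                sp.Eq(x1, q + (1 - q) * x0)], [x0, x1])

reach = sp.simplify(sol[x0])         # closed-form reachability formula
print(reach)                         # p*q/(p*q - p + 1)
# once computed, re-evaluating after an environmental change is instant:
print(reach.subs({p: 0.9, q: 0.99}))
```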

Proceedings ArticleDOI
10 Jan 2021
TL;DR: In this article, a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures is presented, with the view of producing models which train faster and more robustly.
Abstract: Typically, loss functions, regularization mechanisms and other important aspects of training parametric models are chosen heuristically from a limited set of options. In this paper, we take the first step towards automating this process, with the view of producing models which train faster and more robustly. Concretely, we present a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures. We develop a pipeline for “meta-training” such loss functions, targeted at maximizing the performance of the model trained under them. The loss landscape produced by our learned losses significantly improves upon the original task-specific losses in both supervised and reinforcement learning tasks. Furthermore, we show that our meta-learning framework is flexible enough to incorporate additional information at meta-train time. This information shapes the learned loss function such that the environment does not need to provide this information during meta-test time. We make our code available at https://sites.google.com/view/mlthree
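A compact PyTorch sketch of the meta-training loop, assuming a hypothetical two-parameter loss family and a linear regression task; the paper's pipeline is far richer. It only shows the key mechanism: the post-update task objective back-propagates through a differentiable inner step into the loss parameters.

```python
import torch

torch.manual_seed(0)
phi = torch.tensor([1.0, 0.0], requires_grad=True)       # learned loss params
meta_opt = torch.optim.Adam([phi], lr=1e-2)

def learned_loss(pred, target):
    e = pred - target
    return (phi[0] * e ** 2 + phi[1] * e.abs()).mean()   # parametric family

for step in range(200):
    w = torch.zeros(3, requires_grad=True)               # fresh task model
    X = torch.randn(64, 3); y = X @ torch.tensor([1.0, -2.0, 0.5])
    inner = learned_loss(X @ w, y)
    g, = torch.autograd.grad(inner, w, create_graph=True)
    w_new = w - 0.1 * g                                  # differentiable update
    meta = ((X @ w_new - y) ** 2).mean()                 # true task objective
    meta_opt.zero_grad(); meta.backward(); meta_opt.step()

print("learned loss weights:", phi.data)
```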

Journal ArticleDOI
TL;DR: A comparison between the present study and other studies that used the same dataset showed that, compared to the hybridization of non-parametric models alone, the hybridization of parametric and non-parametric models potentially results in better accuracy.
Abstract: This study employed a hybridization approach that combines parametric and non-parametric models to predict air over-pressure (AOp) associated with quarry blasting. A simple linear regression model, which is a kind of parametric model, was used to select the most relevant inputs for predicting AOp. Four non-parametric models, including the Chi-square automatic interaction detector (CHAID), artificial neural network (ANN), k-nearest neighbors (KNN), and support vector machine (SVM), were developed using the outputs of the linear model to predict AOp. The models developed were evaluated using five performance indicators, a simple ranking system, and a gains chart. According to the evaluations, ANN and CHAID (both with cumulative ranking = 36) outperformed SVM (cumulative ranking = 15) and KNN (cumulative ranking = 24) in predicting AOp. While CHAID (training ranking = 20) performed better than the other models in the training phase, ANN (testing ranking = 20) performed better in the testing phase. In addition, while the ANN and CHAID models identified distance as the least important factor for predicting AOp, there was no agreement on the most important factor. Moreover, a comparison between the present study and other studies that used the same dataset showed that, compared to the hybridization of non-parametric models alone, the hybridization of parametric and non-parametric models potentially results in better accuracy.
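A minimal scikit-learn sketch of the hybrid recipe: per-feature linear-regression screening followed by an ANN on the retained inputs. The blasting records are synthetic stand-ins and the feature count is an assumption; the paper's ranking system is not reproduced.

```python
import numpy as np
from sklearn.feature_selection import f_regression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(10)

# toy stand-ins for blasting records (charge, distance, burden, spacing, ...)
X = rng.uniform(0, 1, size=(400, 6))
aop = 120 + 30 * X[:, 0] - 25 * X[:, 1] + rng.normal(0, 2, size=400)

# step 1 (parametric): screen inputs with per-feature linear regressions
f_stat, _ = f_regression(X, aop)
keep = np.argsort(f_stat)[-2:]                    # most relevant inputs
print("selected feature indices:", keep)

# step 2 (non-parametric): fit an ANN on the selected inputs only
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                   random_state=0).fit(X[:, keep], aop)
print("R^2 on training data:", round(ann.score(X[:, keep], aop), 3))
```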