Showing papers on "Parametric model published in 1987"


Journal ArticleDOI
TL;DR: In this article, a parametric model is developed to describe relative permeability-saturation-fluid pressure functional relationships in two- or three-fluid phase porous media systems subject to monotonic saturation paths.
Abstract: A parametric model is developed to describe relative permeability-saturation-fluid pressure functional relationships in two- or three-fluid phase porous media systems subject to monotonic saturation paths. All functions are obtained as simple closed-form expressions convenient for implementation in numerical multiphase flow models. Model calibration requires only relatively simple determinations of saturation-pressure relations in two-phase systems. A scaling procedure is employed to simplify the description of two-phase saturation-capillary head relations for arbitrary fluid pairs and experimental results for two porous media are presented to demonstrate its applicability. Extension of two-phase relations to three-phase systems is obtained under the assumption that fluid wettability follows the sequence water > nonaqueous phase liquid > air. Expressions for fluid relative permeabilities are derived from the scaled saturation-capillary head function using a flow channel distribution model to estimate effective mean fluid-conducting pore dimensions. Constraints on model application are discussed.
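The closed-form expressions themselves are not reproduced in this abstract; as a rough illustration of the kind of relations involved, the sketch below evaluates the widely used van Genuchten saturation-capillary head function and a Mualem-type relative permeability derived from it. Parameter names and values are illustrative assumptions, not the paper's notation or calibration.

```python
import numpy as np

def effective_saturation(h, alpha, n):
    """van Genuchten-type scaled saturation vs. capillary head (illustrative form)."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * h) ** n) ** (-m)

def relative_permeability(Se, n):
    """Mualem-type relative permeability derived from the saturation function."""
    m = 1.0 - 1.0 / n
    return np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Hypothetical air-water pair with illustrative shape parameters
h = np.linspace(0.01, 5.0, 50)                    # capillary head
Se = effective_saturation(h, alpha=2.0, n=3.0)
kr = relative_permeability(Se, n=3.0)
```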

710 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined penalized likelihood estimation in the context of general regression problems, characterized as probability models with composite likelihood functions, where a parametric model is considered satisfactory but for inhomogeneity with respect to a few extra variables.
Abstract: Summary This paper examines penalized likelihood estimation in the context of general regression problems, characterized as probability models with composite likelihood functions. The emphasis is on the common situation where a parametric model is considered satisfactory but for inhomogeneity with respect to a few extra variables. A finite-dimensional formulation is adopted, using a suitable set of basis functions. Appropriate definitions of deviance, degrees of freedom, and residual are provided, and the method of cross-validation for choice of the tuning constant is discussed. Quadratic approximations are derived for all the required statistics.
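As a hedged sketch of the kind of criterion described above: the penalized likelihood combines a parametric linear predictor with basis-function terms for the extra variables. The notation below is chosen here for illustration and is not the paper's.

```latex
% Illustrative penalized likelihood: parametric coefficients \beta plus
% basis-function coefficients \gamma capturing inhomogeneity in the extra variables z.
g(x, z) = x^{\top}\beta + \sum_{j=1}^{J} \gamma_j\, b_j(z),
\qquad
\max_{\beta,\,\gamma}\ \ \ell(\beta, \gamma) - \tfrac{\lambda}{2}\, \gamma^{\top} K \gamma ,
% where \ell is the (composite) log-likelihood, K is a fixed penalty matrix, and the
% tuning constant \lambda is chosen by cross-validation.
```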

302 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that the ARCH processes can be regarded as special cases of the RCA model and that the special feature of these two types of models is that the variability of a process might well depend on the available information.
Abstract: Under the traditional linear time series or regression setting, the conditional variance of one-step-ahead prediction is time invariant. Experience in conjunction with data analysis, however, suggests that the variability of a process might well depend on the available information. This reality has motivated extensive research to relax the constant variance assumption imposed by the traditional linear time series model, and several classes of generalized parametric models designed specifically for handling nonhomogeneity of a process have been proposed recently. In particular, the random coefficient autoregressive (RCA) models were widely investigated by time series analysts and the autoregressive conditional heteroscedastic (ARCH) models were investigated by econometricians. The interesting fact is that the ARCH processes can be regarded as special cases of the RCA model. In this article, I first give the relationship between these two types of models and show that the special feature of these t...
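A minimal first-order illustration of the connection described above, with notation chosen here rather than taken from the article:

```latex
% RCA(1): random-coefficient autoregression with mutually independent noise sequences
y_t = (\phi + b_t)\, y_{t-1} + e_t, \qquad b_t \sim (0, \sigma_b^2), \quad e_t \sim (0, \sigma_e^2).
% Its one-step-ahead conditional variance depends on the available information,
\operatorname{Var}(y_t \mid y_{t-1}, y_{t-2}, \ldots) = \sigma_e^2 + \sigma_b^2\, y_{t-1}^2,
% which has exactly the ARCH(1) form \alpha_0 + \alpha_1 y_{t-1}^2 with
% \alpha_0 = \sigma_e^2 and \alpha_1 = \sigma_b^2.
```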

249 citations


Book
01 Jan 1987
TL;DR: This book discusses the role of Nonparametric Models in Continuous System Identification, and methods for Obtaining Transfer Functions from nonparametric models using the Frequency-Domain approach.
Abstract: Introduction. Continuous-Time Models of Dynamical Systems. Nonparametric Models. Parametric Models. Stochastic Models of Linear Time-Invariant Systems. Models of Distributed Parameter Systems (DPS). Signals and their Representations. Functions in the Ordinary Sense. Distribution or Generalized Functions. Identification of Linear Time-Invariant (LTIV) Systems via Nonparametric Models. The Role of Nonparametric Models in Continuous System Identification. Test Signals for System Identification. Identification of Linear Time-Invariant Systems - Time-Domain Approach. Frequency-Domain Approach. Methods for Obtaining Transfer Functions from Nonparametric Models. Numerical Transformations between Time- and Frequency-Domains. Parameter Estimation for Continuous-Time Models. The Primary Stage. The Secondary Stage: Parameter Estimation. Identification of Linear Systems Using Adaptive Models. Gradient Methods. Frequency-Domain. Stability Theory. Linear Filters. Identification of Multi-Input Multi-Output (MIMO) Systems, Distributed Parameter Systems (DPS) and Systems with Unknown Delays and Nonlinear Elements. MIMO Systems. Time-Varying Parameter Systems (TVPS). Lumped Systems with Unknown Time-Delays. Identification of Systems with Unknown Nonlinear Elements. Identification of Distributed Parameter Systems. Determination of System Structure. Index.

239 citations


Journal ArticleDOI
TL;DR: A general finding is that much of the information for forecasting is in the immediate past few observations or a few summary statistics based on past data.
Abstract: The problem of predicting a future measurement on an individual given the past measurements is discussed under nonparametric and parametric growth models. The efficiencies of different methods of prediction are assessed by cross-validation or leave-one-out technique in each of three data sets and the results are compared. Under nonparametric models, direct and inverse regression methods of prediction are described and their relative advantages and disadvantages are discussed. Under parametric models polynomial and factor analytic type growth curves are considered. Bayesian and empirical Bayesian methods are used to deal with unknown parameters. A general finding is that much of the information for forecasting is contained in the immediate past few observations or a few summary statistics based on past data. A number of data reduction methods are suggested and analyses based on them are described. The usefulness of the leave-one-out technique in model selection is demonstrated. A new method of calibration is introduced to improve prediction.
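As a concrete illustration of the leave-one-out assessment described above, the sketch below compares two simple predictors of a final measurement given the past ones; the synthetic data and both predictors are hypothetical stand-ins, not the growth-curve models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical growth data: 30 individuals measured at 4 time points
X = np.cumsum(rng.normal(1.0, 0.3, size=(30, 4)), axis=1)
past, future = X[:, :-1], X[:, -1]

def loo_mse(predict):
    """Leave-one-out assessment: drop one individual, fit on the rest, predict it."""
    errs = []
    for i in range(len(X)):
        train = np.delete(np.arange(len(X)), i)
        errs.append((predict(past[train], future[train], past[i]) - future[i]) ** 2)
    return np.mean(errs)

# Predictor 1: regression of the future value on the most recent observation only
def last_obs_regression(p_tr, f_tr, p_new):
    b, a = np.polyfit(p_tr[:, -1], f_tr, 1)
    return a + b * p_new[-1]

# Predictor 2: regression on all past observations
def full_regression(p_tr, f_tr, p_new):
    coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(p_tr)), p_tr], f_tr, rcond=None)
    return coef @ np.r_[1.0, p_new]

print(loo_mse(last_obs_regression), loo_mse(full_regression))
```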

170 citations


Book
01 Jan 1987

168 citations


Journal ArticleDOI
TL;DR: In this article, a combination of the parametric and nonparametric estimates, weighted by an unknown π (0 ≤ π ≤ 1), is used to fit a density function; π is estimated from the data, and its estimate is then used to form the proposed density estimate.
Abstract: One method of fitting a parametric density function f(x, θ) is first to estimate θ by maximum likelihood, say by θ̂, and then to estimate f(x, θ) by f(x, θ̂). On the other hand, when the parametric model does not hold, the true density f(x) may be estimated nonparametrically, as in the case of a kernel estimate f̂(x). The key idea proposed is to fit a combination of the parametric and nonparametric estimates, namely π f(x, θ̂) + (1 − π) f̂(x), where π (0 ≤ π ≤ 1) is unknown. The parameter π is estimated from the data, and its estimate π̂ is then used in the combination as the proposed density estimate. The main point is that we expect π̂ to be close to unity when the parametric model prevails, and close to zero when it does not hold. We show that, under certain conditions, the combined estimate converges to the parametric density when the parametric model holds and approaches the true f(x) when the parametric model does not hold. The procedure was applied to a number of actual data sets. In each case the maximum likelihood estimate was readily obtained and the semiparametric density estimate ...
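A rough sketch of the combined estimate under stated assumptions: a normal parametric family, a rule-of-thumb kernel bandwidth, leave-one-out kernel evaluation, and a simple grid search standing in for the paper's estimator of π.

```python
import numpy as np
from scipy import stats

# Hypothetical data; in practice the true mechanism is unknown
x = np.random.default_rng(1).normal(loc=2.0, scale=1.0, size=200)

# Parametric piece: normal density with maximum likelihood estimates of its parameters
mu, sigma = x.mean(), x.std(ddof=0)
f_par = stats.norm.pdf(x, mu, sigma)

# Nonparametric piece: leave-one-out Gaussian kernel estimate at each observation
h = 1.06 * sigma * len(x) ** (-1 / 5)            # rule-of-thumb bandwidth
K = stats.norm.pdf((x[:, None] - x[None, :]) / h)
np.fill_diagonal(K, 0.0)
f_ker = K.sum(axis=1) / ((len(x) - 1) * h)

# Choose the mixing weight pi by maximizing the combined log-likelihood on a grid;
# this grid search is an illustrative stand-in for the paper's estimator of pi.
grid = np.linspace(0.0, 1.0, 101)
loglik = [np.sum(np.log(p * f_par + (1 - p) * f_ker)) for p in grid]
pi_hat = grid[int(np.argmax(loglik))]
print("estimated mixing weight:", pi_hat)
```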

98 citations


Journal ArticleDOI
TL;DR: In this article, the authors used two different parametric models for the joint distribution of crest front steepness and wave height to estimate encounter probabilities of steep and high waves in deep water for given sea states.
Abstract: Estimates for encounter probabilities of occurrence of steep and high waves in deep water for given sea states by using two different parametric models for the joint distribution of crest front steepness and wave height are presented. The parametric models are fitted to the same data set, with data from a zero‐downcross analysis of wave data obtained from measurements at sea on the Norwegian continental shelf. The probabilities of occurrence of waves with different crest front steepness are estimated with each parametric model for a family of JONSWAP spectra. An example is given where the probabilities of occurrence of “extreme waves,” which are considered to be critical to capsizing of smaller vessels, are estimated with one parametric model for sea states described by significant wave height and mean zero‐crossing period.

41 citations


Journal ArticleDOI
Rolf Johansson
TL;DR: This paper presents a parametrization scheme for adaptive control of linear, multivariable systems with strictly proper transfer functions using the internal structure matrix BS* and appropriate polynomial degrees.
Abstract: This paper presents a parametrization method for direct adaptive control of linear multivariable systems with strictly proper discrete-time or continuous-time transfer functions. The necessary a priori information is shown to be a diagonal matrix with the noninvertible zeros of the Smith form and appropriate polynomial degrees.

36 citations


Journal ArticleDOI
TL;DR: In this paper, an unbiased nonparametric graphical approach for investigating the generalized covariance function is developed, which is illustrated on a set of coal ash data and soil pH data.
Abstract: Fitting trend and error covariance structure iteratively leads to bias in the estimated error variogram. Use of generalized increments overcomes this bias. Certain generalized increments yield difference equations in the variogram which permit graphical checking of the model. These equations extend to the case where errors are intrinsic random functions of order k, k=1, 2, ..., and an unbiased nonparametric graphical approach for investigating the generalized covariance function is developed. Hence, parametric models for the generalized covariance produced by BLUEPACK-3D or other methods may be assessed. Methods are illustrated on a set of coal ash data and a set of soil pH data.

28 citations


Journal ArticleDOI
TL;DR: The authors proposed an alternative estimation method, the corrected maximum likelihood estimate, which is consistent for the slope vector in the outcome equation up to a multiplicative scalar, even though the parametric model on which the estimate is based might be misspecified.

Journal ArticleDOI
TL;DR: In this paper, a trimmed logit method that combines the advantages of a parametric model with those of trimming in dealing with heavy-tailed distributions is presented; it has been found to work well in our experience with over 200 data sets.
Abstract: Trimmed nonparametric procedures such as the trimmed Spearman-Karber method have been proposed in the literature for overcoming the deficiencies of the probit and logit models in the analysis of quantal bioassay data. However, there are situations where the median effective dose (ED50) is not calculable with the trimmed Spearman-Karber method, but is estimable with a parametric model. Also, it is helpful to have a parametric model for estimating percentiles of the dose-response curve such as the ED10 and ED25. A trimmed logit method that combines the advantages of a parametric model with that of trimming in dealing with heavy-tailed distributions is presented here. These advantages are substantiated with examples of actual bioassay data. Simulation results are presented to support the validity of the trimmed logit method, which has been found to work well in our experience with over 200 data sets. A computer program for computing the ED50 and associated 95% asymptotic confidence interval, based on the trimmed logit method, can be obtained from the authors.
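For reference, with an untrimmed logit model on the log-dose scale the effective doses have simple closed forms; the trimming adjustment that defines the proposed method is not shown here.

```latex
% Logit dose-response model on the log-dose scale (standard form, not the trimmed fit)
\operatorname{logit} P(d) = \alpha + \beta \log d,
\qquad
\mathrm{ED}_{100p} = \exp\!\left\{ \frac{\operatorname{logit}(p) - \alpha}{\beta} \right\},
\qquad
\mathrm{ED}_{50} = \exp(-\alpha / \beta).
```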

Book ChapterDOI
01 Jan 1987
TL;DR: The parametric model is extremely simple: an innate set of grammars presents a child with highly limited choices: two or three possibilities exist.
Abstract: The parametric model is extremely simple. An innate set of grammars presents a child with highly limited choices: two or three possibilities exist. A piece of input data shows unambiguously which is correct. How does the model connect to the reality of acquisition?


Journal ArticleDOI
TL;DR: Horowitz and Newman as discussed by the authors used ordinary least squares to estimate the parameters of a linear regression model without requiring that the distribution of the disturbances be in a finite-parameter family.
Abstract: Semiparametric methods provide estimates of finite parameter vectors without requiring that the complete data generation process be assumed to lie in a finite-dimensional family. By avoiding bias from incorrect specification, such estimators gain robustness, although usually at the cost of decreased precision. The most familiar semiparametric method in econometrics is ordinary least squares, which estimates the parameters of a linear regression model without requiring that the distribution of the disturbances be in a finite-parameter family. The recent literature in econometric theory has extended semiparametric methods to a variety of non-linear models, including models appropriate for analysis of censored duration data. Horowitz and Newman make perhaps the first empirical application of these methods, to data on employment duration. Their analysis provides insights into the practical problems of implementing these methods, and limited information on performance. Their data set, containing 226 male controls...

Journal ArticleDOI
TL;DR: In this paper, a non-linear two-parameter model introduced by R. Bouc is used to describe the hysteretic loops obtained from experiments on stranded cables; the procedure for identification of the model parameters is given, and the physical meaning of the parameters is discussed.

Journal ArticleDOI
TL;DR: In this article, a parametric model for common cause failure quantification in reliability analysis is described, which is formally similar to the Multiple Greek Letter model, but its parameters are based on event frequencies, not on component failure frequencies.

Journal ArticleDOI
TL;DR: In this article, the authors present the pragmatic base upon which parametric models are developed and case study results comparing parametric model estimates with actual industrial production costs.
Abstract: Summary As alternative design comparisons have become more complex, cost comparisons take on significant meaning in the evaluation. The typical and time proven “grass roots” methods of cost estimating developed through industrial engineering and cost accounting disciplines can be time consuming and non-systems oriented. An alternative means to unit production cost estimating at the conceptual and early design stages is the “top down” or parametric model. The computerized parametric cost estimating model is used successfully by most international space industry contractors and government agencies, including the DOD and NASA in the United States and the MOD in the United Kingdom. This paper presents the pragmatic base upon which parametric models are developed and case study results comparing parametric model estimates with actual industrial production costs.
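A minimal sketch of the "top down" idea, fitting a hypothetical power-law cost estimating relationship by log-log least squares; the variables and figures are invented for illustration and are not from the paper's case studies.

```python
import numpy as np

# Hypothetical historical programs: (weight in kg, actual unit production cost)
weight = np.array([120.0, 340.0, 560.0, 910.0, 1500.0])
cost = np.array([2.1, 4.8, 7.0, 10.5, 15.2])        # e.g. in millions

# Fit cost = a * weight**b by ordinary least squares in log-log space
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)

# Parametric estimate for a new concept at the early design stage
print("estimated unit cost:", a * 700.0 ** b)
```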

Posted Content
01 Jan 1987
TL;DR: The generalized beta of the second kind (GB2) as discussed by the authors is a principal set of distributions for modeling insurance loss processes, which can be generated as mixtures of well-known distributions, thus facilitating theoretical modeling of claims from heterogeneous populations.
Abstract: This article proposes the family of probability distributions known as the generalized beta of the second kind (GB2) as a principal set of distributions for modeling insurance loss processes. The GB2 family encompasses many commonly used distributions such as the log-normal, gamma and Weibull. It also includes distributions such as the Burr and generalized gamma which have significant potential for improving the distributional fit in many applications involving heavy-tailed distributions. Most members of the GB2 family can be generated as mixtures of well-known distributions, thus facilitating theoretical modeling of claims from heterogeneous populations. An example is presented which involves fitting the log-gamma and log-Burr distributions to a sample of fire claims. The results suggest that seemingly slight differences in modelling the tails of severity distributions can lead to substantial differences in reinsurance premiums and quantiles of simulated total claims distributions.
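For reference, the GB2 density is usually written in a four-parameter form such as the following (notation varies between authors):

```latex
% Generalized beta of the second kind, for y > 0, with shape parameters a, p, q,
% scale parameter b, and B(p,q) the beta function.
f(y) = \frac{a\, y^{ap-1}}{b^{ap}\, B(p,q)\, \bigl[\,1 + (y/b)^{a}\bigr]^{p+q}} .
% Special and limiting cases include the Burr (p = 1) and the generalized gamma
% (as q grows large with b rescaled), and hence the gamma, Weibull and log-normal.
```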

Journal ArticleDOI
TL;DR: It is shown that Renyi entropy consistently provides a better approximation for busy period length and waiting time distributions as compared to those obtained using Shannon or parametric models.

Allan G. Piersol
01 Oct 1987
TL;DR: In this paper, the general methodology for the analysis of arbitrary nonstationary random data is reviewed and a specific parametric model, called the product model, that has applications to space vehicle launch vibration data analysis is discussed.
Abstract: The general methodology for the analysis of arbitrary nonstationary random data is reviewed. A specific parametric model, called the product model, that has applications to space vehicle launch vibration data analysis is discussed. Illustrations are given using the nonstationary launch vibration data measured on the Space Shuttle orbiter vehicle.
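For context, the product model referred to above is commonly written in the following textbook form (not quoted from the paper):

```latex
% Product model for nonstationary random data: a deterministic, slowly varying
% envelope a(t) modulating a zero-mean stationary process u(t),
x(t) = a(t)\, u(t),
% so the time-varying autospectrum factorizes as
G_{xx}(f, t) = a^2(t)\, G_{uu}(f).
```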

Proceedings Article
04 May 1987
TL;DR: This work develops a method to determine parametric models describing saturation over a large range of input amplitudes; the technique is illustrated on an inverting amplifier which causes slew-induced distortion.
Abstract: Recently, Volterra theory for handling nonlinear phenomena has been freed of its inability to treat, satisfactorily, saturating systems by the introduction of rational Volterra theory. This, combined with the present availability of parameter estimation techniques for the identification of systems set up around traditional Volterra theory, makes it now possible to develop a method to determine parametric models, describing saturations for a large range of input amplitude. The technique is illustrated on an inverting amplifier which causes slew-induced distortion.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the structure of the crossover design is such that use of the parametric theory linear model is required if a single, consistent model is desired.
Abstract: In certain areas of medical research, the two-period crossover design is a frequent choice for comparing treatments A and B in a randomized clinical trial. Earlier work by Grizzle and by Brown was based upon a parametric theory linear model. Recently, the present authors employed D. R. Cox's additive randomization models and, for the case of zero residual effect, found a discrepancy between it and the parametric model with respect to the precision of period effects. In the present note, this divergence is accounted for by allowing for the possibility of non-additivity through the use of a completely general randomization model. It is concluded that the structure of the crossover design is such that use of the parametric theory linear model is required if a single, consistent model is desired.
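The parametric theory linear model referred to (in the tradition of Grizzle) is typically written as below; this is a common textbook presentation, not a quotation from the paper.

```latex
% Two-period crossover: subject k in sequence group i, observed in period j
y_{ijk} = \mu + \pi_j + \tau_{d(i,j)} + \lambda_{d(i,j-1)} + s_{ik} + e_{ijk},
% with period effects \pi_j, direct treatment effects \tau (treatment A or B assigned
% by d(i,j)), first-order carryover (residual) effects \lambda (taken as 0 in period 1),
% random subject effects s_{ik}, and within-subject errors e_{ijk}.
```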

DissertationDOI
01 Jan 1987
TL;DR: In this paper, the authors investigated the synergism of dynamic modeling and process control, as pertaining to the fields of low-order controller design, model reduction, and model identification, and showed that parsimonious, more effective controllers are possible if control considerations are incorporated in the modeling stage.
Abstract: Modeling and control system design have traditionally been viewed as distinct, independent problems. Not all model characteristics, however, are relevant to the control system design problem. One can expect, then, that parsimonious, more effective controllers are possible if control considerations are incorporated in the modeling stage. The synergism of dynamic modeling and process control, as pertaining to the fields of low-order controller design, model reduction, and model identification, is investigated in this thesis. The guiding theoretical framework is the robust control paradigm using the Structured Singular Value, which addresses controller design in the presence of model uncertainty. The main contribution of this thesis is the development of a control-relevant model reduction methodology. The effectiveness of reduction is increased by incorporating the closed-loop performance/robustness specifications, plant uncertainties, and setpoint/disturbance characteristics explicitly as weights in the reduction procedure. The efficient computation of the control-relevant reduction problem is indicated and illustrated with examples taken from the control of a methanation reactor and a binary distillation column. A low-order controller design methodology for single-input, single-output plants is also presented. The basis for this methodology is the combination of the control-relevant reduction problem with the Internal Model Control (IMC) design procedure. The relationship between low-order IMC controllers and classical feedback compensators is examined. It is shown that for many models common to the process industries, the controllers obtained from the low-order compensator design technique are of the PID type. Finally, a model identification methodology is established using spectral time series analysis to obtain plant transfer function and uncertainty estimates directly from experiments. The control-relevant model reduction procedure can then be used to fit the "full-order" frequency response to a "reduced-order" parametric model. Model validation for control purposes is achieved by ensuring that the robustness condition is satisfied.
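As a standard illustration of the IMC-to-classical-feedback connection mentioned in the abstract (a textbook first-order example, not a result quoted from the thesis):

```latex
% First-order model \tilde G(s) = K/(\tau s + 1) with IMC controller
q(s) = \frac{\tau s + 1}{K(\lambda s + 1)}
\quad\Longrightarrow\quad
c(s) = \frac{q(s)}{1 - \tilde G(s)\, q(s)} = \frac{\tau s + 1}{K \lambda s},
% i.e. a PI controller with gain K_c = \tau/(K\lambda) and integral time \tau_I = \tau;
% the IMC filter constant \lambda tunes the speed/robustness trade-off.
```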

Proceedings ArticleDOI
01 Apr 1987
TL;DR: Parametric models and robustified maximum likelihood are used to estimate two-dimensional power spectra from imperfectly observed lattice data, and the computational complexity is reduced by approximating the ML equations on a toroidal lattice without unduly destroying the asymptotic properties of the estimates.
Abstract: In this paper we investigate the use of parametric models and robustified maximum likelihood to estimate two-dimensional power spectra from imperfectly observed lattice data. The maximum likelihood (ML) estimates for the signal plus noise model are consistent and asymptotically efficient for noncausal autoregressive (NCAR) models, but the solution requires the use of computationally expensive non-linear optimization, such as Newton-Raphson. By approximating the ML equations through the use of a toroidal lattice the computational complexity is reduced without unduly destroying the asymptotic properties of the estimates. When outliers occur in the data, ML might not perform well. In this case we make no strict assumption about the distribution of the observations but assume only that the data are nominally Gaussian but with heavier tails. Then we use a robust procedure to estimate the parameters for the model.

Book ChapterDOI
01 Jan 1987
TL;DR: In this article, the design of deadbeat self-tuning controllers is discussed, and two types of models, a nonparametric (impulse response) and a parametric (ARMA) one, are applied.
Abstract: The design of deadbeat self-tuning controllers is discussed. Two types of models, a nonparametric (impulse response) and a parametric (ARMA) one, are applied. The nonparametric model is estimated by the correlation method while the estimates of the parametric model are obtained using a modified recursive instrumental variables method. Both approaches are poorly developed because of the difficulties in obtaining consistent estimates in closed loop. To remove these difficulties, the application of an additional measurable signal with white-noise characteristics at the process input is proposed. Two approaches for determining the coefficients of the controller, depending on the type of the model (nonparametric or parametric), are proposed. The algorithms worked out in this paper take into account restrictions on the manipulated variable and provide a finite settling time. Experimental investigations of the proposed algorithms are carried out on a digitally simulated closed-loop system with different models of the process. The experimental results of the deadbeat self-tuning controllers applied to stationary as well as nonstationary processes show the good properties of the considered methods and computational procedures.
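A brief sketch of the correlation-method idea for the nonparametric (impulse response) model with a white-noise signal at the process input; the process, noise level, and lag range below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 5000, 20
u = rng.normal(0.0, 1.0, N)                  # white-noise test signal at the process input
h_true = 0.8 ** np.arange(L)                 # hypothetical process impulse response
y = np.convolve(u, h_true)[:N] + 0.1 * rng.normal(size=N)

# Correlation method: with white-noise input, the cross-correlation satisfies
# R_uy(k) = sigma_u^2 * h(k), so dividing by the input variance recovers h.
sigma_u2 = np.var(u)
h_hat = np.array([np.mean(u[:N - k] * y[k:]) for k in range(L)]) / sigma_u2
```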

01 Apr 1987
TL;DR: In this article, the problem of estimating the parameters of noisy multicomponent signals using a parametric modeling technique is considered; the deconvolved data are modeled using a special nonstationary autoregressive moving average (ARMA) process.
Abstract: The problem of estimating the parameters of noisy multicomponent signals using a parametric modeling technique is considered in this paper. The multicomponent signal of interest is formed by a superposition of basic functions having the same location in time but different widths and amplitudes. Based on the modified Gardner transformation, some samples of deconvolved data are derived from the multicomponent signals. The deconvolved data are then modeled using a special nonstationary autoregressive moving average (ARMA) process in which the parameters of the ARMA model are obtained by a linear least-squares procedure. The least-squares procedure is based on the singular value decomposition (SVD) to overcome the limitations of the transient error method (TEM) of analysis, which uses Cholesky decomposition to determine its AR coefficients. The moving average (MA) coefficients correspond to the initial residual error sequences so as to account for the nonstationary noise in the deconvolved data. This new method of analysis, termed the SVD-based transient error method, produces high-resolution estimates of the exponents of multicomponent signals at both low and high signal-to-noise ratios (SNR).
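A minimal sketch of a least-squares solve via the SVD, the numerical device the abstract credits over Cholesky-based normal equations; the function below is generic and is not the paper's SVD-based transient error method.

```python
import numpy as np

def svd_least_squares(A, b, rel_tol=1e-10):
    """Least-squares solution of A x = b via the SVD, truncating small singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# Hypothetical overdetermined system for illustration
A = np.random.default_rng(3).normal(size=(40, 6))
x_true = np.arange(1.0, 7.0)
b = A @ x_true + 0.01 * np.random.default_rng(4).normal(size=40)
print(svd_least_squares(A, b))
```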

Journal ArticleDOI
TL;DR: The parallel structure is identified as a promising alternative for large order time series modelling and some limitations for commonly used model structures such as direct form and lattice are discussed.

Journal ArticleDOI
TL;DR: In this article, two discrete parametric models, the Nash and Muskingum, were tested as response functions for ground-water hydrographs in karst aquifers.
Abstract: Similarity of ground-water hydrographs in karst aquifers with surface runoff hydrographs indicates that some models used in unit hydrograph analysis could adequately model the ground-water system. Two such models are used to model ground-water levels in a karst aquifer. The aquifer was treated as a linear time-invariant system with a response function representing the rise in ground-water levels in response to rainfall. Two discrete parametric models, the Nash and Muskingum, were tested as response functions. Parameters for both models were identified from one year of data by applying the theory of moments to sets of input and output data. The models were verified using rainfall and water-level data for a second year. The Muskingum method did not adequately model the system. Results of the Nash model were good.
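For reference, the Nash model response function and its method-of-moments relations take the familiar form below (standard unit-hydrograph results, not equations quoted from the paper):

```latex
% Nash cascade of n identical linear reservoirs with storage constant k:
u(t) = \frac{1}{k\,\Gamma(n)} \left(\frac{t}{k}\right)^{n-1} e^{-t/k},
% Method of moments: the first moment and second central moment of the response
% function are nk and nk^2, so n and k follow from moments of the input and output data.
M_1 = nk, \qquad M_2 = nk^2 .
```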

Journal ArticleDOI
TL;DR: In this article, it was shown that, for a random sample from a discrete distribution supported on a finite set of points and depending on a vector of unknown parameters, any estimand of those parameters having an unbounded range is non-estimable.
Abstract: Consider a random sample from a discrete distribution supported on a finite set of points and depending on a vector of unknown parameters. It is shown that any estimand of these parameters having an unbounded range is nonestimable.