
Showing papers on "Parametric statistics published in 1994"


Journal ArticleDOI
TL;DR: In this paper, the authors present a general approach that accommodates most forms of experimental layout and ensuing analysis (designed experiments with fixed effects for factors, covariates and interaction of factors).
Abstract: Statistical parametric maps are spatially extended statistical processes that are used to test hypotheses about regionally specific effects in neuroimaging data. The most established sorts of statistical parametric maps (e.g., Friston et al. (1991): J Cereb Blood Flow Metab 11:690-699; Worsley et al. (1992): J Cereb Blood Flow Metab 12:900-918) are based on linear models, for example ANCOVA, correlation coefficients and t tests. In the sense that these examples are all special cases of the general linear model, it should be possible to implement them (and many others) within a unified framework. We present here a general approach that accommodates most forms of experimental layout and ensuing analysis (designed experiments with fixed effects for factors, covariates and interaction of factors). This approach brings together two well established bodies of theory (the general linear model and the theory of Gaussian fields) to provide a complete and simple framework for the analysis of imaging data. The importance of this framework is twofold: (i) conceptual and mathematical simplicity, in that the same small number of operational equations is used irrespective of the complexity of the experiment or nature of the statistical model, and (ii) the generality of the framework provides for great latitude in experimental design and analysis.
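The mass-univariate recipe the abstract describes (the same small set of operational equations for any design) can be sketched for the simplest case. A hedged illustration in which the design matrix, contrast, and synthetic "voxel" data are all invented for the example:

```python
import numpy as np

# Sketch: per-voxel general linear model and t-contrast, in the spirit of
# statistical parametric mapping. All data and design choices are illustrative.
rng = np.random.default_rng(0)
n_scans, n_voxels = 40, 3
condition = np.tile([0.0, 1.0], n_scans // 2)        # boxcar on/off regressor
X = np.column_stack([condition, np.ones(n_scans)])   # design matrix
beta_true = np.array([[2.0, 0.0, 0.5],               # condition effect per voxel
                      [1.0, 1.0, 1.0]])              # baseline per voxel
Y = X @ beta_true + 0.5 * rng.standard_normal((n_scans, n_voxels))

beta_hat = np.linalg.pinv(X) @ Y                     # least-squares estimates
resid = Y - X @ beta_hat
dof = n_scans - np.linalg.matrix_rank(X)
sigma2 = (resid ** 2).sum(axis=0) / dof              # residual variance per voxel
c = np.array([1.0, 0.0])                             # contrast: condition effect
var_c = c @ np.linalg.inv(X.T @ X) @ c
t_map = (c @ beta_hat) / np.sqrt(sigma2 * var_c)     # one t value per voxel
```

The same equations apply whether the columns of X encode factors, covariates, or interactions; only the design matrix and contrast change.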

9,614 citations


01 Jan 1994

1,933 citations


Journal ArticleDOI
TL;DR: In this article, empirical likelihood ratio statistics for various parameters of an unknown distribution have been used to obtain tests or confidence intervals in a way that is completely analogous to that used with parametric likelihoods.
Abstract: For some time, so-called empirical likelihoods have been used heuristically for purposes of nonparametric estimation. Owen showed that empirical likelihood ratio statistics for various parameters $\theta(F)$ of an unknown distribution $F$ have limiting chi-square distributions and may be used to obtain tests or confidence intervals in a way that is completely analogous to that used with parametric likelihoods. Our objective in this paper is twofold: first, to link estimating functions or equations and empirical likelihood; second, to develop methods of combining information about parameters. We do this by assuming that information about $F$ and $\theta$ is available in the form of unbiased estimating functions. Empirical likelihoods for parameters are developed and shown to have properties similar to those for parametric likelihood. Efficiency results for estimates of both $\theta$ and $F$ are obtained. The methods are illustrated on several problems, and areas for future investigation are noted.
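A minimal sketch of the idea for the simplest unbiased estimating function, g(x, θ) = x − θ (the mean). The Newton iteration for the Lagrange multiplier and all data are illustrative, not the paper's implementation:

```python
import numpy as np

def el_log_ratio(x, theta, iters=50):
    """-2 log empirical likelihood ratio for H0: E[X] = theta."""
    g = x - theta                              # estimating function g(x, theta)
    lam = 0.0
    for _ in range(iters):                     # Newton steps for the multiplier
        denom = 1.0 + lam * g
        score = np.sum(g / denom)
        hess = -np.sum(g ** 2 / denom ** 2)
        lam -= score / hess
    return 2.0 * np.sum(np.log1p(lam * g))     # ~ chi-square(1) under H0

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=200)
stat_h0 = el_log_ratio(x, 5.0)    # hypothesized mean equals the truth
stat_h1 = el_log_ratio(x, 5.5)    # misspecified mean: statistic inflates
```

Comparing the statistic to chi-square quantiles gives tests and confidence regions exactly as one would with a parametric likelihood ratio.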

1,692 citations


Journal ArticleDOI
TL;DR: In this article, a simulation-based method of inference for parametric measurement error models in which the measurement error variance is known or at least well estimated is described, and the method entails adding add...
Abstract: We describe a simulation-based method of inference for parametric measurement error models in which the measurement error variance is known or at least well estimated. The method entails adding add...
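The truncated abstract describes a simulation-extrapolation (SIMEX-style) procedure: refit a naive estimator on data with added simulated measurement error at several levels lambda, then extrapolate the fitted estimates back to lambda = -1 (no error). A hedged sketch with a quadratic extrapolant and synthetic data, all choices illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma_u2 = 2000, 0.5                       # known measurement error variance
x = rng.standard_normal(n)                    # true covariate (unobserved)
w = x + np.sqrt(sigma_u2) * rng.standard_normal(n)  # error-prone measurement
y = 1.0 + 2.0 * x + 0.2 * rng.standard_normal(n)

def naive_slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w, ddof=1)

attenuated = naive_slope(w, y)                # biased toward zero
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    reps = [naive_slope(w + np.sqrt(lam * sigma_u2) * rng.standard_normal(n), y)
            for _ in range(20)]               # average over simulated pseudo-errors
    slopes.append(np.mean(reps))
coef = np.polyfit(lambdas, slopes, 2)         # slope as a function of lambda
simex_slope = float(np.polyval(coef, -1.0))   # extrapolate to "no error"
```

The extrapolated slope moves substantially back toward the true value; the quality of the correction depends on the extrapolant chosen.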

724 citations


Journal ArticleDOI
TL;DR: The development of a new course in the control curriculum dealing with the control of systems subject to parametric uncertainty is presented; the course is rich in theoretical content, easy to motivate from a practical standpoint, and requires just the right level of mathematics to be taught as a fundamental discipline to engineers and scientists.

593 citations


01 Jan 1994
TL;DR: In this paper, the authors study problems of semiparametric statistical inference connected with long-memory covariance stationary time series, having spectrum which varies regularly at the origin: there is an unknown self-similarity parameter, but elsewhere the spectrum satisfies no parametric or smoothness conditions; it need not be in $L_p$ for any $p > 1$, and in some circumstances the slowly varying factor can be of unknown form.
Abstract: We study problems of semiparametric statistical inference connected with long-memory covariance stationary time series, having spectrum which varies regularly at the origin: There is an unknown self-similarity parameter, but elsewhere the spectrum satisfies no parametric or smoothness conditions, it need not be in $L_p$, for any $p > 1$, and in some circumstances the slowly varying factor can be of unknown form. The basic statistic of interest is the discretely averaged periodogram, based on a degenerating band of frequencies around the origin. We establish some consistency properties under mild conditions. These are applied to show consistency of new estimates of the self-similarity parameter and scale factor. We also indicate applications of our results to standard errors of least squares estimates of polynomial regression with long-memory errors, to generalized least squares estimates of this model and to estimates of a "cointegrating" relationship between long-memory time series.
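The discretely averaged periodogram can be illustrated as follows; the fractional-differencing simulation, bandwidth m, and ratio q are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 8192, 0.3                              # memory parameter d, so H = 0.8
psi = np.ones(n)
for k in range(1, n):                         # MA(inf) weights of (1 - B)^(-d)
    psi[k] = psi[k - 1] * (k - 1 + d) / k
x = np.convolve(rng.standard_normal(n), psi)[:n]   # long-memory series

freqs = 2.0 * np.pi * np.arange(1, n // 2) / n          # Fourier frequencies
I = np.abs(np.fft.fft(x)[1:n // 2]) ** 2 / (2.0 * np.pi * n)  # periodogram

def F_hat(lam):
    """Discretely averaged periodogram over frequencies up to lam."""
    return 2.0 * np.pi / n * I[freqs <= lam].sum()

m, q = 200, 0.5                               # band of m frequencies, ratio q
lam_m = 2.0 * np.pi * m / n
# Near the origin F(lambda) ~ C * lambda^(2 - 2H), so a ratio identifies H
H_hat = 1.0 - np.log(F_hat(q * lam_m) / F_hat(lam_m)) / (2.0 * np.log(q))
```

The band of m frequencies degenerates relative to the sample size (m/n → 0), which is what lets the estimate avoid smoothness assumptions away from the origin.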

431 citations


Journal ArticleDOI
TL;DR: In this article, the theory of the two-photon state generated by type-II optical parametric down-conversion is studied with emphasis on the space-time and polarization entanglement of the photons.
Abstract: The theory of the two-photon state generated by type-II optical parametric down-conversion is studied with emphasis on the space-time and polarization entanglement of the photons. Several experiments are reviewed that demonstrate various aspects of the quantum nature of this state. The theory of a different type of two-photon interferometer is presented.

408 citations


Journal ArticleDOI
TL;DR: A robust speaker-identification system is presented that was able to deal with various forms of anomalies that are localized in time, such as spurious noise events and crosstalk.
Abstract: We describe current approaches to text-independent speaker identification based on probabilistic modeling techniques. The probabilistic approaches have largely supplanted methods based on comparisons of long-term feature averages. The probabilistic approaches have an important and basic dichotomy into nonparametric and parametric probability models. Nonparametric models have the advantage of being potentially more accurate (though possibly more fragile), while parametric models offer computational efficiencies and the ability to characterize the effects of the environment by their effects on the parameters. A robust speaker-identification system is presented that was able to deal with various forms of anomalies that are localized in time, such as spurious noise events and crosstalk. It is based on a segmental approach in which normalized segment scores formed the basic input for a variety of robust procedures. Experimental results are presented, illustrating the advantages and disadvantages of the different procedures. We show the role that cross-validation can play in determining how to weight the different sources of information when combining them into a single score. Finally we explore a Bayesian approach to measuring confidence in the decisions made, which enabled us to reject the consideration of certain tests in order to achieve an improved, predicted performance level on the tests that were retained.

366 citations


Journal ArticleDOI
TL;DR: A robust BP learning algorithm is derived that is resistant to the noise effects and is capable of rejecting gross errors during the approximation process, and its rate of convergence is improved since the influence of incorrect samples is gracefully suppressed.
Abstract: The backpropagation (BP) algorithm allows multilayer feedforward neural networks to learn input-output mappings from training samples. Due to the nonlinear modeling power of such networks, the learned mapping may interpolate all the training points. When erroneous training data are employed, the learned mapping can oscillate badly between data points. In this paper we derive a robust BP learning algorithm that is resistant to the noise effects and is capable of rejecting gross errors during the approximation process. The spirit of this algorithm comes from the pioneering work in robust statistics by Huber and Hampel. Our work is different from that of M-estimators in two aspects: 1) the shape of the objective function changes with the iteration time; and 2) the parametric form of the functional approximator is a nonlinear cascade of affine transformations. In contrast to the conventional BP algorithm, three advantages of the robust BP algorithm are: 1) it approximates an underlying mapping rather than interpolating training samples; 2) it is robust against gross errors; and 3) its rate of convergence is improved since the influence of incorrect samples is gracefully suppressed.
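The core idea, bounding the error signal fed back through the network with a Huber-type influence function, can be sketched on a toy problem. The architecture, constants, and fixed (rather than iteration-varying) clipping point are all simplifications of the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.linspace(-1, 1, 80).reshape(-1, 1)
y = np.sin(np.pi * X)                      # underlying mapping
y_noisy = y.copy()
y_noisy[::10] += 5.0                       # gross errors in the training data

def huber_grad(r, c=0.5):
    """Influence function: linear for small residuals, clipped beyond c."""
    return np.clip(r, -c, c)

W1 = rng.standard_normal((1, 16)) * 0.5; b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5; b2 = np.zeros(1)
lr = 0.1
for _ in range(8000):
    h = np.tanh(X @ W1 + b1)               # forward pass
    out = h @ W2 + b2
    g = huber_grad(out - y_noisy) / len(X) # bounded error signal
    gh = (g @ W2.T) * (1 - h ** 2)         # backprop through tanh
    W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

pred = np.tanh(X @ W1 + b1) @ W2 + b2
clean_mse = float(np.mean((pred - y) ** 2))   # error against the true mapping
```

Because the outliers' residuals are clipped, they contribute only a bounded pull on the weights and the fit stays close to the underlying sine rather than interpolating the corrupted points.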

303 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the case where branch probabilities are products of nonnegative integer powers in the parameters θ_s and their complements, 1 - θ_s, and show that the EM algorithm necessarily converges to a local maximum.
Abstract: Multinomial processing tree models assume that an observed behavior category can arise from one or more processing sequences represented as branches in a tree. These models form a subclass of parametric, multinomial models, and they provide a substantively motivated alternative to loglinear models. We consider the usual case where branch probabilities are products of nonnegative integer powers in the parameters, 0 ≤ θ_s ≤ 1, and their complements, 1 - θ_s. A version of the EM algorithm is constructed that has very strong properties. First, the E-step and the M-step are both analytic and computationally easy; therefore, a fast PC program can be constructed for obtaining MLEs for large numbers of parameters. Second, a closed form expression for the observed Fisher information matrix is obtained for the entire class. Third, it is proved that the algorithm necessarily converges to a local maximum, and this is a stronger result than for the exponential family as a whole. Fourth, we show how the algorithm can handle quite general hypothesis tests concerning restrictions on the model parameters. Fifth, we extend the algorithm to handle the Read and Cressie power divergence family of goodness-of-fit statistics. The paper includes an example to illustrate some of these results.
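A minimal instance of the class, the classic one-high-threshold recognition model, shows the closed-form E- and M-steps: an old item is detected with probability t, otherwise guessed "old" with probability g; new items are always guessed. Counts and starting values are invented for the example:

```python
import numpy as np

# Illustrative data: old items yield hits/misses, new items yield
# false alarms / correct rejections.
hits, misses = 80, 20
fas, crs = 30, 70
t, g = 0.5, 0.5                                 # starting values
for _ in range(500):
    # E-step: expected latent branch counts behind the "hit" category
    p_hit = t + (1 - t) * g                     # detect, or fail and guess "old"
    detected = hits * t / p_hit
    guessed_old = hits * (1 - t) * g / p_hit
    # M-step: analytic updates (ratios of expected branch counts)
    t = detected / (hits + misses)
    g = (guessed_old + fas) / (guessed_old + misses + fas + crs)
theta_hat, g_hat = t, g
```

Here the fixed point can be checked by hand: g equals the false-alarm rate 0.3, and t solves hit-rate = t + (1 - t) * 0.3, i.e. t = 5/7.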

275 citations


Book ChapterDOI
TL;DR: In this article, the authors provide an overview of asymptotic results available for parametric estimators in dynamic models, including multivariate least squares estimation of a dynamic conditional mean, quasi-maximum likelihood estimation, and generalized method of moments estimation of orthogonality conditions.
Abstract: This chapter provides an overview of asymptotic results available for parametric estimators in dynamic models. Three cases are treated: stationary (or essentially stationary) weakly dependent data, weakly dependent data containing deterministic trends, and nonergodic data (or data with stochastic trends). Estimation of asymptotic covariance matrices and computation of the major test statistics are covered. Examples include multivariate least squares estimation of a dynamic conditional mean, quasi-maximum likelihood estimation of a jointly parameterized conditional mean and conditional variance, and generalized method of moments estimation of orthogonality conditions. Some results for linear models with integrated variables are provided, as are some abstract limiting distribution results for nonlinear models with trending data.

BookDOI
01 Jan 1994
TL;DR: In this paper, the authors present the theory of the ground-state evolution of the system with increasing pumping amplitude, including the second and intermediate thresholds.
Abstract: Table of contents of Chapter 6, "Advanced S-Theory: Supplementary Sections": 6.1 Ground State Evolution of System with Increasing Pumping Amplitude; 6.1.1 Ground State of Parametric Waves for Complex Pair Interaction Amplitudes; 6.1.2 The Second and Intermediate Thresholds; 6.1.3 Nonlinear Behavior of Non-Analytic Pair Interaction Amplitudes; 6.2 Influence of Nonlinear Damping on Parametric Excitation; 6.2.1 Simple Theory; 6.2.2 Influence of Non-Analyticity on Nonlinear Damping; 6.3 Parametric Excitation Under the Feedback Effect on Pumping; 6.3.1 Hamiltonian of the Problem; 6.3.2 General Analysis of the Equations of Motion; 6.3.3 First-Order Processes; 6.3.4 Second-Order Processes; 6.4 Nonlinear Theory of Parametric Wave Excitation at Finite Temperatures; 6.4.1 Different Time Correlators and Frequency Spectrum; 6.4.2 Basic Equations of Temperature S-Theory; 6.4.3 Separation of Waves into Parametric and Thermal; 6.4.4 Two-Dimensional Reduction of Basic Equations; 6.4.5 Distribution of Parametric Waves in k; 6.4.6 Spectrum of Parametric Waves; 6.4.7 Heating Below Threshold; 6.4.8 Influence of Thermal Bath on Total Characteristics; 6.5 Introduction to Spatially Inhomogeneous S-Theory; 6.5.1 Basic Equations; 6.5.2 Parametric Threshold in Inhomogeneous Media; 6.5.3 Stationary State in Non-Homogeneous Media; 6.6 Nonlinear Behavior of Parametric Waves from Various Branches: Asymmetrical S-Theory; 6.6.1 Derivation of Basic Equations; 6.6.2 Stationary States in Isotropic Case; 6.7 Parametric Excitation of Waves by Noise Pumping; 6.7.1 Equations of S-Theory Under Noise Pumping; 6.7.2 Distribution of Parametric Waves.

Journal ArticleDOI
TL;DR: Results show that the parametric interpolator used for command generation during CNC machining is favourable in the machining of free-form geometry in terms of high speed and tight tolerancing.
Abstract: The paper presents a comparative study of linear and parametric interpolators used for command generation during CNC machining. For many decades, linear interpolators have been used in NC machines for command generation. Recently, a parametric interpolator has been developed that is designed for command generation for parametric curves (or surfaces). Two algorithms for the online implementation of this parametric interpolator are introduced. The comparison is mainly based on memory size, feedrate fluctuation and CPU time. Other considerations include tracking errors and jerk magnitude. Results show that the parametric interpolator is favourable in the machining of free-form geometry in terms of high speed and tight tolerancing.
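The parameter-update rule behind such an online parametric interpolator is commonly a first-order Taylor step, du ≈ V·T/|C'(u)|, so that each sampling period advances the tool by one feedrate increment along the curve. A hedged sketch on an invented planar curve (not the paper's algorithms):

```python
import numpy as np

def curve(u):
    """An illustrative planar parametric curve C(u), u in [0, 1]."""
    return np.array([3.0 * u, 2.0 * u * (1.0 - u)])

def dcurve(u, h=1e-6):
    return (curve(u + h) - curve(u - h)) / (2.0 * h)   # numerical C'(u)

V, T = 50.0, 0.001                    # feedrate (mm/s), sampling period (s)
u, pts = 0.0, [curve(0.0)]
while u < 1.0:
    # first-order Taylor update: choose du so that |C'(u)| * du ~ V * T
    u = min(1.0, u + V * T / float(np.linalg.norm(dcurve(u))))
    pts.append(curve(u))
steps = np.linalg.norm(np.diff(np.array(pts), axis=0), axis=1)
# chord lengths stay close to the commanded V * T = 0.05 mm per period
```

Unlike a precomputed linear-segment program, the curve itself is the command, so memory use is tiny and feedrate fluctuation is limited to the higher-order terms the Taylor step neglects.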

Journal ArticleDOI
Robert Gray1
TL;DR: A method to formulate a flexible parametric alternative using fixed knot splines, together with a penalty function that penalizes noisy alternatives more than smooth ones, to focus the power of the tests toward smooth alternatives.
Abstract: This paper examines a method for testing hypotheses on covariate effects in a proportional hazards model, and also on how effects change over time in regression analysis of survival data. The technique used is very general and can be applied to testing many other aspects of parametric and semiparametric models. The basic idea is to formulate a flexible parametric alternative using fixed knot splines, together with a penalty function that penalizes noisy alternatives more than smooth ones, to focus the power of the tests toward smooth alternatives. The test statistics are the analogs of ordinary likelihood-based statistics, only computed from a penalized likelihood formed by subtracting the penalty function from the ordinary log-likelihood. Large-sample approximations to the distributions are found when the number of knots is held fixed as the sample size increases. Numerical results suggest these approximations may be adequate with moderate sized samples.

Journal ArticleDOI
TL;DR: The technique proposed in the present paper is related to the class of so-called auto-calibration procedures, but it is assumed that certain prior knowledge of the array response errors is available, and it allows for more general perturbation models than does pure auto-calibration.
Abstract: A number of techniques for parametric (high-resolution) array signal processing have been proposed in the last few decades. With few exceptions, these algorithms require an exact characterization of the array, including knowledge of the sensor positions, sensor gain/phase response, mutual coupling, and receiver equipment effects. Unless all sensors are identical, this information must typically be obtained by experimental measurements (calibration). In practice, of course, all such information is inevitably subject to errors. Several different methods have been proposed for alleviating the inherent sensitivity of parametric methods to such modelling errors. The technique proposed in the present paper is related to the class of so-called auto-calibration procedures, but it is assumed that certain prior knowledge of the array response errors is available. This is a reasonable assumption in most applications, and it allows for more general perturbation models than does pure auto-calibration. The optimal maximum a posteriori (MAP) estimator for the problem at hand is formulated, and a computationally more attractive large-sample approximation is derived. The proposed technique is shown to be statistically efficient, and the achievable performance is illustrated by numerical evaluation and computer simulation.


Proceedings ArticleDOI
09 Sep 1994
TL;DR: An iterative algorithm is presented for simultaneous deformation of multiple curves and surfaces to an MRI, with inter-surface constraints and self-intersection avoidance, which automatically creates surfaces of MRI datasets with a common mapping to surface parametric space.
Abstract: An iterative algorithm is presented for simultaneous deformation of multiple curves and surfaces to an MRI, with inter-surface constraints and self-intersection avoidance. The resulting robust segmentation, combined with local curvature matching, automatically creates surfaces of MRI datasets with a common mapping to surface parametric space.

Journal ArticleDOI
TL;DR: In this paper, the authors generalize the parametric approach to a panel data setting and show that input and firm-specific allocative inefficiency, as well as firm specific technical inefficiency can be identified and estimated using a flexible functional form.
Abstract: The error-components approach to estimating allocative inefficiency imposes restrictive assumptions on the distributions of the errors and functional form. The parametric approach does not require special assumptions about the error distribution or technology but typically assumes technical efficiency or restrictive functional forms. The parametric approach also allows for systematic firm responses to shadow prices. The authors generalize the parametric approach to a panel data setting and show that input and firm-specific allocative inefficiency, as well as firm-specific technical inefficiency, can be identified and estimated using a flexible functional form. This is demonstrated empirically with an application to U.S. airlines. Copyright 1994 by Economics Department of the University of Pennsylvania and the Osaka University Institute of Social and Economic Research Association.

Journal ArticleDOI
TL;DR: This paper reformulated the rate-distortion problem in terms of the optimal mapping from the unit interval with Lebesgue measure that would induce the desired reproduction probability density and shows how the number of "symbols" grows as the system undergoes phase transitions.
Abstract: In rate-distortion theory, results are often derived and stated in terms of the optimizing density over the reproduction space. In this paper, the problem is reformulated in terms of the optimal mapping from the unit interval with Lebesgue measure that would induce the desired reproduction probability density. This results in optimality conditions that are "random relatives" of the known Lloyd (1982) optimality conditions for deterministic quantizers. The validity of the mapping approach is assured by fundamental isomorphism theorems for measure spaces. We show that for the squared error distortion, the optimal reproduction random variable is purely discrete at supercritical distortion (where the Shannon (1948) lower bound is not tight). The Gaussian source is thus the only source that produces continuous reproduction variables for the entire range of positive rate. To analyze the evolution of the optimal reproduction distribution, we use the mapping formulation and establish an analogy to statistical mechanics. The solutions are given by the distribution at isothermal statistical equilibrium, and are parameterized by the temperature in direct correspondence to the parametric solution of the variational equations in rate-distortion theory. The analysis of an annealing process shows how the number of "symbols" grows as the system undergoes phase transitions. Thus, an algorithm based on the mapping approach often needs but a few variables to find the exact solution, while the Blahut (1972) algorithm would only approach it at the limit of infinite resolution. Finally, a quick "deterministic annealing" algorithm to generate the rate-distortion curve is suggested. The resulting curve is exact as long as continuous phase transitions in the process are accurately followed.
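For contrast with the mapping approach, the Blahut-style alternating minimization mentioned in the abstract can be sketched for a small discrete source; the grid, slope parameter, and iteration count are illustrative:

```python
import numpy as np

src = np.linspace(-2, 2, 21)
p = np.exp(-0.5 * src ** 2); p /= p.sum()       # discretized Gaussian source
rep = src.copy()                                 # reproduction alphabet
D = (src[:, None] - rep[None, :]) ** 2           # squared-error distortion
s = 4.0                                          # slope parameter ("inverse temperature")
q = np.full(len(rep), 1.0 / len(rep))            # output marginal
for _ in range(500):
    cond = q[None, :] * np.exp(-s * D)
    cond /= cond.sum(axis=1, keepdims=True)      # optimal test channel p(y|x)
    q = p @ cond                                 # update output marginal
cond = q[None, :] * np.exp(-s * D)
cond /= cond.sum(axis=1, keepdims=True)
rate = float(np.sum(p[:, None] * cond * np.log2(cond / q[None, :])))
dist = float(np.sum(p[:, None] * cond * D))      # one (D, R) point per slope s
```

Sweeping s traces the rate-distortion curve; the paper's point is that such a fixed-grid iteration only approaches the exact (often finitely supported) optimum as the grid is refined.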

Journal ArticleDOI
Tor Steinar Schei1
TL;DR: A method for the automatic tuning of PID controllers in a closed loop, based on the estimation of a parametric 'black-box' transfer function model, is proposed; it has low sensitivity to disturbances and noise during the tuning experiment.

Journal ArticleDOI
TL;DR: A parametric method of statistical analysis for dilution assays is developed in detail from first principles of probability and statistics and produces an estimate for the concentration of target entities, a confidence interval for this concentration, and an indicator of the quality of the assay called the p value for goodness of fit.
Abstract: A parametric method of statistical analysis for dilution assays is developed in detail from first principles of probability and statistics. The method is based on a simple product binomial model for the experiment and produces an estimate for the concentration of target entities, a confidence interval for this concentration, and an indicator of the quality of the assay called the p value for goodness of fit. The procedure is illustrated with data from a virologic quantitative microculture assay used to quantify free human immunodeficiency virus in clinical trials. The merits of the procedure versus those of nonparametric methods of estimating the dilution inducing a 50% response rate are discussed. Advantages of the proposed approach include plausibility of the underlying assumptions, ability to assess plausibility of specific experimental outcomes through their likelihood, and plausibility of confidence intervals.
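A hedged sketch of a product-binomial dilution analysis, assuming the common single-hit form p(v) = 1 - exp(-c·v) for the probability that a well at volume v is positive; the data and the grid-search maximization are invented for the example:

```python
import numpy as np

vols = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])   # relative inoculum volumes
n_wells = np.array([8, 8, 8, 8, 8])
n_pos = np.array([8, 7, 5, 2, 1])                  # positive wells observed

def loglik(c):
    p = 1.0 - np.exp(-c * vols)                    # P(well positive) at each volume
    return np.sum(n_pos * np.log(p) + (n_wells - n_pos) * np.log(1.0 - p))

grid = np.linspace(0.05, 20.0, 4000)
ll = np.array([loglik(c) for c in grid])
c_hat = float(grid[np.argmax(ll)])                 # MLE of the concentration

# Approximate 95% interval: c with log-likelihood within 1.92 of the maximum
inside = grid[ll > ll.max() - 1.92]
ci = (float(inside.min()), float(inside.max()))
```

A goodness-of-fit p value can then be obtained by comparing the fitted binomial probabilities to the observed well counts, e.g. via the deviance.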

Journal ArticleDOI
TL;DR: An up-to-date list of publications on blending, including parametric-surface methods and other methods, is provided as a key to the literature.
Abstract: The paper discusses the blending problem in geometric modelling, and it provides a comprehensive review of solutions that use parametric surfaces. A terminology and a classification are presented to help clarify the nature of blending, and the relationships between various parametric blending methods. Several geometric techniques are evaluated, highlighting concepts which the authors feel to be important. Topological issues are also discussed. In conclusion, the applicability and efficiency of parametric techniques for general blending situations are emphasized, and open questions for future research are presented. An up-to-date list of publications on blending, including parametric-surface methods and other methods, is provided as a key to the literature.

Journal ArticleDOI
TL;DR: The paper presents a method for the computation of the monostatic radar cross section (RCS) of electrically large conducting objects modeled by nonuniform rational B-spline (NURBS) surfaces using the physical optics (PO) technique, which makes use of a small number of patches to model complex bodies.
Abstract: The paper presents a method for the computation of the monostatic radar cross section (RCS) of electrically large conducting objects modeled by nonuniform rational B-spline (NURBS) surfaces using the physical optics (PO) technique. The NURBS surfaces are expanded in terms of rational Bezier patches by applying the Cox-De Boor transform algorithm. This transformation is justified because Bezier patches are numerically more stable than NURBS surfaces. The PO integral is evaluated over the parametric space of the Bezier surfaces using asymptotic integration. The scattering field contribution of each Bezier patch is expressed in terms of its geometric parameters. Excellent agreement with PO predictions is obtained. The method is quite efficient because it makes use of a small number of patches to model complex bodies, so it requires very little memory and computing time.

Journal ArticleDOI
TL;DR: In this paper, a method for testing a parametric model of the mean of a random variable Y conditional on a vector of explanatory variables X against a semiparametric alternative is described.
Abstract: This paper describes a method for testing a parametric model of the mean of a random variable Y conditional on a vector of explanatory variables X against a semiparametric alternative. The test is motivated by a conditional moment test against a parametric alternative and amounts to replacing the parametric alternative model with a semiparametric model. The resulting semiparametric test is consistent against a larger set of alternatives than are parametric conditional moment tests based on finitely many moment conditions. The results of Monte Carlo experiments and an application illustrate the usefulness of the new test.

Patent
14 Apr 1994
TL;DR: In this article, a system and method for measuring and storing parametric data pertaining to the operating characteristics of an implantable medical device are described, including the impedance of a lead that is attached to a patient's heart and the internal impedance of a battery used to power the implantable device.
Abstract: A system and method for measuring and storing parametric data pertaining to the operating characteristics of an implantable medical device are provided. The parametric data may include the impedance of a lead that is attached to a patient's heart, and the internal impedance of a battery used to power the implantable medical device. The parametric data may be measured and stored at predetermined time intervals, as indicated by a clock provided within the implantable medical device. In addition, the parametric data measurements may be synchronized with the occurrence of a cardiac event, such as the application of a stimulation pulse to the patient's heart. A plurality of measurements for each type of parametric data may be stored, so that when the parametric data are later retrieved and displayed on an external programmer/analyzer, trends in the data can be readily observed.

Journal ArticleDOI
TL;DR: In this paper, the authors develop a stability theory for broad classes of parametric generalized equations and variational inequalities in finite dimensions, and prove new criteria for the existence of Lipschitzian multivalued and single-valued implicit functions.
Abstract: In this paper we develop a stability theory for broad classes of parametric generalized equations and variational inequalities in finite dimensions. These objects have a wide range of applications in optimization, nonlinear analysis, mathematical economics, etc. Our main concern is Lipschitzian stability of multivalued solution maps depending on parameters. We employ a new approach of nonsmooth analysis based on the generalized differentiation of multivalued and nonsmooth operators. This approach allows us to obtain effective sufficient conditions as well as necessary and sufficient conditions for a natural Lipschitzian behavior of solution maps. In particular, we prove new criteria for the existence of Lipschitzian multivalued and single-valued implicit functions.

Proceedings ArticleDOI
08 May 1994
TL;DR: A general learning algorithm is presented for determining the mapping between robot position and object appearance, which enables accurate visual control without any prior hand-eye calibration.
Abstract: The problem of vision-based robot positioning and tracking is addressed. A general learning algorithm is presented for determining the mapping between robot position and object appearance. The robot is first moved through several displacements with respect to its desired position, and a large set of object images is acquired. This image set is compressed using principal component analysis to obtain a four-dimensional subspace. Variations in object images due to robot displacements are represented as a compact parametrized manifold in the subspace. While positioning or tracking, errors in end-effector coordinates are efficiently computed from a single brightness image using the parametric manifold representation. The learning component enables accurate visual control without any prior hand-eye calibration. Several experiments have been conducted to demonstrate the practical feasibility of the proposed positioning/tracking approach and its relevance to industrial applications.
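The learning step can be sketched with synthetic one-dimensional "images": project the training views into a PCA subspace, store the resulting parametrized manifold, and read a new view's displacement off the nearest manifold point. All details are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
disps = np.linspace(-1.0, 1.0, 25)                 # known robot displacements
pix = np.linspace(0.0, 1.0, 64)
# Synthetic 1-D "images": a feature whose position shifts with displacement
imgs = np.array([np.exp(-((pix - 0.5 - 0.3 * d) ** 2) / 0.05) for d in disps])

mean = imgs.mean(axis=0)
U, S, Vt = np.linalg.svd(imgs - mean, full_matrices=False)
basis = Vt[:4]                                     # four-dimensional subspace
manifold = (imgs - mean) @ basis.T                 # one point per displacement

# A new image at an unknown displacement: project, find nearest manifold point
test_img = np.exp(-((pix - 0.5 - 0.3 * 0.42) ** 2) / 0.05)
test_img += 0.01 * rng.standard_normal(64)         # mild sensor noise
z = (test_img - mean) @ basis.T
est = float(disps[np.argmin(np.linalg.norm(manifold - z, axis=1))])
```

In practice the manifold is interpolated between training points rather than snapped to the nearest one, which is what gives sub-sample positioning accuracy.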

Journal ArticleDOI
14 Dec 1994
TL;DR: The analysis establishes a link between (1) shooting techniques for solving the associated boundary value problem (BVP) and (2) second-order sufficient conditions (SSC), and proves solution differentiability in the sense that the optimal solution and the associated adjoint multiplier function are differentiable functions of the parameter.
Abstract: Investigates sensitivity of solutions to parametric nonlinear control problems subject to mixed control-state constraints. The parameter modelling the data perturbations is an element p of a Banach space P. The purpose of this paper is to extend solution differentiability to the case of vector-valued controls and constraints. Moreover, the authors elaborate more closely on the links between second-order sufficient conditions and shooting methods for solving the underlying boundary value problem.

Proceedings ArticleDOI
29 Jun 1994
TL;DR: In this paper, a systematic way to combine an adaptive control design technique and sliding mode control methodology for trajectory tracking control of robot manipulators in the presence of parametric uncertainties and external disturbance is developed.
Abstract: A systematic way to combine an adaptive control design technique and sliding mode control methodology for trajectory tracking control of robot manipulators in the presence of parametric uncertainties and external disturbance is developed in this paper. Continuous sliding mode controllers without the unpleasant reaching transient and chattering problem are first developed by using a dynamic sliding mode. Transient performance is guaranteed and globally uniform ultimate boundedness (GUUB) stability is obtained. A conventional adaptive scheme is also developed for comparison. With some modifications to the conventional adaptation law, the control law is redesigned by combining the design methodologies of adaptive control and sliding mode control. The suggested controller preserves the advantages of both methods, namely, asymptotic stability of adaptive systems for parametric uncertainties and GUUB stability with guaranteed transient performance of sliding mode control for both parametric uncertainties and external disturbances. The control law is continuous and the chattering problem of sliding mode control is avoided. A priori knowledge of the bounds of the parameter uncertainties and external disturbances is assumed. Experimental results illustrate the effectiveness of the proposed methods.
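The continuous sliding-mode ingredient (without the adaptive part) can be sketched for a one-degree-of-freedom arm, smoothing the switching term with tanh to avoid chattering; the plant, gains, and boundary-layer width are invented for the example:

```python
import numpy as np

lam, k, eta = 5.0, 20.0, 2.0        # surface slope, feedback gain, switching gain
m_true, m_hat = 2.0, 1.5            # true vs. assumed inertia (parametric error)
dt = 1e-3
q, dq = 0.5, 0.0                    # start away from the reference
errs = []
for i in range(10000):
    t = i * dt
    qd, dqd, ddqd = np.sin(t), np.cos(t), -np.sin(t)   # reference trajectory
    e, de = q - qd, dq - dqd
    s = de + lam * e                # sliding variable
    # continuous law: model feedforward + linear term + smoothed switching
    u = m_hat * (ddqd - lam * de) - k * s - eta * np.tanh(s / 0.05)
    dist = 0.5 * np.sin(5.0 * t)    # bounded external disturbance
    ddq = (u + dist) / m_true       # plant: m * ddq = u + dist
    dq += ddq * dt
    q += dq * dt
    errs.append(abs(e))
final_err = max(errs[-2000:])       # ultimate bound on the tracking error
```

Because the switching term is smoothed, the error converges to a small ultimate bound (GUUB) rather than to zero, which is the trade-off the abstract describes; the adaptive augmentation recovers asymptotic convergence for the parametric part.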

Journal ArticleDOI
TL;DR: In this article, an approach to deal with an estimation problem which is often encountered in analyzing the longitudinal cost data gathered in a clinical trial is proposed. But the approach is illustrated using data from a trial comparing three different drug regimes.
Abstract: This paper suggests an approach to deal with an estimation problem which is often encountered in analyzing the longitudinal cost data gathered in a clinical trial. The source of that estimation problem is twofold: 1) a considerable number of missing data due to treatment-related withdrawal of severely affected patients with high health care costs in only one of the treatment groups, and 2) a heavily skewed cost distribution due to rare high-cost events. The approach is illustrated using data from a trial comparing three different drug regimes. In order to calculate costs per patient-year in the case of selectively missing data, we extrapolated the costs of patients with incomplete follow-up. Due to the skewness and the associated large variance in costs per patient-year, these costs cannot be analyzed using common parametric statistical methods relying on underlying normal distributions. A logarithmic transformation was performed to approximate a normal distribution, reduce the impact of extreme values, and create similar-sized variances in the treatment groups. An ordinary least squares regression analysis of the transformed data then standardized for differences in patient characteristics between the groups. For the retransformation, the so-called smearing estimate was used. This 'transformation-standardization-retransformation' approach enabled us to provide more consistent and efficient estimates of cost differences that were shown to be statistically significant and judged to be important.
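The retransformation step can be sketched directly: regress log costs, then multiply the exponentiated predictions by the smearing estimate (the mean of the exponentiated residuals) instead of relying on a normality-based correction. Synthetic data; the group effect and variances are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
group = rng.integers(0, 2, n)                        # treatment indicator
cost = np.exp(7.0 + 0.4 * group + rng.standard_normal(n))  # heavily skewed costs

X = np.column_stack([np.ones(n), group])
beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)    # regress log costs
resid = np.log(cost) - X @ beta
smear = float(np.mean(np.exp(resid)))                # smearing factor
pred_mean = np.exp(X @ beta) * smear                 # retransformed mean costs
naive = np.exp(X @ beta)                             # ignores retransformation bias
```

Simply exponentiating the fitted values estimates the median, not the mean, of the skewed cost distribution; the smearing factor corrects that bias without assuming log-normal errors.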