
Showing papers on "Parametric statistics published in 1983"


Journal ArticleDOI
Naihua Duan1
TL;DR: The smearing estimate as discussed by the authors is a nonparametric estimate of the expected response on the untransformed scale after fitting a linear regression model on a transformed scale, which is consistent under mild regularity conditions, and usually attains high efficiency relative to parametric estimates.
Abstract: The smearing estimate is proposed as a nonparametric estimate of the expected response on the untransformed scale after fitting a linear regression model on a transformed scale. The estimate is consistent under mild regularity conditions, and usually attains high efficiency relative to parametric estimates. It can be viewed as a low-premium insurance policy against departures from parametric distributional assumptions. A real-world example of predicting medical expenditures shows that the smearing estimate can outperform parametric estimates even when the parametric assumption is nearly satisfied.

2,093 citations
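
As a minimal sketch of the idea, assuming a log-transformed linear model fit by ordinary least squares (the variable names and simulated data are ours, not Duan's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated skewed response, modeled as log(y) = a + b*x + error.
n = 500
x = rng.uniform(0, 2, n)
y = np.exp(1.0 + 0.5 * x + rng.normal(0, 0.8, n))

# Fit the linear model on the transformed (log) scale.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
resid = np.log(y) - X @ beta

# Naive retransformation exp(X @ beta) underestimates E[y | x] whenever the
# errors have positive variance. The smearing estimate instead averages the
# retransformed residuals, making no parametric assumption about their law.
eta = beta[0] + beta[1] * 1.0          # linear predictor at x = 1.0
naive = np.exp(eta)
smearing = np.exp(eta) * np.mean(np.exp(resid))
print(naive, smearing)
```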


Journal ArticleDOI
TL;DR: In this paper, a review of the state of the art in multiparameter shrinkage estimators with emphasis on the empirical Bayes viewpoint, particularly in the case of parametric prior distributions, is presented.
Abstract: This article reviews the state of multiparameter shrinkage estimators with emphasis on the empirical Bayes viewpoint, particularly in the case of parametric prior distributions. Some successful applications of major importance are considered. Recent results concerning estimates of error and confidence intervals are described and illustrated with data.

1,409 citations
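
One concrete instance of the estimators reviewed is the James-Stein (Efron-Morris) shrinkage rule for k normal means under a normal parametric prior; the sketch below is our own illustration of that special case:

```python
import numpy as np

rng = np.random.default_rng(1)

# k means theta_i ~ N(mu, A); observations y_i ~ N(theta_i, V), V known.
k, V = 20, 1.0
theta = rng.normal(5.0, 2.0, k)
y = theta + rng.normal(0.0, np.sqrt(V), k)

# Empirical Bayes: estimate the prior parameters from the data, then
# shrink each observation toward the estimated prior mean.
mu_hat = y.mean()
S = np.sum((y - mu_hat) ** 2)
B_hat = (k - 3) * V / S                # unbiased estimate of B = V / (V + A)
theta_eb = mu_hat + (1.0 - B_hat) * (y - mu_hat)

# Shrinkage typically lowers the total squared error relative to y itself.
print(np.mean((y - theta) ** 2), np.mean((theta_eb - theta) ** 2))
```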


Proceedings ArticleDOI
01 Jul 1983
TL;DR: This paper advances a “pyramidal parametric” prefiltering and sampling geometry which minimizes aliasing effects and assures continuity within and between target images.
Abstract: The mapping of images onto surfaces may substantially increase the realism and information content of computer-generated imagery. The projection of a flat source image onto a curved surface may involve sampling difficulties, however, which are compounded as the view of the surface changes. As the projected scale of the surface increases, interpolation between the original samples of the source image is necessary; as the scale is reduced, approximation of multiple samples in the source is required. Thus a constantly changing sampling window of view-dependent shape must traverse the source image. To reduce the computation implied by these requirements, a set of prefiltered source images may be created. This approach can be applied to particular advantage in animation, where a large number of frames using the same source image must be generated. This paper advances a “pyramidal parametric” prefiltering and sampling geometry which minimizes aliasing effects and assures continuity within and between target images. Although the mapping of texture onto surfaces is an excellent example of the process and provided the original motivation for its development, pyramidal parametric data structures admit of wider application. The aliasing of not only surface texture, but also highlights and even the surface representations themselves, may be minimized by pyramidal parametric means.

1,000 citations
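
The pyramid described here is what is now called a mip map. Below is a minimal sketch of the two key operations, prefiltering into levels and continuously blended level selection, under our own simplified conventions (square grayscale texture, box filter, nearest-texel lookup within a level):

```python
import numpy as np

def build_pyramid(img):
    """Prefilter: repeatedly average 2x2 blocks to halve the resolution."""
    levels = [np.asarray(img, dtype=float)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append((a[0::2, 0::2] + a[1::2, 0::2] +
                       a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
    return levels

def sample(levels, u, v, footprint):
    """Sample at (u, v) in [0, 1) for a given screen-pixel footprint in texels.

    The footprint selects a fractional pyramid level; blending the two
    nearest levels keeps the result continuous as the footprint changes,
    which is what suppresses aliasing within and between frames.
    """
    d = np.clip(np.log2(max(footprint, 1.0)), 0, len(levels) - 1)
    lo = int(d)
    hi = min(lo + 1, len(levels) - 1)

    def texel(level):
        n = levels[level].shape[0]
        return levels[level][min(int(v * n), n - 1), min(int(u * n), n - 1)]

    return (1 - (d - lo)) * texel(lo) + (d - lo) * texel(hi)

tex = (np.indices((64, 64)).sum(axis=0) % 2) * 255.0   # checkerboard texture
pyr = build_pyramid(tex)
print(sample(pyr, 0.3, 0.7, footprint=4.0))
```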


Journal ArticleDOI
TL;DR: In this paper, the existence, support size, likelihood equations, and uniqueness of the estimator are revealed to be directly related to the properties of the convex hull of the likelihood set and the support hyperplanes of that hull.
Abstract: In this paper certain fundamental properties of the maximum likelihood estimator of a mixing distribution are shown to be geometric properties of the likelihood set. The existence, support size, likelihood equations, and uniqueness of the estimator are revealed to be directly related to the properties of the convex hull of the likelihood set and the support hyperplanes of that hull. It is shown using geometric techniques that the estimator exists under quite general conditions, with a support size no larger than the number of distinct observations. Analysis of the convex dual of the likelihood set leads to a dual maximization problem. A convergent algorithm is described. The defining equations for the estimator are compared with the usual parametric likelihood equations for finite mixtures. Sufficient conditions for uniqueness are given. Part II will deal with a special theory for exponential family mixtures.

674 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a full range of results and applications of sensitivity analysis, relevant to chemical kinetic modeling, and exclude a large class of literature that deals with system sensitivities from a control theory perspective.
Abstract: Complex mathematical models are increasingly being used as predictive tools and as aids for understanding the processes underlying observed chemical phenomena. The parameters appearing in these models, which may include rate constants, activation energies, thermodynamic constants, transport coefficients, initial conditions, and operating conditions, are seldom known to high precision. Thus, the predictions or conclusions of modeling endeavors are usually subject to uncertainty. Furthermore, regardless of uncertainty questions, there is always the overriding matter of which parameters control laboratory observations. Quantification of the role of the parameters in the model predictions is the traditional realm of sensitivity analysis. A significant amount of current research is directed at conceptualization and implementation of numerical techniques for determining parametric sensitivities for algebraic, differential, and partial differential equation models including those with stochastic character and nonconstant parameters. Recent studies have also served to extend the range of the conventional parametric analysis to address new questions, relevant to the process of model building and interpretation. This review attempts to present a full range of results and applications of sensitivity analysis, relevant to chemical kinetic modeling. We exclude a large class of literature that deals with system sensitivities from a control theory perspective (1). We further limit discussion of related subjects such as parameter identification (estimation of best parameter values for fitting data) and optimization, in which parametric sensitivities play only

506 citations
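
For a single ODE model, the simplest numerical technique for parametric sensitivities is a finite-difference approximation; the sketch below applies it to a hypothetical first-order reaction A → B with rate constant k (our own example, chosen so the exact sensitivity is known):

```python
import numpy as np
from scipy.integrate import solve_ivp

def concentration_A(k, t_eval):
    """Integrate dA/dt = -k * A with A(0) = 1."""
    sol = solve_ivp(lambda t, y: -k * y, (0.0, t_eval[-1]), [1.0],
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    return sol.y[0]

t = np.linspace(0.0, 5.0, 6)
k0 = 0.7

# Forward-difference sensitivity dA/dk at the nominal parameter value.
h = 1e-5 * k0
sens_fd = (concentration_A(k0 + h, t) - concentration_A(k0, t)) / h

# For this model the sensitivity has a closed form, dA/dk = -t * exp(-k*t),
# so the finite-difference estimate can be checked directly.
sens_exact = -t * np.exp(-k0 * t)
print(np.max(np.abs(sens_fd - sens_exact)))
```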


Journal ArticleDOI
TL;DR: In this article, lower bounds for estimation of the parameters of models with both parametric and nonparametric components are given in the form of representation theorems (for regular estimates) and asymptotic minimax bounds.
Abstract: Asymptotic lower bounds for estimation of the parameters of models with both parametric and nonparametric components are given in the form of representation theorems (for regular estimates) and asymptotic minimax bounds. The methods used involve: (i) the notion of a "Hellinger-differentiable (root-) density", where part of the differentiation is with respect to the nonparametric part of the model, to obtain appropriate scores; and (ii) calculation of the "effective score" for the real or vector (finite-dimensional) parameter of interest as that component of the score function orthogonal to all nuisance parameter "scores" (perhaps infinite-dimensional). The resulting asymptotic information for estimation of the parametric component of the model is just (4 times) the squared $L^2$-norm of the "effective score". A corollary of these results is a simple necessary condition for "adaptive estimation": adaptation is possible only if the scores for the parameter of interest are orthogonal to the scores for the nuisance function or nonparametric part of the model. Examples considered include the one-sample location model with and without symmetry, mixture models, the two-sample shift model, and Cox's proportional hazards model.

406 citations


Journal ArticleDOI
TL;DR: This article shows how to fit a smooth curve (polynomial spline) to pairs of data values (yi, xi) using the Kalman filter to evaluate the likelihood function and achieve significant computational advantages over previous approaches to this problem.
Abstract: This article shows how to fit a smooth curve (polynomial spline) to pairs of data values (yi, xi). Prior specification of a parametric functional form for the curve is not required. The resulting curve can be used to describe the pattern of the data, and to predict unknown values of y given x. Both point and interval estimates are produced. The method is easy to use, and the computational requirements are modest, even for large sample sizes. Our method is based on maximum likelihood estimation of a signal-in-noise model of the data. We use the Kalman filter to evaluate the likelihood function and achieve significant computational advantages over previous approaches to this problem.

287 citations
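
The computational idea can be sketched with a tiny state-space model: a local linear trend observed in noise, whose likelihood the Kalman filter evaluates in a single pass via the prediction-error decomposition. This is our own minimal illustration of the approach, not the authors' code; the variance ratio controlling the smoothness is fixed rather than estimated:

```python
import numpy as np

def kalman_loglik(y, q, r):
    """Log-likelihood of a local-linear-trend model via the Kalman filter.

    State (level, slope): the level integrates the slope, the slope takes
    random-walk steps with variance q; observations add noise with variance r.
    """
    T = np.array([[1.0, 1.0], [0.0, 1.0]])    # state transition
    Z = np.array([1.0, 0.0])                  # observation vector
    Q = np.array([[0.0, 0.0], [0.0, q]])      # state noise covariance
    a = np.zeros(2)                           # state mean
    P = np.eye(2) * 1e6                       # diffuse initial covariance
    ll = 0.0
    for yt in y:
        v = yt - Z @ a                        # one-step prediction error
        F = Z @ P @ Z + r                     # its variance
        ll += -0.5 * (np.log(2 * np.pi * F) + v * v / F)
        K = (P @ Z) / F                       # Kalman gain
        a = T @ (a + K * v)
        P = T @ (P - np.outer(K, Z @ P)) @ T.T + Q
    return ll

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + rng.normal(0.0, 0.3, 200)
print(kalman_loglik(y, q=1e-4, r=0.09))
```

Maximizing this function over (q, r) with a generic optimizer recovers the signal-to-noise ratio that plays the role of the spline smoothing parameter.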


Journal ArticleDOI
TL;DR: In this article, a nonparametric method of discriminant analysis is proposed based on non-parametric extensions of commonly used scatter matrices for non-Gaussian data sets and a procedure is proposed to test the structural similarity of two distributions.
Abstract: A nonparametric method of discriminant analysis is proposed. It is based on nonparametric extensions of commonly used scatter matrices. Two advantages result from the use of the proposed nonparametric scatter matrices. First, they are generally of full rank. This provides the ability to specify the number of extracted features desired. This is in contrast to parametric discriminant analysis, which for an L class problem typically can determine at most L-1 features. Second, the nonparametric nature of the scatter matrices allows the procedure to work well even for non-Gaussian data sets. Using the same basic framework, a procedure is proposed to test the structural similarity of two distributions. The procedure works in high-dimensional space. It specifies a linear decomposition of the original data space in which a relative indication of dissimilarity along each new basis vector is provided. The nonparametric scatter matrices are also used to derive a clustering procedure, which is recognized as a k-nearest neighbor version of the nonparametric valley seeking algorithm. The form which results provides a unified view of the parametric nearest mean reclassification algorithm and the nonparametric valley seeking algorithm.

232 citations


Journal ArticleDOI
TL;DR: An algorithm using sensitivity analysis to solve a linear two-stage optimization problem using a set of first order optimality conditions that parallel the Kuhn-Tucker conditions associated with a one-dimensional parametric linear program is presented.
Abstract: This paper presents an algorithm using sensitivity analysis to solve a linear two-stage optimization problem. The underlying theory rests on a set of first order optimality conditions that parallel the Kuhn-Tucker conditions associated with a one-dimensional parametric linear program. The solution to the original problem is uncovered by systematically varying the parameter over the unit interval and solving the corresponding linear program. Finite convergence is established under nondegeneracy assumptions. The paper also discusses other solution techniques including branch and bound and vertex enumeration and gives an example highlighting their computational and storage requirements. By these measures, the algorithm presented here has an overall advantage. Finally, a comparison is drawn between bicriteria and bilevel programming, and underscored by way of an example.

193 citations


Proceedings ArticleDOI
Michael F. Plass1, Maureen Stone1
01 Jul 1983
TL;DR: An algorithm is developed that takes a set of sample points, plus optional endpoint and tangent vector specifications, and iteratively derives a single parametric cubic polynomial that lies close to the data points as defined by an error metric based on least-squares.
Abstract: Parametric piecewise-cubic functions are used throughout the computer graphics industry to represent curved shapes. For many applications, it would be useful to be able to reliably derive this representation from a closely spaced set of points that approximate the desired curve, such as the input from a digitizing tablet or a scanner. This paper presents a solution to the problem of automatically generating efficient piecewise parametric cubic polynomial approximations to shapes from sampled data. We have developed an algorithm that takes a set of sample points, plus optional endpoint and tangent vector specifications, and iteratively derives a single parametric cubic polynomial that lies close to the data points as defined by an error metric based on least-squares. Combining this algorithm with dynamic programming techniques to determine the knot placement gives good results over a range of shapes and applications.

188 citations
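
The core least-squares step, fitting one parametric cubic under a fixed parameterization, reduces to a linear system. A minimal sketch assuming chord-length parameterization follows (the paper's iterative reparameterization and dynamic-programming knot placement are omitted):

```python
import numpy as np

def fit_parametric_cubic(points):
    """Least-squares fit of x(t) and y(t) as cubic polynomials in t.

    t is assigned by normalized chord length, a common initial guess; the
    full algorithm iteratively improves the parameterization, which we skip.
    """
    pts = np.asarray(points, dtype=float)
    chords = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(chords)])
    t /= t[-1]
    A = np.vander(t, 4, increasing=True)      # rows [1, t, t^2, t^3]
    coef, *_ = np.linalg.lstsq(A, pts, rcond=None)
    return coef, t

# Closely spaced samples along a quarter circle, as from a digitizer.
theta = np.linspace(0.0, np.pi / 2, 30)
samples = np.column_stack([np.cos(theta), np.sin(theta)])
coef, t = fit_parametric_cubic(samples)
fitted = np.vander(t, 4, increasing=True) @ coef
print(np.max(np.linalg.norm(fitted - samples, axis=1)))   # max deviation
```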


Journal ArticleDOI
TL;DR: The asymptotic efficiency of the Kaplan-Meier product-limit estimator, relative to the maximum likelihood estimator of a parametric survival function, is examined under a random-censoring model.
Abstract: SUMMARY The asymptotic efficiency of the Kaplan-Meier product-limit estimator, relative to the maximum likelihood estimator of a parametric survival function, is examined under a random-censoring model.
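
For reference, the product-limit estimator itself is short to state in code; the following is our own minimal implementation under right censoring:

```python
import numpy as np

def kaplan_meier(times, observed):
    """Product-limit estimate of the survival function S(t).

    times    : event or censoring times
    observed : 1 if the event was observed, 0 if right-censored
    """
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed)
    # Sort by time, placing events before censorings at tied times.
    order = np.lexsort((1 - observed, times))
    times, observed = times[order], observed[order]
    at_risk, S, curve = len(times), 1.0, []
    for t, d in zip(times, observed):
        if d:
            S *= 1.0 - 1.0 / at_risk
        curve.append((t, S))
        at_risk -= 1
    return curve

# Small worked example; d = 0 marks a censored observation.
for t, s in kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1]):
    print(t, round(s, 3))
```

Its efficiency relative to, say, an exponential or Weibull maximum likelihood fit is what the paper quantifies.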

Journal ArticleDOI
TL;DR: A novel mathematical model to determine efficiently the average power pattern degradations caused by random surface errors shows that as sidelobe levels decrease, their dependence on the surface rms/λ becomes much stronger and, for a specified tolerance level, a considerably smaller rms/λ is required to maintain the low sidelobes within the required bounds.
Abstract: Based on the works of Ruze and Vu, a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/λ becomes much stronger and, for a specified tolerance level, a considerably smaller rms/λ is required to maintain the low sidelobes within the required bounds.
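
The classical starting point for such tolerance studies is Ruze's formula for the axial gain of a reflector with rms surface error ε at wavelength λ:

```latex
G = G_0 \, e^{-(4\pi\varepsilon/\lambda)^2}
```

so, for example, a surface rms of λ/50 costs roughly 0.27 dB of gain. The model above refines this picture by letting the rms and the illumination vary over annular regions of the reflector.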

Patent
10 Nov 1983
TL;DR: In this article, a flow monitor consisting of an optical sensing chamber and an electronic controller is used to determine the precise volume of IV solution or urine passing through the respective systems, and also forming part of the system are audible and visual alarms to alert the user to any malfunction in need of correction.
Abstract: A flow monitor including an optical sensing chamber and an electronic controller which allows determination of exact drop volumes and flow rates. In one embodiment, the flow monitor forms part of a gravity fed volumetric controller in an IV system. In another embodiment, the flow monitor takes the form of a urinary output monitor in a urine collection system. Basically, the flow monitor comprises a microcontroller which responds to parametric information fed into the system through a keyboard and variable information detected by a novel drop diameter detector. The electronic controller, in response to the parametric and variable information being fed into it, is able to determine the precise volume of IV solution or urine passing through the respective systems. In the volumetric controller, the microcontroller causes a linear actuator to control the diameter of a flexible pinch tube found in the IV system. Under one mode of operation, the diameter of the pinch tube is regulated to control drop size. In another mode of operation, the diameter of the tube is regulated to control the time interval between drops. By selectively combining the two modes of operation, a precise volume of IV fluid may be administered to a patient. Also forming part of the system are audible and visual alarms to alert the user to any malfunctions in need of correction.

Journal ArticleDOI
TL;DR: In this paper, the univariate conditions of Gnedenko characterizing domains of attraction for univariate extreme value distributions are generalized to higher dimensions, and random variables with a multi-dimensional extreme value distribution are associated.
Abstract: The univariate conditions of Gnedenko characterizing domains of attraction for univariate extreme value distributions are generalized to higher dimensions. In addition, it is shown that random variables with a multivariate extreme value distribution are associated. Applications are given to a number of parametric families of joint distributions with given marginal distributions.

Journal ArticleDOI
TL;DR: In this paper, the authors describe three procedures for carrying out such comparisons and explore the ability of each to distinguish between correct and incorrect models, including tests against a composite model, the Cox test of separate families of hypotheses, and comparisons based on the likelihood ratio index goodness-of-fit statistic.
Abstract: The development of empirical probabilistic discrete-choice models frequently entails comparing two non-nested models (i.e., models with the property that neither can be obtained as a parametric special case of the other) to determine which is most likely to provide a correct explanation of a particular choice situation. Conventional statistical procedures, such as the likelihood ratio test, do not apply to comparisons of non-nested models. This paper describes three procedures for carrying out such comparisons and explores the ability of each to distinguish between correct and incorrect models. The procedures are: tests against a composite model, the Cox test of separate families of hypotheses, and comparisons based on the likelihood ratio index goodness-of-fit statistic. A modification of the likelihood ratio index is proposed that corrects for the effects of differences in the numbers of estimated parameters in the compared models. The abilities of the various procedures to reject incorrect models and ac...
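
The goodness-of-fit statistic in question is the likelihood ratio index ρ², and the proposed modification penalizes the number of estimated parameters K. A sketch consistent with the description above, with ln L(β̂) the fitted log-likelihood and ln L(0) the log-likelihood with all parameters zero:

```latex
\rho^2 = 1 - \frac{\ln L(\hat\beta)}{\ln L(0)},
\qquad
\bar\rho^2 = 1 - \frac{\ln L(\hat\beta) - K}{\ln L(0)}
```

Because ln L(0) is negative, subtracting K from the fitted log-likelihood lowers the index for models that spend more parameters, which makes the index comparable across non-nested models of different sizes.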

Book ChapterDOI
01 Jan 1983
TL;DR: In this chapter, parametric empirical Bayes confidence intervals are outlined; a simulation shows that the intervals ±s_i and ±1.96s_i contain the true parameter values in at least 68 percent and 95 percent of the cases, respectively.
Abstract: Publisher Summary This chapter outlines parametric empirical Bayes confidence intervals. Empirical Bayes modeling assumes that a distribution π for the parameters θ = (θ_1, …, θ_k) exists, with π taken from a known class Π of possible parameter distributions. Here Π is the class of independent N(u, A) distributions on R^k. The problem is called a parametric empirical Bayes problem because π ∈ Π is determined by the parameters (u, A) and so forms a parametric family of distributions. A simulation presented in the chapter was used to determine that the intervals ±s_i and ±1.96s_i contain the true values θ_i in at least 68 percent and 95 percent of the cases. Empirical Bayes estimators, or Stein's estimator, can lead to misestimation of components that the statistician or his clients care about when exchangeability in the prior distribution is implausible. The term empirical Bayes, which is usually reserved for nonparametric empirical Bayes problems, actually fits the parametric empirical Bayes case too. Empirical Bayes methods in general, and parametric empirical Bayes methods in particular, provide a way to utilize this additional information by obtaining more precise estimates and estimating their precision.

Journal ArticleDOI
Brian Golding1
TL;DR: In this paper, a numerical wave forecasting system combining the parametric technique for a growing wind-sea with a discrete spectral model in the swell regime is described; the main difficulty with the method is in the separation of wind-sea and swell required to do this.
Abstract: The considerable increase in requirements for sea state forecasts in recent years has led to development of a numerical wave forecasting system in the Meteorological Office. This is based on a wave prediction model which combines the advantages of the parametric technique in predicting a growing wind-sea with those of a discrete spectral model in the swell regime. This is done using a discrete model by parametrizing the nonlinear interactions term in the energy balance equation in a way that reproduces the behaviour of the parametric model. The main difficulty with the method is in the separation of wind-sea and swell required to do this. Propagation of wave energy is performed using an accurate form of the Lax-Wendroff integration scheme. A two-term representation of wave growth is used whilst dissipation is modelled by an explicit whitecapping mechanism. Shallow water effects are included by representations of shoaling, refraction and bottom friction. The operational numerical atmospheric model at the Meteorological Office provides the wind input. An extensive program of evaluation has shown that the results provide high quality guidance with 24-hour forecasts of wave height having a r.m.s. error ranging from 0·6 m in the southern North Sea to 1·0 m east of the Shetlands.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a model of a multi-product firm in which the possibilities of both technical and allocative inefficiency are incorporated in an econometrically useful way.
Abstract: In recent years a great deal of research has been directed to the modelling and measurement of technical and allocative efficiency in production. With few exceptions this research has been restricted to single-product firms. However, recent developments in duality theory have facilitated the extension of this research to multi-product firms. The main purpose of this paper is to develop a model of the multiproduct firm in which the possibilities of both technical and allocative inefficiency are incorporated in an econometrically useful way. The first model we develop includes a nonneutral type of technical inefficiency and three distinguishable types of allocative inefficiency: output mix, input mix, and scale. Each type of inefficiency is costly to the firm, in the sense that each causes a reduction in profit beneath the maximum value attainable under full efficiency. The cost of each type of inefficiency depends on the magnitude of the inefficiency and the structure of the underlying production technology. In the second model we develop, technical inefficiency remains nonneutral, but allocative inefficiency is not generally decomposable into output mix, input mix and scale components. However, both technical and allocative inefficiency remain costly to the firm, the cost of each type of inefficiency depending on its magnitude and the structure of the underlying production technology. We model the technology of a competitive profit maximizing multi-product firm with the dual profit function. This enables us to use Hotelling's Lemma to generate a system of profit maximizing output supply and input demand equations. These equations are then modified to allow for the possibility of technical and three types of allocative inefficiency. A virtue of using the profit function to represent production technology is that it permits a straightforward comparison of maximum profit under full efficiency with actual profit, and with the profit that would result from any combination of the four types of inefficiency. This enables us to allocate the cost of inefficiency to each of four components. Our model of inefficiency is parametric, and is embedded in a Generalized Leontief profit function, although any flexible specification of the profit function can be used. The model is developed in sections II-IV. Estimation of the model is considered in section V. An empirical example designed to illustrate the workings of the model is discussed in section VI. Section VII concludes.

Journal ArticleDOI
TL;DR: A method for interactive smoothing is outlined and a smoothing algorithm is described which is mathematically comparable to manual smoothing with a physical spline.
Abstract: The most common curve representation in CADCAM systems of today is the cubic parametric spline. Unfortunately this curve will sometimes oscillate and cause unwanted inflexions which are difficult to deal with. This paper has developed from the need to eliminate oscillations and remove inflexions from such splines, a need which may occur for example when interpolating data measured from a model. A method for interactive smoothing is outlined and a smoothing algorithm is described which is mathematically comparable to manual smoothing with a physical spline.

Journal ArticleDOI
TL;DR: In this paper, the problem of image segmentation is considered in the context of a mixture of probability distributions, where segments fall into classes and a probability distribution is associated with each class of segment.
Abstract: The problem of image segmentation is considered in the context of a mixture of probability distributions. The segments fall into classes. A probability distribution is associated with each class of segment. Parametric families of distributions are considered, a set of parameter values being associated with each class. With each observation is associated an unobservable label, indicating from which class the observation arose. Segmentation algorithms are obtained by applying a method of iterated maximum likelihood to the resulting likelihood function. A numerical example is given. Choice of the number of classes, using Akaike's information criterion (AIC) for model identification, is illustrated.
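
A minimal sketch of the iterated-maximum-likelihood idea for a two-class univariate Gaussian mixture, with AIC evaluated for the fitted model (our own illustration; the paper applies this machinery to image segments and also treats the choice of the number of classes):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.0, 200)])

# EM iterations for a two-component Gaussian mixture.
w, mu, sd = np.array([0.5, 0.5]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior class probabilities (the unobservable labels).
    dens = w * norm.pdf(x[:, None], mu, sd)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: reweighted maximum-likelihood updates of the parameters.
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

loglik = np.log((w * norm.pdf(x[:, None], mu, sd)).sum(axis=1)).sum()
aic = -2.0 * loglik + 2 * 5    # 5 free parameters: 2 means, 2 sds, 1 weight
print(mu, sd, aic)
```

Refitting with different numbers of classes and keeping the smallest AIC implements the model-identification step described above.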

Journal ArticleDOI
TL;DR: This paper discusses parametric piecewise-cubic functions, which are used throughout the computer graphics industry to represent curved shapes, and how this representation can be reliably derived from sampled data.
Abstract: Parametric piecewise-cubic functions are used throughout the computer graphics industry to represent curved shapes. For many applications, it would be useful to be able to reliably derive this repr...

Journal ArticleDOI
TL;DR: In this paper, a generalized method of closed-loop eigenstructure assignment via state feedback in a linear multivariable system with n states and r control inputs is presented; the class of assignable eigenvectors is characterized by a complete set of free parameter vectors.
Abstract: This paper generalizes a recently reported method [1] of closed-loop eigenstructure assignment via state feedback in a linear multivariable system (with n states and r control inputs). By introducing a lemma on the differentiation of determinants, the class of assignable eigenvectors and generalized eigenvectors associated with the assigned eigenvalues is explicitly described by a complete set of n r-dimensional free parameter vectors. This parametric characterization conveniently organizes the nonuniqueness of the solution of the eigenvalue-assignment problem and thereby provides an efficient means of further modifying the system dynamic response. A numerical example is worked out to demonstrate the feasibility of the method.
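
The parametric freedom can be made concrete: for a desired eigenvalue λ_i that is not an eigenvalue of A, a closed-loop eigenvector is v_i = (λ_i I − A)⁻¹ B f_i for a free r-dimensional parameter vector f_i, and the gain follows from K v_i = f_i. A minimal numerical sketch (not the paper's algorithm, which also handles generalized eigenvectors):

```python
import numpy as np

# System: n = 2 states, r = 2 inputs (a fully actuated double integrator).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.eye(2)
desired = [-1.0, -2.0]

# Free parameter vectors, one r-vector per assigned eigenvalue. Any choice
# that makes V invertible is admissible; this is the design freedom that the
# parametric characterization exposes (our choice here is arbitrary).
F = np.eye(2)

# v_i = (lam_i I - A)^{-1} B f_i, since (A + B K) v_i = lam_i v_i
# together with K v_i = f_i.
V = np.column_stack([
    np.linalg.solve(lam * np.eye(2) - A, B @ F[:, i])
    for i, lam in enumerate(desired)
])
K = F @ np.linalg.inv(V)
print(np.linalg.eigvals(A + B @ K))    # recovers the desired eigenvalues
```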

Book
13 Jan 1983
TL;DR: In this book, the author introduces basic statistical concepts, including measures of central tendency, measures of dispersion, the normal distribution, and probability, and presents simple statistical tests, among them two parametric tests and tests of goodness of fit.
Abstract: Acknowledgements 1. Why do we need statistics? 2. Measures of central tendency 3. Measures of dispersion 4. The normal distribution 5. Probability 6. What are statistical tests all about? 7. Hypotheses 8. Significance 9. Simple statistical tests 10. What's in a number 11. Two parametric tests 12. Tests of goodness of fit 13. The design of experiments 14. Sampling 15. Correlation 16. In the last analysis ... 17. Operation schedules Answers to exercises Appendix Index.

Journal ArticleDOI
TL;DR: Two examples of parametric cost programming problems—one in network programming and one in NP-hard 0-1 programming—are given; in each case, the number of breakpoints in the optimal cost curve is exponential in the square root of thenumber of variables in the problem.
Abstract: Two examples of parametric cost programming problems—one in network programming and one in NP-hard 0-1 programming—are given; in each case, the number of breakpoints in the optimal cost curve is exponential in the square root of the number of variables in the problem.

01 Jan 1983
TL;DR: In this dissertation, linear and combinatorial problems in which the cost is a linear function of a parameter λ are considered; the complexity of such problems is measured by the number of slope changes in the optimal cost curve.
Abstract: In this dissertation, linear and combinatorial problems in which the cost is a linear function of a parameter (λ) are considered. The complexity of such problems can be measured by the number of slope changes in the optimal cost curve. It is shown that: (1) In linear programming, the number of slope changes is, in the worst case, exponential in the number of variables, regardless of the size of the data. However, in 0-1 programming, the number of breakpoints is polynomial in n and in the maximum absolute data element. (2) In the parametric cost minimum cost flow, the parametric capacity maximum flow, and the parametric cost minimum cut problems, the number of slope changes is exponential in the number of nodes in the network in the worst case. However, the number of slope changes is polynomial in n and in the maximum data element. (3) In the parametric cost knapsack problem and many other NP-complete 0-1 problems, the number of slope changes may be as large as 2^(√n) − 1, where n is the number of variables. However, the number of slope changes is polynomial in n and in the maximum data element. (4) Finally, in the parametric cost shortest chain problem, the number of slope changes may be n^(D log n), where D is a positive number and n is the number of nodes. Again, the number of breakpoints is polynomial in n and in the maximum data element. It is also shown that the intermediate feasibility problem is NP-complete.

Book ChapterDOI
TL;DR: In this chapter, a class of singularly perturbed delay-differential equations which, among other applications, describes an optical bistable device is studied; in particular, chaotic behavior for certain parametric values is investigated.
Abstract: This paper studies a class of singularly perturbed delay-differential equations which, among other applications, describes an optical bistable device. In particular, the paper investigates chaotic behavior for certain parametric values. Earlier studies by others have predicted such results numerically and confirmed them experimentally.
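
As a sketch of the kind of system involved, the following simulates an Ikeda-type equation εx′(t) = −x(t) + μ sin(x(t − 1) − x₀), a standard singularly perturbed delay-differential model of optical bistability (the specific class studied in this chapter may differ); the delayed term is read from the stored trajectory and the integration is plain fixed-step Euler:

```python
import numpy as np

# eps * x'(t) = -x(t) + mu * sin(x(t - 1) - x0)   (assumed Ikeda-type form)
eps, mu, x0 = 0.05, 4.0, 1.0
dt = 0.001
delay = int(1.0 / dt)                  # the delay is 1 in these time units

x = np.empty(200 * delay)
x[:delay + 1] = 0.5                    # constant initial history on [-1, 0]

# Fixed-step Euler; x[i - 1 - delay] is the delayed state x(t - dt - 1).
for i in range(delay + 1, len(x)):
    x[i] = x[i - 1] + (dt / eps) * (-x[i - 1]
                                    + mu * np.sin(x[i - 1 - delay] - x0))

print(x[-5:])   # late-time samples of the (typically irregular) orbit
```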

Book ChapterDOI
01 Jan 1983
TL;DR: In this chapter, parametric spectral estimates based on mixed autoregressive-moving average models are discussed for time series; after the parameters of the model have been evaluated, diagnostic checking should be used to examine the agreement between the model and the available data, for instance by applying goodness-of-fit tests.
Abstract: Publisher Summary In most fields of science and engineering, time series occur often, that is, series of observations that depend on a discrete or continuous time parameter and fluctuate in a disorderly fashion in time. Series of this kind cannot be reasonably described deterministically and should be studied through statistical methods only. Parametric spectral estimates based on mixed autoregressive-moving average models are not as popular as autoregressive estimates, but they also have been used in a dozen works. Estimation through some parametric model fitting might be studied in this case, along with more usual nonparametric spectral estimation. After the parameters of the model have been evaluated, it is advisable to employ some method of diagnostic checking to examine the agreement between the model and the available data, that is, to apply some goodness-of-fit tests.
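
The autoregressive special case is the easiest to sketch: fit AR(p) coefficients by the Yule-Walker equations and plug them into the rational spectral density (a pure AR illustration of our own, rather than the mixed ARMA estimates discussed):

```python
import numpy as np

def yule_walker(x, p):
    """AR(p) coefficients a_1..a_p and innovation variance via Yule-Walker."""
    x = x - x.mean()
    r = np.array([x[:len(x) - k] @ x[k:] / len(x) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:])
    return a, r[0] - a @ r[1:]

def ar_spectrum(a, sigma2, freqs):
    """Parametric density S(f) = sigma2 / |1 - sum_k a_k e^{-2 pi i f k}|^2."""
    k = np.arange(1, len(a) + 1)
    denom = np.abs(1.0 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ a) ** 2
    return sigma2 / denom

rng = np.random.default_rng(4)
x = np.zeros(4000)                     # simulate an AR(2) with a spectral peak
for t in range(2, len(x)):
    x[t] = 1.5 * x[t - 1] - 0.9 * x[t - 2] + rng.normal()

a, s2 = yule_walker(x, p=2)
print(a, ar_spectrum(a, s2, np.linspace(0.01, 0.5, 5)))
```

Comparing such a parametric estimate with a nonparametric periodogram-based one, and then diagnostic-checking the fitted model, is the kind of workflow the chapter describes.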

Journal ArticleDOI
TL;DR: The optimal model has parameter values and an impedance spectrum corresponding satisfactorily with real data and gives a clear picture of the internal hemodynamic behavior of PAT as an impedance matching device.

Journal ArticleDOI
TL;DR: In this article, a method for joint analysis of reaction times and same-different judgments is discussed, where a set of stimuli is assumed to have some parametric representation which uniquely defines dissimilarities between the stimuli.
Abstract: A method for joint analysis of reaction times and same-different judgments is discussed. A set of stimuli is assumed to have some parametric representation which uniquely defines dissimilarities between the stimuli. Those dissimilarities are then related to the observed reaction times and same-different judgments through a model of psychological processes. Three representation models of dissimilarities are considered, the Minkowski power distance model, the linear model, and Tversky's feature matching model. Maximum likelihood estimation procedures are developed and implemented in the form of a FORTRAN program. An example is given to illustrate the kind of analyses that can be performed by the proposed method.

Journal ArticleDOI
TL;DR: An empirical approach to studying the nature of the distances between scale points was developed, using responses to a clinical performance evaluation instrument that uses a four-point behaviorally-anchored scale.
Abstract: The analysis of data collected from behavioral assessment instruments is typically conducted using parametric statistics, with little or no reference given to the underlying nature of the scale being used. If the nature of the distances between the scale points is not understood, the concept of normality of the distribution becomes clouded. An empirical approach to studying this problem was developed, using responses to a clinical performance evaluation instrument that uses a four-point behaviorally-anchored scale. Various combinations of nonlinear transformations were applied to the evaluation responses. The factorial structure of the fifteen items constituting the evaluation form was minimally affected by the transformations, suggesting that parametric statistics can be applied to behaviorally-anchored rating scales.