
Showing papers in "British Journal of Mathematical and Statistical Psychology in 1976"


Journal Article
TL;DR: The quadratic assignment paradigm developed in operations research is discussed as a general approach to data analysis tasks characterized by the use of proximity matrices, and an extensive set of numerical examples is given illustrating the application of the search procedure to hierarchical clustering, the identification of homogeneous object subsets, linear and circular seriation, and a discrete version of multidimensional scaling.
Abstract: The quadratic assignment paradigm developed in operations research is discussed as a general approach to data analysis tasks characterized by the use of proximity matrices. Data analysis problems are first classified as being either static or non-static. The term ‘static’ implies the evaluation of a detailed substantive hypothesis that is posited without the aid of the actual data. Alternatively, the term ‘non-static’ suggests a search for a particular type of relational structure within the obtained proximity matrix and without the statement of a specific conjecture beforehand. Although the static class of problems is directly related to several inference procedures commonly used in classical statistics, the major emphasis in this paper is on applying a general computational heuristic to attack the non-static problem and on using the quadratic assignment orientation to discuss a variety of research tactics of importance in the behavioral sciences and, particularly, in psychology. An extensive set of numerical examples is given illustrating the application of the search procedure to hierarchical clustering, the identification of homogeneous object subsets, linear and circular seriation, and a discrete version of multidimensional scaling.
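To convey the flavour of this orientation, a cross-product index of the form Σᵢⱼ p(i,j)·t(ρ(i),ρ(j)) can be evaluated for any permutation of the objects. The sketch below is a toy brute-force search in Python, not the paper's computational heuristic; all names and the example matrices are invented for illustration, with a linear-seriation target matrix of position differences:

```python
from itertools import permutations

def qap_index(prox, target, perm):
    """Cross-product index: sum over i != j of prox[i][j] * target[perm[i]][perm[j]]."""
    n = len(prox)
    return sum(prox[i][j] * target[perm[i]][perm[j]]
               for i in range(n) for j in range(n) if i != j)

def best_permutation(prox, target):
    """Exhaustive search over object orders; feasible only for very small n."""
    return max(permutations(range(len(prox))),
               key=lambda p: qap_index(prox, target, p))

# Toy linear-seriation target: entry |i - j| rewards placing highly
# dissimilar objects far apart in the ordering.
prox = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]    # symmetric dissimilarities
target = [[abs(i - j) for j in range(3)] for i in range(3)]
print(best_permutation(prox, target))
```

For realistic numbers of objects the exhaustive search is replaced by the kind of local-search heuristic the paper discusses, since the permutation space grows factorially.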

831 citations


Journal Article
TL;DR: Regression component decompositions are defined as a special class of component decompositions where the pattern contains the regression weights for predicting the observed variables from the latent variables.
Abstract: Regression component decompositions (RCD) are defined as a special class of component decompositions where the pattern contains the regression weights for predicting the observed variables from the latent variables. Compared to factor analysis, RCD has a broader range of applicability, greater ease and simplicity of computation, and a more logical and straightforward theory. The usual distinction between factor analysis as a falsifiable model, and component analysis as a tautology, is shown to be misleading, since a special case of regression component decomposition can be defined which is not only falsifiable, but empirically indistinguishable from the factor model.

80 citations


Journal Article
TL;DR: The data analysis problem of ordering or sequencing a set of objects using an asymmetric proximity function is reviewed in this paper, with an emphasis on literature not generally referenced in psychology, and a simple numerical example that illustrates some of the basic concepts is presented in the course of the discussion.
Abstract: The data analysis problem of ordering or sequencing a set of objects using an asymmetric proximity function is reviewed, with an emphasis on literature not generally referenced in psychology. In particular, an attempt is made to present some of the more important approaches to the asymmetric seriation task that have been developed independently under a number of different names, for instance, the triangularization of input-output matrices, minimum feedback arc sets, maximum likelihood paired comparison ranking, majority rule under transitivity constraints, and so on. A simple numerical example that illustrates some of the basic concepts is presented in the course of the discussion.

58 citations


Journal Article
TL;DR: A general mathematical model for sudden outbreaks of disorder in an institution is outlined; the model suggests why the policy of ‘playing it cool’ is generally likely to be successful.
Abstract: Disturbances in institutions are often thought to be due to special local circumstances. This paper outlines a general mathematical model for sudden outbreaks of disorder in an institution. The model is illustrated by applying it to the escalating sequence of events at Gartree Prison during 1972. Although the approach is largely theoretical some suggestions are made about the prediction and handling of disorder. In particular, the model suggests why the policy of ‘playing it cool’ is generally likely to be successful.

53 citations


Journal Article
TL;DR: Seven models relating similarity in pairs of two-dimensional stimuli are stated; three derive from distance measures and four from so-called content models, and a normed distance model was found to be marginally superior to alternatives.
Abstract: Seven models relating similarity in pairs of two-dimensional stimuli are stated; three derive from distance measures and four from so-called content models. The metatheoretical status of assumptions underlying the various models is outlined. Two experiments, one using Dot pattern stimuli and the other Brick Wall patterns, on 54 and 50 subjects respectively, were chosen for their relation to previously published work and analysed using all of the seven models. A normed distance model, which can be equivalently expressed as a content model but not as a Minkowski distance model, was found to be marginally superior to alternatives. Under some conditions the predictions of the alternative models are practically indistinguishable, but a Euclidean distance model is inferior in this context. Implications for scaling and modelling are noted.

45 citations


Journal Article
TL;DR: A mathematical model is proposed, based on catastrophe theory, to describe the qualitative effect of stress upon the neural mechanisms used for making judgements, such as estimating speed.
Abstract: A mathematical model is proposed, based on catastrophe theory, to describe the qualitative effect of stress upon the neural mechanisms used for making judgements, such as estimating speed. The model is used quantitatively to fit data, and to explain the cusp-shaped results of Drew et al. (1959), showing that introverts under alcohol tend to drive either too fast or too slow in a driving simulator. Experiments are suggested in which discontinuous jumps in perception of continuous variables like speed might well appear.

20 citations


Journal Article
TL;DR: In this article, given an independent random sample from a p-variate normal population, a procedure is proposed for testing the equality of the diagonal elements of the covariance matrix, with no restrictive assumptions concerning its off-diagonal elements.
Abstract: Given an independent random sample from a p-variate normal population, a procedure is proposed for testing the equality of the diagonal elements of the covariance matrix with no restrictive assumptions concerning its off-diagonal elements. The procedure is simulated utilizing Monte Carlo techniques, and a method for post hoc probing is also suggested.

18 citations


Journal Article
TL;DR: The linear programming solution of the maximum likelihood ranking from paired comparisons problem, due to DeCani (1969), is considered and it is suggested that the algorithm is usually efficient for k-replicated paired comparisons data but not for the single replicate experiment.
Abstract: The linear programming solution of the maximum likelihood ranking from paired comparisons problem, due to DeCani (1969), is considered. It is noted that Boolean methods exist for the solution of linear programmes in binary variables. The pseudo-Boolean programming method of Hammer & Rudeanu (1968) is applied to DeCani's formulation and also to the problem of finding Slater's i. It is suggested that the algorithm is usually efficient for k-replicated paired comparisons data but not for the single replicate experiment.
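The combinatorial core of these formulations is small enough to sketch: Slater's i is the minimum number of paired-comparison ‘upsets’ over all possible rankings. The Python below finds it by brute force over permutations; it is purely illustrative and is not the pseudo-Boolean programming method of Hammer & Rudeanu, and all names are invented:

```python
from itertools import permutations

def upsets(wins, order):
    """Count preferences contradicted by an order: i beat j but follows j."""
    pos = {obj: k for k, obj in enumerate(order)}
    return sum(1 for i in wins for j in wins[i] if pos[i] > pos[j])

def slater_ranking(objects, wins):
    """Ranking minimizing upsets; the minimum itself is Slater's i."""
    best = min(permutations(objects), key=lambda o: upsets(wins, o))
    return best, upsets(wins, best)

# A three-object cycle: every possible ranking leaves exactly one upset.
wins = {'A': {'B'}, 'B': {'C'}, 'C': {'A'}}
print(slater_ranking('ABC', wins))
```

The factorial search space is exactly why Boolean and linear-programming reformulations of the problem matter for anything beyond a handful of objects.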

14 citations


Journal Article
TL;DR: In this article, a method of testing for significant relationships between multidimensional configurations is presented, based on an ‘index of invariance’ defined as the correlation between the inter-point distances from one configuration and the corresponding inter-point distances from a second configuration.
Abstract: A method of testing for significant relationships between multidimensional configurations is presented. The ‘index of invariance’ is defined as the correlation between the inter-point distances from one configuration and the corresponding inter-point distances from a second configuration. Empirical distributions of the index are developed for varying numbers of points and dimensions. By fitting the Gram-Charlier family of curves to these data, additional distributions of the index are estimated based on two parameters, namely, the number of points and the number of dimensions. Tables of significance are presented for the index in terms of the two parameters. Applications based on ordinal multidimensional scaling and other multivariate techniques are discussed.
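The index itself is straightforward to compute; a minimal Python sketch follows (function names are mine, and the empirical significance tables are of course not reproduced). A configuration that is a rotated and rescaled copy of another yields an index of 1:

```python
import math

def pairwise_distances(points):
    """Flattened upper triangle of Euclidean inter-point distances."""
    n = len(points)
    return [math.dist(points[i], points[j])
            for i in range(n) for j in range(i + 1, n)]

def index_of_invariance(config_a, config_b):
    """Pearson correlation between corresponding inter-point distances."""
    x, y = pairwise_distances(config_a), pairwise_distances(config_b)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

a = [(0, 0), (1, 0), (3, 0)]
b = [(0, 0), (0, 2), (0, 6)]    # rotated, rescaled copy of a
print(index_of_invariance(a, b))
```

Because the index is distance-based, it is invariant to rotation, translation and uniform scaling of either configuration, which is what makes it suitable for comparing scaling solutions.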

12 citations


Journal Article
TL;DR: In this article, the robustness of signal detection theory (SDT) is investigated with respect to the form of the underlying distributions, and it is shown that the SDT model with uniform distributions yields non-significant goodness-of-fit statistics for many sets of data.
Abstract: The robustness of signal detection theory (SDT) is investigated with respect to the form of the underlying distributions. Usually these distributions are taken to be normal; here an SDT model based on two overlapping uniform (rectangular) distributions is examined, for the Yes/No experiment and the rating-method experiment. In the Yes/No case the SDT measure (using uniform distributions) is found to be equivalent to a measure recently proposed by Hammerton & Altham (1971), and, from contingency-table considerations, it is a measure likely to give similar conclusions to the SDT measure using normal distributions. In the rating-method case it is surprising to find that the SDT model with uniform distributions yields non-significant goodness-of-fit statistics for many sets of data.
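A minimal sketch of such a model can be written down directly (illustrative Python, assuming noise ~ U(0, 1) and signal ~ U(d, 1+d) with 0 ≤ d ≤ 1; the rating-method analysis is not reproduced). Sweeping the criterion c across the overlap region gives hit rate = false-alarm rate + d, i.e. a linear ROC segment:

```python
def roc_uniform(d, num=5):
    """ROC points for noise ~ U(0,1), signal ~ U(d, 1+d), 0 <= d <= 1.
    Sweep the decision criterion c and record (false-alarm, hit) rates."""
    def tail(lo, c):                       # P(U(lo, lo+1) > c)
        return min(1.0, max(0.0, lo + 1.0 - c))
    points = []
    for k in range(num + 1):
        c = d + (1.0 - d) * k / num        # criteria spanning the overlap
        points.append((tail(0.0, c), tail(d, c)))
    return points

# With d = 0.5 every point on the swept segment satisfies hit = fa + 0.5.
print(roc_uniform(0.5, num=2))
```

The straight-line ROC contrasts with the curved ROC implied by overlapping normal distributions, which is what makes the comparison between the two SDT variants empirically interesting.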

11 citations


Journal Article
TL;DR: In this article, Monte Carlo determined factor structures are used to derive the basic random level of fit when restricted maximum likelihood factor analysis is used to evaluate the fit of a particular hypothetical model.
Abstract: Monte Carlo determined factor structures are used to derive the basic random level of fit when restricted maximum likelihood factor analysis is used to evaluate the fit of a particular hypothetical model. The performance of the hypothesized model is then assessed in relation to the maximal fit obtainable and the basic random level of fit.

Journal Article
TL;DR: The approach described in the present paper, hierarchical grouping analysis (HGA), appears to meet criticisms of previous approaches and the possibilities extended by using HGA are demonstrated in an analysis of some experimental data which examines the relationship between semantic inter-item associations and recall structure.
Abstract: Previous approaches to the assessment of subjective organization in multi-trial free recall (MTFR) have depended upon the identification of sequential structure which is stable over two successive recall trials. While these techniques give adequate estimates of the degree of subjective organization they are incapable of revealing its content for two reasons. Firstly, the resultant characterizations contain considerable redundancy; and, secondly, they are highly susceptible to the operation of chance factors. The approach described in the present paper, hierarchical grouping analysis (HGA), appears to meet these criticisms. The possibilities extended by using HGA are demonstrated in an analysis of some experimental data which examines the relationship between semantic inter-item associations and recall structure. Finally, the possibility of applying HGA in other domains is discussed.

Journal Article
TL;DR: This work attempts to state versions of the anxiety, feedback, and conflict theories of stuttering as mathematical models which may be tested against the structure of sequences of stuttered and non-stuttered words.
Abstract: Stuttered speech may be reduced to a sequence of stuttered and non-stuttered words, and is then readily amenable to statistical analysis. We attempt to state versions of the anxiety, feedback, and conflict theories of stuttering as mathematical models which may be tested against the structure of sequences of stuttered and non-stuttered words. In this way we hope to produce a means of deciding between the many theories of stuttering. Data from a single subject are used to illustrate the method.


Journal Article
TL;DR: In this article, the authors propose methods for making all possible pairwise contrasts among the parameters of k independent binomial populations based upon information obtained by the randomized response technique, and present the results of a small Monte Carlo study in support of the proposed methods.
Abstract: Methods are proposed for making all possible pairwise contrasts among the parameters of k independent binomial populations based upon information obtained by the randomized response technique. In addition, the results of a small Monte Carlo study are presented in support of the methods presently being proposed.

Journal Article
TL;DR: In this article, the authors used Johnson's sharp upper bound on the variance of the Wilcoxon-Mann-Whitney U-statistic for continuous symmetric distributions and shift alternatives.
Abstract: Using Johnson's (1975) sharp upper bound on the variance of the Wilcoxon-Mann-Whitney U-statistic for the case of continuous symmetric distributions and shift alternatives, an upper bound depending only on the sample sizes is obtained. The new bound is smaller than the corresponding bound given by van Dantzig (1951) for a more general model. The decrease is smallest for equal sample sizes, in which case it ranges from 16 to 29 percent as the sample sizes increase.
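For orientation, the U-statistic itself and its standard no-ties null variance mn(m+n+1)/12 can be sketched as follows (illustrative Python with invented names; the sharpened bounds of Johnson and van Dantzig are not reproduced here):

```python
def mann_whitney_u(x, y):
    """U = number of pairs (xi, yj) with xi > yj; ties count one half."""
    return sum((a > b) + 0.5 * (a == b) for a in x for b in y)

def null_variance(m, n):
    """Variance of U under H0 for continuous distributions (no ties)."""
    return m * n * (m + n + 1) / 12

# Every x-observation exceeds every y-observation, so U = m * n = 4.
print(mann_whitney_u([3, 4], [1, 2]), null_variance(2, 2))
```

The bounds discussed in the abstract concern the variance of U under shift alternatives rather than under H0, where no distribution-free exact value exists.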

Journal Article
TL;DR: In this paper, a Monte Carlo study of the distributions of 12 asymmetric association measures is reported, and in particular their basic properties such as mean, variance, skewness and closeness to normal shape provide valuable guides as to the expected performance of each of the association measures, at a given level of theoretical association in the underlying population.
Abstract: Comparatively little research has been reported on the nature of the distributions of existing association measures. In this paper, a Monte Carlo study of the distributions of 12 asymmetric association measures is reported. The derived distributions, and in particular their basic properties such as mean, variance, skewness and closeness to normal shape, provide valuable guides as to the expected performance of each of the association measures, at a given level of theoretical association in the underlying population.

Journal Article
TL;DR: A simple learning model is developed for signal detection tasks that replaces the cut-off decision rule of signal detection theory by a probabilistic rule and predicts probability matching under some circumstances.
Abstract: A simple learning model is developed for signal detection tasks. The model replaces the cut-off decision rule of signal detection theory by a probabilistic rule. The ROC curves predicted are indistinguishable from those predicted by signal detection theory. The model has the added advantage however of describing well ‘non-optimal’ behaviour and predicts probability matching under some circumstances.

Journal Article
TL;DR: The coefficient of hark-back may be interpreted variously as a measure of the working style, of the complexity of information-processing, or of the searcher's inherent limitations, depending upon the nature of the task.
Abstract: A measure is described which attempts to quantify one aspect of search behaviour, namely, the frequency of referring back to earlier parts of the search pattern or ‘harking back’. In a dialogue, harking back corresponds to bringing a topic up again when it was mentioned earlier; in a problem-solving task, it corresponds to putting aside one line and picking up again from an earlier position. The coefficient of hark-back may be interpreted variously as a measure of the working style, of the complexity of information-processing, or of the searcher's inherent limitations, depending upon the nature of the task. Analysis of data from an interviewing study, where harking back was deemed to show complex decision-making, gave encouraging results.