Author

J. Douglas Carroll

Bio: J. Douglas Carroll is an academic researcher from Bell Labs. The author has contributed to research on topics including Multidimensional scaling and Cluster analysis. The author has an h-index of 25 and has co-authored 51 publications receiving 8,734 citations. Previous affiliations of J. Douglas Carroll include Saint Petersburg State University & University of Pennsylvania.


Papers
Journal ArticleDOI
TL;DR: In this paper, an individual differences model for multidimensional scaling is outlined in which individuals are assumed differentially to weight the several dimensions of a common "psychological space", and a corresponding method of analyzing similarities data is proposed, involving a generalization of Eckart-Young analysis to decomposition of three-way (or higher-way) tables.
Abstract: An individual differences model for multidimensional scaling is outlined in which individuals are assumed differentially to weight the several dimensions of a common “psychological space”. A corresponding method of analyzing similarities data is proposed, involving a generalization of “Eckart-Young analysis” to decomposition of three-way (or higher-way) tables. In the present case this decomposition is applied to a derived three-way table of scalar products between stimuli for individuals. This analysis yields a stimulus by dimensions coordinate matrix and a subjects by dimensions matrix of weights. This method is illustrated with data on auditory stimuli and on perception of nations.
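To make the structure of the model concrete, the following is a minimal NumPy sketch of the individual-differences idea described in the abstract, not the authors' algorithm: each subject's scalar-product matrix is approximated as X·diag(w_i)·X^T, with a common stimulus space X and nonnegative subject weights w_i. The group space here is fixed from a classical scaling of the averaged data and the toy data are invented; a full INDSCAL fit would also update X, for example by CANDECOMP-style alternating least squares.

```python
# Simplified individual-differences scaling sketch (illustrative, not the
# published INDSCAL algorithm): B_i ~ X @ diag(w_i) @ X.T.
import numpy as np

def double_center(D2):
    """Convert a matrix of squared distances to scalar products (Torgerson centering)."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ D2 @ J

def fit_weights(B_list, n_dims=2):
    """Fit per-subject dimension weights for a common group space X."""
    B_mean = np.mean(B_list, axis=0)
    # Group space from the leading eigenvectors of the averaged scalar products.
    vals, vecs = np.linalg.eigh(B_mean)
    order = np.argsort(vals)[::-1][:n_dims]
    X = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
    # One column per dimension: vec(x_r x_r^T); weights by least squares.
    Z = np.column_stack([np.outer(X[:, r], X[:, r]).ravel() for r in range(n_dims)])
    W = np.array([np.linalg.lstsq(Z, B.ravel(), rcond=None)[0] for B in B_list])
    return X, np.clip(W, 0.0, None)   # keep subject weights nonnegative

# Toy usage: three hypothetical subjects judging four stimuli.
rng = np.random.default_rng(0)
X_true = rng.normal(size=(4, 2))
B_list = []
for w in ([1.0, 0.2], [0.3, 1.0], [0.8, 0.8]):
    B = X_true @ np.diag(w) @ X_true.T
    d2 = np.add.outer(np.diag(B), np.diag(B)) - 2 * B   # squared distances
    B_list.append(double_center(d2))
X, W = fit_weights(np.array(B_list))
print("fitted subject weights:\n", W)
```

The fitted weights are expressed relative to the eigenvector group space, so they differ from the generating weights by a rotation and scaling; the point of the sketch is only the two-matrix structure (stimulus coordinates plus subject weights) that the abstract describes.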

4,520 citations

Book
01 Jan 1978

1,561 citations

Journal ArticleDOI
TL;DR: This paper describes some of the features of POSSE (Product Optimization and Selected Segment Evaluation), a general procedure for optimizing product/service designs in marketing research that uses input data based on conjoint analysis methods.
Abstract: This paper describes some of the features of POSSE (Product Optimization and Selected Segment Evaluation), a general procedure for optimizing product/service designs in marketing research. The appr...

356 citations

Journal ArticleDOI
TL;DR: This editorial comments on those parts of the psychometrician's tool kit that seem most applicable to marketing researchers, including conjoint analysis and the problems that have motivated its more recent contributions to marketing research.
Abstract: Marketing research, similar to the business disciplines in general, has been a long-time borrower of models, tools, and techniques from other sciences. Economists, statisticians, and operations researchers have made significant contributions to marketing, particularly in prescriptive model building. Over the past 30 years, psychometricians and mathematical psychologists have also provided a bounty of research riches in measurement and data analysis techniques. Our editorial comments on those parts of the psychometrician's tool kit that seem most applicable to marketing researchers. Our purview is limited. For example, we do not discuss covariance structure analysis and latent trait models, despite their popularity and utility, and we present only limited coverage of the subareas that we do survey. Here, we focus on conjoint analysis, discussing it in terms of the problems that have motivated its more recent contributions to marketing research. In subsequent editorials, we will consider multidimensional scaling and cluster analysis. Currently, conjoint analysis and the related technique of experimental choice analysis represent the most widely applied methodologies for measuring and analyzing consumer preferences. Note that the seminal theoretical contribution to conjoint analysis was made by Luce, a mathematical psychologist, and Tukey, a statistician (Luce and Tukey 1964). Early psychometric contributions to nonmetric conjoint analysis were also made by Kruskal (1965), Roskam (1968), Carroll (1969, 1973), and Young (1972). The evolution of conjoint analysis in marketing research and practice has been extensively documented in reviews by Green and Srinivasan (1978, 1990), Wittink and Cattin (1989), and Wittink, Vriens, and Burhenne (1994). In addition, Green and Krieger (1993) have surveyed conjoint methodology from the standpoint of new product design and optimization.

284 citations

BookDOI
01 Jan 1987
TL;DR: This volume takes up where multidimensional scaling leaves off: it assumes a working knowledge of the material covered in the earlier volumes and goes well beyond it, providing a sophisticated overview and analysis of three-way scaling procedures.
Abstract: This volume takes up where multidimensional scaling leaves off: it assumes a working knowledge of the material covered in the earlier volumes and goes well beyond it. It begins with a review and application of the INDSCAL model. The authors open their discussion with an example of the use of the INDSCAL model, then present the model itself, and finally return to another extended example. The initial example is the Rosenberg-Kim study of English kinship terms; the presentation of the model and its concepts grows nicely from this context. A second example firms up the reader's understanding of the model. After covering the INDSCAL model, the authors present a detailed analysis of SINDSCAL and provide an introduction to other three-way scaling models as well as individual differences clustering models. The Rosenberg and Kim data are used again to illustrate the INDCLUS clustering model. A series of appendices provides readers with the control cards to analyze one of the examples using SINDSCAL and discusses several procedures for fitting the INDSCAL model. This monograph provides a sophisticated overview and analysis of three-way scaling procedures.

248 citations


Cited by
Journal ArticleDOI
TL;DR: A new method for automatic indexing and retrieval that takes advantage of implicit higher-order structure in the association of terms with documents ("semantic structure") in order to improve the detection of relevant documents on the basis of terms found in queries.
Abstract: A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents ("semantic structure") in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term-by-document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. Initial tests find this completely automatic method for retrieval to be promising.
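The retrieval scheme the abstract describes can be sketched in a few lines of NumPy. The toy term-by-document matrix, the query terms, and the two retained factors below are invented for illustration; the paper's system uses on the order of 100 factors and weighted term frequencies, which are omitted here.

```python
# Minimal latent-semantic-indexing sketch: truncated SVD of a term-by-document
# matrix, query folded in as a pseudo-document, documents ranked by cosine.
import numpy as np

# Toy term-by-document count matrix A (rows = terms, columns = documents).
terms = ["scaling", "cluster", "tensor", "market", "consumer"]
A = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 2, 1, 0],
              [0, 0, 1, 2],
              [0, 0, 0, 1]], dtype=float)

k = 2                                         # number of retained factors
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T     # truncated factors
doc_vecs = Vk * sk                            # documents in the k-dim factor space

def query_vector(query_terms):
    """Fold a query in as a pseudo-document in the factor space."""
    q = np.array([1.0 if t in query_terms else 0.0 for t in terms])
    return (q @ Uk) / sk                      # project, scale by inverse singular values

def retrieve(query_terms):
    """Rank documents by cosine similarity to the query's pseudo-document vector."""
    qv = query_vector(query_terms)
    sims = doc_vecs @ qv / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(qv))
    return np.argsort(sims)[::-1], sims

ranking, sims = retrieve({"tensor", "scaling"})
print("document ranking:", ranking, "cosines:", np.round(sims, 3))
```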

12,443 citations

01 Jan 1988

9,439 citations

Journal ArticleDOI
TL;DR: This survey provides an overview of higher-order tensor decompositions, their applications, and available software.
Abstract: This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.
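As a concrete illustration of the CP (CANDECOMP/PARAFAC) model the survey describes, here is a minimal alternating-least-squares fit in plain NumPy. The tensor shape, rank, and iteration count are illustrative choices, and practical work would normally use one of the toolboxes named in the abstract.

```python
# Minimal CP decomposition by alternating least squares (illustrative sketch).
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(mats):
    """Column-wise Kronecker (Khatri-Rao) product of a list of matrices."""
    R = mats[0].shape[1]
    out = mats[0]
    for M in mats[1:]:
        out = np.einsum('ir,jr->ijr', out, M).reshape(-1, R)
    return out

def cp_als(T, rank, n_iter=200, seed=0):
    """Fit T as a sum of `rank` rank-one tensors by alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.normal(size=(dim, rank)) for dim in T.shape]
    for _ in range(n_iter):
        for n in range(T.ndim):
            others = [factors[m] for m in range(T.ndim) if m != n]
            # Normal equations: Hadamard product of the other factors' Gram matrices.
            gram = np.ones((rank, rank))
            for M in others:
                gram *= M.T @ M
            factors[n] = unfold(T, n) @ khatri_rao(others) @ np.linalg.pinv(gram)
    return factors

# Toy usage: recover a rank-2 structure from a 4 x 5 x 3 tensor.
rng = np.random.default_rng(1)
A, B, C = (rng.normal(size=(d, 2)) for d in (4, 5, 3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
factors = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', *factors)
print("relative reconstruction error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```

ALS is the workhorse fitting method for CP but is not guaranteed to reach a global optimum; the reconstruction error printed above simply indicates how well the random initialization converged on this toy problem.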

9,227 citations

Book
01 Jan 1988

8,586 citations