
Showing papers by "Paul Andrew Bottomley published in 2001"


Journal Article
TL;DR: The authors investigate the empirical generalizability of Aaker and Keller's model of how consumers evaluate brand extensions and find evidence that the contribution of each model component varies by brand and culture.
Abstract: The authors investigate the empirical generalizability of Aaker and Keller’s model of how consumers evaluate brand extensions. Various replications have reported different results. Using a comprehensive data set containing the data from the original study and seven replications conducted around the world, the authors undertake a secondary analysis to understand what generalizations emerge. The study has implications for the understanding of how brand extensions are evaluated and how empirical generalizations are made. For brand extensions, Aaker and Keller’s model hypothesizes that evaluations of brand extensions are based on the quality of the original brand, the fit between the parent and extension categories, and the interaction of the two. The authors find support for this full model despite published results, including Aaker and Keller’s own, that support only some of the hypotheses. The authors find evidence that the level of contribution of each of these components varies by brand and culture. With respect to empirical generalizations, the key implication is that it is premature to draw firm conclusions about a theory on the basis of only one study.
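
The hypothesized structure can be sketched as a regression with an interaction term; the notation below is illustrative only and is not taken from the paper.

```latex
% Illustrative sketch: coefficient symbols are assumptions, not the paper's notation.
\mathrm{ExtensionEvaluation} \;=\; \beta_0
  + \beta_1\,\mathrm{ParentQuality}
  + \beta_2\,\mathrm{Fit}
  + \beta_3\,(\mathrm{ParentQuality}\times\mathrm{Fit})
  + \varepsilon
```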

386 citations


Journal Article
TL;DR: The three weight elicitation methods are shown to have very distinct “signatures”, that is, profiles relating weights to rank position, and people actually preferred using Max100 and DR rather than Min10, an important pragmatic consideration.
Abstract: This paper compares the properties and performance of three weight elicitation methods. It is in effect a second-round contest in which the Bottomley et al. (2000) champion, direct rating (DR), locks horns with two new challengers. People using DR rate each attribute in turn on a scale of 0–100, whilst people using Max100 first assign the most important attribute(s) a rating of 100, and then rate the other attributes relative to it/them. People using Min10 first assign the least important attribute(s) a rating of 10, and then rate the other attributes relative to it/them. The weights produced by Max100 were somewhat more test–retest reliable than those produced by DR. Both methods were considerably more reliable than Min10. Using people's test–retest data as attribute weights on simulated alternative values in a multi-attribute choice scenario, the same alternative would be chosen on 91% of occasions using Max100, 87% of occasions using DR, but only 75% of occasions using Min10. Moreover, the three methods are shown to have very distinct “signatures”, that is, profiles relating weights to rank position. Finally, people actually preferred using Max100 and DR rather than Min10, an important pragmatic consideration.
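
As a rough illustration of how elicited ratings feed into such a choice simulation, the sketch below normalizes raw attribute ratings into weights and selects the highest weighted-sum alternative. This is not the authors' code; all attribute names, ratings, and alternative values are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's implementation): convert raw
# elicited ratings into normalized weights, then use them in a weighted-sum choice.

def normalize(ratings):
    """Convert raw ratings (e.g. 0-100 direct ratings) into weights summing to 1."""
    total = sum(ratings.values())
    return {attr: r / total for attr, r in ratings.items()}

def best_alternative(weights, alternatives):
    """Pick the alternative with the highest weighted-sum score."""
    def score(values):
        return sum(weights[attr] * values[attr] for attr in weights)
    return max(alternatives, key=lambda name: score(alternatives[name]))

# Hypothetical ratings from one respondent at test and at retest.
test_ratings = {"price": 100, "quality": 80, "brand": 40}
retest_ratings = {"price": 90, "quality": 85, "brand": 30}

# Hypothetical alternatives, each scored on every attribute (0-1 scale).
alternatives = {
    "A": {"price": 0.9, "quality": 0.5, "brand": 0.6},
    "B": {"price": 0.6, "quality": 0.9, "brand": 0.7},
}

choice_test = best_alternative(normalize(test_ratings), alternatives)
choice_retest = best_alternative(normalize(retest_ratings), alternatives)
print(choice_test, choice_retest, choice_test == choice_retest)
```

In the paper's reliability comparison, the quantity of interest is the proportion of simulated choice sets on which the test and retest weights select the same alternative (91% for Max100, 87% for DR, 75% for Min10).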

159 citations