
Showing papers in "Marketing Science in 2000"


Journal ArticleDOI
TL;DR: A structural model based on the previous conceptual model of flow that embodies the components of what makes for a compelling online experience is developed and provides marketing scientists with operational definitions of key model constructs and establishes reliability and validity in a comprehensive measurement framework.
Abstract: Intuition and previous research suggest that creating a compelling online environment for Web consumers will have numerous positive consequences for commercial Web providers. Online executives note that creating a compelling online experience for cyber customers is critical to creating competitive advantage on the Internet. Yet, very little is known about the factors that make using the Web a compelling experience for its users, or about the key consumer behavior outcomes of this compelling experience. Recently, the flow construct has been proposed as important for understanding consumer behavior on the World Wide Web, and as a way of defining the nature of compelling online experience. Although widely studied over the past 20 years, quantitative modeling efforts of the flow construct have been neither systematic nor comprehensive. In large part, these efforts have been hampered by considerable confusion regarding the exact conceptual definition of flow. Lacking a precise definition, it has been difficult to measure flow empirically, let alone apply the concept in practice. Following the conceptual model of flow proposed by Hoffman and Novak (1996), we conceptualize flow on the Web as a cognitive state experienced during navigation that is determined by (1) high levels of skill and control, (2) high levels of challenge and arousal, and (3) focused attention, and that is (4) enhanced by interactivity and telepresence. Consumers who achieve flow on the Web are so acutely involved in the act of online navigation that thoughts and perceptions not relevant to navigation are screened out, and the consumer focuses entirely on the interaction. Concentration on the navigation experience is so intense that there is little attention left to consider anything else, and consequently, other events occurring in the consumer's surrounding physical environment lose significance. Self-consciousness disappears, the consumer's sense of time becomes distorted, and the state of mind arising as a result of achieving flow on the Web is extremely gratifying. In a quantitative modeling framework, we develop a structural model based on our previous conceptual model of flow that embodies the components of what makes for a compelling online experience. We use data collected from a large-sample, Web-based consumer survey to measure these constructs, and we fit a series of structural equation models that test related prior theory. The conceptual model is largely supported, and the improved fit offered by the revised model provides additional insights into the direct and indirect influences of flow, as well as into the relationship of flow to key consumer behavior and Web usage variables. Our formulation provides marketing scientists with operational definitions of key model constructs and establishes reliability and validity in a comprehensive measurement framework. A key insight from the paper is that the degree to which the online experience is compelling can be defined, measured, and related well to important marketing variables. Our model constructs relate in significant ways to key consumer behavior variables, including online shopping and Web use applications such as the extent to which consumers search for product information and participate in chat rooms. As such, our model may be useful both theoretically and in practice as marketers strive to decipher the secrets of commercial success in interactive online environments.

2,881 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the effect of interactive decision aids on consumer decision making in online shopping environments and find that they have a significant impact on the quality and efficiency of decision making.
Abstract: Despite the explosive growth of electronic commerce and the rapidly increasing number of consumers who use interactive media such as the World Wide Web for prepurchase information search and online shopping, very little is known about how consumers make purchase decisions in such settings. A unique characteristic of online shopping environments is that they allow vendors to create retail interfaces with highly interactive features. One desirable form of interactivity from a consumer perspective is the implementation of sophisticated tools to assist shoppers in their purchase decisions by customizing the electronic shopping environment to their individual preferences. The availability of such tools, which we refer to as interactive decision aids for consumers, may lead to a transformation of the way in which shoppers search for product information and make purchase decisions. The primary objective of this paper is to investigate the nature of the effects that interactive decision aids may have on consumer decision making in online shopping environments. While making purchase decisions, consumers are often unable to evaluate all available alternatives in great depth and, thus, tend to use two-stage processes to reach their decisions. At the first stage, consumers typically screen a large set of available products and identify a subset of the most promising alternatives. Subsequently, they evaluate the latter in more depth, perform relative comparisons across products on important attributes, and make a purchase decision. Given the different tasks to be performed in such a two-stage process, interactive tools that provide support to consumers in the following respects are particularly valuable: (1) the initial screening of available products to determine which ones are worth considering further, and (2) the in-depth comparison of selected products before making the actual purchase decision. This paper examines the effects of two decision aids, each designed to assist consumers in performing one of the above tasks, on purchase decision making in an online store. The first interactive tool, a recommendation agent (RA), allows consumers to more efficiently screen the potentially very large set of alternatives available in an online shopping environment. Based on self-explicated information about a consumer's own utility function (attribute importance weights and minimum acceptable attribute levels), the RA generates a personalized list of recommended alternatives. The second decision aid, a comparison matrix (CM), is designed to help consumers make in-depth comparisons among selected alternatives. The CM allows consumers to organize attribute information about multiple products in an alternatives × attributes matrix and to have alternatives sorted by any attribute. Based on theoretical and empirical work in marketing, judgment and decision making, psychology, and decision support systems, we develop a set of hypotheses pertaining to the effects of these two decision aids on various aspects of consumer decision making. In particular, we focus on how use of the RA and CM affects consumers' search for product information, the size and quality of their consideration sets, and the quality of their purchase decisions in an online shopping environment. A controlled experiment using a simulated online store was conducted to test the hypotheses. The results indicate that both interactive decision aids have a substantial impact on consumer decision making.
As predicted, use of the RA reduces consumers' search effort for product information, decreases the size but increases the quality of their consideration sets, and improves the quality of their purchase decisions. Use of the CM also leads to a decrease in the size but an increase in the quality of consumers' consideration sets, and has a favorable effect on some indicators of decision quality. In sum, our findings suggest that interactive tools designed to assist consumers in the initial screening of available alternatives and to facilitate in-depth comparisons among selected alternatives in an online shopping environment may have strong favorable effects on both the quality and the efficiency of purchase decisions: shoppers can make much better decisions while expending substantially less effort. This suggests that interactive decision aids have the potential to drastically transform the way in which consumers search for product information and make purchase decisions.
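
To make the mechanics of the two aids concrete, the following is a minimal sketch of a screening agent driven by self-explicated importance weights and minimum acceptable attribute levels, followed by a sortable alternatives-by-attributes view. It is an illustration under assumed data, not the authors' experimental implementation; the product names, attribute scales (higher is better), and parameter values are all hypothetical.

```python
# Hypothetical products scored on a 0-1 scale where higher is better (illustrative data only).
products = {
    "A": {"price": 0.6, "quality": 0.9, "warranty": 0.8},
    "B": {"price": 0.9, "quality": 0.5, "warranty": 0.4},
    "C": {"price": 0.7, "quality": 0.7, "warranty": 0.2},
}
# Self-explicated preferences: attribute importance weights and minimum acceptable levels.
weights = {"price": 0.5, "quality": 0.3, "warranty": 0.2}
cutoffs = {"price": 0.5, "quality": 0.6, "warranty": 0.0}

def recommendation_agent(products, weights, cutoffs):
    """Screen out products violating any cutoff, then rank the rest by weighted utility."""
    passing = {
        name: attrs for name, attrs in products.items()
        if all(attrs[a] >= cutoffs[a] for a in cutoffs)
    }
    scored = {
        name: sum(weights[a] * attrs[a] for a in weights)
        for name, attrs in passing.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

def comparison_matrix(products, names, sort_by):
    """Alternatives-by-attributes view of selected products, sortable by any attribute."""
    return sorted(((n, products[n]) for n in names),
                  key=lambda row: row[1][sort_by], reverse=True)

consideration_set = recommendation_agent(products, weights, cutoffs)
print(consideration_set)                                   # -> ['A', 'C']
print(comparison_matrix(products, consideration_set, "quality"))
```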

1,643 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the short- and long-term effects of consumer price promotions on category demand using Dutch supermarket sales over a 4-year period and concluded that the power of price promotions lies primarily in preserving the status quo in the category.
Abstract: Although price promotions have increased in both commercial use and quantity of academic research over the last decade, most of the attention has been focused on their effects on brand choice and brand sales. By contrast, little is known about the conditions under which price promotions expand short-run and long-run category demand, even though the benefits of category expansion can be substantial to manufacturers and retailers alike. This paper studies the category-demand effects of consumer price promotions across 560 consumer product categories over a 4-year period. The data describe national sales in Dutch supermarkets and cover virtually the entire marketing mix, i.e., prices, promotions, advertising, distribution, and new-product activity. We focus on the estimation of main effects (i.e., the dynamic category-expansive impact of price promotions) as well as the moderating effects of marketing intensity and competition (both conduct and structure) on short- and long-run promotional effectiveness. The research design uses modern multivariate time-series analysis to disentangle short-run and long-run effects. First, we conduct a series of unit-root tests to determine whether category demand is stationary or evolving over time. The results are incorporated in the specification of vector-autoregressive models with exogenous variables (VARX models). The impulse-response functions derived from these VARX models provide estimates of the short- and long-term effects of price promotions on category demand. These estimates, in turn, are used as dependent variables in a series of second-stage regressions that assess the explanatory power of marketing intensity and competition. Several model validation tests support the robustness of the empirical findings. We present our results in the form of empirical generalizations on the main effects of price promotions on category demand in the short and the long run and through statistical tests on how these effects change with marketing intensity and competition. The findings generate an overall picture of the power and limitations of consumer price promotions in expanding category demand, as follows. Category demand is found to be predominantly stationary, either around a fixed mean or a deterministic trend. Although the total net short-term effects of price promotions are generally strong, with an average elasticity of 2.21 and a more conservative median elasticity of 1.75, they rarely exhibit persistent effects. Instead, the effects dissipate over a time period lasting approximately 10 weeks on average, and their long-term impact is essentially zero. By contrast, the successful introduction of new products into a category is more frequently associated with a permanent category-demand increase. Several moderating effects on price-promotion effectiveness exist. More frequent promotions increase their effectiveness, but only in the short run. The use of nonprice advertising reduces the category-demand effects of price promotions, both in the short run and in the long run. Competitive structure matters as well: The less oligopolistic the category, the smaller the short-run effectiveness of price promotions. At the same time, we find that the dominant form of competitive reaction, either in price promotion or in advertising, is no reaction. Short-run category-demand effectiveness of price promotions is lower in categories experiencing major new-product introductions.
Finally, both short-run and long-run price-promotion effectiveness are higher in perishable product categories. The paper discusses several managerial implications of these empirical findings and suggests various avenues for future research. Overall, we conclude that the power of price promotions lies primarily in the preservation of the status quo in the category.
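
The core time-series logic (test for stationarity, then trace an impulse response that dissipates when demand is mean-reverting) can be illustrated with a stripped-down, single-equation stand-in for the paper's VARX system. This is a hedged sketch on simulated data, not the authors' model or dataset; the only library call assumed is statsmodels' adfuller, and the promotion "lift" here is a log-point effect rather than the paper's elasticities.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
T = 208                                                    # four years of weekly data
promo = (rng.random(T) < 0.15).astype(float)               # weeks with a price promotion
log_sales = np.zeros(T)
for t in range(1, T):
    # stationary demand: AR(1) around a fixed mean plus a contemporaneous promotion lift
    log_sales[t] = 1.0 + 0.6 * log_sales[t - 1] + 0.8 * promo[t] + rng.normal(0, 0.1)

# Unit-root test: a small p-value supports stationarity (mean-reverting category demand).
print("ADF p-value:", adfuller(log_sales)[1])

# OLS estimation of y_t = c + phi * y_{t-1} + beta * promo_t
X = np.column_stack([np.ones(T - 1), log_sales[:-1], promo[1:]])
c, phi, beta = np.linalg.lstsq(X, log_sales[1:], rcond=None)[0]

# Impulse response of a one-week promotion shock: beta * phi**k, k weeks after the shock.
irf = beta * phi ** np.arange(12)
print("short-run effect:", irf[0])
print("effect after 10 weeks:", irf[10])                   # essentially dissipated
print("total (cumulative) effect:", beta / (1 - phi))      # finite because |phi| < 1,
                                                           # so the long-run level effect is zero
```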

499 citations


Journal ArticleDOI
TL;DR: In this article, the Bakos-Brynjolfsson bundling model is extended to settings with several different types of competition, including both upstream and downstream, as well as competition between a bundler and single good and competition between two bundlers.
Abstract: The Internet has significantly reduced the marginal cost of producing and distributing digital information goods. It also coincides with the emergence of new competitive strategies such as large-scale bundling. In this paper, we show that bundling can create "economies of aggregation" for information goods if their marginal costs are very low, even in the absence of network externalities or economies of scale or scope. We extend the Bakos-Brynjolfsson bundling model (1999) to settings with several different types of competition, including both upstream and downstream, as well as competition between a bundler and a single-good seller and competition between two bundlers. Our key results are based on the "predictive value of bundling," the fact that it is easier for a seller to predict how a consumer will value a collection of goods than it is to predict how that consumer will value any individual good. In a model with fully rational and informed consumers, we use the Law of Large Numbers to show that this will be true as long as the goods are not perfectly correlated and do not affect each other's valuations significantly. As a result, a seller typically can extract more value from each information good when it is part of a bundle than when it is sold separately. Moreover, at the optimal price, more consumers will find the bundle worth buying than would have bought the same goods sold separately. Because of the predictive value of bundling, large aggregators will often be more profitable than small aggregators, including sellers of single goods. We find that these economies of aggregation have several important competitive implications: (1) When competing for upstream content, larger bundlers are able to outbid smaller ones, all else being equal. This is because the predictive value of bundling enables bundlers to extract more value from any given good. (2) When competing for downstream consumers, the act of bundling information goods makes an incumbent seem "tougher" to single-product competitors selling similar goods. The resulting equilibrium is less profitable for potential entrants and can discourage entry in the bundler's markets, even when the entrants have a superior cost structure or quality. (3) Conversely, by simply adding an information good to an existing bundle, a bundler may be able to profitably enter a new market and dislodge an incumbent who does not bundle, capturing most of the market share from the incumbent firm and even driving the incumbent out of business. (4) Because a bundler can potentially capture a large share of profits in new markets, single-product firms may have lower incentives to innovate and create such markets. At the same time, bundlers may have higher incentives to innovate. For most physical goods, which have nontrivial marginal costs, the potential impact of large-scale aggregation is limited. However, we find that these effects can be decisive for the success or failure of information goods. Our results have particular empirical relevance to the markets for software and Internet content and suggest that aggregation strategies may take on particular relevance in these markets.
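
The Law of Large Numbers argument can be checked numerically: as the bundle grows, the distribution of per-good valuations concentrates around its mean, so a seller with near-zero marginal cost can price close to that mean and sell to almost everyone. The Monte Carlo sketch below is illustrative only (i.i.d. uniform valuations, hypothetical bundle sizes); the paper's result is analytical and more general.

```python
import numpy as np

rng = np.random.default_rng(1)
consumers, bundle_sizes = 20_000, [1, 5, 20, 100]
prices = np.linspace(0.01, 0.99, 99)                       # candidate per-good prices

for n in bundle_sizes:
    # per-good value of an n-good bundle for each consumer (i.i.d. uniform valuations)
    mean_valuation = rng.random((consumers, n)).mean(axis=1)
    # expected revenue per good at each candidate price (marginal cost assumed ~0)
    revenue = [p * (mean_valuation >= p).mean() for p in prices]
    best = int(np.argmax(revenue))
    print(f"bundle of {n:3d} goods: optimal per-good price {prices[best]:.2f}, "
          f"revenue per good {revenue[best]:.3f}")
# Revenue per good rises from about 0.25 for a single good toward 0.5 for large bundles.
```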

466 citations


Journal ArticleDOI
TL;DR: The model is applied in a study in which a sample of 88 consumers was exposed to 65 print ads appearing in their natural context in two magazines; the authors show how the model supports advertising planning and testing and offer recommendations for further research on the effectiveness of brand communication.
Abstract: The number of brands in the marketplace has vastly increased in the 1980s and 1990s, and the amount of money spent on advertising has run parallel. Print advertising is a major communication instrument for advertisers, but print media have become cluttered with advertisements for brands. Therefore, it has become difficult to attract and keep consumers' attention. Advertisements that fail to gain and retain consumers' attention cannot be effective, but attention is not sufficient: Advertising needs to leave durable traces of brands in memory. Eye movements are eminent indicators of visual attention. However, what is currently missing in eye-movement research is a serious account of the processing that takes place to store information in long-term memory. We attempt to provide such an account through the development of a formal model. We model the process by which eye fixations on print advertisements lead to memory for the advertised brands, using a hierarchical Bayesian model, but, rather than postulating such a model as a mere data-analysis tool, we derive it from substantive theory on attention and memory. The model is calibrated to eye-movement data that are collected during exposure of subjects to ads in magazines, and subsequent recognition of the brand in a perceptual memory task. During exposure to the ads we record the frequencies of fixations on three ad elements (brand, pictorial, and text) and, during the memory task, the accuracy and latency of memory. Thus, the available data for each subject consist of the frequency of fixations on the ad elements and the accuracy and the latency of memory. The model that we develop is grounded in attention and memory theory and describes information extraction and accumulation during ad exposure and their effect on the accuracy and latency of brand memory. In formulating it, we assume that subjects have different eye-fixation rates for the different ad elements, which gives rise to a negative binomial model of fixation frequency, and we specify the influence of the size of the ad elements. It is assumed that the number of fixations, not their duration, is related to the amount of information a consumer extracts from an ad. The information chunks extracted at each fixation are assumed to be random, varying across ads and consumers, and are estimated from the observed data. The accumulation of information across multiple fixations to the ad elements in long-term memory is assumed to be additive. The total amount of accumulated information, which is not directly observed but is estimated using our model, influences both the accuracy and latency of subsequent brand memory. Accurate memory is assumed to occur when the accumulated information exceeds a threshold that varies randomly across ads and consumers in a binary probit-type model component. The effects of two media-planning variables, the ad's serial position in a magazine and the ad's location on the double page, on the brand memory threshold are specified. We formulate hypotheses on the effects of ad element surface, serial position, and location. The model is applied in a study involving a sample of 88 consumers who were exposed to 65 print ads appearing in their natural context in two magazines. The frequency of eye fixations was recorded for each consumer and advertisement with infrared eye-tracking methodology. In a subsequent indirect memory task, consumers identified the brands from pixelated images of the ads.
Across the two magazines, fixations to the pictorial and the brand systematically promote accurate brand memory, but text fixations do not. Brand surface has a particularly prominent effect. The more information is extracted from an ad during fixations, the shorter the latency of brand memory is. We find a systematic recency effect: When subjects are exposed to an ad later, they tend to identify it better. In addition, there is a small primacy effect. The effect of the ad's location on the right or left of the page depends on the advertising context. We show how the model supports advertising planning and testing and offer recommendations for further research on the effectiveness of brand communication. In future research the model may be extended to accommodate the effects of repeated exposure to ads, to further detail the representation of strength and association of memory, and to include the effects of creative tactics and media planning variables beyond the ones we included in the present study.
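
The accumulation-and-threshold structure described above can be sketched generatively: negative-binomial fixation counts per ad element, random information chunks per fixation that accumulate additively, and a probit-style threshold governing accurate brand memory. All parameter values below are invented for illustration; this is a toy forward simulation, not the paper's calibrated hierarchical Bayesian model or its estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
n_consumers, n_ads = 88, 65
elements = {"brand": 0.8, "pictorial": 2.0, "text": 1.5}   # mean fixations per element (hypothetical)

def neg_binomial(mean, dispersion, size):
    # negative binomial as a gamma-Poisson mixture (heterogeneous fixation rates)
    rates = rng.gamma(shape=dispersion, scale=mean / dispersion, size=size)
    return rng.poisson(rates)

info = np.zeros((n_consumers, n_ads))
for element, mean_fix in elements.items():
    fixations = neg_binomial(mean_fix, dispersion=1.5, size=(n_consumers, n_ads))
    chunk = rng.gamma(2.0, 0.25, size=(n_consumers, n_ads))  # info extracted per fixation
    info += fixations * chunk                                 # additive accumulation

# Probit-style memory: accurate recognition when accumulated information exceeds a
# threshold that varies randomly across ads and consumers; latency falls with information.
threshold = rng.normal(loc=1.5, scale=0.5, size=(n_consumers, n_ads))
accurate = info > threshold
latency = 2.0 - 0.3 * np.log1p(info) + rng.normal(0, 0.1, size=info.shape)

print("share of ads recognized:", accurate.mean())
print("mean latency, recognized vs. not:", latency[accurate].mean(), latency[~accurate].mean())
```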

415 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide an empirical method to measure the power of channel members and to understand the reasons (demand factors, cost factors, and the nature of channel interactions) for this power.
Abstract: The issue of "power" in the marketing channels for consumer products has received considerable attention in both academic and practitioner journals as well as in the popular press. Our objective in this paper is to provide an empirical method to measure the power of channel members and to understand the reasons (demand factors, cost factors, and the nature of channel interactions) for this power. We confine our analysis to pricing power in channels. We use methods from the game-theory literature in marketing on channel interactions to obtain the theoretical framework for our empirical model. This literature provides us with a definition of power, one that is based on the proportion or percentage of channel profits that accrue to each of the channel members. There can be a variety of possible channel interactions between manufacturers and retailers in channels. The theoretical literature has examined some of these games. For example, Choi (1991) examines how channel profits for manufacturers and retailers vary if channel interactions are either vertical Nash, or if they are Stackelberg leader-follower with either the manufacturer or the retailer being the price leader. Each of these three channel interaction games has different implications for profits made by manufacturers and retailers, and consequently for the relative power of the channel members. In contrast to the previous literature that has focused largely on the above three channel interaction games, our model extends the game-theoretic literature by allowing for a continuum of possible channel interactions between manufacturers and a retailer. Furthermore, for a given product market, we empirically estimate from the data where the channel interactions lie in this continuum. More critically, we obtain measures of how channel profits are divided between manufacturers and the retailer in the product market, where a higher share of channel profit is associated with higher channel power. We then examine how channel power is related to demand conditions facing various brands and cost parameters of various manufacturers. In going from game-theory-based theoretical models of channel interactions to empirical estimation, we use the "new empirical industrial organization" framework (Bresnahan 1988). As part of this structural modeling framework, we build retail-level demand functions for the various brands (manufacturer and private label) in a given product category. Given these demand functions, we obtain optimal pricing rules for manufacturers and the retailer. In determining their optimal prices, manufacturers and the retailer account for how all the players in the channel choose their optimal prices. That is, we account for dependencies in decision making across channel members. These dependencies are characterized by a set of "conduct parameters," which are estimated from market data. The conduct parameters enable us to identify the nature of channel interactions between manufacturers and the retailer along the continuum mentioned previously. In addition to the demand and conduct parameters, manufacturers' marginal costs are also estimated in the model. These marginal cost estimates, along with the manufacturer prices and retail prices available in our dataset, enable us to compute the division of channel profits among the channel members. Hence, we are able to obtain insights into who has pricing power in the channel. In the empirical application of the model, we analyze a local market for two product categories: refrigerated juice and tuna.
In both categories, there are three major brands. The difference between them is that the private label has an insignificant market share in the tuna category. Our main empirical results show that the usual games examined in the marketing literature do not hold for the given data. We also find that the retailer's market power is very significant in both these product categories, and that the estimated demand and cost parameters are consistent with the estimated pattern of conduct between the manufacturers and the retailer. Given the evidence from the trade press of intense manufacturer competition in these categories, as well as the "commodity" nature of these products, the result of retailer power appears intuitive.
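
The final profit-split step is simple arithmetic once marginal cost has been estimated: observed wholesale and retail prices pin down how the channel margin divides between manufacturer and retailer. The numbers below are hypothetical and serve only to show the calculation; they are not estimates from the paper.

```python
# Per-unit figures, hypothetical for illustration.
retail_price, wholesale_price, marginal_cost = 2.49, 1.79, 1.39

manufacturer_margin = wholesale_price - marginal_cost
retailer_margin = retail_price - wholesale_price
channel_margin = retail_price - marginal_cost

print(f"manufacturer share of channel profit: {manufacturer_margin / channel_margin:.0%}")
print(f"retailer share of channel profit:     {retailer_margin / channel_margin:.0%}")
# A larger retailer share (here about 64%) is read as greater retailer pricing power.
```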

322 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the competitive ramifications of individual marketing and information management in today's information-intensive marketing environments and develop a simple model à la Narasimhan (1988), in which each of two competing firms has its own loyal customers and competes for common switchers.
Abstract: Our research investigates the competitive ramifications of individual marketing and information management in today's information-intensive marketing environments. The specific managerial issues we address are as follows. First, what kinds of incentive environments do competing firms face when they can only target individual customers imperfectly? Second, does the improvement in an industry's targetability intensify price competition in the industry such that all competing firms become worse off? Third, should a firm share its customer knowledge so as to improve its rival's targetability? Fourth, how should an information vendor sell its information that can improve a firm's targetability? Finally, do competing firms have the same incentives to invest in their own targetability? To answer those questions, we develop a simple model à la Narasimhan (1988), in which each of two competing firms has its own loyal customers and competes for common switchers. We assume that each firm can classify its own loyal customers and switchers correctly only with a less-than-perfect probability. This means that each firm's perceived customer segmentation differs from the actual customer segmentation. Based on their perceived reality, these two competing firms engage in price competition. As an extension, we also allow the competing firms to make their investment decisions to acquire targetability. We show that when individual marketing is feasible, but imperfect, improvements in targetability by either or both competing firms can lead to win-win competition for both even if both players behave noncooperatively and the market does not expand. Win-win competition results from the fact that as a firm becomes better at distinguishing its price-insensitive loyal customers from the switchers, it is motivated to charge a higher price to the former. However, due to imperfect targetability, each firm mistakenly perceives some price-sensitive switchers as price-insensitive loyal customers and charges them all a higher price. These misperceptions thus allow its competitors to acquire those mistargeted customers without lowering their prices and, hence, reduce the rival firm's incentive to cut prices. This effect softens price competition in the market and qualitatively changes the incentive environment for competing firms engaged in individual marketing. A "prisoner's dilemma" occurs only when targetability in a market reaches a sufficiently high level. This win-win perspective on individual marketing has many managerial implications. First, we show that superior knowledge of individual customers can be a competitive advantage. However, this does not mean that a firm should always protect its customer information from its competitors. To the contrary, we find that competing firms can all benefit from exchanging individual customer information with each other at the nascent stage of individual marketing, when firms' targetability is low. Indeed, under certain circumstances, a firm may even find it profitable to give away this information unilaterally. However, as individual marketing matures (as firms' targetability becomes sufficiently high), further improvements in targetability will intensify price competition and lead to a prisoner's dilemma.
Therefore, it is not only prudent politics but also a business imperative for an industry to seize the initiative on the issue of protecting customer privacy so as to ensure win-win competition in the industry. Second, we show that the firm with a larger number of loyal customers tends to invest more in targetability when the cost of acquiring targetability is high. However, the firm with a smaller loyal base can, through information investment, acquire a higher level of targetability than the firm with a larger loyal base as long as the cost of acquiring targetability is not too high. As the cost further decreases, competing firms will all have more incentives to increase their investments in targetability until they achieve the highest feasible level. Third, an information vendor should make its information available nonexclusively (exclusively) when its information is associated with a low (high) level of targetability. When the vendor does sell its information exclusively, it should target a firm with a small loyal following if it can impart a high level of targetability to that firm. Finally, our analysis shows that an information-intensive environment does not doom small firms. In fact, individual marketing may provide a good opportunity for a small firm to leapfrog a large firm. The key to leapfrogging is a high level of targetability or customer knowledge.

319 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate when referral rewards should be offered to motivate referral and derive the optimal combination of reward and price that will lead to the most profitable referrals, and highlight the difference between lowering price and offering rewards as tools to motivate referrals.
Abstract: Sellers who plan to capitalize on the lifetime value of customers need to manage the sales potential from customer referrals proactively. To encourage existing customers to generate referrals, a seller can offer exceptional value to current customers through either excellent quality or a very attractive price. Rewards to customers for referring other customers can also encourage referrals. We investigate when referral rewards should be offered to motivate referrals and derive the optimal combination of reward and price that will lead to the most profitable referrals. We define a delighted customer as one who obtains a positive level of surplus above a threshold level and, consequently, recommends the product to another customer. We show that the use of referral rewards depends on how demanding consumers are before they are willing to recommend (i.e., on the delight threshold level). The optimal mix of price and referral reward falls into three regions: (1) When customers are easy to delight, the optimal strategy is to lower the price below that of a seller who ignores the referral effect but not to offer rewards. (2) At an intermediate level of the customer delight threshold, a seller should use a reward to complement a low-price strategy. As the delight threshold gets higher in this region, price should be higher and the rewards should be raised. (3) When the delight threshold is even higher, the seller should forsake the referral strategy altogether. No rewards should be given, and price reverts back to that of a seller who ignores referrals. These results are consistent with the fact that referral rewards are not offered in all markets. Our analysis highlights the differences between lowering price and offering rewards as tools to motivate referrals. Lowering price is attractive because the seller "kills two birds with one stone": a lower price increases the probability of an initial purchase and the likelihood of referral. Unfortunately, a low price also creates a "free-riding" problem, because some customers benefit from the low price but do not refer other customers. Free riding becomes more severe with an increasing delight threshold; therefore, motivating referrals through low price is less attractive at high threshold levels. A referral reward helps to alleviate this problem, because of its "pay for performance" incentive: only actual referrals are rewarded. Unfortunately, rewards can sometimes be given to customers who would have recommended anyway, causing a waste of company resources. The lower the delight threshold level, the bigger the waste and, therefore, motivating referrals through rewards loses attractiveness. Our theory highlights the advantage of using referral rewards in addition to lowering price to motivate referrals. It explains why referral programs are offered sometimes but not always and provides guidelines to managers on how to set the price and reward optimally.
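
The free-riding versus wasted-reward trade-off can be illustrated with a toy calculation. The sketch below is not the paper's model: the uniform valuation distribution, the assumption that each delighted customer generates exactly one extra sale, and all numbers are hypothetical. It only shows how one would compare profit across a few price/reward combinations once a delight threshold is specified.

```python
import numpy as np

rng = np.random.default_rng(3)
values = rng.random(100_000)          # consumer valuations, uniform on [0, 1] (hypothetical)
cost, threshold = 0.2, 0.35           # unit cost and delight threshold (hypothetical)

def profit(price, reward):
    buyers = values >= price
    # delighted customers: surplus (plus any referral reward) clears the threshold
    referrers = buyers & (values - price + reward >= threshold)
    referred_sales = referrers.sum()  # assume each referral yields one extra sale
    margin = price - cost
    # a low price discounts every sale (free riding); a reward is paid only on referrals,
    # including to customers who would have referred anyway (waste)
    return margin * buyers.sum() + (margin - reward) * referred_sales

for price, reward in [(0.50, 0.00), (0.40, 0.00), (0.45, 0.05)]:
    print(f"p={price:.2f}, r={reward:.2f}: profit per 100k consumers = {profit(price, reward):,.0f}")
```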

304 citations


Journal ArticleDOI
TL;DR: In this article, the authors examine both theoretically and experimentally how the type of an alliance and the prescribed profit-sharing arrangement affect the resource commitments of partners, and they find that the aggregate behavior of the subjects is accounted for remarkably well by the equilibrium solution.
Abstract: In collaborating to compete, firms forge different types of strategic alliances: same-function alliances, parallel development of new products, and cross-functional alliances. A major challenge in the management of these alliances is how to control the resource commitment of partners to the collaboration. In this research we examine both theoretically and experimentally how the type of an alliance and the prescribed profit-sharing arrangement affect the resource commitments of partners. We model the interaction within an alliance as a noncooperative variable-sum game, in which each firm invests part of its resources to increase the utility of a new product offering. Different types of alliances are modeled by varying how the resources committed by partners in an alliance determine the utility of the jointly developed new product. We then model the interalliance competition by nesting two independent intra-alliance games in a supergame in which the groups compete for a market. The partners of the winning alliance share the profits in one of two ways: equally or proportionally to their investments. The Nash equilibrium solutions for the resulting games are examined. In the case of same-function alliances, when the market is large the predicted investment patterns under both profit-sharing rules are comparable. Partners developing new products in parallel, unlike the partners in a same-function alliance, commit fewer resources to their alliance. Further, the profit-sharing arrangement matters in such alliances: partners commit more resources when profits are shared proportionally rather than equally. We test the predictions of the model in two laboratory experiments. We find that the aggregate behavior of the subjects is accounted for remarkably well by the equilibrium solution. As predicted, the profit-sharing arrangement did not affect the investment pattern of subjects in same-function alliances when they were in the high-reward condition. Subjects developing products in parallel invested less than subjects in same-function alliances, irrespective of the reward condition. We note that the theory seems to underpredict investments in low-reward conditions. A plausible explanation for this departure from the normative benchmark is that subjects in the low-reward condition were influenced by altruistic regard for their partners. These experiments also clarify the support for the mixed-strategy equilibrium: aggregate behavior conforms to the equilibrium solution, though the behavior of individual subjects varies substantially from the norm. Individual-level analysis suggests that subjects employ mixed strategies, but not as fully as the theory demands. This inertia in choice of strategies is consistent with learning trends observed in the investment pattern. A new analysis of Robertson and Gatignon's (1998) field survey data on the conduct of corporate partners in technology alliances is also consistent with our model of same-function alliances. We extend the model to consider asymmetric distribution of endowments among partners in a same-function alliance. Then we examine the implication of extending the strategy space to include more levels of investment. Finally, we outline an extension of the model to consider cross-functional alliances.

301 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an interactive Markov chain model for predicting the box-office performance of motion pictures based on a behavioral representation of the consumer adoption process for movies as a macroflow process.
Abstract: In spite of the high financial stakes involved in marketing new motion pictures, marketing science models have not been applied to the prerelease market evaluation of motion pictures. The motion picture industry poses some unique challenges. For example, the consumer adoption process for movies is very sensitive to word-of-mouth interactions, which are difficult to measure and predict before the movie has been released. In this article, we undertake the challenge of developing and implementing MOVIEMOD, a prerelease market evaluation model for the motion picture industry. MOVIEMOD is designed to generate box-office forecasts and to support marketing decisions for a new movie after the movie has been produced or when it is available in a rough cut but before it has been released. Unlike other forecasting models for motion pictures, the calibration of MOVIEMOD does not require any actual sales data. Also, the data collection time for a product with a limited lifetime such as a movie should not take too long. For MOVIEMOD it takes only three hours in a "consumer clinic" to collect the data needed for the prediction of box-office sales and the evaluation of alternative marketing plans. The model is based on a behavioral representation of the consumer adoption process for movies as a macro-flow process. The heart of MOVIEMOD is an interactive Markov chain model describing the macro-flow process. According to this model, at any point in time with respect to the movie under study, a consumer can be found in one of the following behavioral states: undecided, considerer, rejecter, positive spreader, negative spreader, and inactive. The progression of consumers through the behavioral states depends on a set of movie-specific factors that are related to the marketing mix, as well as on a set of more general behavioral factors that characterize the movie-going behavior in the population of interest. This interactive Markov chain model allows us to account for word-of-mouth interactions among potential adopters and several types of word-of-mouth spreaders in the population. Marketing variables that influence the transitions among the states are movie theme acceptability, promotion strategy, distribution strategy, and the movie experience. The model is calibrated in a consumer clinic experiment. Respondents fill out a questionnaire with general items related to their movie-going and movie communication behavior, they are exposed to different sets of information stimuli, they are actually shown the movie, and finally, they fill out postmovie evaluations, including word-of-mouth intentions. These measures are used to estimate the word-of-mouth parameters and other behavioral factors, as well as the movie-specific parameters of the model. MOVIEMOD produces forecasts of the awareness, adoption intention, and cumulative penetration for a new movie within the population of interest for a given base marketing plan. It also provides diagnostic information on the likely impact of alternative marketing plans on the commercial performance of a new movie. We describe two applications of MOVIEMOD: One is a pilot study conducted without studio cooperation in the United States, and the other is a full-fledged implementation conducted with cooperation of the movie's distributor and exhibitor in the Netherlands. The implementations suggest that MOVIEMOD produces reasonably accurate forecasts of box-office performance.
More importantly, the model offers the opportunity to simulate the effects of alternative marketing plans. In the Dutch application, the effects of extra advertising, extra magazine articles, extra TV commercials, and higher trailer intensity compared to the base marketing plan of the distributor were analyzed. We demonstrate the value of these decision-support capabilities of MOVIEMOD in assisting managers to identify a final plan that resulted in an almost 50% increase in the test movie's revenue performance, compared to the marketing plan initially contemplated. Management implemented this recommended plan, which resulted in box-office sales that were within 5% of the MOVIEMOD prediction. MOVIEMOD was also tested against several benchmark models, and its prediction was better in all cases. An evaluation of MOVIEMOD jointly by the Dutch exhibitor and the distributor showed that both parties were positive about and appreciated its performance as a decision-support tool. In particular, the distributor, who has more stakes in the domestic performance of its movies, showed a great interest in using MOVIEMOD for subsequent evaluations of new movies prior to their release. Based on such evaluations and the initial validation results, MOVIEMOD can fruitfully and inexpensively be used to provide researchers and managers with a deeper understanding of the factors that drive audience response to new motion pictures, and it can be instrumental in developing other decision-support systems that can improve the odds of commercial success of new experiential products.
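
The "interactive" aspect of the Markov chain (transition rates that depend on the current share of word-of-mouth spreaders) can be sketched as a macro-flow simulation. The state names come from the abstract, but every transition rule and parameter value below is a hypothetical stand-in for illustration, not MOVIEMOD's calibrated specification.

```python
import numpy as np

states = ["undecided", "considerer", "rejecter", "pos_spreader", "neg_spreader", "inactive"]
share = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # everyone starts undecided

promo_reach = 0.15        # weekly reach of advertising/trailers (hypothetical)
wom_contact = 0.8         # weekly word-of-mouth contact intensity (hypothetical)
p_like = 0.6              # probability the movie experience turns a viewer positive (hypothetical)

weekly_penetration = []
for week in range(12):
    und, con, rej, pos, neg, ina = share
    # word-of-mouth pressure scales with the current share of spreaders (interactivity)
    p_consider = min(promo_reach + wom_contact * pos, 1.0)
    p_reject = min(0.05 + wom_contact * neg, 1.0 - p_consider)

    new_con = und * p_consider
    new_rej = und * p_reject
    went_to_movie = con * 0.5                       # half of considerers attend this week
    share = np.array([
        und - new_con - new_rej,                    # undecided
        con + new_con - went_to_movie,              # considerer
        rej + new_rej,                              # rejecter
        pos * 0.7 + went_to_movie * p_like,         # positive spreaders (decay + new)
        neg * 0.7 + went_to_movie * (1 - p_like),   # negative spreaders (decay + new)
        ina + pos * 0.3 + neg * 0.3,                # spreaders eventually go inactive
    ])
    weekly_penetration.append(went_to_movie)

print("cumulative penetration after 12 weeks:", round(sum(weekly_penetration), 3))
```

Re-running the loop with a different promo_reach or word-of-mouth intensity is the macro-flow analogue of comparing alternative marketing plans.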

254 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that the measurement of loss aversion in empirical applications of the reference-dependent choice model is confounded by the presence of unaccounted-for heterogeneity in consumer price responsiveness.
Abstract: Recent work in marketing has drawn on behavioral decision theory to advance the notion that consumers evaluate attributes (and therefore choice alternatives) not only in absolute terms, but as deviations from a reference point. The theory has important substantive and practical implications for the timing and execution of price promotions and other marketing activities. Choice modelers using scanner panel data have tested for the presence of these "reference effects" in consumer response to an attribute such as price. In applications of the theory of reference-dependent choice (Tversky and Kahneman 1991), some modelers report empirical evidence of loss aversion: When a consumer encounters a price above his or her established reference point (a "loss"), the response is greater than for a price below the reference point (a "gain"). Researchers have gone so far as to suggest that evidence for the so-called reference effect makes it an empirical generalization in marketing (e.g., Kalyanaram and Winer 1995, Meyer and Johnson 1995). It is our contention that the measurement of loss aversion in empirical applications of the reference-dependent choice model is confounded by the presence of unaccounted-for heterogeneity in consumer price responsiveness. Our reasoning is that the kinked price response curve implied by loss aversion is confounded with the slopes of the response curves across segments that are differentially responsive to price. A more price-responsive consumer (with a steeper response function) tends to have a lower price level as a reference point. This consumer faces a larger proportion of prices above his reference point, thus the response curve is steeper in the domain of losses. Similarly, the less price-responsive consumer sees a greater proportion of prices below his reference point, so the response curve is less steep within the domain of gains. As a result, any cross-sectional estimate of loss aversion that does not take this into account will be biased upward: researchers who do not control for heterogeneity in price responsiveness may arrive at incorrect substantive conclusions about the phenomenon. It is interesting to note that in this instance, failure to control for heterogeneity induces a bias in favor of finding an effect, rather than the more typical case of attenuation of the effect toward zero. We first test our assertion regarding the reference-dependent model using scanner panel data on refrigerated orange juice and subsequently extend this analysis to 11 additional product categories. In all cases we find, as predicted, that accounting for price-response heterogeneity leads to lower and frequently nonsignificant estimates of loss aversion. We do, however, find some categories in which the effect does not disappear altogether. We also estimate loss aversion using a "sticker shock" model of brand choice in which the reference prices are brand-specific. In line with the results of the majority of prior literature, we find smaller and insignificant estimates of loss aversion in this model. We show that this is because in the sticker shock model, there is no apparent correlation between the price responsiveness of the consumer and the representation of reference effects as losses or gains. Our findings strongly suggest that loss aversion may not in fact be a universal phenomenon, at least in the context of frequently purchased grocery products.
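
The confound the authors describe is easy to reproduce in a simulation: two segments with linear (unkinked) price responses but different slopes and reference prices generate spurious loss aversion when pooled. The sketch below uses made-up numbers and a linear regression rather than the brand-choice models in the paper; it is only meant to show the mechanism.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20_000

# price-sensitive segment: steep slope, low reference price;
# price-insensitive segment: shallow slope, high reference price (both hypothetical)
segment = rng.random(n) < 0.5
slope = np.where(segment, -2.0, -0.5)
reference = np.where(segment, 1.2, 2.0)

price = rng.uniform(0.8, 2.4, n)
response = 5.0 + slope * price + rng.normal(0, 0.2, n)   # no kink within either segment

gain = np.maximum(reference - price, 0.0)                # price below own reference point
loss = np.maximum(price - reference, 0.0)                # price above own reference point
X = np.column_stack([np.ones(n), gain, loss])
_, b_gain, b_loss = np.linalg.lstsq(X, response, rcond=None)[0]

print(f"pooled gain slope: {b_gain:+.2f}")
print(f"pooled loss slope: {b_loss:+.2f}   # larger in magnitude => apparent loss aversion")
```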

Journal ArticleDOI
TL;DR: In this paper, the authors investigated changes in diffusion speed in the United States over a period of 74 years (1923--1996) using data on 31 electrical household durables.
Abstract: It is a popular contention that products launched today diffuse faster than products launched in the past. However, the evidence of diffusion acceleration is rather scant, and the methodology used in previous studies has several weaknesses. Also, little is known about why such acceleration would have occurred. This study investigates changes in diffusion speed in the United States over a period of 74 years (1923–1996) using data on 31 electrical household durables. This study defines diffusion speed as the time it takes to go from one penetration level to a higher level, and it measures speed using the slope coefficient of the logistic diffusion model. This metric relates unambiguously both to speed as just defined and to the empirical growth rate, a measure of instantaneous penetration growth. The data are analyzed using a single-stage hierarchical modeling approach for all products simultaneously in which parameters capturing the adoption ceilings are estimated jointly with diffusion speed parameters. The variance in diffusion speed across and within products is represented separately but analyzed simultaneously. The focus of this study is on description and explanation rather than forecasting or normative prescription. There are three main findings. 1. On average, there has been an increase in diffusion speed that is statistically significant and rather sizable. For the set of 31 consumer durables, the average value of the slope parameter in the logistic model's hazard function was roughly 0.48, increasing by about 0.09 every 10 years. It took an innovation reaching 5% household penetration in 1946 an estimated 13.8 years to go from 10% to 90% of its estimated maximum adoption ceiling. For an innovation reaching 5% penetration in 1980, that time would have been halved to 6.9 years. This corresponds to a compound growth rate in diffusion speed of roughly 2% between 1946 and 1980. 2. Economic conditions and demographic change are related to diffusion speed. Whether the innovation is an expensive item also has a sizable effect. Finally, products that required large investments in complementary infrastructure (radio, black and white television, color television, cellular telephone) and products for which multiple competing standards were available early on (PCs and VCRs) diffused faster than other products once 5% household penetration had been achieved. 3. Almost all the variance in diffusion speed among the products in this study can be explained by (1) the systematic increase in purchasing power and variations in the business cycle (unemployment), (2) demographic changes, and (3) the changing nature of the products studied (e.g., products with competing standards appear only late in the data set). After controlling for these factors, no systematic trend in diffusion speed remains unaccounted for. These findings are of interest to researchers attempting to identify patterns of difference and similarity among the diffusion paths of many innovations, either by jointly modeling the diffusion of multiple products (as in this study) or by retrospective meta-analysis. The finding that purchasing power, demographics, and the nature of the products capture nearly all the variance is of particular interest. Specifically, one does not need to invoke unobserved changes in tastes and values, as some researchers have done, to account for long-term changes in the speed at which households adopt new products.
The findings also suggest that new product diffusion modelers should attempt to control not only for marketing mix variables but also for broader environmental factors. The hierarchical model structure and the findings on the systematic variance in diffusion speed across products are also of interest to forecasting applications when very little or no data are available.
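
The speed metric has a convenient closed form: for a logistic diffusion curve F(t) = K / (1 + exp(-(a + b*t))), the time needed to move from a fraction q1 to a fraction q2 of the ceiling K depends only on the slope parameter b, so the 10%-to-90% time is ln(81)/b. The short check below plugs in the figures quoted in the abstract; it is a reader's back-of-the-envelope illustration, not a recomputation of the paper's hierarchical estimates.

```python
import math

def time_between(q1, q2, b):
    """Years needed to go from q1*K to q2*K under a logistic curve with slope b."""
    return (math.log(q2 / (1 - q2)) - math.log(q1 / (1 - q1))) / b

print(time_between(0.10, 0.90, 0.48))          # ~9.2 years at the average slope of 0.48
print(math.log(81) / 13.8)                     # slope implied by the 13.8-year (1946) figure: ~0.32
print(math.log(81) / 6.9)                      # slope implied by the 6.9-year (1980) figure: ~0.64
# Compound annual change implied by halving the 10%-to-90% time over 34 years:
print((13.8 / 6.9) ** (1 / 34) - 1)            # ~0.021, i.e. roughly 2% per year
```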

Journal ArticleDOI
Preyas S. Desai
TL;DR: In this article, a high-demand manufacturer can use advertising, slotting allowances, and wholesale prices to signal its high demand to retailers, and the authors study the relative importance of advertising and slotting allowance in signaling demand.
Abstract: With the increase in new product introductions in consumer packaged goods categories, supermarkets are reluctant to accept new products. Therefore, it is very important for manufacturers to convince retailers of the high-demand potential of their products. We study how a high-demand manufacturer can use advertising, slotting allowances, and wholesale prices to signal its high demand to retailers. Specifically, we examine the relative importance of advertising and slotting allowances in signaling demand. That is, when is it optimal for the manufacturer to use high advertising support, and when is it optimal for it to offer a slotting allowance as a signal of its demand? We show that when a high-demand manufacturer is trying to signal its demand to retailers, advertising and slotting allowances are partial substitutes of one another in the sense that the manufacturer can increase one in order to compensate for a reduction in the other. We find that the high-demand manufacturer's signaling strategy depends on three factors: the retailer's stocking costs, the intensity of retail competition, and the advertising response rate in the given product market. We begin with a model of one manufacturer dealing with one retailer. The manufacturer has private information about the potential demand for its new product. The retailer is uncertain about the likely demand of the new product and is willing to accept the product only if it is convinced that the demand is high. We characterize the high-demand manufacturer's separating equilibrium strategies. We find that the slotting allowance plays an important role in signaling when the retailer's stocking costs are high and the advertising effectiveness is low. On the other hand, the manufacturer does not offer any slotting allowance, and advertising plays a bigger role, when the stocking costs are low or the advertising effectiveness is high. We then examine the effects of retail competition on the manufacturer's strategy. We find that the slotting allowance plays a more important role when retail-level competition is very intense. The manufacturer may have to offer a positive slotting allowance even in the absence of retailers' demand uncertainty when the retail competition is sufficiently intense. This result shows that the slotting allowance may have an important role to play even in the absence of signaling or screening considerations. Thus, our analysis of the competitive setting provides an alternative explanation for slotting allowances. It also offers support to the views of many retailers who believe that slotting allowances can help retailers recover high stocking costs in highly competitive retail markets. In the presence of retailers' demand uncertainty, the manufacturer offers a higher slotting allowance in order to signal its high demand. We also investigate the effect of the retailer's uncertainty about the effectiveness of the manufacturer's advertising. We show that if the high-demand manufacturer also has a higher advertising response rate, the manufacturer provides even higher advertising support to alleviate the retailer's advertising-related uncertainty. By increasing the advertising support, the manufacturer credibly tells the retailer that it would not be optimal for the manufacturer to provide such high advertising support unless it had high enough advertising effectiveness.

Journal ArticleDOI
Hao Zhao
TL;DR: In this article, the authors investigate the firm's optimal advertising and pricing strategies when introducing a new product and construct a model in which advertising is used both to raise awareness about the product and to signal its quality.
Abstract: The objective of this paper is to investigate the firm's optimal advertising and pricing strategies when introducing a new product. We extend the existing signaling literature on advertising spending and price by constructing a model in which advertising is used both to raise awareness about the product and to signal its quality. By comparing the complete information game and the incomplete information game, we find that the high-quality firm will reduce advertising spending and increase price from their respective complete information levels. In the separating equilibrium, the high-quality firm will actually spend less on advertising than the low-quality firm, resulting in a negative correlation between product quality and advertising spending. What sets our analysis apart from previous studies is that we consider advertising spending not only as a signaling device but also as an informational device. When advertising spending is just a signaling device, it is purely a dissipative expense. It can be an effective signal of quality because only the high-quality firm can afford it; thus, consumers can infer the product's quality by its advertising spending. In this case, advertising spending and product quality are positively correlated. However, when advertising also serves the purpose of raising awareness, it endogenizes the size of the market for the firm, so it is not just a dissipative expense any more. Consider the low-quality firm's mimicking strategy in this case. When the low-quality firm is believed to be a high-quality one, it can charge a much higher price than if its true quality were known. Given that its marginal cost is lower than the high-quality firm's, its profit margin will be much larger in mimicry than in revealing its true quality. Indeed, its profit margin will be even greater than the high-quality firm's. Therefore, the low-quality firm in mimicry has a strong incentive to increase its advertising spending from its optimal level when its true quality is known. To deter the low-quality firm's mimicking tendency, the high-quality firm should decrease its advertising spending so that mimicry is not as appealing to the low-quality firm as revealing its true quality. Indeed, the high-quality firm should reduce its advertising spending so much that it advertises less than the low-quality firm in equilibrium. Many have interpreted signaling as "burning money" or "throwing money down the drain." In the case of advertising, the claim is that its purpose is simply to show consumers that the firm can afford to squander money on advertising to signal its quality. Hence, the advertising content need not be informative. However, our results show that simply "burning money" is not enough to signal quality. How the money is burned is also important. When advertising raises awareness as well as signals quality, "saving money" rather than "burning money" is the correct signaling approach, although ultimately the high-quality firm will sacrifice some profit by reducing its market size. The intuition behind this result is that when information is incomplete, the high-quality firm cannot fully exploit its advantages. Whenever its advantages in quality and/or marginal costs are lessened, a firm will want to spend less on advertising.

Journal ArticleDOI
TL;DR: In this paper, a two-country, three-stage model was proposed to quantitatively study the effects and strategies of parallel imports in the context of a discriminating monopolist that has different prices for the same good in different markets.
Abstract: We examine the problem of parallel imports: unauthorized flows of products across countries, which compete with authorized distribution channels. The traditional economics model of a discriminating monopolist that has different prices for the same good in different markets requires the markets to be separated in some way, usually geographically. The profits from price discrimination can be threatened by parallel imports that allow consumers in the high-priced region some access to the low-priced marketplace. However, as this article shows, there is a very real possibility that parallel imports may actually increase profits. The basic intuition is that parallel importation becomes another channel for the authentic goods and creates a new product version that allows the manufacturer to price discriminate. We propose a two-country, three-stage model to quantitatively study the effects and strategies. In the third stage, and in the higher priced country where parallel imports have entered, we characterize the resulting market segmentation. One segment of consumers stays with the authorized version as they place more value on the warranty and services that come with the authorized version. Another segment switches to parallel imports because a lower price is offered due to lack of country-specific features or warranties. Parallel imports also generate a third and new segment that would not have bought this product before. Unlike counterfeits that are fabricated by imitators, all parallel imports are genuine and sourced from the manufacturer in the lower-priced country through authorized dealers. Therefore, the manufacturer's global sales quantity should increase, but profit may rise or fall depending on the relative sizes and profitability of the segments. A profit-maximizing parallel importer should set price and quantity in the second stage after observing the manufacturer's prices in both countries. There will be a threshold of across-country price gap above which parallel imports would occur. In the first stage, the manufacturer can anticipate the possible occurrence of a parallel import, its price and quantity, and its effect on authorized sales in each country to make a coordinated pricing decision to maximize the global supply chain profit. Under some circumstances the manufacturer should allow parallel imports and under others should prevent them. Through a Stackelberg game we solve for the optimal pricing strategy in each scenario. We then find in one extension that when the number of parallel importers increases, the optimal authorized price gap should narrow, but the prices and quantities of parallel imports may rise or fall. In another extension, we find that when the manufacturer has other means--such as monitoring dealers, differentiating designs, and unbundling warranties--to contain parallel imports, the authorized price gap can widen as a function of the effectiveness of nonpricing controls. In summary, parallel imports may help the manufacturer to extend the global reach of its product and even boost its global profit. If the manufacturer offers a discount version through its authorized dealers, it is running a high risk of confusing customers and tarnishing brand images. Parallel imports may cause similar concerns for the manufacturer, but unauthorized dealers are perceived as further removed from the manufacturer. Therefore, there is less risk of confusing consumers when parallel imports are channeled through unauthorized dealers.
Furthermore, they are more nimble in diverting the product whenever their transshipment and marketing costs are small enough not to offset the authorized price gap and the valuation discount. This may explain why some manufacturers fiercely fight parallel imports, while others knowingly use this alternative channel.
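The claim that global profit may rise or fall can be made concrete with back-of-the-envelope arithmetic. The segment sizes and margins below are invented for illustration and are not taken from the paper's model; the point is only that the lost high-country margin on switchers trades off against the low-country margin earned on newly reached buyers.

```python
# Purely illustrative arithmetic (assumed numbers, not the paper's equilibrium model).
# Parallel imports are sourced from the manufacturer in the low-priced country, so the
# manufacturer still earns its low-country margin on every unit diverted or newly sold.
def profit_change_from_parallel_imports(switchers, new_buyers,
                                         margin_high_country, margin_low_country):
    """Change in the manufacturer's global profit when parallel imports appear.

    switchers: units that move from the authorized high-priced version to parallel imports
    new_buyers: units sold to consumers who would not have bought at the authorized price
    """
    lost = switchers * (margin_high_country - margin_low_country)
    gained = new_buyers * margin_low_country
    return gained - lost

# Profit can fall ...
print(profit_change_from_parallel_imports(switchers=800, new_buyers=300,
                                           margin_high_country=50, margin_low_country=20))
# ... or rise, depending on the relative sizes and profitability of the segments.
print(profit_change_from_parallel_imports(switchers=200, new_buyers=900,
                                           margin_high_country=50, margin_low_country=20))
```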

Journal ArticleDOI
TL;DR: A model for search engine performance that is able to represent key patterns of coverage and overlap among the engines is proposed and validated and used to examine how properties of a Web page and characteristics of a phrase affect the probability that a given search engine will find a given page.
Abstract: This research examines the ability of six popular Web search engines, individually and collectively, to locate Web pages containing common marketing/management phrases. We propose and validate a model for search engine performance that is able to represent key patterns of coverage and overlap among the engines. The model enables us to estimate the typical additional benefit of using multiple search engines, depending on the particular set of engines being considered. It also provides an estimate of the number of relevant Web pages not found by any of the engines. For a typical marketing/management phrase we estimate that the "best" search engine locates about 50% of the pages, and all six engines together find about 90% of the total. The model is also used to examine how properties of a Web page and characteristics of a phrase affect the probability that a given search engine will find a given page. For example, we find that the number of Web page links increases the prospect that each of the six search engines will find it. Finally, we summarize the relationship between major structural characteristics of a search engine and its performance in locating relevant Web pages.
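As a much simpler stand-in for the coverage-and-overlap idea (not the model estimated in the paper), a two-engine capture-recapture (Lincoln-Petersen) calculation shows how the overlap between result sets can be used to estimate the number of relevant pages missed by both engines. The independence assumption it relies on is exactly what a richer model would relax, and the URL sets here are synthetic.

```python
# Simplified stand-in: Lincoln-Petersen estimate of the total number of relevant pages,
# assuming the two engines index relevant pages independently (a stronger assumption
# than the model proposed in the paper).
def estimated_total_pages(found_by_a, found_by_b):
    """found_by_a, found_by_b: sets of URLs returned for the same phrase."""
    overlap = len(found_by_a & found_by_b)
    if overlap == 0:
        raise ValueError("No overlap: the estimator is undefined.")
    return len(found_by_a) * len(found_by_b) / overlap

a = {f"url{i}" for i in range(60)}           # engine A finds 60 pages
b = {f"url{i}" for i in range(40, 130)}      # engine B finds 90 pages, 20 shared with A
total = estimated_total_pages(a, b)
print(f"estimated relevant pages: {total:.0f}")                    # 60 * 90 / 20 = 270
print(f"estimated pages missed by both: {total - len(a | b):.0f}")
```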

Journal ArticleDOI
TL;DR: As the most important innovation since the development of the printing press, the Internet has the potential to radically transform not just the way individuals go about conducting their business with each other, but also the very essence of what it means to be a human being in society.
Abstract: There is a revolution happening: a startling and amazing revolution that is altering everything from our traditional views of how advertising and communication media work to how people can and should communicate with each other. That revolution is the Internet, the massive global network of interconnected packet-switched computer networks, and as the most important innovation since the development of the printing press, the Internet has the potential to radically transform not just the way individuals go about conducting their business with each other, but also the very essence of what it means to be a human being in society. Since the introduction of the first graphically oriented Web browser, Mosaic, in 1993, the Internet has experienced phenomenal growth, both in terms of the number of computers and devices connected to it and the number of individuals and firms providing and accessing content on it (Hoffman et al. 2000). The first significant commercial activity appeared on the Web by 1994, and in the ensuing five years, the commercialization of the Internet has exploded. There are now very few countries and territories left in the entire world that do not have at least one host computer connected to the Internet (Rutkowski 1999). At the same time, electronic commerce, as a research area, a business, and, indeed, an entire new industry, is still very much in its infancy. There is much confusion and complexity and not nearly enough solid information.

Journal ArticleDOI
TL;DR: A method is presented for resolving two key questions in merchandise testing: which stores to use for the test and how to extrapolate from test sales to create a forecast of total season demand for each product for the chain.
Abstract: In a merchandise depth test, a retail chain introduces new products at a small sample of selected stores for a short period prior to the primary selling season and uses the observed sales to forecast demand for the entire chain. We describe a method for resolving two key questions in merchandise testing: (1) which stores to use for the test and (2) how to extrapolate from test sales to create a forecast of total season demand for each product for the chain. Our method uses sales history of products sold in a prior season, similar to those to be tested, to devise a testing program that would have been optimal if it had been applied to this historical sample. Optimality is defined as minimizing the cost of conducting the test, plus the cost of over- and understocking of the products whose supply is to be guided by the test. To determine the best set of test stores, we apply a k-median model to cluster the stores of the chain based on a store similarity measure defined by sales history, and then choose one test store from each cluster. A linear programming model is used to fit a formula that is then used to predict total sales from test sales. We applied our method at a large retailer that specializes in women's apparel and at two major shoe retailers, comparing results in each case to the existing process used by the apparel retailer and to some standard statistical approaches such as forward selection and backward elimination. We also tested a version of our method in which clustering was based on a combination of several store descriptors such as location, type of store, ethnicity of the neighborhood of location, total store sales, and average temperature of the store location. We found that relative to these other methods, our approach could significantly improve forecasts and reduce markdowns that result from excessive inventory, and lost margins resulting from stockouts. At the apparel retailer the improvement was enough to increase profits by more than 100%. We believe that one reason our method outperforms the forward selection and backward elimination methods is that these methods seek to minimize squared errors, while our method optimizes the true cost of forecast errors. In addition, our approach, which is based purely on sales, outperforms descriptor variables because it is not always clear which are the best store descriptors and how best to combine them. However, the sales-based process is completely objective and directly corresponds to the retailer's objective of minimizing the understock and overstock costs of forecast error. We examined the stores within each of the clusters formed by our method to identify common factors that might explain their similar sales patterns. The main factor was the similarity in climate within a cluster. This was followed by the ethnicity of the neighborhood where the store is located, and the type of store. We also found that, contrary to popular belief, store size and location had little impact on sales patterns. In addition, this technique could also be used to determine the inventory allocation to individual stores within a cluster and to minimize lost demand resulting from inaccurate distribution across sizes. Finally, our method provides a logical framework for implementing micromerchandising, a practice followed by a significant number of retailers in which a unique assortment of merchandise is offered in each store or a group of similar stores tuned to maximize the appeal to customers of that store.
Each cluster formed by our algorithm could be treated as a "virtual chain" within the larger chain, which is managed separately and in a consistent manner in terms of product mix, timing of delivery, advertising message, and store layout.
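The store-selection step can be sketched as a clustering problem on historical sales. The snippet below is a simplified stand-in: the simulated store-by-product sales matrix, the correlation-based distance, and the alternating k-medoids heuristic are assumptions for illustration, not the k-median formulation or cost function used in the paper.

```python
# Simplified sketch of the store-clustering step (assumed data, distance, and heuristic).
import numpy as np

rng = np.random.default_rng(0)
sales = rng.poisson(lam=20, size=(60, 40)).astype(float)   # 60 stores x 40 products, simulated

# Distance between stores: 1 - correlation of their sales patterns across products.
dist = 1.0 - np.corrcoef(sales)

def k_medoids(dist, k, iters=50, seed=0):
    local_rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = local_rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(dist[:, medoids], axis=1)          # assign each store to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size == 0:
                continue
            within = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(within)]       # most central store of the cluster
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(dist[:, medoids], axis=1)

test_stores, cluster_of_store = k_medoids(dist, k=8)
print("candidate test stores (one per cluster):", sorted(test_stores.tolist()))
```

One test store (the medoid) would then be drawn from each cluster, and a separate model, such as the linear program described above, would map test sales into a chain-level forecast.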

Journal ArticleDOI
TL;DR: In this paper, a hierarchical Bayesian framework for modeling general forms of heterogeneity in partially recursive structural equation models is proposed, which is suitable for studies in which panel data or multiple observations are available for a given set of respondents or objects.
Abstract: Structural equation models are widely used in marketing and psychometric literature to model relationships between unobserved constructs and manifest variables and to control for measurement error. Most applications of structural equation models assume that data come from a homogeneous population. This assumption may be unrealistic, as individuals are likely to be heterogeneous in their perceptions and evaluations of unobserved constructs. In addition, individuals may exhibit different measurement reliabilities. It is well-known in statistical literature that failure to account for unobserved sources of individual differences can result in misleading inferences and incorrect conclusions. We develop a hierarchical Bayesian framework for modeling general forms of heterogeneity in partially recursive structural equation models. Our framework elucidates the motivations for accommodating heterogeneity and illustrates theoretically the types of misleading inferences that can result when unobserved heterogeneity is ignored. We describe in detail the choices that researchers can make in incorporating different forms of measurement and structural heterogeneity. Current random-coefficient models in psychometric literature can accommodate heterogeneity solely in mean structures. We extend these models by allowing for heterogeneity both in mean and covariance structures. Specifically, in addition to heterogeneity in measurement intercepts and factor means, we account for heterogeneity in factor covariance structure, measurement error, and structural parameters. Models such as random-coefficient factor analysis, random-coefficient second-order factor analysis, and random-coefficient, partially recursive simultaneous equation models are special cases of our proposed framework. We also develop Markov chain Monte Carlo (MCMC) procedures to perform Bayesian inference in partially recursive, random-coefficient structural equation models. These procedures provide individual-specific estimates of the factor scores, structural coefficients, and other model parameters. We illustrate our approach using two applications. The first application illustrates our methods on synthetic data, whereas the second application uses consumer satisfaction data involving measurements on satisfaction, expectation disconfirmation, and performance variables obtained from a panel of subjects. Our results from the synthetic data application show that our Bayesian procedures perform well in recovering the true parameters. More importantly, we find that models that ignore heterogeneity can yield a severely distorted picture of the nature of associations among variables and can therefore generate misleading inferences. Specifically, we find that ignoring heterogeneity can result in inflated estimates of measurement reliability, wrong signs of factor covariances, and can yield attenuated model fit and standard errors. The results from the consumer satisfaction study show that individuals vary both in means and covariances and indicate that conventional psychometric methods are not appropriate for our data. In addition, we find that heterogeneous models outperform the standard structural equation model in predictive ability. Managerially, we show how one can use the individual-level factor scores and structural parameter estimates from the Bayesian approach to perform quadrant analysis and refine marketing policy (e.g., develop a one-on-one marketing policy).
The framework introduced in this paper and the inference procedures we describe should be of interest to researchers in a wide range of disciplines in which measurement error and unobserved heterogeneity are problematic. In particular, our approach is suitable for studies in which panel data or multiple observations are available for a given set of respondents or objects (e.g., firms, organizations, markets). At a practical level, our procedures can be used by managers and other policymakers to customize marketing activities or policies. Future research should extend our procedures to deal with the general nonrecursive structural equation model and to handle binary and ordinal data situations.
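A heavily simplified special case can convey the mechanics. The Gibbs sampler below is for a random-coefficient regression with heterogeneity in both the mean and the covariance of individual-level coefficients; it is not the partially recursive structural equation model of the paper, and the priors, dimensions, and simulated data are illustrative assumptions.

```python
# Much-simplified special case (not the paper's partially recursive SEM): a Gibbs sampler for
# y_ij = x_ij' beta_i + e_ij with e_ij ~ N(0, sigma2) and beta_i ~ N(mu, V), i.e. heterogeneity
# in both the mean and covariance of the individual-level coefficients.
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)
n_ind, n_obs, p = 100, 12, 3

# Simulate a heterogeneous panel.
mu_true = np.array([1.0, -0.5, 0.8])
V_true = np.diag([0.5, 0.3, 0.2])
beta_true = rng.multivariate_normal(mu_true, V_true, size=n_ind)
X = rng.normal(size=(n_ind, n_obs, p))
y = np.einsum("ijk,ik->ij", X, beta_true) + rng.normal(scale=0.7, size=(n_ind, n_obs))

# Diffuse priors and starting values.
nu0, S0 = p + 2, np.eye(p)          # inverse-Wishart prior on V
a0, b0 = 2.0, 1.0                   # inverse-gamma prior on sigma2
beta = np.zeros((n_ind, p)); mu = np.zeros(p); V = np.eye(p); sigma2 = 1.0
mu_draws = []

for _ in range(1000):
    Vinv = np.linalg.inv(V)
    # 1. Individual-level coefficients beta_i | mu, V, sigma2.
    for i in range(n_ind):
        prec = X[i].T @ X[i] / sigma2 + Vinv
        cov = np.linalg.inv(prec)
        mean = cov @ (X[i].T @ y[i] / sigma2 + Vinv @ mu)
        beta[i] = rng.multivariate_normal(mean, cov)
    # 2. Population mean mu | beta, V (flat prior).
    mu = rng.multivariate_normal(beta.mean(axis=0), V / n_ind)
    # 3. Population covariance V | beta, mu.
    V = invwishart.rvs(df=nu0 + n_ind, scale=S0 + (beta - mu).T @ (beta - mu), random_state=rng)
    # 4. Observation-error variance sigma2 | beta.
    resid = y - np.einsum("ijk,ik->ij", X, beta)
    sigma2 = 1.0 / rng.gamma(a0 + resid.size / 2.0, 1.0 / (b0 + 0.5 * (resid ** 2).sum()))
    mu_draws.append(mu.copy())

print("true mu:", mu_true)
print("posterior mean of mu (after burn-in):", np.round(np.mean(mu_draws[500:], axis=0), 2))
```

Dropping the second-level covariance V (fitting one beta for everyone) is the kind of pooling that, as the abstract notes, can distort reliability and covariance estimates.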

Journal ArticleDOI
TL;DR: In this article, a simple game-theoretical model was developed to capture the most essential factors in a firm's market entry decision, such as market uncertainty, firm heterogeneity, competition, cannibalization, and order-of-entry effects.
Abstract: How should a firm decide whether or not to enter an untested market when a competing firm is vying for the same market? Should a firm always speed to the market in an effort to capitalize on pioneering advantages? We address those questions by developing a simple game-theoretical model that captures the most essential factors in a firm's market entry decision, such as market uncertainty, firm heterogeneity, competition, cannibalization, and order-of-entry effects. Our analysis shows that in a competitive context, both pioneering advantages and laggard's disadvantages can motivate a firm to speed to an untested market. Therefore, pioneering advantages alone are not an adequate guide for a firm to formulate its market entry strategy. The optimal decision may call for a firm to be a prudent laggard when pioneering advantages to the firm are substantial, or to become a market pioneer when facing pioneering disadvantages. We characterize different patterns of market entry as equilibrium outcomes for different configurations of the market reward structure and offer a conceptual framework for formulating market entry strategies that go beyond the conventional dichotomy: speed or wait. We show that the paradoxical phenomenon of "disadvantaged pioneers" can arise in a competitive context as the outcome of rational firms making rational choices. To show that pioneering advantages alone are not the right litmus test for market entry decisions, we apply our general framework to a concrete case where consumer preference or the premium that consumers are willing to pay for the pioneering brand gives rise to pioneering advantages and laggard's disadvantages. We conclude that the firm with a larger pioneering premium may choose to wait, while a firm with a smaller pioneering premium speeds to the market. Our analysis also sheds light on empirical research on pioneering advantages. Because firms may race into a market solely to avoid laggard's disadvantages rather than to capture pioneering advantages, pioneers are not necessarily the firms best positioned to establish, exploit, and maintain pioneering advantages. Therefore, it is not surprising that a significant percentage of pioneers fail, as documented by recent empirical research. Our normative investigation further suggests that this predicament in empirical research will not disappear even if we have complete data, use the right measurements, and employ perfect statistical techniques. Therefore, it is perhaps more fruitful to redirect our research effort in the search for pioneering advantages. Finally, we extend our analysis to incorporate the effect of cannibalization on an incumbent firm's market entry strategy. We conclude that cannibalization can motivate an incumbent firm to wait, as the conventional wisdom suggests, but it can also be an impetus for a firm to become a market pioneer. We offer supporting evidence for our analysis and discuss managerial implications of our conclusions.
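The equilibrium reasoning can be explored with a toy numerical check. The payoff numbers below are placeholders, not derived from the paper's reward-structure model; the function simply enumerates pure-strategy Nash equilibria of a two-firm enter-now/wait game.

```python
# Toy check with placeholder payoffs (not the paper's model): enumerate pure-strategy
# Nash equilibria of a two-firm "enter now" vs. "wait" game.
ACTIONS = ("enter_now", "wait")

# payoffs[(a1, a2)] = (profit of firm 1, profit of firm 2); illustrative numbers only.
payoffs = {
    ("enter_now", "enter_now"): (2.0, 1.0),
    ("enter_now", "wait"):      (5.0, 3.0),
    ("wait", "enter_now"):      (3.0, 4.0),
    ("wait", "wait"):           (4.0, 2.0),
}

def pure_nash_equilibria(payoffs):
    eqs = []
    for a1 in ACTIONS:
        for a2 in ACTIONS:
            u1, u2 = payoffs[(a1, a2)]
            best1 = all(u1 >= payoffs[(alt, a2)][0] for alt in ACTIONS)  # firm 1 best-responds
            best2 = all(u2 >= payoffs[(a1, alt)][1] for alt in ACTIONS)  # firm 2 best-responds
            if best1 and best2:
                eqs.append((a1, a2))
    return eqs

print(pure_nash_equilibria(payoffs))
```

With these placeholder payoffs, both pure-strategy equilibria have exactly one firm entering early and the other waiting, the kind of asymmetric entry pattern the analysis characterizes for particular reward structures.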

Journal ArticleDOI
TL;DR: In this paper, the authors consider a situation where a manufacturer and an exclusive, independent distributor are negotiating the transfer (wholesale) price of a new product, and they assume that the negotiations occur in an incomplete and asymmetric information environment such that the manufacturer is uncertain about the consumers' reservation price, whereas the distributor knows it precisely because of proximity to the consumer.
Abstract: Manufacturers and distributors in marketing channels commonly establish prices, margins, and other trade terms through negotiations. These negotiations have significant impact on channel members' profit streams over the duration of the business relationship. We consider a situation where a manufacturer and an exclusive, independent distributor are negotiating the transfer (wholesale) price of a new product. The transfer price should lie between the manufacturer's production cost and the maximum resale price that the distributor can charge end consumers (consumers' reservation price). We assume that the negotiations occur in an incomplete and asymmetric information environment such that the manufacturer is uncertain about the consumers' reservation price, whereas the distributor knows it precisely because of proximity to the consumer. The negotiation is time-sensitive because of the threat of potential competitive entry. Both parties have identical opportunity costs of delay in reaching agreement. In this incomplete and asymmetric information environment, the negotiators must learn before they can reach agreement. However, each negotiator has an incentive to convince the other that the available surplus is smaller than it really is. Hence, a high (low) offer (counteroffer) has little credibility without opportunity costs of delay. For any given manufacturer offer, a distributor facing a low consumer reservation price has a small available surplus and therefore more incentive to delay agreement than if the price is high. Willingness to delay agreement and incur delay costs lends credibility to the price signal in an offer (counteroffer), providing a means for communicating credibly and facilitating agreement. Thus, with incomplete, asymmetric information and opportunity costs of delay, a signaling formulation with alternating offers and counteroffers captures key strategic characteristics of marketing channel negotiations. We adapt a game-theoretic model (Grossman and Perry 1986a, 1986b) to predict bargaining behavior and outcomes in this channel negotiation scenario. We derive both point predictions and directional implications from this sequential equilibrium (SE) bargaining model regarding how manufacturer uncertainty about distributor value (consumers' reservation price), opportunity cost of delay, and the actual reservation price (total surplus) should influence bargaining outcomes. The predictions are tested in two experiments. The point predictions serve as benchmarks against which we evaluate the observed bargaining outcomes, as we focus on testing the model's directional implications. We also explore the underlying bargaining process to assess the extent to which subjects conform to the SE signaling rationale in optimizing channel profits. Both experiments show that the point predictions of the SE model fall considerably short in describing bargaining behavior and outcomes. The players bargained suboptimally, took longer to agree, and could not extract the total available surplus. Nevertheless, the data are consistent with several directional predictions of the SE model. There is consistent support for the predicted directional effects of manufacturer uncertainty and consumer reservation prices. As expected, high uncertainty impeded efficient negotiation, eliciting high first offers from manufacturers and increasing bargaining duration.
Also, higher reservation prices (higher surplus) lowered bargaining duration, increased bargaining efficiency, and raised profits for both parties. However, support for the predicted directional effects of opportunity cost of delay is mixed. Higher delay costs produced quicker agreements, but distributors did not benefit from their informational advantage. Although the directional results suggest that the SE model is a good representation of bargaining behavior, a closer analysis shows that the bargaining process data did not correspond to the specific signaling rationale of the SE model. Rather, these data suggest that the bargainers created simplified representations of the price negotiation and used heuristics to develop their offers and counteroffers. We observe two systematic patterns of deviations from the SE model. Some manufacturers may have used the counteroffer levels to infer the distributors' competitive stance and factored this into their responses. Thus, even though the distributor counteroffers carried signals of the consumer reservation price, the manufacturers delayed agreement because they either did not recognize the signal or thought it was unreliable. In other cases, the data are consistent with a simple, nonstrategic model (EMP) in which the manufacturer and the distributor divide the monetary payoff (surplus) equally. The results show that the effectiveness of signaling mechanisms depends not only on the economic characteristics of the bargaining situation, but also on shared individual and social contexts that influence how signals are transmitted and interpreted.

Journal ArticleDOI
TL;DR: In this article, the authors examine how incorporating discrete bidding and bidder aggressiveness affect optimal strategies for an important decision for auction sellers, which is setting the lowest acceptable bid at which to sell the property.
Abstract: In practice, the rules in most open English auctions require participants to raise bids by a sizeable, discrete amount. Furthermore, some bidders are typically more aggressive in seeking to become the "current bidder" during competitive bidding. Most auction theory, however, has assumed bidders can place any tiny "continuous" bid increase, and recommend as optimal the tiniest possible increase. This article examines how incorporating discrete bidding and bidder aggressiveness affect optimal strategies for an important decision for auction sellers, which is setting the lowest acceptable bid at which to sell the property. We investigate two alternative methods sellers often use to enforce this decision. These are setting an irrevocable reserve before the auction, and covert shilling, where the seller or confederates pose as bona fide bidders and raise bona fide bids, unsuspected by bidders. These optimal strategies interest auction participants, especially sellers who must recognize the bidding rules and bidder aggressiveness they will encounter in actual auctions. We also examine how these strategies change with the auction context, such as the number of bidders, and how they differ from corresponding strategies already identified for continuous bidding. Our model examines open English auctions where bidders have independent, private valuations. We find that discrete bidding does affect these strategies, as does the aggressiveness of the bidder with the highest valuation, relative to the average aggressiveness of all other remaining bidders. We identify the seller's optimal discrete reserve, and show that if the highest valuator is relatively more (less) aggressive, this increases (decreases) from the optimal continuous reserve, and also increases (decreases) as the number of bidders increases. With continuous bidding, by contrast, this reserve is invariant to the number of bidders. As this bidder becomes relatively more aggressive, for a given number of bidders, the optimal discrete reserve increases, while as he or she becomes less aggressive, the seller's expected auction utility increases, which increases the set of auctions where discrete bidding generates higher seller welfare than continuous. We propose a covert shilling model that requires shilling sellers, and any confederates and auctioneers, to outwardly act no differently than with reserves, to avoid detection. We identify cases where the seller optimally shills once the bona fide bidding has stopped, and identify the corresponding optimal point to stop shilling and accept the next bona fide bid, if offered. This stopping point does not depend on where bona fide bidding stops, or aggressiveness, or the number of bidders, or on whether shill bids alternate with bona fide bids or are consecutively entered. We also find that the optimal lowest acceptable bid with shilling can be higher (lower) than that with reserves if the highest valuator is sufficiently unaggressive (aggressive). By comparison, in continuous bidding shilling and reserves yield identical lowest acceptable bids. Sometimes the seller using a shilling strategy optimally should not shill at all, and instead accept the bid where bona fide bidding stops. This can occur when that bid, or the number of bidders, is sufficiently high, or when the highest valuator is as, or less, aggressive than other bidders. Optimal shilling can be as practical to implement as reserves, because it does not require sellers to have any information beyond that needed in a reserve auction.
If sellers shill optimally, they can never be worse off compared to using a reserve, and can be better off. Shilling can make bidders worse off, but can also make them better off when the seller using a shilling strategy optimally accepts bids below the optimal reserve. In these latter cases, shilling Pareto dominates reserves, ex ante. We provide numerical examples to illustrate these results. We discuss how our results might be affected if shilling is not covert, or bidders' valuations have a common value component rather than being independent, or by the rules used in many discrete bid Internet auctions.
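A Monte Carlo sketch gives a feel for how a discrete bid grid interacts with the reserve, although it is a rough approximation rather than the paper's analytical model: values are assumed Uniform(0, 100) and independent, bidder aggressiveness is ignored, and the final price is approximated as the lowest admissible bid level at or above the runner-up's valuation, capped at the winner's valuation.

```python
# Rough Monte Carlo approximation (assumed value distribution and stopping rule,
# not the paper's model): expected seller revenue as a function of a discrete reserve.
import numpy as np

rng = np.random.default_rng(42)
n_bidders, increment, n_sims = 5, 5.0, 200_000

def expected_revenue(reserve):
    values = rng.uniform(0.0, 100.0, size=(n_sims, n_bidders))
    ordered = np.sort(values, axis=1)
    highest, second = ordered[:, -1], ordered[:, -2]
    sold = highest >= reserve
    # Admissible bid levels are reserve, reserve + increment, reserve + 2 * increment, ...
    steps = np.ceil(np.maximum(second - reserve, 0.0) / increment)
    price = np.minimum(reserve + steps * increment, highest)   # winner never bids above value
    return float(np.mean(np.where(sold, price, 0.0)))

for reserve in (0.0, 20.0, 40.0, 50.0, 60.0):
    print(f"reserve {reserve:5.1f} -> expected revenue {expected_revenue(reserve):6.2f}")
```

In the classic continuous benchmark with uniform values and a zero seller valuation, the optimal reserve is 50 regardless of the number of bidders; the sketch is one way to explore how a coarse bid grid and the chosen rounding convention perturb that benchmark.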

Journal ArticleDOI
TL;DR: In this paper, the authors suggest that creating a compelling online environment for Web consumers will have numerous positive consequences for commercial Web providers.
Abstract: Intuition and previous research suggest that creating a compelling online environment for Web consumers will have numerous positive consequences for commercial Web providers. Online executives note...

Journal ArticleDOI
TL;DR: In this article, the authors investigate three unresolved predictions involving the incentive-insurance trade-off posited in agency models of sales compensation and conclude that the model fails when there is no material insurance-incentive trade-off.
Abstract: Academic work on sales compensation plans features agency models prominently, and these models have also been used to build decision aids for managers. However, empirical support remains sketchy. We conducted three experiments to investigate three unresolved predictions involving the incentive-insurance trade-off posited in the model. First, compensation should be less incentive loaded with greater effort-output uncertainty so as to provide additional insurance to a risk-averse agent. Second, flat wages should be used for verifiable effort so as to avoid unnecessary incentives. Third, less incentive-loaded plans should be used with more risk-averse agents so as to provide additional insurance. Our design implemented explicit solutions from a specific agency model, which offers greater internal validity, compared to extant laboratory designs that either did not implement explicit solutions or excluded certain parameters. In Experiment I, data from working manager subjects supported the first prediction but only when risk-averse agents undertook nonverifiable effort. We interpret this as disclosing the model's "core" circumstance, wherein it orders the data when the incentive-insurance trade-off is relevant. Thus, when verifiable effort made incentives moot, as is the case for the second prediction, the model failed to order the data. Building on these results, we reasoned that the third prediction should find support among risk-averse agents but not among risk-neutral agents, because insurance is a moot point with the latter agents. To this end, we added risk-neutral utility functions for agents in Experiment II. Data from MBA-candidate student subjects supported the predictions, but only when risk-averse agents undertook nonverifiable effort. In those cells in which the incentive-insurance trade-off was moot (either because of risk-neutrality or else verifiability), the data did not support the predictions. We confronted several validity threats to these results. To begin, Experiment I used the standard agency solution, which equalizes an agent's expected utility from the predicted plan with his expected utility from rejecting it. Subjects might have broken these ties on such grounds as fairness. To assess whether this confounded the results, we derived new solutions in Experiment II that broke ties in favor of the predicted plan (by a 10% margin in the expected utility). Our results were robust to this change. Second, our agents' behavior in Experiments I and II was much more consistent with predictions, compared to the principals' behavior, which brought up task comprehension as a validity threat because our principals faced a more complex experimental task than the agents. To address this threat, we used three decision rounds in Experiment III to reduce the principals' task comprehension problems. A related validity threat arose from the relatively small gap in some cells between a principal's predicted expected utility and the principal's next best choice. To address this threat, we derived new solutions with larger gaps to make the principal's choices "easier." The results were again robust to these changes, which removes these validity threats. We also addressed two alternative explanations. Might principals be predisposed to pick salary plus commission plans regardless of the model's predictions? If so, we should find such plans chosen uniformly across different experimental conditions.
Pooling the data from our three experiments, we rejected this predisposition explanation by finding variation that was more consistent with treatment differences across cells. Second, might agents choose higher effort levels because of a demand bias? If so, we should find agents picking high effort regardless of the plan actually offered to them. Using pooled data, we rejected this explanation by finding variation that was more consistent with a utility-maximizing reaction to the plan actually offered to them. Finally, we included manipulation checks to assess whether principals and agents perceived experimental stimuli identically, as per the "common knowledge" assumption in game theory. These data showed no differences between agents' and principals' perceptions of stimuli. Our experiments move the literature from simply asking whether the model works to pinpointing the circumstances in which it orders behavior. The primary stylized fact we uncovered is the persistent and striking lack of support for the agency model outside of the circumstance in which risk-averse agents undertake nonverifiable effort. The model's failure when there is no material insurance-incentive trade-off deserves scrutiny in future work.
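For readers who want the incentive-insurance trade-off in closed form, the familiar linear-exponential-normal (LEN) benchmark, which is not the specific agency model implemented in these experiments, gives an optimal commission rate b* = 1 / (1 + r c sigma^2) for output equal to effort plus normal noise, exponential utility with risk aversion r, and quadratic effort cost. It reproduces the comparative statics behind the first and third predictions: incentive loading falls as output uncertainty or risk aversion rises, and a risk-neutral agent gets a fully incentive-loaded plan.

```python
# Standard LEN agency benchmark (not the experiments' specific model):
# output x = effort + noise, noise ~ N(0, sigma^2), agent utility -exp(-r * wealth),
# effort cost (c / 2) * effort^2, linear contract salary + b * x.
def optimal_commission_rate(risk_aversion, effort_cost, output_variance):
    return 1.0 / (1.0 + risk_aversion * effort_cost * output_variance)

for sigma2 in (0.5, 1.0, 2.0, 4.0):
    b = optimal_commission_rate(risk_aversion=1.0, effort_cost=1.0, output_variance=sigma2)
    print(f"output variance {sigma2:3.1f} -> commission rate b* = {b:.2f}")

# A risk-neutral agent (r = 0) gets a fully incentive-loaded plan, b* = 1.
print("risk-neutral agent:", optimal_commission_rate(0.0, 1.0, 2.0))
```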

Journal ArticleDOI
TL;DR: In this paper, the authors consider two types of value-adding modifications that are often facilitated by marketing information: retention-type modifications that increase the attractiveness of a product to a firm's loyal customers, and conquesting-type modifications that allow a firm to increase the appeal of its product to its competitors' loyal customers.
Abstract: An important product strategy for firms in mature markets is value-adding modifications to existing products. Marketing information that reveals consumers' preferences, buying habits, and lifestyle is critical for the identification of such product modifications. We consider two types of value-adding modifications that are often facilitated by marketing information: retention-type modifications that increase the attractiveness of a product to a firm's loyal customers, and conquesting-type modifications that allow a firm to increase the appeal of its product to a competitor's loyal customers. We examine two aspects of the markets for product modification information: (1) the manner in which retention and conquesting modifications affect competition between downstream firms, and (2) the optimal selling and pricing policies for a vendor who markets product modification information. We consider several aspects of the vendor's contracting problem, including how a vendor should package and target the information to the downstream firms and whether the vendor should limit the type of information that is sold. This research also examines when a vendor can gain by offering exclusivity to a firm. We address these issues in a model consisting of an information vendor facing two downstream firms that sell differentiated products. The model analyzes how information contracting is affected by differentiation in the downstream market and the quality of the information (in terms of how "impactful" the resulting modifications are). We analyze two possible scenarios. In the first, the information facilitates modifications that increase the appeal of products to the loyal customers of only one of the two downstream firms (i.e., one-sided information). In the second scenario, the information facilitates modifications that are attractive to the loyal consumers of both the firms (i.e., two-sided information). The effect of modifications on downstream competition depends on whether they are of the retention or the conquesting type. A retention-type modification increases the "effective" differentiation between the firms and softens price competition. Conquesting modifications, however, have benefits as well as associated costs. A conquesting modification of low impact reduces the "effective" differentiation between competing products and leads to increased price competition. However, when conquesting modifications are of sufficiently high impact, they also have the benefit of helping a firm to capture the customers of the competitor. The vendor's strategy for one-sided information always involves selling to one firm, the firm for which the modifications are of the retention type. When the identified modifications are of low impact, this result is expected because conquesting modifications are profit-reducing for downstream firms. However, even when the information identifies high-impact modifications (and positive profits are generated by selling the information as conquesting information), the vendor is strictly better off by targeting his information to the firm for which the modification is the retention type. With two-sided information, the equilibrium strategy is for the vendor to sell the complete packet of information (information on both retention and conquesting modifications) to both downstream firms. However, in equilibrium, both firms only implement retention-type modifications. The information on conquesting modifications is "passive" in the sense that it is never used by downstream firms.
Yet the vendor makes strictly greater profit by including it in the packet. This obtains because the price charged for information depends critically on the situation an individual firm encounters by not buying the information. The presence of conquesting information in the packet puts a nonbuyer in a worse situation, and this underlines the "passive power of information." The vendor gains by including the conquesting information even though it is not used in equilibrium.

Journal ArticleDOI
TL;DR: Despite the explosive growth of electronic commerce and the rapidly increasing number of consumers who use interactive media such as the World Wide Web for prepurchase information search and online...
Abstract: Despite the explosive growth of electronic commerce and the rapidly increasing number of consumers who use interactive media such as the World Wide Web for prepurchase information search and online...

Journal ArticleDOI
TL;DR: In this article, a nonparametric hierarchical Bayes model is proposed to model the relationship between consumer preference for product features and observable covariates, such as reliability or durability, and covariates that describe consumers and how they use the product.
Abstract: This paper provides a method for nonparametrically modeling the relationship between consumer preference for product features, such as reliability or durability, and covariates that describe consumers and how they use the product. This relationship is of interest to firms designing and delivering products to a market because the extent to which consumers are sensitive to particular features determines the potential profitability of product offerings, and affects decisions relating to appropriate distribution outlets and advertising strategies. The successful identification of these relationships also aids in efficiently targeting marketing activities to specific segments of the consumer population. The relationship between consumer preference for product features and observable covariates is important but is typically unknown. In addition, these relationships are often deeply embedded in a model hierarchy and are not observed directly. For example, in models of household choice, the observed outcomes are multinomial with probabilities driven by latent utilities or values that consumers place on the choice alternatives. These utilities are in turn a function of characteristics, such as price and product features, which are differentially valued. Of primary interest is the relationship between consumer sensitivity to product characteristics and readily observed covariates such as household demographics or aspects of product usage. Because the relationships of interest are not directly observed, it is difficult to draw inferences about them without formal statistical models. This paper presents a three-level hierarchical Bayes model for modeling binary consumer preferences as a function of observable covariates. The hierarchical model nonparametrically estimates the relationships between consumer preferences for product features and the covariates without assuming a specific functional form. A nonparametric model is particularly useful in the exploratory analysis of consumer data in which the primary purpose of the analysis is to generate further questions rather than provide specific answers to well-posed questions. This type of analysis is frequently encountered in marketing where a series of studies are commissioned to better understand the nature of demand. The first level of the hierarchy in the Bayesian model relates the binary consumer choice to the sensitivities of the consumer to product attributes such as brand name, price, reliability, and durability. The second level of the hierarchy models the heterogeneity across consumers using functions that relate attribute sensitivities to observable covariates. This level of the hierarchy also allows each respondent to have unique demand coefficients by introducing random effect components. The third level of the hierarchy specifies a smoothness prior for each of the unknown functions used in the second level. The approach is flexible and works well both when the unknown function can be closely approximated by a linear function and when it cannot be. A Bayesian model selection technique is used to determine which functions can be modeled using a linear function and which ones should be modeled nonparametrically to provide the necessary flexibility to estimate the function accurately. The proposed methodology is illustrated using data from a survey of consumer preferences for features of marine outboard engines that was collected as part of a consulting project. 
Our analysis focuses on measuring consumer preferences for engine features and their relationships to two variables related to boat length and engine size. Consumer preferences for engine features were obtained through a national survey conducted over the telephone. Preferences were elicited by means of a pairwise evaluation in which respondents chose between two engines that were identical in every respect except for two engine features. The methodology can be modified to allow for more complex comparisons such as conjoint data collected in full profiles. The application of a Bayesian model selection procedure indicates that 4 of the 28 covariate relationships in the model are nonlinear, while the other 24 are linear. The preferences associated with these four functions are involved in 56% of the pairwise comparisons in the study. Therefore, in practice, if the nonlinear functions are not properly estimated there is the potential to draw misleading inferences regarding 56% of the pairwise choices. Firms can use the estimates of the functions relating preferences to covariates in a number of ways. First, they can use the covariates to determine the total number of consumers who have high demand for a particular product feature, and then they can target communication efforts to those individuals. Alternatively, the empirical results can be used as a basis of subsequent analysis to obtain a more complete characterization of a market segment.
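A simplified, non-Bayesian analogue can illustrate what it means for a feature sensitivity to vary nonlinearly with a covariate. Below, the sensitivity beta(z), with z standing in for something like boat length, enters a logistic paired-comparison model through basis-expanded interactions and is estimated by penalized logistic regression; the data, basis, and estimator are assumptions for illustration, not the paper's three-level hierarchical Bayes model with smoothness priors.

```python
# Simplified, non-Bayesian illustration: a feature sensitivity beta(z) that varies smoothly
# with a covariate z, recovered via a truncated-power spline basis in a logistic model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 4000
z = rng.uniform(4, 10, size=n)            # covariate, e.g. boat length in meters (assumed)
x = rng.choice([-1.0, 1.0], size=n)       # paired comparison: which engine carries the feature

beta_true = np.sin(z / 2.0) + 0.5         # true nonlinear sensitivity (simulated)
p_choose = 1.0 / (1.0 + np.exp(-beta_true * x))
y = rng.binomial(1, p_choose)             # 1 = chose the engine with the feature

# Truncated power basis for z, interacted with the feature indicator x.
knots = np.linspace(5, 9, 5)
basis = np.column_stack([np.ones(n), z] + [np.maximum(z - k, 0.0) for k in knots])
design = basis * x[:, None]

model = LogisticRegression(C=10.0, max_iter=2000).fit(design, y)

# Recover the estimated sensitivity curve beta_hat(z) on a grid.
zg = np.linspace(4, 10, 7)
bg = np.column_stack([np.ones_like(zg), zg] + [np.maximum(zg - k, 0.0) for k in knots])
beta_hat = bg @ model.coef_.ravel()
for zi, bt, bh in zip(zg, np.sin(zg / 2.0) + 0.5, beta_hat):
    print(f"z = {zi:4.1f}   true beta = {bt:5.2f}   estimated beta = {bh:5.2f}")
```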

Journal ArticleDOI
TL;DR: The Internet has significantly reduced the marginal cost of producing and distributing digital information goods and it also coincides with the emergence of new competitive strategies such as large-scale mergers and acquisitions.
Abstract: The Internet has significantly reduced the marginal cost of producing and distributing digital information goods. It also coincides with the emergence of new competitive strategies such as large-sca...

Journal ArticleDOI
TL;DR: The issue of "power" in the marketing channels for consumer products has received considerable attention in both academic and practitioner journals as well as in the popular press.
Abstract: The issue of "power" in the marketing channels for consumer products has received considerable attention in both academic and practitioner journals as well as in the popular press. Our objective in...

Journal ArticleDOI
TL;DR: The number of brands in the marketplace vastly increased in the 1980s and 1990s, and the amount of money spent on advertising has grown in parallel; print advertising is a major communication medium.
Abstract: The number of brands in the marketplace has vastly increased in the 1980s and 1990s, and the amount of money spent on advertising has run parallel. Print advertising is a major communication instru...